
Introduction to Operations Research
Tenth Edition

Frederick S. Hillier • Gerald J. Lieberman

INSTALLING ANALYTIC SOLVER PLATFORM FOR EDUCATION

Instructors: A course code will enable your students to download and install Analytic Solver Platform for Education with a semester-long (140-day) license, and will enable Frontline Systems to assist students with installation and provide technical support to you during the course. To set up a course code for your course, please email Frontline Systems at academic@solver.com, or call 775-831-0300, press 0, and ask for the Academic Coordinator. Course codes MUST be renewed each year. The course code is free, and it can usually be issued within 24 to 48 hours (often the same day). Please give the course code, plus the instructions below, to your students. If you’re evaluating the book for adoption, you can use the course code yourself to download and install the software.

Students:

1) To download and install Analytic Solver Platform for Education from Frontline Systems to work with Excel for Windows, please visit www.solver.com/student. Don’t try to download from any other page. If you have a Mac, you’ll need to install “dual-boot” or VM software, Microsoft Windows, and Office or Excel for Windows first. Excel for Mac will NOT work. Learn more at www.solver.com/using-frontline-solvers-macintosh.

2) Fill out the registration form on the page visited in step 1, supplying your name, school, email address (key information will be sent to this address), course code (obtain this from your instructor), and textbook code (enter HLIOR10). If you have this textbook but you aren’t enrolled in a course, call 775-831-0300 and press 0 for assistance with the software.

3) On the download page, change 32-bit to 64-bit ONLY if you’ve confirmed that you have 64-bit Excel. Click the Download Now button, and save the downloaded file (SolverSetup.exe or SolverSetup64.exe). Most users have 64-bit Windows and 32-bit Excel.
For Excel 2007, always download SolverSetup. In Excel 2010, choose File > Help and look in the lower right. In Excel 2013, choose File > Account > About Excel and look at the top of the dialog. Download SolverSetup64 ONLY if you see “64-bit” displayed.

4) Close any Excel windows you have open.

5) Run SolverSetup/SolverSetup64 to install the software. When prompted, enter the installation password and the license activation code contained in the email sent to the address you entered on the form above.

If you have problems downloading or installing, please email support@solver.com or call 775-831-0300 and press 4 (tech support). Say that you have Analytic Solver Platform for Education, and have your course code and textbook code available. If you have problems setting up or solving your model, or interpreting the results, please ask your instructor for assistance. Frontline Systems cannot help you with homework problems.

INTRODUCTION TO OPERATIONS RESEARCH
Tenth Edition

FREDERICK S. HILLIER, Stanford University
GERALD J. LIEBERMAN, Late of Stanford University

Published by McGraw-Hill Education, 2 Penn Plaza, New York, NY 10121. Copyright © 2015 by McGraw-Hill Education. All rights reserved. Printed in the United States of America. Previous editions © 2010, 2005, and 2001. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written consent of McGraw-Hill Education, including, but not limited to, in any network or other electronic storage or transmission, or broadcast for distance learning. Some ancillaries, including electronic and print components, may not be available to customers outside the United States.

This book is printed on acid-free paper.

1 2 3 4 5 6 7 8 9 0 QVS/QVS 1 0 9 8 7 6 5 4

ISBN 978-0-07-352345-3
MHID 0-07-352345-3

Senior Vice President, Products & Markets: Kurt L. Strand
Vice President, General Manager, Products & Markets: Marty Lange
Vice President, Content Production & Technology Services: Kimberly Meriwether David
Global Publisher: Raghothaman Srinivasan
Development Editor: Vincent Bradshaw
Marketing Manager: Nick McFadden
Director, Content Production: Terri Schiesl
Content Project Manager: Mary Jane Lampe
Buyer: Laura Fuller
Cover Designer: Studio Montage, St. Louis, MO
Compositor: Laserwords Private Limited
Typeface: 10/12 Times Roman
Printer: Quad/Graphics

All credits appearing on page or at the end of the book are considered to be an extension of the copyright page.

Library of Congress Cataloging-in-Publication Data
Hillier, Frederick S.
Introduction to operations research / Frederick S. Hillier, Stanford University, Gerald J. Lieberman, late, of Stanford University.—Tenth edition.
pages cm
Includes bibliographical references and indexes.
ISBN 978-0-07-352345-3 (alk. paper) — ISBN 0-07-352345-3 (alk. paper)
1. Operations research. I. Lieberman, Gerald J. II. Title.
T57.6.H53 2015
658.4'032--dc23
2013035901

The Internet addresses listed in the text were accurate at the time of publication. The inclusion of a website does not indicate an endorsement by the authors or McGraw-Hill Education, and McGraw-Hill Education does not guarantee the accuracy of the information presented at these sites.

www.mhhe.com

ABOUT THE AUTHORS

Frederick S. Hillier was born and raised in Aberdeen, Washington, where he was an award winner in statewide high school contests in essay writing, mathematics, debate, and music. As an undergraduate at Stanford University, he ranked first in his engineering class of over 300 students. He also won the McKinsey Prize for technical writing, won the Outstanding Sophomore Debater award, played in the Stanford Woodwind Quintet and Stanford Symphony Orchestra, and won the Hamilton Award for combining excellence in engineering with notable achievements in the humanities and social sciences.
Upon his graduation with a BS degree in industrial engineering, he was awarded three national fellowships (National Science Foundation, Tau Beta Pi, and Danforth) for graduate study at Stanford with specialization in operations research. During his three years of graduate study, he took numerous additional courses in mathematics, statistics, and economics beyond what was required for his MS and PhD degrees while also teaching two courses (including “Introduction to Operations Research”).

Upon receiving his PhD degree, he joined the faculty of Stanford University and began work on the 1st edition of this textbook two years later. He subsequently earned tenure at the age of 28 and the rank of full professor at 32. He also received visiting appointments at Cornell University, Carnegie-Mellon University, the Technical University of Denmark, the University of Canterbury (New Zealand), and the University of Cambridge (England). After 35 years on the Stanford faculty, he took early retirement from his faculty responsibilities in order to focus full time on textbook writing, and now is Professor Emeritus of Operations Research at Stanford.

Dr. Hillier’s research has extended into a variety of areas, including integer programming, queueing theory and its application, statistical quality control, and the application of operations research to the design of production systems and to capital budgeting. He has published widely, and his seminal papers have been selected for republication in books of selected readings at least 10 times. He was the first-prize winner of a research contest on “Capital Budgeting of Interrelated Projects” sponsored by The Institute of Management Sciences (TIMS) and the U.S. Office of Naval Research. He and Dr.
Lieberman also received the honorable mention award for the 1995 Lanchester Prize (best English-language publication of any kind in the field of operations research), which was awarded by the Institute for Operations Research and the Management Sciences (INFORMS) for the 6th edition of this book. In addition, he was the recipient of the prestigious 2004 INFORMS Expository Writing Award for the 8th edition of this book.

Dr. Hillier has held many leadership positions with the professional societies in his field. For example, he has served as treasurer of the Operations Research Society of America (ORSA), vice president for meetings of TIMS, co-general chairman of the 1989 TIMS International Meeting in Osaka, Japan, chair of the TIMS Publications Committee, chair of the ORSA Search Committee for Editor of Operations Research, chair of the ORSA Resources Planning Committee, chair of the ORSA/TIMS Combined Meetings Committee, and chair of the John von Neumann Theory Prize Selection Committee for INFORMS. He also is a Fellow of INFORMS. In addition, he recently completed a 20-year tenure as the series editor for Springer’s International Series in Operations Research and Management Science, a particularly prominent book series with over 200 published books that he founded in 1993.

In addition to Introduction to Operations Research and two companion volumes, Introduction to Mathematical Programming (2nd ed., 1995) and Introduction to Stochastic Models in Operations Research (1990), his books are The Evaluation of Risky Interrelated Investments (North-Holland, 1969), Queueing Tables and Graphs (Elsevier North-Holland, 1981, co-authored by O. S. Yu, with D. M. Avis, L. D. Fossett, F. D. Lo, and M. I. Reiman), and Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets (5th ed., McGraw-Hill/Irwin, 2014, co-authored by his son Mark Hillier).

The late Gerald J. Lieberman sadly passed away in 1999.
He had been Professor Emeritus of Operations Research and Statistics at Stanford University, where he was the founding chair of the Department of Operations Research. He was both an engineer (having received an undergraduate degree in mechanical engineering from Cooper Union) and an operations research statistician (with an AM from Columbia University in mathematical statistics, and a PhD from Stanford University in statistics).

Dr. Lieberman was one of Stanford’s most eminent leaders in recent decades. After chairing the Department of Operations Research, he served as associate dean of the School of Humanities and Sciences, vice provost and dean of research, vice provost and dean of graduate studies, chair of the faculty senate, member of the University Advisory Board, and chair of the Centennial Celebration Committee. He also served as provost or acting provost under three different Stanford presidents.

Throughout these years of university leadership, he also remained active professionally. His research was in the stochastic areas of operations research, often at the interface of applied probability and statistics. He published extensively in the areas of reliability and quality control, and in the modeling of complex systems, including their optimal design, when resources are limited.

Highly respected as a senior statesman of the field of operations research, Dr. Lieberman served in numerous leadership roles, including as the elected president of The Institute of Management Sciences. His professional honors included being elected to the National Academy of Engineering, receiving the Shewhart Medal of the American Society for Quality Control, receiving the Cuthbertson Award for exceptional service to Stanford University, and serving as a fellow at the Center for Advanced Study in the Behavioral Sciences. In addition, the Institute for Operations Research and the Management Sciences (INFORMS) awarded him and Dr.
Hillier the honorable mention award for the 1995 Lanchester Prize for the 6th edition of this book. In 1996, INFORMS also awarded him the prestigious Kimball Medal for his exceptional contributions to the field of operations research and management science.

In addition to Introduction to Operations Research and two companion volumes, Introduction to Mathematical Programming (2nd ed., 1995) and Introduction to Stochastic Models in Operations Research (1990), his books are Handbook of Industrial Statistics (Prentice-Hall, 1955, co-authored by A. H. Bowker), Tables of the Non-Central t-Distribution (Stanford University Press, 1957, co-authored by G. J. Resnikoff), Tables of the Hypergeometric Probability Distribution (Stanford University Press, 1961, co-authored by D. Owen), Engineering Statistics (2nd ed., Prentice-Hall, 1972, co-authored by A. H. Bowker), and Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets (McGraw-Hill/Irwin, 2000, co-authored by F. S. Hillier and M. S. Hillier).

ABOUT THE CASE WRITERS

Karl Schmedders is professor of quantitative business administration at the University of Zurich in Switzerland and a visiting associate professor at the Kellogg Graduate School of Management (Northwestern University). His research interests include management science, financial economics, and computational economics and finance. In 2003, a paper by Dr. Schmedders received a nomination for the Smith-Breeden Prize for the best paper in the Journal of Finance. He received his doctorate in operations research from Stanford University, where he taught both undergraduate and graduate classes in operations research, including a case studies course in operations research. He received several teaching awards at Stanford, including the university’s prestigious Walter J. Gores Teaching Award.
After post-doctoral research at the Hoover Institution, a think tank on the Stanford campus, he became assistant professor of managerial economics and decision sciences at the Kellogg School. He was promoted to associate professor in 2001 and received tenure in 2005. In 2008, he joined the University of Zurich, where he currently teaches courses in management science, spreadsheet modeling, and computational economics and finance. At Kellogg he received several teaching awards, including the L. G. Lavengood Professor of the Year Award. More recently he won the best professor award of the Kellogg School’s European EMBA program (2008, 2009, and 2011) and its Miami EMBA program (2011).

Molly Stephens is a partner in the Los Angeles office of Quinn, Emanuel, Urquhart & Sullivan, LLP. She graduated from Stanford University with a BS degree in industrial engineering and an MS degree in operations research. Ms. Stephens taught public speaking in Stanford’s School of Engineering and served as a teaching assistant for a case studies course in operations research. As a teaching assistant, she analyzed operations research problems encountered in the real world and the transformation of these problems into classroom case studies. Her research was rewarded when she won an undergraduate research grant from Stanford to continue her work and was invited to speak at an INFORMS conference to present her conclusions regarding successful classroom case studies.

Following graduation, Ms. Stephens worked at Andersen Consulting as a systems integrator, experiencing real cases from the inside, before resuming her graduate studies to earn a JD degree (with honors) from the University of Texas Law School at Austin. She is a partner in the largest law firm in the United States devoted solely to business litigation, where her practice focuses on complex financial and securities litigation.

DEDICATION

To the memory of our parents

and

To the memory of my beloved mentor, Gerald J.
Lieberman, who was one of the true giants of our field

TABLE OF CONTENTS

PREFACE

CHAPTER 1 Introduction
1.1 The Origins of Operations Research
1.2 The Nature of Operations Research
1.3 The Rise of Analytics Together with Operations Research
1.4 The Impact of Operations Research
1.5 Algorithms and OR Courseware
Selected References; Problems

CHAPTER 2 Overview of the Operations Research Modeling Approach
2.1 Defining the Problem and Gathering Data
2.2 Formulating a Mathematical Model
2.3 Deriving Solutions from the Model
2.4 Testing the Model
2.5 Preparing to Apply the Model
2.6 Implementation
2.7 Conclusions
Selected References; Problems

CHAPTER 3 Introduction to Linear Programming
3.1 Prototype Example
3.2 The Linear Programming Model
3.3 Assumptions of Linear Programming
3.4 Additional Examples
3.5 Formulating and Solving Linear Programming Models on a Spreadsheet
3.6 Formulating Very Large Linear Programming Models
3.7 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 3.1 Auto Assembly
Previews of Added Cases on Our Website: Case 3.2 Cutting Cafeteria Costs; Case 3.3 Staffing a Call Center; Case 3.4 Promoting a Breakfast Cereal

CHAPTER 4 Solving Linear Programming Problems: The Simplex Method
4.1 The Essence of the Simplex Method
4.2 Setting Up the Simplex Method
4.3 The Algebra of the Simplex Method
4.4 The Simplex Method in Tabular Form
4.5 Tie Breaking in the Simplex Method
4.6 Adapting to Other Model Forms
4.7 Postoptimality Analysis
4.8 Computer Implementation
4.9 The Interior-Point Approach to Solving Linear Programming Problems
4.10 Conclusions
Appendix 4.1 An Introduction to Using LINDO and LINGO
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 4.1 Fabrics and Fall Fashions
Previews of Added Cases on Our Website: Case 4.2 New Frontiers; Case 4.3 Assigning Students to Schools

CHAPTER 5 The Theory of the Simplex Method
5.1 Foundations of the Simplex Method
5.2 The Simplex Method in Matrix Form
5.3 A Fundamental Insight
5.4 The Revised Simplex Method
5.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 6 Duality Theory
6.1 The Essence of Duality Theory
6.2 Economic Interpretation of Duality
6.3 Primal–Dual Relationships
6.4 Adapting to Other Primal Forms
6.5 The Role of Duality Theory in Sensitivity Analysis
6.6 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 7 Linear Programming under Uncertainty
7.1 The Essence of Sensitivity Analysis
7.2 Applying Sensitivity Analysis
7.3 Performing Sensitivity Analysis on a Spreadsheet
7.4 Robust Optimization
7.5 Chance Constraints
7.6 Stochastic Programming with Recourse
7.7 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 7.1 Controlling Air Pollution
Previews of Added Cases on Our Website: Case 7.2 Farm Management; Case 7.3 Assigning Students to Schools, Revisited; Case 7.4 Writing a Nontechnical Memo

CHAPTER 8 Other Algorithms for Linear Programming
8.1 The Dual Simplex Method
8.2 Parametric Linear Programming
8.3 The Upper Bound Technique
8.4 An Interior-Point Algorithm
8.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 9 The Transportation and Assignment Problems
9.1 The Transportation Problem
9.2 A Streamlined Simplex Method for the Transportation Problem
9.3 The Assignment Problem
9.4 A Special Algorithm for the Assignment Problem
9.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 9.1 Shipping Wood to Market
Previews of Added Cases on Our Website: Case 9.2 Continuation of the Texago Case Study; Case 9.3 Project Pickings

CHAPTER 10 Network Optimization Models
10.1 Prototype Example
10.2 The Terminology of Networks
10.3 The Shortest-Path Problem
10.4 The Minimum Spanning Tree Problem
10.5 The Maximum Flow Problem
10.6 The Minimum Cost Flow Problem
10.7 The Network Simplex Method
10.8 A Network Model for Optimizing a Project’s Time–Cost Trade-Off
10.9 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 10.1 Money in Motion
Previews of Added Cases on Our Website: Case 10.2 Aiding Allies; Case 10.3 Steps to Success

CHAPTER 11 Dynamic Programming
11.1 A Prototype Example for Dynamic Programming
11.2 Characteristics of Dynamic Programming Problems
11.3 Deterministic Dynamic Programming
11.4 Probabilistic Dynamic Programming
11.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 12 Integer Programming
12.1 Prototype Example
12.2 Some BIP Applications
12.3 Innovative Uses of Binary Variables in Model Formulation
12.4 Some Formulation Examples
12.5 Some Perspectives on Solving Integer Programming Problems
12.6 The Branch-and-Bound Technique and Its Application to Binary Integer Programming
12.7 A Branch-and-Bound Algorithm for Mixed Integer Programming
12.8 The Branch-and-Cut Approach to Solving BIP Problems
12.9 The Incorporation of Constraint Programming
12.10 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 12.1 Capacity Concerns
Previews of Added Cases on Our Website: Case 12.2 Assigning Art; Case 12.3 Stocking Sets; Case 12.4 Assigning Students to Schools, Revisited Again

CHAPTER 13 Nonlinear Programming
13.1 Sample Applications
13.2 Graphical Illustration of Nonlinear Programming Problems
13.3 Types of Nonlinear Programming Problems
13.4 One-Variable Unconstrained Optimization
13.5 Multivariable Unconstrained Optimization
13.6 The Karush-Kuhn-Tucker (KKT) Conditions for Constrained Optimization
13.7 Quadratic Programming
13.8 Separable Programming
13.9 Convex Programming
13.10 Nonconvex Programming (with Spreadsheets)
13.11 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 13.1 Savvy Stock Selection
Previews of Added Cases on Our Website: Case 13.2 International Investments; Case 13.3 Promoting a Breakfast Cereal, Revisited

CHAPTER 14 Metaheuristics
14.1 The Nature of Metaheuristics
14.2 Tabu Search
14.3 Simulated Annealing
14.4 Genetic Algorithms
14.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 15 Game Theory
15.1 The Formulation of Two-Person, Zero-Sum Games
15.2 Solving Simple Games—A Prototype Example
15.3 Games with Mixed Strategies
15.4 Graphical Solution Procedure
15.5 Solving by Linear Programming
15.6 Extensions
15.7 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 16 Decision Analysis
16.1 A Prototype Example
16.2 Decision Making without Experimentation
16.3 Decision Making with Experimentation
16.4 Decision Trees
16.5 Using Spreadsheets to Perform Sensitivity Analysis on Decision Trees
16.6 Utility Theory
16.7 The Practical Application of Decision Analysis
16.8 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 16.1 Brainy Business
Previews of Added Cases on Our Website: Case 16.2 Smart Steering Support; Case 16.3 Who Wants to be a Millionaire?; Case 16.4 University Toys and the Engineering Professor Action Figures

CHAPTER 17 Queueing Theory
17.1 Prototype Example
17.2 Basic Structure of Queueing Models
17.3 Examples of Real Queueing Systems
17.4 The Role of the Exponential Distribution
17.5 The Birth-and-Death Process
17.6 Queueing Models Based on the Birth-and-Death Process
17.7 Queueing Models Involving Nonexponential Distributions
17.8 Priority-Discipline Queueing Models
17.9 Queueing Networks
17.10 The Application of Queueing Theory
17.11 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 17.1 Reducing In-Process Inventory
Preview of an Added Case on Our Website: Case 17.2 Queueing Quandary

CHAPTER 18 Inventory Theory
18.1 Examples
18.2 Components of Inventory Models
18.3 Deterministic Continuous-Review Models
18.4 A Deterministic Periodic-Review Model
18.5 Deterministic Multiechelon Inventory Models for Supply Chain Management
18.6 A Stochastic Continuous-Review Model
18.7 A Stochastic Single-Period Model for Perishable Products
18.8 Revenue Management
18.9 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 18.1 Brushing Up on Inventory Control
Previews of Added Cases on Our Website: Case 18.2 TNT: Tackling Newsboy’s Teachings; Case 18.3 Jettisoning Surplus Stock

CHAPTER 19 Markov Decision Processes
19.1 A Prototype Example
19.2 A Model for Markov Decision Processes
19.3 Linear Programming and Optimal Policies
19.4 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 20 Simulation
20.1 The Essence of Simulation
20.2 Some Common Types of Applications of Simulation
20.3 Generation of Random Numbers
20.4 Generation of Random Observations from a Probability Distribution
20.5 Outline of a Major Simulation Study
20.6 Performing Simulations on Spreadsheets
20.7 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 20.1 Reducing In-Process Inventory, Revisited
Case 20.2 Action Adventures
Previews of Added Cases on Our Website: Case 20.3 Planning Planers; Case 20.4 Pricing under Pressure

APPENDIXES
1. Documentation for the OR Courseware
2. Convexity
3. Classical Optimization Methods
4. Matrices and Matrix Operations
5. Table for a Normal Distribution

PARTIAL ANSWERS TO SELECTED PROBLEMS

INDEXES
Author Index
Subject Index

SUPPLEMENTS AVAILABLE ON THE TEXT WEBSITE (www.mhhe.com/hillier)

ADDITIONAL CASES
Case 3.2 Cutting Cafeteria Costs
Case 3.3 Staffing a Call Center
Case 3.4 Promoting a Breakfast Cereal
Case 4.2 New Frontiers
Case 4.3 Assigning Students to Schools
Case 7.2 Farm Management
Case 7.3 Assigning Students to Schools, Revisited
Case 7.4 Writing a Nontechnical Memo
Case 9.2 Continuation of the Texago Case Study
Case 9.3 Project Pickings
Case 10.2 Aiding Allies
Case 10.3 Steps to Success
Case 12.2 Assigning Art
Case 12.3 Stocking Sets
Case 12.4 Assigning Students to Schools, Revisited Again
Case 13.2 International Investments
Case 13.3 Promoting a Breakfast Cereal, Revisited
Case 16.2 Smart Steering Support
Case 16.3 Who Wants to be a Millionaire?
Case 16.4 University Toys and the Engineering Professor Action Figures
Case 17.2 Queueing Quandary
Case 18.2 TNT: Tackling Newsboy’s Teachings
Case 18.3 Jettisoning Surplus Stock
Case 20.3 Planning Planers
Case 20.4 Pricing under Pressure

SUPPLEMENT 1 TO CHAPTER 3 The LINGO Modeling Language
SUPPLEMENT 2 TO CHAPTER 3 More about LINGO
SUPPLEMENT TO CHAPTER 8 Linear Goal Programming and Its Solution Procedures (Problems; Case 8S.1 A Cure for Cuba; Case 8S.2 Airport Security)
SUPPLEMENT TO CHAPTER 9 A Case Study with Many Transportation Problems
SUPPLEMENT TO CHAPTER 16 Using TreePlan Software for Decision Trees
SUPPLEMENT 1 TO CHAPTER 18 Derivation of the Optimal Policy for the Stochastic Single-Period Model for Perishable Products (Problems)
SUPPLEMENT 2 TO CHAPTER 18 Stochastic Periodic-Review Models (Problems)
SUPPLEMENT 1 TO CHAPTER 19 A Policy Improvement Algorithm for Finding Optimal Policies (Problems)
SUPPLEMENT 2 TO CHAPTER 19 A Discounted Cost Criterion (Problems)
SUPPLEMENT 1 TO CHAPTER 20 Variance-Reducing Techniques (Problems)
SUPPLEMENT 2 TO CHAPTER 20 Regenerative Method of Statistical Analysis (Problems)

CHAPTER 21 The Art of Modeling with Spreadsheets
21.1 A Case Study: The Everglade Golden Years Company Cash Flow Problem
21.2 Overview of the Process of Modeling with Spreadsheets
21.3 Some Guidelines for Building “Good” Spreadsheet Models
21.4 Debugging a Spreadsheet Model
21.5 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 21.1 Prudent Provisions for Pensions

CHAPTER 22 Project Management with PERT/CPM
22.1 A Prototype Example—The Reliable Construction Co. Project
22.2 Using a Network to Visually Display a Project
22.3 Scheduling a Project with PERT/CPM
22.4 Dealing with Uncertain Activity Durations
22.5 Considering Time-Cost Trade-Offs
22.6 Scheduling and Controlling Project Costs
22.7 An Evaluation of PERT/CPM
22.8 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 22.1 “School’s out forever . . .”

CHAPTER 23 Additional Special Types of Linear Programming Problems
23.1 The Transshipment Problem
23.2 Multidivisional Problems
23.3 The Decomposition Principle for Multidivisional Problems
23.4 Multitime Period Problems
23.5 Multidivisional Multitime Period Problems
23.6 Conclusions
Selected References; Problems

CHAPTER 24 Probability Theory
24.1 Sample Space
24.2 Random Variables
24.3 Probability and Probability Distributions
24.4 Conditional Probability and Independent Events
24.5 Discrete Probability Distributions
24.6 Continuous Probability Distributions
24.7 Expectation
24.8 Moments
24.9 Bivariate Probability Distribution
24.10 Marginal and Conditional Probability Distributions
24.11 Expectations for Bivariate Distributions
24.12 Independent Random Variables and Random Samples
24.13 Law of Large Numbers
24.14 Central Limit Theorem
24.15 Functions of Random Variables
Selected References; Problems

CHAPTER 25 Reliability
25.1 Structure Function of a System
25.2 System Reliability
25.3 Calculation of Exact System Reliability
25.4 Bounds on System Reliability
25.5 Bounds on Reliability Based upon Failure Times
25.6 Conclusions
Selected References; Problems

CHAPTER 26 The Application of Queueing Theory
26.1 Examples
26.2 Decision Making
26.3 Formulation of Waiting-Cost Functions
26.4 Decision Models
26.5 The Evaluation of Travel Time
26.6 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 27 Forecasting
27.1 Some Applications of Forecasting
27.2 Judgmental Forecasting Methods
27.3 Time Series
27.4 Forecasting Methods for a Constant-Level Model
27.5 Incorporating Seasonal Effects into Forecasting Methods
27.6 An Exponential Smoothing Method for a Linear Trend Model
27.7 Forecasting Errors
27.8 Box-Jenkins Method
27.9 Causal Forecasting with Linear Regression
27.10 Forecasting in Practice
27.11 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems
Case 27.1 Finagling the Forecasts

CHAPTER 28 Examples of Performing Simulations on Spreadsheets with Analytic Solver Platform
28.1 Bidding for a Construction Project
28.2 Project Management
28.3 Cash Flow Management
28.4 Financial Risk Analysis
28.5 Revenue Management in the Travel Industry
28.6 Choosing the Right Distribution
28.7 Decision Making with Parameter Analysis Reports and Trend Charts
28.8 Conclusions
Selected References; Learning Aids for This Chapter on Our Website; Problems

CHAPTER 29 Markov Chains
29.1 Stochastic Processes
29.2 Markov Chains
29.3 Chapman-Kolmogorov Equations
29.4 Classification of States of a Markov Chain
29.5 Long-Run Properties of Markov Chains
29.6 First Passage Times
29.7 Absorbing States
29.8 Continuous Time Markov Chains
Selected References; Learning Aids for This Chapter on Our Website; Problems

APPENDIX 6 Simultaneous Linear Equations

PREFACE

When Jerry Lieberman and I started working on the first edition of this book 50 years ago, our goal was to develop a pathbreaking textbook that would help establish the future direction of education in what was then the emerging field of operations research. Following publication, it was unclear how well this particular goal was met, but what did become clear was that the demand for the book was far larger than either of us had anticipated. Neither of us could have imagined that this extensive worldwide demand would continue at such a high level for such an extended period of time.
The enthusiastic response to our first nine editions has been most gratifying. It was a particular pleasure to have the field's leading professional society, the international Institute for Operations Research and the Management Sciences (INFORMS), award the 6th edition honorable mention for the 1995 INFORMS Lanchester Prize (the prize awarded for the year's most outstanding English-language publication of any kind in the field of operations research). Then, just after the publication of the eighth edition, it was especially gratifying to be the recipient of the prestigious 2004 INFORMS Expository Writing Award for this book, which included the following citation:

Over 37 years, successive editions of this book have introduced more than one-half million students to the field and have attracted many people to enter the field for academic activity and professional practice. Many leaders in the field and many current instructors first learned about the field via an edition of this book. The extensive use of international student editions and translations into 15 other languages has contributed to spreading the field around the world. The book remains preeminent even after 37 years. Although the eighth edition just appeared, the seventh edition had 46 percent of the market for books of its kind, and it ranked second in international sales among all McGraw-Hill publications in engineering. Two features account for this success. First, the editions have been outstanding from students' points of view due to excellent motivation, clear and intuitive explanations, good examples of professional practice, excellent organization of material, very useful supporting software, and appropriate but not excessive mathematics. Second, the editions have been attractive from instructors' points of view because they repeatedly infuse state-of-the-art material with remarkable lucidity and plain language.
For example, a wonderful chapter on metaheuristics was created for the eighth edition.

When we began work on the book 50 years ago, Jerry already was a prominent member of the field, a successful textbook writer, and the chairman of a renowned operations research program at Stanford University. I was a very young assistant professor just starting my career. It was a wonderful opportunity for me to work with and to learn from the master. I will be forever indebted to Jerry for giving me this opportunity. Now, sadly, Jerry is no longer with us. During the progressive illness that led to his death 14 years ago, I resolved that I would pick up the torch and devote myself to subsequent editions of this book, maintaining a standard that would fully honor Jerry. Therefore, I took early retirement from my faculty responsibilities at Stanford in order to work full time on textbook writing for the foreseeable future. This has enabled me to spend far more than the usual amount of time in preparing each new edition. It also has enabled me to closely monitor new trends and developments in the field in order to bring this edition completely up to date. This monitoring has led to the choice of the major additions to the new edition outlined next.

■ WHAT'S NEW IN THIS EDITION

• Analytic Solver Platform for Education. This edition continues to provide the option of using Excel and its Solver (a product of Frontline Systems, Inc.) to formulate and solve some operations research (OR) models. Frontline Systems also has developed some advanced Excel-based software packages. One recently released package, Analytic Solver Platform, is particularly exciting because of its tremendous versatility. It provides strong capability for dealing with the types of OR models considered in most of the chapters of this book, including linear programming, integer programming, nonlinear programming, decision analysis, simulation, and forecasting.
Rather than requiring the use of a collection of Excel add-ins to deal with all of these areas (as in the preceding edition), Analytic Solver Platform provides an all-in-one package for formulating and solving many OR models in spreadsheets. We are delighted to have integrated the student version of this package, Analytic Solver Platform for Education (ASPE), into this new edition. A special arrangement has been made with Frontline Systems to provide students with a free 140-day license for ASPE. At the same time, we have integrated ASPE in such a way that it can readily be skipped over without loss of continuity for those who do not wish to use spreadsheets. A number of other attractive software options continue to be provided in this edition (as described later). In addition, a relatively brief introduction to spreadsheet modeling can also be obtained by using only Excel's standard Solver. However, we believe that many instructors and students will welcome the great power and versatility of ASPE.

• A New Section on Robust Optimization. OR models typically are formulated to help select some future course of action, so the values of the model parameters need to be based on a prediction of future conditions. This sometimes results in having a significant amount of uncertainty about what the parameter values actually will turn out to be when the optimal solution from the model is implemented. For problems where there is no latitude for violating the constraints even a little bit, a relatively new technique called robust optimization provides a way of obtaining a solution that is virtually guaranteed to be feasible and nearly optimal regardless of reasonable deviations of the parameter values from their estimated values. The new Section 7.4 introduces the robust optimization approach when dealing with linear programming problems.

• A New Section on Chance Constraints.
The new Section 7.5 continues the discussion in Section 7.4 by turning to the case where there is some latitude for violating some constraints a little bit without very serious complications. This leads to the option of using chance constraints, where each chance constraint modifies an original constraint by only requiring that there be some very high probability that the original constraint will be satisfied. When the original problem is a linear programming problem, each of these chance constraints can be converted into a deterministic equivalent that still is a linear programming constraint. Section 7.5 describes how this important idea is implemented.

• A New Section on Stochastic Programming with Recourse. Stochastic programming provides still another way of reformulating a linear programming model (or another type of model) where there is some uncertainty about what the values of the parameters will turn out to be. This approach is particularly valuable for those problems where the decisions will be made in two (or more) stages, so the decisions in stage 2 can help compensate for any stage 1 decisions that do not turn out as well as hoped because of errors in estimating some parameter values. The new Section 7.6 describes stochastic programming with recourse for dealing with such problems.

• A New Chapter on Linear Programming under Uncertainty That Includes These New Sections. One of the key assumptions of linear programming (as for many other OR models) is the certainty assumption, which says that the value assigned to each parameter of a linear programming model is assumed to be a known constant. This is a convenient assumption, but it seldom is satisfied precisely.
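The chance-constraint conversion described above can be made concrete with the standard textbook derivation. (This sketch is illustrative rather than drawn from the book itself; it assumes the only uncertain parameter is the right-hand side $b_i$, taken to be normally distributed with known mean $\mu_i$ and standard deviation $\sigma_i$.)

```latex
% Chance constraint with an uncertain right-hand side b_i ~ N(mu_i, sigma_i^2):
P\Bigl(\sum_{j} a_{ij} x_j \le b_i\Bigr) \ge \alpha
% Standardizing b_i shows that this holds if and only if
\sum_{j} a_{ij} x_j \;\le\; \mu_i + K_\alpha \sigma_i,
\qquad K_\alpha = \Phi^{-1}(1-\alpha),
% where \Phi is the standard normal CDF. For a high probability such as
% \alpha = 0.95, K_\alpha = \Phi^{-1}(0.05) \approx -1.645, so the original
% right-hand side is simply tightened from mu_i to mu_i - 1.645 sigma_i.
```

Because $\mu_i + K_\alpha \sigma_i$ is just a constant, the deterministic equivalent is an ordinary linear constraint, which is why the reformulated problem remains a linear program.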
One of the most important concepts to get across in an introductory OR course is that, although it usually is necessary to make some simplifying assumptions when formulating a model of a problem, it then is very important after solving the model to explore the impact of these simplifying assumptions. This concept can be most readily conveyed in the context of linear programming because of all the methodology that now has been developed for dealing with linear programming under uncertainty. One key technique of this type is sensitivity analysis, but some other relatively elementary techniques now have also been well developed, including particularly the ones presented in the three new sections described above. Therefore, the old Chapter 6 (Duality Theory and Sensitivity Analysis) now has been divided into two new chapters—Chapter 6 (Duality Theory) and Chapter 7 (Linear Programming under Uncertainty). The new Chapter 7 includes the three sections on sensitivity analysis from the old Chapter 6 but also adds the three new sections described above.

• A New Section on the Rise of Analytics Together with Operations Research. A particularly dramatic development in the field of operations research over the last several years has been the great buzz throughout the business world about something called analytics (or business analytics) and the importance of incorporating analytics into managerial decision making. As it turns out, the discipline of analytics is closely related to the discipline of operations research, although there are some differences in emphasis. OR can be thought of as focusing mainly on advanced analytics, whereas analytics professionals might get more involved with the less advanced aspects of a study. Some fads come and go, but this appears to be a permanent shift in the direction of OR in the coming years. In fact, we could even find analytics eventually replacing operations research as the common name for this integrated discipline.
Because of this close and growing tie between the two disciplines, it has become important to describe this relationship and to put it into perspective in an introductory OR course. This has been done in the new Section 1.3.

• Many New or Revised Problems. A significant number of new problems have been added to support the new topics and application vignettes. In addition, many of the problems from the ninth edition have been revised. Therefore, an instructor who does not wish to assign problems that were assigned in previous classes has a substantial number from which to choose.

• A Reorganization to Reduce the Size of the Book. An unfortunate trend with early editions of this book was that each new edition was significantly larger than the previous one. This continued until the seventh edition had become considerably larger than is desirable for an introductory survey textbook. Therefore, I worked hard to substantially reduce the size of the eighth edition and then further reduced the size of the ninth edition slightly. I also adopted the goal of avoiding any growth in subsequent editions. Indeed, this edition is 35 pages shorter than the ninth edition. This was accomplished through a variety of means. One was being careful not to add too much new material. Another was deleting certain low-priority material, including the presentation of parametric linear programming in conjunction with sensitivity analysis (it already is covered later in Section 8.2) and a complicated dynamic programming example (the Wyndor problem with three state variables) that can be solved much more easily in other ways. Finally, and most importantly, 50 pages were saved by shifting two little-used items (the chapter on Markov chains and the last two major sections on Markov decision processes) to the supplements on the book's website.
Markov chains are a central topic of probability theory and stochastic processes that have been borrowed as a tool of operations research, so this chapter better fits as a reference in the supplements.

• Updating to Reflect the Current State of the Art. A special effort has been made to keep the book completely up to date. This included adding relatively new developments (the four new sections mentioned above) that now warrant consideration in an introductory survey course, as well as making sure that all the material in the ninth edition has been brought up to date. It also included carefully updating both the application vignettes and selected references for each chapter.

■ OTHER SPECIAL FEATURES OF THIS BOOK

• An Emphasis on Real Applications. The field of operations research is continuing to have a dramatic impact on the success of numerous companies and organizations around the world. Therefore, one of the goals of this book is to tell this story clearly and thereby excite students about the great relevance of the material they are studying. This goal is pursued in four ways. One is the inclusion of many application vignettes scattered throughout the book that describe in a few paragraphs how an actual application of operations research had a powerful impact on a company or organization by using techniques like those studied in that portion of the book. For each application vignette, a problem also is included in the problems section of that chapter that requires the student to read the full article describing the application and then answer some questions. Second, real applications also are briefly described (especially in Chapters 2 and 12) as part of the presentation of some OR technique to illustrate its use. Third, many cases patterned after real applications are included at the end of chapters and on the book's website. Fourth, many selected references of award-winning OR applications are given at the end of some of the chapters.
Once again, problems are included at the end of these chapters that require reading one or more of the articles describing these applications. The next bullet point describes how students have immediate access to these articles.

• Links to Many Articles Describing Dramatic OR Applications. We are excited about a partnership with the Institute for Operations Research and the Management Sciences (INFORMS), our field's preeminent professional society, to provide a link on this book's website to approximately 100 articles describing award-winning OR applications, including the ones described in all of the application vignettes. (Information about INFORMS journals, meetings, job bank, scholarships, awards, and teaching materials is at www.informs.org.) These articles and the corresponding end-of-chapter problems provide instructors with the option of having their students delve into real applications that dramatically demonstrate the relevance of the material being covered in the lectures. It would even be possible to devote significant course time to discussing real applications.

• A Wealth of Supplementary Chapters and Sections on the Website. In addition to the approximately 1,000 pages in this book, another several hundred pages of supplementary material also are provided on this book's website (as outlined in the table of contents). This includes nine complete chapters and a considerable number of supplements to chapters in the book, as well as a substantial number of additional cases. All of the supplementary chapters include problems and selected references. Most of the supplements to chapters also have problems. Today, when students think nothing of accessing material electronically, instructors should feel free to include some of this supplementary material in their courses.

• Many Additional Examples Are Available. An especially important learning aid on the book's website is a set of Solved Examples for almost every chapter in the book.
We believe that most students will find the examples in the book fully adequate but that others will feel the need to go through additional examples. These solved examples on the website will provide the latter category of students the needed help, but without interrupting the flow of the material in the book on those many occasions when most students don't need to see an additional example. Many students also might find these additional examples helpful when preparing for an examination. We recommend to instructors that they point out this important learning aid to their students.

• Great Flexibility for What to Emphasize. We have found that there is great variability in what instructors want to emphasize in an introductory OR survey course. Some want to emphasize the mathematics and algorithms of operations research. Others will emphasize model formulation with little concern for the details of the algorithms needed to solve these models. Others want an even more applied course, with emphasis on applications and the role of OR in managerial decision making. Some instructors will focus on the deterministic models of OR, while others will emphasize stochastic models. There also are great differences in the kind of software (if any) that instructors want their students to use. All of this helps to explain why the book is a relatively large one. We believe that we have provided enough material to meet the needs of all of these kinds of instructors. Furthermore, the book is organized in such a way that it is relatively easy to pick and choose the desired material without loss of continuity. It even is possible to provide great flexibility on the kind of software (if any) that instructors want their students to use, as described below in the section on software options.

• A Customizable Version of the Text Also Is Available.
Because the text provides great flexibility for what to emphasize, an instructor can easily pick and choose just certain portions of the book to cover. Rather than covering nearly all of the 1,000 pages in the book, perhaps you wish to use only a much smaller portion of the text. Fortunately, McGraw-Hill provides an option for using a considerably smaller and less expensive version of the book that is customized to meet your needs. With McGraw-Hill Create™, you can include only the chapters you want to cover. You also can easily rearrange chapters, combine material from other content sources, and quickly upload content you have written, like your course syllabus or teaching notes. If desired, you can use Create to search for useful supplementary material in various other leading McGraw-Hill textbooks. For example, if you wish to emphasize spreadsheet modeling and applications, we would recommend including some chapters from the Hillier-Hillier textbook, Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets. Arrange your book to fit your teaching style. Create even allows you to personalize your book's appearance by selecting the cover and adding your name, school, and course information. Order a Create book and you'll receive a complimentary print review copy in 3–5 business days or a complimentary electronic review copy (eComp) via e-mail in minutes. You can go to www.mcgrawhillcreate.com and register to experience how McGraw-Hill Create empowers you to teach your students your way.

■ A WEALTH OF SOFTWARE OPTIONS

A wealth of software options is provided on the book's website www.mhhe.com/hillier as outlined below:

• Excel spreadsheets: state-of-the-art spreadsheet formulations in Excel files for all relevant examples throughout the book. The standard Excel Solver can solve most of these examples.
• Analytic Solver Platform for Education (ASPE), described earlier, a powerful package for formulating and solving a wide variety of OR models in an Excel environment.
• A number of Excel templates for solving basic models.
• Student versions of LINDO (a traditional optimizer) and LINGO (a popular algebraic modeling language), along with formulations and solutions for all relevant examples throughout the book.
• Student versions of MPL (a leading algebraic modeling language) along with an MPL Tutorial and MPL formulations and solutions for all relevant examples throughout the book.
• Student versions of several elite MPL solvers for linear programming, integer programming, convex programming, global optimization, etc.
• Queueing Simulator (for the simulation of queueing systems).
• OR Tutor for illustrating various algorithms in action.
• Interactive Operations Research (IOR) Tutorial for efficiently learning and executing algorithms interactively, implemented in Java 2 in order to be platform independent.

Numerous students have found OR Tutor and IOR Tutorial very helpful for learning algorithms of operations research. When moving to the next stage of solving OR models automatically, surveys have found instructors almost equally split in preferring one of the following options for their students' use: (1) Excel spreadsheets, including Excel's Solver (and now ASPE), (2) convenient traditional software (LINDO and LINGO), and (3) state-of-the-art OR software (MPL and its elite solvers). For this edition, therefore, I have retained the philosophy of the last few editions of providing enough introduction in the book to enable the basic use of any of the three options without distracting those using another, while also providing ample supporting material for each option on the book's website.
Because of the power and versatility of ASPE, we no longer include a number of Excel-based software packages (Crystal Ball, Premium Solver for Education, TreePlan, SensIt, RiskSim, and Solver Table) that were bundled with recent editions. ASPE alone matches or exceeds the capabilities of all these previous packages.

Additional Online Resources

• A glossary for every book chapter.
• Data files for various cases to enable students to focus on analysis rather than inputting large data sets.
• A test bank being provided to instructors, featuring moderately difficult questions that require students to show their work. Many of the questions in this test bank have previously been used successfully as test questions by the authors. The test bank for this new edition has been greatly expanded from the one for the 9th edition, so many new test questions now are available to instructors.
• A solutions manual and image files for instructors.

■ POWERFUL NEW ONLINE RESOURCES

CourseSmart Provides an eBook Version of This Text

This text is available as an eBook at www.CourseSmart.com. At CourseSmart you can take advantage of significant savings off the cost of a print textbook, reduce your impact on the environment, and gain access to powerful web tools for learning. CourseSmart eBooks can be viewed online or downloaded to a computer. The eBooks allow readers to do full text searches, add highlighting and notes, and share notes with others. CourseSmart has the largest selection of eBooks available anywhere. Visit www.CourseSmart.com to learn more and to try a sample chapter.

McGraw-Hill Connect®

The online resources for this edition include McGraw-Hill Connect, a web-based assignment and assessment platform that can help students to perform better in their coursework and to master important concepts. With Connect, instructors can deliver assignments, quizzes, and tests easily online. Students can practice important skills at their own pace and on their own schedule.
Ask your McGraw-Hill Representative for more detail and check it out at www.mcgrawhillconnect.com/engineering.

McGraw-Hill LearnSmart®

McGraw-Hill LearnSmart® is an adaptive learning system designed to help students learn faster, study more efficiently, and retain more knowledge for greater success. Through a series of adaptive questions, LearnSmart pinpoints concepts the student does not understand and maps out a personalized study plan for success. It also lets instructors see exactly what students have accomplished, and it features a built-in assessment tool for graded assignments. Ask your McGraw-Hill Representative for more information, and visit www.mhlearnsmart.com for a demonstration.

McGraw-Hill SmartBook™

Powered by the intelligent and adaptive LearnSmart engine, SmartBook is the first and only continuously adaptive reading experience available today. Distinguishing what students know from what they don't, and homing in on concepts they are most likely to forget, SmartBook personalizes content for each student. Reading is no longer a passive and linear experience but an engaging and dynamic one, where students are more likely to master and retain important concepts, coming to class better prepared. SmartBook includes powerful reports that identify specific topics and learning objectives students need to study. These valuable reports also provide instructors insight into how students are progressing through textbook content and are useful for identifying class trends, focusing precious class time, providing personalized feedback to students, and tailoring assessment. How does SmartBook work? Each SmartBook contains four components: Preview, Read, Practice, and Recharge. Starting with an initial preview of each chapter and key learning objectives, students read the material and are guided to topics for which they need the most practice based on their responses to a continuously adapting diagnostic.
Read and practice continue until SmartBook directs students to recharge important material they are most likely to forget, to ensure concept mastery and retention.

■ THE USE OF THE BOOK

The overall thrust of all the revision efforts has been to build upon the strengths of previous editions to more fully meet the needs of today's students. These revisions make the book even more suitable for use in a modern course that reflects contemporary practice in the field. The use of software is integral to the practice of operations research, so the wealth of software options accompanying the book provides great flexibility to the instructor in choosing the preferred types of software for student use. All the educational resources accompanying the book further enhance the learning experience. Therefore, the book and its website should fit a course where the instructor wants the students to have a single self-contained textbook that complements and supports what happens in the classroom. The McGraw-Hill editorial team and I think that the net effect of the revision has been to make this edition even more of a "student's book"—clear, interesting, and well-organized with lots of helpful examples and illustrations, good motivation and perspective, easy-to-find important material, and enjoyable homework, without too much notation, terminology, and dense mathematics. We believe and trust that the numerous instructors who have used previous editions will agree that this is the best edition yet.

The prerequisites for a course using this book can be relatively modest. As with previous editions, the mathematics has been kept at a relatively elementary level. Most of Chaps. 1 to 15 (introduction, linear programming, and mathematical programming) require no mathematics beyond high school algebra. Calculus is used only in Chap. 13 (Nonlinear Programming) and in one example in Chap. 11 (Dynamic Programming). Matrix notation is used in Chap. 5 (The Theory of the Simplex Method), Chap.
6 (Duality Theory), Chap. 7 (Linear Programming under Uncertainty), Sec. 8.4 (An Interior-Point Algorithm), and Chap. 13, but the only background needed for this is presented in Appendix 4. For Chaps. 16 to 20 (probabilistic models), a previous introduction to probability theory is assumed, and calculus is used in a few places. In general terms, the mathematical maturity that a student achieves through taking an elementary calculus course is useful throughout Chaps. 16 to 20 and for the more advanced material in the preceding chapters.

The content of the book is aimed largely at the upper-division undergraduate level (including well-prepared sophomores) and at first-year (master's level) graduate students. Because of the book's great flexibility, there are many ways to package the material into a course. Chapters 1 and 2 give an introduction to the subject of operations research. Chapters 3 to 15 (on linear programming and mathematical programming) may essentially be covered independently of Chaps. 16 to 20 (on probabilistic models), and vice versa. Furthermore, the individual chapters among Chaps. 3 to 15 are almost independent, except that they all use basic material presented in Chap. 3 and perhaps in Chap. 4. Chapters 6 and 7 and Sec. 8.2 also draw upon Chap. 5. Sections 8.1 and 8.2 use parts of Chaps. 6 and 7. Section 10.6 assumes an acquaintance with the problem formulations in Secs. 9.1 and 9.3, while prior exposure to Secs. 8.3 and 9.2 is helpful (but not essential) in Sec. 10.7. Within Chaps. 16 to 20, there is considerable flexibility of coverage, although some integration of the material is available.

An elementary survey course covering linear programming, mathematical programming, and some probabilistic models can be presented in a quarter (40 hours) or semester by selectively drawing from material throughout the book. For example, a good survey of the field can be obtained from Chaps.
1, 2, 3, 4, 16, 17, 18, and 20, along with parts of Chaps. 10 to 14. A more extensive elementary survey course can be completed in two quarters (60 to 80 hours) by excluding just a few chapters, for example, Chaps. 8, 15, and 19. Chapters 1 to 9 (and perhaps part of Chap. 10) form an excellent basis for a (one-quarter) course in linear programming. The material in Chaps. 10 to 15 covers topics for another (one-quarter) course in other deterministic models. Finally, the material in Chaps. 16 to 20 covers the probabilistic (stochastic) models of operations research suitable for presentation in a (one-quarter) course. In fact, these latter three courses (the material in the entire text) can be viewed as a basic one-year sequence in the techniques of operations research, forming the core of a master's degree program. Each course outlined has been presented at either the undergraduate or graduate level at Stanford University, and this text has been used in basically the manner suggested.

The book's website will provide updates about the book, including errata. To access this site, visit www.mhhe.com/hillier.

■ ACKNOWLEDGMENTS

I am indebted to an excellent group of reviewers who provided sage advice for the revision process. This group included

Linda Chattin, Arizona State University
Antoine Deza, McMaster University
Jeff Kennington, Southern Methodist University
Adeel Khalid, Southern Polytechnic State University
James Luedtke, University of Wisconsin–Madison
Layek Abdel-Malek, New Jersey Institute of Technology
Jason Trobaugh, Washington University in St. Louis
Yiliu Tu, University of Calgary
Li Zhang, The Citadel
Xiang Zhou, City University of Hong Kong

In addition, thanks go to those instructors and students who sent email messages to provide their feedback on the 9th edition. This edition was very much a team effort.
Our case writers, Karl Schmedders and Molly Stephens (both graduates of our department), wrote 24 elaborate cases for the 7th edition, and all of these cases continue to accompany this new edition. One of our department's former PhD students, Michael O'Sullivan, developed OR Tutor for the 7th edition (and continued here), based on part of the software that my son Mark Hillier had developed for the 5th and 6th editions. Mark (who was born the same year as the first edition, earned his PhD at Stanford, and now is a tenured Associate Professor of Quantitative Methods at the University of Washington) provided both the spreadsheets and the Excel files (including many Excel templates) once again for this edition, as well as the Queueing Simulator. He also gave important help on the textual material involving ASPE and contributed greatly to Chaps. 21 and 28 on the book's website. In addition, he updated the 10th edition version of the solutions manual. Earlier editions of this solutions manual were prepared in an exemplary manner by a long sequence of PhD students from our department, including Che-Lin Su for the 8th edition and Pelin Canbolat for the 9th edition. Che-Lin and Pelin did outstanding work that nicely paved the way for Mark's work on the solutions manual. Last, but definitely not least, my dear wife, Ann Hillier (another Stanford graduate with a minor in operations research), provided me with important help on an almost daily basis. All the individuals named above were vital members of the team. I also owe a great debt of gratitude to four individuals and their companies for providing the special software and related information for the book.
Another Stanford PhD graduate, William Sun (CEO of the software company Accelet Corporation), and his team did a brilliant job of starting with much of Mark Hillier's earlier software and implementing it anew in Java 2 as IOR Tutorial for the 7th edition, as well as further enhancing IOR Tutorial for the subsequent editions. Linus Schrage of the University of Chicago and LINDO Systems (and who took an introductory operations research course from me 50 years ago) provided LINGO and LINDO for the book's website. He also supervised the further development of LINGO/LINDO files for the various chapters as well as providing tutorial material for the book's website. Another long-time friend, Bjarni Kristjansson (who heads Maximal Software), did the same thing for the MPL/Solvers files and MPL tutorial material, as well as arranging to provide a student version of MPL and various elite solvers for the book's website. Still another friend, Daniel Fylstra (head of Frontline Systems), has arranged to provide users of this book with a free 140-day license to use a student version of his company's exciting new software package, Analytic Solver Platform. These four individuals and their companies—Accelet Corporation, LINDO Systems, Maximal Software, and Frontline Systems—have made an invaluable contribution to this book. I also am excited about the partnership with INFORMS that began with the 9th edition. Students can benefit greatly by reading about top-quality applications of operations research. This preeminent professional OR society is enabling this by providing a link to the articles in Interfaces that describe the applications of OR that are summarized in the application vignettes and other selected references of award-winning OR applications provided in the book.
It was a real pleasure working with McGraw-Hill's thoroughly professional editorial and production staff, including Raghu Srinivasan (Global Publisher), Kathryn Neubauer Carney (the Developmental Editor during most of the development of this edition), Vincent Bradshaw (the Developmental Editor for the completion of this edition), and Mary Jane Lampe (Content Project Manager). Just as so many individuals made important contributions to this edition, I would like to invite each of you to start contributing to the next edition by using my email address below to send me your comments, suggestions, and errata to help me improve the book in the future. In giving my email address, let me also assure instructors that I will continue to follow the policy of not providing solutions to problems and cases in the book to anybody (including your students) who contacts me. Enjoy the book.

Frederick S. Hillier
Stanford University (fhillier@stanford.edu)
May 2013

CHAPTER 1
Introduction

■ 1.1 THE ORIGINS OF OPERATIONS RESEARCH

Since the advent of the industrial revolution, the world has seen a remarkable growth in the size and complexity of organizations. The artisans' small shops of an earlier era have evolved into the billion-dollar corporations of today. An integral part of this revolutionary change has been a tremendous increase in the division of labor and segmentation of management responsibilities in these organizations. The results have been spectacular. However, along with its blessings, this increasing specialization has created new problems, problems that are still occurring in many organizations. One problem is a tendency for the many components of an organization to grow into relatively autonomous empires with their own goals and value systems, thereby losing sight of how their activities and objectives mesh with those of the overall organization.
What is best for one component frequently is detrimental to another, so the components may end up working at cross purposes. A related problem is that as the complexity and specialization in an organization increase, it becomes more and more difficult to allocate the available resources to the various activities in a way that is most effective for the organization as a whole. These kinds of problems and the need to find a better way to solve them provided the environment for the emergence of operations research (commonly referred to as OR). The roots of OR can be traced back many decades,¹ when early attempts were made to use a scientific approach in the management of organizations. However, the beginning of the activity called operations research has generally been attributed to the military services early in World War II. Because of the war effort, there was an urgent need to allocate scarce resources to the various military operations and to the activities within each operation in an effective manner. Therefore, the British and then the U.S. military management called upon a large number of scientists to apply a scientific approach to dealing with this and other strategic and tactical problems. In effect, they were asked to do research on (military) operations. These teams of scientists were the first OR teams. By developing effective methods of using the new tool of radar, these teams were instrumental in winning the Air Battle of Britain. Through their research on how to better manage convoy and antisubmarine operations, they also played a major role in winning the Battle of the North Atlantic. Similar efforts assisted the Island Campaign in the Pacific.

¹Selected Reference 7 provides an entertaining history of operations research that traces its roots as far back as 1564 by describing a considerable number of scientific contributions from 1564 to 2004 that influenced the subsequent development of OR.
Also see Selected References 1 and 6 for further details about this history.

When the war ended, the success of OR in the war effort spurred interest in applying OR outside the military as well. As the industrial boom following the war was running its course, the problems caused by the increasing complexity and specialization in organizations were again coming to the forefront. It was becoming apparent to a growing number of people, including business consultants who had served on or with the OR teams during the war, that these were basically the same problems that had been faced by the military but in a different context. By the early 1950s, these individuals had introduced the use of OR to a variety of organizations in business, industry, and government. The rapid spread of OR soon followed. (Selected Reference 6 recounts the development of the field of operations research by describing the lives and contributions of 43 OR pioneers.) At least two other factors that played a key role in the rapid growth of OR during this period can be identified. One was the substantial progress that was made early in improving the techniques of OR. After the war, many of the scientists who had participated on OR teams or who had heard about this work were motivated to pursue research relevant to the field; important advancements in the state of the art resulted. A prime example is the simplex method for solving linear programming problems, developed by George Dantzig in 1947. Many of the standard tools of OR, such as linear programming, dynamic programming, queueing theory, and inventory theory, were relatively well developed before the end of the 1950s. A second factor that gave great impetus to the growth of the field was the onslaught of the computer revolution. A large amount of computation is usually required to deal most effectively with the complex problems typically considered by OR. Doing this by hand would often be out of the question.
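To make the scale of such computations concrete, consider a tiny linear program of the kind studied in Chap. 3: maximize Z = 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, and x, y >= 0. The simplex method rests on the fact that an optimal solution lies at a corner point of the feasible region, so for a two-variable problem the corner points can simply be enumerated by brute force (a hypothetical sketch in Python; it is not part of the book's OR Courseware, where a real solver would be used instead):

```python
# Tiny linear program:
#   maximize   Z = 3*x + 5*y
#   subject to x <= 4,  2*y <= 12,  3*x + 2*y <= 18,  x >= 0, y >= 0
# An optimal solution lies at a corner point of the feasible region, so for
# a two-variable problem we can enumerate every corner point and keep the best.
from itertools import combinations

# Each constraint written as a*x + b*y <= c (nonnegativity included).
constraints = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the boundary lines a*x + b*y = c, if it is unique."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                      # parallel boundary lines
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # (2.0, 6.0) 36.0
```

Even this brute-force check requires dozens of arithmetic operations for just two variables and five constraints; a problem with thousands of variables has astronomically many corner points, which is why efficient algorithms such as the simplex method, run on computers, became indispensable.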
Therefore, the development of electronic digital computers, with their ability to perform arithmetic calculations millions of times faster than a human being can, was a tremendous boon to OR. A further boost came in the 1980s with the development of increasingly powerful personal computers accompanied by good software packages for doing OR. This brought the use of OR within the easy reach of much larger numbers of people, and this progress further accelerated in the 1990s and into the 21st century. For example, the widely used spreadsheet package, Microsoft Excel, provides a Solver that will solve a variety of OR problems. Today, literally millions of individuals have ready access to OR software. Consequently, a whole range of computers from mainframes to laptops now are being routinely used to solve OR problems, including some of enormous size.

■ 1.2 THE NATURE OF OPERATIONS RESEARCH

As its name implies, operations research involves "research on operations." Thus, operations research is applied to problems that concern how to conduct and coordinate the operations (i.e., the activities) within an organization. The nature of the organization is essentially immaterial, and in fact, OR has been applied extensively in such diverse areas as manufacturing, transportation, construction, telecommunications, financial planning, health care, the military, and public services, to name just a few. Therefore, the breadth of application is unusually wide. The research part of the name means that operations research uses an approach that resembles the way research is conducted in established scientific fields. To a considerable extent, the scientific method is used to investigate the problem of concern. (In fact, the term management science sometimes is used as a synonym for operations research.) In particular, the process begins by carefully observing and formulating the problem, including gathering all relevant data.
The next step is to construct a scientific (typically mathematical) model that attempts to abstract the essence of the real problem. It is then hypothesized that this model is a sufficiently precise representation of the essential features of the situation that the conclusions (solutions) obtained from the model are also valid for the real problem. Next, suitable experiments are conducted to test this hypothesis, modify it as needed, and eventually verify some form of the hypothesis. (This step is frequently referred to as model validation.) Thus, in a certain sense, operations research involves creative scientific research into the fundamental properties of operations. However, there is more to it than this. Specifically, OR is also concerned with the practical management of the organization. Therefore, to be successful, OR must also provide positive, understandable conclusions to the decision maker(s) when they are needed. Still another characteristic of OR is its broad viewpoint. As implied in the preceding section, OR adopts an organizational point of view. Thus, it attempts to resolve the conflicts of interest among the components of the organization in a way that is best for the organization as a whole. This does not imply that the study of each problem must give explicit consideration to all aspects of the organization; rather, the objectives being sought must be consistent with those of the overall organization. An additional characteristic is that OR frequently attempts to search for a best solution (referred to as an optimal solution) for the model that represents the problem under consideration. (We say a best instead of the best solution because multiple solutions may be tied as best.) Rather than simply improving the status quo, the goal is to identify a best possible course of action.
Although it must be interpreted carefully in terms of the practical needs of management, this "search for optimality" is an important theme in OR. All these characteristics lead quite naturally to still another one. It is evident that no single individual should be expected to be an expert on all the many aspects of OR work or the problems typically considered; this would require a group of individuals having diverse backgrounds and skills. Therefore, when a full-fledged OR study of a new problem is undertaken, it is usually necessary to use a team approach. Such an OR team typically needs to include individuals who collectively are highly trained in mathematics, statistics and probability theory, economics, business administration, computer science, engineering and the physical sciences, the behavioral sciences, and the special techniques of OR. The team also needs to have the necessary experience and variety of skills to give appropriate consideration to the many ramifications of the problem throughout the organization.

■ 1.3 THE RISE OF ANALYTICS TOGETHER WITH OPERATIONS RESEARCH

There has been great buzz throughout the business world in recent years about something called analytics (or business analytics) and the importance of incorporating analytics into managerial decision making. The primary impetus for this buzz was a series of articles and books by Thomas H. Davenport, a renowned thought-leader who has helped hundreds of companies worldwide to revitalize their business practices. He initially introduced the concept of analytics in the January 2006 issue of the Harvard Business Review with an article, "Competing on Analytics," that now has been named as one of the ten must-read articles in that magazine's 90-year history. This article soon was followed by two best-selling books entitled Competing on Analytics: The New Science of Winning and Analytics at Work: Smarter Decisions, Better Results.
(See Selected References 2 and 3 at the end of the chapter for the citations.) So what is analytics? The short (but oversimplified) answer is that it is basically operations research by another name. However, there are some differences in their relative emphases. Furthermore, the strengths of the analytics approach are likely to be increasingly incorporated into the OR approach as time goes on, so it will be instructive to describe analytics a little further. Analytics fully recognizes that we have entered into the era of big data where massive amounts of data now are commonly available to many businesses and organizations to help guide managerial decision making. The current data surge is coming from sophisticated computer tracking of shipments, sales, suppliers, and customers, as well as email, Web traffic, and social networks. As indicated by the following definition, a primary focus of analytics is on how to make the most effective use of all these data. Analytics is the scientific process of transforming data into insight for making better decisions. The application of analytics can be divided into three overlapping categories. One of these is descriptive analytics, which involves using innovative techniques to locate the relevant data and identify the interesting patterns in order to better describe and understand what is going on now. One important technique for doing this is called data mining (as described in Selected Reference 8). Some analytics professionals who specialize in descriptive analytics are called data scientists. A second (and more advanced) category is predictive analytics, which involves using the data to predict what will happen in the future. Statistical forecasting methods, such as those described in Chap. 27 (on the book's website), are prominently used here. Simulation (Chap. 20) also can be useful.
The final (and most advanced) category is prescriptive analytics, which involves using the data to prescribe what should be done in the future. The powerful optimization techniques of operations research described in many of the chapters of this book generally are what are used here. Operations research analysts also often deal with all three of these categories, but not very much with the first one, somewhat more with the second one, and then heavily with the last one. Thus, OR can be thought of as focusing mainly on advanced analytics—predictive and prescriptive activities—whereas analytics professionals might get more involved than OR analysts with the entire business process, including what precedes the first category (identifying a need) and what follows the last category (implementation). Looking to the future, the two approaches should tend to merge over time. Because the name analytics (or business analytics) is more meaningful to most people than the term operations research, we might find that analytics may eventually replace operations research as the common name for this integrated discipline. Although analytics was initially introduced as a key tool for mainly business organizations, it also can be a powerful tool in other contexts. As one example, analytics (together with OR) played a key role in the 2012 presidential campaign in the United States. The Obama campaign management hired a multi-disciplinary team of statisticians, predictive modelers, data-mining experts, mathematicians, software programmers, and OR analysts. It eventually built an entire analytics department five times as large as that of its 2008 campaign. With all this analytics input, the Obama team launched a full-scale and all-front campaign, leveraging massive amounts of data from various sources to directly micro-target potential voters and donors with tailored messages.
The election had been expected to be a very close one, but the Obama "ground game" that had been propelled by descriptive and predictive analytics was given much of the credit for the clear-cut Obama win. Based on this experience, both political parties undoubtedly will make extensive use of analytics in the future in major political campaigns. Another famous application of analytics is described in the book Moneyball (cited in Selected Reference 10) and a subsequent 2011 movie with the same name that is based on this book. They tell the true story of how the Oakland Athletics baseball team achieved great success, despite having one of the smallest budgets in the major leagues, by using various kinds of nontraditional data (referred to as sabermetrics) to better evaluate the potential of players available through a trade or the draft. Although these evaluations often flew in the face of conventional baseball wisdom, both descriptive analytics and predictive analytics were being used to identify overlooked players who could greatly help the team. After witnessing the impact of analytics, many major league baseball teams now have hired analytics professionals. Some other kinds of sports teams also are beginning to use analytics. (Selected References 4 and 5 have 17 articles describing the application of analytics in various sports.) These and numerous other success stories about the power of analytics and OR together should lead to their ever-increasing use in the future. Meanwhile, OR already has had a powerful impact, as described further in the next section.

■ 1.4 THE IMPACT OF OPERATIONS RESEARCH

Operations research has had an impressive impact on improving the efficiency of numerous organizations around the world. In the process, OR has made a significant contribution to increasing the productivity of the economies of various countries.
There now are a few dozen member countries in the International Federation of Operational Research Societies (IFORS), with each country having a national OR society. Both Europe and Asia have federations of OR societies to coordinate holding international conferences and publishing international journals in those continents. In addition, the Institute for Operations Research and the Management Sciences (INFORMS) is an international OR society that is headquartered in the United States. Just as in many other developed countries, OR is an important profession in the United States. According to projections from the U.S. Bureau of Labor Statistics for the year 2013, there are approximately 65,000 individuals working as operations research analysts in the United States with an average salary of about $79,000. Because of the rapid rise of analytics described in the preceding section, INFORMS has embraced analytics as an approach to decision making that largely overlaps and further enriches the OR approach. Therefore, this leading OR society now includes an annual Conference on Business Analytics and Operations Research among its major conferences. It also provides a Certified Analytics Professional credential for those individuals who satisfy certain criteria and pass an examination. In addition, INFORMS publishes many of the leading journals in the field, including one called Analytics; another journal, called Interfaces, regularly publishes articles describing major OR studies and the impact they had on their organizations. To give you a better notion of the wide applicability of OR, we list some actual applications in Table 1.1 that have been described in Interfaces. Note the diversity of organizations and applications in the first two columns. The third column identifies the section where an "application vignette" devotes several paragraphs to describing the application and also references an article that provides full details.
(You can see the first of these application vignettes in this section.) The last column indicates that these applications typically resulted in annual savings in the many millions of dollars. Furthermore, additional benefits not recorded in the table (e.g., improved service to customers and better managerial control) sometimes were considered to be even more important than these financial benefits. (You will have an opportunity to investigate these less tangible benefits further in Probs. 1.3-1, 1.3-2, and 1.3-3.) A link to the articles that describe these applications in detail is included on our website, www.mhhe.com/hillier. Although most routine OR studies provide considerably more modest benefits than the applications summarized in Table 1.1, the figures in the rightmost column of this table do accurately reflect the dramatic impact that large, well-designed OR studies occasionally can have.

An Application Vignette

FedEx Corporation is the world's largest courier delivery services company. Every working day, it delivers many millions of documents, packages, and other items throughout the United States and hundreds of countries and territories around the world. In some cases, these shipments can be guaranteed overnight delivery by 10:30 A.M. the next morning. The logistical challenges involved in providing this service are staggering. These millions of daily shipments must be individually sorted and routed to the correct general location (usually by aircraft) and then delivered to the exact destination (usually by motorized vehicle) in an amazingly short period of time. How is all this possible? Operations research (OR) is the technological engine that drives this company. Ever since its founding in 1973, OR has helped make its major business decisions, including equipment investment, route structure, scheduling, finances, and location of facilities.
After OR was credited with literally saving the company during its early years, it became the custom to have OR represented at the weekly senior management meetings and, indeed, several of the senior corporate vice presidents have come up from the outstanding FedEx OR group. FedEx has come to be acknowledged as a world-class company. It routinely ranks among the top companies on Fortune Magazine's annual listing of the "World's Most Admired Companies," and this same magazine named the firm as one of the top 100 companies to work for in 2013. It also was the first winner (in 1991) of the prestigious prize now known as the INFORMS Prize, which is awarded annually for the effective and repeated integration of OR into organizational decision making in pioneering, varied, novel, and lasting ways. The company's great dependence on OR has continued to the present day.

Source: R. O. Mason, J. L. McKenney, W. Carlson, and D. Copeland, "Absolutely, Positively Operations Research: The Federal Express Story," Interfaces, 27(2): 17–36, March–April 1997. (A link to this article is provided on our website, www.mhhe.com/hillier.)

■ TABLE 1.1 Applications of operations research to be described in application vignettes

Organization | Area of Application | Section | Annual Savings
Federal Express | Logistical planning of shipments | 1.4 | Not estimated
Continental Airlines | Reassign crews to flights when schedule disruptions occur | 2.2 | $40 million
Swift & Company | Improve sales and manufacturing performance | 3.1 | $12 million
Memorial Sloan-Kettering Cancer Center | Design of radiation therapy | 3.4 | $459 million
Welch's | Optimize use and movement of raw materials | 3.5 | $150,000
INDEVAL | Settle all securities transactions in Mexico | 3.6 | $150 million
Samsung Electronics | Reduce manufacturing times and inventory levels | 4.3 | $200 million more revenue
Pacific Lumber Company | Long-term forest ecosystem management | 7.2 | $398 million NPV
Procter & Gamble | Redesign the production and distribution system | 9.1 | $200 million
Canadian Pacific Railway | Plan routing of rail freight | 10.3 | $100 million
Hewlett-Packard | Product portfolio management | 10.5 | $180 million
Norwegian companies | Maximize flow of natural gas through offshore pipeline network | 10.5 | $140 million
United Airlines | Reassign airplanes to flights when disruptions occur | 10.6 | Not estimated
U.S. Military | Logistical planning of Operations Desert Storm | 11.3 | Not estimated
MISO | Administer the transmission of electricity in 13 states | 12.2 | $700 million
Netherlands Railways | Optimize operation of a railway network | 12.2 | $105 million
Taco Bell | Plan employee work schedules at restaurants | 12.5 | $13 million
Waste Management | Develop a route-management system for trash collection and disposal | 12.7 | $100 million
Bank Hapoalim Group | Develop a decision-support system for investment advisors | 13.1 | $31 million more revenue
DHL | Optimize the use of marketing resources | 13.10 | $22 million
Sears | Vehicle routing and scheduling for home services and deliveries | 14.2 | $42 million
Intel Corporation | Design and schedule the product line | 14.4 | Not estimated
Conoco-Phillips | Evaluate petroleum exploration projects | 16.2 | Not estimated
Workers' Compensation Board | Manage high-risk disability claims and rehabilitation | 16.3 | $4 million
Westinghouse | Evaluate research-and-development projects | 16.4 | Not estimated
KeyCorp | Improve efficiency of bank teller service | 17.6 | $20 million
General Motors | Improve efficiency of production lines | 17.9 | $90 million
Deere & Company | Management of inventories throughout a supply chain | 18.5 | $1 billion less inventory
Time Inc. | Management of distribution channels for magazines | 18.7 | $3.5 million more profit
InterContinental Hotels | Revenue management | 18.8 | $400 million more revenue
Bank One Corporation | Management of credit lines and interest rates for credit cards | 19.2 | $75 million more profit
Merrill Lynch | Pricing analysis for providing financial services | 20.2 | $50 million more revenue
Sasol | Improve the efficiency of its production processes | 20.5 | $23 million
FAA | Manage air traffic flows in severe weather | 20.5 | $200 million

■ 1.5 ALGORITHMS AND OR COURSEWARE

An important part of this book is the presentation of the major algorithms (systematic solution procedures) of OR for solving certain types of problems. Some of these algorithms are amazingly efficient and are routinely used on problems involving hundreds or thousands of variables. You will be introduced to how these algorithms work and what makes them so efficient. You then will use these algorithms to solve a variety of problems on a computer. The OR Courseware contained on the book's website (www.mhhe.com/hillier) will be a key tool for doing all this. One special feature in your OR Courseware is a program called OR Tutor. This program is intended to be your personal tutor to help you learn the algorithms. It consists of many demonstration examples that display and explain the algorithms in action. These "demos" supplement the examples in the book. In addition, your OR Courseware includes a special software package called Interactive Operations Research Tutorial, or IOR Tutorial for short. Implemented in Java, this innovative package is designed specifically to enhance the learning experience of students using this book. IOR Tutorial includes many interactive procedures for executing the algorithms interactively in a convenient format. The computer does all the routine calculations while you focus on learning and executing the logic of the algorithm.
You should find these interactive procedures a very efficient and enlightening way of doing many of your homework problems. IOR Tutorial also includes a number of other helpful procedures, including some automatic procedures for executing algorithms automatically and several procedures that provide graphical displays of how the solution provided by an algorithm varies with the data of the problem. In practice, the algorithms normally are executed by commercial software packages. We feel that it is important to acquaint students with the nature of these packages that they will be using after graduation. Therefore, your OR Courseware includes a wealth of material to introduce you to four particularly popular software packages described next. Together, these packages will enable you to solve nearly all the OR models encountered in this book very efficiently. We have added our own automatic procedures to IOR Tutorial in a few cases where these packages are not applicable. A very popular approach now is to use today's premier spreadsheet package, Microsoft Excel, to formulate small OR models in a spreadsheet format. Included with standard Excel is an add-in, called Solver (a product of Frontline Systems, Inc.), that can be used to solve many of these models. Your OR Courseware includes separate Excel files for nearly every chapter in this book. Each time a chapter presents an example that can be solved using Excel, the complete spreadsheet formulation and solution is given in that chapter's Excel files. For many of the models in the book, an Excel template also is provided that already includes all the equations necessary to solve the model. New with this edition of the textbook is a powerful software package from Frontline Systems called Analytic Solver Platform for Education (ASPE), which is fully compatible with Excel and Excel's Solver.
The recently released Analytic Solver Platform combines all the capabilities of three other popular products from Frontline Systems: (1) Premium Solver Platform (a powerful spreadsheet optimizer that includes five solvers for linear, mixed-integer, nonlinear, non-smooth, and global optimization), (2) Risk Solver Pro (for simulation and risk analysis), and (3) XLMiner (an Excel-based tool for data mining and forecasting). It also has the ability to solve optimization models involving uncertainty and recourse decisions, perform sensitivity analysis, and construct decision trees. It even has an ultra-high-performance linear mixed-integer optimizer. The student version of Analytic Solver Platform retains all these capabilities when dealing with smaller problems. Among the special features of ASPE that are highlighted in this book are a greatly enhanced version of the basic Solver included with Excel (as described in Sec. 3.5), the ability to build decision trees within Excel (as described in Sec. 16.5), and tools to build simulation models within Excel (as described in Sec. 20.6). After many years, LINDO (and its companion modeling language LINGO) continues to be a popular OR software package. Student versions of LINDO and LINGO now can be downloaded free from the Web at www.lindo.com. This student version also is provided in your OR Courseware. As for Excel, each time an example can be solved with this package, all the details are given in a LINGO/LINDO file for that chapter in your OR Courseware. When dealing with large and challenging OR problems, it is common to also use a modeling system to efficiently formulate the mathematical model and enter it into the computer. MPL is a user-friendly modeling system that includes a considerable number of elite solvers for solving such problems very efficiently. These solvers include CPLEX, GUROBI, CoinMP, and SULUM for linear and integer programming (Chaps. 3-10 and 12), as well as CONOPT for convex programming (part of Chap. 
13) and LGO for global optimization (Sec. 13.10), among others. A student version of MPL, along with the student version of its solvers, is available free by downloading it from the Web. For your convenience, we also have included this student version (including the six solvers just mentioned) in your OR Courseware. Once again, all the examples that can be solved with this package are detailed in MPL/Solvers files for the corresponding chapters in your OR Courseware. Furthermore, academic users can apply to receive full-sized versions of MPL, CPLEX, and GUROBI by going to their respective websites.2 This means that any academic users (professors or students) now can obtain professional versions of MPL with CPLEX and GUROBI for use in their coursework. We will further describe these four software packages and how to use them later (especially near the end of Chaps. 3 and 4). Appendix 1 also provides documentation for the OR Courseware, including OR Tutor and IOR Tutorial. To alert you to relevant material in OR Courseware, the end of each chapter from Chap. 3 onward has a list entitled Learning Aids for This Chapter on our Website. As explained at the beginning of the problem section for each of these chapters, symbols also are placed to the left of each problem number or part where any of this material (including demonstration examples and interactive procedures) can be helpful. Another learning aid provided on our website is a set of Solved Examples for each chapter (from Chap. 3 onward). These complete examples supplement the examples in the book for your use as needed, but without interrupting the flow of the material on those many occasions when you don't need to see an additional example.

2 MPL: http://www.maximalsoftware.com/academic; CPLEX: http://www-03.ibm.com/ibm/university/academic/pub/page/ban_ilog_programming; GUROBI: http://www.gurobi.com/products/licensing-and-pricing/academic-licensing
You also might find these supplementary examples helpful when preparing for an examination. We always will mention whenever a supplementary example on the current topic is included in the Solved Examples section of the book's website. To make sure you don't overlook this mention, we will boldface the words additional example (or something similar) each time. The website also includes a glossary for each chapter.

■ SELECTED REFERENCES

1. Assad, A. A., and S. I. Gass (eds.): Profiles in Operations Research: Pioneers and Innovators, Springer, New York, 2011.
2. Davenport, T. H., and J. G. Harris: Competing on Analytics: The New Science of Winning, Harvard Business School Press, Cambridge, MA, 2007.
3. Davenport, T. H., J. G. Harris, and R. Morison: Analytics at Work: Smarter Decisions, Better Results, Harvard Business School Press, Cambridge, MA, 2010.
4. Fry, M. J., and J. W. Ohlmann (eds.): Special Issue on Analytics in Sports, Part I: General Sports Applications, Interfaces, 42(2), March–April 2012.
5. Fry, M. J., and J. W. Ohlmann (eds.): Special Issue on Analytics in Sports, Part II: Sports Scheduling Applications, Interfaces, 42(3), May–June 2012.
6. Gass, S. I.: "Model World: On the Evolution of Operations Research," Interfaces, 41(4): 389–393, July–August 2011.
7. Gass, S. I., and A. A. Assad: An Annotated Timeline of Operations Research: An Informal History, Kluwer Academic Publishers (now Springer), Boston, 2005.
8. Gass, S. I., and M. Fu (eds.): Encyclopedia of Operations Research and Management Science, 3rd ed., Springer, New York, 2014.
9. Han, J., M. Kamber, and J. Pei: Data Mining: Concepts and Techniques, 3rd ed., Elsevier/Morgan Kaufmann, Waltham, MA, 2011.
10. Lewis, M.: Moneyball: The Art of Winning an Unfair Game, W. W. Norton & Company, New York, 2003.
11. Liberatore, M. J., and W. Luo: "The Analytics Movement: Implications for Operations Research," Interfaces, 40(4): 313–324, July–August 2010.
12. Saxena, R., and A.
Srinivasan: Business Analytics: A Practitioner's Guide, Springer, New York, 2013.
13. Wein, L. M. (ed.): "50th Anniversary Issue," Operations Research (a special issue featuring personalized accounts of some of the key early theoretical and practical developments in the field), 50(1), January–February 2002.

■ PROBLEMS

1.3-1. Select one of the applications of operations research listed in Table 1.1. Read the article that is referenced in the application vignette presented in the section shown in the third column. (A link to all these articles is provided on our website, www.mhhe.com/hillier.) Write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.

1.3-2. Select three of the applications of operations research listed in Table 1.1. For each one, read the article that is referenced in the application vignette presented in the section shown in the third column. (A link to all these articles is provided on our website, www.mhhe.com/hillier.) For each one, write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.

1.3-3. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 1.4. List the various financial and nonfinancial benefits that resulted from this study.

CHAPTER 2
Overview of the Operations Research Modeling Approach

The bulk of this book is devoted to the mathematical methods of operations research (OR). This is quite appropriate because these quantitative techniques form the main part of what is known about OR. However, it does not imply that practical OR studies are primarily mathematical exercises. As a matter of fact, the mathematical analysis often represents only a relatively small part of the total effort required. The purpose of this chapter is to place things into better perspective by describing all the major phases of a typical large OR study.
One way of summarizing the usual (overlapping) phases of an OR study is the following:

1. Define the problem of interest and gather relevant data.
2. Formulate a mathematical model to represent the problem.
3. Develop a computer-based procedure for deriving solutions to the problem from the model.
4. Test the model and refine it as needed.
5. Prepare for the ongoing application of the model as prescribed by management.
6. Implement.

Each of these phases will be discussed in turn in the following sections. The selected references at the end of the chapter include some award-winning OR studies that provide excellent examples of how to execute these phases well. We will intersperse snippets from some of these examples throughout the chapter. If you decide that you would like to learn more about these award-winning applications of operations research, a link to the articles that describe these OR studies in detail is included on the book's website, www.mhhe.com/hillier.

■ 2.1 DEFINING THE PROBLEM AND GATHERING DATA

In contrast to textbook examples, most practical problems encountered by OR teams are initially described to them in a vague, imprecise way. Therefore, the first order of business is to study the relevant system and develop a well-defined statement of the problem to be considered. This includes determining such things as the appropriate objectives, constraints on what can be done, interrelationships between the area to be studied and other areas of the organization, possible alternative courses of action, time limits for making a decision, and so on. This process of problem definition is a crucial one because it greatly affects how relevant the conclusions of the study will be. It is difficult to extract a "right" answer from the "wrong" problem! The first thing to recognize is that an OR team normally works in an advisory capacity.
The team members are not just given a problem and told to solve it however they see fit. Instead, they advise management (often one key decision maker). The team performs a detailed technical analysis of the problem and then presents recommendations to management. Frequently, the report to management will identify a number of alternatives that are particularly attractive under different assumptions or over a different range of values of some policy parameter that can be evaluated only by management (e.g., the trade-off between cost and benefits). Management evaluates the study and its recommendations, takes into account a variety of intangible factors, and makes the final decision based on its best judgment. Consequently, it is vital for the OR team to get on the same wavelength as management, including identifying the “right” problem from management’s viewpoint, and to build the support of management for the course that the study is taking. Ascertaining the appropriate objectives is a very important aspect of problem definition. To do this, it is necessary first to identify the member (or members) of management who actually will be making the decisions concerning the system under study and then to probe into this individual’s thinking regarding the pertinent objectives. (Involving the decision maker from the outset also is essential to build her or his support for the implementation of the study.) By its nature, OR is concerned with the welfare of the entire organization rather than that of only certain of its components. An OR study seeks solutions that are optimal for the overall organization rather than suboptimal solutions that are best for only one component. Therefore, the objectives that are formulated ideally should be those of the entire organization. However, this is not always convenient. 
Many problems primarily concern only a portion of the organization, so the analysis would become unwieldy if the stated objectives were too general and if explicit consideration were given to all side effects on the rest of the organization. Instead, the objectives used in the study should be as specific as they can be while still encompassing the main goals of the decision maker and maintaining a reasonable degree of consistency with the higher-level objectives of the organization. For profit-making organizations, one possible approach to circumventing the problem of suboptimization is to use long-run profit maximization (considering the time value of money) as the sole objective. The adjective long-run indicates that this objective provides the flexibility to consider activities that do not translate into profits immediately (e.g., research and development projects) but need to do so eventually in order to be worthwhile. This approach has considerable merit. This objective is specific enough to be used conveniently, and yet it seems to be broad enough to encompass the basic goal of profit-making organizations. In fact, some people believe that all other legitimate objectives can be translated into this one. However, in actual practice, many profit-making organizations do not use this approach. A number of studies of U.S. corporations have found that management tends to adopt the goal of satisfactory profits, combined with other objectives, instead of focusing on long-run profit maximization. Typically, some of these other objectives might be to maintain stable profits, increase (or maintain) one's share of the market, provide for product diversification, maintain stable prices, improve worker morale, maintain family control of the business, and increase company prestige.
Fulfilling these objectives might achieve long-run profit maximization, but the relationship may be sufficiently obscure that it may not be convenient to incorporate them all into this one objective. Furthermore, there are additional considerations involving social responsibilities that are distinct from the profit motive. The five parties generally affected by a business firm located in a single country are (1) the owners (stockholders, etc.), who desire profits (dividends, stock appreciation, and so on); (2) the employees, who desire steady employment at reasonable wages; (3) the customers, who desire a reliable product at a reasonable price; (4) the suppliers, who desire integrity and a reasonable selling price for their goods; and (5) the government and hence the nation, which desire payment of fair taxes and consideration of the national interest. All five parties make essential contributions to the firm, and the firm should not be viewed as the exclusive servant of any one party for the exploitation of others. By the same token, international corporations acquire additional obligations to follow socially responsible practices. Therefore, while granting that management's prime responsibility is to make profits (which ultimately benefits all five parties), we note that its broader social responsibilities also must be recognized. OR teams typically spend a surprisingly large amount of time gathering relevant data about the problem. Much data usually are needed both to gain an accurate understanding of the problem and to provide the needed input for the mathematical model being formulated in the next phase of study. Frequently, much of the needed data will not be available when the study begins, either because the information never has been kept or because what was kept is outdated or in the wrong form.
Therefore, it often is necessary to install a new computer-based management information system to collect the necessary data on an ongoing basis and in the needed form. The OR team normally needs to enlist the assistance of various other key individuals in the organization, including information technology (IT) specialists, to track down all the vital data. Even with this effort, much of the data may be quite “soft,” i.e., rough estimates based only on educated guesses. Typically, an OR team will spend considerable time trying to improve the precision of the data and then will make do with the best that can be obtained. With the widespread use of databases and the explosive growth in their sizes in recent years, OR teams now frequently find that their biggest data problem is not that too little is available but that there is too much data. There may be thousands of sources of data, and the total amount of data may be measured in gigabytes or even terabytes. In this environment, locating the particularly relevant data and identifying the interesting patterns in these data can become an overwhelming task. One of the newer tools of OR teams is a technique called data mining that addresses this problem. Data mining methods search large databases for interesting patterns that may lead to useful decisions. (Selected Reference 6 at the end of the chapter provides further background about data mining.) Example. In the late 1990s, full-service financial services firms came under assault from electronic brokerage firms offering extremely low trading costs. Merrill Lynch responded by conducting a major OR study that led to a complete overhaul in how it charged for its services, ranging from a full-service asset-based option (charge a fixed percentage of the value of the assets held rather than for individual trades) to a low-cost option for clients wishing to invest online directly. Data collection and processing played a key role in the study. 
To analyze the impact of individual client behavior in response to different options, the team needed to assemble a comprehensive 200 gigabyte client database involving 5 million clients, 10 million accounts, 100 million trade records, and 250 million ledger records. This required merging, reconciling, filtering, and cleaning data from numerous production databases. The adoption of the recommendations of the study led to a one-year increase of nearly $50 billion in client assets held and nearly $80 million more revenue. (Selected Reference A2 describes this study in detail. Also see Selected References A1, A10, and A14 for other examples where data collection and processing played a particularly key role in an award-winning OR study.)

■ 2.2 FORMULATING A MATHEMATICAL MODEL

After the decision-maker's problem is defined, the next phase is to reformulate this problem in a form that is convenient for analysis. The conventional OR approach for doing this is to construct a mathematical model that represents the essence of the problem. Before discussing how to formulate such a model, we first explore the nature of models in general and of mathematical models in particular. Models, or idealized representations, are an integral part of everyday life. Common examples include model airplanes, portraits, globes, and so on. Similarly, models play an important role in science and business, as illustrated by models of the atom, models of genetic structure, mathematical equations describing physical laws of motion or chemical reactions, graphs, organizational charts, and industrial accounting systems. Such models are invaluable for abstracting the essence of the subject of inquiry, showing interrelationships, and facilitating analysis. Mathematical models are also idealized representations, but they are expressed in terms of mathematical symbols and expressions. Such laws of physics as F = ma and E = mc² are familiar examples.
Similarly, the mathematical model of a business problem is the system of equations and related mathematical expressions that describe the essence of the problem. Thus, if there are n related quantifiable decisions to be made, they are represented as decision variables (say, x1, x2, . . . , xn) whose respective values are to be determined. The appropriate measure of performance (e.g., profit) is then expressed as a mathematical function of these decision variables (for example, P = 3x1 + 2x2 + . . . + 5xn). This function is called the objective function. Any restrictions on the values that can be assigned to these decision variables are also expressed mathematically, typically by means of inequalities or equations (for example, x1 + 3x1x2 + 2x2 ≤ 10). Such mathematical expressions for the restrictions often are called constraints. The constants (namely, the coefficients and right-hand sides) in the constraints and the objective function are called the parameters of the model. The mathematical model might then say that the problem is to choose the values of the decision variables so as to maximize the objective function, subject to the specified constraints. Such a model, and minor variations of it, typifies the models used in OR. Determining the appropriate values to assign to the parameters of the model (one value per parameter) is both a critical and a challenging part of the model-building process. In contrast to textbook problems where the numbers are given to you, determining parameter values for real problems requires gathering relevant data. As discussed in the preceding section, gathering accurate data frequently is difficult. Therefore, the value assigned to a parameter often is, of necessity, only a rough estimate. Because of the uncertainty about the true value of the parameter, it is important to analyze how the solution derived from the model would change (if at all) if the value assigned to the parameter were changed to other plausible values.
This process is referred to as sensitivity analysis, as discussed further in the next section (and much of Chap. 7). Although we refer to "the" mathematical model of a business problem, real problems normally don't have just a single "right" model. Section 2.4 will describe how the process of testing a model typically leads to a succession of models that provide better and better representations of the problem. It is even possible that two or more completely different types of models may be developed to help analyze the same problem. You will see numerous examples of mathematical models throughout the remainder of this book. One particularly important type that is studied in the next several chapters is the linear programming model, where the mathematical functions appearing in both the objective function and the constraints are all linear functions. In Chap. 3, specific linear programming models are constructed to fit such diverse problems as determining (1) the mix of products that maximizes profit, (2) the design of radiation therapy that effectively attacks a tumor while minimizing the damage to nearby healthy tissue, (3) the allocation of acreage to crops that maximizes total net return, and (4) the combination of pollution abatement methods that achieves air quality standards at minimum cost. Mathematical models have many advantages over a verbal description of the problem. One advantage is that a mathematical model describes a problem much more concisely. This tends to make the overall structure of the problem more comprehensible, and it helps to reveal important cause-and-effect relationships. In this way, it indicates more clearly what additional data are relevant to the analysis. It also facilitates dealing with the problem in its entirety and considering all its interrelationships simultaneously.
Finally, a mathematical model forms a bridge to the use of high-powered mathematical techniques and computers to analyze the problem. Indeed, packaged software for both personal computers and mainframe computers has become widely available for solving many mathematical models. However, there are pitfalls to be avoided when you use mathematical models. Such a model is necessarily an abstract idealization of the problem, so approximations and simplifying assumptions generally are required if the model is to be tractable (capable of being solved). Therefore, care must be taken to ensure that the model remains a valid representation of the problem. The proper criterion for judging the validity of a model is whether the model predicts the relative effects of the alternative courses of action with sufficient accuracy to permit a sound decision. Consequently, it is not necessary to include unimportant details or factors that have approximately the same effect for all the alternative courses of action considered. It is not even necessary that the absolute magnitude of the measure of performance be approximately correct for the various alternatives, provided that their relative values (i.e., the differences between their values) are sufficiently precise. Thus, all that is required is that there be a high correlation between the prediction by the model and what would actually happen in the real world. To ascertain whether this requirement is satisfied, it is important to do considerable testing and consequent modifying of the model, which will be the subject of Sec. 2.4. Although this testing phase is placed later in the chapter, much of this model validation work actually is conducted during the model-building phase of the study to help guide the construction of the mathematical model. 
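To make the structure described in Sec. 2.2 concrete, here is a minimal sketch in Python of a two-variable linear programming model: an objective function, constraints, and parameters, with all numbers invented for illustration. Because an optimal solution of a linear program lies at a corner point of the feasible region, this tiny example simply enumerates the corners; a real study would hand the model to a solver such as those named in the introduction.

```python
from itertools import combinations

# A hypothetical two-variable linear programming model (all numbers invented):
#   maximize    P = 3*x1 + 2*x2          (the objective function)
#   subject to  x1 + 2*x2 <= 10          (a resource constraint)
#               3*x1 + x2 <= 15          (another resource constraint)
#               x1 >= 0,  x2 >= 0        (nonnegativity)
# Each constraint is stored as (a, b, r), meaning a*x1 + b*x2 <= r;
# the constants a, b, and r are the model's parameters.
constraints = [(1, 2, 10), (3, 1, 15), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner candidate: solve the 2x2 system of two constraint boundaries."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries never meet
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(point):
    """Check that a point satisfies every constraint (with a tiny tolerance)."""
    x1, x2 = point
    return all(a * x1 + b * x2 <= r + 1e-9 for a, b, r in constraints)

# Enumerate feasible corner points and pick the one maximizing the objective.
corners = [p for c, d in combinations(constraints, 2)
           if (p := intersect(c, d)) is not None and feasible(p)]
best = max(corners, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # → (4.0, 3.0) 18.0
```

Brute-force corner enumeration is practical only for toy illustrations like this one; its purpose here is just to show how decision variables, the objective function, constraints, and parameters fit together in one model.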
In developing the model, a good approach is to begin with a very simple version and then move in evolutionary fashion toward more elaborate models that more nearly reflect the complexity of the real problem. This process of model enrichment continues only as long as the model remains tractable. The basic trade-off under constant consideration is between the precision and the tractability of the model. (See Selected Reference 9 for a detailed description of this process.) A crucial step in formulating an OR model is the construction of the objective function. This requires developing a quantitative measure of performance relative to each of the decision maker's ultimate objectives that were identified while the problem was being defined. If there are multiple objectives, their respective measures commonly are then transformed and combined into a composite measure, called the overall measure of performance. This overall measure might be something tangible (e.g., profit) corresponding to a higher goal of the organization, or it might be abstract (e.g., utility). In the latter case, the task of developing this measure tends to be a complex one requiring a careful comparison of the objectives and their relative importance. After the overall measure of performance is developed, the objective function is then obtained by expressing this measure as a mathematical function of the decision variables. Alternatively, there also are methods for explicitly considering multiple objectives simultaneously, and one of these (goal programming) is discussed in the supplement to Chap. 8.

An Application Vignette

Prior to its merger with United Airlines that was completed in 2012, Continental Airlines was a major U.S. air carrier that transported passengers, cargo, and mail. It operated more than 2,000 daily departures to well over 100 domestic destinations and nearly 100 foreign destinations.
Following the merger under the name of United Airlines, the combined airline has a fleet of over 700 aircraft serving up to 370 destinations. Airlines like Continental (and now under its reincarnation as part of United Airlines) face schedule disruptions daily because of unexpected events, including inclement weather, aircraft mechanical problems, and crew unavailability. These disruptions can cause flight delays and cancellations. As a result, crews may not be in position to service their remaining scheduled flights. Airlines must reassign crews quickly to cover open flights and to return them to their original schedules in a cost-effective manner while honoring all government regulations, contractual obligations, and quality-of-life requirements. To address such problems, an OR team at Continental Airlines developed a detailed mathematical model for reassigning crews to flights as soon as such emergencies arise. Because the airline has thousands of crews and daily flights, the model needed to be huge to consider all possible pairings of crews with flights. Therefore, the model has millions of decision variables and many thousands of constraints. In its first year of use (mainly in 2001), the model was applied four times to recover from major schedule disruptions (two snowstorms, a flood, and the September 11 terrorist attacks). This led to savings of approximately $40 million. Subsequent applications extended to many daily minor disruptions as well. Although other airlines subsequently scrambled to apply operations research in a similar way, this initial advantage over other airlines in being able to recover more quickly from schedule disruptions with fewer delays and canceled flights left Continental Airlines in a relatively strong position as the airline industry struggled through a difficult period during the initial years of the 21st century. 
This initiative led to Continental winning the prestigious First Prize in the 2002 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: G. Yu, M. Argüello, C. Song, S. M. McGowan, and A. White, "A New Era for Crew Recovery at Continental Airlines," Interfaces, 33(1): 5–22, Jan.–Feb. 2003. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Example. The Netherlands government agency responsible for water control and public works, the Rijkswaterstaat, commissioned a major OR study to guide the development of a new national water management policy. The new policy saved hundreds of millions of dollars in investment expenditures and reduced agricultural damage by about $15 million per year, while decreasing thermal and algae pollution. Rather than formulating one mathematical model, this OR study developed a comprehensive, integrated system of 50 models! Furthermore, for some of the models, both simple and complex versions were developed. The simple version was used to gain basic insights, including trade-off analyses. The complex version then was used in the final rounds of the analysis or whenever greater accuracy or more detailed outputs were desired. The overall OR study directly involved over 125 person-years of effort (more than one-third in data gathering), created several dozen computer programs, and structured an enormous amount of data. (Selected Reference A8 describes this study in detail. Also see Selected References A3 and A9 for other examples where a large number of mathematical models were effectively integrated in an award-winning OR study.)

■ 2.3 DERIVING SOLUTIONS FROM THE MODEL

After a mathematical model is formulated for the problem under consideration, the next phase in an OR study is to develop a procedure (usually a computer-based procedure) for deriving solutions to the problem from this model.
You might think that this must be the major part of the study, but actually it is not in most cases. Sometimes, in fact, it is a relatively simple step, in which one of the standard algorithms (systematic solution procedures) of OR is applied on a computer by using one of a number of readily available software packages. For experienced OR practitioners, finding a solution is the fun part, whereas the real work comes in the preceding and following steps, including the postoptimality analysis discussed later in this section. Since much of this book is devoted to the subject of how to obtain solutions for various important types of mathematical models, little needs to be said about it here. However, we do need to discuss the nature of such solutions. A common theme in OR is the search for an optimal, or best, solution. Indeed, many procedures have been developed, and are presented in this book, for finding such solutions for certain kinds of problems. However, it needs to be recognized that these solutions are optimal only with respect to the model being used. Since the model necessarily is an idealized rather than an exact representation of the real problem, there cannot be any utopian guarantee that the optimal solution for the model will prove to be the best possible solution that could have been implemented for the real problem. There just are too many imponderables and uncertainties associated with real problems. However, if the model is well formulated and tested, the resulting solution should tend to be a good approximation to an ideal course of action for the real problem. Therefore, rather than be deluded into demanding the impossible, you should make the test of the practical success of an OR study hinge on whether it provides a better guide for action than can be obtained by other means.
The late Herbert Simon (an eminent management scientist and a Nobel Laureate in economics) pointed out that satisficing is much more prevalent than optimizing in actual practice. In coining the term satisficing as a combination of the words satisfactory and optimizing, Simon was describing the tendency of managers to seek a solution that is “good enough” for the problem at hand. Rather than trying to develop an overall measure of performance to optimally reconcile conflicts between various desirable objectives (including well-established criteria for judging the performance of different segments of the organization), a more pragmatic approach may be used. Goals may be set to establish minimum satisfactory levels of performance in various areas, based perhaps on past levels of performance or on what the competition is achieving. If a solution is found that enables all these goals to be met, it is likely to be adopted without further ado. Such is the nature of satisficing. The distinction between optimizing and satisficing reflects the difference between theory and the realities frequently faced in trying to implement that theory in practice. In the words of one of England’s pioneering OR leaders, Samuel Eilon, “Optimizing is the science of the ultimate; satisficing is the art of the feasible.”1 OR teams attempt to bring as much of the “science of the ultimate” as possible to the decision-making process. However, the successful team does so in full recognition of the overriding need of the decision maker to obtain a satisfactory guide for action in a reasonable period of time. Therefore, the goal of an OR study should be to conduct the study in an optimal manner, regardless of whether this involves finding an optimal solution for the model. 
Thus, in addition to pursuing the science of the ultimate, the team should also consider the cost of the study and the disadvantages of delaying its completion, and then attempt to maximize the net benefits resulting from the study. In recognition of this concept, OR teams occasionally use only heuristic procedures (i.e., intuitively designed procedures that do not guarantee an optimal solution) to find a good suboptimal solution. This is most often the case when the time or cost required to find an optimal solution for an adequate model of the problem would be very large. In recent years, great progress has been made in developing efficient and effective metaheuristics that provide both a general structure and strategy guidelines for designing a specific heuristic procedure to fit a particular kind of problem. The use of metaheuristics (the subject of Chap. 14) is continuing to grow.

The discussion thus far has implied that an OR study seeks to find only one solution, which may or may not be required to be optimal. In fact, this usually is not the case. An optimal solution for the original model may be far from ideal for the real problem, so additional analysis is needed. Therefore, postoptimality analysis (analysis done after finding an optimal solution) is a very important part of most OR studies. This analysis also is sometimes referred to as what-if analysis because it involves addressing some questions about what would happen to the optimal solution if different assumptions are made about future conditions. These questions often are raised by the managers who will be making the ultimate decisions rather than by the OR team.

1 S. Eilon, “Goals and Constraints in Decision-making,” Operational Research Quarterly, 23: 3–15, 1972. Address given at the 1971 annual conference of the Canadian Operational Research Society.
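To make the earlier point about heuristic procedures concrete, the gap between a quick heuristic and an exhaustive optimal search can be sketched on a tiny knapsack-style selection problem. The item values, weights, and capacity below are invented for illustration; they are not drawn from the text.

```python
from itertools import combinations

# Hypothetical data (for illustration only): candidate projects given as
# (value, weight) pairs, competing for a shared resource capacity.
items = [(10, 5), (6, 4), (5, 3), (4, 3)]
capacity = 10

def greedy(items, capacity):
    """Heuristic: take items in decreasing value/weight ratio while they fit.

    Fast and intuitively designed, but with no guarantee of optimality.
    """
    total_value, used = 0, 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if used + weight <= capacity:
            total_value += value
            used += weight
    return total_value

def optimal(items, capacity):
    """Exhaustive search: guaranteed optimal, but exponential in len(items)."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

print(greedy(items, capacity))   # 15 -- a good suboptimal solution, found quickly
print(optimal(items, capacity))  # 16 -- the true optimum for this little model
```

Here the greedy rule’s solution (15) falls just short of the optimum (16); for large problems, where exhaustive search is hopeless, such a “good enough” answer delivered in a reasonable time is often the better guide for action.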
The advent of powerful spreadsheet software has frequently given spreadsheets a central role in conducting postoptimality analysis. One of the great strengths of a spreadsheet is the ease with which it can be used interactively by anyone, including managers, to see what happens to the optimal solution (according to the current version of the model) when changes are made to the model. This process of experimenting with changes in the model also can be very helpful in providing understanding of the behavior of the model and increasing confidence in its validity.

In part, postoptimality analysis involves conducting sensitivity analysis to determine which parameters of the model are most critical (the “sensitive parameters”) in determining the solution. A common definition of sensitive parameter (used throughout this book) is the following.

For a mathematical model with specified values for all its parameters, the model’s sensitive parameters are the parameters whose value cannot be changed without changing the optimal solution.

Identifying the sensitive parameters is important, because this identifies the parameters whose value must be assigned with special care to avoid distorting the output of the model. The value assigned to a parameter commonly is just an estimate of some quantity (e.g., unit profit) whose exact value will become known only after the solution has been implemented. Therefore, after the sensitive parameters are identified, special attention is given to estimating each one more closely, or at least its range of likely values. One then seeks a solution that remains a particularly good one for all the various combinations of likely values of the sensitive parameters. If the solution is implemented on an ongoing basis, any later change in the value of a sensitive parameter immediately signals a need to change the solution.

In some cases, certain parameters of the model represent policy decisions (e.g., resource allocations).
If so, there frequently is some flexibility in the values assigned to these parameters. Perhaps some can be increased by decreasing others. Postoptimality analysis includes the investigation of such trade-offs.

In conjunction with the study phase discussed in Sec. 2.4 (testing the model), postoptimality analysis also involves obtaining a sequence of solutions that comprises a series of improving approximations to the ideal course of action. Thus, the apparent weaknesses in the initial solution are used to suggest improvements in the model, its input data, and perhaps the solution procedure. A new solution is then obtained, and the cycle is repeated. This process continues until the improvements in the succeeding solutions become too small to warrant continuation. Even then, a number of alternative solutions (perhaps solutions that are optimal for one of several plausible versions of the model and its input data) may be presented to management for the final selection. As suggested in Sec. 2.1, this presentation of alternative solutions would normally be done whenever the final choice among these alternatives should be based on considerations that are best left to the judgment of management.

Example. Consider again the Rijkswaterstaat OR study of national water management policy for the Netherlands, introduced at the end of Sec. 2.2. This study did not conclude by recommending just a single solution. Instead, a number of attractive alternatives were identified, analyzed, and compared. The final choice was left to the Dutch political process, culminating with approval by Parliament. Sensitivity analysis played a major role in this study. For example, certain parameters of the models represented environmental standards.
Sensitivity analysis included assessing the impact on water management problems if the values of these parameters were changed from the current environmental standards to other reasonable values. Sensitivity analysis also was used to assess the impact of changing the assumptions of the models, e.g., the assumption on the effect of future international treaties on the amount of pollution entering the Netherlands. A variety of scenarios (e.g., an extremely dry year or an extremely wet year) also were analyzed, with appropriate probabilities assigned. (Also see Selected References A11 and A13 for other examples where quickly deriving the appropriate kinds of solutions was a key part of an award-winning OR application.)

■ 2.4 TESTING THE MODEL

Developing a large mathematical model is analogous in some ways to developing a large computer program. When the first version of the computer program is completed, it inevitably contains many bugs. The program must be thoroughly tested to try to find and correct as many bugs as possible. Eventually, after a long succession of improved programs, the programmer (or programming team) concludes that the current program now is generally giving reasonably valid results. Although some minor bugs undoubtedly remain hidden in the program (and may never be detected), the major bugs have been sufficiently eliminated that the program now can be reliably used.

Similarly, the first version of a large mathematical model inevitably contains many flaws. Some relevant factors or interrelationships undoubtedly have not been incorporated into the model, and some parameters undoubtedly have not been estimated correctly. This is inevitable, given the difficulty of communicating and understanding all the aspects and subtleties of a complex operational problem as well as the difficulty of collecting reliable data. Therefore, before you use the model, it must be thoroughly tested to try to identify and correct as many flaws as possible.
Eventually, after a long succession of improved models, the OR team concludes that the current model now is giving reasonably valid results. Although some minor flaws undoubtedly remain hidden in the model (and may never be detected), the major flaws have been sufficiently eliminated so that the model now can be reliably used. This process of testing and improving a model to increase its validity is commonly referred to as model validation.

It is difficult to describe how model validation is done, because the process depends greatly on the nature of the problem being considered and the model being used. However, we make a few general comments, and then we give an example. (See Selected Reference 3 for a detailed discussion.)

Since the OR team may spend months developing all the detailed pieces of the model, it is easy to “lose the forest for the trees.” Therefore, after the details (“the trees”) of the initial version of the model are completed, a good way to begin model validation is to take a fresh look at the overall model (“the forest”) to check for obvious errors or oversights. The group doing this review preferably should include at least one individual who did not participate in the formulation of the model. Reexamining the definition of the problem and comparing it with the model may help to reveal mistakes. It is also useful to make sure that all the mathematical expressions are dimensionally consistent in the units used. Additional insight into the validity of the model can sometimes be obtained by varying the values of the parameters and/or the decision variables and checking to see whether the output from the model behaves in a plausible manner. This is often especially revealing when the parameters or variables are assigned extreme values near their maxima or minima.

A more systematic approach to testing the model is to use a retrospective test.
When it is applicable, this test involves using historical data to reconstruct the past and then determining how well the model and the resulting solution would have performed if they had been used. Comparing the effectiveness of this hypothetical performance with what actually happened then indicates whether using this model tends to yield a significant improvement over current practice. It may also indicate areas where the model has shortcomings and requires modifications. Furthermore, by using alternative solutions from the model and estimating their hypothetical historical performances, considerable evidence can be gathered regarding how well the model predicts the relative effects of alternative courses of action.

On the other hand, a disadvantage of retrospective testing is that it uses the same data that guided the formulation of the model. The crucial question is whether the past is truly representative of the future. If it is not, then the model might perform quite differently in the future than it would have in the past. To circumvent this disadvantage of retrospective testing, it is sometimes useful to further test the model by continuing the status quo temporarily. This provides new data that were not available when the model was constructed. These data are then used in the same ways as those described here to evaluate the model.

Documenting the process used for model validation is important. This helps to increase confidence in the model for subsequent users. Furthermore, if concerns arise in the future about the model, this documentation will be helpful in diagnosing where problems may lie.

Example. Consider an OR study done for IBM to integrate its national network of spare-parts inventories to improve service support for IBM’s customers.
This study resulted in a new inventory system that improved customer service while reducing the value of IBM’s inventories by over $250 million and saving an additional $20 million per year through improved operational efficiency. A particularly interesting aspect of the model validation phase of this study was the way that future users of the inventory system were incorporated into the testing process. Because these future users (IBM managers in functional areas responsible for implementation of the inventory system) were skeptical about the system being developed, representatives were appointed to a user team to serve as advisers to the OR team. After a preliminary version of the new system had been developed (based on a multiechelon inventory model), a preimplementation test of the system was conducted. Extensive feedback from the user team led to major improvements in the proposed system. (Selected Reference A5 describes this study in detail.)

■ 2.5 PREPARING TO APPLY THE MODEL

What happens after the testing phase has been completed and an acceptable model has been developed? If the model is to be used repeatedly, the next step is to install a well-documented system for applying the model as prescribed by management. This system will include the model, solution procedure (including postoptimality analysis), and operating procedures for implementation. Then, even as personnel changes, the system can be called on at regular intervals to provide a specific numerical solution.

This system usually is computer-based. In fact, a considerable number of computer programs often need to be used and integrated. Databases and management information systems may provide up-to-date input for the model each time it is used, in which case interface programs are needed. After a solution procedure (another program) is applied to the model, additional computer programs may trigger the implementation of the results automatically.
In other cases, an interactive computer-based system called a decision support system is installed to help managers use data and models to support (rather than replace) their decision making as needed. Another program may generate managerial reports (in the language of management) that interpret the output of the model and its implications for application.

In major OR studies, several months (or longer) may be required to develop, test, and install this computer system. Part of this effort involves developing and implementing a process for maintaining the system throughout its future use. As conditions change over time, this process should modify the computer system (including the model) accordingly.

Example. The application vignette in Sec. 2.2 described an OR study done for Continental Airlines that led to the formulation of a huge mathematical model for reassigning crews to flights when schedule disruptions occur. Because the model needs to be applied immediately when a disruption occurs, a decision support system called CrewSolver was developed to incorporate both the model and a huge in-memory data store representing current operations. CrewSolver enables a crew coordinator to input data about the schedule disruption and then to use a graphical user interface to request an immediate solution for how to reassign crews to flights. (Also see Selected References A4 and A6 for other examples where a decision support system played a vital role in an award-winning OR application.)

■ 2.6 IMPLEMENTATION

After a system is developed for applying the model, the last phase of an OR study is to implement this system as prescribed by management. This phase is a critical one because it is here, and only here, that the benefits of the study are reaped.
Therefore, it is important for the OR team to participate in launching this phase, both to make sure that model solutions are accurately translated to an operating procedure and to rectify any flaws in the solutions that are then uncovered.

The success of the implementation phase depends a great deal upon the support of both top management and operating management. The OR team is much more likely to gain this support if it has kept management well informed and encouraged management’s active guidance throughout the course of the study. Good communications help to ensure that the study accomplishes what management wanted, and also give management a greater sense of ownership of the study, which encourages their support for implementation.

The implementation phase involves several steps. First, the OR team gives operating management a careful explanation of the new system to be adopted and how it relates to operating realities. Next, these two parties share the responsibility for developing the procedures required to put this system into operation. Operating management then sees that a detailed indoctrination is given to the personnel involved, and the new course of action is initiated. If successful, the new system may be used for years to come. With this in mind, the OR team monitors the initial experience with the course of action taken and seeks to identify any modifications that should be made in the future.

Throughout the entire period during which the new system is being used, it is important to continue to obtain feedback on how well the system is working and whether the assumptions of the model continue to be satisfied. When significant deviations from the original assumptions occur, the model should be revisited to determine if any modifications should be made in the system. The postoptimality analysis done earlier (as described in Sec. 2.3) can be helpful in guiding this review process.
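The ongoing feedback loop just described can be sketched as a simple monitoring rule: periodically compare recent observations against a value the model assumes, and flag the need for a model review when the deviation becomes significant. The assumed demand rate, tolerance, and data below are all hypothetical, invented purely for illustration.

```python
# Hypothetical monitoring sketch (all numbers illustrative): suppose the
# model was built assuming an average demand of 100 units per period.
ASSUMED_DEMAND = 100.0
TOLERANCE = 0.15   # maximum acceptable relative deviation from the assumption
WINDOW = 4         # number of recent periods averaged

def needs_review(observed_demands):
    """Return True if recent experience deviates too far from the model's assumption."""
    recent = observed_demands[-WINDOW:]
    average = sum(recent) / len(recent)
    return abs(average - ASSUMED_DEMAND) / ASSUMED_DEMAND > TOLERANCE

print(needs_review([102, 98, 105, 97]))    # False -- assumption still holds
print(needs_review([120, 125, 130, 128]))  # True -- sustained deviation; revisit the model
```

In practice such checks would cover each key assumption of the model, with thresholds set by the OR team in consultation with management.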
Upon culmination of a study, it is appropriate for the OR team to document its methodology clearly and accurately enough so that the work is reproducible. Replicability should be part of the professional ethical code of the operations researcher. This condition is especially crucial when controversial public policy issues are being studied.

Example. This example illustrates how a successful implementation phase might need to involve thousands of employees before undertaking the new procedures. Samsung Electronics Corp. initiated a major OR study in March 1996 to develop new methodologies and scheduling applications that would streamline the entire semiconductor manufacturing process and reduce work-in-progress inventories. The study continued for over five years, culminating in June 2001, largely because of the extensive effort required for the implementation phase. The OR team needed to gain the support of numerous managers, manufacturing staff, and engineering staff by training them in the principles and logic of the new manufacturing procedures. Ultimately, more than 3,000 people attended training sessions. The new procedures then were phased in gradually to build confidence. However, this patient implementation process paid huge dividends. The new procedures transformed the company from being the least efficient manufacturer in the semiconductor industry to becoming the most efficient. This resulted in increased revenues of over $1 billion by the time the implementation of the OR study was completed. (Selected Reference A12 describes this study in detail. Also see Selected References A4, A5, and A7 for other examples where an elaborate implementation strategy was a key to the success of an award-winning OR study.)
■ 2.7 CONCLUSIONS

Although the remainder of this book focuses primarily on constructing and solving mathematical models, in this chapter we have tried to emphasize that this constitutes only a portion of the overall process involved in conducting a typical OR study. The other phases described here also are very important to the success of the study. Try to keep in perspective the role of the model and the solution procedure in the overall process as you move through the subsequent chapters. Then, after gaining a deeper understanding of mathematical models, we suggest that you plan to return to review this chapter again in order to further sharpen this perspective.

OR is closely intertwined with the use of computers. In the early years, these generally were mainframe computers, but now personal computers and workstations are being widely used to solve OR models.

In concluding this discussion of the major phases of an OR study, it should be emphasized that there are many exceptions to the “rules” prescribed in this chapter. By its very nature, OR requires considerable ingenuity and innovation, so it is impossible to write down any standard procedure that should always be followed by OR teams. Rather, the preceding description may be viewed as a model that roughly represents how successful OR studies are conducted.

■ SELECTED REFERENCES

1. Board, J., C. Sutcliffe, and W. T. Ziemba: “Applying Operations Research Techniques to Financial Markets,” Interfaces, 33(2): 12–24, March–April 2003.
2. Brown, G. G., and R. E. Rosenthal: “Optimization Tradecraft: Hard-Won Insights from Real-World Decision Support,” Interfaces, 38(5): 356–366, September–October 2008.
3. Gass, S. I.: “Decision-Aiding Models: Validation, Assessment, and Related Issues for Policy Analysis,” Operations Research, 31: 603–631, 1983.
4. Gass, S. I.: “Model World: Danger, Beware the User as Modeler,” Interfaces, 20(3): 60–64, May–June 1990.
5. Hall, R. W.: “What’s So Scientific about MS/OR?” Interfaces, 15(2): 40–45, March–April 1985.
6. Han, J., M. Kamber, and J. Pei: Data Mining: Concepts and Techniques, 3rd ed., Elsevier/Morgan Kaufmann, Waltham, MA, 2011.
7. Howard, R. A.: “The Ethical OR/MS Professional,” Interfaces, 31(6): 69–82, November–December 2001.
8. Miser, H. J.: “The Easy Chair: Observation and Experimentation,” Interfaces, 19(5): 23–30, September–October 1989.
9. Morris, W. T.: “On the Art of Modeling,” Management Science, 13: B707–B717, 1967.
10. Murphy, F. H.: “The Occasional Observer: Some Simple Precepts for Project Success,” Interfaces, 28(5): 25–28, September–October 1998.
11. Murphy, F. H.: “ASP, The Art and Science of Practice: Elements of the Practice of Operations Research: A Framework,” Interfaces, 35(2): 154–163, March–April 2005.
12. Murty, K. G.: Case Studies in Operations Research: Realistic Applications of Optimal Decision Making, Springer, New York, scheduled for publication in 2014.
13. Pidd, M.: “Just Modeling Through: A Rough Guide to Modeling,” Interfaces, 29(2): 118–132, March–April 1999.
14. Williams, H. P.: Model Building in Mathematical Programming, 5th ed., Wiley, Hoboken, NJ, 2013.
15. Wright, P. D., M. J. Liberatore, and R. L. Nydick: “A Survey of Operations Research Models and Applications in Homeland Security,” Interfaces, 36(6): 514–529, November–December 2006.

Some Award-Winning Applications of the OR Modeling Approach (a link to all these articles is provided on our website, www.mhhe.com/hillier):

A1. Alden, J. M., L. D. Burns, T. Costy, R. D. Hutton, C. A. Jackson, D. S. Kim, K. A. Kohls, J. H. Owen, M. A. Turnquist, and D. J. V. Veen: “General Motors Increases Its Production Throughput,” Interfaces, 36(1): 6–25, January–February 2006.
A2. Altschuler, S., D. Batavia, J. Bennett, R. Labe, B. Liao, R. Nigam, and J. Oh: “Pricing Analysis for Merrill Lynch Integrated Choice,” Interfaces, 32(1): 5–19, January–February 2002.
A3. Bixby, A., B. Downs, and M. Self: “A Scheduling and Capable-to-Promise Application for Swift & Company,” Interfaces, 36(1): 69–86, January–February 2006.
A4. Braklow, J. W., W. W. Graham, S. M. Hassler, K. E. Peck, and W. B. Powell: “Interactive Optimization Improves Service and Performance for Yellow Freight System,” Interfaces, 22(1): 147–172, January–February 1992.
A5. Cohen, M., P. V. Kamesam, P. Kleindorfer, H. Lee, and A. Tekerian: “Optimizer: IBM’s Multi-Echelon Inventory System for Managing Service Logistics,” Interfaces, 20(1): 65–82, January–February 1990.
A6. DeWitt, C. W., L. S. Lasdon, A. D. Waren, D. A. Brenner, and S. A. Melhem: “OMEGA: An Improved Gasoline Blending System for Texaco,” Interfaces, 19(1): 85–101, January–February 1989.
A7. Fleuren, H., et al.: “Supply Chain-Wide Optimization at TNT Express,” Interfaces, 43(1): 5–20, January–February 2013.
A8. Goeller, B. F., and the PAWN team: “Planning the Netherlands’ Water Resources,” Interfaces, 15(1): 3–33, January–February 1985.
A9. Hicks, R., R. Madrid, C. Milligan, R. Pruneau, M. Kanaley, Y. Dumas, B. Lacroix, J. Desrosiers, and F. Soumis: “Bombardier Flexjet Significantly Improves Its Fractional Aircraft Ownership Operations,” Interfaces, 35(1): 49–60, January–February 2005.
A10. Kaplan, E. H., and E. O’Keefe: “Let the Needles Do the Talking! Evaluating the New Haven Needle Exchange,” Interfaces, 23(1): 7–26, January–February 1993.
A11. Kok, T. de, F. Janssen, J. van Doremalen, E. van Wachem, M. Clerkx, and W. Peeters: “Philips Electronics Synchronizes Its Supply Chain to End the Bullwhip Effect,” Interfaces, 35(1): 37–48, January–February 2005.
A12. Leachman, R. C., J. Kang, and V. Lin: “SLIM: Short Cycle Time and Low Inventory in Manufacturing at Samsung Electronics,” Interfaces, 32(1): 61–77, January–February 2002.
A13. Rash, E., and K. Kempf: “Product Line Design and Scheduling at Intel,” Interfaces, 42(5): 425–436, September–October 2012.
A14. Taylor, P. E., and S. J. Huxley: “A Break from Tradition for the San Francisco Police: Patrol Officer Scheduling Using an Optimization-Based Decision Support System,” Interfaces, 19(1): 4–24, January–February 1989.

■ PROBLEMS

2.1-1. The example in Sec. 2.1 summarizes an award-winning OR study done for Merrill Lynch. Read Selected Reference A2 that describes this study in detail.
(a) Summarize the background that led to undertaking this study.
(b) Quote the one-sentence statement of the general mission of the OR group (called the management science group) that conducted this study.
(c) Identify the type of data that the management science group obtained for each client.
(d) Identify the new pricing options that were provided to the company’s clients as a result of this study.
(e) What was the resulting impact on Merrill Lynch’s competitive position?

2.1-2. Read Selected Reference A1 that describes an award-winning OR study done for General Motors.
(a) Summarize the background that led to undertaking this study.
(b) What was the goal of this study?
(c) Describe how software was used to automate the collection of the needed data.
(d) The improved production throughput that resulted from this study yielded how much in documented savings and increased revenue?

2.1-3. Read Selected Reference A14 that describes an OR study done for the San Francisco Police Department.
(a) Summarize the background that led to undertaking this study.
(b) Define part of the problem being addressed by identifying the six directives for the scheduling system to be developed.
(c) Describe how the needed data were gathered.
(d) List the various tangible and intangible benefits that resulted from the study.

2.1-4. Read Selected Reference A10 that describes an OR study done for the Health Department of New Haven, Connecticut.
(a) Summarize the background that led to undertaking this study.
(b) Outline the system developed to track and test each needle and syringe in order to gather the needed data.
(c) Summarize the initial results from this tracking and testing system.
(d) Describe the impact and potential impact of this study on public policy.

2.2-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 2.2. List the various financial and nonfinancial benefits that resulted from this study.

2.2-2. Read Selected Reference A3 that describes an OR study done for Swift & Company.
(a) Summarize the background that led to undertaking this study.
(b) Describe the purpose of each of the three general types of models formulated during this study.
(c) How many specific models does the company now use as a result of this study?
(d) List the various financial and nonfinancial benefits that resulted from this study.

2.2-3. Read Selected Reference A8 that describes an OR study done for the Rijkswaterstaat of the Netherlands. (Focus especially on pp. 3–20 and 30–32.)
(a) Summarize the background that led to undertaking this study.
(b) Summarize the purpose of each of the five mathematical models described on pp. 10–18.
(c) Summarize the “impact measures” (measures of performance) for comparing policies that are described on pp. 6–7 of this article.
(d) List the various tangible and intangible benefits that resulted from the study.

2.2-4. Read Selected Reference 5.
(a) Identify the author’s example of a model in the natural sciences and of a model in OR.
(b) Describe the author’s viewpoint about how basic precepts of using models to do research in the natural sciences can also be used to guide research on operations (OR).

2.2-5. Read Selected Reference A9 that describes an award-winning OR study done for Bombardier Flexjet.
(a) What was the objective of this study?
(b) As described on pages 53 and 58–59 of this reference, this OR study is remarkable for combining a wide range of mathematical models. Referring to the chapter titles in this book’s table of contents, list these kinds of models.
(c) What are the financial benefits that resulted from this study?

2.3-1. Read Selected Reference A11 that describes an OR study done for Philips Electronics.
(a) Summarize the background that led to undertaking this study.
(b) What was the purpose of this study?
(c) What were the benefits of developing software to support problem solving speedily?
(d) List the four steps in the collaborative-planning process that resulted from this study.
(e) List the various financial and nonfinancial benefits that resulted from this study.

2.3-2. Refer to Selected Reference 5.
(a) Describe the author’s viewpoint about whether the sole goal in using a model should be to find its optimal solution.
(b) Summarize the author’s viewpoint about the complementary roles of modeling, evaluating information from the model, and then applying the decision maker’s judgment when deciding on a course of action.

2.3-3. Read Selected Reference A13 that describes an OR study done for Intel that won the 2011 Daniel H. Wagner Prize for Excellence in Operations Research Practice.
(a) What is the problem being addressed? What is the objective of the study?
(b) Because of the complexity of the problem, it is practically impossible to solve optimally. What kind of algorithm is used instead to obtain a good suboptimal solution?

2.4-1. Refer to pp. 18–20 of Selected Reference A8 that describes an OR study done for the Rijkswaterstaat of the Netherlands. Describe an important lesson that was gained from model validation in this study.

2.4-2. Read Selected Reference 8. Summarize the author’s viewpoint about the roles of observation and experimentation in the model validation process.

2.4-3. Read pp.
603–617 of Selected Reference 3.
(a) What does the author say about whether a model can be completely validated?
(b) Summarize the distinctions made between model validity, data validity, logical/mathematical validity, predictive validity, operational validity, and dynamic validity.
(c) Describe the role of sensitivity analysis in testing the operational validity of a model.
(d) What does the author say about whether there is a validation methodology that is appropriate for all models?
(e) Cite the page in the article that lists basic validation steps.

2.5-1. Read Selected Reference A6 that describes an OR study done for Texaco.
(a) Summarize the background that led to undertaking this study.
(b) Briefly describe the user interface with the decision support system OMEGA that was developed as a result of this study.
(c) OMEGA is constantly being updated and extended to reflect changes in the operating environment. Briefly describe the various kinds of changes involved.
(d) Summarize how OMEGA is used.
(e) List the various tangible and intangible benefits that resulted from the study.

2.5-2. Refer to Selected Reference A4 that describes an OR study done for Yellow Freight System, Inc.
(a) Referring to pp. 147–149 of this article, summarize the background that led to undertaking this study.
(b) Referring to p. 150, briefly describe the computer system SYSNET that was developed as a result of this study. Also summarize the applications of SYSNET.
(c) Referring to pp. 162–163, describe why the interactive aspects of SYSNET proved important.
(d) Referring to p. 163, summarize the outputs from SYSNET.
(e) Referring to pp. 168–172, summarize the various benefits that have resulted from using SYSNET.

2.6-1. Refer to pp. 163–167 of Selected Reference A4 that describes an OR study done for Yellow Freight System, Inc., and the resulting computer system SYSNET.
(a) Briefly describe how the OR team gained the support of upper management for implementing SYSNET.
(b) Briefly describe the implementation strategy that was developed.
(c) Briefly describe the field implementation.
(d) Briefly describe how management incentives and enforcement were used in implementing SYSNET.
2.6-2. Read Selected Reference A5 that describes an OR study done for IBM and the resulting computer system Optimizer.
(a) Summarize the background that led to undertaking this study.
(b) List the complicating factors that the OR team members faced when they started developing a model and a solution algorithm.
(c) Briefly describe the preimplementation test of Optimizer.
(d) Briefly describe the field implementation test.
(e) Briefly describe the national implementation.
(f) List the various tangible and intangible benefits that resulted from the study.
2.6-3. Read Selected Reference A7 that describes an OR study done for TNT Express that won the 2012 Franz Edelman Award for Achievement in Operations Research and the Management Sciences. This study led to a worldwide global optimization (GO) program for the company. A "GO Academy" then was established to train the key employees who would implement the program.
(a) What is the main objective of the GO Academy?
(b) How much time do the trainees devote to this program?
(c) What designation is given to graduating employees?
2.7-1. From the bottom part of the selected references given at the end of the chapter, select one of these award-winning applications of the OR modeling approach (excluding any that have been assigned for other problems). Read this article and then write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.
2.7-2. From the bottom part of the selected references given at the end of the chapter, select three of these award-winning applications of the OR modeling approach (excluding any that have been assigned for other problems).
For each one, read this article and write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.
2.7-3. Read Selected Reference 4. The author describes 13 detailed phases of any OR study that develops and applies a computer-based model, whereas this chapter describes six broader phases. For each of these broader phases, list the detailed phases that fall partially or primarily within the broader phase.

CHAPTER 3
Introduction to Linear Programming

The development of linear programming has been ranked among the most important scientific advances of the mid-20th century, and we must agree with this assessment. Its impact since just 1950 has been extraordinary. Today it is a standard tool that has saved many thousands or millions of dollars for many companies or businesses of even moderate size in the various industrialized countries of the world, and its use in other sectors of society has been spreading rapidly. A major proportion of all scientific computation on computers is devoted to the use of linear programming. Dozens of textbooks have been written about linear programming, and published articles describing important applications now number in the hundreds.

What is the nature of this remarkable tool, and what kinds of problems does it address? You will gain insight into this topic as you work through subsequent examples. However, a verbal summary may help provide perspective. Briefly, the most common type of application involves the general problem of allocating limited resources among competing activities in a best possible (i.e., optimal) way. More precisely, this problem involves selecting the level of certain activities that compete for scarce resources that are necessary to perform those activities. The choice of activity levels then dictates how much of each resource will be consumed by each activity.
The variety of situations to which this description applies is diverse, indeed, ranging from the allocation of production facilities to products to the allocation of national resources to domestic needs, from portfolio selection to the selection of shipping patterns, from agricultural planning to the design of radiation therapy, and so on. However, the one common ingredient in each of these situations is the necessity for allocating resources to activities by choosing the levels of those activities.

Linear programming uses a mathematical model to describe the problem of concern. The adjective linear means that all the mathematical functions in this model are required to be linear functions. The word programming does not refer here to computer programming; rather, it is essentially a synonym for planning. Thus, linear programming involves the planning of activities to obtain an optimal result, i.e., a result that reaches the specified goal best (according to the mathematical model) among all feasible alternatives.

Although allocating resources to activities is the most common type of application, linear programming has numerous other important applications as well. In fact, any problem whose mathematical model fits the very general format for the linear programming model is a linear programming problem. (For this reason, a linear programming problem and its model often are referred to interchangeably as simply a linear program, or even as just an LP.) Furthermore, a remarkably efficient solution procedure, called the simplex method, is available for solving linear programming problems of even enormous size. These are some of the reasons for the tremendous impact of linear programming in recent decades.

Because of its great importance, we devote this and the next seven chapters specifically to linear programming. After this chapter introduces the general features of linear programming, Chaps.
4 and 5 focus on the simplex method. Chapters 6 and 7 discuss the further analysis of linear programming problems after the simplex method has been initially applied. Chapter 8 presents several widely used extensions of the simplex method and introduces an interior-point algorithm that sometimes can be used to solve even larger linear programming problems than the simplex method can handle. Chapters 9 and 10 consider some special types of linear programming problems whose importance warrants individual study. You also can look forward to seeing applications of linear programming to other areas of operations research (OR) in several later chapters.

We begin this chapter by developing a miniature prototype example of a linear programming problem. This example is small enough to be solved graphically in a straightforward way. Sections 3.2 and 3.3 present the general linear programming model and its basic assumptions. Section 3.4 gives some additional examples of linear programming applications. Section 3.5 describes how linear programming models of modest size can be conveniently displayed and solved on a spreadsheet. However, some linear programming problems encountered in practice require truly massive models. Section 3.6 illustrates how a massive model can arise and how it can still be formulated successfully with the help of a special modeling language such as MPL (its formulation is described in this section) or LINGO (its formulation of this model is presented in Supplement 2 to this chapter on the book's website).

■ 3.1 PROTOTYPE EXAMPLE

The WYNDOR GLASS CO. produces high-quality glass products, including windows and glass doors. It has three plants. Aluminum frames and hardware are made in Plant 1, wood frames are made in Plant 2, and Plant 3 produces the glass and assembles the products. Because of declining earnings, top management has decided to revamp the company's product line.
Unprofitable products are being discontinued, releasing production capacity to launch two new products having large sales potential:

Product 1: An 8-foot glass door with aluminum framing
Product 2: A 4 × 6 foot double-hung wood-framed window

Product 1 requires some of the production capacity in Plants 1 and 3, but none in Plant 2. Product 2 needs only Plants 2 and 3. The marketing division has concluded that the company could sell as much of either product as could be produced by these plants. However, because both products would be competing for the same production capacity in Plant 3, it is not clear which mix of the two products would be most profitable. Therefore, an OR team has been formed to study this question.

The OR team began by having discussions with upper management to identify management's objectives for the study. These discussions led to developing the following definition of the problem:

Determine what the production rates should be for the two products in order to maximize their total profit, subject to the restrictions imposed by the limited production capacities available in the three plants. (Each product will be produced in batches of 20, so the production rate is defined as the number of batches produced per week.) Any combination of production rates that satisfies these restrictions is permitted, including producing none of one product and as much as possible of the other.

An Application Vignette

Swift & Company is a diversified protein-producing business based in Greeley, Colorado. With annual sales of over $8 billion, beef and related products are by far the largest portion of the company's business. To improve the company's sales and manufacturing performance, upper management concluded that it needed to achieve three major objectives. One was to enable the company's customer service representatives to talk to their more than 8,000 customers with accurate information about the availability of current and future inventory while considering requested delivery dates and maximum product age upon delivery. A second was to produce an efficient shift-level schedule for each plant over a 28-day horizon. A third was to accurately determine whether a plant can ship a requested order-line-item quantity on the requested date and time given the availability of cattle and constraints on the plant's capacity.

To meet these three challenges, an OR team developed an integrated system of 45 linear programming models based on three model formulations to dynamically schedule its beef-fabrication operations at five plants in real time as it receives orders. The total audited benefits realized in the first year of operation of this system were $12.74 million, including $12 million due to optimizing the product mix. Other benefits include a reduction in orders lost, a reduction in price discounting, and better on-time delivery.

Source: A. Bixby, B. Downs, and M. Self, "A Scheduling and Capable-to-Promise Application for Swift & Company," Interfaces, 36(1): 39–50, Jan.–Feb. 2006. (A link to this article is provided on our website, www.mhhe.com/hillier.)

The OR team also identified the data that needed to be gathered:

1. Number of hours of production time available per week in each plant for these new products. (Most of the time in these plants already is committed to current products, so the available capacity for the new products is quite limited.)
2. Number of hours of production time used in each plant for each batch produced of each new product.
3. Profit per batch produced of each new product. (Profit per batch produced was chosen as an appropriate measure after the team concluded that the incremental profit from each additional batch produced would be roughly constant regardless of the total number of batches produced.
Because no substantial costs will be incurred to initiate the production and marketing of these new products, the total profit from each one is approximately this profit per batch produced times the number of batches produced.)

Obtaining reasonable estimates of these quantities required enlisting the help of key personnel in various units of the company. Staff in the manufacturing division provided the data in the first category above. Developing estimates for the second category of data required some analysis by the manufacturing engineers involved in designing the production processes for the new products. By analyzing cost data from these same engineers and the marketing division, along with a pricing decision from the marketing division, the accounting department developed estimates for the third category. Table 3.1 summarizes the data gathered.

The OR team immediately recognized that this was a linear programming problem of the classic product mix type, and the team next undertook the formulation of the corresponding mathematical model.

■ TABLE 3.1 Data for the Wyndor Glass Co. problem

                       Production Time per Batch, Hours
                              Product                Production Time Available
  Plant                  1            2              per Week, Hours
    1                    1            0                     4
    2                    0            2                    12
    3                    3            2                    18
  Profit per batch    $3,000       $5,000

Formulation as a Linear Programming Problem

The definition of the problem given above indicates that the decisions to be made are the number of batches of the respective products to be produced per week so as to maximize their total profit. Therefore, to formulate the mathematical (linear programming) model for this problem, let

  x1 = number of batches of product 1 produced per week
  x2 = number of batches of product 2 produced per week
  Z  = total profit per week (in thousands of dollars) from producing these two products

Thus, x1 and x2 are the decision variables for the model. Using the bottom row of Table 3.1, we obtain Z = 3x1 + 5x2.
The objective is to choose the values of x1 and x2 so as to maximize Z = 3x1 + 5x2, subject to the restrictions imposed on their values by the limited production capacities available in the three plants. Table 3.1 indicates that each batch of product 1 produced per week uses 1 hour of production time per week in Plant 1, whereas only 4 hours per week are available. This restriction is expressed mathematically by the inequality x1 ≤ 4. Similarly, Plant 2 imposes the restriction that 2x2 ≤ 12. The number of hours of production time used per week in Plant 3 by choosing x1 and x2 as the new products' production rates would be 3x1 + 2x2. Therefore, the mathematical statement of the Plant 3 restriction is 3x1 + 2x2 ≤ 18. Finally, since production rates cannot be negative, it is necessary to restrict the decision variables to be nonnegative: x1 ≥ 0 and x2 ≥ 0.

To summarize, in the mathematical language of linear programming, the problem is to choose values of x1 and x2 so as to

  Maximize  Z = 3x1 + 5x2,

subject to the restrictions

   x1         ≤  4
         2x2  ≤ 12
  3x1 + 2x2   ≤ 18

and

  x1 ≥ 0,  x2 ≥ 0.

(Notice how the layout of the coefficients of x1 and x2 in this linear programming model essentially duplicates the information summarized in Table 3.1.)

This problem is a classic example of a resource-allocation problem, the most common type of linear programming problem. The key characteristic of resource-allocation problems is that most or all of their functional constraints are resource constraints. The right-hand side of a resource constraint represents the amount available of some resource and the left-hand side represents the amount used of that resource, so the left-hand side must be ≤ the right-hand side. Product-mix problems are one type of resource-allocation problem, but you will see examples of resource-allocation problems of other types in Sec. 3.4 along with examples of other categories of linear programming problems.
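A formulation like this one can also be double-checked with any LP solver. As an illustration only (SciPy is chosen here for brevity; it is not one of the software packages bundled with this book), the Wyndor model can be passed to scipy.optimize.linprog:

```python
from scipy.optimize import linprog

# Wyndor Glass Co. model: maximize Z = 3*x1 + 5*x2.
# linprog minimizes, so the objective coefficients are negated.
c = [-3, -5]
# Functional constraints (rows match Table 3.1):
#   x1          <= 4    (Plant 1)
#         2*x2  <= 12   (Plant 2)
#   3*x1 + 2*x2 <= 18   (Plant 3)
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
# Nonnegativity constraints: x1 >= 0, x2 >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal (x1, x2) and the optimal value of Z
```

The solver reproduces the optimal solution that the graphical procedure of the next subsection finds by hand.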
Graphical Solution

This very small problem has only two decision variables and therefore only two dimensions, so a graphical procedure can be used to solve it. This procedure involves constructing a two-dimensional graph with x1 and x2 as the axes. The first step is to identify the values of (x1, x2) that are permitted by the restrictions. This is done by drawing each line that borders the range of permissible values for one restriction. To begin, note that the nonnegativity restrictions x1 ≥ 0 and x2 ≥ 0 require (x1, x2) to lie on the positive side of the axes (including actually on either axis), i.e., in the first quadrant. Next, observe that the restriction x1 ≤ 4 means that (x1, x2) cannot lie to the right of the line x1 = 4. These results are shown in Fig. 3.1, where the shaded area contains the only values of (x1, x2) that are still allowed.

In a similar fashion, the restriction 2x2 ≤ 12 (or, equivalently, x2 ≤ 6) implies that the line 2x2 = 12 should be added to the boundary of the permissible region. The final restriction, 3x1 + 2x2 ≤ 18, requires plotting the points (x1, x2) such that 3x1 + 2x2 = 18 (another line) to complete the boundary. (Note that the points such that 3x1 + 2x2 ≤ 18 are those that lie either underneath or on the line 3x1 + 2x2 = 18, so this is the limiting line above which points do not satisfy the inequality.) The resulting region of permissible values of (x1, x2), called the feasible region, is shown in Fig. 3.2. (The demo called Graphical Method in your OR Tutor provides a more detailed example of constructing a feasible region.)

The final step is to pick out the point in this feasible region that maximizes the value of Z = 3x1 + 5x2. To discover how to perform this step efficiently, begin by trial and error. Try, for example, Z = 10 = 3x1 + 5x2 to see if there are in the permissible region any values of (x1, x2) that yield a value of Z as large as 10. By drawing the line 3x1 + 5x2 = 10 (see Fig.
3.3), you can see that there are many points on this line that lie within the region. Having gained perspective by trying this arbitrarily chosen value of Z = 10, you should next try a larger arbitrary value of Z, say, Z = 20 = 3x1 + 5x2. Again, Fig. 3.3 reveals that a segment of the line 3x1 + 5x2 = 20 lies within the region, so that the maximum permissible value of Z must be at least 20.

■ FIGURE 3.1 Shaded area shows values of (x1, x2) allowed by x1 ≥ 0, x2 ≥ 0, x1 ≤ 4.

■ FIGURE 3.2 Shaded area shows the set of permissible values of (x1, x2), called the feasible region.

■ FIGURE 3.3 The value of (x1, x2) that maximizes 3x1 + 5x2 is (2, 6).

Now notice in Fig. 3.3 that the two lines just constructed are parallel. This is no coincidence, since any line constructed in this way has the form Z = 3x1 + 5x2 for the chosen value of Z, which implies that 5x2 = −3x1 + Z or, equivalently,

  x2 = −(3/5)x1 + (1/5)Z

This last equation, called the slope-intercept form of the objective function, demonstrates that the slope of the line is −3/5 (since each unit increase in x1 changes x2 by −3/5), whereas the intercept of the line with the x2 axis is (1/5)Z (since x2 = (1/5)Z when x1 = 0). The fact that the slope is fixed at −3/5 means that all lines constructed in this way are parallel.

Again, comparing the 10 = 3x1 + 5x2 and 20 = 3x1 + 5x2 lines in Fig. 3.3, we note that the line giving a larger value of Z (Z = 20) is farther up and away from the origin than the other line (Z = 10). This fact also is implied by the slope-intercept form of the objective function, which indicates that the intercept with the x2 axis ((1/5)Z) increases when the value chosen for Z is increased. These observations imply that our trial-and-error procedure for constructing lines in Fig.
3.3 involves nothing more than drawing a family of parallel lines containing at least one point in the feasible region and selecting the line that corresponds to the largest value of Z. Figure 3.3 shows that this line passes through the point (2, 6), indicating that the optimal solution is x1 = 2 and x2 = 6. The equation of this line is 3x1 + 5x2 = 3(2) + 5(6) = 36 = Z, indicating that the optimal value of Z is Z = 36. The point (2, 6) lies at the intersection of the two lines 2x2 = 12 and 3x1 + 2x2 = 18, shown in Fig. 3.2, so that this point can be calculated algebraically as the simultaneous solution of these two equations.

Having seen the trial-and-error procedure for finding the optimal point (2, 6), you now can streamline this approach for other problems. Rather than draw several parallel lines, it is sufficient to form a single line with a ruler to establish the slope. Then move the ruler with fixed slope through the feasible region in the direction of improving Z. (When the objective is to minimize Z, move the ruler in the direction that decreases Z.) Stop moving the ruler at the last instant that it still passes through a point in this region. This point is the desired optimal solution.

This procedure often is referred to as the graphical method for linear programming. It can be used to solve any linear programming problem with two decision variables. With considerable difficulty, it is possible to extend the method to three decision variables but not more than three. (The next chapter will focus on the simplex method for solving larger problems.)

Conclusions

The OR team used this approach to find that the optimal solution is x1 = 2, x2 = 6, with Z = 36. This solution indicates that the Wyndor Glass Co. should produce products 1 and 2 at the rate of 2 batches per week and 6 batches per week, respectively, with a resulting total profit of $36,000 per week. No other mix of the two products would be so profitable, according to the model. However, we emphasized in Chap.
2 that well-conducted OR studies do not simply find one solution for the initial model formulated and then stop. All six phases described in Chap. 2 are important, including thorough testing of the model (see Sec. 2.4) and postoptimality analysis (see Sec. 2.3). In full recognition of these practical realities, the OR team now is ready to evaluate the validity of the model more critically (to be continued in Sec. 3.3) and to perform sensitivity analysis on the effect of the estimates in Table 3.1 being different because of inaccurate estimation, changes of circumstances, etc. (to be continued in Sec. 7.2).

Continuing the Learning Process with Your OR Courseware

This is the first of many points in the book where you may find it helpful to use your OR Courseware on the book's website. A key part of this courseware is a program called OR Tutor. This program includes a complete demonstration example of the graphical method introduced in this section. To provide you with another example of a model formulation as well, this demonstration begins by introducing a problem and formulating a linear programming model for the problem before then applying the graphical method step by step to solve the model. Like the many other demonstration examples accompanying other sections of the book, this computer demonstration highlights concepts that are difficult to convey on the printed page. You may refer to Appendix 1 for documentation of the software.

If you would like to see still more examples, you can go to the Solved Examples section of the book's website. This section includes a few examples with complete solutions for almost every chapter as a supplement to the examples in the book and in OR Tutor. The examples for the current chapter begin with a relatively straightforward problem that involves formulating a small linear programming model and applying the graphical method.
The subsequent examples become progressively more challenging.

Another key part of your OR Courseware is a program called IOR Tutorial. This program features many procedures for interactively executing various solution methods presented in the book, which enables you to focus on learning and executing the logic of the method efficiently while the computer does the number crunching. Included is an interactive procedure for applying the graphical method for linear programming. Once you get the hang of it, a second procedure enables you to quickly apply the graphical method for performing sensitivity analysis on the effect of revising the data of the problem. You then can print out your work and results for your homework. Like the other procedures in IOR Tutorial, these procedures are designed specifically to provide you with an efficient, enjoyable, and enlightening learning experience while you do your homework.

When you formulate a linear programming model with more than two decision variables (so the graphical method cannot be used), the simplex method described in Chap. 4 enables you to still find an optimal solution immediately. Doing so also is helpful for model validation, since finding a nonsensical optimal solution signals that you have made a mistake in formulating the model.

We mentioned in Sec. 1.5 that your OR Courseware introduces you to four particularly popular commercial software packages (Excel with its Solver, a powerful Excel add-in called Analytic Solver Platform, LINGO/LINDO, and MPL/Solvers) for solving a variety of OR models. All four packages include the simplex method for solving linear programming models. Section 3.5 describes how to use Excel to formulate and solve linear programming models in a spreadsheet format with either Solver or Analytic Solver Platform for Education (ASPE). Descriptions of the other packages are provided in Sec.
3.6 (MPL and LINGO), Supplements 1 and 2 to this chapter on the book's website (LINGO), Sec. 4.8 (LINDO and various solvers of MPL), and Appendix 4.1 (LINGO and LINDO). MPL, LINGO, and LINDO tutorials also are provided on the book's website. In addition, your OR Courseware includes an Excel file, a LINGO/LINDO file, and an MPL/Solvers file showing how the respective software packages can be used to solve each of the examples in this chapter.

■ 3.2 THE LINEAR PROGRAMMING MODEL

The Wyndor Glass Co. problem is intended to illustrate a typical linear programming problem (miniature version). However, linear programming is too versatile to be completely characterized by a single example. In this section we discuss the general characteristics of linear programming problems, including the various legitimate forms of the mathematical model for linear programming.

Let us begin with some basic terminology and notation. The first column of Table 3.2 summarizes the components of the Wyndor Glass Co. problem. The second column then introduces more general terms for these same components that will fit many linear programming problems. The key terms are resources and activities, where m denotes the number of different kinds of resources that can be used and n denotes the number of activities being considered. Some typical resources are money and particular kinds of machines, equipment, vehicles, and personnel. Examples of activities include investing in particular projects, advertising in particular media, and shipping goods from a particular source to a particular destination.

■ TABLE 3.2 Common terminology for linear programming

  Prototype Example                        General Problem
  Production capacities of plants          Resources
    (3 plants)                               (m resources)
  Production of products                   Activities
    (2 products)                             (n activities)
  Production rate of product j, xj         Level of activity j, xj
  Profit Z                                 Overall measure of performance Z
In any application of linear programming, all the activities may be of one general kind (such as any one of these three examples), and then the individual activities would be particular alternatives within this general category. As described in the introduction to this chapter, the most common type of application of linear programming involves allocating resources to activities. The amount available of each resource is limited, so a careful allocation of resources to activities must be made. Determining this allocation involves choosing the levels of the activities that achieve the best possible value of the overall measure of performance.

Certain symbols are commonly used to denote the various components of a linear programming model. These symbols are listed below, along with their interpretation for the general problem of allocating resources to activities.

  Z   = value of overall measure of performance.
  xj  = level of activity j (for j = 1, 2, . . . , n).
  cj  = increase in Z that would result from each unit increase in level of activity j.
  bi  = amount of resource i that is available for allocation to activities (for i = 1, 2, . . . , m).
  aij = amount of resource i consumed by each unit of activity j.

The model poses the problem in terms of making decisions about the levels of the activities, so x1, x2, . . . , xn are called the decision variables. As summarized in Table 3.3, the values of cj, bi, and aij (for i = 1, 2, . . . , m and j = 1, 2, . . . , n) are the input constants for the model. The cj, bi, and aij are also referred to as the parameters of the model.

■ TABLE 3.3 Data needed for a linear programming model involving the allocation of resources to activities

                               Resource Usage per Unit of Activity
                                        Activity                     Amount of
  Resource                    1       2      ...      n              Resource Available
     1                       a11     a12     ...     a1n                  b1
     2                       a21     a22     ...     a2n                  b2
     .                        .       .               .                    .
     .                        .       .               .                    .
     m                       am1     am2     ...     amn                  bm
  Contribution to Z
  per unit of activity        c1      c2     ...      cn
Notice the correspondence between Table 3.3 and Table 3.1.

A Standard Form of the Model

Proceeding as for the Wyndor Glass Co. problem, we can now formulate the mathematical model for this general problem of allocating resources to activities. In particular, this model is to select the values for x1, x2, . . . , xn so as to

  Maximize  Z = c1x1 + c2x2 + . . . + cnxn,

subject to the restrictions

  a11x1 + a12x2 + . . . + a1nxn ≤ b1
  a21x1 + a22x2 + . . . + a2nxn ≤ b2
      .
      .
      .
  am1x1 + am2x2 + . . . + amnxn ≤ bm,

and

  x1 ≥ 0,  x2 ≥ 0,  . . . ,  xn ≥ 0.

We call this our standard form¹ for the linear programming problem. Any situation whose mathematical formulation fits this model is a linear programming problem. Notice that the model for the Wyndor Glass Co. problem formulated in the preceding section fits our standard form, with m = 3 and n = 2.

Common terminology for the linear programming model can now be summarized. The function being maximized, c1x1 + c2x2 + · · · + cnxn, is called the objective function. The restrictions normally are referred to as constraints. The first m constraints (those with a function of all the variables, ai1x1 + ai2x2 + · · · + ainxn, on the left-hand side) are sometimes called functional constraints (or structural constraints). Similarly, the xj ≥ 0 restrictions are called nonnegativity constraints (or nonnegativity conditions).

Other Forms

We now hasten to add that the preceding model does not actually fit the natural form of some linear programming problems. The other legitimate forms are the following:

1. Minimizing rather than maximizing the objective function:
   Minimize  Z = c1x1 + c2x2 + . . . + cnxn.
2. Some functional constraints with a greater-than-or-equal-to inequality:
   ai1x1 + ai2x2 + . . . + ainxn ≥ bi  for some values of i.
3. Some functional constraints in equation form:
   ai1x1 + ai2x2 + . . . + ainxn = bi  for some values of i.
4. Deleting the nonnegativity constraints for some decision variables:
   xj unrestricted in sign for some values of j.
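In this general notation, a model instance is nothing more than the arrays of parameters cj, aij, and bi, and checking a proposed solution against the standard form is mechanical. A minimal Python sketch of this idea (the helper evaluate is illustrative only, not part of the book's OR Courseware):

```python
# Standard-form data: maximize Z = sum(c[j]*x[j]) subject to
# sum(a[i][j]*x[j]) <= b[i] for each resource i, and x[j] >= 0.
# The names c, a, b, x follow the chapter's notation.

def evaluate(c, a, b, x):
    """Return (Z, feasible) for a proposed solution x."""
    n, m = len(x), len(b)
    feasible = all(xj >= 0 for xj in x) and all(
        sum(a[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)
    )
    Z = sum(cj * xj for cj, xj in zip(c, x))
    return Z, feasible

# Wyndor Glass Co. instance (m = 3 resources, n = 2 activities):
c = [3, 5]
a = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
print(evaluate(c, a, b, [2, 6]))    # the optimal solution: Z = 36, feasible
print(evaluate(c, a, b, [2, 3]))    # a feasible but suboptimal solution
print(evaluate(c, a, b, [-1, 3]))   # infeasible (violates x1 >= 0)
```

The same two arrays a and b plus the vector c thus completely specify any standard-form model, which is exactly the data layout of Table 3.3.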
Any problem that mixes some or all of these forms with the remaining parts of the preceding model is still a linear programming problem. Our interpretation of the words allocating limited resources among competing activities may no longer apply very well, if at all; but regardless of the interpretation or context, all that is required is that the mathematical statement of the problem fit the allowable forms. Thus, the concise definition of a linear programming problem is that each component of its model fits either the standard form or one of the other legitimate forms listed above.

¹This is called our standard form rather than the standard form because some textbooks adopt other forms.

Terminology for Solutions of the Model

You may be used to having the term solution mean the final answer to a problem, but the convention in linear programming (and its extensions) is quite different. Here, any specification of values for the decision variables (x1, x2, . . . , xn) is called a solution, regardless of whether it is a desirable or even an allowable choice. Different types of solutions are then identified by using an appropriate adjective.

A feasible solution is a solution for which all the constraints are satisfied.

An infeasible solution is a solution for which at least one constraint is violated.

In the example, the points (2, 3) and (4, 1) in Fig. 3.2 are feasible solutions, while the points (−1, 3) and (4, 4) are infeasible solutions.

The feasible region is the collection of all feasible solutions.

The feasible region in the example is the entire shaded area in Fig. 3.2. It is possible for a problem to have no feasible solutions. This would have happened in the example if the new products had been required to return a net profit of at least $50,000 per week to justify discontinuing part of the current product line.
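That this profit requirement could not be met follows from the optimal value Z = 36 found in Sec. 3.1, which falls short of 50 (i.e., $50,000). A brute-force numerical spot-check of this claim (a grid scan is illustrative only, not a proof, and not how LP software actually works):

```python
# Scan the Wyndor feasible region on a fine grid to confirm that no
# feasible production mix earns a weekly profit of 50 (in thousands),
# i.e., that requiring 3*x1 + 5*x2 >= 50 leaves no feasible solutions.
best = 0.0
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x1, x2 = 4 * i / steps, 6 * j / steps  # covers 0<=x1<=4, 0<=x2<=6
        if 3 * x1 + 2 * x2 <= 18:              # Plant 3 constraint
            best = max(best, 3 * x1 + 5 * x2)
print(best)  # the grid maximum is 36, well below 50
```

(The grid ranges already enforce x1 ≤ 4 and 2x2 ≤ 12, so only the Plant 3 constraint needs an explicit test.)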
The corresponding constraint, 3x1 + 5x2 ≥ 50, would eliminate the entire feasible region, so no mix of new products would be superior to the status quo. This case is illustrated in Fig. 3.4. Given that there are feasible solutions, the goal of linear programming is to find a best feasible solution, as measured by the value of the objective function in the model.

■ FIGURE 3.4
The Wyndor Glass Co. problem would have no feasible solutions if the constraint 3x1 + 5x2 ≥ 50 were added to the problem.

An optimal solution is a feasible solution that has the most favorable value of the objective function.

The most favorable value is the largest value if the objective function is to be maximized, whereas it is the smallest value if the objective function is to be minimized. Most problems will have just one optimal solution. However, it is possible to have more than one. This would occur in the example if the profit per batch produced of product 2 were changed to $2,000. This changes the objective function to Z = 3x1 + 2x2, so that all the points on the line segment connecting (2, 6) and (4, 3) would be optimal. This case is illustrated in Fig. 3.5. As in this case, any problem having multiple optimal solutions will have an infinite number of them, each with the same optimal value of the objective function.

Another possibility is that a problem has no optimal solutions. This occurs only if (1) it has no feasible solutions or (2) the constraints do not prevent improving the value of the objective function (Z) indefinitely in the favorable direction (positive or negative). The latter case is referred to as having an unbounded Z or an unbounded objective.
To illustrate, this case would result if the last two functional constraints were mistakenly deleted in the example, as illustrated in Fig. 3.6.

We next introduce a special type of feasible solution that plays the key role when the simplex method searches for an optimal solution.

A corner-point feasible (CPF) solution is a solution that lies at a corner of the feasible region. (CPF solutions are commonly referred to as extreme points (or vertices) by OR professionals, but we prefer the more suggestive corner-point terminology in an introductory course.)

Figure 3.7 highlights the five CPF solutions for the example.

■ FIGURE 3.5
The Wyndor Glass Co. problem would have multiple optimal solutions if the objective function were changed to Z = 3x1 + 2x2.

■ FIGURE 3.6
The Wyndor Glass Co. problem would have no optimal solutions if the only functional constraint were x1 ≤ 4, because x2 then could be increased indefinitely in the feasible region without ever reaching the maximum value of Z = 3x1 + 5x2.

■ FIGURE 3.7
The five dots are the five CPF solutions for the Wyndor Glass Co. problem.

Sections 4.1 and 5.1 will delve into the various useful properties of CPF solutions for problems of any size, including the following relationship with optimal solutions.

Relationship between optimal solutions and CPF solutions: Consider any linear programming problem with feasible solutions and a bounded feasible region. The problem must possess CPF solutions and at least one optimal solution. Furthermore, the best CPF solution must be an optimal solution. Thus, if a problem has exactly one optimal solution, it must be a CPF solution. If the problem has multiple optimal solutions, at least two must be CPF solutions.
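For a two-variable problem, these ideas can be checked by brute force: intersect every pair of constraint-boundary lines and keep the feasible intersections, which are exactly the CPF solutions. The following sketch (our own code, not from the text) does this for the Wyndor Glass Co. problem; the names `feasible` and `corners` are ours.

```python
from itertools import combinations

# Each constraint written as a*x1 + b*x2 <= c, with nonnegativity included.
constraints = [
    (1, 0, 4),    #  x1        <= 4
    (0, 2, 12),   #       2x2  <= 12
    (3, 2, 18),   # 3x1 + 2x2  <= 18
    (-1, 0, 0),   # -x1        <= 0   (x1 >= 0)
    (0, -1, 0),   #      -x2   <= 0   (x2 >= 0)
]

def feasible(x1, x2, tol=1e-9):
    return all(a * x1 + b * x2 <= c + tol for a, b, c in constraints)

corners = set()
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1           # Cramer's rule for the 2x2 system
    if abs(det) < 1e-9:
        continue                      # parallel boundary lines: no intersection
    x1 = (c1 * b2 - c2 * b1) / det
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        corners.add((round(x1, 6), round(x2, 6)))

best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])   # Z = 3x1 + 5x2
print(sorted(corners))   # the five CPF solutions highlighted in Fig. 3.7
print(best)              # the best CPF solution, which must be optimal

# The feasibility test also classifies the points from Fig. 3.2:
print(feasible(2, 3), feasible(4, 1), feasible(-1, 3), feasible(4, 4))
```

The enumeration recovers the five CPF solutions (0, 0), (0, 6), (2, 6), (4, 0), and (4, 3), with (2, 6) best, illustrating the relationship stated above.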
The example has exactly one optimal solution, (x1, x2) = (2, 6), which is a CPF solution. (Think about how the graphical method leads to the one optimal solution being a CPF solution.) When the example is modified to yield multiple optimal solutions, as shown in Fig. 3.5, two of these optimal solutions, (2, 6) and (4, 3), are CPF solutions.

■ 3.3 ASSUMPTIONS OF LINEAR PROGRAMMING

All the assumptions of linear programming actually are implicit in the model formulation given in Sec. 3.2. In particular, from a mathematical viewpoint, the assumptions simply are that the model must have a linear objective function subject to linear constraints. However, from a modeling viewpoint, these mathematical properties of a linear programming model imply that certain assumptions must hold about the activities and data of the problem being modeled, including assumptions about the effect of varying the levels of the activities. It is good to highlight these assumptions so you can more easily evaluate how well linear programming applies to any given problem. Furthermore, we still need to see why the OR team for the Wyndor Glass Co. concluded that a linear programming formulation provided a satisfactory representation of the problem.

Proportionality

Proportionality is an assumption about both the objective function and the functional constraints, as summarized below.

Proportionality assumption: The contribution of each activity to the value of the objective function Z is proportional to the level of the activity xj, as represented by the cjxj term in the objective function. Similarly, the contribution of each activity to the left-hand side of each functional constraint is proportional to the level of the activity xj, as represented by the aijxj term in the constraint.
Consequently, this assumption rules out any exponent other than 1 for any variable in any term of any function (whether the objective function or the function on the left-hand side of a functional constraint) in a linear programming model.²

To illustrate this assumption, consider the first term (3x1) in the objective function (Z = 3x1 + 5x2) for the Wyndor Glass Co. problem. This term represents the profit generated per week (in thousands of dollars) by producing product 1 at the rate of x1 batches per week. The Proportionality Satisfied column of Table 3.4 shows the case that was assumed in Sec. 3.1, namely, that this profit is indeed proportional to x1 so that 3x1 is the appropriate term for the objective function. By contrast, the next three columns show different hypothetical cases where the proportionality assumption would be violated.

Refer first to the Case 1 column in Table 3.4. This case would arise if there were start-up costs associated with initiating the production of product 1. For example, there might be costs involved with setting up the production facilities. There might also be costs associated with arranging the distribution of the new product. Because these are one-time costs, they would need to be amortized on a per-week basis to be commensurable with Z (profit in thousands of dollars per week). Suppose that this amortization were done and that the total start-up cost amounted to reducing Z by 1, but that the profit without considering the start-up cost would be 3x1. This would mean that the contribution from product 1 to Z should be 3x1 − 1 for x1 > 0,

²When the function includes any cross-product terms, proportionality should be interpreted to mean that changes in the function value are proportional to changes in each variable (xj) individually, given any fixed values for all the other variables.
Therefore, a cross-product term satisfies proportionality as long as each variable in the term has an exponent of 1. (However, any cross-product term violates the additivity assumption, discussed next.)

■ TABLE 3.4 Examples of satisfying or violating proportionality

                 Profit from Product 1 ($000 per Week)

                                      Proportionality Violated
 x1    Proportionality Satisfied    Case 1    Case 2    Case 3
 0                0                   0         0         0
 1                3                   2         3         3
 2                6                   5         7         5
 3                9                   8        12         6
 4               12                  11        18         6

whereas the contribution would be 3x1 = 0 when x1 = 0 (no start-up cost). This profit function,³ which is given by the solid curve in Fig. 3.8, certainly is not proportional to x1.

At first glance, it might appear that Case 2 in Table 3.4 is quite similar to Case 1. However, Case 2 actually arises in a very different way. There no longer is a start-up cost, and the profit from the first unit of product 1 per week is indeed 3, as originally assumed. However, there now is an increasing marginal return; i.e., the slope of the profit function for product 1 (see the solid curve in Fig. 3.9) keeps increasing as x1 is increased. This violation of proportionality might occur because of economies of scale that can sometimes be achieved at higher levels of production, e.g., through the use of more efficient high-volume machinery, longer production runs, quantity discounts for large purchases of raw materials, and the learning-curve effect whereby workers become more efficient as they gain experience with a particular mode of production. As the incremental cost goes down, the incremental profit will go up (assuming constant marginal revenue).

■ FIGURE 3.8
The solid curve violates the proportionality assumption because of the start-up cost that is incurred when x1 is increased from 0. The values at the dots are given by the Case 1 column of Table 3.4.
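The Case 1 start-up-cost profit function is easy to reproduce numerically. In this sketch (our own helper names, not from the text), an amortized start-up cost of 1 ($000 per week) is incurred whenever product 1 is produced at all:

```python
def profit_proportional(x1):
    """The 3*x1 term assumed in Sec. 3.1 (proportionality satisfied)."""
    return 3 * x1

def profit_case1(x1):
    """Case 1 of Table 3.4: 3*x1 - 1 for x1 > 0, but 0 at x1 = 0."""
    return 3 * x1 - 1 if x1 > 0 else 0

for x1 in range(5):
    print(x1, profit_proportional(x1), profit_case1(x1))
```

Running this reproduces the first two numeric columns of Table 3.4 (0, 3, 6, 9, 12 and 0, 2, 5, 8, 11), and shows why the jump at x1 = 0 keeps Case 1 from being proportional: no single coefficient c1 gives c1·x1 for every x1.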
³If the contribution from product 1 to Z were 3x1 − 1 for all x1 ≥ 0, including x1 = 0, then the fixed constant, −1, could be deleted from the objective function without changing the optimal solution and proportionality would be restored. However, this "fix" does not work here because the −1 constant does not apply when x1 = 0.

■ FIGURE 3.9
The solid curve violates the proportionality assumption because its slope (the marginal return from product 1) keeps increasing as x1 is increased. The values at the dots are given by the Case 2 column of Table 3.4.

Referring again to Table 3.4, the reverse of Case 2 is Case 3, where there is a decreasing marginal return. In this case, the slope of the profit function for product 1 (given by the solid curve in Fig. 3.10) keeps decreasing as x1 is increased. This violation of proportionality might occur because the marketing costs need to go up more than proportionally to attain increases in the level of sales. For example, it might be possible to sell product 1 at the rate of 1 per week (x1 = 1) with no advertising, whereas attaining sales to sustain a production rate of x1 = 2 might require a moderate amount of advertising, x1 = 3 might necessitate an extensive advertising campaign, and x1 = 4 might require also lowering the price.

All three cases are hypothetical examples of ways in which the proportionality assumption could be violated. What is the actual situation? The actual profit from producing product 1 (or any other product) is derived from the sales revenue minus various direct and indirect costs. Inevitably, some of these cost components are not strictly proportional to the production rate, perhaps for one of the reasons illustrated above.
However, the real question is whether, after all the components of profit have been accumulated, proportionality is a reasonable approximation for practical modeling purposes. For the Wyndor Glass Co. problem, the OR team checked both the objective function and the functional constraints. The conclusion was that proportionality could indeed be assumed without serious distortion.

■ FIGURE 3.10
The solid curve violates the proportionality assumption because its slope (the marginal return from product 1) keeps decreasing as x1 is increased. The values at the dots are given by the Case 3 column in Table 3.4.

For other problems, what happens when the proportionality assumption does not hold even as a reasonable approximation? In most cases, this means you must use nonlinear programming instead (presented in Chap. 13). However, we do point out in Sec. 13.8 that a certain important kind of nonproportionality can still be handled by linear programming by reformulating the problem appropriately. Furthermore, if the assumption is violated only because of start-up costs, there is an extension of linear programming (mixed integer programming) that can be used, as discussed in Sec. 12.3 (the fixed-charge problem).

Additivity

Although the proportionality assumption rules out exponents other than 1, it does not prohibit cross-product terms (terms involving the product of two or more variables). The additivity assumption does rule out this latter possibility, as summarized below.

Additivity assumption: Every function in a linear programming model (whether the objective function or the function on the left-hand side of a functional constraint) is the sum of the individual contributions of the respective activities.
To make this definition more concrete and clarify why we need to worry about this assumption, let us look at some examples. Table 3.5 shows some possible cases for the objective function for the Wyndor Glass Co. problem. In each case, the individual contributions from the products are just as assumed in Sec. 3.1, namely, 3x1 for product 1 and 5x2 for product 2. The difference lies in the last row, which gives the function value for Z when the two products are produced jointly. The Additivity Satisfied column shows the case where this function value is obtained simply by adding the first two rows (3 + 5 = 8), so that Z = 3x1 + 5x2 as previously assumed. By contrast, the next two columns show hypothetical cases where the additivity assumption would be violated (but not the proportionality assumption).

Referring to the Case 1 column of Table 3.5, this case corresponds to an objective function of Z = 3x1 + 5x2 + x1x2, so that Z = 3 + 5 + 1 = 9 for (x1, x2) = (1, 1), thereby violating the additivity assumption that Z = 3 + 5. (The proportionality assumption still is satisfied since after the value of one variable is fixed, the increment in Z from the other variable is proportional to the value of that variable.) This case would arise if the two products were complementary in some way that increases profit. For example, suppose that a major advertising campaign would be required to market either new product produced by itself, but that the same single campaign can effectively promote both products if the decision is made to produce both. Because a major cost is saved for the second product, their joint profit is somewhat more than the sum of their individual profits when each is produced by itself.

■ TABLE 3.5 Examples of satisfying or violating additivity for the objective function

                             Value of Z

                                    Additivity Violated
(x1, x2)    Additivity Satisfied    Case 1    Case 2
 (1, 0)              3                 3         3
 (0, 1)              5                 5         5
 (1, 1)              8                 9         7
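The three objective functions behind Table 3.5 can be evaluated directly. In this sketch (our own function names), the additive function and the two cross-product variants shown in the table's columns are compared at the same points:

```python
def Z_additive(x1, x2):
    return 3 * x1 + 5 * x2              # additivity satisfied: Z = 3x1 + 5x2

def Z_case1(x1, x2):
    return 3 * x1 + 5 * x2 + x1 * x2    # complementary products (additivity violated)

def Z_case2(x1, x2):
    return 3 * x1 + 5 * x2 - x1 * x2    # competitive products (additivity violated)

for f in (Z_additive, Z_case1, Z_case2):
    print(f.__name__, f(1, 0), f(0, 1), f(1, 1))
```

The individual contributions agree in every case (3 at (1, 0) and 5 at (0, 1), since the cross-product term vanishes when either variable is 0), but the joint values at (1, 1) differ: 8, 9, and 7, exactly the last row of Table 3.5.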
Case 2 in Table 3.5 also violates the additivity assumption because of the extra term in the corresponding objective function, Z = 3x1 + 5x2 − x1x2, so that Z = 3 + 5 − 1 = 7 for (x1, x2) = (1, 1). As the reverse of the first case, Case 2 would arise if the two products were competitive in some way that decreased their joint profit. For example, suppose that both products need to use the same machinery and equipment. If either product were produced by itself, this machinery and equipment would be dedicated to this one use. However, producing both products would require switching the production processes back and forth, with substantial time and cost involved in temporarily shutting down the production of one product and setting up for the other. Because of this major extra cost, their joint profit is somewhat less than the sum of their individual profits when each is produced by itself.

The same kinds of interaction between activities can affect the additivity of the constraint functions. For example, consider the third functional constraint of the Wyndor Glass Co. problem: 3x1 + 2x2 ≤ 18. (This is the only constraint involving both products.) This constraint concerns the production capacity of Plant 3, where 18 hours of production time per week is available for the two new products, and the function on the left-hand side (3x1 + 2x2) represents the number of hours of production time per week that would be used by these products. The Additivity Satisfied column of Table 3.6 shows this case as is, whereas the next two columns display cases where the function has an extra cross-product term that violates additivity. For all three columns, the individual contributions from the products toward using the capacity of Plant 3 are just as assumed previously, namely, 3x1 for product 1 and 2x2 for product 2, or 3(2) = 6 for x1 = 2 and 2(3) = 6 for x2 = 3.
As was true for Table 3.5, the difference lies in the last row, which now gives the total function value for production time used when the two products are produced jointly.

For Case 3 (see Table 3.6), the production time used by the two products is given by the function 3x1 + 2x2 + 0.5x1x2, so the total function value is 6 + 6 + 3 = 15 when (x1, x2) = (2, 3), which violates the additivity assumption that the value is just 6 + 6 = 12. This case can arise in exactly the same way as described for Case 2 in Table 3.5; namely, extra time is wasted switching the production processes back and forth between the two products. The extra cross-product term (0.5x1x2) would give the production time wasted in this way. (Note that wasting time switching between products leads to a positive cross-product term here, where the total function is measuring production time used, whereas it led to a negative cross-product term for Case 2 because the total function there measures profit.)

For Case 4 in Table 3.6, the function for production time used is 3x1 + 2x2 − 0.1x1²x2, so the function value for (x1, x2) = (2, 3) is 6 + 6 − 1.2 = 10.8. This case could arise in the following way. As in Case 3, suppose that the two products require the same type of machinery and equipment. But suppose now that the time required to switch from one product to the other would be relatively small. Because each product goes through a sequence of production operations, individual production facilities normally dedicated to that product would incur occasional idle periods. During these otherwise idle periods, these facilities can be used by the other product.

■ TABLE 3.6 Examples of satisfying or violating additivity for a functional constraint

                       Amount of Resource Used

                                    Additivity Violated
(x1, x2)    Additivity Satisfied    Case 3    Case 4
 (2, 0)              6                 6         6
 (0, 3)              6                 6         6
 (2, 3)             12                15        10.8
Consequently, the total production time used (including idle periods) when the two products are produced jointly would be less than the sum of the production times used by the individual products when each is produced by itself.

After analyzing the possible kinds of interaction between the two products illustrated by these four cases, the OR team concluded that none played a major role in the actual Wyndor Glass Co. problem. Therefore, the additivity assumption was adopted as a reasonable approximation. For other problems, if additivity is not a reasonable assumption, so that some or all of the mathematical functions of the model need to be nonlinear (because of the cross-product terms), you definitely enter the realm of nonlinear programming (Chap. 13).

Divisibility

Our next assumption concerns the values allowed for the decision variables.

Divisibility assumption: Decision variables in a linear programming model are allowed to have any values, including noninteger values, that satisfy the functional and nonnegativity constraints. Thus, these variables are not restricted to just integer values. Since each decision variable represents the level of some activity, it is being assumed that the activities can be run at fractional levels.

For the Wyndor Glass Co. problem, the decision variables represent production rates (the number of batches of a product produced per week). Since these production rates can have any fractional values within the feasible region, the divisibility assumption does hold. In certain situations, the divisibility assumption does not hold because some or all of the decision variables must be restricted to integer values. Mathematical models with this restriction are called integer programming models, and they are discussed in Chap. 12.
Certainty

Our last assumption concerns the parameters of the model, namely, the coefficients in the objective function cj, the coefficients in the functional constraints aij, and the right-hand sides of the functional constraints bi.

Certainty assumption: The value assigned to each parameter of a linear programming model is assumed to be a known constant.

In real applications, the certainty assumption is seldom satisfied precisely. Linear programming models usually are formulated to select some future course of action. Therefore, the parameter values used would be based on a prediction of future conditions, which inevitably introduces some degree of uncertainty.

For this reason it is usually important to conduct sensitivity analysis after a solution is found that is optimal under the assumed parameter values. As discussed in Sec. 2.3, one purpose is to identify the sensitive parameters (those whose value cannot be changed without changing the optimal solution), since any later change in the value of a sensitive parameter immediately signals a need to change the solution being used. Sensitivity analysis plays an important role in the analysis of the Wyndor Glass Co. problem, as you will see in Sec. 7.2. However, it is necessary to acquire some more background before we finish that story. Occasionally, the degree of uncertainty in the parameters is too great to be amenable to sensitivity analysis alone. Sections 7.4–7.6 describe other ways of dealing with linear programming under uncertainty.

The Assumptions in Perspective

We emphasized in Sec. 2.2 that a mathematical model is intended to be only an idealized representation of the real problem. Approximations and simplifying assumptions generally are required in order for the model to be tractable. Adding too much detail and precision can make the model too unwieldy for useful analysis of the problem.
All that is really needed is that there be a reasonably high correlation between the prediction of the model and what would actually happen in the real problem.

This advice certainly is applicable to linear programming. It is very common in real applications of linear programming that almost none of the four assumptions hold completely. Except perhaps for the divisibility assumption, minor disparities are to be expected. This is especially true for the certainty assumption, so sensitivity analysis normally is a must to compensate for the violation of this assumption.

However, it is important for the OR team to examine the four assumptions for the problem under study and to analyze just how large the disparities are. If any of the assumptions are violated in a major way, then a number of useful alternative models are available, as presented in later chapters of the book. A disadvantage of these other models is that the algorithms available for solving them are not nearly as powerful as those for linear programming, but this gap has been closing in some cases. For some applications, the powerful linear programming approach is used for the initial analysis, and then a more complicated model is used to refine this analysis.

As you work through the examples in Sec. 3.4, you will find it good practice to analyze how well each of the four assumptions of linear programming applies.

■ 3.4 ADDITIONAL EXAMPLES

The Wyndor Glass Co. problem is a prototype example of linear programming in several respects: It is a resource-allocation problem (the most common type of linear programming problem) because it involves allocating limited resources among competing activities. Furthermore, its model fits our standard form and its context is the traditional one of improved business planning. However, the applicability of linear programming is much wider. In this section we begin broadening our horizons.
As you study the following examples, note that it is their underlying mathematical model rather than their context that characterizes them as linear programming problems. Then give some thought to how the same mathematical model could arise in many other contexts by merely changing the names of the activities and so forth.

These examples are scaled-down versions of actual applications. Like the Wyndor problem and the demonstration example for the graphical method in OR Tutor, the first of these examples has only two decision variables and so can be solved by the graphical method. The new features are that it is a minimization problem and has a mixture of forms for the functional constraints. (This example considerably simplifies the real situation when designing radiation therapy, but the first application vignette in this section describes the exciting impact that OR actually is having in this area.) The subsequent examples have considerably more than two decision variables and so are more challenging to formulate. Although we will mention their optimal solutions that are obtained by the simplex method, the focus here is on how to formulate the linear programming model for these larger problems. Subsequent sections and the next chapter will turn to the question of the software tools and the algorithm (usually the simplex method) that are used to solve such problems. If you find that you need additional examples of formulating small and relatively straightforward linear programming models before dealing with these more challenging formulation examples, we suggest that you go back to the demonstration example for the graphical method in OR Tutor and to the examples in the Solved Examples section for this chapter on the book's website.

Design of Radiation Therapy

■ FIGURE 3.11
Cross section of Mary's tumor (viewed from above), nearby critical tissues, and the radiation beams being used. (1. Bladder and tumor; 2. Rectum, coccyx, etc.; 3. Femur, part of pelvis, etc.)

MARY has just been diagnosed as having a cancer at a fairly advanced stage. Specifically, she has a large malignant tumor in the bladder area (a "whole bladder lesion"). Mary is to receive the most advanced medical care available to give her every possible chance for survival. This care will include extensive radiation therapy.

Radiation therapy involves using an external beam treatment machine to pass ionizing radiation through the patient's body, damaging both cancerous and healthy tissues. Normally, several beams are precisely administered from different angles in a two-dimensional plane. Due to attenuation, each beam delivers more radiation to the tissue near the entry point than to the tissue near the exit point. Scatter also causes some delivery of radiation to tissue outside the direct path of the beam. Because tumor cells are typically microscopically interspersed among healthy cells, the radiation dosage throughout the tumor region must be large enough to kill the malignant cells, which are slightly more radiosensitive, yet small enough to spare the healthy cells. At the same time, the aggregate dose to critical tissues must not exceed established tolerance levels, in order to prevent complications that can be more serious than the disease itself. For the same reason, the total dose to the entire healthy anatomy must be minimized.

Because of the need to carefully balance all these factors, the design of radiation therapy is a very delicate process. The goal of the design is to select the combination of beams to be used, and the intensity of each one, to generate the best possible dose distribution. (The dose strength at any point in the body is measured in units called kilorads.) Once the treatment design has been developed, it is administered in many installments, spread over several weeks.
In Mary's case, the size and location of her tumor make the design of her treatment an even more delicate process than usual. Figure 3.11 shows a diagram of a cross section of the tumor viewed from above, as well as nearby critical tissues to avoid. These tissues include critical organs (e.g., the rectum) as well as bony structures (e.g., the femurs and pelvis) that will attenuate the radiation. Also shown are the entry point and direction for the only two beams that can be used with any modicum of safety in this case. (Actually, we are simplifying the example at this point, because normally dozens of possible beams must be considered.)

For any proposed beam of given intensity, the analysis of what the resulting radiation absorption by various parts of the body would be requires a complicated process. In brief, based on careful anatomical analysis, the energy distribution within the two-dimensional cross section of the tissue can be plotted on an isodose map, where the contour lines represent the dose strength as a percentage of the dose strength at the entry point. A fine grid then is placed over the isodose map. By summing the radiation absorbed in the squares containing each type of tissue, the average dose that is absorbed by the tumor, healthy anatomy, and critical tissues can be calculated. With more than one beam (administered sequentially), the radiation absorption is additive.

After thorough analysis of this type, the medical team has carefully estimated the data needed to design Mary's treatment, as summarized in Table 3.7. The first column lists the areas of the body that must be considered, and then the next two columns give the fraction of the radiation dose at the entry point for each beam that is absorbed by the respective areas on average.
For example, if the dose level at the entry point for beam 1 is 1 kilorad, then an average of 0.4 kilorad will be absorbed by the entire healthy anatomy in the two-dimensional plane, an average of 0.3 kilorad will be absorbed by nearby critical tissues, an average of 0.5 kilorad will be absorbed by the various parts of the tumor, and 0.6 kilorad will be absorbed by the center of the tumor. The last column gives the restrictions on the total dosage from both beams that is absorbed on average by the respective areas of the body. In particular, the average dosage absorption for the healthy anatomy must be as small as possible, the critical tissues must not exceed 2.7 kilorads, the average over the entire tumor must equal 6 kilorads, and the center of the tumor must be at least 6 kilorads.

An Application Vignette

Prostate cancer is the most common form of cancer diagnosed in men. It is estimated that there were nearly 240,000 new cases and nearly 30,000 deaths in just the United States alone in 2013. Like many other forms of cancer, radiation therapy is a common method of treatment for prostate cancer, where the goal is to have a sufficiently high radiation dosage in the tumor region to kill the malignant cells while minimizing the radiation exposure to critical healthy structures near the tumor. This treatment can be applied through either external beam radiation therapy (as illustrated by the first example in this section) or brachytherapy, which involves placing approximately 100 radioactive "seeds" within the tumor region. The challenge is to determine the most effective three-dimensional geometric pattern for placing these seeds.

Memorial Sloan-Kettering Cancer Center (MSKCC) in New York City is the world's oldest private cancer center. An OR team from the Center for Operations Research in Medicine and HealthCare at Georgia Institute of Technology worked with physicians at MSKCC to develop a highly sophisticated next-generation method of optimizing the application of brachytherapy to prostate cancer. The underlying model fits the structure for linear programming with one exception. In addition to having the usual continuous variables that fit linear programming, the model also has some binary variables (variables whose only possible values are 0 and 1). (This kind of extension of linear programming to what is called mixed-integer programming will be discussed in Chap. 12.) The optimization is done in a matter of minutes by an automated computerized planning system that can be operated readily by medical personnel when beginning the procedure of inserting the seeds into the patient's prostate.

This breakthrough in optimizing the application of brachytherapy to prostate cancer is having a profound impact on both health care costs and quality of life for treated patients because of its much greater effectiveness and the substantial reduction in side effects. When all U.S. clinics adopt this procedure, it is estimated that the annual cost savings will approximate $500 million due to eliminating the need for a pretreatment planning meeting and a postoperation CT scan, as well as providing a more efficient surgical procedure and reducing the need to treat subsequent side effects. It also is anticipated that this approach can be extended to other forms of brachytherapy, such as treatment of breast, cervix, esophagus, biliary tract, pancreas, head and neck, and eye.

This application of linear programming and its extensions led to the OR team winning the prestigious First Prize in the 2007 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: E. K. Lee and M. Zaider, "Operations Research Advances Cancer Therapeutics," Interfaces, 38(1): 5–25, Jan.–Feb. 2008. (A link to this article is provided on our website, www.mhhe.com/hillier.)
Formulation as a Linear Programming Problem. The decisions that need to be made are the dosages of radiation at the two entry points. Therefore, the two decision variables x1 and x2 represent the dose (in kilorads) at the entry point for beam 1 and beam 2, respectively.

■ TABLE 3.7 Data for the design of Mary's radiation therapy

                     Fraction of Entry Dose
                     Absorbed by Area (Average)     Restriction on Total
Area                 Beam 1         Beam 2          Average Dosage, Kilorads
Healthy anatomy      0.4            0.5             Minimize
Critical tissues     0.3            0.1             ≤ 2.7
Tumor region         0.5            0.5             = 6
Center of tumor      0.6            0.4             ≥ 6

Because the total dosage reaching the healthy anatomy is to be minimized, let Z denote this quantity. The data from Table 3.7 can then be used directly to formulate the following linear programming model.4

Minimize Z = 0.4x1 + 0.5x2,

subject to

0.3x1 + 0.1x2 ≤ 2.7
0.5x1 + 0.5x2 = 6
0.6x1 + 0.4x2 ≥ 6

and

x1 ≥ 0, x2 ≥ 0.

Notice the differences between this model and the one in Sec. 3.1 for the Wyndor Glass Co. problem. The latter model involved maximizing Z, and all the functional constraints were in ≤ form. This new model does not fit this same standard form, but it does incorporate three other legitimate forms described in Sec. 3.2, namely, minimizing Z, functional constraints in = form, and functional constraints in ≥ form.

However, both models have only two variables, so this new problem also can be solved by the graphical method illustrated in Sec. 3.1. Figure 3.12 shows the graphical solution. The feasible region consists of just the dark line segment between (6, 6) and (7.5, 4.5), because the points on this segment are the only ones that simultaneously satisfy all the constraints. (Note that the equality constraint limits the feasible region to the line containing this line segment, and then the other two functional constraints determine the two endpoints of the line segment.)
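Small models like this are also easy to check by computer. The following sketch (an illustration, not part of the original treatment; it assumes Python with SciPy is available) passes the model to scipy.optimize.linprog, negating the ≥ constraint because linprog expects all inequality rows in ≤ form:

```python
from scipy.optimize import linprog

# Minimize Z = 0.4 x1 + 0.5 x2 (average dose to healthy anatomy)
c = [0.4, 0.5]

# Inequality constraints in A_ub @ x <= b_ub form:
#    0.3 x1 + 0.1 x2 <= 2.7   (critical tissues)
#   -0.6 x1 - 0.4 x2 <= -6    (center of tumor >= 6, negated)
A_ub = [[0.3, 0.1],
        [-0.6, -0.4]]
b_ub = [2.7, -6]

# Equality constraint: 0.5 x1 + 0.5 x2 = 6 (average dose over the tumor)
A_eq = [[0.5, 0.5]]
b_eq = [6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal doses (x1, x2) and objective value Z
```

The solver identifies the same optimum that the graphical method finds: (x1, x2) = (7.5, 4.5) with Z = 5.25.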
The dashed line is the objective function line that passes through the optimal solution (x1, x2) = (7.5, 4.5) with Z = 5.25. This solution is optimal rather than the point (6, 6) because decreasing Z (for positive values of Z) pushes the objective function line toward the origin (where Z = 0). And Z = 5.25 for (7.5, 4.5) is less than Z = 5.4 for (6, 6). Thus, the optimal design is to use a total dose at the entry point of 7.5 kilorads for beam 1 and 4.5 kilorads for beam 2.

In contrast to the Wyndor problem, this one is not a resource-allocation problem. Instead, it fits into a category of linear programming problems called cost–benefit–trade-off problems. The key characteristic of such problems is that they seek the best trade-off between some cost and some benefit(s). In this particular example, the cost is the damage to healthy anatomy and the benefit is the radiation reaching the center of the tumor. The third functional constraint in this model is a benefit constraint, where the right-hand side represents the minimum acceptable level of the benefit and the left-hand side represents the level of the benefit achieved. This is the most important constraint, but the other two functional constraints impose additional restrictions as well. (You will see two additional examples of cost–benefit–trade-off problems later in this section.)

Regional Planning

The SOUTHERN CONFEDERATION OF KIBBUTZIM is a group of three kibbutzim (communal farming communities) in Israel. Overall planning for this group is done in its

4 This model is much smaller than normally would be needed for actual applications. For the best results, a realistic model might even need many tens of thousands of decision variables and constraints. For example, see H. E. Romeijn, R. K. Ahuja, J. F. Dempsey, and A. Kumar, "A New Linear Programming Approach to Radiation Therapy Treatment Planning Problems," Operations Research, 54(2): 201–216, March–April 2006.
For alternative approaches that combine linear programming with other OR techniques (like the application vignette in this section), also see G. J. Lim, M. C. Ferris, S. J. Wright, D. M. Shepard, and M. A. Earl, "An Optimization Framework for Conformal Radiation Treatment Planning," INFORMS Journal on Computing, 19(3): 366–380, Summer 2007.

■ FIGURE 3.12 Graphical solution for the design of Mary's radiation therapy. (The figure plots the constraint boundaries 0.3x1 + 0.1x2 = 2.7, 0.5x1 + 0.5x2 = 6, and 0.6x1 + 0.4x2 = 6 in the (x1, x2) plane, showing the feasible line segment between (6, 6) and (7.5, 4.5) and the objective function line Z = 5.25 = 0.4x1 + 0.5x2.)

Coordinating Technical Office. This office currently is planning agricultural production for the coming year. The agricultural output of each kibbutz is limited by both the amount of available irrigable land and the quantity of water allocated for irrigation by the Water Commissioner (a national government official). These data are given in Table 3.8.

■ TABLE 3.8 Resource data for the Southern Confederation of Kibbutzim

Kibbutz    Usable Land (Acres)    Water Allocation (Acre Feet)
1          400                    600
2          600                    800
3          300                    375

The crops suited for this region include sugar beets, cotton, and sorghum, and these are the three being considered for the upcoming season. These crops differ primarily in their expected net return per acre and their consumption of water. In addition, the Ministry of Agriculture has set a maximum quota for the total acreage that can be devoted to each of these crops by the Southern Confederation of Kibbutzim, as shown in Table 3.9.

■ TABLE 3.9 Crop data for the Southern Confederation of Kibbutzim

Crop           Maximum Quota (Acres)    Water Consumption (Acre Feet/Acre)    Net Return ($/Acre)
Sugar beets    600                      3                                     1,000
Cotton         500                      2                                     750
Sorghum        325                      1                                     250

Because of the limited water available for irrigation, the Southern Confederation of Kibbutzim will not be able to use all its irrigable land for planting crops in the upcoming season.
To ensure equity between the three kibbutzim, it has been agreed that every kibbutz will plant the same proportion of its available irrigable land. For example, if kibbutz 1 plants 200 of its available 400 acres, then kibbutz 2 must plant 300 of its 600 acres, while kibbutz 3 plants 150 acres of its 300 acres. However, any combination of the crops may be grown at any of the kibbutzim. The job facing the Coordinating Technical Office is to plan how many acres to devote to each crop at the respective kibbutzim while satisfying the given restrictions. The objective is to maximize the total net return to the Southern Confederation of Kibbutzim as a whole.

Formulation as a Linear Programming Problem. The quantities to be decided upon are the number of acres to devote to each of the three crops at each of the three kibbutzim. The decision variables xj (j = 1, 2, . . . , 9) represent these nine quantities, as shown in Table 3.10.

■ TABLE 3.10 Decision variables for the Southern Confederation of Kibbutzim problem

               Allocation (Acres)
               Kibbutz
Crop           1      2      3
Sugar beets    x1     x2     x3
Cotton         x4     x5     x6
Sorghum        x7     x8     x9

Since the measure of effectiveness Z is the total net return, the resulting linear programming model for this problem is

Maximize Z = 1,000(x1 + x2 + x3) + 750(x4 + x5 + x6) + 250(x7 + x8 + x9),

subject to the following constraints:

1. Usable land for each kibbutz:

   x1 + x4 + x7 ≤ 400
   x2 + x5 + x8 ≤ 600
   x3 + x6 + x9 ≤ 300

2. Water allocation for each kibbutz:

   3x1 + 2x4 + x7 ≤ 600
   3x2 + 2x5 + x8 ≤ 800
   3x3 + 2x6 + x9 ≤ 375

3. Total acreage for each crop:

   x1 + x2 + x3 ≤ 600
   x4 + x5 + x6 ≤ 500
   x7 + x8 + x9 ≤ 325

4. Equal proportion of land planted:

   (x1 + x4 + x7)/400 = (x2 + x5 + x8)/600
   (x2 + x5 + x8)/600 = (x3 + x6 + x9)/300
   (x3 + x6 + x9)/300 = (x1 + x4 + x7)/400

5. Nonnegativity:

   xj ≥ 0, for j = 1, 2, . . . , 9.
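As a numerical check, the model can be handed to a solver once the equal-proportion equations are multiplied through by their denominators (the first becomes 3(x1 + x4 + x7) − 2(x2 + x5 + x8) = 0, and so on). A minimal sketch, assuming SciPy is available; since linprog minimizes, the net-return objective is negated:

```python
from scipy.optimize import linprog

# Variables x1..x9 as in Table 3.10 (sugar beets, cotton, sorghum at kibbutzim 1-3).
c = [-1000, -1000, -1000, -750, -750, -750, -250, -250, -250]  # negated net returns

A_ub = [
    # usable land per kibbutz
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
    # water allocation per kibbutz (3, 2, 1 acre-feet per acre of each crop)
    [3, 0, 0, 2, 0, 0, 1, 0, 0],
    [0, 3, 0, 0, 2, 0, 0, 1, 0],
    [0, 0, 3, 0, 0, 2, 0, 0, 1],
    # crop quotas
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
]
b_ub = [400, 600, 300, 600, 800, 375, 600, 500, 325]

# Equal-proportion equations with denominators cleared
# (any one of the three is redundant, so only two are included):
A_eq = [
    [3, -2, 0, 3, -2, 0, 3, -2, 0],  # 3*(kibbutz 1 planted) = 2*(kibbutz 2 planted)
    [0, 1, -2, 0, 1, -2, 0, 1, -2],  # (kibbutz 2 planted) = 2*(kibbutz 3 planted)
]
b_eq = [0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 9)
print(-res.fun)  # maximum total net return
```

The solver confirms the maximum total net return of $633,333.33 found by the simplex method.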
This completes the model, except that the equality constraints are not yet in an appropriate form for a linear programming model because some of the variables are on the right-hand side. Hence, their final form5 is

3(x1 + x4 + x7) − 2(x2 + x5 + x8) = 0
(x2 + x5 + x8) − 2(x3 + x6 + x9) = 0
4(x3 + x6 + x9) − 3(x1 + x4 + x7) = 0

The Coordinating Technical Office formulated this model and then applied the simplex method (developed in Chap. 4) to find an optimal solution

(x1, x2, x3, x4, x5, x6, x7, x8, x9) = (133 1/3, 100, 25, 100, 250, 150, 0, 0, 0),

as shown in Table 3.11. The resulting optimal value of the objective function is Z = 633,333 1/3, that is, a total net return of $633,333.33.

■ TABLE 3.11 Optimal solution for the Southern Confederation of Kibbutzim problem

               Best Allocation (Acres)
               Kibbutz
Crop           1          2      3
Sugar beets    133 1/3    100    25
Cotton         100        250    150
Sorghum        0          0      0

5 Actually, any one of these equations is redundant and can be deleted if desired. Also, because of these equations, any two of the usable land constraints also could be deleted because they automatically would be satisfied when both the remaining usable land constraint and these equations are satisfied. However, no harm is done (except a little more computational effort) by including unnecessary constraints, so you don't need to worry about identifying and deleting them in models you formulate.

This problem is another example (like the Wyndor problem) of a resource-allocation problem. The first three categories of constraints all are resource constraints. The fourth category then adds some side constraints.

Controlling Air Pollution

The NORI & LEETS CO., one of the major producers of steel in its part of the world, is located in the city of Steeltown and is the only large employer there. Steeltown has grown and prospered along with the company, which now employs nearly 50,000 residents.
Therefore, the attitude of the townspeople always has been, "What's good for Nori & Leets is good for the town." However, this attitude is now changing; uncontrolled air pollution from the company's furnaces is ruining the appearance of the city and endangering the health of its residents. A recent stockholders' revolt resulted in the election of a new enlightened board of directors for the company. These directors are determined to follow socially responsible policies, and they have been discussing with Steeltown city officials and citizens' groups what to do about the air pollution problem. Together they have worked out stringent air quality standards for the Steeltown airshed.

The three main types of pollutants in this airshed are particulate matter, sulfur oxides, and hydrocarbons. The new standards require that the company reduce its annual emission of these pollutants by the amounts shown in Table 3.12. The board of directors has instructed management to have the engineering staff determine how to achieve these reductions in the most economical way.

The steelworks has two primary sources of pollution, namely, the blast furnaces for making pig iron and the open-hearth furnaces for changing iron into steel. In both cases the engineers have decided that the most effective types of abatement methods are (1) increasing the height of the smokestacks,6 (2) using filter devices (including gas traps) in the smokestacks, and (3) including cleaner, high-grade materials among the fuels for the furnaces. Each of these methods has a technological limit on how heavily it can be used (e.g., a maximum feasible increase in the height of the smokestacks), but there also is considerable flexibility for using the method at a fraction of its technological limit. Table 3.13 shows how much emission (in millions of pounds per year) can be eliminated from each type of furnace by fully using any abatement method to its technological limit.
For purposes of analysis, it is assumed that each method also can be used less fully to achieve any fraction of the emission-rate reductions shown in this table. Furthermore, the fractions can be different for blast furnaces and for open-hearth furnaces. For either type of furnace, the emission reduction achieved by each method is not substantially affected by whether the other methods also are used.

■ TABLE 3.12 Clean air standards for the Nori & Leets Co.

Pollutant        Required Reduction in Annual Emission Rate (Million Pounds)
Particulates     60
Sulfur oxides    150
Hydrocarbons     125

■ TABLE 3.13 Reduction in emission rate (in millions of pounds per year) from the maximum feasible use of an abatement method for Nori & Leets Co.

                 Taller Smokestacks          Filters                     Better Fuels
Pollutant        Blast      Open-Hearth      Blast      Open-Hearth      Blast      Open-Hearth
                 Furnaces   Furnaces         Furnaces   Furnaces         Furnaces   Furnaces
Particulates     12         9                25         20               17         13
Sulfur oxides    35         42               18         31               56         49
Hydrocarbons     37         53               28         24               29         20

After these data were developed, it became clear that no single method by itself could achieve all the required reductions. On the other hand, combining all three methods at full capacity on both types of furnaces (which would be prohibitively expensive if the company's products are to remain competitively priced) is much more than adequate. Therefore, the engineers concluded that they would have to use some combination of the methods, perhaps with fractional capacities, based upon the relative costs.

6 This particular abatement method has become a controversial one. Because its effect is to reduce ground-level pollution by spreading emissions over a greater distance, environmental groups contend that this creates more acid rain by keeping sulfur oxides in the air longer. Consequently, the U.S. Environmental Protection Agency adopted new rules in 1985 to remove incentives for using tall smokestacks.
Furthermore, because of the differences between the blast and the open-hearth furnaces, the two types probably should not use the same combination. An analysis was conducted to estimate the total annual cost that would be incurred by each abatement method. A method’s annual cost includes increased operating and maintenance expenses as well as reduced revenue due to any loss in the efficiency of the production process caused by using the method. The other major cost is the start-up cost (the initial capital outlay) required to install the method. To make this one-time cost commensurable with the ongoing annual costs, the time value of money was used to calculate the annual expenditure (over the expected life of the method) that would be equivalent in value to this start-up cost. This analysis led to the total annual cost estimates (in millions of dollars) given in Table 3.14 for using the methods at their full abatement capacities. It also was determined that the cost of a method being used at a lower level is roughly proportional to the fraction of the abatement capacity given in Table 3.13 that is achieved. Thus, for any given fraction achieved, the total annual cost would be roughly that fraction of the corresponding quantity in Table 3.14. The stage now was set to develop the general framework of the company’s plan for pollution abatement. This plan specifies which types of abatement methods will be used and at what fractions of their abatement capacities for (1) the blast furnaces and (2) the open-hearth furnaces. Because of the combinatorial nature of the problem of finding a plan that satisfies the requirements with the smallest possible cost, an OR team was formed to solve the problem. The team adopted a linear programming approach, formulating the model summarized next. Formulation as a Linear Programming Problem. This problem has six decision variables xj, j = 1, 2, . . . 
, 6, each representing the use of one of the three abatement methods for one of the two types of furnaces, expressed as a fraction of the abatement capacity (so xj cannot exceed 1). The ordering of these variables is shown in Table 3.15.

■ TABLE 3.14 Total annual cost from the maximum feasible use of an abatement method for Nori & Leets Co. ($ millions)

Abatement Method      Blast Furnaces    Open-Hearth Furnaces
Taller smokestacks    8                 10
Filters               7                 6
Better fuels          11                9

■ TABLE 3.15 Decision variables (fraction of the maximum feasible use of an abatement method) for Nori & Leets Co.

Abatement Method      Blast Furnaces    Open-Hearth Furnaces
Taller smokestacks    x1                x2
Filters               x3                x4
Better fuels          x5                x6

Because the objective is to minimize total cost while satisfying the emission reduction requirements, the data in Tables 3.12, 3.13, and 3.14 yield the following model:

Minimize Z = 8x1 + 10x2 + 7x3 + 6x4 + 11x5 + 9x6,

subject to the following constraints:

1. Emission reduction:

   12x1 +  9x2 + 25x3 + 20x4 + 17x5 + 13x6 ≥  60
   35x1 + 42x2 + 18x3 + 31x4 + 56x5 + 49x6 ≥ 150
   37x1 + 53x2 + 28x3 + 24x4 + 29x5 + 20x6 ≥ 125

2. Technological limit:

   xj ≤ 1, for j = 1, 2, . . . , 6

3. Nonnegativity:

   xj ≥ 0, for j = 1, 2, . . . , 6.

The OR team used this model7 to find a minimum-cost plan

(x1, x2, x3, x4, x5, x6) = (1, 0.623, 0.343, 1, 0.048, 1),

with Z = 32.16 (total annual cost of $32.16 million). Sensitivity analysis then was conducted to explore the effect of making possible adjustments in the air standards given in Table 3.12, as well as to check on the effect of any inaccuracies in the cost data given in Table 3.14. (This story is continued in Case 7.1 at the end of Chap. 7.) Next came detailed planning and managerial review. Soon after, this program for controlling air pollution was fully implemented by the company, and the citizens of Steeltown breathed deep (cleaner) sighs of relief.

Like the radiation therapy problem, this is another example of a cost–benefit–trade-off problem.
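This model also can be verified in a few lines of code. A sketch, assuming SciPy is available; the ≥ emission constraints are negated into ≤ form, and the technological limits become simple variable bounds:

```python
from scipy.optimize import linprog

# x1..x6 as in Table 3.15, each a fraction in [0, 1] of an abatement capacity.
c = [8, 10, 7, 6, 11, 9]  # annual cost ($ millions) at full use of each method

# Emission-reduction requirements (>= rows negated for linprog's <= form):
A_ub = [[-12, -9, -25, -20, -17, -13],   # particulates >= 60
        [-35, -42, -18, -31, -56, -49],  # sulfur oxides >= 150
        [-37, -53, -28, -24, -29, -20]]  # hydrocarbons >= 125
b_ub = [-60, -150, -125]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 6)
print(res.x.round(3), round(res.fun, 2))
```

This reproduces the minimum-cost plan reported above, (1, 0.623, 0.343, 1, 0.048, 1) with Z ≈ 32.16, up to rounding.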
The cost in this case is a monetary cost and the benefits are the various types of pollution abatement. The benefit constraint for each type of pollutant has the amount of abatement achieved on the left-hand side and the minimum acceptable level of abatement on the right-hand side.

7 An equivalent formulation can express each decision variable in natural units for its abatement method; for example, x1 and x2 could represent the number of feet that the heights of the smokestacks are increased.

Reclaiming Solid Wastes

The SAVE-IT COMPANY operates a reclamation center that collects four types of solid waste materials and treats them so that they can be amalgamated into a salable product. (Treating and amalgamating are separate processes.) Three different grades of this product can be made (see the first column of Table 3.16), depending upon the mix of the materials used. Although there is some flexibility in the mix for each grade, quality standards may specify the minimum or maximum amount allowed for the proportion of a material in the product grade. (This proportion is the weight of the material expressed as a percentage of the total weight for the product grade.) For each of the two higher grades, a fixed percentage is specified for one of the materials. These specifications are given in Table 3.16 along with the cost of amalgamation and the selling price for each grade.

■ TABLE 3.16 Product data for Save-It Co.

Grade    Specification                               Amalgamation Cost    Selling Price
                                                     per Pound ($)        per Pound ($)
A        Material 1: Not more than 30% of total      3.00                 8.50
         Material 2: Not less than 40% of total
         Material 3: Not more than 50% of total
         Material 4: Exactly 20% of total
B        Material 1: Not more than 50% of total      2.50                 7.00
         Material 2: Not less than 10% of total
         Material 4: Exactly 10% of total
C        Material 1: Not more than 70% of total      2.00                 5.50
The reclamation center collects its solid waste materials from regular sources and so is normally able to maintain a steady rate for treating them. Table 3.17 gives the quantities available for collection and treatment each week, as well as the cost of treatment, for each type of material. The Save-It Co. is solely owned by Green Earth, an organization devoted to dealing with environmental issues, so Save-It’s profits are used to help support Green Earth’s activities. Green Earth has raised contributions and grants, amounting to $30,000 per week, to be used exclusively to cover the entire treatment cost for the solid waste materials. The board of directors of Green Earth has instructed the management of Save-It to divide this money among the materials in such a way that at least half of the amount available of each material is actually collected and treated. These additional restrictions are listed in Table 3.17. Within the restrictions specified in Tables 3.16 and 3.17, management wants to determine the amount of each product grade to produce and the exact mix of materials to be used for each grade. The objective is to maximize the net weekly profit (total sales income minus total amalgamation cost), exclusive of the fixed treatment cost of $30,000 per week that is being covered by gifts and grants. Formulation as a Linear Programming Problem. Before attempting to construct a linear programming model, we must give careful consideration to the proper definition of the decision variables. Although this definition is often obvious, it sometimes becomes the crux of the entire formulation. After clearly identifying what information is really desired and the most convenient form for conveying this information by means of decision variables, we can develop the objective function and the constraints on the values of these decision variables. ■ TABLE 3.17 Solid waste materials data for the Save-It Co. 
Material    Pounds per Week Available    Treatment Cost per Pound ($)
1           3,000                        3.00
2           2,000                        6.00
3           4,000                        4.00
4           1,000                        5.00

Additional Restrictions
1. For each material, at least half of the pounds per week available should be collected and treated.
2. $30,000 per week should be used to treat these materials.

In this particular problem, the decisions to be made are well defined, but the appropriate means of conveying this information may require some thought. (Try it and see if you first obtain the following inappropriate choice of decision variables.)

Because one set of decisions is the amount of each product grade to produce, it would seem natural to define one set of decision variables accordingly. Proceeding tentatively along this line, we define

yi = number of pounds of product grade i produced per week (i = A, B, C).

The other set of decisions is the mix of materials for each product grade. This mix is identified by the proportion of each material in the product grade, which would suggest defining the other set of decision variables as

zij = proportion of material j in product grade i (i = A, B, C; j = 1, 2, 3, 4).

However, Table 3.17 gives both the treatment cost and the availability of the materials by quantity (pounds) rather than proportion, so it is this quantity information that needs to be recorded in some of the constraints. For material j (j = 1, 2, 3, 4),

Number of pounds of material j used per week = zAj yA + zBj yB + zCj yC.

For example, since Table 3.17 indicates that 3,000 pounds of material 1 is available per week, one constraint in the model would be

zA1 yA + zB1 yB + zC1 yC ≤ 3,000.

Unfortunately, this is not a legitimate linear programming constraint. The expression on the left-hand side is not a linear function because it involves products of variables. Therefore, a linear programming model cannot be constructed with these decision variables.
Fortunately, there is another way of defining the decision variables that will fit the linear programming format. (Do you see how to do it?) It is accomplished by merely replacing each product of the old decision variables by a single variable! In other words, define

xij = zij yi (for i = A, B, C; j = 1, 2, 3, 4)
    = number of pounds of material j allocated to product grade i per week,

and then we let the xij be the decision variables. Combining the xij in different ways yields the following quantities needed in the model (for i = A, B, C; j = 1, 2, 3, 4).

xi1 + xi2 + xi3 + xi4 = number of pounds of product grade i produced per week.
xAj + xBj + xCj = number of pounds of material j used per week.
xij / (xi1 + xi2 + xi3 + xi4) = proportion of material j in product grade i.

The fact that this last expression is a nonlinear function does not cause a complication. For example, consider the first specification for product grade A in Table 3.16 (the proportion of material 1 should not exceed 30 percent). This restriction gives the nonlinear constraint

xA1 / (xA1 + xA2 + xA3 + xA4) ≤ 0.3.

However, multiplying through both sides of this inequality by the denominator yields an equivalent constraint

xA1 ≤ 0.3(xA1 + xA2 + xA3 + xA4),

so

0.7xA1 − 0.3xA2 − 0.3xA3 − 0.3xA4 ≤ 0,

which is a legitimate linear programming constraint. With this adjustment, the three quantities given above lead directly to all the functional constraints of the model. The objective function is based on management's objective of maximizing net weekly profit (total sales income minus total amalgamation cost) from the three product grades. Thus, for each product grade, the profit per pound is obtained by subtracting the amalgamation cost given in the third column of Table 3.16 from the selling price in the fourth column. These differences provide the coefficients for the objective function.
Therefore, the complete linear programming model is

Maximize Z = 5.5(xA1 + xA2 + xA3 + xA4) + 4.5(xB1 + xB2 + xB3 + xB4)
           + 3.5(xC1 + xC2 + xC3 + xC4),

subject to the following constraints:

1. Mixture specifications (second column of Table 3.16):

   xA1 ≤ 0.3(xA1 + xA2 + xA3 + xA4)    (grade A, material 1)
   xA2 ≥ 0.4(xA1 + xA2 + xA3 + xA4)    (grade A, material 2)
   xA3 ≤ 0.5(xA1 + xA2 + xA3 + xA4)    (grade A, material 3)
   xA4 = 0.2(xA1 + xA2 + xA3 + xA4)    (grade A, material 4)
   xB1 ≤ 0.5(xB1 + xB2 + xB3 + xB4)    (grade B, material 1)
   xB2 ≥ 0.1(xB1 + xB2 + xB3 + xB4)    (grade B, material 2)
   xB4 = 0.1(xB1 + xB2 + xB3 + xB4)    (grade B, material 4)
   xC1 ≤ 0.7(xC1 + xC2 + xC3 + xC4)    (grade C, material 1).

2. Availability of materials (second column of Table 3.17):

   xA1 + xB1 + xC1 ≤ 3,000    (material 1)
   xA2 + xB2 + xC2 ≤ 2,000    (material 2)
   xA3 + xB3 + xC3 ≤ 4,000    (material 3)
   xA4 + xB4 + xC4 ≤ 1,000    (material 4).

3. Restrictions on amounts treated (right side of Table 3.17):

   xA1 + xB1 + xC1 ≥ 1,500    (material 1)
   xA2 + xB2 + xC2 ≥ 1,000    (material 2)
   xA3 + xB3 + xC3 ≥ 2,000    (material 3)
   xA4 + xB4 + xC4 ≥ 500      (material 4).

4. Restriction on treatment cost (right side of Table 3.17):

   3(xA1 + xB1 + xC1) + 6(xA2 + xB2 + xC2) + 4(xA3 + xB3 + xC3)
   + 5(xA4 + xB4 + xC4) = 30,000.

5. Nonnegativity constraints:

   xA1 ≥ 0, xA2 ≥ 0, . . . , xC4 ≥ 0.

■ TABLE 3.18 Optimal solution for the Save-It Co.
problem

         Pounds Used per Week (Material)                                   Number of Pounds
Grade    1                2                3                4              Produced per Week
A        412.3 (19.2%)    859.6 (40%)      447.4 (20.8%)    429.8 (20%)    2,149
B        2,587.7 (50%)    517.5 (10%)      1,552.6 (30%)    517.5 (10%)    5,175
C        0                0                0                0              0
Total    3,000            1,377            2,000            947

This formulation completes the model, except that the constraints for the mixture specifications need to be rewritten in the proper form for a linear programming model by bringing all variables to the left-hand side and combining terms, as follows:

Mixture specifications:

 0.7xA1 − 0.3xA2 − 0.3xA3 − 0.3xA4 ≤ 0    (grade A, material 1)
−0.4xA1 + 0.6xA2 − 0.4xA3 − 0.4xA4 ≥ 0    (grade A, material 2)
−0.5xA1 − 0.5xA2 + 0.5xA3 − 0.5xA4 ≤ 0    (grade A, material 3)
−0.2xA1 − 0.2xA2 − 0.2xA3 + 0.8xA4 = 0    (grade A, material 4)
 0.5xB1 − 0.5xB2 − 0.5xB3 − 0.5xB4 ≤ 0    (grade B, material 1)
−0.1xB1 + 0.9xB2 − 0.1xB3 − 0.1xB4 ≥ 0    (grade B, material 2)
−0.1xB1 − 0.1xB2 − 0.1xB3 + 0.9xB4 = 0    (grade B, material 4)
 0.3xC1 − 0.7xC2 − 0.7xC3 − 0.7xC4 ≤ 0    (grade C, material 1).

An optimal solution for this model is shown in Table 3.18, and then these xij values are used to calculate the other quantities of interest given in the table. The resulting optimal value of the objective function is Z = 35,109.65 (a total weekly profit of $35,109.65).

The Save-It Co. problem is an example of a blending problem. The objective for a blending problem is to find the best blend of ingredients into final products to meet certain specifications. Some of the earliest applications of linear programming were for gasoline blending, where petroleum ingredients were blended to obtain various grades of gasoline. Other blending problems involve such final products as steel, fertilizer, and animal feed. Such problems have a wide variety of constraints (some are resource constraints, some are benefit constraints, and some are neither), so blending problems do not fall into either of the two broad categories (resource-allocation problems and cost–benefit–trade-off problems) described earlier in this section.
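The full model translates mechanically into solver form. A sketch, assuming SciPy is available; each mixture specification is entered with its right-hand side cleared, ≥ rows are negated into ≤ form, and the `spec` helper (our own construction, not a library function) places a grade's coefficients in the right block of the 12-variable vector:

```python
from scipy.optimize import linprog

# x = (xA1..xA4, xB1..xB4, xC1..xC4): pounds of material j in grade i per week.
# Profit per pound: grade A 8.50-3.00, B 7.00-2.50, C 5.50-2.00 (negated to minimize).
c = [-5.5] * 4 + [-4.5] * 4 + [-3.5] * 4

def spec(grade, coeffs):
    """Build a 12-entry row with coeffs placed in one grade's 4-variable block."""
    row = [0.0] * 12
    row[grade * 4:grade * 4 + 4] = coeffs
    return row

A_ub = [
    spec(0, [0.7, -0.3, -0.3, -0.3]),   # A: material 1 <= 30%
    spec(0, [0.4, -0.6, 0.4, 0.4]),     # A: material 2 >= 40% (negated)
    spec(0, [-0.5, -0.5, 0.5, -0.5]),   # A: material 3 <= 50%
    spec(1, [0.5, -0.5, -0.5, -0.5]),   # B: material 1 <= 50%
    spec(1, [0.1, -0.9, 0.1, 0.1]),     # B: material 2 >= 10% (negated)
    spec(2, [0.3, -0.7, -0.7, -0.7]),   # C: material 1 <= 70%
]
b_ub = [0.0] * 6

for j, avail in enumerate([3000, 2000, 4000, 1000]):
    row = [0.0] * 12
    for i in range(3):
        row[i * 4 + j] = 1.0
    A_ub.append(row)                    # total of material j <= availability
    b_ub.append(avail)
    A_ub.append([-v for v in row])      # total of material j >= half of availability
    b_ub.append(-avail / 2)

A_eq = [
    spec(0, [-0.2, -0.2, -0.2, 0.8]),   # A: material 4 exactly 20%
    spec(1, [-0.1, -0.1, -0.1, 0.9]),   # B: material 4 exactly 10%
]
b_eq = [0.0, 0.0]

cost_row = [0.0] * 12                   # treatment budget: $3, $6, $4, $5 per pound
for i in range(3):
    for j, t in enumerate([3, 6, 4, 5]):
        cost_row[i * 4 + j] = t
A_eq.append(cost_row)
b_eq.append(30000)                      # must spend exactly $30,000 per week

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 12)
print(round(-res.fun, 2))  # maximum net weekly profit
```

The reported objective value should match the $35,109.65 weekly profit given above, with the xij values agreeing with Table 3.18 up to rounding.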
Personnel Scheduling

UNION AIRWAYS is adding more flights to and from its hub airport, and so it needs to hire additional customer service agents. However, it is not clear just how many more should be hired. Management recognizes the need for cost control while also consistently providing a satisfactory level of service to customers. Therefore, an OR team is studying how to schedule the agents to provide satisfactory service with the smallest personnel cost.

Based on the new schedule of flights, an analysis has been made of the minimum number of customer service agents that need to be on duty at different times of the day to provide a satisfactory level of service. The rightmost column of Table 3.19 shows the number of agents needed for the time periods given in the first column. The other entries in this table reflect one of the provisions in the company's current contract with the union that represents the customer service agents. The provision is that each agent work an 8-hour shift 5 days per week, and the authorized shifts are

Shift 1: 6:00 A.M. to 2:00 P.M.
Shift 2: 8:00 A.M. to 4:00 P.M.
Shift 3: Noon to 8:00 P.M.
Shift 4: 4:00 P.M. to midnight
Shift 5: 10:00 P.M. to 6:00 A.M.

Checkmarks in the main body of Table 3.19 show the hours covered by the respective shifts. Because some shifts are less desirable than others, the wages specified in the contract differ by shift. For each shift, the daily compensation (including benefits) for each agent is shown in the bottom row. The problem is to determine how many agents should be assigned to the respective shifts each day to minimize the total personnel cost for agents, based on this bottom row, while meeting (or surpassing) the service requirements given in the rightmost column.

Formulation as a Linear Programming Problem. Linear programming problems always involve finding the best mix of activity levels.
The key to formulating this particular problem is to recognize the nature of the activities. Activities correspond to shifts, where the level of each activity is the number of agents assigned to that shift. Thus, this problem involves finding the best mix of shift sizes. Since the decision variables always are the levels of the activities, the five decision variables here are

xj = number of agents assigned to shift j,   for j = 1, 2, 3, 4, 5.

The main restrictions on the values of these decision variables are that the number of agents working during each time period must satisfy the minimum requirement given in the rightmost column of Table 3.19.

■ TABLE 3.19 Data for the Union Airways personnel scheduling problem

                              Time Periods Covered by Shift         Minimum Number
Time Period                   1      2      3      4      5         of Agents Needed
6:00 A.M. to 8:00 A.M.        ✔                                     48
8:00 A.M. to 10:00 A.M.       ✔      ✔                              79
10:00 A.M. to noon            ✔      ✔                              65
Noon to 2:00 P.M.             ✔      ✔      ✔                       87
2:00 P.M. to 4:00 P.M.               ✔      ✔                       64
4:00 P.M. to 6:00 P.M.                      ✔      ✔                73
6:00 P.M. to 8:00 P.M.                      ✔      ✔                82
8:00 P.M. to 10:00 P.M.                            ✔                43
10:00 P.M. to midnight                             ✔      ✔         52
Midnight to 6:00 A.M.                                     ✔         15
Daily cost per agent          $170   $160   $175   $180   $195

For example, for 2:00 P.M. to 4:00 P.M., the total number of agents assigned to the shifts that cover this time period (shifts 2 and 3) must be at least 64, so

x2 + x3 ≥ 64

is the functional constraint for this time period. Because the objective is to minimize the total cost of the agents assigned to the five shifts, the coefficients in the objective function are given by the last row of Table 3.19. Therefore, the complete linear programming model is

Minimize   Z = 170x1 + 160x2 + 175x3 + 180x4 + 195x5,

subject to

x1                          ≥ 48    (6–8 A.M.)
x1 + x2                     ≥ 79    (8–10 A.M.)
x1 + x2                     ≥ 65    (10 A.M. to noon)
x1 + x2 + x3                ≥ 87    (Noon–2 P.M.)
     x2 + x3                ≥ 64    (2–4 P.M.)
          x3 + x4           ≥ 73    (4–6 P.M.)
          x3 + x4           ≥ 82    (6–8 P.M.)
               x4           ≥ 43    (8–10 P.M.)
               x4 + x5      ≥ 52    (10 P.M.–midnight)
                    x5      ≥ 15    (Midnight–6 A.M.)

and

xj ≥ 0,   for j = 1, 2, 3, 4, 5.
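This model can be checked with any general-purpose LP solver. A minimal sketch using SciPy's linprog (an assumption; the spreadsheet Solver covered later in this chapter works equally well). Since linprog minimizes subject to A_ub x ≤ b_ub, each ≥ constraint is multiplied by −1:

```python
# Re-solving the Union Airways model with a general-purpose LP solver.
from scipy.optimize import linprog

c = [170, 160, 175, 180, 195]      # daily cost per agent for shifts 1-5

# One row per time period; a 1 marks a shift that covers the period.
cover = [
    [1, 0, 0, 0, 0],   # 6-8 A.M.
    [1, 1, 0, 0, 0],   # 8-10 A.M.
    [1, 1, 0, 0, 0],   # 10 A.M.-noon
    [1, 1, 1, 0, 0],   # noon-2 P.M.
    [0, 1, 1, 0, 0],   # 2-4 P.M.
    [0, 0, 1, 1, 0],   # 4-6 P.M.
    [0, 0, 1, 1, 0],   # 6-8 P.M.
    [0, 0, 0, 1, 0],   # 8-10 P.M.
    [0, 0, 0, 1, 1],   # 10 P.M.-midnight
    [0, 0, 0, 0, 1],   # midnight-6 A.M.
]
need = [48, 79, 65, 87, 64, 73, 82, 43, 52, 15]

res = linprog(c,
              A_ub=[[-a for a in row] for row in cover],  # negate >= rows
              b_ub=[-n for n in need],
              bounds=[(0, None)] * 5)
# res.x recovers (48, 31, 39, 43, 15) and res.fun the $30,610 daily cost.
```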
With a keen eye, you might have noticed that the third constraint, x1 + x2 ≥ 65, actually is not necessary because the second constraint, x1 + x2 ≥ 79, ensures that x1 + x2 will be larger than 65. Thus, x1 + x2 ≥ 65 is a redundant constraint that can be deleted. Similarly, the sixth constraint, x3 + x4 ≥ 73, also is a redundant constraint because the seventh constraint is x3 + x4 ≥ 82. (In fact, three of the nonnegativity constraints, x1 ≥ 0, x4 ≥ 0, and x5 ≥ 0, also are redundant constraints because of the first, eighth, and tenth functional constraints: x1 ≥ 48, x4 ≥ 43, and x5 ≥ 15. However, no computational advantage is gained by deleting these three nonnegativity constraints.)

The optimal solution for this model is (x1, x2, x3, x4, x5) = (48, 31, 39, 43, 15). This yields Z = 30,610, that is, a total daily personnel cost of $30,610.

This problem is an example where the divisibility assumption of linear programming actually is not satisfied. The number of agents assigned to each shift needs to be an integer. Strictly speaking, the model should have an additional constraint for each decision variable specifying that the variable must have an integer value. Adding these constraints would convert the linear programming model to an integer programming model (the topic of Chap. 12). Without these constraints, the optimal solution given above turned out to have integer values anyway, so no harm was done by not including the constraints. (The form of the functional constraints made this outcome a likely one.) If some of the variables had turned out to be noninteger, the easiest approach would have been to round up to integer values. (Rounding up is feasible for this example because all the functional constraints are in ≥ form with nonnegative coefficients.)
Rounding up does not ensure obtaining an optimal solution for the integer programming model, but the error introduced by rounding up such large numbers would be negligible for most practical situations. Alternatively, integer programming techniques described in Chap. 12 could be used to solve exactly for an optimal solution with integer values.

Note that all of the functional constraints in this problem are benefit constraints. The left-hand side of each of these constraints represents the benefit of having that number of agents working during that time period and the right-hand side represents the minimum acceptable level of that benefit. Since the objective is to minimize the total cost of the agents, subject to the benefit constraints, this is another example (like the radiation therapy and air pollution examples) of a cost–benefit–trade-off problem.

Distributing Goods through a Distribution Network

The Problem. The DISTRIBUTION UNLIMITED CO. will be producing the same new product at two different factories, and then the product must be shipped to two warehouses, where either factory can supply either warehouse. The distribution network available for shipping this product is shown in Fig. 3.13, where F1 and F2 are the two factories, W1 and W2 are the two warehouses, and DC is a distribution center. The amounts to be shipped from F1 and F2 are shown to their left, and the amounts to be received at W1 and W2 are shown to their right. Each arrow represents a feasible shipping lane. Thus, F1 can ship directly to W1 and has three possible routes (F1 → DC → W2, F1 → F2 → DC → W2, and F1 → W1 → W2) for shipping to W2. Factory F2 has just one route to W2 (F2 → DC → W2) and one to W1 (F2 → DC → W2 → W1). The cost per unit shipped through each shipping lane is shown next to the arrow. Also shown next to F1 → F2 and DC → W2 are the maximum amounts that can be shipped through these lanes.
The other lanes have sufficient shipping capacity to handle everything these factories can send. The decision to be made concerns how much to ship through each shipping lane. The objective is to minimize the total shipping cost.

Formulation as a Linear Programming Problem. With seven shipping lanes, we need seven decision variables (xF1-F2, xF1-DC, xF1-W1, xF2-DC, xDC-W2, xW1-W2, xW2-W1) to represent the amounts shipped through the respective lanes. There are several restrictions on the values of these variables. In addition to the usual nonnegativity constraints, there are two upper-bound constraints, xF1-F2 ≤ 10 and xDC-W2 ≤ 80, imposed by the limited shipping capacities for the two lanes, F1 → F2 and DC → W2. All the other restrictions arise from five net flow constraints, one for each of the five locations. These constraints have the following form.

Net flow constraint for each location:

    Amount shipped out − amount shipped in = required amount.

As indicated in Fig. 3.13, these required amounts are 50 for F1, 40 for F2, −30 for W1, and −60 for W2. What is the required amount for DC? All the units produced at the factories are ultimately needed at the warehouses, so any units shipped from the factories to the distribution center should be forwarded to the warehouses. Therefore, the total amount shipped from the distribution center to the warehouses should equal the total amount shipped from the factories to the distribution center. In other words, the difference of these two shipping amounts (the required amount for the net flow constraint) should be zero.

Since the objective is to minimize the total shipping cost, the coefficients for the objective function come directly from the unit shipping costs given in Fig. 3.13. Therefore, by using money units of hundreds of dollars in this objective function, the complete linear programming model is
■ FIGURE 3.13 The distribution network for Distribution Unlimited Co. [Network figure: 50 units produced at F1 and 40 units produced at F2; 30 units needed at W1 and 60 units needed at W2; unit shipping costs of $200 (F1 → F2), $400 (F1 → DC), $900 (F1 → W1), $300 (F2 → DC), $100 (DC → W2), $300 (W1 → W2), and $200 (W2 → W1); maximums of 10 units through F1 → F2 and 80 units through DC → W2.]

Minimize   Z = 2xF1-F2 + 4xF1-DC + 9xF1-W1 + 3xF2-DC + xDC-W2 + 3xW1-W2 + 2xW2-W1,

subject to the following constraints:

1. Net flow constraints:

    xF1-F2 + xF1-DC + xF1-W1                               =  50   (factory 1)
   −xF1-F2          + xF2-DC                               =  40   (factory 2)
            −xF1-DC − xF2-DC + xDC-W2                      =   0   (distribution center)
                    −xF1-W1          + xW1-W2 − xW2-W1     = −30   (warehouse 1)
                             −xDC-W2 − xW1-W2 + xW2-W1     = −60   (warehouse 2)

2. Upper-bound constraints:

    xF1-F2 ≤ 10,   xDC-W2 ≤ 80

3. Nonnegativity constraints:

    xF1-F2 ≥ 0, xF1-DC ≥ 0, xF1-W1 ≥ 0, xF2-DC ≥ 0, xDC-W2 ≥ 0, xW1-W2 ≥ 0, xW2-W1 ≥ 0.

You will see this problem again in Sec. 10.6, where we focus on linear programming problems of this type (called the minimum cost flow problem). In Sec. 10.7, we will solve for its optimal solution:

    xF1-F2 = 0, xF1-DC = 40, xF1-W1 = 10, xF2-DC = 40, xDC-W2 = 80, xW1-W2 = 0, xW2-W1 = 20.

The resulting total shipping cost is $49,000.

This problem does not fit into any of the categories of linear programming problems introduced so far. Instead, it is a fixed-requirements problem because its main constraints (the net flow constraints) all are fixed-requirement constraints. Because they are equality constraints, each of these constraints imposes the fixed requirement that the net flow out of that location is required to equal a certain fixed amount. Chapters 9 and 10 will focus on linear programming problems that fall into this new category of fixed-requirements problems.

■ 3.5 FORMULATING AND SOLVING LINEAR PROGRAMMING MODELS ON A SPREADSHEET

Spreadsheet software, such as Excel and its Solver, is a popular tool for analyzing and solving small linear programming problems. The main features of a linear programming model, including all its parameters, can be easily entered onto a spreadsheet. However, spreadsheet software can do much more than just display data.
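Before moving on, the Distribution Unlimited model above can also be cross-checked with a general-purpose LP solver. A sketch using SciPy's linprog (an assumption; any LP solver would do), with the two lane capacities expressed as variable bounds:

```python
# Cross-checking the minimum cost flow model with SciPy.
from scipy.optimize import linprog

# Variable order: F1-F2, F1-DC, F1-W1, F2-DC, DC-W2, W1-W2, W2-W1,
# with costs in hundreds of dollars, as in the objective function above.
c = [2, 4, 9, 3, 1, 3, 2]

# Net flow constraints: (amount shipped out) - (amount shipped in) = required.
A_eq = [
    [ 1,  1,  1,  0,  0,  0,  0],   # factory 1
    [-1,  0,  0,  1,  0,  0,  0],   # factory 2
    [ 0, -1,  0, -1,  1,  0,  0],   # distribution center
    [ 0,  0, -1,  0,  0,  1, -1],   # warehouse 1
    [ 0,  0,  0,  0, -1, -1,  1],   # warehouse 2
]
b_eq = [50, 40, 0, -30, -60]

# Capacities of 10 and 80 on the F1->F2 and DC->W2 lanes become bounds.
bounds = [(0, 10), (0, None), (0, None), (0, None),
          (0, 80), (0, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
# res.fun is 490 hundreds of dollars, i.e., the $49,000 total shipping cost.
```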
If we include some additional information, the spreadsheet can be used to quickly analyze potential solutions. For example, a potential solution can be checked to see if it is feasible and what Z value (profit or cost) it achieves. Much of the power of the spreadsheet lies in its ability to immediately reveal the results of any changes made in the solution. In addition, Solver can quickly apply the simplex method to find an optimal solution for the model. We will describe how this is done in the latter part of this section.

To illustrate this process of formulating and solving linear programming models on a spreadsheet, we now return to the Wyndor example introduced in Sec. 3.1.

Formulating the Model on a Spreadsheet

Figure 3.14 displays the Wyndor problem by transferring the data from Table 3.1 onto a spreadsheet. (Columns E and F are being reserved for later entries described below.) We will refer to the cells showing the data as data cells. These cells are lightly shaded to distinguish them from other cells in the spreadsheet.8

■ FIGURE 3.14 The initial spreadsheet for the Wyndor problem after transferring the data from Table 3.1 into data cells.

 Row |  B                       |  C    |  D      | E | F |  G
  1  | Wyndor Glass Co. Product-Mix Problem
  3  |                          | Doors | Windows |   |   |
  4  | Profit Per Batch ($000)  |   3   |    5    |   |   |
  6  |                          | Hours Used Per Batch Produced | | Hours Available
  7  | Plant 1                  |   1   |    0    |   |   |   4
  8  | Plant 2                  |   0   |    2    |   |   |  12
  9  | Plant 3                  |   3   |    2    |   |   |  18

8Borders and cell shading can be added by using the borders menu button and the fill color menu button on the Home tab.

An Application Vignette

Welch's, Inc., is the world's largest processor of Concord and Niagara grapes, with net sales of $650 million in 2012. Such products as Welch's grape jelly and Welch's grape juice have been enjoyed by generations of American consumers. Every September, growers begin delivering grapes to processing plants that then press the raw grapes into juice. Time must pass before the grape juice is ready for conversion into finished jams, jellies, juices, and concentrates.
Deciding how to use the grape crop is a complex task given changing demand and uncertain crop quality and quantity. Typical decisions include what recipes to use for major product groups, the transfer of grape juice between plants, and the mode of transportation for these transfers. Because Welch’s lacked a formal system for optimizing raw material movement and the recipes used for production, an OR team developed a preliminary linear programming model. This was a large model with 8,000 decision variables that focused on the component level of detail. Small-scale testing proved that the model worked. To make the model more useful, the team then revised it by aggregating demand by product group rather than by component. This reduced its size to 324 decision variables and 361 functional constraints. The model then was incorporated into a spreadsheet. The company has run the continually updated version of this spreadsheet model each month since 1994 to provide senior management with information on the optimal logistics plan generated by the Solver. The savings from using and optimizing this model were approximately $150,000 in the first year alone. A major advantage of incorporating the linear programming model into a spreadsheet has been the ease of explaining the model to managers with differing levels of mathematical understanding. This has led to a widespread appreciation of the operations research approach for both this application and others. Source: E. W. Schuster and S. J. Allen, “Raw Material Management at Welch’s, Inc.,” Interfaces, 28(5): 13–24, Sept.–Oct. 1998. (A link to this article is provided on our website, www.mhhe.com/hillier.) You will see later that the spreadsheet is made easier to interpret by using range names. A range name is a descriptive name given to a block of cells that immediately identifies what is there. 
Thus, the data cells in the Wyndor problem are given the range names ProfitPerBatch (C4:D4), HoursUsedPerBatchProduced (C7:D9), and HoursAvailable (G7:G9). Note that no spaces are allowed in a range name, so each new word begins with a capital letter. To enter a range name, first select the range of cells, then click in the name box on the left of the formula bar above the spreadsheet and type a name.

Three questions need to be answered to begin the process of using the spreadsheet to formulate a linear programming model for the problem.

1. What are the decisions to be made? For this problem, the necessary decisions are the production rates (number of batches produced per week) for the two new products.
2. What are the constraints on these decisions? The constraints here are that the number of hours of production time used per week by the two products in the respective plants cannot exceed the number of hours available.
3. What is the overall measure of performance for these decisions? Wyndor's overall measure of performance is the total profit per week from the two products, so the objective is to maximize this quantity.

Figure 3.15 shows how these answers can be incorporated into the spreadsheet. Based on the first answer, the production rates of the two products are placed in cells C12 and D12 to locate them in the columns for these products just under the data cells. Since we don't know yet what these production rates should be, they are just entered as zeroes at this point. (Actually, any trial solution can be entered, although negative production rates should be excluded since they are impossible.) Later, these numbers will be changed while seeking the best mix of production rates. Therefore, these cells containing the decisions to be made are called changing cells. To highlight the changing cells, they are shaded and have a border.
(In the spreadsheet files contained in OR Courseware, the changing cells appear in bright yellow on a color monitor.) The changing cells are given the range name BatchesProduced (C12:D12).

■ FIGURE 3.15 The complete spreadsheet for the Wyndor problem with an initial trial solution (both production rates equal to zero) entered into the changing cells (C12 and D12).

 Row |  B                       |  C    |  D      |  E  | F  |  G
  1  | Wyndor Glass Co. Product-Mix Problem
  3  |                          | Doors | Windows |     |    |
  4  | Profit Per Batch ($000)  |   3   |    5    |     |    |
  6  |                          | Hours Used Per Batch Produced | Hours Used |  | Hours Available
  7  | Plant 1                  |   1   |    0    |  0  | <= |   4
  8  | Plant 2                  |   0   |    2    |  0  | <= |  12
  9  | Plant 3                  |   3   |    2    |  0  | <= |  18
 11  |                          | Doors | Windows |     |    | Total Profit ($000)
 12  | Batches Produced         |   0   |    0    |     |    |   0

Using the answer to question 2, the total number of hours of production time used per week by the two products in the respective plants is entered in cells E7, E8, and E9, just to the right of the corresponding data cells. The Excel equations for these three cells are

E7 = C7*C12 + D7*D12
E8 = C8*C12 + D8*D12
E9 = C9*C12 + D9*D12

where each asterisk denotes multiplication. Since each of these cells provides output that depends on the changing cells (C12 and D12), they are called output cells.

Notice that each of the equations for the output cells involves the sum of two products. There is a function in Excel called SUMPRODUCT that will sum up the product of each of the individual terms in two different ranges of cells when the two ranges have the same number of rows and the same number of columns. Each product being summed is the product of a term in the first range and the term in the corresponding location in the second range. For example, consider the two ranges, C7:D7 and C12:D12, so that each range has one row and two columns. In this case, SUMPRODUCT(C7:D7, C12:D12) takes each of the individual terms in the range C7:D7, multiplies them by the corresponding term in the range C12:D12, and then sums up these individual products, as shown in the first equation above.
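Each SUMPRODUCT above is a dot product of one row of the HoursUsedPerBatchProduced table with the changing cells, so the whole Hours Used column amounts to a matrix-vector product. A NumPy illustration (outside the spreadsheet, purely to show the arithmetic):

```python
# The three output-cell formulas as one matrix-vector product.
import numpy as np

hours_per_batch = np.array([
    [1, 0],   # Plant 1 (row C7:D7)
    [0, 2],   # Plant 2 (row C8:D8)
    [3, 2],   # Plant 3 (row C9:D9)
])
batches_produced = np.array([0, 0])   # the initial trial solution in C12:D12

hours_used = hours_per_batch @ batches_produced   # one SUMPRODUCT per plant
# With the trial solution (0, 0) every output cell is 0; for the rates
# (2, 6) found later, the same product gives the column (2, 12, 18).
```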
Using the range name BatchesProduced (C12:D12), the formula becomes SUMPRODUCT(C7:D7, BatchesProduced). Although optional with such short equations, this function is especially handy as a shortcut for entering longer equations.

Next, ≤ signs are entered in cells F7, F8, and F9 to indicate that each total value to their left cannot be allowed to exceed the corresponding number in column G. The spreadsheet still will allow you to enter trial solutions that violate the ≤ signs. However, these ≤ signs serve as a reminder that such trial solutions need to be rejected if no changes are made in the numbers in column G.

Finally, since the answer to the third question is that the overall measure of performance is the total profit from the two products, this profit (per week) is entered in cell G12. Much like the numbers in column E, it is the sum of products,

G12 = SUMPRODUCT(C4:D4, C12:D12)

Utilizing range names of TotalProfit (G12), ProfitPerBatch (C4:D4), and BatchesProduced (C12:D12), this equation becomes

TotalProfit = SUMPRODUCT(ProfitPerBatch, BatchesProduced)

This is a good example of the benefit of using range names for making the resulting equation easier to interpret. Rather than needing to refer to the spreadsheet to see what is in cells G12, C4:D4, and C12:D12, the range names immediately reveal what the equation is doing.

TotalProfit (G12) is a special kind of output cell. It is the particular cell that is being targeted to be made as large as possible when making decisions regarding production rates. Therefore, TotalProfit (G12) is referred to as the objective cell. The objective cell is shaded darker than the changing cells and is further distinguished by having a heavy border. (In the spreadsheet files contained in OR Courseware, this cell appears in orange on a color monitor.) The bottom of Fig.
3.16 summarizes all the formulas that need to be entered in the Hours Used column and in the Total Profit cell. Also shown is a summary of the range names (in alphabetical order) and the corresponding cell addresses.

■ FIGURE 3.16 The spreadsheet model for the Wyndor problem, including the formulas for the objective cell TotalProfit (G12) and the other output cells in column E, where the goal is to maximize the objective cell.

Hours Used (column E):
  E7 =SUMPRODUCT(C7:D7,BatchesProduced)
  E8 =SUMPRODUCT(C8:D8,BatchesProduced)
  E9 =SUMPRODUCT(C9:D9,BatchesProduced)

Total Profit (cell G12):
  G12 =SUMPRODUCT(ProfitPerBatch,BatchesProduced)

Range Name                   Cells
BatchesProduced              C12:D12
HoursAvailable               G7:G9
HoursUsed                    E7:E9
HoursUsedPerBatchProduced    C7:D9
ProfitPerBatch               C4:D4
TotalProfit                  G12

This completes the formulation of the spreadsheet model for the Wyndor problem. With this formulation, it becomes easy to analyze any trial solution for the production rates. Each time production rates are entered in cells C12 and D12, Excel immediately calculates the output cells for hours used and total profit. However, it is not necessary to use trial and error. We shall describe next how Solver can be used to quickly find the optimal solution.

Using Solver to Solve the Model

Excel includes a tool called Solver that uses the simplex method to find an optimal solution. ASPE (an Excel add-in available in your OR Courseware) includes a more advanced version of Solver that can also be used to solve this same problem. ASPE's Solver will be described in the next subsection.

To access the standard Solver for the first time, you need to install it.
Click the Office Button, choose Excel Options, then click on Add-Ins on the left side of the window, select Manage Excel Add-Ins at the bottom of the window, and then press the Go button. Make sure Solver is selected in the Add-Ins dialog box, and then it should appear on the Data tab. For Excel 2011 (for the Mac), choose Add-Ins from the Tools menu and make sure that Solver is selected.

To get started, an arbitrary trial solution has been entered in Fig. 3.16 by placing zeroes in the changing cells. Solver will then change these to the optimal values after solving the problem. This procedure is started by clicking on the Solver button on the Data tab. The Solver dialog box is shown in Fig. 3.17.

Before Solver can start its work, it needs to know exactly where each component of the model is located on the spreadsheet. The Solver dialog box is used to enter this information. You have the choice of typing the range names, typing in the cell addresses, or clicking on the cells in the spreadsheet.9 Figure 3.17 shows the result of using the first choice, so TotalProfit (rather than G12) has been entered for the objective cell and BatchesProduced (rather than the range C12:D12) has been entered for the changing cells. Since the goal is to maximize the objective cell, Max also has been selected.

■ FIGURE 3.17 This Solver dialog box specifies which cells in Fig. 3.16 are the objective cell and the changing cells. It also indicates that the objective cell is to be maximized.

9If you select cells by clicking on them, they will first appear in the dialog box with their cell addresses and with dollar signs (e.g., $C$9:$D$9). You can ignore the dollar signs. Solver will eventually replace both the cell addresses and the dollar signs with the corresponding range name (if a range name has been defined for the given cell addresses), but only after either adding a constraint or closing and reopening the Solver dialog box.
■ FIGURE 3.18 The Add Constraint dialog box after entering the set of constraints, HoursUsed (E7:E9) ≤ HoursAvailable (G7:G9), which specifies that cells E7, E8, and E9 in Fig. 3.16 are required to be less than or equal to cells G7, G8, and G9, respectively.

Next, the cells containing the functional constraints need to be specified. This is done by clicking on the Add button on the Solver dialog box. This brings up the Add Constraint dialog box shown in Fig. 3.18. The ≤ signs in cells F7, F8, and F9 of Fig. 3.16 are a reminder that the cells in HoursUsed (E7:E9) all need to be less than or equal to the corresponding cells in HoursAvailable (G7:G9). These constraints are specified for Solver by entering HoursUsed (or E7:E9) on the left-hand side of the Add Constraint dialog box and HoursAvailable (or G7:G9) on the right-hand side. For the sign between these two sides, there is a menu to choose between <= (less than or equal), =, or >= (greater than or equal), so <= has been chosen. This choice is needed even though ≤ signs were previously entered in column F of the spreadsheet because Solver only uses the functional constraints that are specified with the Add Constraint dialog box.

If there were more functional constraints to add, you would click on Add to bring up a new Add Constraint dialog box. However, since there are no more in this example, the next step is to click on OK to go back to the Solver dialog box.

Before asking Solver to solve the model, two more steps need to be taken. We need to tell Solver that non-negativity constraints are needed for the changing cells to reject negative production rates. We also need to specify that this is a linear programming problem so the simplex method can be used.
This is demonstrated in Figure 3.19, where the Make Unconstrained Variables Non-Negative option has been checked and the Solving Method chosen is Simplex LP (rather than GRG Nonlinear or Evolutionary, which are used for solving nonlinear problems). The Solver dialog box shown in this figure now summarizes the complete model.

Now you are ready to click on Solve in the Solver dialog box, which will start the process of solving the problem in the background. After a fraction of a second (for a small problem), Solver will then indicate the outcome. Typically, it will indicate that it has found an optimal solution, as specified in the Solver Results dialog box shown in Fig. 3.20. If the model has no feasible solutions or no optimal solution, the dialog box will indicate that instead by stating that "Solver could not find a feasible solution" or that "The Objective Cell values do not converge." The dialog box also presents the option of generating various reports. One of these (the Sensitivity Report) will be discussed later in Secs. 4.7 and 7.3.

After solving the model, Solver replaces the original numbers in the changing cells with the optimal numbers, as shown in Fig. 3.21. Thus, the optimal solution is to produce two batches of doors per week and six batches of windows per week, just as was found by the graphical method in Sec. 3.1. The spreadsheet also indicates the corresponding number in the objective cell (a total profit of $36,000 per week), as well as the numbers in the output cells HoursUsed (E7:E9).

■ FIGURE 3.19 The Solver dialog box after specifying the entire model in terms of the spreadsheet.

■ FIGURE 3.20 The Solver Results dialog box that indicates that an optimal solution has been found.

■ FIGURE 3.21 The spreadsheet obtained after solving the Wyndor problem.
 Row |  B                       |  C    |  D      |  E  | F  |  G
  1  | Wyndor Glass Co. Product-Mix Problem
  3  |                          | Doors | Windows |     |    |
  4  | Profit Per Batch ($000)  |   3   |    5    |     |    |
  6  |                          | Hours Used Per Batch Produced | Hours Used |  | Hours Available
  7  | Plant 1                  |   1   |    0    |  2  | <= |   4
  8  | Plant 2                  |   0   |    2    | 12  | <= |  12
  9  | Plant 3                  |   3   |    2    | 18  | <= |  18
 11  |                          | Doors | Windows |     |    | Total Profit ($000)
 12  | Batches Produced         |   2   |    6    |     |    |  36

Solver Parameters
  Set Objective Cell: TotalProfit
  To: Max
  By Changing Variable Cells: BatchesProduced
  Subject to the Constraints: HoursUsed <= HoursAvailable
  Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP

Hours Used (column E):
  E7 =SUMPRODUCT(C7:D7,BatchesProduced)
  E8 =SUMPRODUCT(C8:D8,BatchesProduced)
  E9 =SUMPRODUCT(C9:D9,BatchesProduced)

Total Profit (cell G12):
  G12 =SUMPRODUCT(ProfitPerBatch,BatchesProduced)

Range Name                   Cells
BatchesProduced              C12:D12
HoursAvailable               G7:G9
HoursUsed                    E7:E9
HoursUsedPerBatchProduced    C7:D9
ProfitPerBatch               C4:D4
TotalProfit                  G12

At this point, you might want to check what would happen to the optimal solution if any of the numbers in the data cells were changed to other possible values. This is easy to do because Solver saves all the addresses for the objective cell, changing cells, constraints, and so on when you save the file. All you need to do is make the changes you want in the data cells and then click on Solve in the Solver dialog box again. (Sections 4.7 and 7.3 will focus on this kind of sensitivity analysis, including how to use Solver's Sensitivity Report to expedite this type of what-if analysis.)

To assist you with experimenting with these kinds of changes, your OR Courseware includes Excel files for this chapter (as for others) that provide a complete formulation and solution of the examples here (the Wyndor problem and the ones in Sec. 3.4) in a spreadsheet format. We encourage you to "play" with these examples to see what happens with different data, different solutions, and so forth. You might also find these spreadsheets useful as templates for solving homework problems.
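The change-a-data-cell-and-re-solve workflow described above can be mimicked with any LP solver. A sketch using SciPy's linprog (an assumption; the specific change, one extra hour available in Plant 2, is purely illustrative):

```python
# Wyndor model: maximize 3x1 + 5x2 (profit in $000) subject to the
# plant-hours constraints; linprog minimizes, so the profits are negated.
from scipy.optimize import linprog

hours_per_batch = [[1, 0], [0, 2], [3, 2]]       # Plants 1-3
base = linprog([-3, -5], A_ub=hours_per_batch, b_ub=[4, 12, 18],
               bounds=[(0, None)] * 2)

# Hypothetical what-if change: raise Plant 2's available hours from 12 to 13.
whatif = linprog([-3, -5], A_ub=hours_per_batch, b_ub=[4, 13, 18],
                 bounds=[(0, None)] * 2)

# -base.fun is 36 (the $36,000 per week found by Solver above), while
# -whatif.fun is 37.5, so the extra Plant 2 hour is worth $1,500 per week.
```

That $1,500 per week is exactly the kind of number the Sensitivity Report mentioned above reports directly (as the shadow price of the Plant 2 constraint).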
In addition, we suggest that you use this chapter's Excel files to take a careful look at the spreadsheet formulations for some of the examples in Sec. 3.4. This will demonstrate how to formulate linear programming models in a spreadsheet that are larger and more complicated than for the Wyndor problem. You will see other examples of how to formulate and solve various kinds of OR models in a spreadsheet in later chapters. The supplementary chapters on the book's website also include a complete chapter (Chap. 21) that is devoted to the art of modeling in spreadsheets. That chapter describes in detail both the general process and the basic guidelines for building a spreadsheet model. It also presents some techniques for debugging such models.

Using ASPE's Solver to Solve the Model

Frontline Systems, the original developer of the standard Solver included with Excel (hereafter referred to as Excel's Solver in this subsection), also has developed Premium versions of Solver that provide greatly enhanced functionality. The company now features a particularly powerful Premium Solver called Analytic Solver Platform. New with this edition, we are excited to provide access to the Excel add-in, Analytic Solver Platform for Education (ASPE), from Frontline Systems. Instructions for installing this software are on the very first page of the book (before the title page) and also on the book's website, www.mhhe.com/hillier.

When ASPE is installed, a new tab is available on the Excel ribbon called Analytic Solver Platform. Choosing this tab will reveal the ribbon shown in Figure 3.22. The buttons on this ribbon will be used to interact with ASPE. This same figure also reveals a nice feature of ASPE: the Solver Options and Model Specifications pane (showing the objective cell, changing cells, constraints, etc.), which can be seen alongside your main spreadsheet, with both visible simultaneously.
This pane can be toggled on (to see the model) or off (to hide the model and leave more room for the spreadsheet) by clicking on the Model button on the far left of the Analytic Solver Platform ribbon. Also, since the model was already set up with Excel's Solver in the previous subsection, it is already set up in the ASPE Model pane, with the objective specified as TotalProfit (G12), the changing cells as BatchesProduced (C12:D12), and the constraints as HoursUsed (E7:E9) <= HoursAvailable (G7:G9). The data for Excel's Solver and ASPE are compatible with each other. Making a change with one makes the same change in the other. Thus, you can work with either Excel's Solver or ASPE, and then go back and forth, without losing any Solver data.

If the model had not been previously set up with Excel's Solver, the steps for doing so with ASPE are analogous to the steps used with Excel's Solver as covered in the previous subsection. In both cases, we need to specify the location of the objective cell, the changing cells, and the functional constraints, and then click to solve the model. However, the user interface is somewhat different. ASPE uses the buttons on the Analytic Solver Platform ribbon instead of the Solver dialog box. We will now walk you through the steps to set up the Wyndor problem in ASPE.

To specify TotalProfit (G12) as the objective cell, select the cell in the spreadsheet and then click on the Objective button on the Analytic Solver Platform ribbon. This will drop down a menu where you can choose to minimize (Min) or maximize (Max) the objective cell. Within the options of Min or Max are further options (Normal, Expected, VaR, etc.). For now, we will always choose the Normal option.

To specify BatchesProduced (C12:D12) as the changing cells, select these cells in the spreadsheet and then click on the Decisions button on the Analytic Solver Platform ribbon. This will drop down a menu where you can choose various options (Plot, Normal, Recourse).
For linear programming, we will always choose the Normal option.

■ FIGURE 3.22 The Analytic Solver Platform ribbon and spreadsheet for the Wyndor problem alongside the Solver Options and Model Specifications pane.

■ FIGURE 3.23 The Engine tab of the Model pane of ASPE includes options to select the solver engine (Standard LP/Quadratic Engine in this case) and to set the Assume Non-negative option to True.

■ FIGURE 3.24 The Output tab of the Model pane shows a summary of the solution process for the Wyndor problem.

Next the functional constraints need to be specified. For the Wyndor problem, the functional constraints are HoursUsed (E7:E9) <= HoursAvailable (G7:G9). To enter these constraints in ASPE, select the cells representing the left-hand side of these constraints (HoursUsed, or E7:E9) and click the Constraints button on the Analytic Solver Platform ribbon. This drops down a menu for various kinds of constraints. For linear programming functional constraints, choose Normal Constraint and then the type of constraint desired (either <=, =, or >=). For the Wyndor problem, choosing <= would then bring up an Add Constraint dialog box much like the one for Excel's Solver (see Figure 3.18). The constraint is then entered in the same way as with Excel's Solver. Changes to the model can easily be made within the Model pane shown on the right side of Figure 3.22. For example, to delete an element of the model (e.g., the objective, changing cells, or constraints), select that part of the model and then click on the red X near the top of the Model pane. To change an element of the model, double-click on that element in the Model pane to bring up a dialog box allowing you to make changes to that part of the model.
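Although the book solves the Wyndor problem with Excel's Solver or ASPE, the model is small enough to check independently. The sketch below (plain Python, not part of the book's software) enumerates the corner points of the Wyndor feasible region from Sec. 3.1 (maximize Z = 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x1 >= 0, x2 >= 0), which is exactly what the graphical method does by eye:

```python
from itertools import combinations

# Each boundary line as a*x1 + b*x2 = c, including the nonnegativity boundaries.
lines = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2, tol=1e-9):
    """Check all five Wyndor constraints (with a small numerical tolerance)."""
    return (x1 >= -tol and x2 >= -tol and x1 <= 4 + tol
            and 2 * x2 <= 12 + tol and 3 * x1 + 2 * x2 <= 18 + tol)

corners = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundary lines never meet in a corner
    # Cramer's rule for the intersection of the two boundary lines.
    x1 = (c1 * b2 - c2 * b1) / det
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        corners.append((x1, x2))

# An optimal LP solution lies at a corner point of the feasible region.
best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # expect x1 = 2, x2 = 6, Z = 36
```

The best corner is (2, 6) with Z = 36, i.e., 2 batches of doors and 6 batches of windows for a profit of $36,000 per week, matching the solution found by Solver.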
Selecting the Engine tab at the top of the Model pane will show information about the algorithm that will be used to solve the problem, as well as a variety of options for that algorithm. The drop-down menu at the top allows you to choose the algorithm. For a linear programming model (such as the Wyndor problem), you will want to choose the Standard LP/Quadratic Engine. This is equivalent to the Simplex LP option in Excel's Solver. To make unconstrained variables nonnegative (as we did in Fig. 3.19 with Excel's Solver), be sure that the Assume Non-negative option is set to True. Figure 3.23 shows the Model pane after making these selections.

Once the model is all set up in ASPE, it is solved by clicking on the Optimize button on the Analytic Solver Platform ribbon. Just like Excel's Solver, this will then display the results of solving the model on the spreadsheet, as shown in Figure 3.24. As seen in this figure, the Output tab of the Model pane also will show a summary of the solution process, including the message (similar to Fig. 3.20) that "Solver found a solution. All constraints and optimality conditions are satisfied."

■ 3.6 FORMULATING VERY LARGE LINEAR PROGRAMMING MODELS

Linear programming models come in many different sizes. For the examples in Secs. 3.1 and 3.4, the model sizes range from three functional constraints and two decision variables (for the Wyndor and radiation therapy problems) up to 17 functional constraints and 12 decision variables (for the Save-It Company problem). The latter case may seem like a rather large model. After all, it does take a substantial amount of time just to write down a model of this size. By contrast, however, the models for the application vignettes presented in this chapter are much, much larger. Such model sizes are not at all unusual. Linear programming models in practice commonly have many hundreds or thousands of functional constraints.
In fact, they occasionally will have even millions of functional constraints. The number of decision variables frequently is even larger than the number of functional constraints, and occasionally will range well into the millions. Formulating such monstrously large models can be a daunting task. Even a "medium-sized" model with a thousand functional constraints and a thousand decision variables has over a million parameters (including the million coefficients in these constraints). It simply is not practical to write out the algebraic formulation, or even to fill in the parameters on a spreadsheet, for such a model. So how are these very large models formulated in practice? It requires the use of a modeling language.

Modeling Languages

A mathematical modeling language is software that has been specifically designed for efficiently formulating large mathematical models, including linear programming models. Even when a model has millions of functional constraints, they typically are of relatively few types. Similarly, the decision variables will fall into a small number of categories. Therefore, using large blocks of data in databases, a modeling language will use a single expression to simultaneously formulate all the constraints of the same type in terms of the variables of each type. We will illustrate this process soon. In addition to efficiently formulating large models, a modeling language will expedite a number of model management tasks, including accessing data, transforming data into model parameters, modifying the model whenever desired, and analyzing solutions from the model. It also may produce summary reports in the vernacular of the decision makers, as well as document the model's contents. Several excellent modeling languages have been developed over recent decades. These include AMPL, MPL, OPL, GAMS, and LINGO.
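To make the "single expression" idea concrete before turning to an actual modeling language, here is a toy sketch in plain Python (all names and numbers are invented for illustration). One dictionary comprehension indexed over plants, machines, and months plays the role of a modeling-language statement and expands into a whole family of production capacity constraints at once:

```python
from itertools import product as indexes

# Invented toy instance: 2 plants, 2 machines, 2 products, 2 months.
plants, machines = ("p1", "p2"), ("m1", "m2")
products, months = ("A1", "A2"), ("Jan", "Feb")
prod_rate = {k: 10.0 for k in indexes(plants, machines, products)}  # units per day
days_avail = {"Jan": 22, "Feb": 20}                                 # working days

# A single indexed expression generates every production capacity constraint,
# one per (plant, machine, month), each stored as (lhs terms, relation, rhs):
#   sum over products of Produce/rate  <=  days available that month.
capacity = {
    (p, m, t): ([(("Produce", p, m, a, t), 1.0 / prod_rate[p, m, a])
                 for a in products],
                "<=",
                days_avail[t])
    for p, m, t in indexes(plants, machines, months)
}
print(len(capacity))   # 8 constraints: 2 plants x 2 machines x 2 months
```

Scaling each index set up to 10 members would make this same expression expand into 1,000 constraints of this type, which is exactly what a modeling language does behind the scenes.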
The student version of one of these, MPL (short for Mathematical Programming Language), is provided for you on the book's website along with extensive tutorial material. As subsequent versions are released in future years, the latest student version also can be downloaded from the website, maximalsoftware.com. MPL is a product of Maximal Software, Inc. One feature of MPL is its extensive support for Excel, including both importing and exporting Excel ranges. Full support also is provided for the Excel VBA macro language, as well as various programming languages, through the OptiMax Component Library, which now is included within MPL. This feature allows the user to fully integrate MPL models into Excel and solve them with any of the powerful solvers that MPL supports.

LINGO is a product of LINDO Systems, Inc., which also markets a spreadsheet add-in optimizer called What'sBest! that is designed for large industrial problems, as well as a callable subroutine library called the LINDO API. The LINGO software includes as a subset the LINDO interface that has been a popular introduction to linear programming for many people. The student version of LINGO with the LINDO interface is part of the software included on the book's website. All of the LINDO Systems products can also be downloaded from www.lindo.com. Like MPL, LINGO is a powerful general-purpose modeling language. A notable feature of LINGO is its great flexibility for dealing with a wide variety of OR problems in addition to linear programming. For example, when dealing with highly nonlinear models, it contains a global optimizer that will find a globally optimal solution. (More about this in Sec. 13.10.) The latest LINGO also has a built-in programming language so you can do things like solve several different optimization problems as part of one run, which can be useful for such tasks as performing parametric analysis (described in Secs. 4.7 and 8.2).
In addition, LINGO has special capabilities for solving stochastic programming problems (the topic of Sec. 7.4), using a variety of functions for most probability distributions, and performing extensive graphing. The book's website includes MPL, LINGO, and LINDO formulations for essentially every example in this book to which these modeling languages and optimizers can be applied.

An Application Vignette

A key part of a country's financial infrastructure is its securities markets. By allowing a variety of financial institutions and their clients to trade stocks, bonds, and other financial securities, these securities markets help fund both public and private initiatives. Therefore, the efficient operation of a country's securities markets plays a crucial role in providing a platform for its economic growth. Each central securities depository and its system for quickly settling security transactions are part of the operational backbone of securities markets and a key component of financial system stability.

In Mexico, an institution called INDEVAL provides both the central securities depository and its security settlement system for the entire country. This security settlement system uses electronic book entries, modifying cash and securities balances, for the various parties in the transactions. The total value of the securities transactions that INDEVAL settles averages over $250 billion daily. This makes INDEVAL the main liquidity conduit for Mexico's entire financial sector. Therefore, it is extremely important that INDEVAL's system for clearing securities transactions be an exceptionally efficient one that maximizes the amount of cash that can be delivered almost instantaneously after the transactions. Because of past dissatisfaction with this system, INDEVAL's Board of Directors ordered a major study in 2005 to completely redesign the system. Following more than 12,000 man-hours devoted to this redesign, the new system was successfully launched in November 2008.

The core of the new system is a huge linear programming model that is applied many times daily to choose which of thousands of pending transactions should be settled immediately with the depositors' available balances. Linear programming is ideally suited for this application because huge models can be solved quickly to maximize the value of the transactions settled while taking into account the various relevant constraints. This application of linear programming has substantially enhanced and strengthened the Mexican financial infrastructure by reducing its daily liquidity requirements by $130 billion. It also reduces the intraday financing costs for market participants by more than $150 million annually. This application led to INDEVAL winning the prestigious First Prize in the 2010 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: D. Muñoz, M. de Lascurain, O. Romero-Hernandez, F. Solis, L. de los Santos, A. Palacios-Brun, F. Herrería, and J. Villaseñor, "INDEVAL Develops a New Operating and Settlement System Using Operations Research," Interfaces 41, no. 1 (January–February 2011), pp. 8–17. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Now let us look at a simplified example that illustrates how a very large linear programming model can arise.

An Example of a Problem with a Huge Model

Management of the WORLDWIDE CORPORATION needs to address a product-mix problem, but one that is vastly more complex than the Wyndor product-mix problem introduced in Sec. 3.1. This corporation has 10 plants in various parts of the world. Each of these plants produces the same 10 products and then sells them within its region. The demand (sales potential) for each of these products from each plant is known for each of the next 10 months.
Although the amount of a product sold by a plant in a given month cannot exceed the demand, the amount produced can be larger, where the excess amount would be stored in inventory (at some unit cost per month) for sale in a later month. Each unit of each product takes the same amount of space in inventory, and each plant has some upper limit on the total number of units that can be stored (the inventory capacity). Each plant has the same 10 production processes (we'll refer to them as machines), each of which can be used to produce any of the 10 products. Both the production cost per unit of a product and the production rate of the product (number of units produced per day devoted to that product) depend on the combination of plant and machine involved (but not the month). The number of working days (production days available) varies somewhat from month to month.

Since some plants and machines can produce a particular product either less expensively or at a faster rate than other plants and machines, it is sometimes worthwhile to ship some units of the product from one plant to another for sale by the latter plant. For each combination of a plant being shipped from (the fromplant) and a plant being shipped to (the toplant), there is a certain cost per unit shipped of any product, where this unit shipping cost is the same for all the products. Management now needs to determine how much of each product should be produced by each machine in each plant during each month, as well as how much each plant should sell of each product in each month and how much each plant should ship of each product in each month to each of the other plants. Considering the worldwide price for each product, the objective is to find the feasible plan that maximizes the total profit (total sales revenue minus the sum of the total production costs, inventory costs, and shipping costs).
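Before structuring the model formally, it is worth seeing how quickly these index sets multiply. The sketch below (plain Python; the labels are invented, while the set sizes come from the problem statement) builds one dictionary per family of decision variables, keyed by index tuples:

```python
from itertools import product as indexes

# Index sets: 10 plants, 10 machines, 10 products, 10 months (labels invented).
plants = [f"p{i}" for i in range(1, 11)]
machines = [f"m{i}" for i in range(1, 11)]
products = [f"A{i}" for i in range(1, 11)]
months = ["Jan", "Feb", "Mar", "Apr", "May",
          "Jun", "Jul", "Aug", "Sep", "Oct"]

# One dictionary per family of decision variables, keyed by its index tuple.
produce = {k: 0.0 for k in indexes(plants, machines, products, months)}
inventory = {k: 0.0 for k in indexes(plants, products, months)}
sales = {k: 0.0 for k in indexes(plants, products, months)}
ship = {(a, t, f, g): 0.0
        for a, t, f, g in indexes(products, months, plants, plants)
        if f != g}   # no shipping from a plant to itself

counts = (len(produce), len(inventory), len(sales), len(ship))
print(counts, sum(counts))   # (10000, 1000, 1000, 9000) 21000
```

Four short comprehensions already imply 21,000 decision variables, which is why writing the model out term by term is hopeless and an indexed formulation is essential.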
We should note again that this is a simplified example in a number of ways. We have assumed that the number of plants, machines, products, and months is exactly the same (10). In most real situations, the number of products probably will be far larger and the planning horizon is likely to be considerably longer than 10 months, whereas the number of "machines" (types of production processes) may be less than 10. We also have assumed that every plant has all the same types of machines (production processes) and every machine type can produce every product. In reality, the plants may have some differences in terms of their machine types and the products they are capable of producing. The net result is that the corresponding model for some corporations may be smaller than the one for this example, but the model for other corporations may be considerably larger (perhaps even vastly larger) than this one.

The Structure of the Resulting Model

Because of the inventory costs and the limited inventory capacities, it is necessary to keep track of the amount of each product kept in inventory in each plant during each month. Consequently, the linear programming model has four types of decision variables: production quantities, inventory quantities, sales quantities, and shipping quantities. With 10 plants, 10 machines, 10 products, and 10 months, this gives a total of 21,000 decision variables, as outlined below.

Decision Variables.
10,000 production variables: one for each combination of a plant, machine, product, and month
1,000 inventory variables: one for each combination of a plant, product, and month
1,000 sales variables: one for each combination of a plant, product, and month
9,000 shipping variables: one for each combination of a product, month, plant (the fromplant), and another plant (the toplant)

Multiplying each of these decision variables by the corresponding unit cost or unit revenue, and then summing over each type, the following objective function can be calculated:

Objective Function.

Maximize Profit = total sales revenues - total cost,

where

Total cost = total production cost + total inventory cost + total shipping cost.

When maximizing this objective function, the 21,000 decision variables need to satisfy nonnegativity constraints as well as four types of functional constraints: production capacity constraints, plant balance constraints (equality constraints that provide appropriate values to the inventory variables), maximum inventory constraints, and maximum sales constraints. As enumerated below, there are a total of 3,100 functional constraints, but all the constraints of each type follow the same pattern.

Functional Constraints.

1,000 production capacity constraints (one for each combination of a plant, machine, and month):

Production days used ≤ production days available,

where the left-hand side is the sum of 10 fractions, one for each product, where each fraction is that product's production quantity (a decision variable) divided by the product's production rate (a given constant).
1,000 plant balance constraints (one for each combination of a plant, product, and month):

Amount produced + inventory last month + amount shipped in = sales + current inventory + amount shipped out,

where the amount produced is the sum of the decision variables representing the production quantities at the machines, the amount shipped in is the sum of the decision variables representing the shipping quantities in from the other plants, and the amount shipped out is the sum of the decision variables representing the shipping quantities out to the other plants.

100 maximum inventory constraints (one for each combination of a plant and month):

Total inventory ≤ inventory capacity,

where the left-hand side is the sum of the decision variables representing the inventory quantities for the individual products.

1,000 maximum sales constraints (one for each combination of a plant, product, and month):

Sales ≤ demand.

Now let us see how the MPL Modeling Language can formulate this huge model very compactly.

Formulation of the Model in MPL

The modeler begins by assigning a title to the model and listing an index for each of the entities of the problem, as illustrated below.

TITLE
    Production_Planning;

INDEX
    product   := A1..A10;
    month     := (Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct);
    plant     := p1..p10;
    fromplant := plant;
    toplant   := plant;
    machine   := m1..m10;

Except for the months, the entries on the right-hand side are arbitrary labels for the respective products, plants, and machines, where these same labels are used in the data files. Note that the assignment symbol := follows the name of each entry and a semicolon is placed at the end of each statement (but a statement is allowed to extend over more than one line).

A big job with any large model is collecting and organizing the various types of data into data files. A data file can be in either dense format or sparse format.
In dense format, the file will contain an entry for every combination of all possible values of the respective indexes. For example, suppose that the data file contains the production rates for producing the various products with the various machines (production processes) in the various plants. In dense format, the file will contain an entry for every combination of a plant, a machine, and a product. However, the entry may need to be zero for most of the combinations because that particular plant may not have that particular machine or, even if it does, that particular machine may not be capable of producing that particular product in that particular plant. The percentage of the entries in dense format that are nonzero is referred to as the density of the data set. In practice, it is common for large data sets to have a density under 5 percent, and it frequently is under 1 percent. Data sets with such a low density are referred to as being sparse. In such situations, it is more efficient to use a data file in sparse format. In this format, only the nonzero values (and an identification of the index values they refer to) are entered into the data file. Generally, data are entered in sparse format either from a text file or from corporate databases. The ability to handle sparse data sets efficiently is one key for successfully formulating and solving large-scale optimization models. MPL can readily work with data in either dense format or sparse format. In the Worldwide Corp. example, eight data files are needed to hold the product prices, demands, production costs, production rates, production days available, inventory costs, inventory capacities, and shipping costs. We assume that these data files are available in sparse format. The next step is to give a brief suggestive name to each one and to identify (inside square brackets) the index or indexes for that type of data, as shown below. 
DATA
    Price[product]                    := SPARSEFILE("Price.dat");
    Demand[plant, product, month]     := SPARSEFILE("Demand.dat");
    ProdCost[plant, machine, product] := SPARSEFILE("Produce.dat", 4);
    ProdRate[plant, machine, product] := SPARSEFILE("Produce.dat", 5);
    ProdDaysAvail[month]              := SPARSEFILE("ProdDays.dat");
    InvtCost[plant, product]          := SPARSEFILE("InvtCost.dat");
    InvtCapacity[plant]               := SPARSEFILE("InvtCap.dat");
    ShipCost[fromplant, toplant]      := SPARSEFILE("ShipCost.dat");

To illustrate the contents of these data files, consider the one that provides production costs and production rates. Here is a sample of the first few entries of SPARSEFILE Produce.dat:

!
!  Produce.dat - Production Cost and Rate
!
!  ProdCost[plant, machine, product]:
!  ProdRate[plant, machine, product]:
!
p1, m11, A1, 73.30, 500,
p1, m11, A2, 52.90, 450,
p1, m12, A3, 65.40, 550,
p1, m13, A3, 47.60, 350,

Next, the modeler gives a short name to each type of decision variable. Following the name, inside square brackets, is the index or indexes over which the subscripts run.

VARIABLES
    Produce[plant, machine, product, month]  -> Prod;
    Inventory[plant, product, month]         -> Invt;
    Sales[plant, product, month]             -> Sale;
    Ship[product, month, fromplant, toplant]
        WHERE (fromplant <> toplant);

In the case of the decision variables with names longer than four letters, the arrows on the right point to four-letter abbreviations to fit the size limitations of many solvers. The last line indicates that the fromplant subscript and toplant subscript are not allowed to have the same value. There is one more step before writing down the model. To make the model easier to read, it is useful first to introduce macros to represent the summations in the objective function.
MACROS
    TotalRevenue  := SUM(plant, product, month: Price*Sales);
    TotalProdCost := SUM(plant, machine, product, month: ProdCost*Produce);
    TotalInvtCost := SUM(plant, product, month: InvtCost*Inventory);
    TotalShipCost := SUM(product, month, fromplant, toplant: ShipCost*Ship);
    TotalCost     := TotalProdCost + TotalInvtCost + TotalShipCost;

The first four macros use the MPL keyword SUM to execute the summation involved. Following each SUM keyword (inside the parentheses) is, first, the index or indexes over which the summation runs. Next (after the colon) is the vector product of a data vector (one of the data files) times a variable vector (one of the four types of decision variables). Now this model with 3,100 functional constraints and 21,000 decision variables can be written down in the following compact form.

MODEL
    MAX Profit = TotalRevenue - TotalCost;

SUBJECT TO
    ProdCapacity[plant, machine, month] -> PCap:
        SUM(product: Produce/ProdRate) <= ProdDaysAvail;

    PlantBal[plant, product, month] -> PBal:
        SUM(machine: Produce) + Inventory[month - 1]
            + SUM(fromplant: Ship[fromplant, toplant := plant])
            = Sales + Inventory
            + SUM(toplant: Ship[fromplant := plant, toplant]);

    MaxInventory[plant, month] -> MaxI:
        SUM(product: Inventory) <= InvtCapacity;

BOUNDS
    Sales <= Demand;

END

For each of the four types of constraints, the first line gives the name for this type. There is one constraint of this type for each combination of values for the indexes inside the square brackets following the name. To the right of the brackets, the arrow points to a four-letter abbreviation of the name that a solver can use. Below the first line, the general form of constraints of this type is shown by using the SUM operator.
For each production capacity constraint, each term in the summation consists of a decision variable (the production quantity of that product on that machine in that plant during that month) divided by the corresponding production rate, which gives the number of production days being used. Summing over the products then gives the total number of production days being used on that machine in that plant during that month, so this number must not exceed the number of production days available. The purpose of the plant balance constraint for each plant, product, and month is to give the correct value to the current inventory variable, given the values of all the other decision variables, including the inventory level for the preceding month. Each of the SUM operators in these constraints involves simply a sum of decision variables rather than a vector product. This is the case also for the SUM operator in the maximum inventory constraints. By contrast, the left-hand side of the maximum sales constraints is just a single decision variable for each of the 1,000 combinations of a plant, product, and month. (Separating these upper-bound constraints on individual variables from the regular functional constraints is advantageous because of the computational efficiencies that can be obtained by using the upper bound technique described in Sec. 8.3.) No lower-bound constraints are shown here because MPL automatically assumes that all 21,000 decision variables have nonnegativity constraints unless nonzero lower bounds are specified. For each of the 3,100 functional constraints, note that the left-hand side is a linear function of the decision variables and the right-hand side is a constant taken from the appropriate data file. Since the objective function also is a linear function of the decision variables, this model is a legitimate linear programming model.
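The two ways SUM is used here, as a vector product of a data vector times a variable vector (in the objective macros) and as a plain sum of decision variables (in PlantBal and MaxInventory), can be mimicked in a few lines of Python. The function names and the tiny data set below are invented for illustration:

```python
def sum_product(data, var):
    """Like SUM(indexes: data*var): a dot product over the sparse keys of data."""
    return sum(coef * var.get(k, 0.0) for k, coef in data.items())

def sum_over(var, keys):
    """Like SUM(index: var): a plain sum of the selected decision variables."""
    return sum(var.get(k, 0.0) for k in keys)

# Invented example: revenue from two products at one plant in one month,
# mirroring the TotalRevenue macro's SUM(plant, product, month: Price*Sales).
price = {("p1", "A1", "Jan"): 100.0, ("p1", "A2", "Jan"): 80.0}
sales = {("p1", "A1", "Jan"): 3.0, ("p1", "A2", "Jan"): 5.0}

print(sum_product(price, sales))    # 100*3 + 80*5 = 700.0 (vector product)
print(sum_over(sales, list(sales))) # 3 + 5 = 8.0 (plain sum, as in PlantBal)
```

Iterating only over the sparse keys that are actually present is the same idea that lets MPL handle low-density data sets efficiently.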
To solve the model, MPL supports various leading solvers (software packages for solving linear programming models and/or other OR models) that are installed in MPL. As already mentioned in Sec. 1.5, these solvers include CPLEX, GUROBI, CoinMP, and SULUM, all of which can solve very large linear programming models with great efficiency. The student version of MPL in your OR Courseware already has installed the student version of these four solvers. For example, consider CPLEX. Its student version uses the simplex method to solve linear programming models. Therefore, to solve such a model formulated with MPL, all you have to do is choose Solve CPLEX from the Run menu or press the Run Solve button in the Toolbar. You then can display the solution file in a view window by pressing the View button at the bottom of the Status Window. For especially large linear programming models, Sec. 1.5 points out how academic users can acquire full-size versions of MPL with CPLEX and GUROBI for use in their coursework.

This brief introduction to MPL illustrates the ease with which modelers can use modeling languages to formulate huge linear programming models in a clear, concise way. To assist you in using MPL, an MPL Tutorial is included on the book's website. This tutorial goes through all the details of formulating smaller versions of the production planning example considered here. You also can see elsewhere on the book's website how all the other linear programming examples in this chapter and subsequent chapters would be formulated with MPL and solved by CPLEX.

The LINGO Modeling Language

LINGO is another popular modeling language featured in this book. LINDO Systems, the company that produces LINGO, first became known for its easy-to-use optimizer, LINDO, which is a subset of the LINGO software. LINDO Systems also produces a spreadsheet solver, What'sBest!, and a callable solver library, the LINDO API. The student version of LINGO is provided to you on the book's website.
(The latest trial versions of all of the above can be downloaded from www.lindo.com.) Both LINDO and What'sBest! share the LINDO API as the solver engine. The LINDO API has solvers based on the simplex method and interior-point/barrier algorithms (such as those discussed in Secs. 4.9 and 8.4), special solvers for chance-constrained models (Sec. 7.5) and stochastic programming problems (Sec. 7.6), and solvers for nonlinear programming (Chap. 13), including even a global solver for nonconvex programming.

Like MPL, LINGO enables a modeler to efficiently formulate a huge model in a clear, compact fashion that separates the data from the model formulation. This separation means that as changes occur in the data describing the problem that needs to be solved from day to day (or even minute to minute), the user needs to change only the data and not be concerned with the model formulation. You can develop a model on a small data set and then, when you supply the model with a large data set, the model formulation adjusts automatically to the new data set.

LINGO uses sets as a fundamental concept. For example, in the Worldwide Corp. production planning problem, the simple or "primitive" sets of interest are products, plants, machines, and months. Each member of a set may have one or more attributes associated with it, such as the price of a product, the inventory capacity of a plant, the production rate of a machine, and the number of production days available in a month. Some of these attributes are input data, while others, such as production and shipping quantities, are decision variables for the model. One can also define derived sets that are built from combinations of other sets. As with MPL, the SUM operator is commonly used to write the objective function and constraints in a compact form. There is a hard-copy manual available for LINGO.
This entire manual also is available directly in LINGO via the Help command and can be searched in a variety of ways. A supplement to this chapter on the book’s website describes LINGO further and illustrates its use on a couple of small examples. A second supplement shows how LINGO can be used to formulate the model for the Worldwide Corp. production planning example. Appendix 4.1 at the end of Chap. 4 also provides an introduction to using both LINDO and LINGO. In addition, a LINGO tutorial on the website provides the details needed for doing basic modeling with this modeling language. The LINGO formulations and solutions for the various examples in both this chapter and many other chapters also are included on the website. ■ 3.7 CONCLUSIONS Linear programming is a powerful technique for dealing with resource-allocation problems, cost–benefit–trade-off problems, and fixed-requirements problems, as well as other problems having a similar mathematical formulation. It has become a standard tool of great importance for numerous business and industrial organizations. Furthermore, almost any social organization is concerned with similar types of problems in some context, and there is a growing recognition of the extremely wide applicability of linear programming. However, not all problems of these types can be formulated to fit a linear programming model, even as a reasonable approximation. When one or more of the assumptions of linear programming is violated seriously, it may then be possible to apply another mathematical programming model instead, e.g., the models of integer programming (Chap. 12) or nonlinear programming (Chap. 13). ■ SELECTED REFERENCES 1. Baker, K. R.: Optimization Modeling with Spreadsheets, 2nd ed., Wiley, New York, 2012. 2. Denardo, E. V.: Linear Programming and Generalizations: A Problem-based Introduction with Spreadsheets, Springer, New York, 2011, chap. 7. 3. Hillier, F. S., and M. S. 
Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chaps. 2, 3. 4. LINGO User’s Guide, LINDO Systems, Inc., Chicago, IL, 2011. 5. MPL Modeling System (Release 4.2) manual, Maximal Software, Inc., Arlington, VA, e-mail: info@maximalsoftware.com, 2012. 6. Murty, K. G.: Optimization for Decision Making: Linear and Quadratic Models, Springer, New York, 2010, chap. 3. 7. Schrage, L.: Optimization Modeling with LINGO, LINDO Systems Press, Chicago, IL, 2008. 8. Williams, H. P.: Model Building in Mathematical Programming, 4th ed., Wiley, New York, 1999. 80 CHAPTER 3 INTRODUCTION TO LINEAR PROGRAMMING Some Award-Winning Applications of Linear Programming: (A link to all these articles is provided on our website, www.mhhe.com/hillier.) A1. Ambs, K., S. Cwilich, M. Deng, D. J. Houck, D. F. Lynch, and D. Yan: “Optimizing Restoration Capacity in the AT&T Network,” Interfaces, 30(1): 26–44, January–February 2000. A2. Caixeta-Filho, J. V., J. M. van Swaay-Neto, and A. de P. Wagemaker: “Optimization of the Production Planning and Trade of Lily Flowers at Jan de Wit Company,” Interfaces, 32(1): 35–46, January–February 2002. A3. Chalermkraivuth, K. C., S. Bollapragada, M. C. Clark, J. Deaton, L. Kiaer, J. P. Murdzek, W. Neeves, B. J. Scholz, and D. Toledano: “GE Asset Management, Genworth Financial, and GE Insurance Use a Sequential-Linear-Programming Algorithm to Optimize Portfolios, Interfaces, 35(5): 370–380, September–October 2005. A4. Elimam, A. A., M. Girgis, and S. Kotob: “A Solution to Post Crash Debt Entanglements in Kuwait’s al-Manakh Stock Market,” Interfaces, 27(1): 89–106, January–February 1997. A5. Epstein, R., R. Morales, J. Serón, and A. Weintraub: “Use of OR Systems in the Chilean Forest Industries,” Interfaces, 29(1): 7–29, January–February 1999. A6. Feunekes, U., S. Palmer, A. Feunekes, J. MacNaughton, J. Cunningham, and K. 
Mathisen: “Taking the Politics Out of Paving: Achieving Transportation Asset Management Excellence Through OR,” Interfaces, 41(1): 51–65, January–February 2011.
A7. Geraghty, M. K., and E. Johnson: “Revenue Management Saves National Car Rental,” Interfaces, 27(1): 107–127, January–February 1997.
A8. Leachman, R. C., R. F. Benson, C. Liu, and D. J. Raar: “IMPReSS: An Automated Production-Planning and Delivery-Quotation System at Harris Corporation—Semiconductor Sector,” Interfaces, 26(1): 6–37, January–February 1996.
A9. Makuch, W. M., J. L. Dodge, J. G. Ecker, D. C. Granfors, and G. J. Hahn: “Managing Consumer Credit Delinquency in the U.S. Economy: A Multi-Billion Dollar Management Science Application,” Interfaces, 22(1): 90–109, January–February 1992.
A10. Murty, K. G., Y.-w. Wan, J. Liu, M. M. Tseng, E. Leung, K.-K. Lai, and H. W. C. Chiu: “Hongkong International Terminals Gains Elastic Capacity Using a Data-Intensive Decision-Support System,” Interfaces, 35(1): 61–75, January–February 2005.
A11. Yoshino, T., T. Sasaki, and T. Hasegawa: “The Traffic-Control System on the Hanshin Expressway,” Interfaces, 25(1): 94–108, January–February 1995.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples: Examples for Chapter 3

A Demonstration Example in OR Tutor: Graphical Method

Procedures in IOR Tutorial:
Interactive Graphical Method
Graphical Method and Sensitivity Analysis

An Excel Add-In: Analytic Solver Platform for Education (ASPE)

“Ch. 3—Intro to LP” Files for Solving the Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 3

Supplements to This Chapter:
The LINGO Modeling Language
More About LINGO

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The demonstration example listed above may be helpful.
I: You may find it helpful to use the corresponding procedure in IOR Tutorial (the printout records your work).
C: Use the computer to solve the problem by applying the simplex method. The available software options for doing this include Excel's Solver and ASPE (Sec. 3.5), MPL/Solvers (Sec. 3.6), LINGO (Supplements 1 and 2 to this chapter on the book’s website and Appendix 4.1), and LINDO (Appendix 4.1), but follow any instructions given by your instructor regarding the option to use. When a problem asks you to use Solver to solve the model, you may use either Excel’s Solver or ASPE’s Solver.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

3.1-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 3.1. Briefly describe how linear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

3.1-2.* For each of the following constraints, draw a separate graph to show the nonnegative solutions that satisfy this constraint.
(a) x1 + 3x2 ≤ 6
(b) 4x1 + 3x2 ≤ 12
(c) 4x1 + x2 ≤ 8
(d) Now combine these constraints into a single graph to show the feasible region for the entire set of functional constraints plus nonnegativity constraints.

D 3.1-3. Consider the following objective function for a linear programming model:

Maximize Z = 2x1 + 3x2

(a) Draw a graph that shows the corresponding objective function lines for Z = 6, Z = 12, and Z = 18.
(b) Find the slope-intercept form of the equation for each of these three objective function lines. Compare the slope for these three lines. Also compare the intercept with the x2 axis.

3.1-4. Consider the following equation of a line:

20x1 + 40x2 = 400

(a) Find the slope-intercept form of this equation.
(b) Use this form to identify the slope and the intercept with the x2 axis for this line.
(c) Use the information from part (b) to draw a graph of this line.
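The corner-point reasoning behind the graphical method in these problems can also be checked numerically: every corner point of the feasible region lies at the intersection of two constraint boundary lines, so intersecting each pair of boundaries and discarding infeasible points recovers the corners. A minimal Python sketch, assuming the constraints of Prob. 3.1-2 are x1 + 3x2 ≤ 6, 4x1 + 3x2 ≤ 12, and 4x1 + x2 ≤ 8 (the inequality directions are reconstructed, since the relational signs were lost in printing), and evaluating the objective of Prob. 3.1-3 at each corner:

```python
from fractions import Fraction as F
from itertools import combinations

# Each constraint is written as a*x1 + b*x2 <= c; nonnegativity is included
# as -x1 <= 0 and -x2 <= 0 so its boundary lines also generate corner points.
constraints = [
    (F(1), F(3), F(6)),    # x1 + 3x2 <= 6   (Prob. 3.1-2a, reconstructed)
    (F(4), F(3), F(12)),   # 4x1 + 3x2 <= 12 (Prob. 3.1-2b, reconstructed)
    (F(4), F(1), F(8)),    # 4x1 + x2 <= 8   (Prob. 3.1-2c, reconstructed)
    (F(-1), F(0), F(0)),   # x1 >= 0
    (F(0), F(-1), F(0)),   # x2 >= 0
]

def corner_points(cons):
    """Intersect every pair of boundary lines (Cramer's rule) and keep
    only the intersection points that satisfy ALL constraints."""
    pts = set()
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundary lines never form a corner
        x1 = (c1 * b2 - c2 * b1) / det
        x2 = (a1 * c2 - a2 * c1) / det
        if all(a * x1 + b * x2 <= c for a, b, c in cons):
            pts.add((x1, x2))
    return pts

corners = corner_points(constraints)  # corners of the region in Prob. 3.1-2(d)
# The graphical method then evaluates Z = 2x1 + 3x2 (Prob. 3.1-3) at each corner:
best = max(corners, key=lambda p: 2 * p[0] + 3 * p[1])
print(sorted(corners), best)
```

Under these assumed constraints, the exact rational arithmetic yields the corners (0, 0), (2, 0), (18/11, 16/11), and (0, 2); the same pair-intersection scan works for any of the two-variable exercises above once their constraints are tabulated.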
D,I 3.1-5.* Use the graphical method to solve the problem:

Maximize Z = 2x1 + x2,

subject to

x2 ≤ 10
2x1 + 5x2 ≤ 60
x1 + x2 ≤ 18
3x1 + x2 ≤ 44

and

x1 ≥ 0, x2 ≥ 0.

D,I 3.1-6. Use the graphical method to solve the problem:

Maximize Z = 10x1 + 20x2,

subject to

x1 + 2x2 ≤ 15
x1 + x2 ≤ 12
5x1 + 3x2 ≤ 45

and

x1 ≥ 0, x2 ≥ 0.

3.1-7. The Whitt Window Company, a company with only three employees, makes two different kinds of hand-crafted windows: a wood-framed and an aluminum-framed window. The company earns $300 profit for each wood-framed window and $150 profit for each aluminum-framed window. Doug makes the wood frames and can make 6 per day. Linda makes the aluminum frames and can make 4 per day. Bob forms and cuts the glass and can make 48 square feet of glass per day. Each wood-framed window uses 6 square feet of glass and each aluminum-framed window uses 8 square feet of glass.
The company wishes to determine how many windows of each type to produce per day to maximize total profit.
(a) Describe the analogy between this problem and the Wyndor Glass Co. problem discussed in Sec. 3.1. Then construct and fill in a table like Table 3.1 for this problem, identifying both the activities and the resources.
(b) Formulate a linear programming model for this problem.
D,I (c) Use the graphical method to solve this model.
I (d) A new competitor in town has started making wood-framed windows as well. This may force the company to lower the price they charge and so lower the profit made for each wood-framed window. How would the optimal solution change (if at all) if the profit per wood-framed window decreases from $300 to $200? From $300 to $100? (You may find it helpful to use the Graphical Analysis and Sensitivity Analysis procedure in IOR Tutorial.)
I (e) Doug is considering lowering his working hours, which would decrease the number of wood frames he makes per day. How would the optimal solution change if he makes only 5 wood frames per day?
(You may find it helpful to use the Graphical Analysis and Sensitivity Analysis procedure in IOR Tutorial.)

3.1-8. The WorldLight Company produces two light fixtures (products 1 and 2) that require both metal frame parts and electrical components. Management wants to determine how many units of each product to produce so as to maximize profit. For each unit of product 1, 1 unit of frame parts and 2 units of electrical components are required. For each unit of product 2, 3 units of frame parts and 2 units of electrical components are required. The company has 200 units of frame parts and 300 units of electrical components. Each unit of product 1 gives a profit of $1, and each unit of product 2, up to 60 units, gives a profit of $2. Any excess over 60 units of product 2 brings no profit, so such an excess has been ruled out.
(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model. What is the resulting total profit?

3.1-9. The Primo Insurance Company is introducing two new product lines: special risk insurance and mortgages. The expected profit is $5 per unit on special risk insurance and $2 per unit on mortgages. Management wishes to establish sales quotas for the new product lines to maximize total expected profit. The work requirements are as follows:

                    Work-Hours per Unit
Department       Special Risk   Mortgage   Work-Hours Available
Underwriting           3            2             2400
Administration         0            1              800
Claims                 2            0             1200

(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model.
(c) Verify the exact value of your optimal solution from part (b) by solving algebraically for the simultaneous solution of the relevant two equations.

3.1-10. Weenies and Buns is a food processing plant which manufactures hot dogs and hot dog buns. They grind their own flour for the hot dog buns at a maximum rate of 200 pounds per week. Each hot dog bun requires 0.1 pound of flour. They currently have a contract with Pigland, Inc., which specifies that a delivery of 800 pounds of pork product is delivered every Monday. Each hot dog requires 1/4 pound of pork product. All the other ingredients in the hot dogs and hot dog buns are in plentiful supply. Finally, the labor force at Weenies and Buns consists of 5 employees working full time (40 hours per week each). Each hot dog requires 3 minutes of labor, and each hot dog bun requires 2 minutes of labor. Each hot dog yields a profit of $0.88, and each bun yields a profit of $0.33.
Weenies and Buns would like to know how many hot dogs and how many hot dog buns they should produce each week so as to achieve the highest possible profit.
(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model.

3.1-11.* The Omega Manufacturing Company has discontinued the production of a certain unprofitable product line. This act created considerable excess production capacity. Management is considering devoting this excess capacity to one or more of three products; call them products 1, 2, and 3. The available capacity on the machines that might limit output is summarized in the following table:

Machine Type       Available Time (Machine Hours per Week)
Milling machine                    500
Lathe                              350
Grinder                            150

The number of machine hours required for each unit of the respective products (the productivity coefficient, in machine hours per unit) is:

Machine Type      Product 1   Product 2   Product 3
Milling machine       9           3           5
Lathe                 5           4           0
Grinder               3           0           2

The sales department indicates that the sales potential for products 1 and 2 exceeds the maximum production rate and that the sales potential for product 3 is 20 units per week. The unit profit would be $50, $20, and $25, respectively, on products 1, 2, and 3. The objective is to determine how much of each product Omega should produce to maximize profit.
(a) Formulate a linear programming model for this problem.
C (b) Use a computer to solve this model by the simplex method.

D 3.1-12. Consider the following problem, where the value of c1 has not yet been ascertained.

Maximize Z = c1x1 + x2,

subject to

x1 + x2 ≤ 6
x1 + 2x2 ≤ 10

and

x1 ≥ 0, x2 ≥ 0.

Use graphical analysis to determine the optimal solution(s) for (x1, x2) for the various possible values of c1 (−∞ < c1 < ∞).

D 3.1-13. Consider the following problem, where the value of k has not yet been ascertained.

Maximize Z = x1 + 2x2,

subject to

x1 + x2 ≥ 2
x2 ≤ 3
kx1 + x2 ≤ 2k + 3, where k ≥ 0

and

x1 ≥ 0, x2 ≥ 0.

The solution currently being used is x1 = 2, x2 = 3. Use graphical analysis to determine the values of k such that this solution actually is optimal.

D 3.1-14. Consider the following problem, where the values of c1 and c2 have not yet been ascertained.

Maximize Z = c1x1 + c2x2,

subject to

2x1 + x2 ≤ 11
−x1 + 2x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

Use graphical analysis to determine the optimal solution(s) for (x1, x2) for the various possible values of c1 and c2. (Hint: Separate the cases where c2 = 0, c2 > 0, and c2 < 0. For the latter two cases, focus on the ratio of c1 to c2.)

3.2-1. The following table summarizes the key facts about two products, A and B, and the resources, Q, R, and S, required to produce them.

                  Resource Usage per Unit Produced
Resource          Product A   Product B   Amount of Resource Available
Q                     2           1                  2
R                     1           2                  2
S                     3           3                  4
Profit per unit       3           2

All the assumptions of linear programming hold.
(a) Formulate a linear programming model for this problem.
D,I (b) Solve this model graphically.
(c) Verify the exact value of your optimal solution from part (b) by solving algebraically for the simultaneous solution of the relevant two equations.

3.2-2. The shaded area in the following graph represents the feasible region of a linear programming problem whose objective function is to be maximized. [The graph shows the feasible region with corner points (0, 0), (0, 2), (3, 3), (6, 3), and (6, 0), with x1 on the horizontal axis.]
Label each of the following statements as True or False, and then justify your answer based on the graphical method. In each case, give an example of an objective function that illustrates your answer.
(a) If (3, 3) produces a larger value of the objective function than (0, 2) and (6, 3), then (3, 3) must be an optimal solution.
(b) If (3, 3) is an optimal solution and multiple optimal solutions exist, then either (0, 2) or (6, 3) must also be an optimal solution.
(c) The point (0, 0) cannot be an optimal solution.

3.2-3.* This is your lucky day. You have just won a $20,000 prize. You are setting aside $8,000 for taxes and partying expenses, but you have decided to invest the other $12,000. Upon hearing this news, two different friends have offered you an opportunity to become a partner in two different entrepreneurial ventures, one planned by each friend. In both cases, this investment would involve expending some of your time next summer as well as putting up cash. Becoming a full partner in the first friend’s venture would require an investment of $10,000 and 400 hours, and your estimated profit (ignoring the value of your time) would be $9,000. The corresponding figures for the second friend’s venture are $8,000 and 500 hours, with an estimated profit to you of $9,000. However, both friends are flexible and would allow you to come in at any fraction of a full partnership you would like.
If you choose a fraction of a full partnership, all the above figures given for a full partnership (money investment, time investment, and your profit) would be multiplied by this same fraction. Because you were looking for an interesting summer job anyway (maximum of 600 hours), you have decided to participate in one or both friends’ ventures in whichever combination would maximize your total estimated profit. You now need to solve the problem of finding the best combination.
(a) Describe the analogy between this problem and the Wyndor Glass Co. problem discussed in Sec. 3.1. Then construct and fill in a table like Table 3.1 for this problem, identifying both the activities and the resources.
(b) Formulate a linear programming model for this problem.
D,I (c) Use the graphical method to solve this model. What is your total estimated profit?

D,I 3.2-4. Use the graphical method to find all optimal solutions for the following model:

Maximize Z = 500x1 + 300x2,

subject to

15x1 + 5x2 ≤ 300
10x1 + 6x2 ≤ 240
8x1 + 12x2 ≤ 450

and

x1 ≥ 0, x2 ≥ 0.

D 3.2-5. Use the graphical method to demonstrate that the following model has no feasible solutions.

Maximize Z = 5x1 + 7x2,

subject to

2x1 − x2 ≤ −1
−x1 + 2x2 ≤ −1

and

x1 ≥ 0, x2 ≥ 0.

D 3.2-6. Suppose that the following constraints have been provided for a linear programming model.

−x1 + 3x2 ≤ 30
−3x1 + x2 ≤ 30

and

x1 ≥ 0, x2 ≥ 0.

(a) Demonstrate that the feasible region is unbounded.
(b) If the objective is to maximize Z = −x1 + x2, does the model have an optimal solution? If so, find it. If not, explain why not.
(c) Repeat part (b) when the objective is to maximize Z = x1 + x2.
(d) For objective functions where this model has no optimal solution, does this mean that there are no good solutions according to the model? Explain. What probably went wrong when formulating the model?

3.3-1. Reconsider Prob. 3.2-3. Indicate why each of the four assumptions of linear programming (Sec.
3.3) appears to be reasonably satisfied for this problem. Is one assumption more doubtful than the others? If so, what should be done to take this into account?

3.3-2. Consider a problem with two decision variables, x1 and x2, which represent the levels of activities 1 and 2, respectively. For each variable, the permissible values are 0, 1, and 2, where the feasible combinations of these values for the two variables are determined from a variety of constraints. The objective is to maximize a certain measure of performance denoted by Z. The values of Z for the possibly feasible values of (x1, x2) are estimated to be those given in the following table:

         x2
x1      0     1     2
0       0     3     6
1       4     8    12
2       8    13    18

Based on this information, indicate whether this problem completely satisfies each of the four assumptions of linear programming. Justify your answers.

3.4-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 3.4. Briefly describe how linear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

3.4-2.* For each of the four assumptions of linear programming discussed in Sec. 3.3, write a one-paragraph analysis of how well you feel it applies to each of the following examples given in Sec. 3.4:
(a) Design of radiation therapy (Mary).
(b) Regional planning (Southern Confederation of Kibbutzim).
(c) Controlling air pollution (Nori & Leets Co.).

3.4-3. For each of the four assumptions of linear programming discussed in Sec. 3.3, write a one-paragraph analysis of how well it applies to each of the following examples given in Sec. 3.4.
(a) Reclaiming solid wastes (Save-It Co.).
(b) Personnel scheduling (Union Airways).
(c) Distributing goods through a distribution network (Distribution Unlimited Co.).

3.4-4.
D,I Use the graphical method to solve this problem:

Minimize Z = 15x1 + 20x2,

subject to

x1 + 2x2 ≥ 10
2x1 − 3x2 ≤ 6
x1 + x2 ≥ 6

and

x1 ≥ 0, x2 ≥ 0.

D,I 3.4-5. Use the graphical method to solve this problem:

Minimize Z = 3x1 + 2x2,

subject to

x1 + 2x2 ≤ 12
2x1 + 3x2 = 12
2x1 + x2 ≥ 8

and

x1 ≥ 0, x2 ≥ 0.

D 3.4-6. Consider the following problem, where the value of c1 has not yet been ascertained.

Maximize Z = c1x1 + 2x2,

subject to

4x1 + x2 ≤ 12
x1 + x2 ≥ 2

and

x1 ≥ 0, x2 ≥ 0.

Use graphical analysis to determine the optimal solution(s) for (x1, x2) for the various possible values of c1.

D,I 3.4-7. Consider the following model:

Minimize Z = 40x1 + 50x2,

subject to

2x1 + 3x2 ≥ 30
x1 + x2 ≥ 12
2x1 + x2 ≥ 20

and

x1 ≥ 0, x2 ≥ 0.

(a) Use the graphical method to solve this model.
(b) How does the optimal solution change if the objective function is changed to Z = 40x1 + 70x2? (You may find it helpful to use the Graphical Analysis and Sensitivity Analysis procedure in IOR Tutorial.)
(c) How does the optimal solution change if the third functional constraint is changed to 2x1 + x2 ≥ 15? (You may find it helpful to use the Graphical Analysis and Sensitivity Analysis procedure in IOR Tutorial.)

3.4-8. Ralph Edmund loves steaks and potatoes. Therefore, he has decided to go on a steady diet of only these two foods (plus some liquids and vitamin supplements) for all his meals. Ralph realizes that this isn’t the healthiest diet, so he wants to make sure that he eats the right quantities of the two foods to satisfy some key nutritional requirements. He has obtained the following nutritional and cost information:

                   Grams of Ingredient per Serving
Ingredient           Steak   Potatoes   Daily Requirement (Grams)
Carbohydrates          5        15              ≥ 50
Protein               20         5              ≥ 40
Fat                   15         2              ≤ 60
Cost per serving      $8        $4

Ralph wishes to determine the number of daily servings (may be fractional) of steak and potatoes that will meet these requirements at a minimum cost.
(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model.
C (c) Use a computer to solve this model by the simplex method.

3.4-9. Web Mercantile sells many household products through an online catalog. The company needs substantial warehouse space for storing its goods. Plans now are being made for leasing warehouse storage space over the next 5 months. Just how much space will be required in each of these months is known. However, since these space requirements are quite different, it may be most economical to lease only the amount needed each month on a month-by-month basis. On the other hand, the additional cost for leasing space for additional months is much less than for the first month, so it may be less expensive to lease the maximum amount needed for the entire 5 months. Another option is the intermediate approach of changing the total amount of space leased (by adding a new lease and/or having an old lease expire) at least once but not every month. The space requirement and the leasing costs for the various leasing periods are as follows:

Month   Required Space (Sq. Ft.)      Leasing Period (Months)   Cost per Sq. Ft. Leased
1             30,000                            1                       $ 65
2             20,000                            2                       $100
3             40,000                            3                       $135
4             10,000                            4                       $160
5             50,000                            5                       $190

The objective is to minimize the total leasing cost for meeting the space requirements.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.4-10. Larry Edison is the director of the Computer Center for Buckly College. He now needs to schedule the staffing of the center. It is open from 8 A.M. until midnight. Larry has monitored the usage of the center at various times of the day, and determined that the following number of computer consultants are required:

Time of Day          Minimum Number of Consultants Required to Be on Duty
8 A.M.–noon                            4
Noon–4 P.M.                            8
4 P.M.–8 P.M.                         10
8 P.M.–midnight                        6

Two types of computer consultants can be hired: full-time and part-time.
The full-time consultants work for 8 consecutive hours in any of the following shifts: morning (8 A.M.–4 P.M.), afternoon (noon–8 P.M.), and evening (4 P.M.–midnight). Full-time consultants are paid $40 per hour.
Part-time consultants can be hired to work any of the four shifts listed in the above table. Part-time consultants are paid $30 per hour.
An additional requirement is that during every time period, there must be at least 2 full-time consultants on duty for every part-time consultant on duty.
Larry would like to determine how many full-time and how many part-time workers should work each shift to meet the above requirements at the minimum possible cost.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.4-11.* The Medequip Company produces precision medical diagnostic equipment at two factories. Three medical centers have placed orders for this month’s production output. The table below shows what the cost would be for shipping each unit from each factory to each of these customers. Also shown are the number of units that will be produced at each factory and the number of units ordered by each customer.

                        Unit Shipping Cost
From         Customer 1   Customer 2   Customer 3     Output
Factory 1       $600         $800         $700       400 units
Factory 2       $400         $900         $600       500 units
Order size    300 units    200 units    400 units

A decision now needs to be made about the shipping plan for how many units to ship from each factory to each customer.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.4-12.* Al Ferris has $60,000 that he wishes to invest now in order to use the accumulation for purchasing a retirement annuity in 5 years. After consulting with his financial adviser, he has been offered four types of fixed-income investments, which we will label as investments A, B, C, D.
Investments A and B are available at the beginning of each of the next 5 years (call them years 1 to 5). Each dollar invested in A at the beginning of a year returns $1.40 (a profit of $0.40) 2 years later (in time for immediate reinvestment). Each dollar invested in B at the beginning of a year returns $1.70 three years later.
Investments C and D will each be available at one time in the future. Each dollar invested in C at the beginning of year 2 returns $1.90 at the end of year 5. Each dollar invested in D at the beginning of year 5 returns $1.30 at the end of year 5.
Al wishes to know which investment plan maximizes the amount of money that can be accumulated by the beginning of year 6.
(a) All the functional constraints for this problem can be expressed as equality constraints. To do this, let At, Bt, Ct, and Dt be the amount invested in investment A, B, C, and D, respectively, at the beginning of year t for each t where the investment is available and will mature by the end of year 5. Also let Rt be the number of available dollars not invested at the beginning of year t (and so available for investment in a later year). Thus, the amount invested at the beginning of year t plus Rt must equal the number of dollars available for investment at that time. Write such an equation in terms of the relevant variables above for the beginning of each of the 5 years to obtain the five functional constraints for this problem.
(b) Formulate a complete linear programming model for this problem.
C (c) Solve this model by the simplex method.

3.4-13. The Metalco Company desires to blend a new alloy of 40 percent tin, 35 percent zinc, and 25 percent lead from several available alloys having the following properties:

                                Alloy
Property               1      2      3      4      5
Percentage of tin     60     25     45     20     50
Percentage of zinc    10     15     45     50     40
Percentage of lead    30     60     10     30     10
Cost ($/lb)           22     20     25     24     27

The objective is to determine the proportions of these alloys that should be blended to produce the new alloy at a minimum cost.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.4-14.* A cargo plane has three compartments for storing cargo: front, center, and back. These compartments have capacity limits on both weight and space, as summarized below:

Compartment    Weight Capacity (Tons)    Space Capacity (Cubic Feet)
Front                  12                        7,000
Center                 18                        9,000
Back                   10                        5,000

Furthermore, the weight of the cargo in the respective compartments must be the same proportion of that compartment’s weight capacity to maintain the balance of the airplane.
The following four cargoes have been offered for shipment on an upcoming flight as space is available:

Cargo    Weight (Tons)    Volume (Cubic Feet/Ton)    Profit ($/Ton)
1             20                   500                    320
2             16                   700                    400
3             25                   600                    360
4             13                   400                    290

Any portion of these cargoes can be accepted. The objective is to determine how much (if any) of each cargo should be accepted and how to distribute each among the compartments to maximize the total profit for the flight.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method to find one of its multiple optimal solutions.

3.4-15. Oxbridge University maintains a powerful mainframe computer for research use by its faculty, Ph.D. students, and research associates. During all working hours, an operator must be available to operate and maintain the computer, as well as to perform some programming services. Beryl Ingram, the director of the computer facility, oversees the operation.
It is now the beginning of the fall semester, and Beryl is confronted with the problem of assigning different working hours to her operators.
Because all the operators are currently enrolled in the university, they are available to work only a limited number of hours each day, as shown in the following table.

                             Maximum Hours of Availability
Operators   Wage Rate    Mon.   Tue.   Wed.   Thurs.   Fri.
K. C.       $25/hour       6      0      6       0       6
D. H.       $26/hour       0      6      0       6       0
H. B.       $24/hour       4      8      4       0       4
S. C.       $23/hour       5      5      5       0       5
K. S.       $28/hour       3      0      3       8       0
N. K.       $30/hour       0      0      0       6       2

There are six operators (four undergraduate students and two graduate students). They all have different wage rates because of differences in their experience with computers and in their programming ability. The above table shows their wage rates, along with the maximum number of hours that each can work each day.
Each operator is guaranteed a certain minimum number of hours per week that will maintain an adequate knowledge of the operation. This level is set arbitrarily at 8 hours per week for the undergraduate students (K. C., D. H., H. B., and S. C.) and 7 hours per week for the graduate students (K. S. and N. K.).
The computer facility is to be open for operation from 8 A.M. to 10 P.M. Monday through Friday with exactly one operator on duty during these hours. On Saturdays and Sundays, the computer is to be operated by other staff.
Because of a tight budget, Beryl has to minimize cost. She wishes to determine the number of hours she should assign to each operator on each day.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.4-16. Joyce and Marvin run a day care for preschoolers. They are trying to decide what to feed the children for lunches. They would like to keep their costs down, but also need to meet the nutritional requirements of the children. They have already decided to go with peanut butter and jelly sandwiches, and some combination of graham crackers, milk, and orange juice. The nutritional content of each food choice and its cost are given in the table below.
Food Item                    Calories from Fat   Total Calories   Vitamin C (mg)   Protein (g)   Cost (¢)
Bread (1 slice)                     10                  70               0               3            5
Peanut butter (1 tbsp)              75                 100               0               4            4
Strawberry jelly (1 tbsp)            0                  50               3               0            7
Graham cracker (1 cracker)          20                  60               0               1            8
Milk (1 cup)                        70                 150               2               8           15
Juice (1 cup)                        0                 100             120               1           35

The nutritional requirements are as follows. Each child should receive between 400 and 600 calories. No more than 30 percent of the total calories should come from fat. Each child should consume at least 60 milligrams (mg) of vitamin C and 12 grams (g) of protein. Furthermore, for practical reasons, each child needs exactly 2 slices of bread (to make the sandwich), at least twice as much peanut butter as jelly, and at least 1 cup of liquid (milk and/or juice). Joyce and Marvin would like to select the food choices for each child that minimize cost while meeting the above requirements.
(a) Formulate a linear programming model for this problem.
C (b) Solve this model by the simplex method.

3.5-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 3.5. Briefly describe how linear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

CHAPTER 3 INTRODUCTION TO LINEAR PROGRAMMING

3.5-2.* You are given the following data for a linear programming problem where the objective is to maximize the profit from allocating three resources to two nonnegative activities.

              Resource Usage per Unit of Each Activity
Resource        Activity 1    Activity 2    Amount of Resource Available
    1               2             1                    10
    2               3             3                    20
    3               2             4                    20
Contribution
per unit*          $20           $30

*Contribution per unit = profit per unit of the activity.

(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model.
(c) Display the model on an Excel spreadsheet.
(d) Use the spreadsheet to check the following solutions: (x1, x2) = (2, 2), (3, 3), (2, 4), (4, 2), (3, 4), (4, 3). Which of these solutions are feasible? Which of these feasible solutions has the best value of the objective function?
C (e) Use Solver to solve the model by the simplex method.
C (f) Use ASPE and its Solver to solve the model by the simplex method.

3.5-3. Ed Butler is the production manager for the Bilco Corporation, which produces three types of spare parts for automobiles. The manufacture of each part requires processing on each of two machines, with the following processing times (in hours):

                       Part
Machine        A        B        C
    1        0.02     0.03     0.05
    2        0.05     0.02     0.04

Each machine is available 40 hours per month. Each part manufactured will yield a unit profit as follows:

              Part
           A      B      C
Profit    $50    $40    $30

Ed wants to determine the mix of spare parts to produce in order to maximize total profit.
(a) Formulate a linear programming model for this problem.
(b) Display the model on an Excel spreadsheet.
(c) Make three guesses of your own choosing for the optimal solution. Use the spreadsheet to check each one for feasibility and, if feasible, to find the value of the objective function. Which feasible guess has the best objective function value?
C (d) Use Solver to solve the model by the simplex method.

3.5-4. You are given the following data for a linear programming problem where the objective is to minimize the cost of conducting two nonnegative activities so as to achieve three benefits that do not fall below their minimum levels.

              Benefit Contribution per Unit of Each Activity
Benefit         Activity 1    Activity 2    Minimum Acceptable Level
    1               5             3                    60
    2               2             2                    30
    3               7             9                   126
Unit cost          $60           $50

(a) Formulate a linear programming model for this problem.
D,I (b) Use the graphical method to solve this model.
(c) Display the model on an Excel spreadsheet.
(d) Use the spreadsheet to check the following solutions: (x1, x2) = (7, 7), (7, 8), (8, 7), (8, 8), (8, 9), (9, 8). Which of these solutions are feasible? Which of these feasible solutions has the best value of the objective function?
C (e) Use Solver to solve this model by the simplex method.
C (f) Use ASPE and its Solver to solve the model by the simplex method.

3.5-5.* Fred Jonasson manages a family-owned farm. To supplement several food products grown on the farm, Fred also raises pigs for market. He now wishes to determine the quantities of the available types of feed (corn, tankage, and alfalfa) that should be given to each pig. Since pigs will eat any mix of these feed types, the objective is to determine which mix will meet certain nutritional requirements at a minimum cost. The number of units of each type of basic nutritional ingredient contained within a kilogram of each feed type is given in the following table, along with the daily nutritional requirements and feed costs:

Nutritional       Kilogram     Kilogram      Kilogram      Minimum Daily
Ingredient        of Corn      of Tankage    of Alfalfa    Requirement
Carbohydrates        90            20            40            200
Protein              30            80            60            180
Vitamins             10            20            60            150
Cost                $2.10         $1.80         $1.50

(a) Formulate a linear programming model for this problem.
(b) Display the model on an Excel spreadsheet.
(c) Use the spreadsheet to check if (x1, x2, x3) = (1, 2, 2) is a feasible solution and, if so, what the daily cost would be for this diet. How many units of each nutritional ingredient would this diet provide daily?
(d) Take a few minutes to use a trial-and-error approach with the spreadsheet to develop your best guess for the optimal solution. What is the daily cost for your solution?
C (e) Use Solver to solve the model by the simplex method.
C (f) Use ASPE and its Solver to solve the model by the simplex method.
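Parts (d) of Probs. 3.5-2 and 3.5-4 amount to plugging trial solutions into the constraints, which a spreadsheet (or a few lines of code) can automate. The sketch below hard-codes the Prob. 3.5-2 data as tabulated above; treat those numbers as an assumption, since the table layout had to be reconstructed:

```python
def check_solutions(c, A, b, candidates):
    """Classify each candidate point as feasible/infeasible for
    maximize c.x subject to A x <= b, x >= 0, and report the best feasible one."""
    results = {}
    for x in candidates:
        feasible = all(xi >= 0 for xi in x) and all(
            sum(a * xi for a, xi in zip(row, x)) <= bi
            for row, bi in zip(A, b))
        z = sum(ci * xi for ci, xi in zip(c, x))
        results[x] = (feasible, z)
    # Best objective value among the feasible candidates
    best = max((x for x in candidates if results[x][0]),
               key=lambda x: results[x][1])
    return results, best

# Prob. 3.5-2 data (as reconstructed from the table above)
c = [20, 30]                       # profit per unit of each activity
A = [[2, 1], [3, 3], [2, 4]]       # resource usage per unit of each activity
b = [10, 20, 20]                   # amount of each resource available
candidates = [(2, 2), (3, 3), (2, 4), (4, 2), (3, 4), (4, 3)]
results, best = check_solutions(c, A, b, candidates)
```

With the data as assumed here, (3, 4) and (4, 3) each violate a resource constraint, and (2, 4) is the best of the feasible trial solutions.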
3.5-6. Maureen Laird is the chief financial officer for the Alva Electric Co., a major public utility in the Midwest. The company has scheduled the construction of new hydroelectric plants 5, 10, and 20 years from now to meet the needs of the growing population in the region served by the company. To cover at least the construction costs, Maureen needs to invest some of the company's money now to meet these future cash-flow needs. Maureen may purchase only three kinds of financial assets, each of which costs $1 million per unit. Fractional units may be purchased. The assets produce income 5, 10, and 20 years from now, and that income is needed to cover at least minimum cash-flow requirements in those years. (Any excess income above the minimum requirement for each time period will be used to increase dividend payments to shareholders rather than saving it to help meet the minimum cash-flow requirement in the next time period.) The following table shows both the amount of income generated by each unit of each asset and the minimum amount of income needed for each of the future time periods when a new hydroelectric plant will be constructed.

                    Income per Unit of Asset
Year      Asset 1         Asset 2         Asset 3         Minimum Cash Flow Required
  5      $2 million      $1 million      $0.5 million          $400 million
 10      $0.5 million    $0.5 million    $1 million            $100 million
 20       0              $1.5 million    $2 million            $300 million

Maureen wishes to determine the mix of investments in these assets that will cover the cash-flow requirements while minimizing the total amount invested.
(a) Formulate a linear programming model for this problem.
(b) Display the model on a spreadsheet.
(c) Use the spreadsheet to check the possibility of purchasing 100 units of Asset 1, 100 units of Asset 2, and 200 units of Asset 3. How much cash flow would this mix of investments generate 5, 10, and 20 years from now? What would be the total amount invested?
(d) Take a few minutes to use a trial-and-error approach with the spreadsheet to develop your best guess for the optimal solution. What is the total amount invested for your solution?
C (e) Use Solver to solve the model by the simplex method.
C (f) Use ASPE and its Solver to solve the model by the simplex method.

3.6-1. The Philbrick Company has two plants on opposite sides of the United States. Each of these plants produces the same two products and then sells them to wholesalers within its half of the country. The orders from wholesalers have already been received for the next 2 months (February and March), where the number of units requested are shown below. (The company is not obligated to completely fill these orders but will do so if it can without decreasing its profits.)

                  Plant 1                  Plant 2
Product     February     March       February     March
   1          3,600      6,300         4,900      4,200
   2          4,500      5,400         5,100      6,000

Each plant has 20 production days available in February and 23 production days available in March to produce and ship these products. Inventories are depleted at the end of January, but each plant has enough inventory capacity to hold 1,000 units total of the two products if an excess amount is produced in February for sale in March. In either plant, the cost of holding inventory in this way is $3 per unit of product 1 and $4 per unit of product 2.
Each plant has the same two production processes, each of which can be used to produce either of the two products. The production cost per unit produced of each product is shown below for each process in each plant.

                  Plant 1                  Plant 2
Product     Process 1   Process 2    Process 1   Process 2
   1           $62         $59          $61         $65
   2           $78         $85          $89         $86

The production rate for each product (number of units produced per day devoted to that product) also is given below for each process in each plant.

                  Plant 1                  Plant 2
Product     Process 1   Process 2    Process 1   Process 2
   1           100         140          130         110
   2           120         150          160         130

The net sales revenue (selling price minus normal shipping costs) the company receives when a plant sells the products to its own customers (the wholesalers in its half of the country) is $83 per unit of product 1 and $112 per unit of product 2. However, it also is possible (and occasionally desirable) for a plant to make a shipment to the other half of the country to help fill the sales of the other plant. When this happens, an extra shipping cost of $9 per unit of product 1 and $7 per unit of product 2 is incurred.
Management now needs to determine how much of each product should be produced by each production process in each plant during each month, as well as how much each plant should sell of each product in each month and how much each plant should ship of each product in each month to the other plant's customers. The objective is to determine which feasible plan would maximize the total profit (total net sales revenue minus the sum of the production costs, inventory costs, and extra shipping costs).
(a) Formulate a complete linear programming model in algebraic form that shows the individual constraints and decision variables for this problem.
C (b) Formulate this same model on an Excel spreadsheet instead. Then use the Excel Solver to solve the model.
C (c) Use MPL to formulate this model in a compact form. Then use an MPL solver to solve the model.
C (d) Use LINGO to formulate this model in a compact form. Then use the LINGO solver to solve the model.

3.6-2. Reconsider Prob. 3.1-11.
(a) Use MPL/Solvers to formulate and solve the model for this problem.
(b) Use LINGO to formulate and solve this model.

C 3.6-3. Reconsider Prob. 3.4-11.
(a) Use MPL/Solvers to formulate and solve the model for this problem.
(b) Use LINGO to formulate and solve this model.

C 3.6-4. Reconsider Prob. 3.4-15.
(a) Use MPL/Solvers to formulate and solve the model for this problem.
(b) Use LINGO to formulate and solve this model.

C 3.6-5. Reconsider Prob. 3.5-5.
(a) Use MPL/Solvers to formulate and solve the model for this problem.
(b) Use LINGO to formulate and solve this model.

C 3.6-6. Reconsider Prob. 3.5-6.
(a) Use MPL/Solvers to formulate and solve the model for this problem.
(b) Use LINGO to formulate and solve this model.

C 3.6-7. A large paper manufacturing company, the Quality Paper Corporation, has 10 paper mills from which it needs to supply 1,000 customers. It uses three alternative types of machines and four types of raw materials to make five different types of paper. Therefore, the company needs to develop a detailed production and distribution plan on a monthly basis, with an objective of minimizing the total cost of producing and distributing the paper during the month. Specifically, it is necessary to determine jointly the amount of each type of paper to be made at each paper mill on each type of machine and the amount of each type of paper to be shipped from each paper mill to each customer. The relevant data can be expressed symbolically as follows:

Djk = number of units of paper type k demanded by customer j,
rklm = number of units of raw material m needed to produce 1 unit of paper type k on machine type l,
Rim = number of units of raw material m available at paper mill i,
ckl = number of capacity units of machine type l that will produce 1 unit of paper type k,
Cil = number of capacity units of machine type l available at paper mill i,
Pikl = production cost for each unit of paper type k produced on machine type l at paper mill i,
Tijk = transportation cost for each unit of paper type k shipped from paper mill i to customer j.

(a) Using these symbols, formulate a linear programming model for this problem by hand.
(b) How many functional constraints and decision variables does this model have?
C (c) Use MPL to formulate this problem.
C (d) Use LINGO to formulate this problem.

3.6-8. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 3.6. Briefly describe how linear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

3.7-1. From the bottom part of the selected references given at the end of the chapter, select one of these award-winning applications of linear programming. Read this article and then write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.

3.7-2. From the bottom part of the selected references given at the end of the chapter, select three of these award-winning applications of linear programming. For each one, read the article and then write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.

■ CASES

CASE 3.1 Auto Assembly

Automobile Alliance, a large automobile manufacturing company, organizes the vehicles it manufactures into three families: a family of trucks, a family of small cars, and a family of midsized and luxury cars. One plant outside Detroit, MI, assembles two models from the family of midsized and luxury cars. The first model, the Family Thrillseeker, is a four-door sedan with vinyl seats, plastic interior, standard features, and excellent gas mileage. It is marketed as a smart buy for middle-class families with tight budgets, and each Family Thrillseeker sold generates a modest profit of $3,600 for the company.
The second model, the Classy Cruiser, is a two-door luxury sedan with leather seats, wooden interior, custom features, and navigational capabilities. It is marketed as a privilege of affluence for upper-middle-class families, and each Classy Cruiser sold generates a healthy profit of $5,400 for the company.
Rachel Rosencrantz, the manager of the assembly plant, is currently deciding the production schedule for the next month. Specifically, she must decide how many Family Thrillseekers and how many Classy Cruisers to assemble in the plant to maximize profit for the company. She knows that the plant possesses a capacity of 48,000 labor-hours during the month. She also knows that it takes 6 labor-hours to assemble one Family Thrillseeker and 10.5 labor-hours to assemble one Classy Cruiser.
Because the plant is simply an assembly plant, the parts required to assemble the two models are not produced at the plant. They are instead shipped from other plants around the Michigan area to the assembly plant. For example, tires, steering wheels, windows, seats, and doors all arrive from various supplier plants. For the next month, Rachel knows that she will be able to obtain only 20,000 doors (10,000 left-hand doors and 10,000 right-hand doors) from the door supplier. A recent labor strike forced the shutdown of that particular supplier plant for several days, and that plant will not be able to meet its production schedule for the next month. Both the Family Thrillseeker and the Classy Cruiser use the same door part.
In addition, a recent company forecast of the monthly demands for different automobile models suggests that the demand for the Classy Cruiser is limited to 3,500 cars. There is no limit on the demand for the Family Thrillseeker within the capacity limits of the assembly plant.
(a) Formulate and solve a linear programming problem to determine the number of Family Thrillseekers and the number of Classy Cruisers that should be assembled.
Before she makes her final production decisions, Rachel plans to explore the following questions independently except where otherwise indicated.
(b) The marketing department knows that it can pursue a targeted $500,000 advertising campaign that will raise the demand for the Classy Cruiser next month by 20 percent. Should the campaign be undertaken?
(c) Rachel knows that she can increase next month's plant capacity by using overtime labor. She can increase the plant's labor-hour capacity by 25 percent. With the new assembly plant capacity, how many Family Thrillseekers and how many Classy Cruisers should be assembled?
(d) Rachel knows that overtime labor does not come without an extra cost. What is the maximum amount she should be willing to pay for all overtime labor beyond the cost of this labor at regular time rates? Express your answer as a lump sum.
(e) Rachel explores the option of using both the targeted advertising campaign and the overtime labor-hours. The advertising campaign raises the demand for the Classy Cruiser by 20 percent, and the overtime labor increases the plant's labor-hour capacity by 25 percent. How many Family Thrillseekers and how many Classy Cruisers should be assembled using the advertising campaign and overtime labor-hours if the profit from each Classy Cruiser sold continues to be 50 percent more than for each Family Thrillseeker sold?
(f) Knowing that the advertising campaign costs $500,000 and the maximum usage of overtime labor-hours costs $1,600,000 beyond regular time rates, is the solution found in part (e) a wise decision compared to the solution found in part (a)?
(g) Automobile Alliance has determined that dealerships are actually heavily discounting the price of the Family Thrillseekers to move them off the lot. Because of a profit-sharing agreement with its dealers, the company is therefore not making a profit of $3,600 on the Family Thrillseeker but is instead making a profit of $2,800.
Determine the number of Family Thrillseekers and the number of Classy Cruisers that should be assembled given this new discounted price.
(h) The company has discovered quality problems with the Family Thrillseeker by randomly testing Thrillseekers at the end of the assembly line. Inspectors have discovered that in over 60 percent of the cases, two of the four doors on a Thrillseeker do not seal properly. Because the percentage of defective Thrillseekers determined by the random testing is so high, the floor supervisor has decided to perform quality control tests on every Thrillseeker at the end of the line. Because of the added tests, the time it takes to assemble one Family Thrillseeker has increased from 6 to 7.5 hours. Determine the number of units of each model that should be assembled given the new assembly time for the Family Thrillseeker.
(i) The board of directors of Automobile Alliance wishes to capture a larger share of the luxury sedan market and therefore would like to meet the full demand for Classy Cruisers. They ask Rachel to determine by how much the profit of her assembly plant would decrease as compared to the profit found in part (a). They then ask her to meet the full demand for Classy Cruisers if the decrease in profit is not more than $2,000,000.
(j) Rachel now makes her final decision by combining all the new considerations described in parts (f), (g), and (h). What are her final decisions on whether to undertake the advertising campaign, whether to use overtime labor, the number of Family Thrillseekers to assemble, and the number of Classy Cruisers to assemble?

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 3.2 Cutting Cafeteria Costs

This case focuses on a subject that is dear to the heart of many students. How should the manager of a college cafeteria choose the ingredients of a casserole dish to make it sufficiently tasty for the students while also minimizing costs?
In this case, linear programming models with only two decision variables can be used to address seven specific issues being faced by the manager.

CASE 3.3 Staffing a Call Center

California Children's Hospital currently uses a confusing, decentralized appointment and registration process for its patients. Therefore, the decision has been made to centralize the process by establishing one call center devoted exclusively to appointments and registration. The hospital manager now needs to develop a plan for how many employees of each kind (full-time or part-time, English speaking, Spanish speaking, or bilingual) should be hired for each of several possible work shifts. Linear programming is needed to find a plan that minimizes the total cost of providing a satisfactory level of service throughout the 14 hours that the call center will be open each weekday. The model requires more than two decision variables, so a software package such as described in Sec. 3.5 or Sec. 3.6 will be needed to solve the two versions of the model.

CASE 3.4 Promoting a Breakfast Cereal

The vice president for marketing of the Super Grain Corporation needs to develop a promotional campaign for the company's new breakfast cereal. Three advertising media have been chosen for the campaign, but decisions now need to be made regarding how much of each medium should be used. Constraints include a limited advertising budget, a limited planning budget, and a limited number of TV commercial spots available, as well as requirements for effectively reaching two special target audiences (young children and parents of young children) and for making full use of a rebate program. The corresponding linear programming model requires more than two decision variables, so a software package such as described in Sec. 3.5 or Sec. 3.6 will be needed to solve the model. This case also asks for an analysis of how well the four assumptions of linear programming are satisfied for this problem.
Does linear programming actually provide a reasonable basis for managerial decision making in this situation? (Case 13.3 will provide a continuation of this case.)

CHAPTER 4
Solving Linear Programming Problems: The Simplex Method

We now are ready to begin studying the simplex method, a general procedure for solving linear programming problems. Developed by the brilliant George Dantzig1 in 1947, it has proved to be a remarkably efficient method that is used routinely to solve huge problems on today's computers. Except for its use on tiny problems, this method is always executed on a computer, and sophisticated software packages are widely available. Extensions and variations of the simplex method also are used to perform postoptimality analysis (including sensitivity analysis) on the model.
This chapter describes and illustrates the main features of the simplex method. The first section introduces its general nature, including its geometric interpretation. The following three sections then develop the procedure for solving any linear programming model that is in our standard form (maximization, all functional constraints in ≤ form, and nonnegativity constraints on all variables) and has only nonnegative right-hand sides bi in the functional constraints. Certain details on resolving ties are deferred to Sec. 4.5, and Sec. 4.6 describes how to adapt the simplex method to other model forms. Next we discuss postoptimality analysis (Sec. 4.7), and describe the computer implementation of the simplex method (Sec. 4.8). Section 4.9 then introduces an alternative to the simplex method (the interior-point approach) for solving huge linear programming problems.

■ 4.1 THE ESSENCE OF THE SIMPLEX METHOD

The simplex method is an algebraic procedure. However, its underlying concepts are geometric. Understanding these geometric concepts provides a strong intuitive feeling for how the simplex method operates and what makes it so efficient.
Therefore, before delving into algebraic details, we focus in this section on the big picture from a geometric viewpoint.

1 Widely revered as perhaps the most important pioneer of operations research, George Dantzig is commonly referred to as the father of linear programming because of the development of the simplex method and many key subsequent contributions. The authors had the privilege of being his faculty colleagues in the Department of Operations Research at Stanford University for nearly 30 years. Dr. Dantzig remained professionally active right up until he passed away in 2005 at the age of 90.

■ FIGURE 4.1 Constraint boundaries and corner-point solutions for the Wyndor Glass Co. problem. [The figure graphs the model below, showing the feasible region, the five constraint boundary lines, and the corner points (0, 0), (0, 6), (2, 6), (4, 3), (4, 0), (0, 9), (4, 6), and (6, 0).]

Maximize Z = 3x1 + 5x2,
subject to
      x1        ≤ 4
           2x2  ≤ 12
     3x1 + 2x2  ≤ 18
and
     x1 ≥ 0, x2 ≥ 0.

To illustrate the general geometric concepts, we shall use the Wyndor Glass Co. example presented in Sec. 3.1. (Sections 4.2 and 4.3 use the algebra of the simplex method to solve this same example.) Section 5.1 will elaborate further on these geometric concepts for larger problems. To refresh your memory, the model and graph for this example are repeated in Fig. 4.1. The five constraint boundaries and their points of intersection are highlighted in this figure because they are the keys to the analysis. Here, each constraint boundary is a line that forms the boundary of what is permitted by the corresponding constraint. The points of intersection are the corner-point solutions of the problem. The five that lie on the corners of the feasible region—(0, 0), (0, 6), (2, 6), (4, 3), and (4, 0)—are the corner-point feasible solutions (CPF solutions). [The other three—(0, 9), (4, 6), and (6, 0)—are called corner-point infeasible solutions.]
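This classification of corner points is easy to verify numerically: intersect every pair of the five constraint boundaries of the Wyndor model and test each intersection point against all of the constraints. The following sketch (not from the book) does exactly that:

```python
from itertools import combinations

# Each boundary as (a1, a2, rhs) for the line a1*x1 + a2*x2 = rhs.
boundaries = [(1, 0, 0),   # x1 = 0
              (0, 1, 0),   # x2 = 0
              (1, 0, 4),   # x1 = 4
              (0, 2, 12),  # 2x2 = 12
              (3, 2, 18)]  # 3x1 + 2x2 = 18

def intersect(l1, l2):
    """Solve the 2x2 system for a pair of boundary lines; None if parallel."""
    (a, b, e), (c, d, f) = l1, l2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(x1, x2):
    """Check all five constraints of the Wyndor model (with a tolerance)."""
    return (x1 >= -1e-9 and x2 >= -1e-9 and x1 <= 4 + 1e-9
            and 2 * x2 <= 12 + 1e-9 and 3 * x1 + 2 * x2 <= 18 + 1e-9)

corner_points = {p for l1, l2 in combinations(boundaries, 2)
                 if (p := intersect(l1, l2)) is not None}
cpf = sorted(p for p in corner_points if feasible(*p))
infeasible_corners = sorted(p for p in corner_points if not feasible(*p))
```

Two of the ten pairs are parallel, so eight corner-point solutions result: the five CPF solutions named above plus the three corner-point infeasible solutions (0, 9), (4, 6), and (6, 0).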
In this example, each corner-point solution lies at the intersection of two constraint boundaries. (For a linear programming problem with n decision variables, each of its corner-point solutions lies at the intersection of n constraint boundaries.2) Certain pairs of the CPF solutions in Fig. 4.1 share a constraint boundary, and other pairs do not. It will be important to distinguish between these cases by using the following general definitions.

For any linear programming problem with n decision variables, two CPF solutions are adjacent to each other if they share n − 1 constraint boundaries. The two adjacent CPF solutions are connected by a line segment that lies on these same shared constraint boundaries. Such a line segment is referred to as an edge of the feasible region.

Since n = 2 in the example, two of its CPF solutions are adjacent if they share one constraint boundary; for example, (0, 0) and (0, 6) are adjacent because they share the x1 = 0 constraint boundary. The feasible region in Fig. 4.1 has five edges, consisting of the five line segments forming the boundary of this region. Note that two edges emanate from each CPF solution. Thus, each CPF solution has two adjacent CPF solutions (each lying at the other end of one of the two edges), as enumerated in Table 4.1.

2 Although a corner-point solution is defined in terms of n constraint boundaries whose intersection gives this solution, it also is possible that one or more additional constraint boundaries pass through this same point.

■ TABLE 4.1 Adjacent CPF solutions for each CPF solution of the Wyndor Glass Co. problem

CPF Solution     Its Adjacent CPF Solutions
(0, 0)           (0, 6) and (4, 0)
(0, 6)           (2, 6) and (0, 0)
(2, 6)           (4, 3) and (0, 6)
(4, 3)           (4, 0) and (2, 6)
(4, 0)           (0, 0) and (4, 3)
(In each row of this table, the CPF solution in the first column is adjacent to each of the two CPF solutions in the second column, but the two CPF solutions in the second column are not adjacent to each other.)
One reason for our interest in adjacent CPF solutions is the following general property about such solutions, which provides a very useful way of checking whether a CPF solution is an optimal solution.

Optimality test: Consider any linear programming problem that possesses at least one optimal solution. If a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then it must be an optimal solution.

Thus, for the example, (2, 6) must be optimal simply because its Z = 36 is larger than Z = 30 for (0, 6) and Z = 27 for (4, 3). (We will delve further into why this property holds in Sec. 5.1.) This optimality test is the one used by the simplex method for determining when an optimal solution has been reached.
Now we are ready to apply the simplex method to the example.

Solving the Example

Here is an outline of what the simplex method does (from a geometric viewpoint) to solve the Wyndor Glass Co. problem. At each step, first the conclusion is stated and then the reason is given in parentheses. (Refer to Fig. 4.1 for a visualization.)

Initialization: Choose (0, 0) as the initial CPF solution to examine. (This is a convenient choice because no calculations are required to identify this CPF solution.)

Optimality Test: Conclude that (0, 0) is not an optimal solution. (Adjacent CPF solutions are better.)

Iteration 1: Move to a better adjacent CPF solution, (0, 6), by performing the following three steps.
1. Considering the two edges of the feasible region that emanate from (0, 0), choose to move along the edge that leads up the x2 axis. (With an objective function of Z = 3x1 + 5x2, moving up the x2 axis increases Z at a faster rate than moving along the x1 axis.)
2. Stop at the first new constraint boundary: 2x2 = 12.
[Moving farther in the direction selected in step 1 leaves the feasible region; e.g., moving to the second new constraint boundary hit when moving in that direction gives (0, 9), which is a corner-point infeasible solution.]
3. Solve for the intersection of the new set of constraint boundaries: (0, 6). (The equations for these constraint boundaries, x1 = 0 and 2x2 = 12, immediately yield this solution.)

Optimality Test: Conclude that (0, 6) is not an optimal solution. (An adjacent CPF solution is better.)

■ FIGURE 4.2 This graph shows the sequence of CPF solutions (circled 0, 1, 2) examined by the simplex method for the Wyndor Glass Co. problem. The optimal solution (2, 6) is found after just three solutions are examined. [The figure marks (0, 0) with Z = 0, (0, 6) with Z = 30, (2, 6) with Z = 36, and (4, 3) with Z = 27 on the feasible region of Fig. 4.1.]

Iteration 2: Move to a better adjacent CPF solution, (2, 6), by performing the following three steps.
1. Considering the two edges of the feasible region that emanate from (0, 6), choose to move along the edge that leads to the right. (Moving along this edge increases Z, whereas backtracking to move back down the x2 axis decreases Z.)
2. Stop at the first new constraint boundary encountered when moving in that direction: 3x1 + 2x2 = 18. (Moving farther in the direction selected in step 1 leaves the feasible region.)
3. Solve for the intersection of the new set of constraint boundaries: (2, 6). (The equations for these constraint boundaries, 3x1 + 2x2 = 18 and 2x2 = 12, immediately yield this solution.)

Optimality Test: Conclude that (2, 6) is an optimal solution, so stop. (None of the adjacent CPF solutions are better.)

This sequence of CPF solutions examined is shown in Fig. 4.2, where each circled number identifies which iteration obtained that solution.
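The outline above can be turned into a tiny tableau implementation for problems already in our standard form with nonnegative right-hand sides. This is only a sketch of the idea (Dantzig's entering rule, the minimum ratio test, and Gaussian pivoting); the book develops the real algebraic machinery in the next sections, and this toy code ignores degeneracy and unbounded problems:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming all b[i] >= 0
    so that the origin is the initial CPF solution. Toy sketch only."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I (slack columns) | b]; bottom row: [-c | 0 | Z].
    T = [list(map(float, A[i])) + [float(i == j) for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))  # the slack variables start in the basis
    while True:
        # Entering variable: steepest rate of improvement in Z.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break  # optimality test passed: no edge improves Z
        # Minimum ratio test: first constraint boundary hit along this edge.
        row = min((i for i in range(m) if T[i][col] > 1e-9),
                  key=lambda i: T[i][-1] / T[i][col])
        # Pivot: make the entering variable basic in the chosen row.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row:
                factor = T[i][col]
                T[i] = [v - factor * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]
```

Running it on the Wyndor data, `simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])`, pivots through the same CPF solutions as the geometric walk above and ends at (2, 6) with Z = 36.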
(See the Solved Examples section on the book's website for another example of how the simplex method marches through a sequence of CPF solutions to reach the optimal solution.)
Now let us look at the six key solution concepts of the simplex method that provide the rationale behind the above steps. (Keep in mind that these concepts also apply for solving problems with more than two decision variables where a graph like Fig. 4.2 is not available to help quickly find an optimal solution.)

The Key Solution Concepts

The first solution concept is based directly on the relationship between optimal solutions and CPF solutions given at the end of Sec. 3.2.

Solution concept 1: The simplex method focuses solely on CPF solutions. For any problem with at least one optimal solution, finding one requires only finding a best CPF solution.3

3 The only restriction is that the problem must possess CPF solutions. This is ensured if the feasible region is bounded.

Since the number of feasible solutions generally is infinite, reducing the number of solutions that need to be examined to a small finite number (just three in Fig. 4.2) is a tremendous simplification. The next solution concept defines the flow of the simplex method.

Solution concept 2: The simplex method is an iterative algorithm (a systematic solution procedure that keeps repeating a fixed series of steps, called an iteration, until a desired result has been obtained) with the following structure:

Initialization: Set up to start iterations, including finding an initial CPF solution.
Optimality test: Is the current CPF solution optimal? If yes, stop. If no, perform an iteration.
Iteration: Perform an iteration to find a better CPF solution, then return to the optimality test.

When the example was solved, note how this flow diagram was followed through two iterations until an optimal solution was found. We next focus on how to get started.
Solution concept 3: Whenever possible, the initialization of the simplex method chooses the origin (all decision variables equal to zero) to be the initial CPF solution. When there are too many decision variables to find an initial CPF solution graphically, this choice eliminates the need to use algebraic procedures to find and solve for an initial CPF solution.

Choosing the origin commonly is possible when all the decision variables have nonnegativity constraints, because the intersection of these constraint boundaries yields the origin as a corner-point solution. This solution then is a CPF solution unless it is infeasible because it violates one or more of the functional constraints. If it is infeasible, special procedures described in Sec. 4.6 are needed to find the initial CPF solution. The next solution concept concerns the choice of a better CPF solution at each iteration.

Solution concept 4: Given a CPF solution, it is much quicker computationally to gather information about its adjacent CPF solutions than about other CPF solutions. Therefore, each time the simplex method performs an iteration to move from the current CPF solution to a better one, it always chooses a CPF solution that is adjacent to the current one. No other CPF solutions are considered. Consequently, the entire path followed to eventually reach an optimal solution is along the edges of the feasible region.

The next focus is on which adjacent CPF solution to choose at each iteration.

Solution concept 5: After the current CPF solution is identified, the simplex method examines each of the edges of the feasible region that emanate from this CPF solution. Each of these edges leads to an adjacent CPF solution at the other end, but the simplex method does not even take the time to solve for the adjacent CPF solution. Instead, it simply identifies the rate of improvement in Z that would be obtained by moving along the edge.
Among the edges with a positive rate of improvement in Z, it then chooses to move along the one with the largest rate of improvement in Z. The iteration is completed by first solving for the adjacent CPF solution at the other end of this one edge and then relabeling this adjacent CPF solution as the current CPF solution for the optimality test and (if needed) the next iteration.

At the first iteration of the example, moving from (0, 0) along the edge on the x1 axis would give a rate of improvement in Z of 3 (Z increases by 3 per unit increase in x1), whereas moving along the edge on the x2 axis would give a rate of improvement in Z of 5 (Z increases by 5 per unit increase in x2), so the decision is made to move along the latter edge. At the second iteration, the only edge emanating from (0, 6) that would yield a positive rate of improvement in Z is the edge leading to (2, 6), so the decision is made to move next along this edge.

The final solution concept clarifies how the optimality test is performed efficiently.

Solution concept 6: Solution concept 5 describes how the simplex method examines each of the edges of the feasible region that emanate from the current CPF solution. This examination of an edge leads to quickly identifying the rate of improvement in Z that would be obtained by moving along the edge toward the adjacent CPF solution at the other end. A positive rate of improvement in Z implies that the adjacent CPF solution is better than the current CPF solution, whereas a negative rate of improvement in Z implies that the adjacent CPF solution is worse. Therefore, the optimality test consists simply of checking whether any of the edges give a positive rate of improvement in Z. If none do, then the current CPF solution is optimal.

In the example, moving along either edge from (2, 6) decreases Z.
Since we want to maximize Z, this fact immediately gives the conclusion that (2, 6) is optimal.

■ 4.2 SETTING UP THE SIMPLEX METHOD

Section 4.1 stressed the geometric concepts that underlie the simplex method. However, this algorithm normally is run on a computer, which can follow only algebraic instructions. Therefore, it is necessary to translate the conceptually geometric procedure just described into a usable algebraic procedure. In this section, we introduce the algebraic language of the simplex method and relate it to the concepts of the preceding section.

The algebraic procedure is based on solving systems of equations. Therefore, the first step in setting up the simplex method is to convert the functional inequality constraints to equivalent equality constraints. (The nonnegativity constraints are left as inequalities because they are treated separately.) This conversion is accomplished by introducing slack variables. To illustrate, consider the first functional constraint in the Wyndor Glass Co. example of Sec. 3.1,

x1 ≤ 4.

The slack variable for this constraint is defined to be

x3 = 4 − x1,

which is the amount of slack in the left-hand side of the inequality. Thus,

x1 + x3 = 4.

Given this equation, x1 ≤ 4 if and only if 4 − x1 = x3 ≥ 0. Therefore, the original constraint x1 ≤ 4 is entirely equivalent to the pair of constraints

x1 + x3 = 4    and    x3 ≥ 0.

Upon the introduction of slack variables for the other functional constraints, the original linear programming model for the example (shown first below) can now be replaced by the equivalent model (called the augmented form of the model) shown after it:

Original Form of the Model

Maximize Z = 3x1 + 5x2,
subject to
    x1          ≤ 4
          2x2   ≤ 12
    3x1 + 2x2   ≤ 18
and
    x1 ≥ 0,  x2 ≥ 0.

Augmented Form of the Model4

Maximize Z = 3x1 + 5x2,
subject to
(1)    x1              + x3             = 4
(2)          2x2             + x4       = 12
(3)    3x1 + 2x2                  + x5  = 18
and
    xj ≥ 0,  for j = 1, 2, 3, 4, 5.
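Appending slack variables is mechanical enough to automate: one slack per ≤ constraint, carried in an identity block next to the constraint coefficients. A small sketch with NumPy and the Wyndor data (the helper name `augment` is ours):

```python
import numpy as np

# Functional constraints of the Wyndor Glass Co. example: A x <= b.
A = np.array([[1.0, 0.0],    # x1        <= 4
              [0.0, 2.0],    #      2x2  <= 12
              [3.0, 2.0]])   # 3x1 + 2x2 <= 18
b = np.array([4.0, 12.0, 18.0])

def augment(A, b):
    """Turn A x <= b into [A | I](x, s) = b with slack variables s >= 0."""
    return np.hstack([A, np.eye(A.shape[0])]), b

A_aug, b_aug = augment(A, b)   # A_aug is 3 x 5: columns x1, x2, x3, x4, x5

# The slack values at any point are b - A x; e.g., at the solution (3, 2):
slacks = b - A @ np.array([3.0, 2.0])   # -> [1, 8, 5]
```

At (3, 2) the slacks come out to x3 = 1, x4 = 8, x5 = 5; the sign of each slack reports the geometry discussed next (zero means on the constraint boundary, positive the feasible side, negative the infeasible side).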
Although both forms of the model represent exactly the same problem, the new form is much more convenient for algebraic manipulation and for identification of CPF solutions. We call this the augmented form of the problem because the original form has been augmented by some supplementary variables needed to apply the simplex method.

If a slack variable equals 0 in the current solution, then this solution lies on the constraint boundary for the corresponding functional constraint. A value greater than 0 means that the solution lies on the feasible side of this constraint boundary, whereas a value less than 0 means that the solution lies on the infeasible side of this constraint boundary. A demonstration of these properties is provided by the demonstration example in your OR Tutor entitled Interpretation of the Slack Variables.

The terminology used in Sec. 4.1 (corner-point solutions, etc.) applies to the original form of the problem. We now introduce the corresponding terminology for the augmented form.

An augmented solution is a solution for the original variables (the decision variables) that has been augmented by the corresponding values of the slack variables.

For example, augmenting the solution (3, 2) in the example yields the augmented solution (3, 2, 1, 8, 5) because the corresponding values of the slack variables are x3 = 1, x4 = 8, and x5 = 5.

A basic solution is an augmented corner-point solution.

To illustrate, consider the corner-point infeasible solution (4, 6) in Fig. 4.1. Augmenting it with the resulting values of the slack variables x3 = 0, x4 = 0, and x5 = −6 yields the corresponding basic solution (4, 6, 0, 0, −6).

The fact that corner-point solutions (and so basic solutions) can be either feasible or infeasible implies the following definition:

A basic feasible (BF) solution is an augmented CPF solution.

Thus, the CPF solution (0, 6) in the example is equivalent to the BF solution (0, 6, 4, 0, 6) for the problem in augmented form.
The only difference between basic solutions and corner-point solutions (or between BF solutions and CPF solutions) is whether the values of the slack variables are included. For any basic solution, the corresponding corner-point solution is obtained simply by deleting the slack variables. Therefore, the geometric and algebraic relationships between these two solutions are very close, as described in Sec. 5.1.

4 The slack variables are not shown in the objective function because the coefficients there are 0.

Because the terms basic solution and basic feasible solution are very important parts of the standard vocabulary of linear programming, we now need to clarify their algebraic properties. For the augmented form of the example, notice that the system of functional constraints has 5 variables and 3 equations, so

Number of variables − number of equations = 5 − 3 = 2.

This fact gives us 2 degrees of freedom in solving the system, since any two variables can be chosen to be set equal to any arbitrary value in order to solve the three equations in terms of the remaining three variables.5 The simplex method uses zero for this arbitrary value. Thus, two of the variables (called the nonbasic variables) are set equal to zero, and then the simultaneous solution of the three equations for the other three variables (called the basic variables) is a basic solution. These properties are described in the following general definitions.

A basic solution has the following properties:

1. Each variable is designated as either a nonbasic variable or a basic variable.
2. The number of basic variables equals the number of functional constraints (now equations). Therefore, the number of nonbasic variables equals the total number of variables minus the number of functional constraints.
3. The nonbasic variables are set equal to zero.
4. The values of the basic variables are obtained as the simultaneous solution of the system of equations (functional constraints in augmented form). (The set of basic variables is often referred to as the basis.)
5. If the basic variables satisfy the nonnegativity constraints, the basic solution is a BF solution.

To illustrate these definitions, consider again the BF solution (0, 6, 4, 0, 6). This solution was obtained before by augmenting the CPF solution (0, 6). However, another way to obtain this same solution is to choose x1 and x4 to be the two nonbasic variables, and so the two variables are set equal to zero. The three equations then yield, respectively, x3 = 4, x2 = 6, and x5 = 6 as the solution for the three basic variables, as shown below (with the basic variables in bold type):

(1)    x1              + x3             = 4
(2)          2x2             + x4       = 12
(3)    3x1 + 2x2                  + x5  = 18

x1 = 0 and x4 = 0, so

x3 = 4,  x2 = 6,  x5 = 6.

Because all three of these basic variables are nonnegative, this basic solution (0, 6, 4, 0, 6) is indeed a BF solution. The Solved Examples section of the book's website includes another example of the relationship between CPF solutions and BF solutions.

Just as certain pairs of CPF solutions are adjacent, the corresponding pairs of BF solutions also are said to be adjacent. Here is an easy way to tell when two BF solutions are adjacent:

Two BF solutions are adjacent if all but one of their nonbasic variables are the same. This implies that all but one of their basic variables also are the same, although perhaps with different numerical values.

5 This method of determining the number of degrees of freedom for a system of equations is valid as long as the system does not include any redundant equations. This condition always holds for the system of equations formed from the functional constraints in the augmented form of a linear programming model.
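These definitions translate directly into a computation: choose the basic variables, set the rest to zero, and solve the resulting square system. A sketch with NumPy (the function name `basic_solution` is ours):

```python
import numpy as np

# Augmented constraints of the example, columns x1..x5: [A | I] x = b.
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])

def basic_solution(A, b, basic):
    """Set the nonbasic variables to zero and solve the three equations
    for the three basic variables (the basis)."""
    x = np.zeros(A.shape[1])
    x[basic] = np.linalg.solve(A[:, basic], b)
    return x

# Nonbasic variables x1 and x4 (indices 0 and 3); basis = {x2, x3, x5}.
x = basic_solution(A, b, basic=[1, 2, 4])
# x -> (0, 6, 4, 0, 6), the BF solution corresponding to CPF solution (0, 6)
```

Because all three basic values come out nonnegative, property 5 certifies this basic solution as a BF solution.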
Consequently, moving from the current BF solution to an adjacent one involves switching one variable from nonbasic to basic and vice versa for one other variable (and then adjusting the values of the basic variables to continue satisfying the system of equations).

To illustrate adjacent BF solutions, consider one pair of adjacent CPF solutions in Fig. 4.1: (0, 0) and (0, 6). Their augmented solutions, (0, 0, 4, 12, 18) and (0, 6, 4, 0, 6), automatically are adjacent BF solutions. However, you do not need to look at Fig. 4.1 to draw this conclusion. Another signpost is that their nonbasic variables, (x1, x2) and (x1, x4), are the same with just the one exception—x2 has been replaced by x4. Consequently, moving from (0, 0, 4, 12, 18) to (0, 6, 4, 0, 6) involves switching x2 from nonbasic to basic and vice versa for x4.

When we deal with the problem in augmented form, it is convenient to consider and manipulate the objective function equation at the same time as the new constraint equations. Therefore, before we start the simplex method, the problem needs to be rewritten once again in an equivalent way:

Maximize Z,
subject to
(0)    Z − 3x1 − 5x2                       = 0
(1)          x1        + x3                = 4
(2)               2x2       + x4           = 12
(3)          3x1 + 2x2           + x5      = 18
and
    xj ≥ 0,  for j = 1, 2, . . . , 5.

It is just as if Eq. (0) actually were one of the original constraints; but because it already is in equality form, no slack variable is needed. While adding one more equation, we also have added one more unknown (Z) to the system of equations. Therefore, when using Eqs. (1) to (3) to obtain a basic solution as described above, we use Eq. (0) to solve for Z at the same time.

Somewhat fortuitously, the model for the Wyndor Glass Co. problem fits our standard form, and all its functional constraints have nonnegative right-hand sides bi. If this had not been the case, then additional adjustments would have been needed at this point before the simplex method was applied. These details are deferred to Sec.
4.6, and we now focus on the simplex method itself.

■ 4.3 THE ALGEBRA OF THE SIMPLEX METHOD

We continue to use the prototype example of Sec. 3.1, as rewritten at the end of Sec. 4.2, for illustrative purposes. To start connecting the geometric and algebraic concepts of the simplex method, we begin by outlining side by side in Table 4.2 how the simplex method solves this example from both a geometric and an algebraic viewpoint. The geometric viewpoint (first presented in Sec. 4.1) is based on the original form of the model (no slack variables), so again refer to Fig. 4.1 for a visualization when you examine the second column of the table. Refer to the augmented form of the model presented at the end of Sec. 4.2 when you examine the third column of the table. We now fill in the details for each step of the third column of Table 4.2.

■ TABLE 4.2 Geometric and algebraic interpretations of how the simplex method solves the Wyndor Glass Co. problem

Initialization
  Geometric: Choose (0, 0) to be the initial CPF solution.
  Algebraic: Choose x1 and x2 to be the nonbasic variables (= 0) for the initial BF solution: (0, 0, 4, 12, 18).

Optimality test
  Geometric: Not optimal, because moving along either edge from (0, 0) increases Z.
  Algebraic: Not optimal, because increasing either nonbasic variable (x1 or x2) increases Z.

Iteration 1, Step 1
  Geometric: Move up the edge lying on the x2 axis.
  Algebraic: Increase x2 while adjusting other variable values to satisfy the system of equations.

Iteration 1, Step 2
  Geometric: Stop when the first new constraint boundary (2x2 = 12) is reached.
  Algebraic: Stop when the first basic variable (x3, x4, or x5) drops to zero (x4).

Iteration 1, Step 3
  Geometric: Find the intersection of the new pair of constraint boundaries: (0, 6) is the new CPF solution.
  Algebraic: With x2 now a basic variable and x4 now a nonbasic variable, solve the system of equations: (0, 6, 4, 0, 6) is the new BF solution.

Optimality test
  Geometric: Not optimal, because moving along the edge from (0, 6) to the right increases Z.
  Algebraic: Not optimal, because increasing one nonbasic variable (x1) increases Z.

Iteration 2, Step 1
  Geometric: Move along this edge to the right.
  Algebraic: Increase x1 while adjusting other variable values to satisfy the system of equations.

Iteration 2, Step 2
  Geometric: Stop when the first new constraint boundary (3x1 + 2x2 = 18) is reached.
  Algebraic: Stop when the first basic variable (x2, x3, or x5) drops to zero (x5).

Iteration 2, Step 3
  Geometric: Find the intersection of the new pair of constraint boundaries: (2, 6) is the new CPF solution.
  Algebraic: With x1 now a basic variable and x5 now a nonbasic variable, solve the system of equations: (2, 6, 2, 0, 0) is the new BF solution.

Optimality test
  Geometric: (2, 6) is optimal, because moving along either edge from (2, 6) decreases Z.
  Algebraic: (2, 6, 2, 0, 0) is optimal, because increasing either nonbasic variable (x4 or x5) decreases Z.

Initialization

The choice of x1 and x2 to be the nonbasic variables (the variables set equal to zero) for the initial BF solution is based on solution concept 3 in Sec. 4.1. This choice eliminates the work required to solve for the basic variables (x3, x4, x5) from the following system of equations (where the basic variables are shown in bold type):

(1)    x1              + x3             = 4
(2)          2x2             + x4       = 12
(3)    3x1 + 2x2                  + x5  = 18

x1 = 0 and x2 = 0, so

x3 = 4,  x4 = 12,  x5 = 18.

Thus, the initial BF solution is (0, 0, 4, 12, 18). Notice that this solution can be read immediately because each equation has just one basic variable, which has a coefficient of +1, and this basic variable does not appear in any other equation. You will soon see that when the set of basic variables changes, the simplex method uses an algebraic procedure (Gaussian elimination) to convert the equations to this same convenient form for reading every subsequent BF solution as well. This form is called proper form from Gaussian elimination.

An Application Vignette

Samsung Electronics Corp., Ltd.
(SEC) is a leading merchant of dynamic and static random access memory devices and other advanced digital integrated circuits. It has been the world's largest information technology company in revenues (well over $100 billion annually) since 2009, employing well over 200,000 people in over 60 countries. Its site at Kiheung, South Korea (probably the largest semiconductor fabrication site in the world) fabricates more than 300,000 silicon wafers per month.

Cycle time is the industry's term for the elapsed time from the release of a batch of blank silicon wafers into the fabrication process until completion of the devices that are fabricated on those wafers. Reducing cycle times is an ongoing goal since it both decreases costs and enables offering shorter lead times to potential customers, a real key to maintaining or increasing market share in a very competitive industry.

Three factors present particularly major challenges when striving to reduce cycle times. One is that the product mix changes continually. Another is that the company often needs to make substantial changes in the fab-out schedule inside the target cycle time as it revises forecasts of customer demand. The third is that the machines of a general type are not homogeneous, so only a small number of machines are qualified to perform each device-step.

An OR team developed a huge linear programming model with tens of thousands of decision variables and functional constraints to cope with these challenges. The objective function involved minimizing back-orders and finished-goods inventory. Despite the huge size of this model, it was readily solved in minutes whenever needed by using a highly sophisticated implementation of the simplex method (and related techniques). The ongoing implementation of this model enabled the company to reduce manufacturing cycle times to fabricate dynamic random access memory devices from more than 80 days to less than 30 days.
This tremendous improvement and the resulting reduction in both manufacturing costs and sale prices enabled Samsung to capture an additional $200 million in annual sales revenue.

Source: R. C. Leachman, J. Kang, and Y. Lin: "SLIM: Short Cycle Time and Low Inventory in Manufacturing at Samsung Electronics," Interfaces, 32(1): 61–77, Jan.–Feb. 2002. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Optimality Test

The objective function is Z = 3x1 + 5x2, so Z = 0 for the initial BF solution. Because none of the basic variables (x3, x4, x5) have a nonzero coefficient in this objective function, the coefficient of each nonbasic variable (x1, x2) gives the rate of improvement in Z if that variable were to be increased from zero (while the values of the basic variables are adjusted to continue satisfying the system of equations).6 These rates of improvement (3 and 5) are positive. Therefore, based on solution concept 6 in Sec. 4.1, we conclude that (0, 0, 4, 12, 18) is not optimal.

For each BF solution examined after subsequent iterations, at least one basic variable has a nonzero coefficient in the objective function. Therefore, the optimality test then will use the new Eq. (0) to rewrite the objective function in terms of just the nonbasic variables, as you will see later.

Determining the Direction of Movement (Step 1 of an Iteration)

Increasing one nonbasic variable from zero (while adjusting the values of the basic variables to continue satisfying the system of equations) corresponds to moving along one edge emanating from the current CPF solution. Based on solution concepts 4 and 5 in Sec. 4.1, the choice of which nonbasic variable to increase is made as follows:

6 Note that this interpretation of the coefficients of the xj variables is based on these variables being on the right-hand side, Z = 3x1 + 5x2. When these variables are brought to the left-hand side for Eq. (0), Z − 3x1 − 5x2 = 0, the nonzero coefficients change their signs.
Z = 3x1 + 5x2

Increase x1? Rate of improvement in Z = 3.
Increase x2? Rate of improvement in Z = 5.

5 > 3, so choose x2 to increase.

As indicated next, we call x2 the entering basic variable for iteration 1.

At any iteration of the simplex method, the purpose of step 1 is to choose one nonbasic variable to increase from zero (while the values of the basic variables are adjusted to continue satisfying the system of equations). Increasing this nonbasic variable from zero will convert it to a basic variable for the next BF solution. Therefore, this variable is called the entering basic variable for the current iteration (because it is entering the basis).

Determining Where to Stop (Step 2 of an Iteration)

Step 2 addresses the question of how far to increase the entering basic variable x2 before stopping. Increasing x2 increases Z, so we want to go as far as possible without leaving the feasible region. The requirement to satisfy the functional constraints in augmented form (shown below) means that increasing x2 (while keeping the nonbasic variable x1 = 0) changes the values of some of the basic variables as shown on the right.

(1)    x1              + x3             = 4          x1 = 0, so    x3 = 4
(2)          2x2             + x4       = 12                       x4 = 12 − 2x2
(3)    3x1 + 2x2                  + x5  = 18                       x5 = 18 − 2x2.

The other requirement for feasibility is that all the variables be nonnegative. The nonbasic variables (including the entering basic variable) are nonnegative, but we need to check how far x2 can be increased without violating the nonnegativity constraints for the basic variables:

x3 = 4 ≥ 0           ⇒ no upper bound on x2.
x4 = 12 − 2x2 ≥ 0    ⇒ x2 ≤ 12/2 = 6   ← minimum.
x5 = 18 − 2x2 ≥ 0    ⇒ x2 ≤ 18/2 = 9.

Thus, x2 can be increased just to 6, at which point x4 has dropped to 0. Increasing x2 beyond 6 would cause x4 to become negative, which would violate feasibility. These calculations are referred to as the minimum ratio test.
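The calculations above are exactly a minimum ratio test, which is easy to state in code. A sketch (the function name is ours):

```python
import numpy as np

def minimum_ratio_test(pivot_column, rhs):
    """Among rows where the entering variable's coefficient is strictly
    positive, find the smallest ratio rhs/coefficient; that row's basic
    variable is the one that drops to zero first."""
    ratios = np.full(len(rhs), np.inf)       # ruled-out rows get infinity
    positive = pivot_column > 0
    ratios[positive] = rhs[positive] / pivot_column[positive]
    row = int(np.argmin(ratios))
    return row, ratios[row]

# Entering variable x2: coefficients (0, 2, 2) in Eqs.(1)-(3), rhs (4, 12, 18).
row, bound = minimum_ratio_test(np.array([0.0, 2.0, 2.0]),
                                np.array([4.0, 12.0, 18.0]))
# row -> 1 (the x4 equation), bound -> 6.0: x2 can rise to 6 before x4 hits 0
```

The infinite ratio for row 0 encodes the "no upper bound on x2" case for x3 above.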
The objective of this test is to determine which basic variable drops to zero first as the entering basic variable is increased. We can immediately rule out the basic variable in any equation where the coefficient of the entering basic variable is zero or negative, since such a basic variable would not decrease as the entering basic variable is increased. [This is what happened with x3 in Eq. (1) of the example.] However, for each equation where the coefficient of the entering basic variable is strictly positive (> 0), this test calculates the ratio of the right-hand side to the coefficient of the entering basic variable. The basic variable in the equation with the minimum ratio is the one that drops to zero first as the entering basic variable is increased.

At any iteration of the simplex method, step 2 uses the minimum ratio test to determine which basic variable drops to zero first as the entering basic variable is increased. Decreasing this basic variable to zero will convert it to a nonbasic variable for the next BF solution. Therefore, this variable is called the leaving basic variable for the current iteration (because it is leaving the basis). Thus, x4 is the leaving basic variable for iteration 1 of the example.

Solving for the New BF Solution (Step 3 of an Iteration)

Increasing x2 from 0 to 6 moves us from the initial BF solution to the new BF solution:

                        Initial BF solution           New BF solution
Nonbasic variables:     x1 = 0, x2 = 0                x1 = 0, x4 = 0
Basic variables:        x3 = 4, x4 = 12, x5 = 18      x3 = ?, x2 = 6, x5 = ?

The purpose of step 3 is to convert the system of equations to a more convenient form (proper form from Gaussian elimination) for conducting the optimality test and (if needed) the next iteration with this new BF solution. In the process, this form also will identify the values of x3 and x5 for the new solution.
Here again is the complete original system of equations, where the new basic variables are shown in bold type (with Z playing the role of the basic variable in the objective function equation):

(0)    Z − 3x1 − 5x2                       = 0.
(1)          x1        + x3                = 4.
(2)               2x2       + x4           = 12.
(3)          3x1 + 2x2           + x5      = 18.

Thus, x2 has replaced x4 as the basic variable in Eq. (2). To solve this system of equations for Z, x2, x3, and x5, we need to perform some elementary algebraic operations to reproduce the current pattern of coefficients of x4 (0, 0, 1, 0) as the new coefficients of x2. We can use either of two types of elementary algebraic operations:

1. Multiply (or divide) an equation by a nonzero constant.
2. Add (or subtract) a multiple of one equation to (or from) another equation.

To prepare for performing these operations, note that the coefficients of x2 in the above system of equations are −5, 0, 2, and 2, respectively, whereas we want these coefficients to become 0, 0, 1, and 0, respectively. To turn the coefficient of 2 in Eq. (2) into 1, we use the first type of elementary algebraic operation by dividing Eq. (2) by 2 to obtain

(2)    x2 + (1/2)x4 = 6.

To turn the coefficients of −5 and 2 into zeros, we need to use the second type of elementary algebraic operation. In particular, we add 5 times this new Eq. (2) to Eq. (0), and subtract 2 times this new Eq. (2) from Eq. (3). The resulting complete new system of equations is

(0)    Z − 3x1           + (5/2)x4        = 30
(1)          x1    + x3                   = 4
(2)          x2          + (1/2)x4        = 6
(3)          3x1         − x4       + x5  = 6.

Since x1 = 0 and x4 = 0, the equations in this form immediately yield the new BF solution, (x1, x2, x3, x4, x5) = (0, 6, 4, 0, 6), which yields Z = 30.
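Both types of elementary algebraic operations together form one "pivot" step, and carrying the four equations as a matrix makes it a few lines. A sketch with NumPy (the column layout Z, x1, ..., x5, right side is our choice of representation):

```python
import numpy as np

def pivot(T, row, col):
    """Divide the pivot row by the pivot number, then subtract multiples
    of it from the other rows to zero out the rest of the pivot column."""
    T = T.astype(float).copy()
    T[row] /= T[row, col]
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]
    return T

# Eqs.(0)-(3) of the example; columns: Z, x1, x2, x3, x4, x5, right side.
T = np.array([[1, -3, -5, 0, 0, 0,  0],   # Z - 3x1 - 5x2  = 0
              [0,  1,  0, 1, 0, 0,  4],   # x1 + x3        = 4
              [0,  0,  2, 0, 1, 0, 12],   # 2x2 + x4       = 12
              [0,  3,  2, 0, 0, 1, 18]])  # 3x1 + 2x2 + x5 = 18
T1 = pivot(T, row=2, col=2)   # x2 enters via Eq.(2), replacing x4
# Row 0 of T1 reads Z - 3x1 + (5/2)x4 = 30, matching the new system above
```

Reading the right-side column of `T1` (with x1 = x4 = 0) reproduces the new BF solution (0, 6, 4, 0, 6) with Z = 30.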
This procedure for obtaining the simultaneous solution of a system of linear equations is called the Gauss-Jordan method of elimination, or Gaussian elimination for short.7 The key concept for this method is the use of elementary algebraic operations to reduce the original system of equations to proper form from Gaussian elimination, where each basic variable has been eliminated from all but one equation (its equation) and has a coefficient of +1 in that equation.

7 Actually, there are some technical differences between the Gauss-Jordan method of elimination and Gaussian elimination, but we shall not make this distinction.

Optimality Test for the New BF Solution

The current Eq. (0) gives the value of the objective function in terms of just the current nonbasic variables:

Z = 30 + 3x1 − (5/2)x4.

Increasing either of these nonbasic variables from zero (while adjusting the values of the basic variables to continue satisfying the system of equations) would result in moving toward one of the two adjacent BF solutions. Because x1 has a positive coefficient, increasing x1 would lead to an adjacent BF solution that is better than the current BF solution, so the current solution is not optimal.

Iteration 2 and the Resulting Optimal Solution

Since Z = 30 + 3x1 − (5/2)x4, Z can be increased by increasing x1, but not x4. Therefore, step 1 chooses x1 to be the entering basic variable. For step 2, the current system of equations yields the following conclusions about how far x1 can be increased (with x4 = 0):

x3 = 4 − x1 ≥ 0     ⇒ x1 ≤ 4/1 = 4.
x2 = 6 ≥ 0          ⇒ no upper bound on x1.
x5 = 6 − 3x1 ≥ 0    ⇒ x1 ≤ 6/3 = 2   ← minimum.

Therefore, the minimum ratio test indicates that x5 is the leaving basic variable. For step 3, with x1 replacing x5 as a basic variable, we perform elementary algebraic operations on the current system of equations to reproduce the current pattern of coefficients of x5 (0, 0, 0, 1) as the new coefficients of x1. This yields the following new system of equations:

(0)    Z                 + (3/2)x4 + x5        = 36
(1)          x3          + (1/3)x4 − (1/3)x5   = 2
(2)          x2          + (1/2)x4             = 6
(3)          x1          − (1/3)x4 + (1/3)x5   = 2.

Therefore, the next BF solution is (x1, x2, x3, x4, x5) = (2, 6, 2, 0, 0), yielding Z = 36. To apply the optimality test to this new BF solution, we use the current Eq. (0) to express Z in terms of just the current nonbasic variables:

Z = 36 − (3/2)x4 − x5.

Increasing either x4 or x5 would decrease Z, so neither adjacent BF solution is as good as the current one. Therefore, based on solution concept 6 in Sec. 4.1, the current BF solution must be optimal. In terms of the original form of the problem (no slack variables), the optimal solution is x1 = 2, x2 = 6, which yields Z = 3x1 + 5x2 = 36.

To see another example of applying the simplex method, we recommend that you now view the demonstration entitled Simplex Method—Algebraic Form in your OR Tutor. This vivid demonstration simultaneously displays both the algebra and the geometry of the simplex method as it dynamically evolves step by step. Like the many other demonstration examples accompanying other sections of the book (including the next section), this computer demonstration highlights concepts that are difficult to convey on the printed page. In addition, the Solved Examples section of the book's website includes another example of applying the simplex method.

To further help you learn the simplex method efficiently, the IOR Tutorial in your OR Courseware includes a procedure entitled Solve Interactively by the Simplex Method. This routine performs nearly all the calculations while you make the decisions step by step, thereby enabling you to focus on concepts rather than get bogged down in a lot of number crunching. Therefore, you probably will want to use this routine for your homework on this section.
The software will help you get started by letting you know whenever you make a mistake on the first iteration of a problem. After you learn the simplex method, you will want to simply apply an automatic computer implementation of it to obtain optimal solutions of linear programming problems immediately. For your convenience, we also have included an automatic procedure called Solve Automatically by the Simplex Method in IOR Tutorial. This procedure is designed for dealing with only textbook-sized problems, including checking the answer you got with the interactive procedure. Section 4.8 will describe more powerful software options for linear programming that also are provided on the book's website.

The next section presents a summary of the simplex method in a more convenient tabular form.

■ 4.4 THE SIMPLEX METHOD IN TABULAR FORM

The algebraic form of the simplex method presented in Sec. 4.3 may be the best one for learning the underlying logic of the algorithm. However, it is not the most convenient form for performing the required calculations. When you need to solve a problem by hand (or interactively with your IOR Tutorial), we recommend the tabular form described in this section.8

The tabular form of the simplex method records only the essential information, namely, (1) the coefficients of the variables, (2) the constants on the right-hand sides of the equations, and (3) the basic variable appearing in each equation. This saves writing the symbols for the variables in each of the equations, but what is even more important is the fact that it permits highlighting the numbers involved in arithmetic calculations and recording the computations compactly. Table 4.3 compares the initial system of equations for the Wyndor Glass Co. problem in algebraic form (on the left) and in tabular form (on the right), where the table on the right is called a simplex tableau.
The basic variable for each equation is shown in bold type on the left and in the first column of the simplex tableau on the right. [Although only the xj variables are basic or nonbasic, Z plays the role of the basic variable for Eq. (0).]

8 A form more convenient for automatic execution on a computer is presented in Sec. 5.2.

■ TABLE 4.3 Initial system of equations for the Wyndor Glass Co. problem

(a) Algebraic Form

(0)    Z − 3x1 − 5x2                       = 0
(1)          x1        + x3                = 4
(2)               2x2       + x4           = 12
(3)          3x1 + 2x2           + x5      = 18

(b) Tabular Form

Basic                     Coefficient of:                Right
Variable   Eq.    Z    x1    x2    x3    x4    x5        Side
Z          (0)    1    −3    −5     0     0     0           0
x3         (1)    0     1     0     1     0     0           4
x4         (2)    0     0     2     0     1     0          12
x5         (3)    0     3     2     0     0     1          18

All variables not listed in this basic variable column (x1, x2) automatically are nonbasic variables. After we set x1 = 0, x2 = 0, the right-side column gives the resulting solution for the basic variables, so that the initial BF solution is (x1, x2, x3, x4, x5) = (0, 0, 4, 12, 18), which yields Z = 0.

The tabular form of the simplex method uses a simplex tableau to compactly display the system of equations yielding the current BF solution. For this solution, each variable in the leftmost column equals the corresponding number in the rightmost column (and variables not listed equal zero). When the optimality test or an iteration is performed, the only relevant numbers are those to the right of the Z column.9 The term row refers to just a row of numbers to the right of the Z column (including the right-side number), where row i corresponds to Eq. (i).

We summarize the tabular form of the simplex method below and, at the same time, briefly describe its application to the Wyndor Glass Co. problem. Keep in mind that the logic is identical to that for the algebraic form presented in the preceding section.
Only the form for displaying both the current system of equations and the subsequent iteration has changed (plus we shall no longer bother to bring variables to the right-hand side of an equation before drawing our conclusions in the optimality test or in steps 1 and 2 of an iteration).

Summary of the Simplex Method (and Iteration 1 for the Example)

Initialization. Introduce slack variables. Select the decision variables to be the initial nonbasic variables (set equal to zero) and the slack variables to be the initial basic variables. (See Sec. 4.6 for the necessary adjustments if the model is not in our standard form of maximization, only ≤ functional constraints, and all nonnegativity constraints, or if any bi values are negative.)

For the Example: This selection yields the initial simplex tableau shown in column (b) of Table 4.3, so the initial BF solution is (0, 0, 4, 12, 18).

Optimality Test. The current BF solution is optimal if and only if every coefficient in row 0 is nonnegative (≥ 0). If it is, stop; otherwise, go to an iteration to obtain the next BF solution, which involves changing one nonbasic variable to a basic variable (step 1) and vice versa (step 2) and then solving for the new solution (step 3).

For the Example: Just as Z = 3x1 + 5x2 indicates that increasing either x1 or x2 will increase Z, so that the current BF solution is not optimal, the same conclusion is drawn from the equation Z − 3x1 − 5x2 = 0. These coefficients of −3 and −5 are shown in row 0 in column (b) of Table 4.3.

9 For this reason, it is permissible to delete the Eq. and Z columns to reduce the size of the simplex tableau. We prefer to retain these columns as a reminder that the simplex tableau is displaying the current system of equations and that Z is one of the variables in Eq. (0).

Iteration.
Step 1: Determine the entering basic variable by selecting the variable (automatically a nonbasic variable) with the negative coefficient having the largest absolute value (i.e., the "most negative" coefficient) in Eq. (0). Put a box around the column below this coefficient, and call this the pivot column.

For the Example: The most negative coefficient is −5 for x2 (5 > 3), so x2 is to be changed to a basic variable. (This change is indicated in Table 4.4 by the box around the x2 column below −5.)

Step 2: Determine the leaving basic variable by applying the minimum ratio test.

Minimum Ratio Test
1. Pick out each coefficient in the pivot column that is strictly positive (> 0).
2. Divide each of these coefficients into the right-side entry for the same row.
3. Identify the row that has the smallest of these ratios.
4. The basic variable for that row is the leaving basic variable, so replace that variable by the entering basic variable in the basic variable column of the next simplex tableau.

Put a box around this row and call it the pivot row. Also call the number that is in both boxes the pivot number.

For the Example: The calculations for the minimum ratio test are shown to the right of Table 4.4. Thus, row 2 is the pivot row (see the box around this row in the first simplex tableau of Table 4.5), and x4 is the leaving basic variable. In the next simplex tableau (see the bottom of Table 4.5), x2 replaces x4 as the basic variable for row 2.

Step 3: Solve for the new BF solution by using elementary row operations (multiply or divide a row by a nonzero constant; add or subtract a multiple of one row to another row) to construct a new simplex tableau in proper form from Gaussian elimination below the current one, and then return to the optimality test. The specific elementary row operations that need to be performed are listed below.

1. Divide the pivot row by the pivot number. Use this new pivot row in steps 2 and 3.
2.
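The minimum ratio test above can be sketched in Python (illustrative only, not the book's software). Only strictly positive coefficients in the pivot column are eligible, and row 0 is excluded:

```python
import numpy as np

def min_ratio_test(tableau, pivot_col):
    """Return the index of the pivot row, or None if no coefficient in the
    pivot column (below row 0) is strictly positive."""
    best_row, best_ratio = None, float("inf")
    for i in range(1, tableau.shape[0]):
        coeff = tableau[i, pivot_col]
        if coeff > 0:                        # step 1: strictly positive coefficients only
            ratio = tableau[i, -1] / coeff   # step 2: right side divided by coefficient
            if ratio < best_ratio:           # step 3: keep the smallest ratio
                best_row, best_ratio = i, ratio
    return best_row

# Wyndor initial tableau; the pivot column is x2 (column index 2).
tableau = np.array([
    [1.0, -3.0, -5.0, 0.0, 0.0, 0.0,  0.0],
    [0.0,  1.0,  0.0, 1.0, 0.0, 0.0,  4.0],
    [0.0,  0.0,  2.0, 0.0, 1.0, 0.0, 12.0],
    [0.0,  3.0,  2.0, 0.0, 0.0, 1.0, 18.0],
])
pivot_row = min_ratio_test(tableau, 2)
# Ratios are 12/2 = 6 and 18/2 = 9, so the pivot row is row 2 (the x4 row).
```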
For each other row (including row 0) that has a negative coefficient in the pivot column, add to this row the product of the absolute value of this coefficient and the new pivot row.
3. For each other row that has a positive coefficient in the pivot column, subtract from this row the product of this coefficient and the new pivot row.

■ TABLE 4.4 Applying the minimum ratio test to determine the first leaving basic variable for the Wyndor Glass Co. problem

                           Coefficient of:
Basic Variable   Eq.   Z    x1   x2   x3   x4   x5   Right Side   Ratio
Z                (0)   1    −3   −5    0    0    0        0
x3               (1)   0     1    0    1    0    0        4
x4               (2)   0     0    2    0    1    0       12       12/2 = 6 ← minimum
x5               (3)   0     3    2    0    0    1       18       18/2 = 9

■ TABLE 4.5 Simplex tableaux for the Wyndor Glass Co. problem after the first pivot row is divided by the first pivot number

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z    x1   x2   x3   x4    x5   Right Side
0           Z                (0)   1    −3   −5    0    0     0        0
            x3               (1)   0     1    0    1    0     0        4
            x4               (2)   0     0    2    0    1     0       12
            x5               (3)   0     3    2    0    0     1       18
1           Z                (0)   1
            x3               (1)   0
            x2               (2)   0     0    1    0   1/2    0        6
            x5               (3)   0

For the Example: Since x2 is replacing x4 as a basic variable, we need to reproduce the first tableau's pattern of coefficients in the column of x4 (0, 0, 1, 0) in the second tableau's column of x2. To start, divide the pivot row (row 2) by the pivot number (2), which gives the new row 2 shown in Table 4.5. Next, we add to row 0 the product, 5 times the new row 2. Then we subtract from row 3 the product, 2 times the new row 2 (or equivalently, subtract from row 3 the old row 2). These calculations yield the new tableau shown in Table 4.6 for iteration 1. Thus, the new BF solution is (0, 6, 4, 0, 6), with Z = 30. We next return to the optimality test to check if the new BF solution is optimal. Since the new row 0 still has a negative coefficient (−3 for x1), the solution is not optimal, and so at least one more iteration is needed.
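The three elementary row operations of step 3 can be sketched as a single pivot routine (an illustration, not the book's software). Note that adding |c| times the new pivot row when c < 0 and subtracting c times it when c > 0 are both the single update row −= c × (new pivot row):

```python
import numpy as np

def pivot(tableau, pivot_row, pivot_col):
    """Step 3 via elementary row operations, performed in place."""
    # 1. Divide the pivot row by the pivot number.
    tableau[pivot_row] /= tableau[pivot_row, pivot_col]
    # 2./3. Zero out the rest of the pivot column using the new pivot row.
    for i in range(tableau.shape[0]):
        if i != pivot_row:
            tableau[i] -= tableau[i, pivot_col] * tableau[pivot_row]

# Iteration 1 for Wyndor: pivot on row 2, column x2 (index 2).
t = np.array([
    [1.0, -3.0, -5.0, 0.0, 0.0, 0.0,  0.0],
    [0.0,  1.0,  0.0, 1.0, 0.0, 0.0,  4.0],
    [0.0,  0.0,  2.0, 0.0, 1.0, 0.0, 12.0],
    [0.0,  3.0,  2.0, 0.0, 0.0, 1.0, 18.0],
])
pivot(t, 2, 2)
# Row 0's right side is now 30 (the new value of Z), matching Table 4.6.
```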
Iteration 2 for the Example and the Resulting Optimal Solution

The second iteration starts anew from the second tableau of Table 4.6 to find the next BF solution. Following the instructions for steps 1 and 2, we find x1 as the entering basic variable and x5 as the leaving basic variable, as shown in Table 4.7. For step 3, we start by dividing the pivot row (row 3) in Table 4.7 by the pivot number (3). Next, we add to row 0 the product, 3 times the new row 3. Then we subtract the new row 3 from row 1. We now have the set of tableaux shown in Table 4.8. Therefore, the new BF solution is (2, 6, 2, 0, 0), with Z = 36. Going to the optimality test, we find that this solution is

■ TABLE 4.6 First two simplex tableaux for the Wyndor Glass Co. problem

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z    x1   x2   x3   x4    x5   Right Side
0           Z                (0)   1    −3   −5    0    0     0        0
            x3               (1)   0     1    0    1    0     0        4
            x4               (2)   0     0    2    0    1     0       12
            x5               (3)   0     3    2    0    0     1       18
1           Z                (0)   1    −3    0    0   5/2    0       30
            x3               (1)   0     1    0    1    0     0        4
            x2               (2)   0     0    1    0   1/2    0        6
            x5               (3)   0     3    0    0   −1     1        6

■ TABLE 4.7 Steps 1 and 2 of iteration 2 for the Wyndor Glass Co. problem

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z    x1   x2   x3   x4    x5   Right Side   Ratio
1           Z                (0)   1    −3    0    0   5/2    0       30
            x3               (1)   0     1    0    1    0     0        4       4/1 = 4
            x2               (2)   0     0    1    0   1/2    0        6
            x5               (3)   0     3    0    0   −1     1        6       6/3 = 2 ← minimum

optimal because none of the coefficients in row 0 is negative, so the algorithm is finished. Consequently, the optimal solution for the Wyndor Glass Co. problem (before slack variables are introduced) is x1 = 2, x2 = 6.

Now compare Table 4.8 with the work done in Sec. 4.3 to verify that these two forms of the simplex method really are equivalent. Then note how the algebraic form is superior for learning the logic behind the simplex method, but the tabular form organizes the work being done in a considerably more convenient and compact form. We generally use the tabular form from now on.
An additional example of applying the simplex method in tabular form is available to you in the OR Tutor. See the demonstration entitled Simplex Method—Tabular Form. Another example also is included in the Solved Examples section of the book's website.

■ TABLE 4.8 Complete set of simplex tableaux for the Wyndor Glass Co. problem

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z    x1   x2   x3    x4     x5    Right Side
0           Z                (0)   1    −3   −5    0     0      0         0
            x3               (1)   0     1    0    1     0      0         4
            x4               (2)   0     0    2    0     1      0        12
            x5               (3)   0     3    2    0     0      1        18
1           Z                (0)   1    −3    0    0    5/2     0        30
            x3               (1)   0     1    0    1     0      0         4
            x2               (2)   0     0    1    0    1/2     0         6
            x5               (3)   0     3    0    0    −1      1         6
2           Z                (0)   1     0    0    0    3/2     1        36
            x3               (1)   0     0    0    1    1/3   −1/3        2
            x2               (2)   0     0    1    0    1/2     0         6
            x1               (3)   0     1    0    0   −1/3    1/3        2

■ 4.5 TIE BREAKING IN THE SIMPLEX METHOD

You may have noticed in the preceding two sections that we never said what to do if the various choice rules of the simplex method do not lead to a clear-cut decision, because of either ties or other similar ambiguities. We discuss these details now.

Tie for the Entering Basic Variable

Step 1 of each iteration chooses the nonbasic variable having the negative coefficient with the largest absolute value in the current Eq. (0) as the entering basic variable. Now suppose that two or more nonbasic variables are tied for having the largest negative coefficient (in absolute terms). For example, this would occur in the first iteration for the Wyndor Glass Co. problem if its objective function were changed to Z = 3x1 + 3x2, so that the initial Eq. (0) became Z − 3x1 − 3x2 = 0. How should this tie be broken? The answer is that the selection between these contenders may be made arbitrarily. The optimal solution will be reached eventually, regardless of the tied variable chosen, and there is no convenient method for predicting in advance which choice will lead there sooner.
In this example, the simplex method happens to reach the optimal solution (2, 6) in three iterations with x1 as the initial entering basic variable, versus two iterations if x2 is chosen. Tie for the Leaving Basic Variable—Degeneracy Now suppose that two or more basic variables tie for being the leaving basic variable in step 2 of an iteration. Does it matter which one is chosen? Theoretically it does, and in a very critical way, because of the following sequence of events that could occur. First, all the tied basic variables reach zero simultaneously as the entering basic variable is increased. Therefore, the one or ones not chosen to be the leaving basic variable also will have a value of zero in the new BF solution. (Note that basic variables with a value of zero are called degenerate, and the same term is applied to the corresponding BF solution.) Second, if one of these degenerate basic variables retains its value of zero until it is chosen at a subsequent iteration to be a leaving basic variable, the corresponding entering basic variable also must remain zero (since it cannot be increased without making the leaving basic variable negative), so the value of Z must remain unchanged. Third, if Z may remain the same rather than increase at each iteration, the simplex method may then go around in a loop, repeating the same sequence of solutions periodically rather than eventually increasing Z toward an optimal solution. In fact, examples have been artificially constructed so that they do become entrapped in just such a perpetual loop.10 Fortunately, although a perpetual loop is theoretically possible, it has rarely been known to occur in practical problems. If a loop were to occur, one could always get out of it by changing the choice of the leaving basic variable. Furthermore, special rules11 have been constructed for breaking ties so that such loops are always avoided. 
However, these rules frequently are ignored in actual application, and they will not be repeated here. For your purposes, just break this kind of tie arbitrarily and proceed without worrying about the degenerate basic variables that result.

10 For further information about cycling around a perpetual loop, see J. A. J. Hall and K. I. M. McKinnon: "The Simplest Examples Where the Simplex Method Cycles and Conditions Where EXPAND Fails to Prevent Cycling," Mathematical Programming, Series B, 100(1): 135–150, May 2004.
11 See R. Bland: "New Finite Pivoting Rules for the Simplex Method," Mathematics of Operations Research, 2: 103–107, 1977.

■ TABLE 4.9 Initial simplex tableau for the Wyndor Glass Co. problem without the last two functional constraints

                           Coefficient of:
Basic Variable   Eq.   Z    x1   x2   x3   Right Side   Ratio
Z                (0)   1    −3   −5    0        0
x3               (1)   0     1    0    1        4        None

With x1 = 0 and x2 increasing, x3 = 4 − 1x1 − 0x2 = 4 ≥ 0.

No Leaving Basic Variable—Unbounded Z

In step 2 of an iteration, there is one other possible outcome that we have not yet discussed, namely, that no variable qualifies to be the leaving basic variable.12 This outcome would occur if the entering basic variable could be increased indefinitely without giving negative values to any of the current basic variables. In tabular form, this means that every coefficient in the pivot column (excluding row 0) is either negative or zero. As illustrated in Table 4.9, this situation arises in the example displayed in Fig. 3.6. In this example, the last two functional constraints of the Wyndor Glass Co. problem have been overlooked and so are not included in the model. Note in Fig. 3.6 how x2 can be increased indefinitely (thereby increasing Z indefinitely) without ever leaving the feasible region. Then note in Table 4.9 that x2 is the entering basic variable but the only coefficient in the pivot column is zero.
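This termination condition is easy to check mechanically; a sketch (illustrative only) using the two-row tableau of Table 4.9:

```python
import numpy as np

def is_unbounded(tableau, pivot_col):
    """Z is unbounded if no coefficient in the pivot column
    (excluding row 0) is strictly positive."""
    return not np.any(tableau[1:, pivot_col] > 0)

# Table 4.9: Wyndor without the last two functional constraints.
# Columns: Z, x1, x2, x3, right side.
t = np.array([
    [1.0, -3.0, -5.0, 0.0, 0.0],   # Eq. (0): Z - 3x1 - 5x2 = 0
    [0.0,  1.0,  0.0, 1.0, 4.0],   # Eq. (1): x1 + x3 = 4
])
# The x2 column (index 2) contains only a zero below row 0, so increasing
# x2 never drives any basic variable negative: Z is unbounded.
```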
Because the minimum ratio test uses only coefficients that are greater than zero, there is no ratio to provide a leaving basic variable.

The interpretation of a tableau like the one shown in Table 4.9 is that the constraints do not prevent the value of the objective function Z from increasing indefinitely, so the simplex method would stop with the message that Z is unbounded. Because even linear programming has not discovered a way of making infinite profits, the real message for practical problems is that a mistake has been made! The model probably has been misformulated, either by omitting relevant constraints or by stating them incorrectly. Alternatively, a computational mistake may have occurred.

12 Note that the analogous case (no entering basic variable) cannot occur in step 1 of an iteration, because the optimality test would stop the algorithm first by indicating that an optimal solution had been reached.

Multiple Optimal Solutions

We mentioned in Sec. 3.2 (under the definition of optimal solution) that a problem can have more than one optimal solution. This fact was illustrated in Fig. 3.5 by changing the objective function in the Wyndor Glass Co. problem to Z = 3x1 + 2x2, so that every point on the line segment between (2, 6) and (4, 3) is optimal. Thus, all optimal solutions are a weighted average of these two optimal CPF solutions

(x1, x2) = w1(2, 6) + w2(4, 3),

where the weights w1 and w2 are numbers that satisfy the relationships

w1 + w2 = 1 and w1 ≥ 0, w2 ≥ 0.

For example, w1 = 1/3 and w2 = 2/3 give

(x1, x2) = (1/3)(2, 6) + (2/3)(4, 3) = (2/3 + 8/3, 6/3 + 6/3) = (10/3, 4)

as one optimal solution. In general, any weighted average of two or more solutions (vectors) where the weights are nonnegative and sum to 1 is called a convex combination of these solutions. Thus, every optimal solution in the example is a convex combination of (2, 6) and (4, 3).
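The weighted-average computation above can be sketched as a small helper (illustrative only):

```python
def convex_combination(weights, points):
    """Weighted average of solution vectors; the weights must be
    nonnegative and sum to 1."""
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(points[0])
    return tuple(sum(w * p[j] for w, p in zip(weights, points))
                 for j in range(n))

# w1 = 1/3, w2 = 2/3 applied to the optimal CPF solutions (2, 6) and (4, 3):
x = convex_combination([1/3, 2/3], [(2, 6), (4, 3)])
# x is (10/3, 4), the optimal solution computed in the text.
```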
This example is typical of problems with multiple optimal solutions.

As indicated at the end of Sec. 3.2, any linear programming problem with multiple optimal solutions (and a bounded feasible region) has at least two CPF solutions that are optimal. Every optimal solution is a convex combination of these optimal CPF solutions. Consequently, in augmented form, every optimal solution is a convex combination of the optimal BF solutions. (Problems 4.5-5 and 4.5-6 guide you through the reasoning behind this conclusion.)

The simplex method automatically stops after one optimal BF solution is found. However, for many applications of linear programming, there are intangible factors not incorporated into the model that can be used to make meaningful choices between alternative optimal solutions. In such cases, these other optimal solutions should be identified as well. As indicated above, this requires finding all the other optimal BF solutions, and then every optimal solution is a convex combination of the optimal BF solutions.

After the simplex method finds one optimal BF solution, you can detect if there are any others and, if so, find them as follows: Whenever a problem has more than one optimal BF solution, at least one of the nonbasic variables has a coefficient of zero in the final row 0, so increasing any such variable will not change the value of Z. Therefore, these other optimal BF solutions can be identified (if desired) by performing additional iterations of the simplex method, each time choosing a nonbasic variable with a zero coefficient as the entering basic variable.13

To illustrate, consider again the case just mentioned, where the objective function in the Wyndor Glass Co. problem is changed to Z = 3x1 + 2x2. The simplex method obtains the first three tableaux shown in Table 4.10 and stops with an optimal BF solution.
However, because a nonbasic variable (x3) then has a zero coefficient in row 0, we perform one more iteration in Table 4.10 to identify the other optimal BF solution. Thus, the two optimal BF solutions are (4, 3, 0, 6, 0) and (2, 6, 2, 0, 0), each yielding Z = 18. Notice that the last tableau also has a nonbasic variable (x4) with a zero coefficient in row 0. This situation is inevitable because the extra iteration does not change row 0, so this leaving basic variable necessarily retains its zero coefficient. Making x4 an entering basic variable now would only lead back to the third tableau. (Check this.) Therefore, these two are the only BF solutions that are optimal, and all other optimal solutions are a convex combination of these two:

(x1, x2, x3, x4, x5) = w1(2, 6, 2, 0, 0) + w2(4, 3, 0, 6, 0),
w1 + w2 = 1, w1 ≥ 0, w2 ≥ 0.

13 If such an iteration has no leaving basic variable, this indicates that the feasible region is unbounded and the entering basic variable can be increased indefinitely without changing the value of Z.

■ TABLE 4.10 Complete set of simplex tableaux to obtain all optimal BF solutions for the Wyndor Glass Co. problem with c2 = 2

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z    x1   x2    x3    x4    x5   Right Side   Solution Optimal?
0           Z                (0)   1    −3   −2     0     0     0        0       No
            x3               (1)   0     1    0     1     0     0        4
            x4               (2)   0     0    2     0     1     0       12
            x5               (3)   0     3    2     0     0     1       18
1           Z                (0)   1     0   −2     3     0     0       12       No
            x1               (1)   0     1    0     1     0     0        4
            x4               (2)   0     0    2     0     1     0       12
            x5               (3)   0     0    2    −3     0     1        6
2           Z                (0)   1     0    0     0     0     1       18       Yes
            x1               (1)   0     1    0     1     0     0        4
            x4               (2)   0     0    0     3     1    −1        6
            x2               (3)   0     0    1   −3/2    0    1/2       3
Extra       Z                (0)   1     0    0     0     0     1       18       Yes
            x1               (1)   0     1    0     0   −1/3   1/3       2
            x3               (2)   0     0    0     1    1/3  −1/3       2
            x2               (3)   0     0    1     0    1/2    0        6

■ 4.6 ADAPTING TO OTHER MODEL FORMS

Thus far we have presented the details of the simplex method under the assumptions that the problem is in our standard form (maximize Z subject to functional constraints in ≤ form and nonnegativity constraints on all variables) and that bi ≥ 0 for all i = 1, 2, . . . , m.
In this section we point out how to make the adjustments required for other legitimate forms of the linear programming model. You will see that all these adjustments can be made during the initialization, so the rest of the simplex method can then be applied just as you have learned it already.

The only serious problem introduced by the other forms for functional constraints (the = or ≥ forms, or having a negative right-hand side) lies in identifying an initial BF solution. Before, this initial solution was found very conveniently by letting the slack variables be the initial basic variables, so that each one just equals the nonnegative right-hand side of its equation. Now, something else must be done. The standard approach that is used for all these cases is the artificial-variable technique. This technique constructs a more convenient artificial problem by introducing a dummy variable (called an artificial variable) into each constraint that needs one. This new variable is introduced just for the purpose of being the initial basic variable for that equation. The usual nonnegativity constraints are placed on these variables, and the objective function also is modified to impose an exorbitant penalty on their having values larger than zero. The iterations of the simplex method then automatically force the artificial variables to disappear (become zero), one at a time, until they are all gone, after which the real problem is solved.

To illustrate the artificial-variable technique, first we consider the case where the only nonstandard form in the problem is the presence of one or more equality constraints.

Equality Constraints

Any equality constraint

ai1x1 + ai2x2 + ⋯ + ain xn = bi

actually is equivalent to a pair of inequality constraints:

ai1x1 + ai2x2 + ⋯ + ain xn ≤ bi
ai1x1 + ai2x2 + ⋯ + ain xn ≥ bi.
However, rather than making this substitution and thereby increasing the number of constraints, it is more convenient to use the artificial-variable technique. We shall illustrate this technique with the following example.

Example. Suppose that the Wyndor Glass Co. problem in Sec. 3.1 is modified to require that Plant 3 be used at full capacity. The only resulting change in the linear programming model is that the third constraint, 3x1 + 2x2 ≤ 18, instead becomes an equality constraint

3x1 + 2x2 = 18,

so that the complete model becomes the one shown in the upper right-hand corner of Fig. 4.3. This figure also shows in darker ink the feasible region, which now consists of just the line segment connecting (2, 6) and (4, 3).

■ FIGURE 4.3 When the third functional constraint becomes an equality constraint, the feasible region for the Wyndor Glass Co. problem becomes the line segment between (2, 6) and (4, 3). [The model shown with the figure: Maximize Z = 3x1 + 5x2, subject to x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 = 18, and x1 ≥ 0, x2 ≥ 0.]

After the slack variables still needed for the inequality constraints are introduced, the system of equations for the augmented form of the problem becomes

(0)  Z − 3x1 − 5x2           = 0
(1)        x1      + x3      = 4
(2)             2x2     + x4 = 12
(3)       3x1 + 2x2          = 18.

Unfortunately, these equations do not have an obvious initial BF solution because there is no longer a slack variable to use as the initial basic variable for Eq. (3). It is necessary to find an initial BF solution to start the simplex method. This difficulty can be circumvented in the following way.

Obtaining an Initial BF Solution. The procedure is to construct an artificial problem that has the same optimal solution as the real problem by making two modifications of the real problem.

1. Apply the artificial-variable technique by introducing a nonnegative artificial variable (call it x̄5)14 into Eq. (3), just as if it were a slack variable:

(3)  3x1 + 2x2 + x̄5 = 18.

2.
Assign an overwhelming penalty to having x̄5 > 0 by changing the objective function Z = 3x1 + 5x2 to

Z = 3x1 + 5x2 − Mx̄5,

where M symbolically represents a huge positive number. (This method of forcing x̄5 to be x̄5 = 0 in the optimal solution is called the Big M method.)

Now find the optimal solution for the real problem by applying the simplex method to the artificial problem, starting with the following initial BF solution:

Initial BF Solution
Nonbasic variables:  x1 = 0, x2 = 0
Basic variables:     x3 = 4, x4 = 12, x̄5 = 18.

Because x̄5 plays the role of the slack variable for the third constraint in the artificial problem, this constraint is equivalent to 3x1 + 2x2 ≤ 18 (just as for the original Wyndor Glass Co. problem in Sec. 3.1). We show below the resulting artificial problem (before augmenting) next to the real problem.

The Real Problem                        The Artificial Problem
Maximize  Z = 3x1 + 5x2,                Define x̄5 = 18 − 3x1 − 2x2.
subject to                              Maximize  Z = 3x1 + 5x2 − Mx̄5,
    x1          ≤ 4                     subject to
          2x2   ≤ 12                        x1          ≤ 4
    3x1 + 2x2   = 18                              2x2   ≤ 12
and                                         3x1 + 2x2   ≤ 18
    x1 ≥ 0, x2 ≥ 0.                         (so 3x1 + 2x2 + x̄5 = 18)
                                        and
                                            x1 ≥ 0, x2 ≥ 0, x̄5 ≥ 0.

Therefore, just as in Sec. 3.1, the feasible region for (x1, x2) for the artificial problem is the one shown in Fig. 4.4. The only portion of this feasible region that coincides with the feasible region for the real problem is where x̄5 = 0 (so 3x1 + 2x2 = 18).

14 We shall always label the artificial variables by putting a bar over them.

■ FIGURE 4.4 This graph shows the feasible region and the sequence of CPF solutions (⓪, ①, ②, ③) examined by the simplex method for the artificial problem that corresponds to the real problem of Fig. 4.3. [Labeled values of Z: Z = 0 − 18M at (0, 0); Z = 12 − 6M at (4, 0); Z = 27 at (4, 3); Z = 30 − 6M at (0, 6); Z = 36 at the optimal solution (2, 6).]
Figure 4.4 also shows the order in which the simplex method examines the CPF solutions (or BF solutions after augmenting), where each circled number identifies which iteration obtained that solution. Note that the simplex method moves counterclockwise here whereas it moved clockwise for the original Wyndor Glass Co. problem (see Fig. 4.2). The reason for this difference is the extra term −Mx̄5 in the objective function for the artificial problem.

Before applying the simplex method and demonstrating that it follows the path shown in Fig. 4.4, the following preparatory step is needed.

Converting Equation (0) to Proper Form. The system of equations after the artificial problem is augmented is

(0)  Z − 3x1 − 5x2 + Mx̄5        = 0
(1)        x1      + x3          = 4
(2)             2x2     + x4     = 12
(3)       3x1 + 2x2        + x̄5 = 18,

where the initial basic variables (x3, x4, x̄5) are shown in bold type. However, this system is not yet in proper form from Gaussian elimination because a basic variable x̄5 has a nonzero coefficient in Eq. (0). Recall that all basic variables must be algebraically eliminated from Eq. (0) before the simplex method can either apply the optimality test or find the entering basic variable. This elimination is necessary so that the negative of the coefficient of each nonbasic variable will give the rate at which Z would increase if that nonbasic variable were to be increased from 0 while adjusting the values of the basic variables accordingly.

To algebraically eliminate x̄5 from Eq. (0), we need to subtract from Eq. (0) the product, M times Eq. (3):

         Z − 3x1 − 5x2 + Mx̄5 = 0
    − M (    3x1 + 2x2 +  x̄5 = 18)
New (0): Z − (3M + 3)x1 − (2M + 5)x2 = −18M.

Application of the Simplex Method. This new Eq. (0) gives Z in terms of just the nonbasic variables (x1, x2):

Z = −18M + (3M + 3)x1 + (2M + 5)x2.
Since 3M + 3 > 2M + 5 (remember that M represents a huge number), increasing x1 increases Z at a faster rate than increasing x2 does, so x1 is chosen as the entering basic variable. This leads to the move from (0, 0) to (4, 0) at iteration 1, shown in Fig. 4.4, thereby increasing Z by 4(3M + 3).

The quantities involving M never appear in the system of equations except for Eq. (0), so they need to be taken into account only in the optimality test and when an entering basic variable is determined. One way of dealing with these quantities is to assign some particular (huge) numerical value to M and use the resulting coefficients in Eq. (0) in the usual way. However, this approach may result in significant rounding errors that invalidate the optimality test. Therefore, it is better to do what we have just shown, namely, to express each coefficient in Eq. (0) as a linear function aM + b of the symbolic quantity M by separately recording and updating the current numerical value of (1) the multiplicative factor a and (2) the additive term b. Because M is assumed to be so large that b always is negligible compared with M when a ≠ 0, the decisions in the optimality test and the choice of the entering basic variable are made by using just the multiplicative factors in the usual way, except for breaking ties with the additive factors.

Using this approach on the example yields the simplex tableaux shown in Table 4.11. Note that the artificial variable x̄5 is a basic variable (x̄5 > 0) in the first two tableaux and a nonbasic variable (x̄5 = 0) in the last two. Therefore, the first two BF solutions for this artificial problem are infeasible for the real problem whereas the last two also are BF solutions for the real problem.

■ TABLE 4.11 Complete set of simplex tableaux for the problem shown in Fig. 4.4

                                     Coefficient of:
Iteration   Basic Variable   Eq.   Z      x1        x2        x3      x4     x̄5      Right Side
0           Z                (0)   1   −3M − 3   −2M − 5      0       0       0        −18M
            x3               (1)   0      1         0         1       0       0           4
            x4               (2)   0      0         2         0       1       0          12
            x̄5              (3)   0      3         2         0       0       1          18
1           Z                (0)   1      0      −2M − 5   3M + 3     0       0       −6M + 12
            x1               (1)   0      1         0         1       0       0           4
            x4               (2)   0      0         2         0       1       0          12
            x̄5              (3)   0      0         2        −3       0       1           6
2           Z                (0)   1      0         0       −9/2      0    M + 5/2       27
            x1               (1)   0      1         0         1       0       0           4
            x4               (2)   0      0         0         3       1      −1           6
            x2               (3)   0      0         1       −3/2      0      1/2          3
3           Z                (0)   1      0         0         0      3/2   M + 1         36
            x1               (1)   0      1         0         0     −1/3     1/3          2
            x3               (2)   0      0         0         1      1/3    −1/3          2
            x2               (3)   0      0         1         0      1/2      0           6

This example involved only one equality constraint. If a linear programming model has more than one, each is handled in just the same way. (If the right-hand side is negative, multiply through both sides by −1 first.)

Negative Right-Hand Sides

The technique mentioned in the preceding sentence for dealing with an equality constraint with a negative right-hand side (namely, multiply through both sides by −1) also works for any inequality constraint with a negative right-hand side. Multiplying through both sides of an inequality by −1 also reverses the direction of the inequality; i.e., ≤ changes to ≥ or vice versa. For example, doing this to the constraint x1 − x2 ≤ −1 (that is, x1 ≤ x2 − 1) gives the equivalent constraint −x1 + x2 ≥ 1 (that is, x2 ≥ 1 + x1), but now the right-hand side is positive. Having nonnegative right-hand sides for all the functional constraints enables the simplex method to begin, because (after augmenting) these right-hand sides become the respective values of the initial basic variables, which must satisfy nonnegativity constraints.

We next focus on how to augment ≥ constraints, such as −x1 + x2 ≥ 1, with the help of the artificial-variable technique.
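The symbolic bookkeeping for the Big M method described above can be sketched by storing each Eq. (0) coefficient as a pair (a, b) meaning aM + b, so M never needs a numeric value and the rounding errors just mentioned cannot arise (an illustration, not the book's notation):

```python
# Eq. (0) of the augmented artificial problem, columns x1, x2, x3, x4, x5bar, RHS:
# Z - 3x1 - 5x2 + M*x5bar = 0, with each entry stored as (a, b) for aM + b.
row0 = [(0, -3), (0, -5), (0, 0), (0, 0), (1, 0), (0, 0)]
# Eq. (3): 3x1 + 2x2 + x5bar = 18, ordinary numbers written as (0, b):
row3 = [(0, 3), (0, 2), (0, 0), (0, 0), (0, 1), (0, 18)]

# Subtract M times Eq. (3): each plain coefficient b in row 3 contributes -b
# to the M-part (the "a" component) of the corresponding Eq. (0) entry.
new_row0 = [(a - b3, b) for (a, b), (_, b3) in zip(row0, row3)]
# new_row0: x1 -> (-3, -3), i.e., -(3M + 3); x2 -> (-2, -5), i.e., -(2M + 5);
# x5bar -> (0, 0); RHS -> (-18, 0), i.e., -18M, matching the new Eq. (0).
```

The optimality test then compares the pairs by the multiplicative factor a first, falling back to the additive term b only to break ties, exactly as the text prescribes.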
Functional Constraints in ≥ Form

To illustrate how the artificial-variable technique deals with functional constraints in ≥ form, we will use the model for designing Mary's radiation therapy, as presented in Sec. 3.4. For your convenience, this model is repeated below, where we have placed a box around the constraint of special interest here.

Radiation Therapy Example

Minimize  Z = 0.4x1 + 0.5x2,
subject to
    0.3x1 + 0.1x2 ≤ 2.7
    0.5x1 + 0.5x2 = 6
    0.6x1 + 0.4x2 ≥ 6
and
    x1 ≥ 0, x2 ≥ 0.

The graphical solution for this example (originally presented in Fig. 3.12) is repeated here in a slightly different form in Fig. 4.5. The three lines in the figure, along with the two axes, constitute the five constraint boundaries of the problem. The dots lying at the intersection of a pair of constraint boundaries are the corner-point solutions. The only two corner-point feasible solutions are (6, 6) and (7.5, 4.5), and the feasible region is the line segment connecting these two points. The optimal solution is (x1, x2) = (7.5, 4.5), with Z = 5.25.

■ FIGURE 4.5 Graphical display of the radiation therapy example and its corner-point solutions. [Dots are corner-point solutions; the dark line segment between (6, 6) and (7.5, 4.5) is the feasible region; the optimal solution is (7.5, 4.5).]

We soon will show how the simplex method solves this problem by directly solving the corresponding artificial problem. However, first we must describe how to deal with the third constraint. Our approach involves introducing both a surplus variable x5 (defined as x5 = 0.6x1 + 0.4x2 − 6) and an artificial variable x̄6, as shown next.

0.6x1 + 0.4x2 ≥ 6
→ 0.6x1 + 0.4x2 − x5 = 6         (x5 ≥ 0)
→ 0.6x1 + 0.4x2 − x5 + x̄6 = 6   (x5 ≥ 0, x̄6 ≥ 0).
Here x5 is called a surplus variable because it subtracts the surplus of the left-hand side over the right-hand side to convert the inequality constraint to an equivalent equality constraint. Once this conversion is accomplished, the artificial variable is introduced just as for any equality constraint.

After a slack variable x3 is introduced into the first constraint, an artificial variable x̄4 is introduced into the second constraint, and the Big M method is applied, the complete artificial problem (in augmented form) is

Minimize Z = 0.4x1 + 0.5x2 + Mx̄4 + Mx̄6,
subject to
  0.3x1 + 0.1x2 + x3 = 2.7
  0.5x1 + 0.5x2 + x̄4 = 6
  0.6x1 + 0.4x2 − x5 + x̄6 = 6
and
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x̄4 ≥ 0, x5 ≥ 0, x̄6 ≥ 0.

Note that the coefficients of the artificial variables in the objective function are +M, instead of −M, because we now are minimizing Z. Thus, even though x̄4 > 0 and/or x̄6 > 0 is possible for a feasible solution for the artificial problem, the huge unit penalty of M prevents this from occurring in an optimal solution.

As usual, introducing artificial variables enlarges the feasible region. Compare below the original constraints for the real problem with the corresponding constraints on (x1, x2) for the artificial problem.

| Constraints on (x1, x2) for the Real Problem | Constraints on (x1, x2) for the Artificial Problem |
| 0.3x1 + 0.1x2 ≤ 2.7 | 0.3x1 + 0.1x2 ≤ 2.7 |
| 0.5x1 + 0.5x2 = 6   | 0.5x1 + 0.5x2 ≤ 6 (= holds when x̄4 = 0) |
| 0.6x1 + 0.4x2 ≥ 6   | No such constraint (except when x̄6 = 0) |
| x1 ≥ 0, x2 ≥ 0      | x1 ≥ 0, x2 ≥ 0 |

Introducing the artificial variable x̄4 to play the role of a slack variable in the second constraint allows values of (x1, x2) below the 0.5x1 + 0.5x2 = 6 line in Fig. 4.5. Introducing x5 and x̄6 into the third constraint of the real problem (and moving these variables to the right-hand side) yields the equation

0.6x1 + 0.4x2 = 6 + x5 − x̄6.
Because both x5 and x̄6 are constrained only to be nonnegative, their difference x5 − x̄6 can be any positive or negative number. Therefore, 0.6x1 + 0.4x2 can have any value, which has the effect of eliminating the third constraint from the artificial problem and allowing points on either side of the 0.6x1 + 0.4x2 = 6 line in Fig. 4.5. (We keep the third constraint in the system of equations only because it will become relevant again later, after the Big M method forces x̄6 to be zero.) Consequently, the feasible region for the artificial problem is the entire polyhedron in Fig. 4.5 whose vertices are (0, 0), (9, 0), (7.5, 4.5), and (0, 12).

Since the origin now is feasible for the artificial problem, the simplex method starts with (0, 0) as the initial CPF solution, i.e., with (x1, x2, x3, x̄4, x5, x̄6) = (0, 0, 2.7, 6, 0, 6) as the initial BF solution. (Making the origin feasible as a convenient starting point for the simplex method is the whole point of creating the artificial problem.) We soon will trace the entire path followed by the simplex method from the origin to the optimal solution for both the artificial and real problems. But, first, how does the simplex method handle minimization?

Minimization

One straightforward way of minimizing Z with the simplex method is to exchange the roles of the positive and negative coefficients in row 0 for both the optimality test and step 1 of an iteration. However, rather than changing our instructions for the simplex method for this case, we present the following simple way of converting any minimization problem to an equivalent maximization problem:

Minimizing  Z = Σ(j=1 to n) cj xj

is equivalent to

maximizing  −Z = Σ(j=1 to n) (−cj) xj;

i.e., the two formulations yield the same optimal solution(s).
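This equivalence is easy to check numerically on the example's two CPF solutions (a quick illustration, not from the book):

```python
# Minimizing Z over any candidate set gives the same argmin as
# maximizing -Z; checked here on the two CPF solutions of the
# radiation therapy example.
candidates = [(6.0, 6.0), (7.5, 4.5)]

def Z(x1, x2):
    return 0.4 * x1 + 0.5 * x2

best_min = min(candidates, key=lambda p: Z(*p))
best_max = max(candidates, key=lambda p: -Z(*p))
assert best_min == best_max == (7.5, 4.5)
print(round(Z(*best_min), 2))   # 5.25
```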
The two formulations are equivalent because the smaller Z is, the larger −Z is, so the solution that gives the smallest value of Z in the entire feasible region must also give the largest value of −Z in this region. Therefore, in the radiation therapy example, we make the following change in the formulation:

  Minimize  Z = 0.4x1 + 0.5x2
→ Maximize  −Z = −0.4x1 − 0.5x2.

After artificial variables x̄4 and x̄6 are introduced and then the Big M method is applied, the corresponding conversion is

  Minimize  Z = 0.4x1 + 0.5x2 + Mx̄4 + Mx̄6
→ Maximize  −Z = −0.4x1 − 0.5x2 − Mx̄4 − Mx̄6.

Solving the Radiation Therapy Example

We now are nearly ready to apply the simplex method to the radiation therapy example. By using the maximization form just obtained, the entire system of equations is now

(0)  −Z + 0.4x1 + 0.5x2 + Mx̄4 + Mx̄6 = 0
(1)       0.3x1 + 0.1x2 + x3 = 2.7
(2)       0.5x1 + 0.5x2 + x̄4 = 6
(3)       0.6x1 + 0.4x2 − x5 + x̄6 = 6.

The basic variables for the initial BF solution (for this artificial problem) are x3, x̄4, and x̄6. Note that this system of equations is not yet in proper form from Gaussian elimination, as required by the simplex method, since the basic variables x̄4 and x̄6 still need to be algebraically eliminated from Eq. (0). Because x̄4 and x̄6 both have a coefficient of M, Eq. (0) needs to have subtracted from it both M times Eq. (2) and M times Eq. (3). The calculations for all the coefficients (and the right-hand sides) are summarized below, where the vectors are the relevant rows of the simplex tableau corresponding to the above system of equations.

Row 0:        [0.4,  0.5,  0,  M,   0,  M,    0]
       −M × [0.5,  0.5,  0,  1,   0,  0,    6]
       −M × [0.6,  0.4,  0,  0,  −1,  1,    6]
New row 0 = [−1.1M + 0.4,  −0.9M + 0.5,  0,  0,  M,  0,  −12M]

The resulting initial simplex tableau, ready to begin the simplex method, is shown at the top of Table 4.12.
■ TABLE 4.12 The Big M method for the radiation therapy example

Iteration 0
| Basic Var | Eq. | Z | x1 | x2 | x3 | x̄4 | x5 | x̄6 | Right Side |
| Z  | (0) | 1 | −1.1M + 0.4 | −0.9M + 0.5 | 0 | 0 | M  | 0 | −12M |
| x3 | (1) | 0 | 0.3 | 0.1 | 1 | 0 |  0 | 0 | 2.7 |
| x̄4 | (2) | 0 | 0.5 | 0.5 | 0 | 1 |  0 | 0 | 6 |
| x̄6 | (3) | 0 | 0.6 | 0.4 | 0 | 0 | −1 | 1 | 6 |

Iteration 1
| Z  | (0) | 1 | 0 | −(16/30)M + 11/30 | (11/3)M − 4/3 | 0 | M  | 0 | −2.1M − 3.6 |
| x1 | (1) | 0 | 1 | 1/3 | 10/3 | 0 |  0 | 0 | 9 |
| x̄4 | (2) | 0 | 0 | 1/3 | −5/3 | 1 |  0 | 0 | 1.5 |
| x̄6 | (3) | 0 | 0 | 0.2 | −2   | 0 | −1 | 1 | 0.6 |

Iteration 2
| Z  | (0) | 1 | 0 | 0 | −(5/3)M + 7/3 | 0 | −(5/3)M + 11/6 | (8/3)M − 11/6 | −0.5M − 4.7 |
| x1 | (1) | 0 | 1 | 0 | 20/3 | 0 |  5/3 | −5/3 | 8 |
| x̄4 | (2) | 0 | 0 | 0 | 5/3  | 1 |  5/3 | −5/3 | 0.5 |
| x2 | (3) | 0 | 0 | 1 | −10  | 0 | −5   | 5    | 3 |

Iteration 3
| Z  | (0) | 1 | 0 | 0 | 0.5 | M − 1.1 | 0 | M  | −5.25 |
| x1 | (1) | 0 | 1 | 0 | 5   | −1      | 0 | 0  | 7.5 |
| x5 | (2) | 0 | 0 | 0 | 1   | 3/5     | 1 | −1 | 0.3 |
| x2 | (3) | 0 | 0 | 1 | −5  | 3       | 0 | 0  | 4.5 |

Applying the simplex method in just the usual way then yields the sequence of simplex tableaux shown in the rest of Table 4.12. For the optimality test and the selection of the entering basic variable at each iteration, the quantities involving M are treated just as discussed in connection with Table 4.11. Specifically, whenever M is present, only its multiplicative factor is used, unless there is a tie, in which case the tie is broken by using the corresponding additive terms. Just such a tie occurs in the last selection of an entering basic variable (see the next-to-last tableau), where the coefficients of x3 and x5 in row 0 both have the same multiplicative factor of −5/3. Comparing the additive terms, 11/6 < 7/3 leads to choosing x5 as the entering basic variable.

Note in Table 4.12 the progression of values of the artificial variables x̄4 and x̄6 and of Z. We start with large values, x̄4 = 6 and x̄6 = 6, with Z = 12M (−Z = −12M). The first iteration greatly reduces these values.
The Big M method succeeds in driving x̄6 to zero (as a new nonbasic variable) at the second iteration and then in doing the same to x̄4 at the next iteration. With both x̄4 = 0 and x̄6 = 0, the basic solution given in the last tableau is guaranteed to be feasible for the real problem. Since it passes the optimality test, it also is optimal.

Now see what the Big M method has done graphically in Fig. 4.6. The feasible region for the artificial problem initially has four CPF solutions, namely, (0, 0), (9, 0), (0, 12), and (7.5, 4.5), and then replaces the first three with two new CPF solutions, (8, 3) and (6, 6), after x̄6 decreases to x̄6 = 0 so that 0.6x1 + 0.4x2 ≥ 6 becomes an additional constraint. (Note that the three replaced CPF solutions, (0, 0), (9, 0), and (0, 12), actually were corner-point infeasible solutions for the real problem shown in Fig. 4.5.)

[Figure 4.6: This graph shows the feasible region and the sequence of CPF solutions (circled 0, 1, 2, 3) examined by the simplex method (with the Big M method) for the artificial problem that corresponds to the real problem of Fig. 4.5. Constraints for the artificial problem: 0.3x1 + 0.1x2 ≤ 2.7; 0.5x1 + 0.5x2 ≤ 6 (= holds when x̄4 = 0); 0.6x1 + 0.4x2 ≥ 6 when x̄6 = 0; x1 ≥ 0, x2 ≥ 0 (x̄4 ≥ 0, x̄6 ≥ 0). The path runs from (0, 0) with Z = 12M, to (9, 0) with Z = 3.6 + 2.1M, to (8, 3) with Z = 4.7 + 0.5M, to the optimal (7.5, 4.5) with Z = 5.25; also marked are (0, 12) with Z = 6 + 1.2M and (6, 6) with Z = 5.4. The dark line segment from (6, 6) to (7.5, 4.5) is the feasible region for the real problem (x̄4 = 0, x̄6 = 0).]

Starting with the origin as the convenient initial CPF solution for the artificial problem, we move around the boundary to three other CPF solutions: (9, 0), (8, 3), and (7.5, 4.5). The last of these is the first one that also is feasible for the real problem. Fortuitously, this first feasible solution also is optimal, so no additional iterations are needed.
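The iterations traced in Table 4.12 can be reproduced numerically. The sketch below is illustrative, not the book's code: it substitutes a large finite number for the symbolic M (the text later notes that doing so can create numerical trouble on harder problems) and runs a minimal tableau simplex on the maximization form.

```python
def simplex(T, basis):
    """Iterate on tableau T (row 0 = objective row of the maximization
    problem, last column = right side) until no row 0 coefficient is
    negative. Returns the final tableau and basis (column indices)."""
    while True:
        col = min(range(len(T[0]) - 1), key=lambda j: T[0][j])
        if T[0][col] >= -1e-9:            # optimality test
            return T, basis
        # minimum ratio test picks the leaving basic variable
        piv = min((T[i][-1] / T[i][col], i)
                  for i in range(1, len(T)) if T[i][col] > 1e-9)[1]
        basis[piv - 1] = col
        p = T[piv][col]
        T[piv] = [v / p for v in T[piv]]
        for i in range(len(T)):
            if i != piv and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * b for a, b in zip(T[i], T[piv])]

M = 1e6
# Maximize -Z = -0.4x1 - 0.5x2 - M*x4bar - M*x6bar;
# columns x1, x2, x3, x4bar, x5, x6bar.  Row 0 already has the basic
# variables x4bar and x6bar eliminated, as in the text:
T = [
    [-1.1 * M + 0.4, -0.9 * M + 0.5, 0, 0, M, 0, -12 * M],
    [0.3, 0.1, 1, 0,  0, 0, 2.7],
    [0.5, 0.5, 0, 1,  0, 0, 6.0],
    [0.6, 0.4, 0, 0, -1, 1, 6.0],
]
T, basis = simplex(T, [2, 3, 5])
x = [0.0] * 6
for i, j in enumerate(basis):
    x[j] = T[i + 1][-1]
print(round(x[0], 4), round(x[1], 4), round(-T[0][-1], 4))   # 7.5 4.5 5.25
```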
For other problems with artificial variables, it may be necessary to perform additional iterations to reach an optimal solution after the first feasible solution is obtained for the real problem. (This was the case for the example solved in Table 4.11.) Thus, the Big M method can be thought of as having two phases. In the first phase, all the artificial variables are driven to zero (because of the penalty of M per unit for being greater than zero) in order to reach an initial BF solution for the real problem. In the second phase, all the artificial variables are kept at zero (because of this same penalty) while the simplex method generates a sequence of BF solutions for the real problem that leads to an optimal solution. The two-phase method described next is a streamlined procedure for performing these two phases directly, without even introducing M explicitly.

The Two-Phase Method

For the radiation therapy example just solved in Table 4.12, recall its real objective function

Real problem:  Minimize Z = 0.4x1 + 0.5x2.

However, the Big M method uses the following objective function (or its equivalent in maximization form) throughout the entire procedure:

Big M method:  Minimize Z = 0.4x1 + 0.5x2 + Mx̄4 + Mx̄6.

Since the first two coefficients are negligible compared to M, the two-phase method is able to drop M by using the following two objective functions with completely different definitions of Z in turn.

Two-phase method:
  Phase 1:  Minimize Z = x̄4 + x̄6      (until x̄4 = 0, x̄6 = 0).
  Phase 2:  Minimize Z = 0.4x1 + 0.5x2  (with x̄4 = 0, x̄6 = 0).

The phase 1 objective function is obtained by dividing the Big M method objective function by M and then dropping the negligible terms.
Since phase 1 concludes by obtaining a BF solution for the real problem (one where x̄4 = 0 and x̄6 = 0), this solution is then used as the initial BF solution for applying the simplex method to the real problem (with its real objective function) in phase 2. Before solving the example in this way, we summarize the general method.

Summary of the Two-Phase Method.

Initialization: Revise the constraints of the original problem by introducing artificial variables as needed to obtain an obvious initial BF solution for the artificial problem.

Phase 1: The objective for this phase is to find a BF solution for the real problem. To do this, Minimize Z = Σ artificial variables, subject to revised constraints. The optimal solution obtained for this problem (with Z = 0) will be a BF solution for the real problem.

Phase 2: The objective for this phase is to find an optimal solution for the real problem. Since the artificial variables are not part of the real problem, these variables can now be dropped (they are all zero now anyway).15 Starting from the BF solution obtained at the end of phase 1, use the simplex method to solve the real problem.

For the example, the problems to be solved by the simplex method in the respective phases are summarized below.

Phase 1 Problem (Radiation Therapy Example):

Minimize Z = x̄4 + x̄6,
subject to
  0.3x1 + 0.1x2 + x3 = 2.7
  0.5x1 + 0.5x2 + x̄4 = 6
  0.6x1 + 0.4x2 − x5 + x̄6 = 6
and
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x̄4 ≥ 0, x5 ≥ 0, x̄6 ≥ 0.

Phase 2 Problem (Radiation Therapy Example):

Minimize Z = 0.4x1 + 0.5x2,

15We are skipping over three other possibilities here: (1) artificial variables > 0 (discussed in the next subsection), (2) artificial variables that are degenerate basic variables, and (3) retaining the artificial variables as nonbasic variables in phase 2 (and not allowing them to become basic) as an aid to subsequent postoptimality analysis. Your IOR Tutorial allows you to explore these possibilities.
subject to
  0.3x1 + 0.1x2 + x3 = 2.7
  0.5x1 + 0.5x2 = 6
  0.6x1 + 0.4x2 − x5 = 6
and
  x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x5 ≥ 0.

The only differences between these two problems are in the objective function and in the inclusion (phase 1) or exclusion (phase 2) of the artificial variables x̄4 and x̄6. Without the artificial variables, the phase 2 problem does not have an obvious initial BF solution. The sole purpose of solving the phase 1 problem is to obtain a BF solution with x̄4 = 0 and x̄6 = 0 so that this solution (without the artificial variables) can be used as the initial BF solution for phase 2.

Table 4.13 shows the result of applying the simplex method to this phase 1 problem. [Row 0 in the initial tableau is obtained by converting Minimize Z = x̄4 + x̄6 to Maximize (−Z) = −x̄4 − x̄6 and then using elementary row operations to eliminate the basic variables x̄4 and x̄6 from −Z + x̄4 + x̄6 = 0.] In the next-to-last tableau, there is a tie for the entering basic variable between x3 and x5, which is broken arbitrarily in favor of x3. The solution obtained at the end of phase 1, then, is (x1, x2, x3, x̄4, x5, x̄6) = (6, 6, 0.3, 0, 0, 0) or, after x̄4 and x̄6 are dropped, (x1, x2, x3, x5) = (6, 6, 0.3, 0).

■ TABLE 4.13 Phase 1 of the two-phase method for the radiation therapy example
Iteration 0
| Basic Var | Eq. | Z | x1 | x2 | x3 | x̄4 | x5 | x̄6 | Right Side |
| Z  | (0) | 1 | −1.1 | −0.9 | 0 | 0 |  1 | 0 | −12 |
| x3 | (1) | 0 | 0.3 | 0.1 | 1 | 0 |  0 | 0 | 2.7 |
| x̄4 | (2) | 0 | 0.5 | 0.5 | 0 | 1 |  0 | 0 | 6 |
| x̄6 | (3) | 0 | 0.6 | 0.4 | 0 | 0 | −1 | 1 | 6 |

Iteration 1
| Z  | (0) | 1 | 0 | −16/30 | 11/3 | 0 |  1 | 0 | −2.1 |
| x1 | (1) | 0 | 1 | 1/3 | 10/3 | 0 |  0 | 0 | 9 |
| x̄4 | (2) | 0 | 0 | 1/3 | −5/3 | 1 |  0 | 0 | 1.5 |
| x̄6 | (3) | 0 | 0 | 0.2 | −2   | 0 | −1 | 1 | 0.6 |

Iteration 2
| Z  | (0) | 1 | 0 | 0 | −5/3 | 0 | −5/3 | 8/3  | −0.5 |
| x1 | (1) | 0 | 1 | 0 | 20/3 | 0 |  5/3 | −5/3 | 8 |
| x̄4 | (2) | 0 | 0 | 0 | 5/3  | 1 |  5/3 | −5/3 | 0.5 |
| x2 | (3) | 0 | 0 | 1 | −10  | 0 | −5   | 5    | 3 |

Iteration 3
| Z  | (0) | 1 | 0 | 0 | 0 | 1   | 0  | 1  | 0 |
| x1 | (1) | 0 | 1 | 0 | 0 | −4  | −5 | 5  | 6 |
| x3 | (2) | 0 | 0 | 0 | 1 | 3/5 | 1  | −1 | 0.3 |
| x2 | (3) | 0 | 0 | 1 | 0 | 6   | 5  | −5 | 6 |

As claimed in the summary, this solution from phase 1 is indeed a BF solution for the real problem (the phase 2 problem) because it is the solution (after you set x5 = 0) to the system of equations consisting of the three functional constraints for the phase 2 problem. In fact, after deleting the x̄4 and x̄6 columns as well as row 0 for each iteration, Table 4.13 shows one way of using Gaussian elimination to solve this system of equations by reducing the system to the form displayed in the final tableau.

Table 4.14 shows the preparations for beginning phase 2 after phase 1 is completed. Starting from the final tableau in Table 4.13, we drop the artificial variables (x̄4 and x̄6), substitute the phase 2 objective function (−Z = −0.4x1 − 0.5x2 in maximization form) into row 0, and then restore the proper form from Gaussian elimination (by algebraically eliminating the basic variables x1 and x2 from row 0). Thus, row 0 in the last tableau is obtained by performing the following elementary row operations in the next-to-last tableau: from row 0 subtract both the product, 0.4 times row 1, and the product, 0.5 times row 3. Except for the deletion of the two columns, note that rows 1 to 3 never change. The only adjustments occur in row 0 in order to replace the phase 1 objective function by the phase 2 objective function.
The last tableau in Table 4.14 is the initial tableau for applying the simplex method to the phase 2 problem, as shown at the top of Table 4.15. Just one iteration then leads to the optimal solution shown in the second tableau: (x1, x2, x3, x5) = (7.5, 4.5, 0, 0.3). This solution is the desired optimal solution for the real problem of interest rather than the artificial problem constructed for phase 1.

Now we see what the two-phase method has done graphically in Fig. 4.7. Starting at the origin, phase 1 examines a total of four CPF solutions for the artificial problem. The first three actually were corner-point infeasible solutions for the real problem shown in Fig. 4.5. The fourth CPF solution, at (6, 6), is the first one that also is feasible for the real problem, so it becomes the initial CPF solution for phase 2. One iteration in phase 2 leads to the optimal CPF solution at (7.5, 4.5).

■ TABLE 4.14 Preparing to begin phase 2 for the radiation therapy example

Final phase 1 tableau
| Basic Var | Eq. | Z | x1 | x2 | x3 | x̄4 | x5 | x̄6 | Right Side |
| Z  | (0) | 1 | 0 | 0 | 0 | 1   | 0  | 1  | 0 |
| x1 | (1) | 0 | 1 | 0 | 0 | −4  | −5 | 5  | 6 |
| x3 | (2) | 0 | 0 | 0 | 1 | 3/5 | 1  | −1 | 0.3 |
| x2 | (3) | 0 | 0 | 1 | 0 | 6   | 5  | −5 | 6 |

Drop x̄4 and x̄6
| Z  | (0) | 1 | 0 | 0 | 0 | 0  | 0 |
| x1 | (1) | 0 | 1 | 0 | 0 | −5 | 6 |
| x3 | (2) | 0 | 0 | 0 | 1 | 1  | 0.3 |
| x2 | (3) | 0 | 0 | 1 | 0 | 5  | 6 |

Substitute phase 2 objective function
| Z  | (0) | 1 | 0.4 | 0.5 | 0 | 0  | 0 |
| x1 | (1) | 0 | 1   | 0   | 0 | −5 | 6 |
| x3 | (2) | 0 | 0   | 0   | 1 | 1  | 0.3 |
| x2 | (3) | 0 | 0   | 1   | 0 | 5  | 6 |

Restore proper form from Gaussian elimination
| Z  | (0) | 1 | 0 | 0 | 0 | −0.5 | −5.4 |
| x1 | (1) | 0 | 1 | 0 | 0 | −5   | 6 |
| x3 | (2) | 0 | 0 | 0 | 1 | 1    | 0.3 |
| x2 | (3) | 0 | 0 | 1 | 0 | 5    | 6 |

■ TABLE 4.15 Phase 2 of the two-phase method for the radiation therapy example
Iteration 0
| Basic Var | Eq. | Z | x1 | x2 | x3 | x5 | Right Side |
| Z  | (0) | 1 | 0 | 0 | 0 | −0.5 | −5.4 |
| x1 | (1) | 0 | 1 | 0 | 0 | −5   | 6 |
| x3 | (2) | 0 | 0 | 0 | 1 | 1    | 0.3 |
| x2 | (3) | 0 | 0 | 1 | 0 | 5    | 6 |

Iteration 1
| Z  | (0) | 1 | 0 | 0 | 0.5 | 0 | −5.25 |
| x1 | (1) | 0 | 1 | 0 | 5   | 0 | 7.5 |
| x5 | (2) | 0 | 0 | 0 | 1   | 1 | 0.3 |
| x2 | (3) | 0 | 0 | 1 | −5  | 0 | 4.5 |

[Figure 4.7: This graph shows the sequence of CPF solutions for phase 1 (circled 0, 1, 2, 3) and then for phase 2 (circled 0, 1) when the two-phase method is applied to the radiation therapy example. Phase 1 moves around the boundary of the feasible region for the artificial problem from (0, 0) to (9, 0) to (8, 3) to (6, 6); the dark line segment from (6, 6) to (7.5, 4.5) is the feasible region for the real problem (phase 2), and phase 2 moves from (6, 6) to the optimal solution (7.5, 4.5). Also marked is (0, 12).]

If the tie for the entering basic variable in the next-to-last tableau of Table 4.13 had been broken in the other way, then phase 1 would have gone directly from (8, 3) to (7.5, 4.5). After (7.5, 4.5) was used to set up the initial simplex tableau for phase 2, the optimality test would have revealed that this solution was optimal, so no iterations would be done.

It is interesting to compare the Big M and two-phase methods. Begin with their objective functions.

Big M Method:  Minimize Z = 0.4x1 + 0.5x2 + Mx̄4 + Mx̄6.

Two-Phase Method:
  Phase 1:  Minimize Z = x̄4 + x̄6.
  Phase 2:  Minimize Z = 0.4x1 + 0.5x2.

Because the Mx̄4 and Mx̄6 terms dominate the 0.4x1 and 0.5x2 terms in the objective function for the Big M method, this objective function is essentially equivalent to the phase 1 objective function as long as x̄4 and/or x̄6 is greater than zero. Then, when both x̄4 = 0 and x̄6 = 0, the objective function for the Big M method becomes completely equivalent to the phase 2 objective function. Because of these virtual equivalencies in objective functions, the Big M and two-phase methods generally have the same sequence of BF solutions.
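The two phases just compared can be run end to end in code. The sketch below is illustrative, not the book's procedure; `pivot` and `solve` are assumed helper names. Phase 1 minimizes x̄4 + x̄6, and phase 2 swaps in the real objective while keeping the artificial columns out of the basis.

```python
def pivot(T, basis, piv, col):
    """Make column `col` basic in row `piv` by Gaussian elimination."""
    basis[piv - 1] = col
    p = T[piv][col]
    T[piv] = [v / p for v in T[piv]]
    for i in range(len(T)):
        if i != piv and abs(T[i][col]) > 1e-12:
            f = T[i][col]
            T[i] = [a - f * b for a, b in zip(T[i], T[piv])]

def solve(T, basis, cols):
    """Simplex iterations on the maximization tableau T, allowing only
    the variable indices in `cols` to enter the basis."""
    while True:
        col = min(cols, key=lambda j: T[0][j])
        if T[0][col] >= -1e-9:
            return
        piv = min((T[i][-1] / T[i][col], i)
                  for i in range(1, len(T)) if T[i][col] > 1e-9)[1]
        pivot(T, basis, piv, col)

# Columns: x1, x2, x3, x4bar, x5, x6bar.  Phase 1 row 0 (maximize -Z,
# Z = x4bar + x6bar, basic variables already eliminated from row 0):
T = [
    [-1.1, -0.9, 0, 0, 1, 0, -12.0],
    [0.3, 0.1, 1, 0,  0, 0, 2.7],
    [0.5, 0.5, 0, 1,  0, 0, 6.0],
    [0.6, 0.4, 0, 0, -1, 1, 6.0],
]
basis = [2, 3, 5]
solve(T, basis, range(6))
assert abs(T[0][-1]) < 1e-9      # phase 1 ends with Z = 0: BF solution found

# Phase 2: replace row 0 by the real objective (maximize -Z with
# Z = 0.4x1 + 0.5x2), re-eliminate the basic variables from row 0,
# and keep the artificial columns 3 and 5 nonbasic.
T[0] = [0.4, 0.5, 0, 0, 0, 0, 0.0]
for i, j in enumerate(basis):
    f = T[0][j]
    T[0] = [a - f * b for a, b in zip(T[0], T[i + 1])]
solve(T, basis, [0, 1, 2, 4])
x = [0.0] * 6
for i, j in enumerate(basis):
    x[j] = T[i + 1][-1]
print(round(x[0], 4), round(x[1], 4), round(-T[0][-1], 4))   # 7.5 4.5 5.25
```

Note that the tie in phase 1 (between the x3 and x5 columns, both with coefficient −5/3) is broken here in favor of the lower column index, which reproduces the book's choice of x3 and hence the extra iteration through (6, 6).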
The one possible exception occurs when there is a tie for the entering basic variable in phase 1 of the two-phase method, as happened in the third tableau of Table 4.13. Notice that the first three tableaux of Tables 4.12 and 4.13 are almost identical, with the only difference being that the multiplicative factors of M in Table 4.12 become the sole quantities in the corresponding spots in Table 4.13. Consequently, the additive terms that broke the tie for the entering basic variable in the third tableau of Table 4.12 were not present to break this same tie in Table 4.13. The result for this example was an extra iteration for the two-phase method. Generally, however, the advantage of having the additive factors is minimal. The two-phase method streamlines the Big M method by using only the multiplicative factors in phase 1 and by dropping the artificial variables in phase 2. (The Big M method could combine the multiplicative and additive factors by assigning an actual huge number to M, but this might create numerical instability problems.) For these reasons, the two-phase method is commonly used in computer codes. The Solved Examples section on the book’s website provides another example of applying both the Big M method and the two-phase method to the same problem. No Feasible Solutions So far in this section we have been concerned primarily with the fundamental problem of identifying an initial BF solution when an obvious one is not available. You have seen how the artificial-variable technique can be used to construct an artificial problem and obtain an initial BF solution for this artificial problem instead. Use of either the Big M method or the two-phase method then enables the simplex method to begin its pilgrimage toward the BF solutions, and ultimately toward the optimal solution, for the real problem. However, you should be wary of a certain pitfall with this approach. 
There may be no obvious choice for the initial BF solution for the very good reason that there are no feasible solutions at all! Nevertheless, by constructing an artificial feasible solution, there is nothing to prevent the simplex method from proceeding as usual and ultimately reporting a supposedly optimal solution.

Fortunately, the artificial-variable technique provides the following signpost to indicate when this has happened:

If the original problem has no feasible solutions, then either the Big M method or phase 1 of the two-phase method yields a final solution that has at least one artificial variable greater than zero. Otherwise, they all equal zero.

To illustrate, let us change the first constraint in the radiation therapy example (see Fig. 4.5) as follows:

0.3x1 + 0.1x2 ≤ 2.7  →  0.3x1 + 0.1x2 ≤ 1.8,

so that the problem no longer has any feasible solutions. Applying the Big M method just as before (see Table 4.12) yields the tableaux shown in Table 4.16. (Phase 1 of the two-phase method yields the same tableaux except that each expression involving M is replaced by just the multiplicative factor.)

■ TABLE 4.16 The Big M method for the revision of the radiation therapy example that has no feasible solutions

Iteration 0
| Basic Var | Eq. | Z | x1 | x2 | x3 | x̄4 | x5 | x̄6 | Right Side |
| Z  | (0) | 1 | −1.1M + 0.4 | −0.9M + 0.5 | 0 | 0 | M  | 0 | −12M |
| x3 | (1) | 0 | 0.3 | 0.1 | 1 | 0 |  0 | 0 | 1.8 |
| x̄4 | (2) | 0 | 0.5 | 0.5 | 0 | 1 |  0 | 0 | 6 |
| x̄6 | (3) | 0 | 0.6 | 0.4 | 0 | 0 | −1 | 1 | 6 |

Iteration 1
| Z  | (0) | 1 | 0 | −(16/30)M + 11/30 | (11/3)M − 4/3 | 0 | M  | 0 | −5.4M − 2.4 |
| x1 | (1) | 0 | 1 | 1/3 | 10/3 | 0 |  0 | 0 | 6 |
| x̄4 | (2) | 0 | 0 | 1/3 | −5/3 | 1 |  0 | 0 | 3 |
| x̄6 | (3) | 0 | 0 | 0.2 | −2   | 0 | −1 | 1 | 2.4 |

Iteration 2
| Z  | (0) | 1 | 0 | 0 | M + 0.5 | 1.6M − 1.1 | M  | 0 | −0.6M − 5.7 |
| x1 | (1) | 0 | 1 | 0 | 5  | −1   | 0  | 0 | 3 |
| x2 | (2) | 0 | 0 | 1 | −5 | 3    | 0  | 0 | 9 |
| x̄6 | (3) | 0 | 0 | 0 | −1 | −0.6 | −1 | 1 | 0.6 |

Hence, the Big M method normally would be indicating that the optimal solution is (3, 9, 0, 0, 0, 0.6).
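Phase 1 gives the same verdict numerically. The sketch below (illustrative, not from the book) runs phase 1 on the revised problem and ends with a positive phase 1 objective value, flagging infeasibility.

```python
def phase1(T, basis):
    """Minimize the sum of the artificial variables (row 0 of the
    maximization tableau already has the basic variables eliminated);
    returns the optimal phase 1 objective value."""
    while True:
        col = min(range(len(T[0]) - 1), key=lambda j: T[0][j])
        if T[0][col] >= -1e-9:
            return -T[0][-1]
        piv = min((T[i][-1] / T[i][col], i)
                  for i in range(1, len(T)) if T[i][col] > 1e-9)[1]
        basis[piv - 1] = col
        p = T[piv][col]
        T[piv] = [v / p for v in T[piv]]
        for i in range(len(T)):
            if i != piv and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * b for a, b in zip(T[i], T[piv])]

# Revised example with 0.3x1 + 0.1x2 <= 1.8 (no feasible solutions);
# columns x1, x2, x3, x4bar, x5, x6bar:
T = [
    [-1.1, -0.9, 0, 0, 1, 0, -12.0],
    [0.3, 0.1, 1, 0,  0, 0, 1.8],
    [0.5, 0.5, 0, 1,  0, 0, 6.0],
    [0.6, 0.4, 0, 0, -1, 1, 6.0],
]
z1 = phase1(T, [2, 3, 5])
print(round(z1, 4))   # 0.6 > 0, so the real problem has no feasible solutions
```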
However, since an artificial variable x̄6 = 0.6 > 0, the real message here is that the problem has no feasible solutions.16

Variables Allowed to Be Negative

In most practical problems, negative values for the decision variables would have no physical meaning, so it is necessary to include nonnegativity constraints in the formulations of their linear programming models. However, this is not always the case. To illustrate, suppose that the Wyndor Glass Co. problem is changed so that product 1 already is in production, and the first decision variable x1 represents the increase in its production rate. Therefore, a negative value of x1 would indicate that product 1 is to be cut back by that amount. Such reductions might be desirable to allow a larger production rate for the new, more profitable product 2, so negative values should be allowed for x1 in the model.

Since the procedure for determining the leaving basic variable requires that all the variables have nonnegativity constraints, any problem containing variables allowed to be negative must be converted to an equivalent problem involving only nonnegative variables before the simplex method is applied. Fortunately, this conversion can be done. The modification required for each variable depends upon whether it has a (negative) lower bound on the values allowed. Each of these two cases is now discussed.

16Techniques have been developed (and incorporated into linear programming software) to analyze what causes a large linear programming problem to have no feasible solutions so that any errors in the formulation can be corrected. For example, see J. W. Chinneck: Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods, Springer Science + Business Media, New York, 2008.

Variables with a Bound on the Negative Values Allowed.
Consider any decision variable xj that is allowed to have negative values which satisfy a constraint of the form

xj ≥ Lj,

where Lj is some negative constant. This constraint can be converted to a nonnegativity constraint by making the change of variables

xj′ = xj − Lj,  so  xj′ ≥ 0.

Thus, xj′ + Lj would be substituted for xj throughout the model, so that the redefined decision variable xj′ cannot be negative. (This same technique can be used when Lj is positive to convert a functional constraint xj ≥ Lj to a nonnegativity constraint xj′ ≥ 0.)

To illustrate, suppose that the current production rate for product 1 in the Wyndor Glass Co. problem is 10. With the definition of x1 just given, the complete model at this point is the same as that given in Sec. 3.1 except that the nonnegativity constraint x1 ≥ 0 is replaced by x1 ≥ −10. To obtain the equivalent model needed for the simplex method, this decision variable would be redefined as the total production rate of product 1,

x1′ = x1 + 10,

which yields the changes in the objective function and constraints as shown:

  Z = 3x1 + 5x2            Z = 3(x1′ − 10) + 5x2         Z = −30 + 3x1′ + 5x2
  x1 ≤ 4                    x1′ − 10 ≤ 4                  x1′ ≤ 14
  2x2 ≤ 12          →       2x2 ≤ 12             →        2x2 ≤ 12
  3x1 + 2x2 ≤ 18            3(x1′ − 10) + 2x2 ≤ 18        3x1′ + 2x2 ≤ 48
  x1 ≥ −10, x2 ≥ 0          x1′ − 10 ≥ −10, x2 ≥ 0        x1′ ≥ 0, x2 ≥ 0

Variables with No Bound on the Negative Values Allowed. In the case where xj does not have a lower-bound constraint in the model formulated, another approach is required: xj is replaced throughout the model by the difference of two new nonnegative variables

xj = xj+ − xj−,  where xj+ ≥ 0, xj− ≥ 0.

Since xj+ and xj− can have any nonnegative values, this difference xj+ − xj− can have any value (positive or negative), so it is a legitimate substitute for xj in the model. But after such substitutions, the simplex method can proceed with just nonnegative variables.

The new variables xj+ and xj− have a simple interpretation.
As explained in the next paragraph, each BF solution for the new form of the model necessarily has the property that either xj+ = 0 or xj− = 0 (or both). Therefore, at the optimal solution obtained by the simplex method (a BF solution),

xj+ = xj if xj ≥ 0, and 0 otherwise;
xj− = −xj if xj ≤ 0, and 0 otherwise;

so that xj+ represents the positive part of the decision variable xj and xj− its negative part (as suggested by the superscripts).

For example, if xj = 10, the above expressions give xj+ = 10 and xj− = 0. This same value of xj = xj+ − xj− = 10 also would occur with larger values of xj+ and xj− such that xj+ − xj− = 10. Plotting these values of xj+ and xj− on a two-dimensional graph gives a line with an endpoint at xj+ = 10, xj− = 0 to avoid violating the nonnegativity constraints. This endpoint is the only corner-point solution on the line. Therefore, only this endpoint can be part of an overall CPF solution or BF solution involving all the variables of the model. This illustrates why each BF solution necessarily has either xj+ = 0 or xj− = 0 (or both).

To illustrate the use of xj+ and xj−, let us return to the example introduced previously in this chapter where x1 is redefined as the increase over the current production rate of 10 for product 1 in the Wyndor Glass Co. problem. However, now suppose that the x1 ≥ −10 constraint was not included in the original model because it clearly would not change the optimal solution. (In some problems, certain variables do not need explicit lower-bound constraints because the functional constraints already prevent lower values.)
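The positive-part/negative-part property just described can be checked with a small script (an illustration; the helper `split` is a hypothetical name, not from the book):

```python
def split(x):
    """Return the positive-part / negative-part pair (x_plus, x_minus)
    that a BF solution would use: one of the two is always zero."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

for x1 in [-7.0, 0.0, 10.0]:
    x1p, x1m = split(x1)
    assert x1p >= 0 and x1m >= 0 and x1p - x1m == x1
    assert min(x1p, x1m) == 0.0   # either the positive or the negative part is zero
print(split(-7.0))   # (0.0, 7.0)
```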
Therefore, before the simplex method is applied, x1 would be replaced by the difference

x1 = x1+ − x1−,  where x1+ ≥ 0, x1− ≥ 0,

as shown:

  Maximize Z = 3x1 + 5x2,            Maximize Z = 3x1+ − 3x1− + 5x2,
  subject to                          subject to
    x1 ≤ 4                    →         x1+ − x1− ≤ 4
    2x2 ≤ 12                            2x2 ≤ 12
    3x1 + 2x2 ≤ 18                      3x1+ − 3x1− + 2x2 ≤ 18
    x2 ≥ 0 (only)                       x1+ ≥ 0, x1− ≥ 0, x2 ≥ 0

From a computational viewpoint, this approach has the disadvantage that the new equivalent model to be used has more variables than the original model. In fact, if all the original variables lack lower-bound constraints, the new model will have twice as many variables. Fortunately, the approach can be modified slightly so that the number of variables is increased by only one, regardless of how many original variables need to be replaced. This modification is done by replacing each such variable xj by

xj = xj′ − x″,  where xj′ ≥ 0, x″ ≥ 0,

instead, where x″ is the same variable for all relevant j. The interpretation of x″ in this case is that −x″ is the current value of the largest (in absolute terms) negative original variable, so that xj′ is the amount by which xj exceeds this value. Thus, the simplex method now can make some of the xj′ variables larger than zero even when x″ > 0.

■ 4.7 POSTOPTIMALITY ANALYSIS

We stressed in Secs. 2.3, 2.4, and 2.5 that postoptimality analysis (the analysis done after an optimal solution is obtained for the initial version of the model) constitutes a very major and very important part of most operations research studies. The fact that postoptimality analysis is very important is particularly true for typical linear programming applications. In this section, we focus on the role of the simplex method in performing this analysis.

Table 4.17 summarizes the typical steps in postoptimality analysis for linear programming studies.
■ TABLE 4.17 Postoptimality analysis for linear programming

| Task | Purpose | Technique |
| Model debugging | Find errors and weaknesses in model | Reoptimization |
| Model validation | Demonstrate validity of final model | See Sec. 2.4 |
| Final managerial decisions on resource allocations (the bi values) | Make appropriate division of organizational resources between activities under study and other important activities | Shadow prices |
| Evaluate estimates of model parameters | Determine crucial estimates that may affect optimal solution for further study | Sensitivity analysis |
| Evaluate trade-offs between model parameters | Determine best trade-off | Parametric linear programming |

The rightmost column identifies some algorithmic techniques that involve the simplex method. These techniques are introduced briefly here with the technical details deferred to later chapters. Since you may not have the opportunity to cover these particular chapters, this section has two objectives. One is to make sure that you have at least an introduction to these important techniques; the other is to provide some helpful background if you do have the opportunity to delve further into these topics later.

Reoptimization

As discussed in Sec. 3.6, linear programming models that arise in practice commonly are very large, with hundreds, thousands, or even millions of functional constraints and decision variables. In such cases, many variations of the basic model may be of interest for considering different scenarios. Therefore, after having found an optimal solution for one version of a linear programming model, we frequently must solve again (often many times) for the solution of a slightly different version of the model. We nearly always have to solve again several times during the model debugging stage (described in Secs. 2.3 and 2.4), and we usually have to do so a large number of times during the later stages of postoptimality analysis as well.
One approach is simply to reapply the simplex method from scratch for each new version of the model, even though each run may require hundreds or even thousands of iterations for large problems. However, a much more efficient approach is to reoptimize. Reoptimization involves deducing how changes in the model get carried along to the final simplex tableau (as described in Secs. 5.3 and 7.1). This revised tableau and the optimal solution for the prior model are then used as the initial tableau and the initial basic solution for solving the new model. If this solution is feasible for the new model, then the simplex method is applied in the usual way, starting from this initial BF solution. If the solution is not feasible, a related algorithm called the dual simplex method (described in Sec. 8.1) probably can be applied to find the new optimal solution,¹⁷ starting from this initial basic solution.

The big advantage of this reoptimization technique over re-solving from scratch is that an optimal solution for the revised model probably is going to be much closer to the prior optimal solution than to an initial BF solution constructed in the usual way for the simplex method. Therefore, assuming that the model revisions were modest, only a few iterations should be required to reoptimize instead of the hundreds or thousands that may be required when you start from scratch. In fact, the optimal solutions for the prior and revised models are frequently the same, in which case the reoptimization technique requires only one application of the optimality test and no iterations.

¹⁷ The one requirement for using the dual simplex method here is that the optimality test is still passed when applied to row 0 of the revised final tableau. If not, then still another algorithm called the primal-dual method can be used instead.

Shadow Prices

Recall that linear programming problems often can be interpreted as allocating resources to activities.
In particular, when the functional constraints are in ≤ form, we interpreted the bi (the right-hand sides) as the amounts of the respective resources being made available for the activities under consideration. In many cases, there may be some latitude in the amounts that will be made available. If so, the bi values used in the initial (validated) model actually may represent management's tentative initial decision on how much of the organization's resources will be provided to the activities considered in the model instead of to other important activities under the purview of management. From this broader perspective, some of the bi values can be increased in a revised model, but only if a sufficiently strong case can be made to management that this revision would be beneficial.

Consequently, information on the economic contribution of the resources to the measure of performance (Z) for the current study often would be extremely useful. The simplex method provides this information in the form of shadow prices for the respective resources.

The shadow price for resource i (denoted by y*i) measures the marginal value of this resource, i.e., the rate at which Z could be increased by (slightly) increasing the amount of this resource (bi) being made available.¹⁸˒¹⁹ The simplex method identifies this shadow price by

    y*i = coefficient of the ith slack variable in row 0 of the final simplex tableau.

To illustrate, for the Wyndor Glass Co. problem,

    Resource i = production capacity of Plant i (i = 1, 2, 3) being made available to the two new products under consideration,
    bi = hours of production time per week being made available in Plant i for these new products.

Providing a substantial amount of production time for the new products would require adjusting the amount of production time still available for the current products, so choosing the bi value is a difficult managerial decision.
The tentative initial decision has been

    b1 = 4,  b2 = 12,  b3 = 18,

as reflected in the basic model considered in Sec. 3.1 and in this chapter. However, management now wishes to evaluate the effect of changing any of the bi values. The shadow prices for these three resources provide just the information that management needs. The final tableau in Table 4.8 yields

    y*1 = 0   = shadow price for resource 1,
    y*2 = 3/2 = shadow price for resource 2,
    y*3 = 1   = shadow price for resource 3.

With just two decision variables, these numbers can be verified by checking graphically that individually increasing any bi by 1 indeed would increase the optimal value of Z by y*i. For example, Fig. 4.8 demonstrates this increase for resource 2 by reapplying the graphical method presented in Sec. 3.1.

¹⁸ The increase in bi must be sufficiently small that the current set of basic variables remains optimal since this rate (marginal value) changes if the set of basic variables changes.
¹⁹ In the case of a functional constraint in ≥ or = form, its shadow price is again defined as the rate at which Z could be increased by (slightly) increasing the value of bi, although the interpretation of bi now would normally be something other than the amount of a resource being made available.

■ FIGURE 4.8 This graph shows that the shadow price is y*2 = 3/2 for resource 2 for the Wyndor Glass Co. problem. The two dots are the optimal solutions for b2 = 12 or b2 = 13, and plugging these solutions into the objective function reveals that increasing b2 by 1 increases Z by y*2 = 3/2. [The graph plots the constraint boundaries x1 = 4, 2x2 = 12, 2x2 = 13, and 3x1 + 2x2 = 18, with Z = 3(2) + 5(6) = 36 at (2, 6) and Z = 3(5/3) + 5(13/2) = 37.5 at (5/3, 13/2).]

The optimal solution, (2, 6) with Z = 36, changes to (5/3, 13/2) with Z = 37.5 when b2 is increased by 1 (from 12 to 13), so that

    y*2 = ΔZ = 37.5 − 36 = 3/2.
Since Z is expressed in thousands of dollars of profit per week, y*2 = 3/2 indicates that adding 1 more hour of production time per week in Plant 2 for these two new products would increase their total profit by $1,500 per week. Should this actually be done? It depends on the marginal profitability of other products currently using this production time. If there is a current product that contributes less than $1,500 of weekly profit per hour of weekly production time in Plant 2, then some shift of production time to the new products would be worthwhile. We shall continue this story in Sec. 7.2, where the Wyndor OR team uses shadow prices as part of its sensitivity analysis of the model.

Figure 4.8 demonstrates that y*2 = 3/2 is the rate at which Z could be increased by increasing b2 slightly. However, it also demonstrates the common phenomenon that this interpretation holds only for a small increase in b2. Once b2 is increased beyond 18, the optimal solution stays at (0, 9) with no further increase in Z. (At that point, the set of basic variables in the optimal solution has changed, so a new final simplex tableau will be obtained with new shadow prices, including y*2 = 0.)

Now note in Fig. 4.8 why y*1 = 0. Because the constraint on resource 1, x1 ≤ 4, is not binding on the optimal solution (2, 6), there is a surplus of this resource. Therefore, increasing b1 beyond 4 cannot yield a new optimal solution with a larger value of Z. By contrast, the constraints on resources 2 and 3, 2x2 ≤ 12 and 3x1 + 2x2 ≤ 18, are binding constraints (constraints that hold with equality at the optimal solution). Because the limited supply of these resources (b2 = 12, b3 = 18) prevents Z from being increased further, they have positive shadow prices. Economists refer to such resources as scarce goods, whereas resources available in surplus (such as resource 1) are free goods (resources with a zero shadow price).
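Modern LP solvers report shadow prices as dual values alongside the optimal solution. A minimal sketch with SciPy's HiGHS-based linprog (the solver choice is illustrative; note that linprog minimizes, so the marginals it reports are the negatives of this maximization problem's shadow prices):

```python
from scipy.optimize import linprog

# Wyndor: maximize 3x1 + 5x2  ->  minimize -3x1 - 5x2
res = linprog(c=[-3, -5],
              A_ub=[[1, 0], [0, 2], [3, 2]],   # plant capacity constraints
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)],
              method="highs")

# res.ineqlin.marginals[i] = d(minimized objective)/d(b_i); negate to get
# the maximization problem's shadow prices y*_i.
shadow_prices = [-m for m in res.ineqlin.marginals]
# Expect y*1 = 0, y*2 = 3/2, y*3 = 1, matching the final simplex tableau.
```

This reproduces both the optimal solution (2, 6) and the three shadow prices read from row 0 of the final tableau.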
The kind of information provided by shadow prices clearly is valuable to management when it considers reallocations of resources within the organization. It also is very helpful when an increase in bi can be achieved only by going outside the organization to purchase more of the resource in the marketplace. For example, suppose that Z represents profit and that the unit profits of the activities (the cj values) include the costs (at regular prices) of all the resources consumed. Then a positive shadow price of y*i for resource i means that the total profit Z can be increased by y*i by purchasing 1 more unit of this resource at its regular price. Alternatively, if a premium price must be paid for the resource in the marketplace, then y*i represents the maximum premium (excess over the regular price) that would be worth paying.²⁰

The theoretical foundation for shadow prices is provided by the duality theory described in Chap. 6.

Sensitivity Analysis

When discussing the certainty assumption for linear programming at the end of Sec. 3.3, we pointed out that the values used for the model parameters (the aij, bi, and cj identified in Table 3.3) generally are just estimates of quantities whose true values will not become known until the linear programming study is implemented at some time in the future. A main purpose of sensitivity analysis is to identify the sensitive parameters (i.e., the parameters whose values cannot be changed without changing the optimal solution). The sensitive parameters are the parameters that need to be estimated with special care to minimize the risk of obtaining an erroneous optimal solution. They also will need to be monitored particularly closely as the study is implemented. If it is discovered that the true value of a sensitive parameter differs from its estimated value in the model, this immediately signals a need to change the solution.

How are the sensitive parameters identified?
In the case of the bi, you have just seen that this information is given by the shadow prices provided by the simplex method. In particular, if y*i > 0, then the optimal solution changes if bi is changed, so bi is a sensitive parameter. However, y*i = 0 implies that the optimal solution is not sensitive to at least small changes in bi. Consequently, if the value used for bi is an estimate of the amount of the resource that will be available (rather than a managerial decision), then the bi values that need to be monitored more closely are those with positive shadow prices—especially those with large shadow prices.

When there are just two variables, the sensitivity of the various parameters can be analyzed graphically. For example, in Fig. 4.9, c1 = 3 can be changed to any other value from 0 to 7.5 without the optimal solution changing from (2, 6). (The reason is that any value of c1 within this range keeps the slope of Z = c1x1 + 5x2 between the slopes of the lines 2x2 = 12 and 3x1 + 2x2 = 18.) Similarly, if c2 = 5 is the only parameter changed, it can have any value greater than 2 without affecting the optimal solution. Hence, neither c1 nor c2 is a sensitive parameter. (The procedure called Graphical Method and Sensitivity Analysis in IOR Tutorial enables you to perform this kind of graphical analysis very efficiently.)

The easiest way to analyze the sensitivity of each of the aij parameters graphically is to check whether the corresponding constraint is binding at the optimal solution. Because x1 ≤ 4 is not a binding constraint, any sufficiently small change in its coefficients (a11 = 1, a12 = 0) is not going to change the optimal solution, so these are not sensitive parameters. On the other hand, both 2x2 ≤ 12 and 3x1 + 2x2 ≤ 18 are binding constraints,

²⁰ If the unit profits do not include the costs of the resources consumed, then y*i represents the maximum total unit price that would be worth paying to increase bi.
■ FIGURE 4.9 This graph demonstrates the sensitivity analysis of c1 and c2 for the Wyndor Glass Co. problem. Starting with the original objective function line [where c1 = 3, c2 = 5, and the optimal solution is (2, 6)], the other two lines show the extremes of how much the slope of the objective function line can change and still retain (2, 6) as an optimal solution. Thus, with c2 = 5, the allowable range for c1 is 0 ≤ c1 ≤ 7.5. With c1 = 3, the allowable range for c2 is c2 ≥ 2. [The three objective function lines shown are Z = 36 = 3x1 + 5x2, Z = 45 = 7.5x1 + 5x2 (or Z = 18 = 3x1 + 2x2), and Z = 30 = 0x1 + 5x2.]

so changing any one of their coefficients (a21 = 0, a22 = 2, a31 = 3, a32 = 2) is going to change the optimal solution, and therefore these are sensitive parameters.

Typically, greater attention is given to performing sensitivity analysis on the bi and cj parameters than on the aij parameters. On real problems with hundreds or thousands of constraints and variables, the effect of changing one aij value is usually negligible, but changing one bi or cj value can have a real impact. Furthermore, in many cases, the aij values are determined by the technology being used (the aij values are sometimes called technological coefficients), so there may be relatively little (or no) uncertainty about their final values. This is fortunate, because there are far more aij parameters than bi and cj parameters for large problems.

For problems with more than two (or possibly three) decision variables, you cannot analyze the sensitivity of the parameters graphically as was just done for the Wyndor Glass Co. problem. However, you can extract the same kind of information from the simplex method. Getting this information requires using the fundamental insight described in Sec. 5.3 to deduce the changes that get carried along to the final simplex tableau as a result of changing the value of a parameter in the original model.
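The allowable range 0 ≤ c1 ≤ 7.5 found graphically can also be checked by brute force: re-solve the model over a sweep of c1 values and observe where the optimal solution moves away from (2, 6). This is only an illustrative check with SciPy's linprog, not how a solver actually derives the range:

```python
from scipy.optimize import linprog

A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

def optimum(c1):
    """Optimal (x1, x2) for the Wyndor model with objective c1*x1 + 5*x2."""
    res = linprog([-c1, -5], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    return res.x

# Inside the allowable range, (2, 6) stays optimal.
inside = [optimum(c1) for c1 in (0.5, 3.0, 7.0)]
# Outside it, the objective line is steeper than 3x1 + 2x2 = 18 and the
# optimal solution jumps to the CPF solution (4, 3).
outside = optimum(8.0)
```

The same sweep with c2 (holding c1 = 3) would confirm that any c2 > 2 leaves (2, 6) optimal.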
The rest of the procedure is described and illustrated in Secs. 7.1 and 7.2.

Using Excel to Generate Sensitivity Analysis Information

Sensitivity analysis normally is incorporated into software packages based on the simplex method. For example, when using an Excel spreadsheet to formulate and solve a linear programming model, Solver will generate sensitivity analysis information upon request. (The exact same information also is generated by ASPE's Solver.) As was shown in Fig. 3.21, when Solver gives the message that it has found a solution, it also gives on the right a list of three reports that can be provided. By selecting the second one (labeled "Sensitivity") after solving the Wyndor Glass Co. problem, you will obtain the sensitivity report shown in Fig. 4.10. The upper table in this report provides sensitivity analysis information about the decision variables and their coefficients in the objective function. The lower table does the same for the functional constraints and their right-hand sides.

■ FIGURE 4.10 The sensitivity report provided by Solver for the Wyndor Glass Co. problem.

Variable Cells

    Cell    Name                       Final   Reduced   Objective     Allowable   Allowable
                                       Value   Cost      Coefficient   Increase    Decrease
    $C$12   Batches Produced Doors       2       0           3           4.5          3
    $D$12   Batches Produced Windows     6       0           5          1E+30         3

Constraints

    Cell    Name            Final   Shadow   Constraint   Allowable   Allowable
                            Value   Price    R.H. Side    Increase    Decrease
    $E$7    Plant 1 Used      2       0          4         1E+30          2
    $E$8    Plant 2 Used     12      1.5        12           6            6
    $E$9    Plant 3 Used     18       1         18           6            6

Look first at the upper table in this figure. The "Final Value" column indicates the optimal solution. The next column gives the reduced costs. (We will not discuss these reduced costs now because the information they provide can also be gleaned from the rest of the upper table.) The next three columns provide the information needed to identify the allowable range for each coefficient cj in the objective function.
For any cj, its allowable range is the range of values for this coefficient over which the current optimal solution remains optimal, assuming no change in the other coefficients. The "Objective Coefficient" column gives the current value of each coefficient in units of thousands of dollars, and then the next two columns give the allowable increase and the allowable decrease from this value to remain within the allowable range. Therefore,

    3 − 3 ≤ c1 ≤ 3 + 4.5,  so  0 ≤ c1 ≤ 7.5

is the allowable range for c1 over which the current optimal solution will stay optimal (assuming c2 = 5), just as was found graphically in Fig. 4.9. Similarly, since Excel uses 1E+30 (10³⁰) to represent infinity,

    5 − 3 ≤ c2 ≤ 5 + ∞,  so  2 ≤ c2

is the allowable range for c2.

The fact that both the allowable increase and the allowable decrease are greater than zero for the coefficient of both decision variables provides another useful piece of information, as described below.

When the upper table in the sensitivity report generated by the Excel Solver indicates that both the allowable increase and the allowable decrease are greater than zero for every objective coefficient, this is a signpost that the optimal solution in the "Final Value" column is the only optimal solution. Conversely, having any allowable increase or allowable decrease equal to zero is a signpost that there are multiple optimal solutions. Changing the corresponding coefficient a tiny amount beyond the zero allowed and re-solving provides another optimal CPF solution for the original model.

Now consider the lower table in Fig. 4.10 that focuses on sensitivity analysis for the three functional constraints. The "Final Value" column gives the value of each constraint's left-hand side for the optimal solution. The next two columns give the shadow price and the current value of the right-hand side (bi) for each constraint.
When just one bi value is then changed, the last two columns give the allowable increase or allowable decrease in order to remain within its allowable range.

For any bi, its allowable range is the range of values for this right-hand side over which the current optimal BF solution (with adjusted values²¹ for the basic variables) remains feasible, assuming no change in the other right-hand sides. A key property of this range of values is that the current shadow price for bi remains valid for evaluating the effect on Z of changing bi only as long as bi remains within this allowable range.

Thus, using the lower table in Fig. 4.10, combining the last two columns with the current values of the right-hand sides gives the following allowable ranges:

    2 ≤ b1
    6 ≤ b2 ≤ 18
    12 ≤ b3 ≤ 24.

This sensitivity report generated by Solver is typical of the sensitivity analysis information provided by linear programming software packages. You will see in Appendix 4.1 that LINDO and LINGO provide essentially the same report. MPL/Solvers does also when it is requested with the Solution File dialog box. Once again, this information obtained algebraically also can be derived from graphical analysis for this two-variable problem. (See Prob. 4.7-1.) For example, when b2 is increased from 12 in Fig. 4.8, the originally optimal CPF solution at the intersection of two constraint boundaries 2x2 = b2 and 3x1 + 2x2 = 18 will remain feasible (including x1 ≥ 0) only for b2 ≤ 18.

The Solved Examples section of the book's website includes another example of applying sensitivity analysis (using both graphical analysis and the sensitivity report). Sections 7.1–7.3 also will delve into this type of analysis more deeply.

Parametric Linear Programming

Sensitivity analysis involves changing one parameter at a time in the original model to check its effect on the optimal solution.
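For instance, the allowable range 6 ≤ b2 ≤ 18 reported for b2 can be checked by re-solving the model for a sweep of b2 values: within the range, the optimal Z grows linearly with slope equal to the shadow price 3/2, and beyond b2 = 18 it stops growing. A brute-force numerical check with SciPy's linprog (illustrative only; the constant 18 in the linear formula below is just the value of Z at b2 = 12 minus 1.5 × 12):

```python
from scipy.optimize import linprog

def Z(b2):
    """Optimal objective value of the Wyndor model with b2 as given."""
    res = linprog([-3, -5], A_ub=[[1, 0], [0, 2], [3, 2]],
                  b_ub=[4, b2, 18], bounds=[(0, None)] * 2, method="highs")
    return -res.fun

# Inside the allowable range [6, 18]: Z = 18 + 1.5 * b2, so the slope
# (the shadow price) is exactly 3/2 throughout.
inside_ok = all(abs(Z(b2) - (18 + 1.5 * b2)) < 1e-8 for b2 in (6, 9, 12, 15, 18))
# Beyond b2 = 18 the constraint 3x1 + 2x2 <= 18 takes over and Z flattens out.
flat = abs(Z(20) - Z(18)) < 1e-8
```

This one-parameter sweep is exactly the kind of systematic variation that parametric linear programming, discussed next, generalizes to many parameters changing simultaneously.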
By contrast, parametric linear programming (or parametric programming for short) involves the systematic study of how the optimal solution changes as many of the parameters change simultaneously over some range. This study can provide a very useful extension of sensitivity analysis, e.g., to check the effect of "correlated" parameters that change together due to exogenous factors such as the state of the economy. However, a more important application is the investigation of trade-offs in parameter values. For example, if the cj values represent the unit profits of the respective activities, it may be possible to increase some of the cj values at the expense of decreasing others by an appropriate shifting of personnel and equipment among activities. Similarly, if the bi values represent the amounts of the respective resources being made available, it may be possible to increase some of the bi values by agreeing to accept decreases in some of the others.

In some applications, the main purpose of the study is to determine the most appropriate trade-off between two basic factors, such as costs and benefits. The usual approach is to express one of these factors in the objective function (e.g., minimize total cost) and incorporate the other into the constraints (e.g., benefits ≥ minimum acceptable level), as was done for the Nori & Leets Co.

²¹ Since the values of the basic variables are obtained as the simultaneous solution of a system of equations (the functional constraints in augmented form), at least some of these values change if one of the right-hand sides changes. However, the adjusted values of the current set of basic variables still will satisfy the nonnegativity constraints, and so still will be feasible, as long as the new value of this right-hand side remains within its allowable range. If the adjusted basic solution is still feasible, it also will still be optimal. We shall elaborate further in Sec. 7.2.
air pollution problem in Sec. 3.4. Parametric linear programming then enables systematic investigation of what happens when the initial tentative decision on the trade-off (e.g., the minimum acceptable level for the benefits) is changed by improving one factor at the expense of the other.

The algorithmic technique for parametric linear programming is a natural extension of that for sensitivity analysis, so it, too, is based on the simplex method. The procedure is described in Sec. 8.2.

■ 4.8 COMPUTER IMPLEMENTATION

If the electronic computer had never been invented, you probably would never have heard of linear programming and the simplex method. Even though it is possible to apply the simplex method by hand (perhaps with the aid of a calculator) to solve tiny linear programming problems, the calculations involved are far too tedious to carry out on a routine basis. However, the simplex method is ideally suited for execution on a computer. It is the computer revolution that has made possible the widespread application of linear programming in recent decades.

Implementation of the Simplex Method

Computer codes for the simplex method now are widely available for essentially all modern computer systems. These codes commonly are part of a sophisticated software package for mathematical programming that includes many of the procedures described in subsequent chapters (including those used for postoptimality analysis).

These production computer codes do not closely follow either the algebraic form or the tabular form of the simplex method presented in Secs. 4.3 and 4.4. These forms can be streamlined considerably for computer implementation. Therefore, the codes use instead a matrix form (usually called the revised simplex method) that is especially well suited for the computer.
This form accomplishes exactly the same things as the algebraic or tabular form, but it does so while computing and storing only the numbers that are actually needed for the current iteration; and then it carries along the essential data in a more compact form. The revised simplex method is described in Secs. 5.2 and 5.4.

The simplex method is used routinely to solve surprisingly large linear programming problems. For example, powerful desktop computers (including workstations) commonly are used to solve problems with hundreds of thousands, or even millions, of functional constraints and a larger number of decision variables. Occasionally, successfully solved problems have even tens of millions of functional constraints and decision variables.²² For certain special types of linear programming problems (such as the transportation, assignment, and minimum cost flow problems to be described later in the book), even larger problems now can be solved by specialized versions of the simplex method.

Several factors affect how long it will take to solve a linear programming problem by the general simplex method. The most important one is the number of ordinary functional constraints. In fact, computation time tends to be roughly proportional to the cube of this number, so that doubling this number may multiply the computation time by a factor of approximately 8.

²² Do not try this at home. Attacking such a massive problem requires an especially sophisticated linear programming system that uses the latest techniques for exploiting sparsity in the coefficient matrix as well as other special techniques (e.g., crashing techniques for quickly finding an advanced initial BF solution). When problems are re-solved periodically after minor updating of the data, much time often is saved by using (or modifying) the last optimal solution to provide the initial BF solution for the new run.
By contrast, the number of variables is a relatively minor factor.²³ Thus, doubling the number of variables probably will not even double the computation time. A third factor of some importance is the density of the table of constraint coefficients (i.e., the proportion of the coefficients that are not zero), because this affects the computation time per iteration. (For large problems encountered in practice, it is common for the density to be under 5 percent, or even under 1 percent, and this much "sparsity" tends to greatly accelerate the simplex method.) One common rule of thumb for the number of iterations is that it tends to be roughly twice the number of functional constraints.

With large linear programming problems, it is inevitable that some mistakes and faulty decisions will be made initially in formulating the model and inputting it into the computer. Therefore, as discussed in Sec. 2.4, a thorough process of testing and refining the model (model validation) is needed. The usual end product is not a single static model that is solved once by the simplex method. Instead, the OR team and management typically consider a long series of variations on a basic model (sometimes even thousands of variations) to examine different scenarios as part of postoptimality analysis. This entire process is greatly accelerated when it can be carried out interactively on a desktop computer. And, with the help of both mathematical programming modeling languages and improving computer technology, this now is becoming common practice.

Until the mid-1980s, linear programming problems were solved almost exclusively on mainframe computers. Since then, there has been an explosion in the capability of doing linear programming on desktop computers, including personal computers as well as workstations. Workstations, including some with parallel processing capabilities, now are commonly used instead of mainframe computers to solve massive linear programming models.
The fastest personal computers are not lagging far behind, although solving huge models usually requires additional memory. Even laptop computers now can solve fairly large linear programming problems.

²³ This statement assumes that the revised simplex method described in Secs. 5.2 and 5.4 is being used.

Linear Programming Software Featured in This Book

As described in Sec. 3.6, the student version of MPL in your OR Courseware provides a student-friendly modeling language for efficiently formulating large linear programming models (and related models) in a compact way. MPL also provides some elite solvers for solving these models amazingly quickly. The student version of MPL in your OR Courseware includes the student version of four of these solvers—CPLEX, GUROBI, CoinMP, and SULUM. The professional version of MPL frequently is used to solve huge linear programming models with many thousands (or possibly even millions) of functional constraints and decision variables. An MPL tutorial and numerous MPL examples are provided on this book's website.

LINDO (short for Linear, Interactive, and Discrete Optimizer) has a very long history in the realm of applications of linear programming and its extensions. The easy-to-use LINDO interface is available as a subset of the LINGO optimization modeling package from LINDO Systems, www.lindo.com. The long-time popularity of LINDO is partially due to its ease of use. For "textbook-sized" problems, the model can be entered and solved in an intuitive, straightforward manner, so the LINDO interface provides a convenient tool for students to use. Although easy to use for small models, the professional version of LINDO/LINGO can also solve huge models with many thousands (or possibly even millions) of functional constraints and decision variables. The OR Courseware provided on this book's website contains a student version of LINDO/LINGO, accompanied by an extensive tutorial. Appendix 4.1 provides a
quick introduction. Additionally, the software contains extensive online help. The OR Courseware also contains LINGO/LINDO formulations for the major examples used in the book.

Spreadsheet-based solvers are becoming increasingly popular for linear programming and its extensions. Leading the way is the basic Solver produced by Frontline Systems for Microsoft Excel. In addition to Solver, Frontline Systems also has developed more powerful Premium Solver products, including the very versatile Analytic Solver Platform for Education (ASPE) that is included in your OR Courseware. (ASPE has strong capabilities for solving many types of OR problems in addition to linear programming.) Because of the widespread use of spreadsheet packages such as Microsoft Excel today, these solvers are introducing large numbers of people to the potential of linear programming for the first time. For textbook-sized linear programming problems (and considerably larger problems as well), spreadsheets provide a convenient way to formulate and solve the model, as described in Sec. 3.5. The more powerful spreadsheet solvers can solve fairly large models with many thousands of decision variables. However, when the spreadsheet grows to an unwieldy size, a good modeling language and its solver may provide a more efficient approach to formulating and solving the model.

Spreadsheets provide an excellent communication tool, especially when dealing with typical managers who are very comfortable with this format but not with the algebraic formulations of OR models. Therefore, optimization software packages and modeling languages now can commonly import and export data and results in a spreadsheet format. For example, the MPL modeling language includes an enhancement (called the OptiMax Component Library) that enables the modeler to create the feel of a spreadsheet model for the user of the model while still using MPL to formulate the model very efficiently.
All the software, tutorials, and examples packed on the book's website provide you with several attractive software options for linear programming (as well as some other areas of operations research).

Available Software Options for Linear Programming

1. Demonstration examples (in OR Tutor) and both interactive and automatic procedures in IOR Tutorial for efficiently learning the simplex method.
2. Excel and its Solver for formulating and solving linear programming models in a spreadsheet format.
3. Analytic Solver Platform for Education (ASPE) for greatly extending the functionality of Excel's Solver.
4. A student version of MPL and its solvers—CPLEX, GUROBI, CoinMP, and SULUM—for efficiently formulating and solving large linear programming models.
5. A student version of LINGO and its solver (shared with LINDO) for an alternative way of efficiently formulating and solving large linear programming models.

Your instructor may specify which software to use. Whatever the choice, you will be gaining experience with the kind of state-of-the-art software that is used by OR professionals.

■ 4.9 THE INTERIOR-POINT APPROACH TO SOLVING LINEAR PROGRAMMING PROBLEMS

The most dramatic new development in operations research during the 1980s was the discovery of the interior-point approach to solving linear programming problems. This discovery was made in 1984 by a young mathematician at AT&T Bell Laboratories, Narendra Karmarkar, when he successfully developed a new algorithm for linear programming with this kind of approach. Although this particular algorithm experienced only mixed success in competing with the simplex method, the key solution concept described below appeared to have great potential for solving huge linear programming problems that might be beyond the reach of the simplex method. Many top researchers subsequently worked on modifying Karmarkar's algorithm to fully tap this potential.
Much progress was made (and continues to be made), and a number of powerful algorithms using the interior-point approach have been developed. Today, the more powerful software packages that are designed for solving really large linear programming problems include at least one algorithm using the interior-point approach along with the simplex method and its variants. As research continues on these algorithms, their computer implementations continue to improve. This has spurred renewed research on the simplex method, and its computer implementations continue to improve as well. The competition between the two approaches for supremacy in solving huge problems is continuing. Now let us look at the key idea behind Karmarkar’s algorithm and its subsequent variants that use the interior-point approach. The Key Solution Concept Although radically different from the simplex method, Karmarkar’s algorithm does share a few of the same characteristics. It is an iterative algorithm. It gets started by identifying a feasible trial solution. At each iteration, it moves from the current trial solution to a better trial solution in the feasible region. It then continues this process until it reaches a trial solution that is (essentially) optimal. The big difference lies in the nature of these trial solutions. For the simplex method, the trial solutions are CPF solutions (or BF solutions after augmenting), so all movement is along edges on the boundary of the feasible region. For Karmarkar’s algorithm, the trial solutions are interior points, i.e., points inside the boundary of the feasible region. For this reason, Karmarkar’s algorithm and its variants can be referred to as interior-point algorithms. However, because of an early patent obtained on an early version of an interior-point algorithm, such an algorithm now is commonly referred to as a barrier algorithm (or barrier method). 
The term barrier is used because, from the perspective of a search whose trial solutions are interior points, each constraint boundary is treated as a barrier. However, we will continue to use the more suggestive interior-point algorithm terminology. To illustrate the interior-point approach, Fig. 4.11 shows the path followed by the interior-point algorithm in your OR Courseware when it is applied to the Wyndor Glass Co. problem, starting from the initial trial solution (1, 2). Note how all the trial solutions (dots) shown on this path are inside the boundary of the feasible region as the path approaches the optimal solution (2, 6). (All the subsequent trial solutions not shown also are inside the boundary of the feasible region.) Contrast this path with the path followed by the simplex method around the boundary of the feasible region from (0, 0) to (0, 6) to (2, 6). Table 4.18 shows the actual output from IOR Tutorial for this problem.24 (Try it yourself.) Note how the successive trial solutions keep getting closer and closer to the optimal solution, but never literally get there. However, the deviation becomes so infinitesimally small that the final trial solution can be taken to be the optimal solution for all practical purposes. (The Solved Examples section on the book's website shows the output from IOR Tutorial for another example as well.) Section 8.4 presents the details of the specific interior-point algorithm that is implemented in IOR Tutorial.

24 The procedure is called Solve Automatically by the Interior-Point Algorithm. The option menu provides two choices for a certain parameter α of the algorithm (defined in Sec. 8.4). The choice used here is the default value of α = 0.5.

■ FIGURE 4.11 The curve from (1, 2) to (2, 6) shows a typical path followed by an interior-point algorithm, right through the interior of the feasible region for the Wyndor Glass Co. problem.
[Figure 4.11: the interior path runs from (1, 2) through (1.27, 4), (1.38, 5), and (1.56, 5.5) toward the optimal solution (2, 6), staying inside the feasible region.]

■ TABLE 4.18 Output of interior-point algorithm in OR Courseware for Wyndor Glass Co. problem

Iteration        x1          x2           Z
    0         1           2           13
    1         1.27298     4           23.8189
    2         1.37744     5           29.1323
    3         1.56291     5.5         32.1887
    4         1.80268     5.71816     33.9989
    5         1.92134     5.82908     34.9094
    6         1.96639     5.90595     35.429
    7         1.98385     5.95199     35.7115
    8         1.99197     5.97594     35.8556
    9         1.99599     5.98796     35.9278
   10         1.99799     5.99398     35.9639
   11         1.999       5.99699     35.9819
   12         1.9995      5.9985      35.991
   13         1.99975     5.99925     35.9955
   14         1.99987     5.99962     35.9977
   15         1.99994     5.99981     35.9989

Comparison with the Simplex Method

One meaningful way of comparing interior-point algorithms with the simplex method is to examine their theoretical properties regarding computational complexity. Karmarkar has proved that the original version of his algorithm is a polynomial time algorithm; that is, the time required to solve any linear programming problem can be bounded above by a polynomial function of the size of the problem. Pathological counterexamples have been constructed to demonstrate that the simplex method does not possess this property, so it is an exponential time algorithm (i.e., the required time can be bounded above only by an exponential function of the problem size). This difference in worst-case performance is noteworthy. However, it tells us nothing about their comparison in average performance on real problems, which is the more crucial issue. The two basic factors that determine the performance of an algorithm on a real problem are the average computer time per iteration and the number of iterations. Our next comparisons concern these factors. Interior-point algorithms are far more complicated than the simplex method. Considerably more extensive computations are required for each iteration to find the next trial solution.
Therefore, the computer time per iteration for an interior-point algorithm is many times longer than that for the simplex method. For fairly small problems, the numbers of iterations needed by an interior-point algorithm and by the simplex method tend to be somewhat comparable. For example, on a problem with 10 functional constraints, roughly 20 iterations would be typical for either kind of algorithm. Consequently, on problems of similar size, the total computer time for an interior-point algorithm will tend to be many times longer than that for the simplex method. On the other hand, a key advantage of interior-point algorithms is that large problems do not require many more iterations than small problems. For example, a problem with 10,000 functional constraints probably will require well under 100 iterations. Even considering the very substantial computer time per iteration needed for a problem of this size, such a small number of iterations makes the problem quite tractable. By contrast, the simplex method might need 20,000 iterations and so might require a very large amount of computer time. Therefore, interior-point algorithms might be faster than the simplex method for such very large problems. When advancing to huge problems with hundreds of thousands (or even millions) of functional constraints, interior-point algorithms tend to become the best hope for solving the problem. The reason for this very large difference in the number of iterations on huge problems is the difference in the paths followed. At each iteration, the simplex method moves from the current CPF solution to an adjacent CPF solution along an edge on the boundary of the feasible region. Huge problems have an astronomical number of CPF solutions. 
The path from the initial CPF solution to an optimal solution may be a very circuitous one around the boundary, taking only a small step each time to the next adjacent CPF solution, so a huge number of steps may be required to reach an optimal solution. By contrast, an interior-point algorithm bypasses all this by shooting through the interior of the feasible region toward an optimal solution. Adding more functional constraints adds more constraint boundaries to the feasible region, but has little effect on the number of trial solutions needed on this path through the interior. This frequently makes it possible for interior-point algorithms to solve problems with a huge number of functional constraints. A final key comparison concerns the ability to perform the various kinds of postoptimality analysis described in Sec. 4.7. The simplex method and its extensions are very well suited to, and are widely used for, this kind of analysis. Unfortunately, the interior-point approach has limited capability in this area.25 Given the great importance of postoptimality analysis, this is a key drawback of interior-point algorithms. However, we point out next how the simplex method can be combined with the interior-point approach to overcome this drawback.

25 However, research aimed at increasing this capability has made some progress. For example, see E. A. Yildirim and M. J. Todd: “Sensitivity Analysis in Linear Programming and Semidefinite Programming Using Interior-Point Methods,” Mathematical Programming, Series A, 90(2): 229–261, April 2001.

Combining the Simplex Method with the Interior-Point Approach for Postoptimality Analysis

As just mentioned, a key disadvantage of the interior-point approach is its limited capability for performing postoptimality analysis. To overcome this drawback, researchers have developed procedures for switching over to the simplex method after an interior-point algorithm has finished.
Recall that the trial solutions obtained by an interior-point algorithm keep getting closer and closer to an optimal solution (the best CPF solution), but never quite get there. Therefore, a switching procedure requires identifying a CPF solution (or BF solution after augmenting) that is very close to the final trial solution. For example, by looking at Fig. 4.11, it is easy to see that the final trial solution in Table 4.18 is very near the CPF solution (2, 6). Unfortunately, on problems with thousands of decision variables (so no graph is available), identifying a nearby CPF (or BF) solution is a very challenging and time-consuming task. However, good progress has been made for developing a crossover algorithm for converting the solution obtained by an interior-point algorithm into a BF solution. Once this nearby BF solution has been found, the optimality test for the simplex method is applied to check whether this actually is the optimal BF solution. If it is not optimal, some iterations of the simplex method are conducted to move from this BF solution to an optimal solution. Generally, only a very few iterations (perhaps one) are needed because the interior-point algorithm has brought us so close to an optimal solution. Therefore, these iterations should be done quite quickly, even on problems that are too huge to be solved from scratch. After an optimal solution is actually reached, the simplex method and its variants are applied to help perform postoptimality analysis. ■ 4.10 CONCLUSIONS The simplex method is an efficient and reliable algorithm for solving linear programming problems. It also provides the basis for performing the various parts of postoptimality analysis very efficiently. Although it has a useful geometric interpretation, the simplex method is an algebraic procedure. 
At each iteration, it moves from the current BF solution to a better, adjacent BF solution by choosing both an entering basic variable and a leaving basic variable and then using Gaussian elimination to solve a system of linear equations. When the current solution has no adjacent BF solution that is better, the current solution is optimal and the algorithm stops. We presented the full algebraic form of the simplex method to convey its logic, and then we streamlined the method to a more convenient tabular form. To set up for starting the simplex method, it is sometimes necessary to use artificial variables to obtain an initial BF solution for an artificial problem. If so, either the Big M method or the two-phase method is used to ensure that the simplex method obtains an optimal solution for the real problem. Computer implementations of the simplex method and its variants have become so powerful that they now are frequently used to solve huge linear programming problems. Interior-point algorithms also provide a powerful tool for solving such problems.

■ APPENDIX 4.1 AN INTRODUCTION TO USING LINDO AND LINGO

The LINGO software can accept optimization models in either of two styles or syntax: (a) LINDO syntax or (b) LINGO syntax. We will first describe LINDO syntax. The relative advantages of LINDO syntax are that it is very easy and natural for simple linear and integer programming problems. It has been in wide use since 1981. The LINDO syntax allows you to enter a model in a natural form, essentially as presented in a textbook. For example, here is how the Wyndor Glass Co. example introduced in Sec. 3.1 is entered. Presuming you have installed LINGO, you click on the LINGO icon to start up LINGO and then immediately type the following:

! Wyndor Glass Co. Problem. LINDO model
! X1 = batches of product 1 per week
! X2 = batches of product 2 per week
! Profit, in 1000 of dollars
MAX Profit) 3 X1 + 5 X2
Subject to
! Production time
Plant1) X1 <= 4
Plant2) 2 X2 <= 12
Plant3) 3 X1 + 2 X2 <= 18
END

The first four lines, each starting with an exclamation point at the beginning, are simply comments. The comment on the fourth line further clarifies that the objective function is expressed in units of thousands of dollars. The number 1000 in this comment does not have the usual comma in front of the last three digits because LINDO/LINGO does not accept commas. (LINDO syntax also does not accept parentheses in algebraic expressions.) Lines five onward specify the model. The decision variables can be either lowercase or uppercase. Uppercase usually is used so the variables won’t be dwarfed by the following “subscripts.” Instead of X1 or X2, you may use more suggestive names, such as the name of the product being produced; e.g., DOORS and WINDOWS, to represent the decision variable throughout the model. The fifth line of the LINDO formulation indicates that the objective of the model is to maximize the objective function, 3x1 + 5x2. The word Profit followed by a parenthesis is optional. It clarifies that the quantity being maximized is to be called Profit on the solution report. The comment on the seventh line points out that the following constraints are on the production times being used. The next three lines start by giving a name (again, optional, followed by a parenthesis) for each of the functional constraints. These constraints are written in the usual way except for the inequality signs. Because most keyboards do not include ≤ and ≥ signs, LINDO interprets either < or <= as ≤ and either > or >= as ≥. (On keyboards that include ≤ and ≥ signs, LINDO will not recognize them.) The end of the constraints is signified by the word END. No nonnegativity constraints are stated because LINDO automatically assumes that all variables are ≥ 0.
If, say, x1 had not had a nonnegativity constraint, this would be indicated by typing FREE X1 on the next line below END. To solve this model in LINGO/LINDO, click on the red Bull’s Eye solve button at the top of the LINGO window. Figure A4.1 shows the resulting “solution report.” The top lines indicate that the best overall, or “global,” solution has been found, with an objective function value of 36, in two iterations. Next come the values for x1 and x2 for the optimal solution.

■ FIGURE A4.1 The solution report provided by LINDO syntax for the Wyndor Glass Co. problem.

Global optimal solution found.
Objective value:              36.00000
Total solver iterations:             2

Variable        Value       Reduced Cost
X1            2.000000        0.000000
X2            6.000000        0.000000

Row       Slack or Surplus    Dual Price
PROFIT        36.00000        1.000000
PLANT1         2.000000       0.000000
PLANT2         0.000000       1.500000
PLANT3         0.000000       1.000000

The column to the right of the Value column gives the reduced costs. We have not discussed reduced costs in this chapter because the information they provide can also be gleaned from the allowable range for the coefficients in the objective function. These allowable ranges are readily available (as you will see in the next figure). When the variable is a basic variable in the optimal solution (as for both variables in the Wyndor problem), its reduced cost automatically is 0. When the variable is a nonbasic variable, its reduced cost provides some interesting information. A variable whose objective coefficient is “too small” in a maximizing model or “too large” in a minimizing model will have a value of 0 in an optimal solution. The reduced cost indicates how much this coefficient needs to be increased (when maximizing) or decreased (when minimizing) before the optimal solution would change and this variable would become a basic variable.
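The Dual Price column in this report can be checked numerically against the definition of a shadow price in Sec. 4.7: increase one right-hand side by 1 (staying within its allowable range) and observe the change in Z. A small sketch of that check (ours, using SciPy rather than LINDO):

```python
from scipy.optimize import linprog

def wyndor_opt(b):
    """Optimal Z for the Wyndor model with right-hand sides b."""
    res = linprog([-3, -5], A_ub=[[1, 0], [0, 2], [3, 2]], b_ub=b,
                  bounds=[(0, None), (0, None)])
    return -res.fun

base = wyndor_opt([4, 12, 18])           # Z = 36
for i, plant in enumerate(["PLANT1", "PLANT2", "PLANT3"]):
    b = [4, 12, 18]
    b[i] += 1                            # perturb one right-hand side
    print(plant, wyndor_opt(b) - base)   # dual prices 0, 1.5, and 1
```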
However, recall that this same information already is available from the allowable range for the coefficient of this variable in the objective function. The reduced cost (for a nonbasic variable) is just the allowable increase (when maximizing) from the current value of this coefficient to remain within its allowable range or the allowable decrease (when minimizing). The bottom portion of Fig. A4.1 provides information about the three functional constraints. The Slack or Surplus column gives the difference between the two sides of each constraint. The Dual Price column gives, by another name, the shadow prices discussed in Sec. 4.7 for these constraints. (This alternate name comes from the fact found in Sec. 6.1 that these shadow prices are just the optimal values of the dual variables introduced in Chap. 6.) Be aware, however, that LINDO uses a different sign convention from the common one adopted elsewhere in this text (see footnote 19 regarding the definition of shadow price in Sec. 4.7). In particular, for minimization problems, LINGO/LINDO shadow prices (dual prices) are the negative of ours. After LINDO provides you with the solution report, you also have the option to do range (sensitivity) analysis. Fig. A4.2 shows the range report, which is generated by clicking on: LINGO | Range. Except for using units of thousands of dollars instead of dollars for the coefficients in the objective function, this report is identical to the last three columns of the table in the sensitivity report generated by Solver, as shown earlier in Fig. 4.10. Thus, as already discussed in Sec.
4.7, the first two rows of numbers in this range report indicate that the allowable range for each coefficient in the objective function (assuming no other change in the model) is

0 ≤ c1 ≤ 7.5
2 ≤ c2

Similarly, the last three rows indicate that the allowable range for each right-hand side (assuming no other change in the model) is

2 ≤ b1
6 ≤ b2 ≤ 18
12 ≤ b3 ≤ 24

You can print the results in standard Windows fashion by clicking on Files | Print.

■ FIGURE A4.2 Range report provided by LINDO for the Wyndor Glass Co. problem.

Ranges in which the basis is unchanged:

              Objective Coefficient Ranges
Variable   Current Coefficient   Allowable Increase   Allowable Decrease
X1              3.000000             4.500000             3.000000
X2              5.000000             INFINITY             3.000000

              Righthand Side Ranges
Row        Current RHS    Allowable Increase   Allowable Decrease
PLANT1       4.000000        INFINITY             2.000000
PLANT2      12.000000        6.000000             6.000000
PLANT3      18.000000        6.000000             6.000000

These are the basics for getting started with LINGO/LINDO. You can turn on or turn off the generation of reports. For example, if the automatic generation of the standard solution report has been turned off (Terse mode), you can turn it back on by clicking on: LINGO | Options | Interface | Output level | Verbose | Apply. The ability to generate range reports can be turned on or off by clicking on: LINGO | Options | General solver | Dual computations | Prices & Ranges | Apply. The second input style that LINGO supports is LINGO syntax. LINGO syntax is dramatically more powerful than LINDO syntax.
The advantages to using LINGO syntax are: (a) it allows arbitrary mathematical expressions, including parentheses and all familiar mathematical operators such as division, multiplication, log, sin, etc., (b) the ability to solve not just linear programming problems but also nonlinear programming problems, (c) scalability to large applications using subscripted variables and sets, (d) the ability to read input data from a spreadsheet or database and send solution information back into a spreadsheet or database, (e) the ability to naturally represent sparse relationships, (f) programming ability so that you can solve a series of models automatically as when doing parametric analysis, (g) the ability to quickly formulate and solve both chance-constrained programming problems (described in Sec. 7.5) and stochastic programming problems (described in Sec. 7.6). A formulation of the Wyndor problem in LINGO, using the subscript/sets feature, is:

! Wyndor Glass Co. Problem;
SETS:
PRODUCT: PPB, X; ! Each product has a profit/batch and amount;
RESOURCE: HOURSAVAILABLE; ! Each resource has a capacity;
! Each resource product combination has an hours/batch;
RXP(RESOURCE,PRODUCT): HPB;
ENDSETS
DATA:
PRODUCT = DOORS WINDOWS; ! The products;
PPB = 3 5; ! Profit per batch;
RESOURCE = PLANT1 PLANT2 PLANT3;
HOURSAVAILABLE = 4 12 18;
HPB = 1 0
      0 2
      3 2; ! Hours per batch;
ENDDATA
! Sum over all products j the profit per batch times batches produced;
MAX = @SUM( PRODUCT(j): PPB(j)*X(j));
@FOR( RESOURCE(i): ! For each resource i...;
! Sum over all products j of hours per batch times batches produced...;
@SUM( RXP(i,j): HPB(i,j)*X(j)) <= HOURSAVAILABLE(i);
);

The original Wyndor problem has two products and three resources. If Wyndor expands to having four products and five resources, it is a trivial change to insert the appropriate new data into the DATA section. The formulation of the model adjusts automatically.
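The same separation of model structure from data can be mimicked in a general-purpose language. In this sketch (ours, not LINGO), the Wyndor model is assembled from data tables, so adding a product or a resource only requires editing the dictionaries, just as only the DATA section changes in LINGO:

```python
from scipy.optimize import linprog

# Data section (mirrors LINGO's DATA block).
profit_per_batch = {"DOORS": 3, "WINDOWS": 5}
hours_available = {"PLANT1": 4, "PLANT2": 12, "PLANT3": 18}
hours_per_batch = {("PLANT1", "DOORS"): 1, ("PLANT1", "WINDOWS"): 0,
                   ("PLANT2", "DOORS"): 0, ("PLANT2", "WINDOWS"): 2,
                   ("PLANT3", "DOORS"): 3, ("PLANT3", "WINDOWS"): 2}

products = list(profit_per_batch)
resources = list(hours_available)

# Model section: built entirely from the data tables above.
c = [-profit_per_batch[p] for p in products]              # maximize => negate
A = [[hours_per_batch[r, p] for p in products] for r in resources]
b = [hours_available[r] for r in resources]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(products))
print(dict(zip(products, res.x)), -res.fun)   # DOORS: 2, WINDOWS: 6, Z = 36
```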
The subscript/sets capability also allows one to naturally represent three-dimensional or higher models. The large problem described in Sec. 3.6 has five dimensions: plants, machines, products, regions/customers, and time periods. This would be hard to fit into a two-dimensional spreadsheet but is easy to represent in a modeling language with sets and subscripts. In practice, for problems like that in Sec. 3.6, many of the 10(10)(10)(10)(10) = 100,000 possible combinations of relationships do not exist; e.g., not all plants can make all products, and not all customers demand all products. The subscript/sets capability in modeling languages makes it easy to represent such sparse relationships. For most models that you enter, LINGO will be able to detect automatically whether you are using LINDO syntax or LINGO syntax. You may choose your default syntax by clicking on: LINGO | Options | Interface | File format | lng (for LINGO) or ltx (for LINDO). LINGO includes an extensive online Help menu to give more details and examples. Supplements 1 and 2 to Chapter 3 (shown on the book’s website) provide a relatively complete introduction to LINGO. The LINGO tutorial on the website also provides additional details. The LINGO/LINDO files on the website for various chapters show LINDO/LINGO formulations for numerous examples from most of the chapters.

■ SELECTED REFERENCES

1. Dantzig, G. B., and M. N. Thapa: Linear Programming 1: Introduction, Springer, New York, 1997.
2. Denardo, E. V.: Linear Programming and Generalizations: A Problem-based Introduction with Spreadsheets, Springer, New York, 2011.
3. Fourer, R.: “Software Survey: Linear Programming,” OR/MS Today, June 2011, pp. 60–69.
4. Luenberger, D., and Y. Ye: Linear and Nonlinear Programming, 3rd ed., Springer, New York, 2008.
5. Maros, I.: Computational Techniques of the Simplex Method, Kluwer Academic Publishers (now Springer), Boston, MA, 2003.
6.
Schrage, L.: Optimization Modeling with LINGO, LINDO Systems, Chicago, 2008.
7. Tretkoff, C., and I. Lustig: “New Age of Optimization Applications,” OR/MS Today, December 2006, pp. 46–49.
8. Vanderbei, R. J.: Linear Programming: Foundations and Extensions, 4th ed., Springer, New York, 2014.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 4

Demonstration Examples in OR Tutor:
Interpretation of the Slack Variables
Simplex Method—Algebraic Form
Simplex Method—Tabular Form

Interactive Procedures in IOR Tutorial:
Enter or Revise a General Linear Programming Model
Set Up for the Simplex Method—Interactive Only
Solve Interactively by the Simplex Method
Interactive Graphical Method

Automatic Procedures in IOR Tutorial:
Solve Automatically by the Simplex Method
Solve Automatically by the Interior-Point Algorithm
Graphical Method and Sensitivity Analysis

An Excel Add-In:
Analytic Solver Platform for Education (ASPE)

Files (Chapter 3) for Solving the Wyndor and Radiation Therapy Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 4

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The corresponding demonstration example listed on the preceding page may be helpful.
I: We suggest that you use the corresponding interactive procedure listed on the preceding page (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem automatically. (See Sec. 4.8 for a listing of the options featured in this book and on the book’s website.)
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

4.1-1. Consider the following problem.

Maximize Z = x1 + 2x2,

subject to

x2 ≤ 2
x1 + x2 ≤ 3

and x1 ≥ 0, x2 ≥ 0.
(a) Plot the feasible region and circle all the CPF solutions.
(b) For each CPF solution, identify the pair of constraint boundary equations that it satisfies.
(c) For each CPF solution, use this pair of constraint boundary equations to solve algebraically for the values of x1 and x2 at the corner point.
(d) For each CPF solution, identify its adjacent CPF solutions.
(e) For each pair of adjacent CPF solutions, identify the constraint boundary they share by giving its equation.

4.1-2. Consider the following problem.

Maximize Z = 3x1 + 2x2,

subject to

2x1 + x2 ≤ 6
x1 + 2x2 ≤ 6

and x1 ≥ 0, x2 ≥ 0.
(a) Use the graphical method to solve this problem. Circle all the corner points on the graph.
(b) For each CPF solution, identify the pair of constraint boundary equations it satisfies.
(c) For each CPF solution, identify its adjacent CPF solutions.
(d) Calculate Z for each CPF solution. Use this information to identify an optimal solution.
(e) Describe graphically what the simplex method does step by step to solve the problem.

D,I 4.1-3. A certain linear programming model involving two activities has the feasible region shown below.

[Figure: feasible region with corner points (0, 6 2/3), (5, 5), (6, 4), (8, 0), and the origin; the axes are labeled Level of Activity 1 and Level of Activity 2.]

The objective is to maximize the total profit from the two activities. The unit profit for activity 1 is $1,000 and the unit profit for activity 2 is $2,000.
(a) Calculate the total profit for each CPF solution. Use this information to find an optimal solution.
(b) Use the solution concepts of the simplex method given in Sec. 4.1 to identify the sequence of CPF solutions that would be examined by the simplex method to reach an optimal solution.

4.1-4.* Consider the linear programming model (given in the back of the book) that was formulated for Prob. 3.2-3.
(a) Use graphical analysis to identify all the corner-point solutions for this model.
Label each as either feasible or infeasible.
(b) Calculate the value of the objective function for each of the CPF solutions. Use this information to identify an optimal solution.
(c) Use the solution concepts of the simplex method given in Sec. 4.1 to identify which sequence of CPF solutions might be examined by the simplex method to reach an optimal solution. (Hint: There are two alternative sequences to be identified for this particular model.)

4.1-5. Repeat Prob. 4.1-4 for the following problem.

Maximize Z = x1 + 2x2,

subject to

x1 + 3x2 ≤ 8
x1 + x2 ≤ 4

and x1 ≥ 0, x2 ≥ 0.

4.1-6. Describe graphically what the simplex method does step by step to solve the following problem.

Maximize Z = 2x1 + 3x2,

subject to

−3x1 + x2 ≤ 1
4x1 + 2x2 ≤ 20
4x1 − x2 ≤ 10
x1 − 2x2 ≤ 5

and x1 ≥ 0, x2 ≥ 0.

4.1-7. Describe graphically what the simplex method does step by step to solve the following problem.

Minimize Z = 5x1 + 7x2,

subject to

2x1 + 3x2 ≥ 42
3x1 + 4x2 ≥ 60
x1 + x2 ≥ 18

and x1 ≥ 0, x2 ≥ 0.

4.1-8. Label each of the following statements about linear programming problems as true or false, and then justify your answer.
(a) For minimization problems, if the objective function evaluated at a CPF solution is no larger than its value at every adjacent CPF solution, then that solution is optimal.
(b) Only CPF solutions can be optimal, so the number of optimal solutions cannot exceed the number of CPF solutions.
(c) If multiple optimal solutions exist, then an optimal CPF solution may have an adjacent CPF solution that also is optimal (the same value of Z).

4.1-9. The following statements give inaccurate paraphrases of the six solution concepts presented in Sec. 4.1. In each case, explain what is wrong with the statement.
(a) The best CPF solution always is an optimal solution.
(b) An iteration of the simplex method checks whether the current CPF solution is optimal and, if not, moves to a new CPF solution.
(c) Although any CPF solution can be chosen to be the initial CPF solution, the simplex method always chooses the origin. (d) When the simplex method is ready to choose a new CPF solution to move to from the current CPF solution, it only considers adjacent CPF solutions because one of them is likely to be an optimal solution. (e) To choose the new CPF solution to move to from the current CPF solution, the simplex method identifies all the adjacent CPF solutions and determines which one gives the largest rate of improvement in the value of the objective function. 4.2-1. Reconsider the model in Prob. 4.1-4. (a) Introduce slack variables in order to write the functional constraints in augmented form. (b) For each CPF solution, identify the corresponding BF solution by calculating the values of the slack variables. For each BF solution, use the values of the variables to identify the nonbasic variables and the basic variables. (c) For each BF solution, demonstrate (by plugging in the solution) that, after the nonbasic variables are set equal to zero, this BF solution also is the simultaneous solution of the system of equations obtained in part (a). 4.2-2. Reconsider the model in Prob. 4.1-5. Follow the instructions of Prob. 4.2-1 for parts (a), (b), and (c). (d) Repeat part (b) for the corner-point infeasible solutions and the corresponding basic infeasible solutions. (e) Repeat part (c) for the basic infeasible solutions. 4.3-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 4.3. Briefly describe the application of the simplex method in this study. Then list the various financial and nonfinancial benefits that resulted from this study. 4.3-2. Work through the simplex method (in algebraic form) step by step to solve the model in Prob. 4.1-4. D,I 4.3-3. Reconsider the model in Prob. 4.1-5. (a) Work through the simplex method (in algebraic form) by hand to solve this model. 
154 D,I C CHAPTER 4 SOLVING LINEAR PROGRAMMING PROBLEMS: THE SIMPLEX METHOD (b) Repeat part (a) with the corresponding interactive routine in your IOR Tutorial. (c) Verify the optimal solution you obtained by using a software package based on the simplex method. 4.3-4.* Work through the simplex method (in algebraic form) step by step to solve the following problem. D,I Maximize Z  4x1  3x2  6x3, subject to 3x1  x2  3x3  30 2x1  2x2  3x3  40 and x1  0, x2  0, x3  0. 4.3-5. Work through the simplex method (in algebraic form) step by step to solve the following problem. D,I Maximize Z  x1  2x2  4x3, subject to 3x1  x2  5x3  10 x1  4x2  x3  8 2x1  4x2  2x3  7 and x1  0, x2  0, x3  0. 4.3-6. Consider the following problem. Maximize Z  5x1  3x2  4x3, x2  0, x3  0. You are given the information that x1  0, x2  0, and x3  0 in the optimal solution. (a) Describe how you can use this information to adapt the simplex method to solve this problem in the minimum possible number of iterations (when you start from the usual initial BF solution). Do not actually perform any iterations. (b) Use the procedure developed in part (a) to solve this problem by hand. (Do not use your OR Courseware.) 4.3-8. Label each of the following statements as true or false, and then justify your answer by referring to specific statements in the chapter. (a) The simplex method’s rule for choosing the entering basic variable is used because it always leads to the best adjacent BF solution (largest Z). (b) The simplex method’s minimum ratio rule for choosing the leaving basic variable is used because making another choice with a larger ratio would yield a basic solution that is not feasible. (c) When the simplex method solves for the next BF solution, elementary algebraic operations are used to eliminate each nonbasic variable from all but one equation (its equation) and to give it a coefficient of 1 in that one equation. 4.4-1. Repeat Prob. 4.3-2, using the tabular form of the simplex method. 
D,I 4.4-2. Repeat Prob. 4.3-3, using the tabular form of the simplex method.

2x1 + x2 + x3 ≤ 20
3x1 + x2 + 2x3 ≤ 30
x2 ≥ 0, x3 ≥ 0.
You are given the information that the nonzero variables in the optimal solution are x2 and x3.
(a) Describe how you can use this information to adapt the simplex method to solve this problem in the minimum possible number of iterations (when you start from the usual initial BF solution). Do not actually perform any iterations.
(b) Use the procedure developed in part (a) to solve this problem by hand. (Do not use your OR Courseware.)

4.3-7. Consider the following problem.

Maximize Z  2x1  4x2  3x3,

subject to

x1  3x2  2x3  30
x1  x2  x3  24
3x1  5x2  3x3  60

D,I,C 4.4-3. Consider the following problem.

Maximize Z = 2x1 + x2,

subject to

x1 + x2 ≤ 40
4x1 + x2 ≤ 100

and

x1 ≥ 0, x2 ≥ 0.

(a) Solve this problem graphically in a freehand manner. Also identify all the CPF solutions.
D,I (b) Now use IOR Tutorial to solve the problem graphically.
D (c) Use hand calculations to solve this problem by the simplex method in algebraic form.
D,I (d) Now use IOR Tutorial to solve this problem interactively by the simplex method in algebraic form.
D (e) Use hand calculations to solve this problem by the simplex method in tabular form.
D,I (f) Now use IOR Tutorial to solve this problem interactively by the simplex method in tabular form.
C (g) Use a software package based on the simplex method to solve the problem.

4.4-4. Repeat Prob. 4.4-3 for the following problem.

Maximize
x2 ≥ 0.

4.4-5. Consider the following problem.

Maximize Z  2x1  4x2  3x3,

subject to

 x1  2x2  x3  20
2x1  4x2  2x3  60
2x1  3x2  x3  50

and

x1 ≥ 0,

3x1  4x2  2x3  60
2x1  x2  2x3  40
x1  3x2  2x3  80

and

x2 ≥ 0, x3 ≥ 0.

(a) Work through the simplex method step by step in algebraic form.
D,I (b) Work through the simplex method step by step in tabular form.
C (c) Use a software package based on the simplex method to solve the problem.

D,I 4.4-6.
Consider the following problem. Z  3x1  5x2  6x3, 2x1  x2  x3 x1  2x2  x3 x1  x2  2x3 x1  x2  x3 x3  0. 4.5-2. Suppose that the following constraints have been provided for a linear programming model with decision variables x1 and x2. x1  3x2  30 3x1  x2  30     4 4 4 3 and x1  0, and x2  0, x3  0. (a) Work through the simplex method step by step in algebraic form. D,I (b) Work through the simplex method in tabular form. C (c) Use a computer package based on the simplex method to solve the problem. D,I 4.4-7. Work through the simplex method step by step (in tabular form) to solve the following problem. D,I Maximize x2  0, 4.5-1. Consider the following statements about linear programming and the simplex method. Label each statement as true or false, and then justify your answer. (a) In a particular iteration of the simplex method, if there is a tie for which variable should be the leaving basic variable, then the next BF solution must have at least one basic variable equal to zero. (b) If there is no leaving basic variable at some iteration, then the problem has no feasible solutions. (c) If at least one of the basic variables has a coefficient of zero in row 0 of the final tableau, then the problem has multiple optimal solutions. (d) If the problem has multiple optimal solutions, then the problem must have a bounded feasible region. subject to x1  0, Z  x1  x2  2x3, subject to and Maximize x3  0. 4.4-8. Work through the simplex method step by step to solve the following problem. x1  2x2  30 x1  x2  20 x1  0, x2  0, D,I subject to Maximize x1  0, Z  2x1  3x2, Maximize x1  0, and Z  2x1  x2  x3, subject to 3x1  x2  x3  6 x1  x2  2x3  1 x1  x2  x3  2 x2  0. (a) Demonstrate graphically that the feasible region is unbounded. (b) If the objective is to maximize Z  x1  x2, does the model have an optimal solution? If so, find it. If not, explain why not. (c) Repeat part (b) when the objective is to maximize Z  x1  x2. 
(d) For objective functions where this model has no optimal solution, does this mean that there are no good solutions according to the model? Explain. What probably went wrong when formulating the model? D,I (e) Select an objective function for which this model has no optimal solution. Then work through the simplex method step by step to demonstrate that Z is unbounded. C (f) For the objective function selected in part (e), use a software package based on the simplex method to determine that Z is unbounded. 4.5-3. Follow the instructions of Prob. 4.5-2 when the constraints are the following: 2x1  x2  20 x1  2x2  20 156 CHAPTER 4 SOLVING LINEAR PROGRAMMING PROBLEMS: THE SIMPLEX METHOD and and x1  0, x2  0. xj  0, 4.5-4. Consider the following problem. D,I Z  5x1  x2  3x3  4x4, Maximize subject to x1  2x2  4 x1  x2  3 and x2  0, x3  0, x4  0. Work through the simplex method step by step to demonstrate that Z is unbounded. 4.5-5. A basic property of any linear programming problem with a bounded feasible region is that every feasible solution can be expressed as a convex combination of the CPF solutions (perhaps in more than one way). Similarly, for the augmented form of the problem, every feasible solution can be expressed as a convex combination of the BF solutions. (a) Show that any convex combination of any set of feasible solutions must be a feasible solution (so that any convex combination of CPF solutions must be feasible). (b) Use the result quoted in part (a) to show that any convex combination of BF solutions must be a feasible solution. 4.5-6. Using the facts given in Prob. 4.5-5, show that the following statements must be true for any linear programming problem that has a bounded feasible region and multiple optimal solutions: (a) Every convex combination of the optimal BF solutions must be optimal. (b) No other feasible solution can be optimal. 4.5-7. 
Consider a two-variable linear programming problem whose CPF solutions are (0, 0), (6, 0), (6, 3), (3, 3), and (0, 2). (See Prob. 3.2-2 for a graph of the feasible region.) (a) Use the graph of the feasible region to identify all the constraints for the model. (b) For each pair of adjacent CPF solutions, give an example of an objective function such that all the points on the line segment between these two corner points are multiple optimal solutions. (c) Now suppose that the objective function is Z  x1  2x2. Use the graphical method to find all the optimal solutions. D,I (d) For the objective function in part (c), work through the simplex method step by step to find all the optimal BF solutions. Then write an algebraic expression that identifies all the optimal solutions. 4.5-8. Consider the following problem. Maximize subject to x1  x2  3 x3  x4  2 Z  2x1  3x2, Maximize  x1  2x2  4x3  3x4  20 4x1  6x2  5x3  4x4  40 2x1  3x2  3x3  8x4  50 D,I Work through the simplex method step by step to find all the optimal BF solutions. 4.6-1.* Consider the following problem. subject to x1  0, for j  1, 2, 3, 4. Z  x1  x2  x3  x4, and x1  0, x2  0. D,I (a) Solve this problem graphically. (b) Using the Big M method, construct the complete first simplex tableau for the simplex method and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. I (c) Continue from part (b) to work through the simplex method step by step to solve the problem. 4.6-2. Consider the following problem. Maximize Z  4x1  2x2  3x3  5x4, subject to 2x1  3x2  4x3  2x4  300 8x1  x2  x3  5x4  300 and xj  0, for j  1, 2, 3, 4. (a) Using the Big M method, construct the complete first simplex tableau for the simplex method and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. 
I (b) Work through the simplex method step by step to solve the problem. (c) Using the two-phase method, construct the complete first simplex tableau for phase 1 and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. I (d) Work through phase 1 step by step. (e) Construct the complete first simplex tableau for phase 2. I (f) Work through phase 2 step by step to solve the problem. (g) Compare the sequence of BF solutions obtained in part (b) with that in parts (d) and ( f ). Which of these solutions are feasible only for the artificial problem obtained by introducing artificial variables and which are actually feasible for the real problem? C (h) Use a software package based on the simplex method to solve the problem. 4.6-3.* Consider the following problem. Minimize Z  2x1  3x2  x3, 157 PROBLEMS subject to subject to x1  4x2  2x3  8 3x1  2x2  2x3  6 x1  2x2  x3  20 2x1  4x2  x3  50 and x1  0, and x2  0, x3  0. (a) Reformulate this problem to fit our standard form for a linear programming model presented in Sec. 3.2. I (b) Using the Big M method, work through the simplex method step by step to solve the problem. I (c) Using the two-phase method, work through the simplex method step by step to solve the problem. (d) Compare the sequence of BF solutions obtained in parts (b) and (c). Which of these solutions are feasible only for the artificial problem obtained by introducing artificial variables and which are actually feasible for the real problem? C (e) Use a software package based on the simplex method to solve the problem. 4.6-4. For the Big M method, explain why the simplex method never would choose an artificial variable to be an entering basic variable once all the artificial variables are nonbasic. 4.6-5. Consider the following problem. Z  90x1  70x2, Maximize subject to x1  0, x2  0, x3  0. 
(a) Using the Big M method, construct the complete first simplex tableau for the simplex method and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. I (b) Work through the simplex method step by step to solve the problem. I (c) Using the two-phase method, construct the complete first simplex tableau for phase 1 and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. I (d) Work through phase 1 step by step. (e) Construct the complete first simplex tableau for phase 2. I (f) Work through phase 2 step by step to solve the problem. (g) Compare the sequence of BF solutions obtained in part (b) with that in parts (d) and ( f ). Which of these solutions are feasible only for the artificial problem obtained by introducing artificial variables and which are actually feasible for the real problem? C (h) Use a software package based on the simplex method to solve the problem. 4.6-8. Consider the following problem. 2x1  x2  2 x1  x2  2 Minimize Z  2x1  x2  3x3, and x1  0, x2  0. (a) Demonstrate graphically that this problem has no feasible solutions. C (b) Use a computer package based on the simplex method to determine that the problem has no feasible solutions. I (c) Using the Big M method, work through the simplex method step by step to demonstrate that the problem has no feasible solutions. I (d) Repeat part (c) when using phase 1 of the two-phase method. 4.6-6. Follow the instructions of Prob. 4.6-5 for the following problem. Minimize Z  5,000x1  7,000x2, subject to and x1  0, x2  0, x3  0. (a) Using the two-phase method, work through phase 1 step by step. C (b) Use a software package based on the simplex method to formulate and solve the phase 1 problem. I (c) Work through phase 2 step by step to solve the original problem. 
C (d) Use a software package based on the simplex method to solve the original problem. I Minimize Z  3x1  2x2  4x3, subject to and x2  0. 4.6-7. Consider the following problem. Maximize 5x1  2x2  7x3  420 3x1  2x2  5x3  280 4.6-9.* Consider the following problem. 2x1  x2  1  x1  2x2  1 x1  0, subject to Z  2x1  5x2  3x3, 2x1  x2  3x3  60 3x1  3x2  5x3  120 and x1  0, x2  0, x3  0. 158 CHAPTER 4 SOLVING LINEAR PROGRAMMING PROBLEMS: THE SIMPLEX METHOD (a) Using the Big M method, work through the simplex method step by step to solve the problem. I (b) Using the two-phase method, work through the simplex method step by step to solve the problem. (c) Compare the sequence of BF solutions obtained in parts (a) and (b). Which of these solutions are feasible only for the artificial problem obtained by introducing artificial variables and which are actually feasible for the real problem? C (d) Use a software package based on the simplex method to solve the problem. I 4.6-10. Follow the instructions of Prob. 4.6-9 for the following problem. Minimize Z  3x1  2x2  7x3, subject to x1  x2  x3  10 2x1  x2  x3  10 and x1  0, x2  0, x3  0. 4.6-11. Label each of the following statements as true or false, and then justify your answer. (a) When a linear programming model has an equality constraint, an artificial variable is introduced into this constraint in order to start the simplex method with an obvious initial basic solution that is feasible for the original model. (b) When an artificial problem is created by introducing artificial variables and using the Big M method, if all artificial variables in an optimal solution for the artificial problem are equal to zero, then the real problem has no feasible solutions. (c) The two-phase method is commonly used in practice because it usually requires fewer iterations to reach an optimal solution than the Big M method does. 4.6-12. Consider the following problem. 
Maximize Z  x1  4x2  2x3, subject to 4x1  x2  2x3  5 x1  x2  2x3  10 and x2  0, x3  0 (no nonnegativity constraint for x1). (a) Reformulate this problem so all variables have nonnegativity constraints. D,I (b) Work through the simplex method step by step to solve the problem. C (c) Use a software package based on the simplex method to solve the problem. 4.6-13.* Consider the following problem. Maximize Z  x1  4x2, subject to 3x1  x2  6  x1  2x2  4  x1  2x2  3 (no lower bound constraint for x1). D,I (a) Solve this problem graphically. (b) Reformulate this problem so that it has only two functional constraints and all variables have nonnegativity constraints. D,I (c) Work through the simplex method step by step to solve the problem. 4.6-14. Consider the following problem. Maximize Z  x1  2x2  x3, subject to 3x2  x3  120 x1  x2  4x3  80 3x1  x2  2x3  100 (no nonnegativity constraints). (a) Reformulate this problem so that all variables have nonnegativity constraints. D,I (b) Work through the simplex method step by step to solve the problem. C (c) Use a computer package based on the simplex method to solve the problem. 4.6-15. This chapter has described the simplex method as applied to linear programming problems where the objective function is to be maximized. Section 4.6 then described how to convert a minimization problem to an equivalent maximization problem for applying the simplex method. Another option with minimization problems is to make a few modifications in the instructions for the simplex method given in the chapter in order to apply the algorithm directly. (a) Describe what these modifications would need to be. (b) Using the Big M method, apply the modified algorithm developed in part (a) to solve the following problem directly by hand. (Do not use your OR Courseware.) Minimize Z  3x1  8x2  5x3, subject to 3x1  3x2  4x3  70 3x1  5x2  2x3  70 and x1  0, x2  0, x3  0. 4.6-16. Consider the following problem. 
Maximize Z  2x1  x2  4x3  3x4, subject to x1  x2  3x3  2x4  4 x1  x2  x3  x4  1 2x1  x2  x3  x4  2 x1  2x2  x3  2x4  2 and x2  0, x3  0, x4  0 (no nonnegativity constraint for x1). 159 PROBLEMS (a) Reformulate this problem to fit our standard form for a linear programming model presented in Sec. 3.2. (b) Using the Big M method, construct the complete first simplex tableau for the simplex method and identify the corresponding initial (artificial) BF solution. Also identify the initial entering basic variable and the leaving basic variable. (c) Using the two-phase method, construct row 0 of the first simplex tableau for phase 1. C (d) Use a computer package based on the simplex method to solve the problem. I x1  3x2  17 x1 x2  5 and x1  0, Z  4x1  5x2  3x3, subject to 4.7-4. Consider the following problem. and x2  0, x3  0. Work through the simplex method step by step to demonstrate that this problem does not possess any feasible solutions. 4.7-1. Refer to Fig. 4.10 and the resulting allowable range for the respective right-hand sides of the Wyndor Glass Co. problem given in Sec. 3.1. Use graphical analysis to demonstrate that each given allowable range is correct. 4.7-2. Reconsider the model in Prob. 4.1-5. Interpret the right-hand side of the respective functional constraints as the amount available of the respective resources. I (a) Use graphical analysis as in Fig. 4.8 to determine the shadow prices for the respective resources. I (b) Use graphical analysis to perform sensitivity analysis on this model. In particular, check each parameter of the model to determine whether it is a sensitive parameter (a parameter whose value cannot be changed without changing the optimal solution) by examining the graph that identifies the optimal solution. I (c) Use graphical analysis as in Fig. 4.9 to determine the allowable range for each cj value (coefficient of xj in the objective function) over which the current optimal solution will remain optimal. 
I (d) Changing just one bi value (the right-hand side of functional constraint i) will shift the corresponding constraint boundary. If the current optimal CPF solution lies on this constraint boundary, this CPF solution also will shift. Use graphical analysis to determine the allowable range for each bi value over which this CPF solution will remain feasible. C (e) Verify your answers in parts (a), (c), and (d) by using a computer package based on the simplex method to solve the problem and then to generate sensitivity analysis information. 4.7-3. You are given the following linear programming problem. Maximize Z  4x1  2x2, subject to 2x1  3x2  16 Maximize Z  x1  7x2  3x3, subject to 2x1  x2  x3  4 4x1  3x2  x3  2 3x1  2x2  x3  3 x1  x2  2x3  20 15x1  6x2  5x3  50 x1  3x2  5x3  30 x1  0, x2  0. (a) Solve this problem graphically. (b) Use graphical analysis to find the shadow prices for the resources. (c) Determine how many additional units of resource 1 would be needed to increase the optimal value of Z by 15. D,I 4.6-17. Consider the following problem. Maximize (resource 2) (resource 3) (resource 1) (resource 1) (resource 2) (resource 3) and x1  0, x2  0, x3  0. D,I (a) Work through the simplex method step by step to solve the problem. (b) Identify the shadow prices for the three resources and describe their significance. C (c) Use a software package based on the simplex method to solve the problem and then to generate sensitivity information. Use this information to identify the shadow price for each resource, the allowable range for each objective function coefficient, and the allowable range for each righthand side. 4.7-5.* Consider the following problem. Maximize Z  2x1  2x2  3x3, subject to x1  x2  x3  4 2x1  x2  x3  2 x1  x2  3x3  12 (resource 1) (resource 2) (resource 3) and x1  0, x2  0, x3  0. (a) Work through the simplex method step by step to solve the problem. (b) Identify the shadow prices for the three resources and describe their significance. 
C (c) Use a software package based on the simplex method to solve the problem and then to generate sensitivity information. Use this information to identify the shadow price for each resource, the allowable range for each objective function coefficient, and the allowable range for each right-hand side.

D,I 4.7-6. Consider the following problem.

Maximize Z  5x1  4x2  x3  3x4,

subject to

3x1  2x2  3x3  x4 ≤ 24   (resource 1)
3x1  3x2  x3  3x4 ≤ 36   (resource 2)

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

(a) Work through the simplex method step by step to solve the problem.
(b) Identify the shadow prices for the two resources and describe their significance.
C (c) Use a software package based on the simplex method to solve the problem and then to generate sensitivity information. Use this information to identify the shadow price for each resource, the allowable range for each objective function coefficient, and the allowable range for each right-hand side.

D,I 4.9-1. Use the interior-point algorithm in your IOR Tutorial to solve the model in Prob. 4.1-4. Choose α = 0.5 from the Option menu, use (x1, x2) = (0.1, 0.4) as the initial trial solution, and run 15 iterations. Draw a graph of the feasible region, and then plot the trajectory of the trial solutions through this feasible region.

4.9-2. Repeat Prob. 4.9-1 for the model in Prob. 4.1-5.

■ CASES

CASE 4.1 Fabrics and Fall Fashions

From the tenth floor of her office building, Katherine Rally watches the swarms of New Yorkers fight their way through the streets infested with yellow cabs and the sidewalks littered with hot dog stands. On this sweltering July day, she pays particular attention to the fashions worn by the various women and wonders what they will choose to wear in the fall. Her thoughts are not simply random musings; they are critical to her work since she owns and manages TrendLines, an elite women's clothing company.
Today is an especially important day because she must meet with Ted Lawson, the production manager, to decide upon next month’s production plan for the fall line. Specifically, she must determine the quantity of each clothing item she should produce given the plant’s production capacity, limited resources, and demand forecasts. Accurate planning for next month’s production is critical to fall sales since the items produced next month will appear in stores during September, and women generally buy the majority of the fall fashions when they first appear in September. She turns back to her sprawling glass desk and looks at the numerous papers covering it. Her eyes roam across the clothing patterns designed almost six months ago, the lists of materials requirements for each pattern, and the lists of demand forecasts for each pattern determined by customer surveys at fashion shows. She remembers the hectic and sometimes nightmarish days of designing the fall line and presenting it at fashion shows in New York, Milan, and Paris. Ultimately, she paid her team of six designers a total of $860,000 for their work on her fall line. With the cost of hiring runway models, hair stylists, and makeup artists, sewing and fitting clothes, building the set, choreographing and rehearsing the show, and renting the conference hall, each of the three fashion shows cost her an additional $2,700,000. She studies the clothing patterns and material requirements. Her fall line consists of both professional and casual fashions. She determined the prices for each clothing item by taking into account the quality and cost of material, the cost of labor and machining, the demand for the item, and the prestige of the TrendLines brand name. 
The fall professional fashions include:

Clothing Item         Materials Requirements                               Price   Labor and Machine Cost
Tailored wool slacks  3 yards of wool, 2 yards of acetate for lining       $300    $160
Cashmere sweater      1.5 yards of cashmere                                $450    $150
Silk blouse           1.5 yards of silk                                    $180    $100
Silk camisole         0.5 yard of silk                                     $120    $ 60
Tailored skirt        2 yards of rayon, 1.5 yards of acetate for lining    $270    $120
Wool blazer           2.5 yards of wool, 1.5 yards of acetate for lining   $320    $140

The fall casual fashions include:

Clothing Item         Materials Requirements                               Price   Labor and Machine Cost
Velvet pants          3 yards of velvet, 2 yards of acetate for lining     $350    $175
Cotton sweater        1.5 yards of cotton                                  $130    $ 60
Cotton miniskirt      0.5 yard of cotton                                   $ 75    $ 40
Velvet shirt          1.5 yards of velvet                                  $200    $160
Button-down blouse    1.5 yards of rayon                                   $120    $ 90

She knows that for the next month, she has ordered 45,000 yards of wool, 28,000 yards of acetate, 9,000 yards of cashmere, 18,000 yards of silk, 30,000 yards of rayon, 20,000 yards of velvet, and 30,000 yards of cotton for production. The prices of the materials are as follows:

Material    Price per yard
Wool        $ 9.00
Acetate     $ 1.50
Cashmere    $60.00
Silk        $13.00
Rayon       $ 2.25
Velvet      $12.00
Cotton      $ 2.50

Any material that is not used in production can be sent back to the textile wholesaler for a full refund, although scrap material cannot be sent back to the wholesaler. She knows that the production of both the silk blouse and cotton sweater leaves leftover scraps of material. Specifically, for the production of one silk blouse or one cotton sweater, 2 yards of silk and cotton, respectively, are needed. From these 2 yards, 1.5 yards are used for the silk blouse or the cotton sweater and 0.5 yard is left as scrap material. She does not want to waste the material, so she plans to use the rectangular scrap of silk or cotton to produce a silk camisole or cotton miniskirt, respectively. Therefore, whenever a silk blouse is produced, a silk camisole is also produced.
Likewise, whenever a cotton sweater is produced, a cotton miniskirt is also produced. Note that it is possible to produce a silk camisole without producing a silk blouse and a cotton miniskirt without producing a cotton sweater. The demand forecasts indicate that some items have limited demand. Specifically, because the velvet pants and velvet shirts are fashion fads, TrendLines has forecasted that it can sell only 5,500 pairs of velvet pants and 6,000 velvet shirts. TrendLines does not want to produce more than the forecasted demand because once the pants and shirts go out of style, the company cannot sell them. TrendLines can produce less than the forecasted demand, however, since the company is not required to meet the demand. The cashmere sweater also has limited demand because it is quite expensive, and TrendLines knows it can sell at most 4,000 cashmere sweaters. The silk blouses and camisoles have limited demand because many women think silk is too hard to care for, and TrendLines projects that it can sell at most 12,000 silk blouses and 15,000 silk camisoles. The demand forecasts also indicate that the wool slacks, tailored skirts, and wool blazers have a great demand because they are basic items needed in every professional wardrobe. Specifically, the demand for wool slacks is 7,000 pairs of slacks, and the demand for wool blazers is 5,000 blazers. Katherine wants to meet at least 60 percent of the demand for these two items in order to maintain her loyal customer base and not lose business in the future. Although the demand for tailored skirts could not be estimated, Katherine feels she should make at least 2,800 of them. (a) Ted is trying to convince Katherine not to produce any velvet shirts since the demand for this fashion fad is quite low. He argues that this fashion fad alone accounts for $500,000 of the fixed design and other costs. 
The net contribution (price of clothing item − materials cost − labor cost) from selling the fashion fad should cover these fixed costs. Each velvet shirt generates a net contribution of $22. He argues that given the net contribution, even satisfying the maximum demand will not yield a profit. What do you think of Ted's argument?
(b) Formulate and solve a linear programming problem to maximize profit given the production, resource, and demand constraints.

Before she makes her final decision, Katherine plans to explore the following questions independently except where otherwise indicated.

(c) The textile wholesaler informs Katherine that the velvet cannot be sent back because the demand forecasts show that the demand for velvet will decrease in the future. Katherine can therefore get no refund for the velvet. How does this fact change the production plan?
(d) What is an intuitive economic explanation for the difference between the solutions found in parts (b) and (c)?
(e) The sewing staff encounters difficulties sewing the arms and lining into the wool blazers since the blazer pattern has an awkward shape and the heavy wool material is difficult to cut and sew. The increased labor time to sew a wool blazer increases the labor and machine cost for each blazer by $80. Given this new cost, how many of each clothing item should TrendLines produce to maximize profit?
(f) The textile wholesaler informs Katherine that since another textile customer canceled his order, she can obtain an extra 10,000 yards of acetate. How many of each clothing item should TrendLines now produce to maximize profit?
(g) TrendLines assumes that it can sell every item that was not sold during September and October in a big sale in November at 60 percent of the original price. Therefore, it can sell all items in unlimited quantity during the November sale.
(The previously mentioned upper limits on demand concern only the sales during September and October.) What should the new production plan be to maximize profit?

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 4.2 New Frontiers

AmeriBank will soon begin offering Web banking to its customers. To guide its planning for the services to provide over the Internet, a survey will be conducted with four different age groups in three types of communities. AmeriBank is imposing a number of constraints on how extensively each age group and each community should be surveyed. Linear programming is needed to develop a plan for the survey that will minimize its total cost while meeting all the survey constraints under several different scenarios.

CASE 4.3 Assigning Students to Schools

After deciding to close one of its middle schools, the Springfield school board needs to reassign all of next year's middle school students to the three remaining middle schools. Many of the students will be bused, so minimizing the total busing cost is one objective. Another is to minimize the inconvenience and safety concerns for the students who will walk or bicycle to school. Given the capacities of the three schools, as well as the need to roughly balance the number of students in the three grades at each school, how can linear programming be used to determine how many students from each of the city's six residential areas should be assigned to each school? What would happen if each entire residential area must be assigned to the same school? (This case will be continued in Cases 7.3 and 12.4.)

CHAPTER 5
The Theory of the Simplex Method

Chapter 4 introduced the basic mechanics of the simplex method. Now we shall delve a little more deeply into this algorithm by examining some of its underlying theory. The first section further develops the general geometric and algebraic properties that form the foundation of the simplex method.
We then describe the matrix form of the simplex method, which streamlines the procedure considerably for computer implementation. Next we use this matrix form to present a fundamental insight about a property of the simplex method that enables us to deduce how changes that are made in the original model get carried along to the final simplex tableau. This insight will provide the key to the important topics of Chap. 6 (duality theory) and Secs. 7.1–7.3 (sensitivity analysis). The chapter then concludes by presenting the revised simplex method, which further streamlines the matrix form of the simplex method. Commercial computer codes of the simplex method normally are based on the revised simplex method.

■ 5.1 FOUNDATIONS OF THE SIMPLEX METHOD

Section 4.1 introduced corner-point feasible (CPF) solutions and the key role they play in the simplex method. These geometric concepts were related to the algebra of the simplex method in Secs. 4.2 and 4.3. However, all this was done in the context of the Wyndor Glass Co. problem, which has only two decision variables and so has a straightforward geometric interpretation. How do these concepts generalize to higher dimensions when we deal with larger problems? We address this question in this section.

We begin by introducing some basic terminology for any linear programming problem with n decision variables. While we are doing this, you may find it helpful to refer to Fig. 5.1 (which repeats Fig. 4.1) to interpret these definitions in two dimensions (n = 2).

Terminology

It may seem intuitively clear that optimal solutions for any linear programming problem must lie on the boundary of the feasible region, and in fact, this is a general property. Because boundary is a geometric concept, our initial definitions clarify how the boundary of the feasible region is identified algebraically.

The constraint boundary equation for any constraint is obtained by replacing its ≤, =, or ≥ sign with an = sign.
■ FIGURE 5.1 Constraint boundaries, constraint boundary equations, and corner-point solutions for the Wyndor Glass Co. problem:

Maximize Z = 3x1 + 5x2,
subject to
    x1 ≤ 4
    2x2 ≤ 12
    3x1 + 2x2 ≤ 18
and
    x1 ≥ 0, x2 ≥ 0.

The corner-point solutions shown are (0, 0), (0, 6), (2, 6), (4, 3), (4, 0), (0, 9), (4, 6), and (6, 0).

Consequently, the form of a constraint boundary equation is

    ai1x1 + ai2x2 + ⋯ + ainxn = bi

for functional constraints and xj = 0 for nonnegativity constraints. Each such equation defines a "flat" geometric shape (called a hyperplane) in n-dimensional space, analogous to the line in two-dimensional space and the plane in three-dimensional space. This hyperplane forms the constraint boundary for the corresponding constraint. When the constraint has either a ≤ or a ≥ sign, this constraint boundary separates the points that satisfy the constraint (all the points on one side up to and including the constraint boundary) from the points that violate the constraint (all those on the other side of the constraint boundary). When the constraint has an = sign, only the points on the constraint boundary satisfy the constraint. For example, the Wyndor Glass Co. problem has five constraints (three functional constraints and two nonnegativity constraints), so it has the five constraint boundary equations shown in Fig. 5.1. Because n = 2, the hyperplanes defined by these constraint boundary equations are simply lines. Therefore, the constraint boundaries for the five constraints are the five lines shown in Fig. 5.1. The boundary of the feasible region contains just those feasible solutions that satisfy one or more of the constraint boundary equations. Geometrically, any point on the boundary of the feasible region lies on one or more of the hyperplanes defined by the respective constraint boundary equations. Thus, in Fig. 5.1, the boundary consists of the five darker line segments.
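These boundary definitions are easy to check numerically: a feasible point lies on the boundary exactly when it satisfies at least one constraint boundary equation. A small sketch for the Wyndor constraints (the helper names here are our own, not the book's):

```python
# Wyndor constraints written uniformly as a1*x1 + a2*x2 <= rhs; the two
# nonnegativity constraints become -x1 <= 0 and -x2 <= 0.
CONSTRAINTS = [((1, 0), 4), ((0, 2), 12), ((3, 2), 18), ((-1, 0), 0), ((0, -1), 0)]

def lhs(a, x):
    return a[0] * x[0] + a[1] * x[1]

def is_feasible(x):
    return all(lhs(a, x) <= rhs for a, rhs in CONSTRAINTS)

def boundaries_hit(x):
    """Indices of the constraint boundary equations that x satisfies."""
    return [i for i, (a, rhs) in enumerate(CONSTRAINTS) if lhs(a, x) == rhs]

print(boundaries_hit((2, 3)))   # [] -> (2, 3) is an interior point
print(boundaries_hit((2, 6)))   # [1, 2] -> on 2x2 = 12 and 3x1 + 2x2 = 18
```

When the point is a corner-point solution, the indices returned are exactly its defining equations (two of them here, since n = 2).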
Next, we give a general definition of CPF solution in n-dimensional space.

A corner-point feasible (CPF) solution is a feasible solution that does not lie on any line segment1 connecting two other feasible solutions.

As this definition implies, a feasible solution that does lie on a line segment connecting two other feasible solutions is not a CPF solution. To illustrate when n = 2, consider Fig. 5.1. The point (2, 3) is not a CPF solution, because it lies on various such line segments, e.g., it is the midpoint on the line segment connecting (0, 3) and (4, 3). Similarly, (0, 3) is not a CPF solution, because it is the midpoint on the line segment connecting (0, 0) and (0, 6). However, (0, 0) is a CPF solution, because it is impossible to find two other feasible solutions that lie on completely opposite sides of (0, 0). (Try it.) When the number of decision variables n is greater than 2 or 3, this definition for CPF solution is not a very convenient one for identifying such solutions. Therefore, it will prove most helpful to interpret these solutions algebraically. For the Wyndor Glass Co. example, each CPF solution in Fig. 5.1 lies at the intersection of two (n = 2) constraint lines; i.e., it is the simultaneous solution of a system of two constraint boundary equations. This situation is summarized in Table 5.1, where defining equations refer to the constraint boundary equations that yield (define) the indicated CPF solution. For any linear programming problem with n decision variables, each CPF solution lies at the intersection of n constraint boundaries; i.e., it is the simultaneous solution of a system of n constraint boundary equations. However, this is not to say that every set of n constraint boundary equations chosen from the n + m constraints (n nonnegativity and m functional constraints) yields a CPF solution.

1 An algebraic expression for a line segment is given in Appendix 2.
In particular, the simultaneous solution of such a system of equations might violate one or more of the other m constraints not chosen, in which case it is a corner-point infeasible solution. The example has three such solutions, as summarized in Table 5.2. (Check to see why they are infeasible.)

■ TABLE 5.1 Defining equations for each CPF solution for the Wyndor Glass Co. problem

CPF Solution    Defining Equations
(0, 0)          x1 = 0, x2 = 0
(0, 6)          x1 = 0, 2x2 = 12
(2, 6)          2x2 = 12, 3x1 + 2x2 = 18
(4, 3)          3x1 + 2x2 = 18, x1 = 4
(4, 0)          x1 = 4, x2 = 0

■ TABLE 5.2 Defining equations for each corner-point infeasible solution for the Wyndor Glass Co. problem

Corner-Point Infeasible Solution    Defining Equations
(0, 9)                              x1 = 0, 3x1 + 2x2 = 18
(4, 6)                              2x2 = 12, x1 = 4
(6, 0)                              3x1 + 2x2 = 18, x2 = 0

Furthermore, a system of n constraint boundary equations might have no solution at all. This occurs twice in the example, with the pairs of equations (1) x1 = 0 and x1 = 4 and (2) x2 = 0 and 2x2 = 12. Such systems are of no interest to us. The final possibility (which never occurs in the example) is that a system of n constraint boundary equations has multiple solutions because of redundant equations. You need not be concerned with this case either, because the simplex method circumvents its difficulties. We also should mention that it is possible for more than one system of n constraint boundary equations to yield the same corner-point solution. This would happen if a CPF solution that lies at the intersection of n constraint boundaries also happens to have one or more other constraint boundaries that pass through this same point. For example, if the x1 ≤ 4 constraint in the Wyndor Glass Co. problem (where n = 2) were to be replaced by x1 ≤ 2, note in Fig. 5.1 how the CPF solution (2, 6) lies at the intersection of three constraint boundaries instead of just two. Therefore, this solution can be derived from any one of three pairs of constraint boundary equations.
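Tables 5.1 and 5.2, along with the two no-solution pairs, can be reproduced by brute force: solve each of the 10 pairs of constraint boundary equations and test the result against the remaining constraints. A sketch (the helper names are ours, not the book's):

```python
from itertools import combinations

# The five constraint boundary equations a . x = r for the Wyndor problem:
# x1 = 0, x2 = 0, x1 = 4, 2x2 = 12, 3x1 + 2x2 = 18.
BOUNDARIES = [((1, 0), 0), ((0, 1), 0), ((1, 0), 4), ((0, 2), 12), ((3, 2), 18)]

def solve_pair(e1, e2):
    """Solve the 2 x 2 system by Cramer's rule; None if the lines are parallel."""
    (a, r), (c, s) = e1, e2
    det = a[0] * c[1] - a[1] * c[0]
    if det == 0:
        return None
    return ((r * c[1] - s * a[1]) / det, (a[0] * s - c[0] * r) / det)

def feasible(p):
    x1, x2 = p
    return x1 >= 0 and x2 >= 0 and x1 <= 4 and 2 * x2 <= 12 and 3 * x1 + 2 * x2 <= 18

cpf, infeasible, no_solution = [], [], 0
for e1, e2 in combinations(BOUNDARIES, 2):
    p = solve_pair(e1, e2)
    if p is None:
        no_solution += 1          # e.g., x1 = 0 together with x1 = 4
    elif feasible(p):
        cpf.append(p)             # the pair defines a CPF solution
    else:
        infeasible.append(p)      # a corner-point infeasible solution

print(len(cpf), len(infeasible), no_solution)   # 5 3 2, matching the text
```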
(This is an example of the degeneracy discussed in a different context in Sec. 4.5.) To summarize for the example, with five constraints and two variables, there are 10 pairs of constraint boundary equations. Five of these pairs became defining equations for CPF solutions (Table 5.1), three became defining equations for corner-point infeasible solutions (Table 5.2), and each of the final two pairs had no solution.

Adjacent CPF Solutions

Section 4.1 introduced adjacent CPF solutions and their role in solving linear programming problems. We now elaborate. Recall from Chap. 4 that (when we ignore slack, surplus, and artificial variables) each iteration of the simplex method moves from the current CPF solution to an adjacent one. What is the path followed in this process? What really is meant by adjacent CPF solution? First we address these questions from a geometric viewpoint, and then we turn to algebraic interpretations. These questions are easy to answer when n = 2. In this case, the boundary of the feasible region consists of several connected line segments forming a polygon, as shown in Fig. 5.1 by the five darker line segments. These line segments are the edges of the feasible region. Emanating from each CPF solution are two such edges leading to an adjacent CPF solution at the other end. (Note in Fig. 5.1 how each CPF solution has two adjacent ones.) The path followed in an iteration is to move along one of these edges from one end to the other. In Fig. 5.1, the first iteration involves moving along the edge from (0, 0) to (0, 6), and then the next iteration moves along the edge from (0, 6) to (2, 6). As Table 5.1 illustrates, each of these moves to an adjacent CPF solution involves just one change in the set of defining equations (constraint boundaries on which the solution lies). When n = 3, the answers are slightly more complicated. To help you visualize what is going on, Fig.
5.2 shows a three-dimensional drawing of a typical feasible region when n = 3, where the dots are the CPF solutions. This feasible region is a polyhedron rather than the polygon we had with n = 2 (Fig. 5.1), because the constraint boundaries now are planes rather than lines. The faces of the polyhedron form the boundary of the feasible region, where each face is the portion of a constraint boundary that satisfies the other constraints as well. Note that each CPF solution lies at the intersection of three constraint boundaries (sometimes including some of the x1 = 0, x2 = 0, and x3 = 0 constraint boundaries for the nonnegativity constraints), and the solution also satisfies the other constraints. Such intersections that do not satisfy one or more of the other constraints yield corner-point infeasible solutions instead. The darker line segment in Fig. 5.2 depicts the path of the simplex method on a typical iteration. The point (2, 4, 3) is the current CPF solution to begin the iteration, and the point (4, 2, 4) will be the new CPF solution at the end of the iteration.

■ FIGURE 5.2 Feasible region and CPF solutions for a three-variable linear programming problem. Constraints: x1 ≤ 4, x2 ≤ 4, x1 + x2 ≤ 6, −x1 + 2x3 ≤ 4, and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0. The CPF solutions shown are (0, 0, 0), (4, 0, 0), (0, 4, 0), (4, 2, 0), (2, 4, 0), (0, 0, 2), (4, 0, 4), (4, 2, 4), (0, 4, 2), and (2, 4, 3).

The point (2, 4, 3) lies at the intersection of the x2 = 4, x1 + x2 = 6, and −x1 + 2x3 = 4 constraint boundaries, so these three equations are the defining equations for this CPF solution. If the x2 = 4 defining equation were removed, the intersection of the other two constraint boundaries (planes) would form a line. One segment of this line, shown as the dark line segment from (2, 4, 3) to (4, 2, 4) in Fig. 5.2, lies on the boundary of the feasible region, whereas the rest of the line is infeasible.
This line segment is an edge of the feasible region, and its endpoints (2, 4, 3) and (4, 2, 4) are adjacent CPF solutions. For n = 3, all the edges of the feasible region are formed in this way as the feasible segment of the line lying at the intersection of two constraint boundaries, and the two endpoints of an edge are adjacent CPF solutions. In Fig. 5.2 there are 15 edges of the feasible region, and so there are 15 pairs of adjacent CPF solutions. For the current CPF solution (2, 4, 3), there are three ways to remove one of its three defining equations to obtain an intersection of the other two constraint boundaries, so there are three edges emanating from (2, 4, 3). These edges lead to (4, 2, 4), (0, 4, 2), and (2, 4, 0), so these are the CPF solutions that are adjacent to (2, 4, 3). For the next iteration, the simplex method chooses one of these three edges, say, the darker line segment in Fig. 5.2, and then moves along this edge away from (2, 4, 3) until it reaches the first new constraint boundary, x1 = 4, at its other endpoint. [We cannot continue farther along this line to the next constraint boundary, x2 = 0, because this leads to a corner-point infeasible solution, (6, 0, 5).] The intersection of this first new constraint boundary with the two constraint boundaries forming the edge yields the new CPF solution (4, 2, 4). When n > 3, these same concepts generalize to higher dimensions, except the constraint boundaries now are hyperplanes instead of planes. Let us summarize.

Consider any linear programming problem with n decision variables and a bounded feasible region. A CPF solution lies at the intersection of n constraint boundaries (and satisfies the other constraints as well). An edge of the feasible region is a feasible line segment that lies at the intersection of n − 1 constraint boundaries, where each endpoint lies on one additional constraint boundary (so that these endpoints are CPF solutions).
Two CPF solutions are adjacent if the line segment connecting them is an edge of the feasible region. Emanating from each CPF solution are n such edges, each one leading to one of the n adjacent CPF solutions. Each iteration of the simplex method moves from the current CPF solution to an adjacent one by moving along one of these n edges.

When you shift from a geometric viewpoint to an algebraic one, intersection of constraint boundaries changes to simultaneous solution of constraint boundary equations. The n constraint boundary equations yielding (defining) a CPF solution are its defining equations, where deleting one of these equations yields a line whose feasible segment is an edge of the feasible region. We next analyze some key properties of CPF solutions and then describe the implications of all these concepts for interpreting the simplex method. However, while the summary on the previous page is fresh in your mind, let us give you a preview of its implications. When the simplex method chooses an entering basic variable, the geometric interpretation is that it is choosing one of the edges emanating from the current CPF solution to move along. Increasing this variable from zero (and simultaneously changing the values of the other basic variables accordingly) corresponds to moving along this edge. Having one of the basic variables (the leaving basic variable) decrease so far that it reaches zero corresponds to reaching the first new constraint boundary at the other end of this edge of the feasible region.

Properties of CPF Solutions

We now focus on three key properties of CPF solutions that hold for any linear programming problem that has feasible solutions and a bounded feasible region.

Property 1: (a) If there is exactly one optimal solution, then it must be a CPF solution. (b) If there are multiple optimal solutions (and a bounded feasible region), then at least two must be adjacent CPF solutions.
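The adjacency summary above can be checked computationally for the problem of Fig. 5.2. Taking its constraints as x1 ≤ 4, x2 ≤ 4, x1 + x2 ≤ 6, −x1 + 2x3 ≤ 4, and x1, x2, x3 ≥ 0 (our reading of the figure), each CPF solution should lie on exactly n = 3 constraint boundaries, and two CPF solutions should be adjacent exactly when they share n − 1 = 2 of them:

```python
from itertools import combinations

# CPF solutions read off Fig. 5.2.
CPF = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (4, 2, 0), (2, 4, 0),
       (0, 0, 2), (4, 0, 4), (4, 2, 4), (0, 4, 2), (2, 4, 3)]

# Constraint boundary equations a . x = r (functional, then nonnegativity).
BOUNDARIES = [((1, 0, 0), 4), ((0, 1, 0), 4), ((1, 1, 0), 6), ((-1, 0, 2), 4),
              ((1, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]

def defining(p):
    """Indices of the constraint boundaries on which p lies."""
    return frozenset(i for i, (a, r) in enumerate(BOUNDARIES)
                     if sum(ai * xi for ai, xi in zip(a, p)) == r)

# Every CPF solution here is nondegenerate: exactly 3 defining equations.
assert all(len(defining(p)) == 3 for p in CPF)

# Adjacent pairs share exactly n - 1 = 2 constraint boundaries (an edge).
edges = [(p, q) for p, q in combinations(CPF, 2)
         if len(defining(p) & defining(q)) == 2]
print(len(edges))   # 15 edges, as stated for Fig. 5.2

neighbors = sorted(q for e in edges if (2, 4, 3) in e for q in e if q != (2, 4, 3))
print(neighbors)    # the three CPF solutions adjacent to (2, 4, 3)
```

The last line recovers (4, 2, 4), (0, 4, 2), and (2, 4, 0), the same three neighbors identified in the discussion of Fig. 5.2.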
Property 1 is a rather intuitive one from a geometric viewpoint. First consider Case (a), which is illustrated by the Wyndor Glass Co. problem (see Fig. 5.1) where the one optimal solution (2, 6) is indeed a CPF solution. Note that there is nothing special about this example that led to this result. For any problem having just one optimal solution, it always is possible to keep raising the objective function line (hyperplane) until it just touches one point (the optimal solution) at a corner of the feasible region. We now give an algebraic proof for this case.

Proof of Case (a) of Property 1: We set up a proof by contradiction by assuming that there is exactly one optimal solution and that it is not a CPF solution. We then show below that this assumption leads to a contradiction and so cannot be true. (The solution assumed to be optimal will be denoted by x*, and its objective function value by Z*.) Recall the definition of CPF solution (a feasible solution that does not lie on any line segment connecting two other feasible solutions). Since we have assumed that the optimal solution x* is not a CPF solution, this implies that there must be two other feasible solutions such that the line segment connecting them contains the optimal solution. Let the vectors x′ and x″ denote these two other feasible solutions, and let Z1 and Z2 denote their respective objective function values. Like each other point on the line segment connecting x′ and x″,

x* = αx″ + (1 − α)x′

for some value of α such that 0 < α < 1. (For example, if x* is the midpoint between x′ and x″, then α = 0.5.) Thus, since the coefficients of the variables are identical for Z*, Z1, and Z2, it follows that

Z* = αZ2 + (1 − α)Z1.

Since the weights α and 1 − α add to 1, the only possibilities for how Z*, Z1, and Z2 compare are (1) Z* = Z1 = Z2, (2) Z1 < Z* < Z2, and (3) Z1 > Z* > Z2.
The first possibility implies that x′ and x″ also are optimal, which contradicts the assumption that there is exactly one optimal solution. Both the latter possibilities contradict the assumption that x* (not a CPF solution) is optimal. The resulting conclusion is that it is impossible to have a single optimal solution that is not a CPF solution.

Now consider Case (b), which was demonstrated in Sec. 3.2 under the definition of optimal solution by changing the objective function in the example to Z = 3x1 + 2x2 (see Fig. 3.5 in Sec. 3.2). What then happens when you are solving graphically is that the objective function line keeps getting raised until it contains the line segment connecting the two CPF solutions (2, 6) and (4, 3). The same thing would happen in higher dimensions except that an objective function hyperplane would keep getting raised until it contained the line segment(s) connecting two (or more) adjacent CPF solutions. As a consequence, all optimal solutions can be obtained as weighted averages of optimal CPF solutions. (This situation is described further in Probs. 4.5-5 and 4.5-6.) The real significance of Property 1 is that it greatly simplifies the search for an optimal solution because now only CPF solutions need to be considered. The magnitude of this simplification is emphasized in Property 2.

Property 2: There are only a finite number of CPF solutions.

This property certainly holds in Figs. 5.1 and 5.2, where there are just 5 and 10 CPF solutions, respectively. To see why the number is finite in general, recall that each CPF solution is the simultaneous solution of a system of n out of the m + n constraint boundary equations. The number of different combinations of m + n equations taken n at a time is

(m + n choose n) = (m + n)!/(m! n!),

which is a finite number. This number, in turn, is an upper bound on the number of CPF solutions. In Fig.
5.1, m = 3 and n = 2, so there are 10 different systems of two equations, but only half of them yield CPF solutions. In Fig. 5.2, m = 4 and n = 3, which gives 35 different systems of three equations, but only 10 yield CPF solutions. Property 2 suggests that, in principle, an optimal solution can be obtained by exhaustive enumeration; i.e., find and compare all the finite number of CPF solutions. Unfortunately, there are finite numbers, and then there are finite numbers that (for all practical purposes) might as well be infinite. For example, a rather small linear programming problem with only m = 50 and n = 50 would have 100!/(50!)^2 ≈ 10^29 systems of equations to be solved! By contrast, the simplex method would need to examine only approximately 100 CPF solutions for a problem of this size. This tremendous savings can be obtained because of the optimality test given in Sec. 4.1 and restated here as Property 3.

Property 3: If a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then there are no better CPF solutions anywhere. Therefore, such a CPF solution is guaranteed to be an optimal solution (by Property 1), assuming only that the problem possesses at least one optimal solution (guaranteed if the problem possesses feasible solutions and a bounded feasible region).

To illustrate Property 3, consider Fig. 5.1 for the Wyndor Glass Co. example. For the CPF solution (2, 6), its adjacent CPF solutions are (0, 6) and (4, 3), and neither has a better value of Z than (2, 6) does. This outcome implies that none of the other CPF solutions, (0, 0) and (4, 0), can be better than (2, 6), so (2, 6) must be optimal. By contrast, Fig. 5.3 shows a feasible region that can never occur for a linear programming problem (since the continuation of the constraint boundary lines that pass through (8/3, 5) would chop off part of this region) but that does violate Property 3.

■ FIGURE 5.3 Modification of the Wyndor Glass Co. problem that violates both linear programming and Property 3 for CPF solutions in linear programming. The region shown has corner points (0, 0), (4, 0), (0, 6), (2, 6), (8/3, 5), and (4, 5), along with the objective function line Z = 36 = 3x1 + 5x2.

The problem shown is identical to the Wyndor Glass Co. example (including the same objective function) except for the enlargement of the feasible region to the right of (8/3, 5). Consequently, the adjacent CPF solutions for (2, 6) now are (0, 6) and (8/3, 5), and again neither is better than (2, 6). However, another CPF solution (4, 5) now is better than (2, 6), thereby violating Property 3. The reason is that the boundary of the feasible region goes down from (2, 6) to (8/3, 5) and then "bends outward" to (4, 5), beyond the objective function line passing through (2, 6). The key point is that the kind of situation illustrated in Fig. 5.3 can never occur in linear programming. The feasible region in Fig. 5.3 implies that the 2x2 ≤ 12 and 3x1 + 2x2 ≤ 18 constraints apply for 0 ≤ x1 ≤ 8/3. However, under the condition that 8/3 ≤ x1 ≤ 4, the 3x1 + 2x2 ≤ 18 constraint is dropped and replaced by x2 ≤ 5. Such "conditional constraints" just are not allowed in linear programming. The basic reason that Property 3 holds for any linear programming problem is that the feasible region always has the property of being a convex set2, as defined in Appendix 2 and illustrated in several figures there. For two-variable linear programming problems, this convex property means that the angle inside the feasible region at every CPF solution is less than 180°. This property is illustrated in Fig. 5.1, where the angles at (0, 0), (0, 6), and (4, 0) are 90° and those at (2, 6) and (4, 3) are between 90° and 180°. By contrast, the feasible region in Fig. 5.3 is not a convex set, because the angle at (8/3, 5) is more than 180°. This is the kind of "bending outward" at an angle greater than 180° that can never occur in linear programming.
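Returning to Property 2, the counts quoted there are just binomial coefficients, easy to confirm directly with Python's math.comb:

```python
from math import comb

# Upper bound on the number of CPF solutions: choose n defining equations
# out of the m + n constraint boundary equations.
print(comb(3 + 2, 2))           # Fig. 5.1 (m = 3, n = 2): 10 systems of equations
print(comb(4 + 3, 3))           # Fig. 5.2 (m = 4, n = 3): 35 systems
print(f"{comb(100, 50):.3e}")   # m = n = 50: about 1.009e+29 systems
```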
In higher dimensions, the same intuitive notion of "never bending outward" (a basic property of a convex set) continues to apply.

2 If you already are familiar with convex sets, note that the set of solutions that satisfy any linear programming constraint (whether it be an inequality or equality constraint) is a convex set. For any linear programming problem, its feasible region is the intersection of the sets of solutions that satisfy its individual constraints. Since the intersection of convex sets is a convex set, this feasible region necessarily is a convex set.

To clarify the significance of a convex feasible region, consider the objective function hyperplane that passes through a CPF solution that has no adjacent CPF solutions that are better. [In the original Wyndor Glass Co. example, this hyperplane is the objective function line passing through (2, 6).] All these adjacent solutions [(0, 6) and (4, 3) in the example] must lie either on the hyperplane or on the unfavorable side (as measured by Z) of the hyperplane. The feasible region being convex means that its boundary cannot "bend outward" beyond an adjacent CPF solution to give another CPF solution that lies on the favorable side of the hyperplane. So Property 3 holds.

Extensions to the Augmented Form of the Problem

For any linear programming problem in our standard form (including functional constraints in ≤ form), the appearance of the functional constraints after slack variables are introduced is as follows:

(1) a11x1 + a12x2 + ⋯ + a1nxn + xn+1 = b1
(2) a21x1 + a22x2 + ⋯ + a2nxn + xn+2 = b2
⋮
(m) am1x1 + am2x2 + ⋯ + amnxn + xn+m = bm,

where xn+1, xn+2, . . . , xn+m are the slack variables. For other linear programming problems, Sec. 4.6 described how essentially this same appearance (proper form from Gaussian elimination) can be obtained by introducing artificial variables, etc. Thus, the original solutions (x1, x2, . . .
, xn) now are augmented by the corresponding values of the slack or artificial variables (xn+1, xn+2, . . . , xn+m) and perhaps some surplus variables as well. This augmentation led in Sec. 4.2 to defining basic solutions as augmented corner-point solutions and basic feasible solutions (BF solutions) as augmented CPF solutions. Consequently, the preceding three properties of CPF solutions also hold for BF solutions. Now let us clarify the algebraic relationships between basic solutions and corner-point solutions. Recall that each corner-point solution is the simultaneous solution of a system of n constraint boundary equations, which we called its defining equations. The key question is: How do we tell whether a particular constraint boundary equation is one of the defining equations when the problem is in augmented form? The answer, fortunately, is a simple one. Each constraint has an indicating variable that completely indicates (by whether its value is zero) whether that constraint's boundary equation is satisfied by the current solution. A summary appears in Table 5.3.

■ TABLE 5.3 Indicating variables for constraint boundary equations* (each sum runs over j = 1 to n; x̄n+i denotes an artificial variable and xsi a surplus variable)

Type of Constraint    Form of Constraint    Constraint in Augmented Form    Constraint Boundary Equation    Indicating Variable
Nonnegativity         xj ≥ 0                xj ≥ 0                          xj = 0                          xj
Functional (≤)        Σ aijxj ≤ bi          Σ aijxj + xn+i = bi             Σ aijxj = bi                    xn+i
Functional (=)        Σ aijxj = bi          Σ aijxj + x̄n+i = bi            Σ aijxj = bi                    x̄n+i
Functional (≥)        Σ aijxj ≥ bi          Σ aijxj + x̄n+i − xsi = bi      Σ aijxj = bi                    x̄n+i − xsi

* Indicating variable = 0 ⇒ constraint boundary equation satisfied; indicating variable ≠ 0 ⇒ constraint boundary equation violated.

For the type of constraint in each row of the table, note that the corresponding constraint boundary equation (fourth column) is satisfied if and only if this constraint's indicating variable (fifth column) equals zero.
In the last row (functional constraint in ≥ form), the indicating variable x̄n+i − xsi actually is the difference between the artificial variable x̄n+i and the surplus variable xsi. Thus, whenever a constraint boundary equation is one of the defining equations for a corner-point solution, its indicating variable has a value of zero in the augmented form of the problem. Each such indicating variable is called a nonbasic variable for the corresponding basic solution. The resulting conclusions and terminology (already introduced in Sec. 4.2) are summarized next.

Each basic solution has m basic variables, and the rest of the variables are nonbasic variables set equal to zero. (The number of nonbasic variables equals n plus the number of surplus variables.) The values of the basic variables are given by the simultaneous solution of the system of m equations for the problem in augmented form (after the nonbasic variables are set to zero). This basic solution is the augmented corner-point solution whose n defining equations are those indicated by the nonbasic variables.

In particular, whenever an indicating variable in the fifth column of Table 5.3 is a nonbasic variable, the constraint boundary equation in the fourth column is a defining equation for the corner-point solution. (For functional constraints in ≥ form, at least one of the two supplementary variables x̄n+i and xsi always is a nonbasic variable, but the constraint boundary equation becomes a defining equation only if both of these variables are nonbasic variables.) Now consider the basic feasible solutions. Note that the only requirements for a solution to be feasible in the augmented form of the problem are that it satisfy the system of equations and that all the variables be nonnegative.

A BF solution is a basic solution where all m basic variables are nonnegative (≥ 0). A BF solution is said to be degenerate if any of these m variables equals zero.
Thus, it is possible for a variable to be zero and still not be a nonbasic variable for the current BF solution. (This case corresponds to a CPF solution that satisfies another constraint boundary equation in addition to its n defining equations.) Therefore, it is necessary to keep track of which is the current set of nonbasic variables (or the current set of basic variables) rather than to rely upon their zero values. We noted earlier that not every system of n constraint boundary equations yields a corner-point solution, because the system may have no solution or it may have multiple solutions. For analogous reasons, not every set of n nonbasic variables yields a basic solution. However, these cases are avoided by the simplex method. To illustrate these definitions, consider the Wyndor Glass Co. example once more. Its constraint boundary equations and indicating variables are shown in Table 5.4.

■ TABLE 5.4 Indicating variables for the constraint boundary equations of the Wyndor Glass Co. problem*

Constraint        Constraint in Augmented Form    Constraint Boundary Equation    Indicating Variable
x1 ≥ 0            x1 ≥ 0                          x1 = 0                          x1
x2 ≥ 0            x2 ≥ 0                          x2 = 0                          x2
x1 ≤ 4            (1) x1 + x3 = 4                 x1 = 4                          x3
2x2 ≤ 12          (2) 2x2 + x4 = 12               2x2 = 12                        x4
3x1 + 2x2 ≤ 18    (3) 3x1 + 2x2 + x5 = 18         3x1 + 2x2 = 18                  x5

* Indicating variable = 0 ⇒ constraint boundary equation satisfied; indicating variable ≠ 0 ⇒ constraint boundary equation violated.

Augmenting each of the CPF solutions (see Table 5.1) yields the BF solutions listed in Table 5.5. This table places adjacent BF solutions next to each other, except for the pair consisting of the first and last solutions listed. Notice that in each case the nonbasic variables necessarily are the indicating variables for the defining equations. Thus, adjacent BF solutions differ by having just one different nonbasic variable.
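This correspondence can be verified by carrying out exactly the computation just described: fix two variables at zero as nonbasic and solve equations (1) to (3) for the remaining three. A sketch with exact rational arithmetic (the function name is ours; singular choices of nonbasic variables return None):

```python
from fractions import Fraction

# Augmented Wyndor system: rows of [coefficients | rhs] over (x1, ..., x5).
ROWS = [([1, 0, 1, 0, 0], 4),
        ([0, 2, 0, 1, 0], 12),
        ([3, 2, 0, 0, 1], 18)]

def bf_candidate(nonbasic):
    """Fix the two given variables (0-indexed) at zero and solve the 3 x 3
    system for the three basic variables; None if the system is singular."""
    basic = [j for j in range(5) if j not in nonbasic]
    # Restrict the equations to the basic columns, keeping the rhs.
    M = [[Fraction(r[0][j]) for j in basic] + [Fraction(r[1])] for r in ROWS]
    for c in range(3):                      # Gauss-Jordan elimination
        p = next((i for i in range(c, 3) if M[i][c] != 0), None)
        if p is None:
            return None                     # no pivot: no basic solution
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for i in range(3):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    x = [Fraction(0)] * 5
    for k, j in enumerate(basic):
        x[j] = M[k][3]
    return x

print([int(v) for v in bf_candidate({3, 4})])   # x4, x5 nonbasic -> [2, 6, 2, 0, 0]
print(bf_candidate({0, 2}))                     # x1, x3 nonbasic -> None (singular)
```

The first call reproduces the (2, 6, 2, 0, 0) row of Table 5.5, and the second reproduces the observation below that the pair x1 and x3 yields no basic solution.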
Also notice that each BF solution is the simultaneous solution of the system of equations for the problem in augmented form (see Table 5.4) when the nonbasic variables are set equal to zero. Similarly, the three corner-point infeasible solutions (see Table 5.2) yield the three basic infeasible solutions shown in Table 5.6. The other two sets of nonbasic variables, (1) x1 and x3 and (2) x2 and x4, do not yield a basic solution, because setting either pair of variables equal to zero leads to having no solution for the system of Eqs. (1) to (3) given in Table 5.4. This conclusion parallels the observation we made early in this section that the corresponding sets of constraint boundary equations do not yield a solution. The simplex method starts at a BF solution and then iteratively moves to a better adjacent BF solution until an optimal solution is reached. At each iteration, how is the adjacent BF solution reached? For the original form of the problem, recall that an adjacent CPF solution is reached from the current one by (1) deleting one constraint boundary (defining equation) from the set of n constraint boundaries defining the current solution, (2) moving away from the current solution in the feasible direction along the intersection of the remaining n − 1 constraint boundaries (an edge of the feasible region), and (3) stopping when the first new constraint boundary (defining equation) is reached.

■ TABLE 5.5 BF solutions for the Wyndor Glass Co. problem

CPF Solution    Defining Equations              BF Solution          Nonbasic Variables
(0, 0)          x1 = 0, x2 = 0                  (0, 0, 4, 12, 18)    x1, x2
(0, 6)          x1 = 0, 2x2 = 12                (0, 6, 4, 0, 6)      x1, x4
(2, 6)          2x2 = 12, 3x1 + 2x2 = 18        (2, 6, 2, 0, 0)      x4, x5
(4, 3)          3x1 + 2x2 = 18, x1 = 4          (4, 3, 0, 6, 0)      x5, x3
(4, 0)          x1 = 4, x2 = 0                  (4, 0, 0, 12, 6)     x3, x2

■ TABLE 5.6 Basic infeasible solutions for the Wyndor Glass Co.
problem

Corner-Point Infeasible Solution    Defining Equations              Basic Infeasible Solution    Nonbasic Variables
(0, 9)                              x1 = 0, 3x1 + 2x2 = 18          (0, 9, 4, −6, 0)             x1, x5
(4, 6)                              2x2 = 12, x1 = 4                (4, 6, 0, 0, −6)             x4, x3
(6, 0)                              3x1 + 2x2 = 18, x2 = 0          (6, 0, −2, 12, 0)            x5, x2

Equivalently, in our new terminology, the simplex method reaches an adjacent BF solution from the current one by (1) deleting one variable (the entering basic variable) from the set of n nonbasic variables defining the current solution, (2) moving away from the current solution by increasing this one variable from zero (and adjusting the other basic variables to still satisfy the system of equations) while keeping the remaining n − 1 nonbasic variables at zero, and (3) stopping when the first of the basic variables (the leaving basic variable) reaches a value of zero (its constraint boundary). With either interpretation, the choice among the n alternatives in step 1 is made by selecting the one that would give the best rate of improvement in Z (per unit increase in the entering basic variable) during step 2. Table 5.7 illustrates the close correspondence between these geometric and algebraic interpretations of the simplex method. Using the results already presented in Secs. 4.3 and 4.4, the fourth column summarizes the sequence of BF solutions found for the Wyndor Glass Co. problem, and the second column shows the corresponding CPF solutions.

■ TABLE 5.7 Sequence of solutions obtained by the simplex method for the Wyndor Glass Co. problem

Iteration    CPF Solution    Defining Equations            BF Solution          Nonbasic Variables    Functional Constraints in Augmented Form
0            (0, 0)          x1 = 0, x2 = 0                (0, 0, 4, 12, 18)    x1 = 0, x2 = 0        x1 + x3 = 4; 2x2 + x4 = 12; 3x1 + 2x2 + x5 = 18
1            (0, 6)          x1 = 0, 2x2 = 12              (0, 6, 4, 0, 6)      x1 = 0, x4 = 0        x1 + x3 = 4; 2x2 + x4 = 12; 3x1 + 2x2 + x5 = 18
2            (2, 6)          2x2 = 12, 3x1 + 2x2 = 18      (2, 6, 2, 0, 0)      x4 = 0, x5 = 0        x1 + x3 = 4; 2x2 + x4 = 12; 3x1 + 2x2 + x5 = 18
In the third column, note how each iteration results in deleting one constraint boundary (defining equation) and substituting a new one to obtain the new CPF solution. Similarly, note in the fifth column how each iteration results in deleting one nonbasic variable and substituting a new one to obtain the new BF solution. Furthermore, the nonbasic variables being deleted and added are the indicating variables for the defining equations being deleted and added in the third column. The last column displays the initial system of equations [excluding Eq. (0)] for the augmented form of the problem, with the current basic variables indicated. In each case, note how setting the nonbasic variables equal to zero and then solving this system of equations for the basic variables must yield the same solution for (x1, x2) as the corresponding pair of defining equations in the third column.

The Solved Examples section of the book's website provides another example of developing the type of information given in Table 5.7 for a minimization problem.

■ 5.2 THE SIMPLEX METHOD IN MATRIX FORM

Chapter 4 describes the simplex method in both an algebraic form and a tabular form. Further insight into the theory and power of the simplex method can be obtained by examining its matrix form. We begin by introducing matrix notation to represent linear programming problems. (See Appendix 4 for a review of matrices.)

To help you distinguish between matrices, vectors, and scalars, we consistently use BOLDFACE CAPITAL letters to represent matrices, boldface lowercase letters to represent vectors, and italicized letters in ordinary print to represent scalars. We also use a boldface zero (0) to denote a null vector (a vector whose elements all are zero) in either column or row form (which one should be clear from the context), whereas a zero in ordinary print (0) continues to represent the number zero.
Using matrices, our standard form for the general linear programming model given in Sec. 3.2 becomes

Maximize Z = cx,

subject to

Ax ≤ b and x ≥ 0,

where c is the row vector

c = [c1, c2, . . . , cn],

x, b, and 0 are the column vectors such that

    [ x1 ]        [ b1 ]        [ 0 ]
x = [ x2 ] ,  b = [ b2 ] ,  0 = [ 0 ] ,
    [ ⋮  ]        [ ⋮  ]        [ ⋮ ]
    [ xn ]        [ bm ]        [ 0 ]

and A is the matrix

    [ a11  a12  . . .  a1n ]
A = [ a21  a22  . . .  a2n ] .
    [ . . . . . . . . . .  ]
    [ am1  am2  . . .  amn ]

To obtain the augmented form of the problem, introduce the column vector of slack variables

     [ x(n+1) ]
xs = [ x(n+2) ]
     [   ⋮    ]
     [ x(n+m) ]

so that the constraints become

[A, I] [x; xs] = b  and  [x; xs] ≥ 0

(writing [x; xs] for the column vector that stacks x above xs), where I is the m × m identity matrix, and the null vector 0 now has n + m elements. (We comment at the end of the section about how to deal with problems that are not in our standard form.)

Solving for a Basic Feasible Solution

Recall that the general approach of the simplex method is to obtain a sequence of improving BF solutions until an optimal solution is reached. One of the key features of the matrix form of the simplex method involves the way in which it solves for each new BF solution after identifying its basic and nonbasic variables. Given these variables, the resulting basic solution is the solution of the m equations

[A, I] [x; xs] = b,

in which the n nonbasic variables from the n + m elements of [x; xs] are set equal to zero. Eliminating these n variables by equating them to zero leaves a set of m equations in m unknowns (the basic variables).
This set of equations can be denoted by

B x_B = b,

where the vector of basic variables

      [ x_B1 ]
x_B = [ x_B2 ]
      [  ⋮   ]
      [ x_Bm ]

is obtained by eliminating the nonbasic variables from [x; xs], and the basis matrix

    [ B11  B12  . . .  B1m ]
B = [ B21  B22  . . .  B2m ]
    [ . . . . . . . . . .  ]
    [ Bm1  Bm2  . . .  Bmm ]

is obtained by eliminating the columns corresponding to coefficients of nonbasic variables from [A, I]. (In addition, the elements of x_B and, therefore, the columns of B may be placed in a different order when the simplex method is executed.)

The simplex method introduces only basic variables such that B is nonsingular, so that B⁻¹ always will exist. Therefore, to solve B x_B = b, both sides are premultiplied by B⁻¹:

B⁻¹ B x_B = B⁻¹ b.

Since B⁻¹ B = I, the desired solution for the basic variables is

x_B = B⁻¹ b.

Let c_B be the vector whose elements are the objective function coefficients (including zeros for slack variables) for the corresponding elements of x_B. The value of the objective function for this basic solution is then

Z = c_B x_B = c_B B⁻¹ b.

Example. To illustrate this method of solving for a BF solution, consider again the Wyndor Glass Co. problem presented in Sec. 3.1 and solved by the original simplex method in Table 4.8. In this case,

c = [3, 5],

         [ 1  0  1  0  0 ]        [  4 ]
[A, I] = [ 0  2  0  1  0 ] ,  b = [ 12 ] ,
         [ 3  2  0  0  1 ]        [ 18 ]

    [ x1 ]         [ x3 ]
x = [ x2 ] ,  xs = [ x4 ] .
                   [ x5 ]

Referring to Table 4.8, we see that the sequence of BF solutions obtained by the simplex method is the following:

Iteration 0

      [ x3 ]                          [ 1  0  0 ]
x_B = [ x4 ] ,  c_B = [0, 0, 0],  B = [ 0  1  0 ] = B⁻¹,
      [ x5 ]                          [ 0  0  1 ]

so

[ x3 ]   [ 1  0  0 ] [  4 ]   [  4 ]
[ x4 ] = [ 0  1  0 ] [ 12 ] = [ 12 ] ,   Z = [0, 0, 0] [4; 12; 18] = 0.
[ x5 ]   [ 0  0  1 ] [ 18 ]   [ 18 ]

Iteration 1

      [ x3 ]                          [ 1  0  0 ]         [ 1   0   0 ]
x_B = [ x2 ] ,  c_B = [0, 5, 0],  B = [ 0  2  0 ] , B⁻¹ = [ 0  1/2  0 ] ,
      [ x5 ]                          [ 0  2  1 ]         [ 0  −1   1 ]

so

[ x3 ]   [ 1   0   0 ] [  4 ]   [ 4 ]
[ x2 ] = [ 0  1/2  0 ] [ 12 ] = [ 6 ] ,   Z = [0, 5, 0] [4; 6; 6] = 30.
[ x5 ]   [ 0  −1   1 ] [ 18 ]   [ 6 ]

Iteration 2

      [ x3 ]                          [ 1  0  1 ]         [ 1   1/3  −1/3 ]
x_B = [ x2 ] ,  c_B = [0, 5, 3],  B = [ 0  2  0 ] , B⁻¹ = [ 0   1/2    0  ] ,
      [ x1 ]                          [ 0  2  3 ]         [ 0  −1/3   1/3 ]

so

[ x3 ]   [ 1   1/3  −1/3 ] [  4 ]   [ 2 ]
[ x2 ] = [ 0   1/2    0  ] [ 12 ] = [ 6 ] ,   Z = [0, 5, 3] [2; 6; 2] = 36.
[ x1 ]   [ 0  −1/3   1/3 ] [ 18 ]   [ 2 ]

Matrix Form of the Current Set of Equations

The last preliminary before we summarize the matrix form of the simplex method is to show the matrix form of the set of equations appearing in the simplex tableau for any iteration of the original simplex method. For the original set of equations, the matrix form is

[ 1  −c  0 ] [ Z  ]   [ 0 ]
[ 0   A  I ] [ x  ] = [ b ] .
             [ xs ]

This set of equations also is exhibited in the first simplex tableau of Table 5.8. The algebraic operations performed by the simplex method (multiply an equation by a constant and add a multiple of one equation to another equation) are expressed in matrix form by premultiplying both sides of the original set of equations by the appropriate matrix. This matrix would have the same elements as the identity matrix, except that each multiple for an algebraic operation would go into the spot needed to have the matrix multiplication perform this operation. Even after a series of algebraic operations over several iterations, we still can deduce what this matrix must be (symbolically) for the entire series by using what we already know about the right-hand sides of the new set of equations. In particular, after any iteration, x_B = B⁻¹ b and Z = c_B B⁻¹ b, so the right-hand sides of the new set of equations have become

[ Z   ]   [ c_B B⁻¹ b ]
[ x_B ] = [ B⁻¹ b     ] .
Because we perform the same series of algebraic operations on both sides of the original set of equations, we use this same matrix that premultiplies the original right-hand side to premultiply the original left-hand side. Consequently, since

[ 1  c_B B⁻¹ ] [ 1  −c  0 ]   [ 1  c_B B⁻¹A − c  c_B B⁻¹ ]
[ 0  B⁻¹     ] [ 0   A  I ] = [ 0  B⁻¹A          B⁻¹     ] ,

the desired matrix form of the set of equations after any iteration is

[ 1  c_B B⁻¹A − c  c_B B⁻¹ ] [ Z  ]   [ c_B B⁻¹ b ]
[ 0  B⁻¹A          B⁻¹     ] [ x  ] = [ B⁻¹ b     ] .
                             [ xs ]

■ TABLE 5.8 Initial and later simplex tableaux in matrix form

                                          Coefficient of:
Iteration  Basic      Eq.             Z   Original       Slack      Right
           Variable                       Variables      Variables  Side
0          Z          (0)             1   −c             0          0
           x_B        (1, 2, ..., m)  0   A              I          b
Any        Z          (0)             1   c_B B⁻¹A − c   c_B B⁻¹    c_B B⁻¹b
           x_B        (1, 2, ..., m)  0   B⁻¹A           B⁻¹        B⁻¹b

The second simplex tableau of Table 5.8 also exhibits this same set of equations.

Example. To illustrate this matrix form for the current set of equations, we will show how it yields the final set of equations resulting from iteration 2 for the Wyndor Glass Co. problem. Using the B⁻¹ and c_B given for iteration 2 at the end of the preceding subsection, we have

        [ 1   1/3  −1/3 ] [ 1  0 ]   [ 0  0 ]
B⁻¹A =  [ 0   1/2    0  ] [ 0  2 ] = [ 0  1 ] ,
        [ 0  −1/3   1/3 ] [ 3  2 ]   [ 1  0 ]

                      [ 1   1/3  −1/3 ]
c_B B⁻¹ = [0, 5, 3]   [ 0   1/2    0  ] = [0, 3/2, 1],
                      [ 0  −1/3   1/3 ]

c_B B⁻¹A − c = [3, 5] − [3, 5] = [0, 0].

Also, by using the values of x_B = B⁻¹ b and Z = c_B B⁻¹ b calculated at the end of the preceding subsection, these results give the following set of equations:

[ 1  0  0  0   3/2   1   ] [ Z  ]   [ 36 ]
[ 0  0  0  1   1/3  −1/3 ] [ x1 ]   [  2 ]
[ 0  0  1  0   1/2   0   ] [ x2 ] = [  6 ] ,
[ 0  1  0  0  −1/3   1/3 ] [ x3 ]   [  2 ]
                           [ x4 ]
                           [ x5 ]

as shown in the final simplex tableau in Table 4.8. The matrix form of the set of equations after any iteration (as shown in the box just before the above example) provides the key to the execution of the matrix form of the simplex method.
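These matrix expressions are straightforward to reproduce numerically. The following sketch (assuming NumPy is available; it is our illustration, not from the text) rebuilds the iteration-2 quantities for the Wyndor problem:

```python
import numpy as np

A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])
c = np.array([3., 5.])
AI = np.hstack([A, np.eye(3)])     # [A, I]

# Iteration 2 basis: the columns of [A, I] for (x3, x2, x1)
B = AI[:, [2, 1, 0]]
cB = np.array([0., 5., 3.])        # objective coefficients of (x3, x2, x1)
Binv = np.linalg.inv(B)

xB = Binv @ b                      # B^-1 b: current values of (x3, x2, x1)
Z = cB @ Binv @ b                  # cB B^-1 b: objective value
y = cB @ Binv                      # cB B^-1: row-0 slack coefficients
row0_orig = cB @ Binv @ A - c      # cB B^-1 A - c: row-0 original coefficients

print(np.round(Binv, 3))           # numerically (1, 1/3, -1/3; 0, 1/2, 0; 0, -1/3, 1/3)
print(np.round(xB, 3), round(Z, 3), np.round(y, 3), np.round(row0_orig, 3))
```

The printed values match the iteration-2 results above: x_B = (2, 6, 2), Z = 36, c_B B⁻¹ = (0, 3/2, 1), and c_B B⁻¹A − c = (0, 0).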
The matrix expressions shown in these equations (or in the bottom part of Table 5.8) provide a direct way of calculating all the numbers that would appear in the current set of equations (for the algebraic form of the simplex method) or in the current simplex tableau (for the tableau form of the simplex method). The three forms of the simplex method make exactly the same decisions (entering basic variable, leaving basic variable, etc.) step after step and iteration after iteration. The only difference between these forms is in the methods used to calculate the numbers needed to make those decisions. As summarized below, the matrix form provides a convenient and compact way of calculating these numbers without carrying along a series of systems of equations or a series of simplex tableaux.

Summary of the Matrix Form of the Simplex Method

1. Initialization: Introduce slack variables, etc., to obtain the initial basic variables, as described in Chap. 4. This yields the initial x_B, c_B, B, and B⁻¹ (where B = I = B⁻¹ under our current assumption that the problem being solved fits our standard form). Then go to the optimality test.

2. Iteration:
Step 1. Determine the entering basic variable: Refer to the coefficients of the nonbasic variables in Eq. (0) that were obtained in the preceding application of the optimality test below. Then (just as described in Sec. 4.4), select the variable with the negative coefficient having the largest absolute value as the entering basic variable.
Step 2. Determine the leaving basic variable: Use the matrix expressions B⁻¹A (for the coefficients of the original variables) and B⁻¹ (for the coefficients of the slack variables) to calculate the coefficients of the entering basic variable in every equation except Eq. (0). Also use the preceding calculation of x_B = B⁻¹ b (see Step 3) to identify the right-hand sides of these equations. Then (just as described in Sec.
4.4), use the minimum ratio test to select the leaving basic variable.
Step 3. Determine the new BF solution: Update the basis matrix B by replacing the column for the leaving basic variable by the corresponding column in [A, I] for the entering basic variable. Also make the corresponding replacements in x_B and c_B. Then derive B⁻¹ (as illustrated in Appendix 4) and set x_B = B⁻¹ b.

3. Optimality test: Use the matrix expressions c_B B⁻¹A − c (for the coefficients of the original variables) and c_B B⁻¹ (for the coefficients of the slack variables) to calculate the coefficients of the nonbasic variables in Eq. (0). The current BF solution is optimal if and only if all of these coefficients are nonnegative. If it is optimal, stop. Otherwise, go to an iteration to obtain the next BF solution.

Example. We already have performed some of the above matrix calculations for the Wyndor Glass Co. problem earlier in this section. We now will put all the pieces together in applying the full simplex method in matrix form to this problem. As a starting point, recall that

c = [3, 5],

         [ 1  0  1  0  0 ]        [  4 ]
[A, I] = [ 0  2  0  1  0 ] ,  b = [ 12 ] .
         [ 3  2  0  0  1 ]        [ 18 ]

Initialization: The initial basic variables are the slack variables, so (as already noted for Iteration 0 for the first example in this section)

      [ x3 ]   [  4 ]                          [ 1  0  0 ]
x_B = [ x4 ] = [ 12 ] ,  c_B = [0, 0, 0],  B = [ 0  1  0 ] = B⁻¹.
      [ x5 ]   [ 18 ]                          [ 0  0  1 ]

Optimality test: The coefficients of the nonbasic variables (x1 and x2) are

c_B B⁻¹A − c = [0, 0] − [3, 5] = [−3, −5],

so these negative coefficients indicate that the initial BF solution (x_B = b) is not optimal.

Iteration 1: Since −5 is larger in absolute value than −3, the entering basic variable is x2. Performing only the relevant portion of a matrix multiplication, the coefficients of x2 in every equation except Eq.
(0) are

       [ —  0 ]
B⁻¹A = [ —  2 ] ,
       [ —  2 ]

and the right-hand sides of these equations are given by the value of x_B shown in the initialization step. Therefore, the minimum ratio test indicates that the leaving basic variable is x4 since 12/2 < 18/2. Iteration 1 for the first example in this section already shows the resulting updated B, x_B, c_B, and B⁻¹, namely,

    [ 1  0  0 ]         [ 1   0   0 ]        [ x3 ]          [ 4 ]
B = [ 0  2  0 ] , B⁻¹ = [ 0  1/2  0 ] , x_B =[ x2 ] = B⁻¹b = [ 6 ] ,  c_B = [0, 5, 0],
    [ 0  2  1 ]         [ 0  −1   1 ]        [ x5 ]          [ 6 ]

so x2 has replaced x4 in x_B, in providing an element of c_B from [3, 5, 0, 0, 0], and in providing a column from [A, I] in B.

Optimality test: The nonbasic variables now are x1 and x4, and their coefficients in Eq. (0) are

                              [ 1   0   0 ] [ 1  0 ]
For x1:  c_B B⁻¹A − c = [0, 5, 0] [ 0  1/2  0 ] [ 0  2 ] − [3, 5] = [−3, —]
                              [ 0  −1   1 ] [ 3  2 ]

                          [ 1   0   0 ]
For x4:  c_B B⁻¹ = [0, 5, 0] [ 0  1/2  0 ] = [—, 5/2, —]
                          [ 0  −1   1 ]

Since x1 has a negative coefficient, the current BF solution is not optimal, so we go on to the next iteration.

Iteration 2: Since x1 is the one nonbasic variable with a negative coefficient in Eq. (0), it now becomes the entering basic variable. Its coefficients in the other equations are

       [ 1   0   0 ] [ 1  — ]   [ 1  — ]
B⁻¹A = [ 0  1/2  0 ] [ 0  — ] = [ 0  — ] .
       [ 0  −1   1 ] [ 3  — ]   [ 3  — ]

Also using x_B obtained at the end of the preceding iteration, the minimum ratio test indicates that x5 is the leaving basic variable since 6/3 < 4/1. Iteration 2 for the first example in this section already shows the resulting updated B, B⁻¹, x_B, and c_B, namely,

    [ 1  0  1 ]         [ 1   1/3  −1/3 ]         [ x3 ]          [ 2 ]
B = [ 0  2  0 ] , B⁻¹ = [ 0   1/2    0  ] , x_B = [ x2 ] = B⁻¹b = [ 6 ] ,  c_B = [0, 5, 3],
    [ 0  2  3 ]         [ 0  −1/3   1/3 ]         [ x1 ]          [ 2 ]

so x1 has replaced x5 in x_B, in providing an element of c_B from [3, 5, 0, 0, 0], and in providing a column from [A, I] in B.

Optimality test: The nonbasic variables now are x4 and x5.
Using the calculations already shown for the second example in this section, their coefficients in Eq. (0) are 3/2 and 1, respectively. Since neither of these coefficients is negative, the current BF solution (x1 = 2, x2 = 6, x3 = 2, x4 = 0, x5 = 0) is optimal and the procedure terminates.

Final Observations

The above example illustrates that the matrix form of the simplex method uses just a few matrix expressions to perform all the needed calculations. These matrix expressions are summarized in the bottom part of Table 5.8. A fundamental insight from this table is that it is only necessary to know the current B⁻¹ and c_B B⁻¹, which appear in the slack variables portion of the current simplex tableau, in order to calculate all the other numbers in this tableau in terms of the original parameters (A, b, and c) of the model being solved. When dealing with the final simplex tableau, this insight proves to be a particularly valuable one, as will be described in the next section.

A drawback of the matrix form of the simplex method as it has been outlined in this section is that it is necessary to derive B⁻¹, the inverse of the updated basis matrix, at the end of each iteration. Although routines are available for inverting small square (nonsingular) matrices (and this can even be done readily by hand for 2 × 2 or perhaps 3 × 3 matrices), the time required to invert matrices grows very rapidly with the size of the matrices. Fortunately, there is a much more efficient procedure available for updating B⁻¹ from one iteration to the next rather than inverting the new basis matrix from scratch. When this procedure is incorporated into the matrix form of the simplex method, this improved version of the matrix form is conventionally called the revised simplex method. This is the version of the simplex method (along with further improvements) that normally is used in commercial software for linear programming. We will describe the procedure for updating B⁻¹ in Sec. 5.4.
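The summary above can be sketched as a short routine. This minimal illustration (assuming NumPy is available; the helper name simplex_matrix_form is ours, not the book's) derives B⁻¹ by inversion at each iteration, exactly as in this section, and assumes the standard form with b ≥ 0 and a bounded optimum:

```python
import numpy as np

def simplex_matrix_form(A, b, c, tol=1e-9):
    """Matrix form of the simplex method for: maximize cx, s.t. Ax <= b, x >= 0,
    assuming b >= 0 so the slack variables give an initial BF solution."""
    m, n = A.shape
    AI = np.hstack([A, np.eye(m)])            # [A, I]
    cfull = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))             # initialization: slacks basic, B = I
    while True:
        Binv = np.linalg.inv(AI[:, basis])    # derive B^-1 (Sec. 5.4 updates instead)
        xB = Binv @ b                         # right-hand sides x_B = B^-1 b
        row0 = cfull[basis] @ Binv @ AI - cfull   # Eq. (0) coefficients
        k = int(np.argmin(row0))              # most negative -> entering variable
        if row0[k] >= -tol:                   # optimality test: all nonnegative
            x = np.zeros(n + m)
            x[basis] = xB
            return x[:n], float(cfull[basis] @ xB)
        col = Binv @ AI[:, k]                 # entering column in Eqs. (1)-(m)
        ratios = np.where(col > tol, xB / np.where(col > tol, col, 1.0), np.inf)
        r = int(np.argmin(ratios))            # minimum ratio test -> leaving variable
        if not np.isfinite(ratios[r]):
            raise ValueError("objective unbounded")
        basis[r] = k                          # update the basis

# Wyndor Glass Co. data
A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])
c = np.array([3., 5.])
x_opt, Z_opt = simplex_matrix_form(A, b, c)
print(x_opt, Z_opt)    # optimal (x1, x2) = (2, 6) with Z = 36
```

Running it on the Wyndor data reproduces the iteration sequence of the example: x2 enters and x4 leaves, then x1 enters and x5 leaves.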
The Solved Examples section of the book's website gives another example of applying the matrix form of the simplex method. This example also incorporates the efficient procedure for updating B⁻¹ at each iteration instead of inverting the updated basis matrix from scratch, so the full-fledged revised simplex method is applied.

Finally, we should remind you that the description of the matrix form of the simplex method throughout this section has assumed that the problem being solved fits our standard form for the general linear programming model given in Sec. 3.2. However, the modifications for other forms of the model are relatively straightforward. The initialization step would be conducted just as was described in Sec. 4.6 for either the algebraic form or tabular form of the simplex method. When this step involves introducing artificial variables to obtain an initial BF solution (and thereby to obtain an identity matrix as the initial basis matrix), these variables are included among the m elements of xs.

■ 5.3 A FUNDAMENTAL INSIGHT

We shall now focus on a property of the simplex method (in any form) that has been revealed by the matrix form of the simplex method in Sec. 5.2. This fundamental insight provides the key to both duality theory (Chap. 6) and sensitivity analysis (Secs. 7.1–7.3), two very important parts of linear programming. We shall first describe this insight when the problem being solved fits our standard form for linear programming models (Sec. 3.2) and then discuss how to adapt to other forms later. The insight is based directly on Table 5.8 in Sec. 5.2, as described below.

The insight provided by Table 5.8: Using matrix notation, Table 5.8 gives the rows of the initial simplex tableau as [−c, 0, 0] for row 0 and [A, I, b] for the rest of the rows.
After any iteration, the coefficients of the slack variables in the current simplex tableau become c_B B⁻¹ for row 0 and B⁻¹ for the rest of the rows, where B is the current basis matrix. Examining the rest of the current simplex tableau, the insight is that these coefficients of the slack variables immediately reveal how the entire rows of the current simplex tableau have been obtained from the rows in the initial simplex tableau. In particular, after any iteration,

Row 0 = [−c, 0, 0] + c_B B⁻¹ [A, I, b]
Rows 1 to m = B⁻¹ [A, I, b]

We shall describe the applications of this insight at the end of this section. These applications are particularly important only when we are dealing with the final simplex tableau after the optimal solution has been obtained. Therefore, we will focus hereafter on discussing the "fundamental insight" just in terms of the optimal solution.

To distinguish between the matrix notation used after any iteration (B⁻¹, etc.) and the corresponding notation after just the last iteration, we now introduce the following notation for the latter case. When B is the basis matrix for the optimal solution found by the simplex method, let

S* = B⁻¹ = coefficients of the slack variables in rows 1 to m
A* = B⁻¹A = coefficients of the original variables in rows 1 to m
y* = c_B B⁻¹ = coefficients of the slack variables in row 0
z* = c_B B⁻¹A, so z* − c = coefficients of the original variables in row 0
Z* = c_B B⁻¹b = optimal value of the objective function
b* = B⁻¹b = optimal right-hand sides of rows 1 to m

The bottom half of Table 5.9 shows where each of these symbols fits in the final simplex tableau. To illustrate all the notation, the top half of Table 5.9 includes the initial tableau for the Wyndor Glass Co. problem and the bottom half includes the final tableau for this problem. Referring to this again, suppose now that you are given the initial tableau, t and T, and just y* and S* from the final tableau.
How can this information alone be used to calculate the rest of the final tableau? The answer is provided by the fundamental insight summarized below.

Fundamental Insight

(1) t* = t + y*T = [y*A − c, y*, y*b].
(2) T* = S*T = [S*A, S*, S*b].

■ TABLE 5.9 General notation for initial and final simplex tableaux in matrix form, illustrated by the Wyndor Glass Co. problem

Initial Tableau

Row 0: t = [−3, −5 | 0, 0, 0 | 0] = [−c | 0 | 0].

Other rows:

    [ 1  0 | 1  0  0 |  4 ]
T = [ 0  2 | 0  1  0 | 12 ] = [A | I | b].
    [ 3  2 | 0  0  1 | 18 ]

Combined:

[ t ]   [ −c  0  0 ]
[ T ] = [  A  I  b ] .

Final Tableau

Row 0: t* = [0, 0 | 0, 3/2, 1 | 36] = [z* − c | y* | Z*].

Other rows:

     [ 0  0 | 1   1/3  −1/3 | 2 ]
T* = [ 0  1 | 0   1/2    0  | 6 ] = [A* | S* | b*].
     [ 1  0 | 0  −1/3   1/3 | 2 ]

Combined:

[ t* ]   [ z* − c  y*  Z* ]
[ T* ] = [ A*      S*  b* ] .

Thus, by knowing the parameters of the model in the initial tableau (c, A, and b) and only the coefficients of the slack variables in the final tableau (y* and S*), these equations enable calculating all the other numbers in the final tableau.

Now let us summarize the mathematical logic behind the two equations for the fundamental insight. To derive Eq. (2), recall that the entire sequence of algebraic operations performed by the simplex method (excluding those involving row 0) is equivalent to premultiplying T by some matrix, call it M. Therefore, T* = MT, but now we need to identify M. By writing out the component parts of T and T*, this equation becomes

[A*, S*, b*] = M [A, I, b] = [MA, M, Mb].

Because the middle (or any other) component of these equal matrices must be the same, it follows that M = S*, so Eq. (2) is a valid equation. Equation (1) is derived in a similar fashion by noting that the entire sequence of algebraic operations involving row 0 amounts to adding some linear combination of the rows in T to t, which is equivalent to adding to t some vector times T. Denoting this vector by v, we thereby have t* = t + vT, but v still needs to be identified.
Writing out the component parts of t and t* yields

[z* − c, y*, Z*] = [−c, 0, 0] + v [A, I, b] = [−c + vA, v, vb].

Equating the middle component of these equal vectors gives v = y*, which validates Eq. (1).

Adapting to Other Model Forms

Thus far, the fundamental insight has been described under the assumption that the original model is in our standard form, described in Sec. 3.2. However, the above mathematical logic now reveals just what adjustments are needed for other forms of the original model. The key is the identity matrix I in the initial tableau, which turns into S* in the final tableau. If some artificial variables must be introduced into the initial tableau to serve as initial basic variables, then it is the set of columns (appropriately ordered) for all the initial basic variables (both slack and artificial) that forms I in this tableau. (The columns for any surplus variables are extraneous.) The same columns in the final tableau provide S* for the T* = S*T equation and y* for the t* = t + y*T equation. If M's were introduced into the preliminary row 0 as coefficients for artificial variables, then the t for the t* = t + y*T equation is the row 0 for the initial tableau after these nonzero coefficients for basic variables are algebraically eliminated. (Alternatively, the preliminary row 0 can be used for t, but then these M's must be subtracted from the final row 0 to give y*.) (See Prob. 5.3-9.)

Applications

The fundamental insight has a variety of important applications in linear programming. One of these applications involves the revised simplex method, which is based mainly on the matrix form of the simplex method presented in Sec. 5.2. As described in that section (see Table 5.8), this method uses B⁻¹ and the initial tableau to calculate all the relevant numbers in the current tableau for every iteration. It goes even further than the fundamental insight by using B⁻¹ to calculate y* itself as y* = c_B B⁻¹.
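As a numerical check, the two formulas of the fundamental insight reconstruct the entire final Wyndor tableau from the initial data plus y* and S* alone. A sketch (assuming NumPy is available, with y* and S* copied from Table 5.9):

```python
import numpy as np

A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])
c = np.array([3., 5.])

t = np.concatenate([-c, np.zeros(3), [0.]])   # initial row 0: [-c, 0, 0]
T = np.hstack([A, np.eye(3), b[:, None]])     # initial rows 1 to m: [A, I, b]

y_star = np.array([0., 1.5, 1.])              # slack coefficients in final row 0
S_star = np.array([[1.,  1/3, -1/3],
                   [0.,  1/2,  0. ],
                   [0., -1/3,  1/3]])         # slack coefficients in final rows 1-3

t_star = t + y_star @ T                       # Eq. (1): t* = t + y*T
T_star = S_star @ T                           # Eq. (2): T* = S*T

print(np.round(t_star, 3))   # [z* - c | y* | Z*], numerically (0, 0, 0, 3/2, 1, 36)
print(np.round(T_star, 3))   # [A* | S* | b*], with b* = (2, 6, 2)
```

This matches the bottom half of Table 5.9 exactly, as the fundamental insight promises.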
Another application involves the interpretation of the shadow prices (y1*, y2*, . . . , ym*) described in Sec. 4.7. The fundamental insight reveals that Z* (the value of Z for the optimal solution) is

Z* = y*b = Σ (i = 1 to m) yi* bi,

so, e.g.,

Z* = 0·b1 + (3/2) b2 + b3

for the Wyndor Glass Co. problem. This equation immediately yields the interpretation for the yi* values given in Sec. 4.7.

Another group of extremely important applications involves various postoptimality tasks (reoptimization technique, sensitivity analysis, parametric linear programming — described in Sec. 4.7) that investigate the effect of making one or more changes in the original model. In particular, suppose that the simplex method already has been applied to obtain an optimal solution (as well as y* and S*) for the original model, and then these changes are made. If exactly the same sequence of algebraic operations were to be applied to the revised initial tableau, what would be the resulting changes in the final tableau? Because y* and S* don't change, the fundamental insight reveals the answer immediately.

One particularly common type of postoptimality analysis involves investigating possible changes in b. The elements of b often represent managerial decisions about the amounts of various resources being made available to the activities under consideration in the linear programming model. Therefore, after the optimal solution has been obtained by the simplex method, management often wants to explore what would happen if some of these managerial decisions on resource allocations were to be changed in various ways. By using the formulas

x_B = S*b,
Z* = y*b,

applied to the new b, you can see exactly how the optimal BF solution changes (or whether it becomes infeasible because of negative variables), as well as how the optimal value of the objective function changes, as a function of b.
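For instance, these two formulas can be swept over a whole range of trial values of b2. This sketch (assuming NumPy is available; the scan itself is our illustration, not from the text) flags when the current basis would become infeasible:

```python
import numpy as np

S_star = np.array([[1.,  1/3, -1/3],
                   [0.,  1/2,  0. ],
                   [0., -1/3,  1/3]])   # final slack coefficients (Table 5.9)
y_star = np.array([0., 1.5, 1.])        # shadow prices (0, 3/2, 1)

for b2 in (6, 12, 13, 18, 19):
    b_new = np.array([4., b2, 18.])
    xB = S_star @ b_new                 # new values of (x3, x2, x1)
    Z = y_star @ b_new                  # new objective value if still feasible
    ok = (xB >= -1e-9).all()
    print(f"b2 = {b2}: (x3, x2, x1) = {np.round(xB, 3)}, Z* = {Z}, "
          f"{'feasible' if ok else 'reoptimization needed'}")
```

For b2 = 13 this reproduces (x3, x2, x1) = (7/3, 13/2, 5/3) and Z* = 37.5, while b2 = 19 drives x1 negative, so the current basis is no longer feasible there.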
You do not have to reapply the simplex method over and over for each new b, because the coefficients of the slack variables tell all! For example, consider the change from b2 = 12 to b2 = 13 as illustrated in Fig. 4.8 for the Wyndor Glass Co. problem. It is not necessary to solve for the new optimal solution (x1, x2) = (5/3, 13/2) because the values of the basic variables in the final tableau (b*) are immediately revealed by the fundamental insight:

[ x3 ]        [ 1   1/3  −1/3 ] [  4 ]   [ 7/3  ]
[ x2 ] = b* = [ 0   1/2    0  ] [ 13 ] = [ 13/2 ] .
[ x1 ]        [ 0  −1/3   1/3 ] [ 18 ]   [ 5/3  ]

There is an even easier way to make this calculation. Since the only change is in the second component of b (Δb2 = 1), which gets premultiplied by only the second column of S*, the change in b* can be calculated as simply

      [ 1/3  ]        [ 1/3  ]
Δb* = [ 1/2  ] Δb2 =  [ 1/2  ] ,
      [ −1/3 ]        [ −1/3 ]

so the original values of the basic variables in the final tableau (x3 = 2, x2 = 6, x1 = 2) now become

[ x3 ]   [ 2 ]   [ 1/3  ]   [ 7/3  ]
[ x2 ] = [ 6 ] + [ 1/2  ] = [ 13/2 ] .
[ x1 ]   [ 2 ]   [ −1/3 ]   [ 5/3  ]

(If any of these new values were negative, and thus infeasible, then the reoptimization technique described in Sec. 4.7 would be applied, starting from this revised final tableau.) Applying incremental analysis to the preceding equation for Z* also immediately yields

ΔZ* = (3/2) Δb2 = 3/2.

The fundamental insight can be applied to investigating other kinds of changes in the original model in a very similar fashion; it is the crux of the sensitivity analysis procedure described in Secs. 7.1–7.3. You also will see in the next chapter that the fundamental insight plays a key role in the very useful duality theory for linear programming.

■ 5.4 THE REVISED SIMPLEX METHOD

The revised simplex method is based directly on the matrix form of the simplex method presented in Sec. 5.2.
However, as mentioned at the end of that section, the difference is that the revised simplex method incorporates a key improvement into the matrix form. Instead of needing to invert the new basis matrix B after each iteration, which is computationally expensive for large matrices, the revised simplex method uses a much more efficient procedure that simply updates B⁻¹ from one iteration to the next. We focus on describing and illustrating this procedure in this section.

This procedure is based on two properties of the simplex method. One is described in the insight provided by Table 5.8 at the beginning of Sec. 5.3. In particular, after any iteration, the coefficients of the slack variables for all the rows except row 0 in the current simplex tableau become B⁻¹, where B is the current basis matrix. This property always holds as long as the problem being solved fits our standard form described in Sec. 3.2 for linear programming models. (For nonstandard forms where artificial variables need to be introduced, the only difference is that it is the set of appropriately ordered columns that form an identity matrix I below row 0 in the initial simplex tableau that then provides B⁻¹ in any subsequent tableau.)

The other relevant property of the simplex method is that step 3 of an iteration changes the numbers in the simplex tableau, including the numbers giving B⁻¹, only by performing the elementary algebraic operations (such as dividing an equation by a constant or subtracting a multiple of some equation from another equation) that are needed to restore proper form from Gaussian elimination. Therefore, all that is needed to update B⁻¹ from one iteration to the next is to obtain the new B⁻¹ (denote it by B⁻¹_new) from the old B⁻¹ (denote it by B⁻¹_old) by performing the usual algebraic operations on B⁻¹_old that the algebraic form of the simplex method would perform on the entire system of equations [excluding Eq. (0)] for this iteration.
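In code, "applying the pivot operations to the B⁻¹ portion alone" is a few lines. A minimal sketch (assuming NumPy is available; indices are 0-based, and the Wyndor numbers come from the Sec. 5.2 example):

```python
import numpy as np

def update_Binv(Binv_old, col, r):
    """Update B^-1 given col = B^-1_old A_k (entering column) and pivot row r."""
    Binv_new = Binv_old.copy()
    Binv_new[r] = Binv_old[r] / col[r]       # divide pivot Eq. (r) by a'_rk
    for i in range(len(col)):
        if i != r:                           # subtract (a'_ik / a'_rk) x Eq. (r)
            Binv_new[i] = Binv_old[i] - (col[i] / col[r]) * Binv_old[r]
    return Binv_new

A = np.array([[1., 0.], [0., 2.], [3., 2.]])
Binv = np.eye(3)                             # initial B^-1 = I

# Iteration 1: x2 enters, x4 leaves (r = 1); entering column B^-1 A_k = (0, 2, 2)
Binv = update_Binv(Binv, Binv @ A[:, 1], r=1)

# Iteration 2: x1 enters, x5 leaves (r = 2); entering column B^-1 A_k = (1, 0, 3)
Binv = update_Binv(Binv, Binv @ A[:, 0], r=2)

print(np.round(Binv, 3))   # final B^-1: (1, 1/3, -1/3; 0, 1/2, 0; 0, -1/3, 1/3)
```

No inversion is performed at any point; each update touches only one pivot column's worth of multipliers, which is the whole point of the revised simplex method described next.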
Thus, given the choice of the entering basic variable and leaving basic variable from steps 1 and 2 of an iteration, the procedure is to apply step 3 of an iteration (as described in Secs. 4.3 and 4.4) to the B⁻¹ portion of the current simplex tableau or system of equations.

To describe this procedure formally, let

x_k = entering basic variable,
a′_ik = coefficient of x_k in current Eq. (i), for i = 1, 2, . . . , m (identified in step 2 of an iteration),
r = number of equation containing the leaving basic variable.

Recall that the new set of equations [excluding Eq. (0)] can be obtained from the preceding set by subtracting a′_ik /a′_rk times Eq. (r) from Eq. (i), for all i = 1, 2, . . . , m except i = r, and then dividing Eq. (r) by a′_rk. Therefore, the element in row i and column j of B⁻¹_new is

(B⁻¹_new)_ij = (B⁻¹_old)_ij − (a′_ik /a′_rk)(B⁻¹_old)_rj   if i ≠ r,
(B⁻¹_new)_rj = (1/a′_rk)(B⁻¹_old)_rj                       if i = r.

These formulas are expressed in matrix notation as

B⁻¹_new = E B⁻¹_old,

where matrix E is an identity matrix except that its rth column is replaced by the vector

    [ η1 ]
η = [ η2 ] ,  where η_i = −a′_ik /a′_rk if i ≠ r,  and η_r = 1/a′_rk.
    [ ⋮  ]
    [ ηm ]

Thus, E = [U1, U2, . . . , U(r−1), η, U(r+1), . . . , Um], where the m elements of each of the U_i column vectors are 0 except for a 1 in the ith position.³

³This form of the new basis inverse as the product of E and the old basis inverse is referred to as the product form of the inverse. After repeated iterations, the new basis inverse then is the product of a sequence of E matrices and the original basis inverse. Another efficient procedure for obtaining the current basis inverse, which we will not describe, is a modified form of Gaussian elimination called LU factorization.

Example. We shall illustrate this procedure by applying it to the Wyndor Glass Co. problem. We already have applied the matrix form of the simplex method to this same problem in Sec.
5.2, so we will refer to the results obtained there for each iteration (the entering basic variable, leaving basic variable, etc.) for the information needed to apply the procedure.

Iteration 1

We found in Sec. 5.2 that the initial B⁻¹ = I, the entering basic variable is x2 (so k = 2), the coefficients of x2 in Eqs. (1), (2), and (3) are a12 = 0, a22 = 2, and a32 = 2, the leaving basic variable is x4, and the number of the equation containing x4 is r = 2. To obtain the new B⁻¹,

    [ −a12/a22 ]   [  0  ]
η = [   1/a22  ] = [ 1/2 ],
    [ −a32/a22 ]   [ −1  ]

so

            [ 1   0   0 ]
B⁻¹ = E I = [ 0  1/2  0 ].
            [ 0  −1   1 ]

Iteration 2

We found in Sec. 5.2 for this iteration that the entering basic variable is x1 (so k = 1), the coefficients of x1 in the current Eqs. (1), (2), and (3) are a′11 = 1, a′21 = 0, and a′31 = 3, the leaving basic variable is x5, and the number of the equation containing x5 is r = 3. These results yield

    [ −a′11/a′31 ]   [ −1/3 ]
η = [ −a′21/a′31 ] = [   0  ].
    [   1/a′31   ]   [  1/3 ]

Therefore, the new B⁻¹ is

                 [ 1  0  −1/3 ] [ 1   0   0 ]   [ 1   1/3  −1/3 ]
B⁻¹ = E B⁻¹old = [ 0  1    0  ] [ 0  1/2  0 ] = [ 0   1/2    0  ].
                 [ 0  0   1/3 ] [ 0  −1   1 ]   [ 0  −1/3   1/3 ]

No more iterations are needed at this point, so this example is finished. Since the revised simplex method consists of combining this procedure for updating B⁻¹ at each iteration with the rest of the matrix form of the simplex method presented in Sec. 5.2, combining this example with the one in Sec. 5.2 applying the matrix form to the same problem provides a complete example of applying the revised simplex method. As mentioned at the end of Sec. 5.2, the Solved Examples section of the book’s website also gives another example of applying the revised simplex method.

Let us conclude this section by summarizing the advantages of the revised simplex method over the algebraic or tabular form of the simplex method.
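The update procedure illustrated in this example is easy to express in code. The following sketch (ours, not from the book; the function and variable names are invented for illustration) builds the eta vector and forms B⁻¹new = E·B⁻¹old, reproducing the two Wyndor iterations with exact fractions:

```python
from fractions import Fraction

def update_binv(binv_old, col, r):
    """Product-form update of the basis inverse.

    binv_old -- current B^-1 as a list of rows
    col      -- coefficients a'_ik of the entering basic variable in the
                current Eqs. (1), ..., (m)
    r        -- 0-based index of the equation containing the leaving
                basic variable

    Returns B^-1_new = E * B^-1_old, where E is the identity matrix
    except that its rth column is the eta vector.
    """
    m = len(binv_old)
    eta = [Fraction(-col[i], col[r]) for i in range(m)]  # eta_i = -a'_ik / a'_rk
    eta[r] = Fraction(1, col[r])                         # eta_r = 1 / a'_rk
    binv_new = []
    for i in range(m):
        if i == r:  # divide Eq. (r) by a'_rk
            binv_new.append([eta[r] * x for x in binv_old[r]])
        else:       # subtract (a'_ik / a'_rk) times Eq. (r) from Eq. (i)
            binv_new.append([binv_old[i][j] + eta[i] * binv_old[r][j]
                             for j in range(m)])
    return binv_new

# Wyndor Glass Co. problem: the initial B^-1 is the identity matrix.
binv = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]

# Iteration 1: x2 enters with coefficients (0, 2, 2); x4 leaves from Eq. (2).
binv = update_binv(binv, [0, 2, 2], 1)

# Iteration 2: x1 enters with current coefficients (1, 0, 3); x5 leaves from Eq. (3).
binv = update_binv(binv, [1, 0, 3], 2)

for row in binv:
    print([str(x) for x in row])
# ['1', '1/3', '-1/3']
# ['0', '1/2', '0']
# ['0', '-1/3', '1/3']
```

The printed matrix matches the final B⁻¹ obtained in the example, and keeping the entries as exact fractions makes the two eta updates easy to check by hand.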
One advantage is that the number of arithmetic computations may be reduced. This is especially true when the A matrix contains a large number of zero elements (which is usually the case for the large problems arising in practice). The amount of information that must be stored at each iteration is less, sometimes considerably so. The revised simplex method also permits the control of the rounding errors inevitably generated by computers. This control can be exercised by periodically obtaining the current B⁻¹ by directly inverting B. Furthermore, some of the postoptimality analysis problems discussed in Sec. 4.7 and at the end of Sec. 5.3 can be handled more conveniently with the revised simplex method. For all these reasons, the revised simplex method is usually the preferable form of the simplex method for computer execution.

■ 5.5 CONCLUSIONS

Although the simplex method is an algebraic procedure, it is based on some fairly simple geometric concepts. These concepts enable one to use the algorithm to examine only a relatively small number of BF solutions before reaching and identifying an optimal solution.

Chapter 4 describes how elementary algebraic operations are used to execute the algebraic form of the simplex method, and then how the tableau form of the simplex method uses the equivalent elementary row operations in the same way. Studying the simplex method in these forms is a good way of getting started in learning its basic concepts. However, these forms of the simplex method do not provide the most efficient form for execution on a computer. Matrix operations are a faster way of combining and executing elementary algebraic operations or row operations. Therefore, the matrix form of the simplex method provides an effective way of adapting the simplex method for computer implementation.
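The rounding-error control described in Sec. 5.4 above can be sketched in a few lines. This is our own illustration (the names are invented), assuming the current basis matrix B is available from the original columns of the basic variables: the accumulated product-form B⁻¹ is accepted while B·B⁻¹ stays close to I, and otherwise B would be reinverted directly.

```python
def max_residual(B, Binv):
    """Largest absolute entry of B * Binv - I: a cheap accuracy check
    on an accumulated product-form inverse."""
    m = len(B)
    worst = 0.0
    for i in range(m):
        for j in range(m):
            s = sum(B[i][k] * Binv[k][j] for k in range(m))
            worst = max(worst, abs(s - (1.0 if i == j else 0.0)))
    return worst

# Wyndor basis after two iterations: basic variables x3, x2, x1, so the
# columns of B are the original constraint columns of x3, x2, and x1.
B = [[1.0, 0.0, 1.0],
     [0.0, 2.0, 0.0],
     [0.0, 2.0, 3.0]]
Binv = [[1.0, 1 / 3, -1 / 3],
        [0.0, 0.5, 0.0],
        [0.0, -1 / 3, 1 / 3]]

TOL = 1e-8
if max_residual(B, Binv) > TOL:
    pass  # reinvert B directly here (e.g., by Gaussian elimination)
print(max_residual(B, Binv) < TOL)  # True
```

In floating-point arithmetic the residual grows slowly as eta updates accumulate, so production codes perform this kind of reinversion (or refactorization) periodically rather than at every iteration.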
The revised simplex method provides a further improvement for computer implementation by combining the matrix form of the simplex method with an efficient procedure for updating the inverse of the current basis matrix from iteration to iteration.

The final simplex tableau includes complete information on how it can be algebraically reconstructed directly from the initial simplex tableau. This fundamental insight has some very important applications, especially for postoptimality analysis.

■ SELECTED REFERENCES

1. Bazaraa, M. S., J. J. Jarvis, and H. D. Sherali: Linear Programming and Network Flows, 4th ed., Wiley, Hoboken, NJ, 2010.
2. Dantzig, G. B., and M. N. Thapa: Linear Programming 1: Introduction, Springer, New York, 1997.
3. Dantzig, G. B., and M. N. Thapa: Linear Programming 2: Theory and Extensions, Springer, New York, 2003.
4. Denardo, E. V.: Linear Programming and Generalizations: A Problem-Based Introduction with Spreadsheets, Springer, New York, 2011.
5. Elhallaoui, I., A. Metrane, G. Desaulniers, and F. Soumis: “An Improved Primal Simplex Algorithm for Degenerate Linear Programs,” INFORMS Journal on Computing, 23(4): 569–577, Fall 2011.
6. Luenberger, D., and Y. Ye: Linear and Nonlinear Programming, 3rd ed., Springer, New York, 2008.
7. Murty, K. G.: Optimization for Decision Making: Linear and Quadratic Models, Springer, New York, 2010.
8. Vanderbei, R. J.: Linear Programming: Foundations and Extensions, 4th ed., Springer, New York, 2014.
■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 5

A Demonstration Example in OR Tutor:
Fundamental Insight

Interactive Procedures in IOR Tutorial:
Interactive Graphical Method
Enter or Revise a General Linear Programming Model
Set Up for the Simplex Method—Interactive Only
Solve Interactively by the Simplex Method

Automatic Procedures in IOR Tutorial:
Solve Automatically by the Simplex Method
Graphical Method and Sensitivity Analysis

Files (Chapter 3) for Solving the Wyndor Example:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 5

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The demonstration example listed above may be helpful.
I: You can check some of your work by using procedures listed above.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

5.1-1.* Consider the following problem.

Maximize Z = 3x1 + 2x2,

subject to

2x1 + x2 ≤ 6
x1 + 2x2 ≤ 6

and

x1 ≥ 0, x2 ≥ 0.

(a) Solve this problem graphically. Identify the CPF solutions by circling them on the graph.
(b) Identify all the sets of two defining equations for this problem. For each set, solve (if a solution exists) for the corresponding corner-point solution, and classify it as a CPF solution or corner-point infeasible solution.
(c) Introduce slack variables in order to write the functional constraints in augmented form. Use these slack variables to identify the basic solution that corresponds to each corner-point solution found in part (b).
(d) Do the following for each set of two defining equations from part (b): Identify the indicating variable for each defining equation. Display the set of equations from part (c) after deleting these two indicating (nonbasic) variables.
Then use the latter set of equations to solve for the two remaining variables (the basic variables). Compare the resulting basic solution to the corresponding basic solution obtained in part (c).
(e) Without executing the simplex method, use its geometric interpretation (and the objective function) to identify the path (sequence of CPF solutions) it would follow to reach the optimal solution. For each of these CPF solutions in turn, identify the following decisions being made for the next iteration: (i) which defining equation is being deleted and which is being added; (ii) which indicating variable is being deleted (the entering basic variable) and which is being added (the leaving basic variable).

5.1-2. Repeat Prob. 5.1-1 for the model in Prob. 3.1-6.

5.1-3. Consider the following problem.

Maximize Z = 2x1 + 3x2,

subject to

−3x1 + x2 ≤ 1
4x1 + 2x2 ≤ 20
4x1 − x2 ≤ 10
−x1 + 2x2 ≤ 5

and

x1 ≥ 0, x2 ≥ 0.

(a) Identify the 10 sets of defining equations for this problem. For each one, solve (if a solution exists) for the corresponding corner-point solution, and classify it as a CPF solution or a corner-point infeasible solution.
(b) For each corner-point solution, give the corresponding basic solution and its set of nonbasic variables.

I 5.1-4. Consider the following problem.

Maximize Z = 2x1 − x2 + x3,

subject to

3x1 + x2 + x3 ≤ 60
x1 − x2 + 2x3 ≤ 10
x1 + x2 − x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

After slack variables are introduced and then one complete iteration of the simplex method is performed, the following simplex tableau is obtained.

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3   x4   x5   x6   Right Side
Z          (0)    1     0   −1    3    0    2    0       20
x4         (1)    0     0    4   −5    1   −3    0       30
x1         (2)    0     1   −1    2    0    1    0       10
x6         (3)    0     0    2   −3    0   −1    1       10

(a) Identify the CPF solution obtained at iteration 1.
(b) Identify the constraint boundary equations that define this CPF solution.

5.1-5. Consider the three-variable linear programming problem shown in Fig. 5.2.
(a) Construct a table like Table 5.1, giving the set of defining equations for each CPF solution.
(b) What are the defining equations for the corner-point infeasible solution (6, 0, 5)?
(c) Identify one of the systems of three constraint boundary equations that yields neither a CPF solution nor a corner-point infeasible solution. Explain why this occurs for this system.

5.1-6. Consider the following problem.

Minimize Z = 3x1 + 2x2,

subject to

2x1 + x2 ≥ 10
−3x1 + 2x2 ≤ 6
x1 + x2 ≥ 6

and

x1 ≥ 0, x2 ≥ 0.

(a) Solve this problem graphically. Identify the CPF solutions by circling them on the graph.
(b) Develop a table giving each of the CPF solutions and the corresponding defining equations, BF solution, and nonbasic variables. Calculate Z for each of these solutions, and use just this information to identify the optimal solution.
(c) Develop the corresponding table for the corner-point infeasible solutions, etc. Also identify the sets of defining equations and nonbasic variables that do not yield a solution.

5.1-7. Reconsider the model in Prob. 3.1-5.
(a) Identify the 15 sets of defining equations for this problem. For each one, solve (if a solution exists) for the corresponding corner-point solution, and classify it as a CPF solution or a corner-point infeasible solution.
(b) For each corner-point solution, give the corresponding basic solution and its set of nonbasic variables.

5.1-8. Each of the following statements is true under most circumstances, but not always. In each case, indicate when the statement will not be true and why.
(a) The best CPF solution is an optimal solution.
(b) An optimal solution is a CPF solution.
(c) A CPF solution is the only optimal solution if none of its adjacent CPF solutions are better (as measured by the value of the objective function).

5.1-9. Consider the original form (before augmenting) of a linear programming problem with n decision variables (each with a nonnegativity constraint) and m functional constraints.
Label each of the following statements as true or false, and then justify your answer with specific references (including page citations) to material in the chapter.
(a) If a feasible solution is optimal, it must be a CPF solution.
(b) The number of CPF solutions is at least (m + n)!/(m! n!).
(c) If a CPF solution has adjacent CPF solutions that are better (as measured by Z), then one of these adjacent CPF solutions must be an optimal solution.

5.1-10. Label each of the following statements about linear programming problems as true or false, and then justify your answer.
(a) If a feasible solution is optimal but not a CPF solution, then infinitely many optimal solutions exist.
(b) If the value of the objective function is equal at two different feasible points x* and x**, then all points on the line segment connecting x* and x** are feasible and Z has the same value at all those points.
(c) If the problem has n variables (before augmenting), then the simultaneous solution of any set of n constraint boundary equations is a CPF solution.

5.1-11. Consider the augmented form of linear programming problems that have feasible solutions and a bounded feasible region. Label each of the following statements as true or false, and then justify your answer by referring to specific statements (with page citations) in the chapter.
(a) There must be at least one optimal solution.
(b) An optimal solution must be a BF solution.
(c) The number of BF solutions is finite.

5.1-12.* Reconsider the model in Prob. 4.6-9. Now you are given the information that the basic variables in the optimal solution are x2 and x3. Use this information to identify a system of three constraint boundary equations whose simultaneous solution must be this optimal solution. Then solve this system of equations to obtain this solution.

5.1-13. Reconsider Prob. 4.3-6.
Now use the given information and the theory of the simplex method to identify a system of three constraint boundary equations (in x1, x2, x3) whose simultaneous solution must be the optimal solution, without applying the simplex method. Solve this system of equations to find the optimal solution.

5.1-14. Consider the following problem.

Maximize Z = 2x1 + 2x2 + 3x3,

subject to

2x1 + x2 + 2x3 ≤ 4
x1 + x2 + x3 ≤ 3

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4 and x5 be the slack variables for the respective functional constraints. Starting with these two variables as the basic variables for the initial BF solution, you now are given the information that the simplex method proceeds as follows to obtain the optimal solution in two iterations: (1) In iteration 1, the entering basic variable is x3 and the leaving basic variable is x4; (2) in iteration 2, the entering basic variable is x2 and the leaving basic variable is x5.
(a) Develop a three-dimensional drawing of the feasible region for this problem, and show the path followed by the simplex method.
(b) Give a geometric interpretation of why the simplex method followed this path.
(c) For each of the two edges of the feasible region traversed by the simplex method, give the equation of each of the two constraint boundaries on which it lies, and then give the equation of the additional constraint boundary at each endpoint.
(d) Identify the set of defining equations for each of the three CPF solutions (including the initial one) obtained by the simplex method. Use the defining equations to solve for these solutions.
(e) For each CPF solution obtained in part (d), give the corresponding BF solution and its set of nonbasic variables. Explain how these nonbasic variables identify the defining equations obtained in part (d).

5.1-15. Consider the following problem.

Maximize Z = 3x1 + 4x2 + 2x3,

subject to

x1 + x2 + x3 ≤ 20
x1 + 2x2 + x3 ≤ 30

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4 and x5 be the slack variables for the respective functional constraints.
Starting with these two variables as the basic variables for the initial BF solution, you now are given the information that the simplex method proceeds as follows to obtain the optimal solution in two iterations: (1) In iteration 1, the entering basic variable is x2 and the leaving basic variable is x5; (2) in iteration 2, the entering basic variable is x1 and the leaving basic variable is x4. Follow the instructions of Prob. 5.1-14 for this situation.

5.1-16. By inspecting Fig. 5.2, explain why Property 1b for CPF solutions holds for this problem if it has the following objective function.
(a) Maximize Z = x3.
(b) Maximize Z = x1 + 2x3.

5.1-17. Consider the three-variable linear programming problem shown in Fig. 5.2.
(a) Explain in geometric terms why the set of solutions satisfying any individual constraint is a convex set, as defined in Appendix 2.
(b) Use the conclusion in part (a) to explain why the entire feasible region (the set of solutions that simultaneously satisfies every constraint) is a convex set.

5.1-18. Suppose that the three-variable linear programming problem given in Fig. 5.2 has the objective function

Maximize Z = 3x1 + 4x2 + 3x3.

Without using the algebra of the simplex method, apply just its geometric reasoning (including choosing the edge giving the maximum rate of increase of Z) to determine and explain the path it would follow in Fig. 5.2 from the origin to the optimal solution.

5.1-19. Consider the three-variable linear programming problem shown in Fig. 5.2.
(a) Construct a table like Table 5.4, giving the indicating variable for each constraint boundary equation and original constraint.
(b) For the CPF solution (2, 4, 3) and its three adjacent CPF solutions (4, 2, 4), (0, 4, 2), and (2, 4, 0), construct a table like Table 5.5, showing the corresponding defining equations, BF solution, and nonbasic variables.
(c) Use the sets of defining equations from part (b) to demonstrate that (4, 2, 4), (0, 4, 2), and (2, 4, 0) are indeed adjacent to (2, 4, 3), but that none of these three CPF solutions are adjacent to each other. Then use the sets of nonbasic variables from part (b) to demonstrate the same thing.

5.1-20. The formula for the line passing through (2, 4, 3) and (4, 2, 4) in Fig. 5.2 can be written as

(2, 4, 3) + α[(4, 2, 4) − (2, 4, 3)] = (2, 4, 3) + α(2, −2, 1),

where 0 ≤ α ≤ 1 for just the line segment between these points. After augmenting with the slack variables x4, x5, x6, x7 for the respective functional constraints, this formula becomes

(2, 4, 3, 2, 0, 0, 0) + α(2, −2, 1, −2, 2, 0, 0).

Use this formula directly to answer each of the following questions, and thereby relate the algebra and geometry of the simplex method as it goes through one iteration in moving from (2, 4, 3) to (4, 2, 4). (You are given the information that it is moving along this line segment.)
(a) What is the entering basic variable?
(b) What is the leaving basic variable?
(c) What is the new BF solution?

5.1-21. Consider a two-variable mathematical programming problem that has the feasible region shown on the graph, where the six dots correspond to CPF solutions. The problem has a linear objective function, and the two dashed lines are objective function lines passing through the optimal solution (4, 5) and the second-best CPF solution (2, 5).

[Figure: graph with axes x1 and x2 (each scaled 0 to 5), showing the feasible region and the labeled CPF solutions (2, 5) and (4, 5).]

Note that the nonoptimal solution (2, 5) is better than both of its adjacent CPF solutions, which violates Property 3 in Sec. 5.1 for CPF solutions in linear programming. Demonstrate that this problem cannot be a linear programming problem by constructing the feasible region that would result if the six line segments on the boundary were constraint boundaries for linear programming constraints.

5.2-1. Consider the following problem.

Maximize Z = 8x1 + 4x2 + 6x3 + 3x4 + 9x5,

subject to

x1 + 2x2 + 3x3 + 3x4 + x5 ≤ 180   (resource 1)
4x1 + 3x2 + 2x3 + x4 + x5 ≤ 270   (resource 2)
x1 + 3x2 + 2x3 + x4 + 3x5 ≤ 180   (resource 3)

and

xj ≥ 0, j = 1, . . . , 5.

You are given the facts that the basic variables in the optimal solution are x3, x1, and x5 and that

1 1  11 3  3 1 0 .  2 4 1   1  6 9 3     27    0 1 3 2 3 10

(a) Use the given information to identify the optimal solution.
(b) Use the given information to identify the shadow prices for the three resources.

I 5.2-2.* Work through the matrix form of the simplex method step by step to solve the following problem.

Maximize Z = 5x1 + 8x2 + 7x3 + 4x4 + 6x5,

subject to

2x1 + 3x2 + 3x3 + 2x4 + 2x5 ≤ 20
3x1 + 5x2 + 4x3 + 2x4 + 4x5 ≤ 30

and

xj ≥ 0, j = 1, 2, 3, 4, 5.

5.2-3. Reconsider Prob. 5.1-1. For the sequence of CPF solutions identified in part (e), construct the basis matrix B for each of the corresponding BF solutions. For each one, invert B manually, use this B⁻¹ to calculate the current solution, and then perform the next iteration (or demonstrate that the current solution is optimal).

I 5.2-4. Work through the matrix form of the simplex method step by step to solve the model given in Prob. 4.1-5.

I 5.2-5. Work through the matrix form of the simplex method step by step to solve the model given in Prob. 4.7-6.

D 5.3-1.* Consider the following problem.

Maximize Z = x1 + x2 + 2x3,

subject to

2x1 + 2x2 + 3x3 ≤ 5
x1 + x2 − x3 ≤ 3
x1 − x2 + x3 ≤ 2

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4, x5, and x6 denote the slack variables for the respective constraints. After you apply the simplex method, a portion of the final simplex tableau is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3   x4   x5   x6   Right Side
Z          (0)
x2         (1)
x6         (2)
x3         (3)
1 1 1 0 0 0 0 1 0 1 3 1 2 0 1 0

(a) Use the fundamental insight presented in Sec. 5.3 to identify the missing numbers in the final simplex tableau. Show your calculations.
(b) Identify the defining equations of the CPF solution corresponding to the optimal BF solution in the final simplex tableau.

D 5.3-2. Consider the following problem.

Maximize Z = 4x1 + 3x2 + x3 + 2x4,

subject to

4x1 + 2x2 + x3 + x4 ≤ 5
3x1 + x2 + 2x3 + x4 ≤ 4

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.

Let x5 and x6 denote the slack variables for the respective constraints. After you apply the simplex method, a portion of the final simplex tableau is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3   x4   x5   x6   Right Side
Z          (0)    1                        1    1
x2         (1)    0                        1   −1
x4         (2)    0                       −1    2

(a) Use the fundamental insight presented in Sec. 5.3 to identify the missing numbers in the final simplex tableau. Show your calculations.
(b) Identify the defining equations of the CPF solution corresponding to the optimal BF solution in the final simplex tableau.

D 5.3-3. Consider the following problem.

Maximize Z = 6x1 + x2 + 2x3,

subject to

2x1 + 2x2 + (1/2)x3 ≤ 2
−4x1 − 2x2 − (3/2)x3 ≤ 3
x1 + 2x2 + (1/2)x3 ≤ 1

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4, x5, and x6 denote the slack variables for the respective constraints. After you apply the simplex method, a portion of the final simplex tableau is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3   x4   x5   x6   Right Side
Z          (0)
x5         (1)
x3         (2)
x1         (3)
1 2 0 2 0 0 0 1 2 1 1 0 0 2 4 1

Use the fundamental insight presented in Sec. 5.3 to identify the missing numbers in the final simplex tableau. Show your calculations.

D 5.3-4. Consider the following problem.

Maximize Z = 20x1 + 6x2 + 8x3,

subject to

8x1 + 2x2 + 3x3 ≤ 200
4x1 + 3x2 ≤ 100
2x1 + x3 ≤ 50
x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4, x5, x6, and x7 denote the slack variables for the first through fourth constraints, respectively. Suppose that after some number of iterations of the simplex method, a portion of the current simplex tableau is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3    x4      x5    x6   x7   Right Side
Z          (0)    1                    9/4     1/2    0    0
x1         (1)    0                    3/16   −1/8    0    0
x2         (2)    0                   −1/4     1/2    0    0
x6         (3)    0                   −3/8     1/4    1    0
x7         (4)    0                     0       0     0    1

(a) Use the fundamental insight presented in Sec. 5.3 to identify the missing numbers in the current simplex tableau. Show your calculations.
(b) Indicate which of these missing numbers would be generated by the matrix form of the simplex method to perform the next iteration.
(c) Identify the defining equations of the CPF solution corresponding to the BF solution in the current simplex tableau.

D 5.3-5. Consider the following problem.

Maximize Z = c1x1 + c2x2 + c3x3,

subject to

x1 + 2x2 + x3 ≤ b
2x1 + x2 + 3x3 ≤ 2b

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Note that values have not been assigned to the coefficients in the objective function (c1, c2, c3), and that the only specification for the right-hand side of the functional constraints is that the second one (2b) be twice as large as the first (b). Now suppose that your boss has inserted her best estimate of the values of c1, c2, c3, and b without informing you and then has run the simplex method. You are given the resulting final simplex tableau below (where x4 and x5 are the slack variables for the respective functional constraints), but you are unable to read the value of Z*.

                               Coefficient of:
Basic
Variable   Eq.    Z    x1     x2   x3    x4     x5    Right Side
Z          (0)    1   7/10    0    0    3/5    4/5        Z*
x2         (1)    0   1/5     1    0    3/5   −1/5         1
x3         (2)    0   3/5     0    1   −1/5    2/5         3

(a) Use the fundamental insight presented in Sec. 5.3 to identify the value of (c1, c2, c3) that was used.
(b) Use the fundamental insight presented in Sec. 5.3 to identify the value of b that was used.
(c) Calculate the value of Z* in two ways, where one way uses your results from part (a) and the other way uses your result from part (b). Show your two methods for finding Z*.

D 5.3-6. For iteration 2 of the example in Sec. 5.3, the following expression was shown:

Final row 0 = [−3, −5 ¦ 0, 0, 0 ¦ 0]
                              [ 1  0 ¦ 1  0  0 ¦  4 ]
              + [0, 3/2, 1] · [ 0  2 ¦ 0  1  0 ¦ 12 ].
                              [ 3  2 ¦ 0  0  1 ¦ 18 ]

Derive this expression by combining the algebraic operations (in matrix form) for iterations 1 and 2 that affect row 0.
5.3-7. Most of the description of the fundamental insight presented in Sec. 5.3 assumes that the problem is in our standard form. Now consider each of the following other forms, where the additional adjustments in the initialization step are those presented in Sec. 4.6, including the use of artificial variables and the Big M method where appropriate. Describe the resulting adjustments in the fundamental insight.
(a) Equality constraints
(b) Functional constraints in ≥ form
(c) Negative right-hand sides
(d) Variables allowed to be negative (with no lower bound)

5.3-8. Reconsider the model in Prob. 4.6-5. Use artificial variables and the Big M method to construct the complete first simplex tableau for the simplex method, and then identify the columns that will contain S* for applying the fundamental insight in the final tableau. Explain why these are the appropriate columns.

5.3-9. Consider the following problem.

Minimize Z = 2x1 + 3x2 + 2x3,

subject to

x1 + 4x2 + 2x3 ≥ 8
3x1 + 2x2 ≥ 6

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Let x4 and x6 be the surplus variables for the first and second constraints, respectively. Let x̄5 and x̄7 be the corresponding artificial variables. After you make the adjustments described in Sec. 4.6 for this model form when using the Big M method, the initial simplex tableau ready to apply the simplex method is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z      x1        x2        x3      x4   x̄5   x6   x̄7   Right Side
Z          (0)    1   −4M + 2   −6M + 3   −2M + 2    M    0    M    0      −14M
x̄5         (1)    0      1         4         2      −1    1    0    0        8
x̄7         (2)    0      3         2         0       0    0   −1    1        6

After you apply the simplex method, a portion of the final simplex tableau is as follows:

                               Coefficient of:
Basic
Variable   Eq.    Z    x1   x2   x3   x4     x̄5      x6     x̄7     Right Side
Z          (0)    1                       M − 0.5        M − 0.5
x2         (1)    0                         0.3           −0.1
x1         (2)    0                        −0.2            0.4

(a) Based on the above tableaux, use the fundamental insight presented in Sec. 5.3 to identify the missing numbers in the final simplex tableau. Show your calculations.
(b) Examine the mathematical logic presented in Sec. 5.3 to validate the fundamental insight (see the T* = MT and t* = t + vT equations and the subsequent derivations of M and v). This logic assumes that the original model fits our standard form, whereas the current problem does not fit this form. Show how, with minor adjustments, this same logic applies to the current problem when t is row 0 and T is rows 1 and 2 in the initial simplex tableau given above. Derive M and v for this problem.
(c) When you apply the t* = t + vT equation, another option is to use t = [2, 3, 2, 0, M, 0, M, 0], which is the preliminary row 0 before the algebraic elimination of the nonzero coefficients of the initial basic variables x̄5 and x̄7. Repeat part (b) for this equation with this new t. After you derive the new v, show that this equation yields the same final row 0 for this problem as the equation derived in part (b).
(d) Identify the defining equations of the CPF solution corresponding to the optimal BF solution in the final simplex tableau.

5.3-10. Consider the following problem.

Maximize Z = 3x1 + 7x2 + 2x3,

subject to

2x1 + 2x2 + x3 ≤ 10
3x1 + x2 + x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

You are given the fact that the basic variables in the optimal solution are x1 and x3.
(a) Introduce slack variables, and then use the given information to find the optimal solution directly by Gaussian elimination.
(b) Extend the work in part (a) to find the shadow prices.
(c) Use the given information to identify the defining equations of the optimal CPF solution, and then solve these equations to obtain the optimal solution.
(d) Construct the basis matrix B for the optimal BF solution, invert B manually, and then use this B⁻¹ to solve for the optimal solution and the shadow prices y*. Then apply the optimality test for the matrix form of the simplex method to verify that this solution is optimal.
(e) Given B⁻¹ and y* from part (d), use the fundamental insight presented in Sec. 5.3 to construct the complete final simplex tableau.

5.4-1. Consider the model given in Prob. 5.2-2. Let x6 and x7 be the slack variables for the first and second constraints, respectively. You are given the information that x2 is the entering basic variable and x7 is the leaving basic variable for the first iteration of the simplex method and then x4 is the entering basic variable and x6 is the leaving basic variable for the second (final) iteration. Use the procedure presented in Sec. 5.4 for updating B⁻¹ from one iteration to the next to find B⁻¹ after the first iteration and then after the second iteration.

I 5.4-2.* Work through the revised simplex method step by step to solve the model given in Prob. 4.3-4.

I 5.4-3. Work through the revised simplex method step by step to solve the model given in Prob. 4.7-5.

I 5.4-4. Work through the revised simplex method step by step to solve the model given in Prob. 3.1-6.

CHAPTER 6
Duality Theory

One of the most important discoveries in the early development of linear programming was the concept of duality and its many important ramifications. This discovery revealed that every linear programming problem has associated with it another linear programming problem called the dual. The relationships between the dual problem and the original problem (called the primal) prove to be extremely useful in a variety of ways. For example, you soon will see that the shadow prices described in Sec. 4.7 actually are provided by the optimal solution for the dual problem. We shall describe many other valuable applications of duality theory in this chapter as well.

For greater clarity, the first three sections discuss duality theory under the assumption that the primal linear programming problem is in our standard form (but with no restriction that the bi values need to be positive). Other forms are then discussed in Sec. 6.4. We begin the chapter by introducing the essence of duality theory and its applications.
We then describe the economic interpretation of the dual problem (Sec. 6.2) and delve deeper into the relationships between the primal and dual problems (Sec. 6.3). Section 6.5 focuses on the role of duality theory in sensitivity analysis. (As discussed in detail in the next chapter, sensitivity analysis involves the analysis of the effect on the optimal solution if changes occur in the values of some of the parameters of the model.)

■ 6.1 THE ESSENCE OF DUALITY THEORY

Given our standard form for the primal problem at the left (perhaps after conversion from another form), its dual problem has the form shown to the right.

Primal Problem                                  Dual Problem

Maximize  Z = Σ (j = 1 to n) cj xj,             Minimize  W = Σ (i = 1 to m) bi yi,

subject to                                      subject to

Σ (j = 1 to n) aij xj ≤ bi,                     Σ (i = 1 to m) aij yi ≥ cj,
    for i = 1, 2, . . . , m                         for j = 1, 2, . . . , n

and                                             and

xj ≥ 0, for j = 1, 2, . . . , n.                yi ≥ 0, for i = 1, 2, . . . , m.

Thus, with the primal problem in maximization form, the dual problem is in minimization form instead. Furthermore, the dual problem uses exactly the same parameters as the primal problem, but in different locations, as summarized below.

1. The coefficients in the objective function of the primal problem are the right-hand sides of the functional constraints in the dual problem.
2. The right-hand sides of the functional constraints in the primal problem are the coefficients in the objective function of the dual problem.
3. The coefficients of a variable in the functional constraints of the primal problem are the coefficients in a functional constraint of the dual problem.

To highlight the comparison, now look at these same two problems in matrix notation (as introduced at the beginning of Sec. 5.2), where c and y = [y1, y2, . . . , ym] are row vectors but b and x are column vectors.

Primal Problem                  Dual Problem

Maximize  Z = cx,               Minimize  W = yb,

subject to                      subject to

Ax ≤ b                          yA ≥ c

and                             and

x ≥ 0.                          y ≥ 0.

To illustrate, the primal and dual problems for the Wyndor Glass Co.
example of Sec. 3.1 are shown in Table 6.1 in both algebraic and matrix form.

■ TABLE 6.1 Primal and dual problems for the Wyndor Glass Co. example

Primal Problem in Algebraic Form

Maximize   Z = 3x1 + 5x2,

subject to

    x1        ≤  4
         2x2  ≤ 12
   3x1 + 2x2  ≤ 18

and

    x1 ≥ 0,   x2 ≥ 0.

Dual Problem in Algebraic Form

Minimize   W = 4y1 + 12y2 + 18y3,

subject to

    y1       + 3y3  ≥ 3
         2y2 + 2y3  ≥ 5

and

    y1 ≥ 0,   y2 ≥ 0,   y3 ≥ 0.

Primal Problem in Matrix Form

Maximize   Z = [3, 5] x,

subject to

    ⎡1  0⎤       ⎡ 4⎤
    ⎢0  2⎥ x  ≤  ⎢12⎥
    ⎣3  2⎦       ⎣18⎦

and   x ≥ 0,   where x = [x1, x2]ᵀ.

Dual Problem in Matrix Form

Minimize   W = [y1, y2, y3] [4, 12, 18]ᵀ,

subject to

                  ⎡1  0⎤
    [y1, y2, y3]  ⎢0  2⎥  ≥  [3, 5]
                  ⎣3  2⎦

and   [y1, y2, y3] ≥ [0, 0, 0].

The primal-dual table for linear programming (Table 6.2) also helps to highlight the correspondence between the two problems. It shows all the linear programming parameters (the aij, bi, and cj) and how they are used to construct the two problems. All the headings for the primal problem are horizontal, whereas the headings for the dual problem are read by turning the book sideways. For the primal problem, each column (except the right-side column) gives the coefficients of a single variable in the respective constraints and then in the objective function, whereas each row (except the bottom one) gives the parameters for a single constraint. For the dual problem, each row (except the right-side row) gives the coefficients of a single variable in the respective constraints and then in the objective function, whereas each column (except the rightmost one) gives the parameters for a single constraint. In addition, the right-side column gives the right-hand sides for the primal problem and the objective function coefficients for the dual problem,

■ TABLE 6.2 Primal-dual table for linear programming, illustrated by the Wyndor Glass Co.
example

(a) General Case

                           Primal Problem
                           Coefficient of:
                     x1      x2      …     xn       Right Side
Dual        y1      a11     a12      …    a1n       ≤ b1
Problem     y2      a21     a22      …    a2n       ≤ b2
Coefficient  ⋮       ⋮       ⋮             ⋮          ⋮
of:         ym      am1     am2      …    amn       ≤ bm
                    ≥ c1    ≥ c2     …    ≥ cn
                    Coefficients for objective function (maximize)

(The right-side column also supplies the coefficients for the dual objective function, to be minimized.)

(b) Wyndor Glass Co. Example

                     x1      x2      Right Side
            y1       1       0       ≤  4
            y2       0       2       ≤ 12
            y3       3       2       ≤ 18
                    ≥ 3     ≥ 5

Likewise, the bottom row gives the objective function coefficients for the primal problem and the right-hand sides for the dual problem. Consequently, we now have the following general relationships between the primal and dual problems.

1. The parameters for a (functional) constraint in either problem are the coefficients of a variable in the other problem.
2. The coefficients in the objective function of either problem are the right-hand sides for the other problem.

Thus, there is a direct correspondence between these entities in the two problems, as summarized in Table 6.3. These correspondences are a key to some of the applications of duality theory, including sensitivity analysis. The Solved Examples section of the book's website provides another example of using the primal-dual table to construct the dual problem for a linear programming model.

Origin of the Dual Problem

Duality theory is based directly on the fundamental insight (particularly with regard to row 0) presented in Sec. 5.3. To see why, we continue to use the notation introduced in Table 5.9 for row 0 of the final tableau, except for replacing Z* by W* and dropping the asterisks from z* and y* when referring to any tableau. Thus, at any given iteration of the simplex method for the primal problem, the current numbers in row 0 are denoted as shown in the (partial) tableau given in Table 6.4. For the coefficients of x1, x2, . . . , xn, recall that z = (z1, z2, . . .
, zn) denotes the vector that the simplex method added to the vector of initial coefficients, −c, in the process of reaching the current tableau. (Do not confuse z with the value of the objective function Z.) Similarly, since the initial coefficients of xn+1, xn+2, . . . , xn+m in row 0 all are 0, y = (y1, y2, . . . , ym) denotes the vector that the simplex method has added to these coefficients. Also recall [see Eq. (1) in the statement of the fundamental insight in Sec. 5.3] that the fundamental insight led to the following relationships between these quantities and the parameters of the original model:

W = yb = Σ_{i=1}^{m} bi yi,

z = yA,   so   zj = Σ_{i=1}^{m} aij yi,   for j = 1, 2, . . . , n.

■ TABLE 6.3 Correspondence between entities in primal and dual problems

One Problem                  Other Problem
Constraint i         ←→      Variable i
Objective function   ←→      Right-hand sides

■ TABLE 6.4 Notation for entries in row 0 of a simplex tableau

                                        Coefficient of:
Iteration  Basic Variable  Eq.   Z   x1       x2       …   xn       xn+1   xn+2   …   xn+m   Right Side
Any        Z               (0)   1   z1 − c1  z2 − c2  …   zn − cn  y1     y2     …   ym     W

To illustrate these relationships with the Wyndor example, the first equation gives W = 4y1 + 12y2 + 18y3, which is just the objective function for the dual problem shown in the upper right-hand box of Table 6.1. The second set of equations gives z1 = y1 + 3y3 and z2 = 2y2 + 2y3, which are the left-hand sides of the functional constraints for this dual problem. Thus, by subtracting the right-hand sides of these ≥ constraints (c1 = 3 and c2 = 5), (z1 − c1) and (z2 − c2) can be interpreted as being the surplus variables for these functional constraints. The remaining key is to express what the simplex method tries to accomplish (according to the optimality test) in terms of these symbols. Specifically, it seeks a set of basic variables, and the corresponding BF solution, such that all coefficients in row 0 are nonnegative. It then stops with this optimal solution.
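The relationships W = yb and z = yA can be checked numerically for any row 0. A minimal sketch (not from the text), using the Wyndor data and the three y vectors that the simplex iterations produce for this example:

```python
import numpy as np

A = np.array([[1., 0.], [0., 2.], [3., 2.]])   # Wyndor constraint coefficients
b = np.array([4., 12., 18.])                   # right-hand sides
c = np.array([3., 5.])                         # objective coefficients

rows = []
for y in ([0., 0., 0.], [0., 2.5, 0.], [0., 1.5, 1.]):
    y = np.array(y)
    z_minus_c = y @ A - c     # coefficients of x1, x2 in row 0
    W = y @ b                 # right side of row 0
    rows.append((tuple(y), tuple(z_minus_c), W))

for r in rows:
    print(r)
```

The initial y = (0, 0, 0) reproduces the initial row-0 coefficients (−3, −5) with W = 0, and the later y vectors reproduce the subsequent rows 0 the same way.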
Using the notation in Table 6.4, this goal is expressed symbolically as follows:

Condition for Optimality:

zj − cj ≥ 0   for j = 1, 2, . . . , n,
yi ≥ 0        for i = 1, 2, . . . , m.

After we substitute the preceding expression for zj, the condition for optimality says that the simplex method can be interpreted as seeking values for y1, y2, . . . , ym such that

W = Σ_{i=1}^{m} bi yi,

subject to

Σ_{i=1}^{m} aij yi ≥ cj,   for j = 1, 2, . . . , n

and

yi ≥ 0,   for i = 1, 2, . . . , m.

But, except for lacking an objective for W, this problem is precisely the dual problem! To complete the formulation, let us now explore what the missing objective should be. Since W is just the current value of Z, and since the objective for the primal problem is to maximize Z, a natural first reaction is that W should be maximized also. However, this is not correct for the following rather subtle reason: The only feasible solutions for this new problem are those that satisfy the condition for optimality for the primal problem. Therefore, it is only the optimal solution for the primal problem that corresponds to a feasible solution for this new problem. As a consequence, the optimal value of Z in the primal problem is the minimum feasible value of W in the new problem, so W should be minimized. (The full justification for this conclusion is provided by the relationships we develop in Sec. 6.3.) Adding this objective of minimizing W gives the complete dual problem. Consequently, the dual problem may be viewed as a restatement in linear programming terms of the goal of the simplex method, namely, to reach a solution for the primal problem that satisfies the optimality test. Before this goal has been reached, the corresponding y in row 0 (coefficients of slack variables) of the current tableau must be infeasible for the dual problem.
However, after the goal is reached, the corresponding y must be an optimal solution (labeled y*) for the dual problem, because it is a feasible solution that attains the minimum feasible value of W. This optimal solution (y1*, y2*, . . . , ym*) provides for the primal problem the shadow prices that were described in Sec. 4.7. Furthermore, this optimal W is just the optimal value of Z, so the optimal objective function values are equal for the two problems. This fact also implies that cx ≤ yb for any x and y that are feasible for the primal and dual problems, respectively.

To illustrate, the left-hand side of Table 6.5 shows row 0 for the respective iterations when the simplex method is applied to the Wyndor Glass Co. example. In each case, row 0 is partitioned into three parts: the coefficients of the decision variables (x1, x2), the coefficients of the slack variables (x3, x4, x5), and the right-hand side (value of Z). Since the coefficients of the slack variables give the corresponding values of the dual variables (y1, y2, y3), each row 0 identifies a corresponding solution for the dual problem, as shown in the y1, y2, and y3 columns of Table 6.5. To interpret the next two columns, recall that (z1 − c1) and (z2 − c2) are the surplus variables for the functional constraints in the dual problem, so the full dual problem after augmenting with these surplus variables is

Minimize   W = 4y1 + 12y2 + 18y3,

subject to

y1       + 3y3 − (z1 − c1) = 3
     2y2 + 2y3 − (z2 − c2) = 5

and

y1 ≥ 0,   y2 ≥ 0,   y3 ≥ 0.

Therefore, by using the numbers in the y1, y2, and y3 columns, the values of these surplus variables can be calculated as

z1 − c1 = y1 + 3y3 − 3,
z2 − c2 = 2y2 + 2y3 − 5.

Thus, a negative value for either surplus variable indicates that the corresponding constraint is violated. Also included in the rightmost column of the table is the calculated value of the dual objective function W = 4y1 + 12y2 + 18y3.
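The claim that the corresponding y* is optimal for the dual problem, with equal optimal objective values, can be confirmed numerically for the Wyndor example. A sketch using scipy.optimize.linprog (which minimizes, so the primal maximization is passed with negated costs and the dual's ≥ constraints are negated into ≤ form):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3., 5.])                         # primal objective (maximize)
A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])

# Primal: max c·x  ->  min -c·x,  A x <= b,  x >= 0
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: min b·y,  A^T y >= c  ->  -A^T y <= -c,  y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print(primal.x, -primal.fun)   # optimal x and Z
print(dual.x, dual.fun)        # optimal y and W
```

Both optimal objective values come out equal (Z* = 36 = W*), and the dual optimum is y* = (0, 3/2, 1), matching the shadow prices of Sec. 4.7.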
As displayed in Table 6.4, all these quantities to the right of row 0 in Table 6.5 already are identified by row 0 without requiring any new calculations. In particular, note in Table 6.5 how each number obtained for the dual problem already appears in row 0 in the spot indicated by Table 6.4.

For the initial row 0, Table 6.5 shows that the corresponding dual solution (y1, y2, y3) = (0, 0, 0) is infeasible because both surplus variables are negative. The first iteration succeeds in eliminating one of these negative values, but not the other. After two iterations, the optimality test is satisfied for the primal problem because all the dual variables and surplus variables are nonnegative. This dual solution (y1*, y2*, y3*) = (0, 3/2, 1) is optimal (as could be verified by applying the simplex method directly to the dual problem), so the optimal value of Z and W is Z* = 36 = W*.

■ TABLE 6.5 Row 0 and corresponding dual solution for each iteration for the Wyndor Glass Co. example

           Primal Problem                          Dual Problem
Iteration  Row 0                        y1    y2    y3    z1 − c1   z2 − c2    W
0          [−3, −5 | 0,  0,  0 |  0]     0     0     0      −3        −5        0
1          [−3,  0 | 0, 5/2, 0 | 30]     0    5/2    0      −3         0       30
2          [ 0,  0 | 0, 3/2, 1 | 36]     0    3/2    1       0         0       36

Summary of Primal-Dual Relationships

Now let us summarize the newly discovered key relationships between the primal and dual problems.

Weak duality property: If x is a feasible solution for the primal problem and y is a feasible solution for the dual problem, then cx ≤ yb.

For example, for the Wyndor Glass Co. problem, one feasible solution is x1 = 3, x2 = 3, which yields Z = cx = 24, and one feasible solution for the dual problem is y1 = 1, y2 = 1, y3 = 2, which yields a larger objective function value W = yb = 52. These are just sample feasible solutions for the two problems.
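The sample pair quoted above is easy to verify mechanically, and any feasible pair behaves the same way. A quick check (values from the text):

```python
import numpy as np

A = np.array([[1., 0.], [0., 2.], [3., 2.]])   # Wyndor data
b = np.array([4., 12., 18.])
c = np.array([3., 5.])

x = np.array([3., 3.])          # feasible for the primal: A x <= b, x >= 0
y = np.array([1., 1., 2.])      # feasible for the dual:  y A >= c, y >= 0
assert np.all(A @ x <= b) and np.all(y @ A >= c)

Z, W = c @ x, y @ b
print(Z, W)                     # weak duality: Z <= W
```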
For any such pair of feasible solutions, this inequality must hold because the maximum feasible value of Z = cx (36) equals the minimum feasible value of the dual objective function W = yb, which is our next property.

Strong duality property: If x* is an optimal solution for the primal problem and y* is an optimal solution for the dual problem, then cx* = y*b.

Thus, these two properties imply that cx < yb for feasible solutions if one or both of them are not optimal for their respective problems, whereas equality holds when both are optimal.

The weak duality property describes the relationship between any pair of solutions for the primal and dual problems where both solutions are feasible for their respective problems. At each iteration, the simplex method finds a specific pair of solutions for the two problems, where the primal solution is feasible but the dual solution is not feasible (except at the final iteration). Our next property describes this situation and the relationship between this pair of solutions.

Complementary solutions property: At each iteration, the simplex method simultaneously identifies a CPF solution x for the primal problem and a complementary solution y for the dual problem (found in row 0, the coefficients of the slack variables), where cx = yb. If x is not optimal for the primal problem, then y is not feasible for the dual problem.

To illustrate, after one iteration for the Wyndor Glass Co. problem, x1 = 0, x2 = 6, and y1 = 0, y2 = 5/2, y3 = 0, with cx = 30 = yb. This x is feasible for the primal problem, but this y is not feasible for the dual problem (since it violates the constraint y1 + 3y3 ≥ 3). The complementary solutions property also holds at the final iteration of the simplex method, where an optimal solution is found for the primal problem. However, more can be said about the complementary solution y in this case, as presented in the next property.
Complementary optimal solutions property: At the final iteration, the simplex method simultaneously identifies an optimal solution x* for the primal problem and a complementary optimal solution y* for the dual problem (found in row 0, the coefficients of the slack variables), where cx* = y*b. The yi* are the shadow prices for the primal problem.

For the example, the final iteration yields x1* = 2, x2* = 6, and y1* = 0, y2* = 3/2, y3* = 1, with cx* = 36 = y*b.

We shall take a closer look at some of these properties in Sec. 6.3. There you will see that the complementary solutions property can be extended considerably further. In particular, after slack and surplus variables are introduced to augment the respective problems, every basic solution in the primal problem has a complementary basic solution in the dual problem. We already have noted that the simplex method identifies the values of the surplus variables for the dual problem as zj − cj in Table 6.4. This result then leads to an additional complementary slackness property that relates the basic variables in one problem to the nonbasic variables in the other (Tables 6.7 and 6.8), but more about that later.

In Sec. 6.4, after describing how to construct the dual problem when the primal problem is not in our standard form, we discuss another very useful property, which is summarized as follows:

Symmetry property: For any primal problem and its dual problem, all relationships between them must be symmetric because the dual of this dual problem is this primal problem.

Therefore, all the preceding properties hold regardless of which of the two problems is labeled as the primal problem. (The direction of the inequality for the weak duality property does require that the primal problem be expressed or reexpressed in maximization form and the dual problem in minimization form.)
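The symmetry property can be illustrated with a small, hypothetical helper (not from the text) that forms the dual of a standard-form problem and re-expresses it in the same standard maximization form by negating; applying it twice then recovers the original primal data:

```python
import numpy as np

def dual_of(c, A, b):
    """Standard-form primal: max c.x s.t. A x <= b, x >= 0.
    Its dual, min b.y s.t. A^T y >= c, y >= 0, re-expressed in the
    same standard max form: max (-b).y s.t. (-A^T) y <= -c, y >= 0."""
    A = np.asarray(A, dtype=float)
    return -np.asarray(b, dtype=float), -A.T, -np.asarray(c, dtype=float)

c = np.array([3., 5.])                          # Wyndor data from Table 6.1
A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])

# The dual of the dual is the original primal:
c2, A2, b2 = dual_of(*dual_of(c, A, b))
assert np.allclose(c2, c) and np.allclose(A2, A) and np.allclose(b2, b)
```

The two negations cancel, which is exactly why the primal-dual relationships are symmetric in the two problems.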
Consequently, the simplex method can be applied to either problem, and it simultaneously will identify complementary solutions (ultimately a complementary optimal solution) for the other problem. So far, we have focused on the relationships between feasible or optimal solutions in the primal problem and corresponding solutions in the dual problem. However, it is possible that the primal (or dual) problem either has no feasible solutions or has feasible solutions but no optimal solution (because the objective function is unbounded). Our final property summarizes the primal-dual relationships under all these possibilities.

Duality theorem: The following are the only possible relationships between the primal and dual problems.

1. If one problem has feasible solutions and a bounded objective function (and so has an optimal solution), then so does the other problem, so both the weak and strong duality properties are applicable.
2. If one problem has feasible solutions and an unbounded objective function (and so no optimal solution), then the other problem has no feasible solutions.
3. If one problem has no feasible solutions, then the other problem has either no feasible solutions or an unbounded objective function.

Applications

As we have just implied, one important application of duality theory is that the dual problem can be solved directly by the simplex method in order to identify an optimal solution for the primal problem. We discussed in Sec. 4.8 that the number of functional constraints affects the computational effort of the simplex method far more than the number of variables does. If m > n, so that the dual problem has fewer functional constraints (n) than the primal problem (m), then applying the simplex method directly to the dual problem instead of the primal problem probably will achieve a substantial reduction in computational effort.

The weak and strong duality properties describe key relationships between the primal and dual problems.
One useful application is for evaluating a proposed solution for the primal problem. For example, suppose that x is a feasible solution that has been proposed for implementation and that a feasible solution y has been found by inspection for the dual problem such that cx = yb. In this case, x must be optimal without the simplex method even being applied! Even if cx < yb, yb still provides an upper bound on the optimal value of Z, so if yb − cx is small, intangible factors favoring x may lead to its selection without further ado.

One of the key applications of the complementary solutions property is its use in the dual simplex method presented in Sec. 8.1. This algorithm operates on the primal problem exactly as if the simplex method were being applied simultaneously to the dual problem, which can be done because of this property. Because the roles of row 0 and the right side in the simplex tableau have been reversed, the dual simplex method requires that row 0 begin and remain nonnegative while the right side begins with some negative values (subsequent iterations strive to reach a nonnegative right side). Consequently, this algorithm occasionally is used because it is more convenient to set up the initial tableau in this form than in the form required by the simplex method. Furthermore, it frequently is used for reoptimization (discussed in Sec. 4.7), because changes in the original model lead to the revised final tableau fitting this form. This situation is common for certain types of sensitivity analysis, as you will see in the next chapter.

In general terms, duality theory plays a central role in sensitivity analysis. This role is the topic of Sec. 6.5. Another important application is its use in the economic interpretation of the dual problem and the resulting insights for analyzing the primal problem. You already have seen one example when we discussed shadow prices in Sec. 4.7.
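The evaluation of a proposed solution can be sketched as a small certificate check: if a proposed x and an inspected y are both feasible and their objective values match, weak duality certifies x as optimal; otherwise yb − cx bounds how far x can be from optimal. (The helper name is hypothetical, not from the text.)

```python
import numpy as np

def optimality_gap(c, A, b, x, y):
    """Return an upper bound on Z* - c.x given a primal-feasible x
    and a dual-feasible y; a gap of 0 certifies that x is optimal."""
    c, A, b, x, y = map(np.asarray, (c, A, b, x, y))
    assert np.all(A @ x <= b + 1e-9) and np.all(x >= -1e-9), "x not primal-feasible"
    assert np.all(y @ A >= c - 1e-9) and np.all(y >= -1e-9), "y not dual-feasible"
    return y @ b - c @ x        # >= 0 by weak duality

gap = optimality_gap([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18],
                     x=[2, 6], y=[0, 1.5, 1])
print(gap)                      # 0: this x must be optimal
```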
Section 6.2 describes how this interpretation extends to the entire dual problem and then to the simplex method.

■ 6.2 ECONOMIC INTERPRETATION OF DUALITY

The economic interpretation of duality is based directly upon the typical interpretation for the primal problem (linear programming problem in our standard form) presented in Sec. 3.2. To refresh your memory, we have summarized this interpretation of the primal problem in Table 6.6.

■ TABLE 6.6 Economic interpretation of the primal problem

Quantity   Interpretation
xj         Level of activity j (j = 1, 2, . . . , n)
cj         Unit profit from activity j
Z          Total profit from all activities
bi         Amount of resource i available (i = 1, 2, . . . , m)
aij        Amount of resource i consumed by each unit of activity j

Interpretation of the Dual Problem

To see how this interpretation of the primal problem leads to an economic interpretation for the dual problem,¹ note in Table 6.4 that W is the value of Z (total profit) at the current iteration. Because

W = b1 y1 + b2 y2 + · · · + bm ym,

each bi yi can thereby be interpreted as the current contribution to profit by having bi units of resource i available for the primal problem. Thus,

The dual variable yi is interpreted as the contribution to profit per unit of resource i (i = 1, 2, . . . , m), when the current set of basic variables is used to obtain the primal solution.

In other words, the yi values (or yi* values in the optimal solution) are just the shadow prices discussed in Sec. 4.7. For example, when iteration 2 of the simplex method finds the optimal solution for the Wyndor problem, it also finds the optimal values of the dual variables (as shown in the bottom row of Table 6.5) to be y1* = 0, y2* = 3/2, and y3* = 1.

¹Actually, several slightly different interpretations have been proposed. The one presented here seems to us to be the most useful because it also directly interprets what the simplex method does in the primal problem.
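The shadow-price interpretation can be spot-checked by re-solving the primal with one bi increased by 1 and watching Z* move by yi*. A sketch using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3., 5.])                         # Wyndor data
A = np.array([[1., 0.], [0., 2.], [3., 2.]])
b = np.array([4., 12., 18.])

def z_star(rhs):
    """Optimal Z for the primal with right-hand sides rhs."""
    res = linprog(-c, A_ub=A, b_ub=rhs, bounds=[(0, None)] * 2)
    return -res.fun

base = z_star(b)                               # Z* = 36
increases = []
for i in range(3):
    bumped = b.copy()
    bumped[i] += 1                             # one extra hour in Plant i+1
    increases.append(z_star(bumped) - base)

print(increases)                               # should match (y1*, y2*, y3*)
```

The increases come out as (0, 3/2, 1), reproducing the shadow prices found graphically in Sec. 4.7.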
These are precisely the shadow prices found in Sec. 4.7 for this problem through graphical analysis. Recall that the resources for the Wyndor problem are the production capacities of the three plants being made available to the two new products under consideration, so that bi is the number of hours of production time per week being made available in Plant i for these new products, where i = 1, 2, 3. As discussed in Sec. 4.7, the shadow prices indicate that individually increasing any bi by 1 would increase the optimal value of the objective function (total weekly profit in units of thousands of dollars) by yi*. Thus, yi* can be interpreted as the contribution to profit per unit of resource i when using the optimal solution.

This interpretation of the dual variables leads to our interpretation of the overall dual problem. Specifically, since each unit of activity j in the primal problem consumes aij units of resource i,

Σ_{i=1}^{m} aij yi is interpreted as the current contribution to profit of the mix of resources that would be consumed if 1 unit of activity j were used (j = 1, 2, . . . , n).

For the Wyndor problem, 1 unit of activity j corresponds to producing 1 batch of product j per week, where j = 1, 2. The mix of resources consumed by producing 1 batch of product 1 is 1 hour of production time in Plant 1 and 3 hours in Plant 3. The corresponding mix per batch of product 2 is 2 hours each in Plants 2 and 3. Thus, y1 + 3y3 and 2y2 + 2y3 are interpreted as the current contributions to profit (in thousands of dollars per week) of these respective mixes of resources per batch produced per week of the respective products. For each activity j, this same mix of resources (and more) probably can be used in other ways as well, but no alternative use should be considered if it is less profitable than 1 unit of activity j.
Since cj is interpreted as the unit profit from activity j, each functional constraint in the dual problem is interpreted as follows:

Σ_{i=1}^{m} aij yi ≥ cj says that the actual contribution to profit of the above mix of resources must be at least as much as if they were used by 1 unit of activity j; otherwise, we would not be making the best possible use of these resources.

For the Wyndor problem, the unit profits (in thousands of dollars per week) are c1 = 3 and c2 = 5, so the dual functional constraints with this interpretation are y1 + 3y3 ≥ 3 and 2y2 + 2y3 ≥ 5. Similarly, the interpretation of the nonnegativity constraints is the following:

yi ≥ 0 says that the contribution to profit of resource i (i = 1, 2, . . . , m) must be nonnegative; otherwise, it would be better not to use this resource at all.

The objective

Minimize   W = Σ_{i=1}^{m} bi yi

can be viewed as minimizing the total implicit value of the resources consumed by the activities. For the Wyndor problem, the total implicit value (in thousands of dollars per week) of the resources consumed by the two products is W = 4y1 + 12y2 + 18y3.

This interpretation can be sharpened somewhat by differentiating between basic and nonbasic variables in the primal problem for any given BF solution (x1, x2, . . . , xn+m). Recall that the basic variables (the only variables whose values can be nonzero) always have a coefficient of zero in row 0. Therefore, referring again to Table 6.4 and the accompanying equation for zj, we see that

Σ_{i=1}^{m} aij yi = cj,   if xj > 0   (j = 1, 2, . . . , n),
yi = 0,                    if xn+i > 0   (i = 1, 2, . . . , m).

(This is one version of the complementary slackness property discussed in Sec. 6.3.) The economic interpretation of the first statement is that whenever an activity j operates at a strictly positive level (xj > 0), the marginal value of the resources it consumes must equal (as opposed to exceeding) the unit profit from this activity.
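Both complementary slackness statements can be verified mechanically for the optimal Wyndor pair. A sketch, with x* and y* as given for this example:

```python
import numpy as np

A = np.array([[1., 0.], [0., 2.], [3., 2.]])   # Wyndor data
b = np.array([4., 12., 18.])
c = np.array([3., 5.])
x_star = np.array([2., 6.])                    # optimal primal solution
y_star = np.array([0., 1.5, 1.])               # optimal dual solution

slack = b - A @ x_star                         # (x3, x4, x5) = (2, 0, 0)
for j in range(2):                             # used activities earn exactly c_j
    if x_star[j] > 0:
        assert abs(y_star @ A[:, j] - c[j]) < 1e-9
for i in range(3):                             # unexhausted resources are free goods
    if slack[i] > 0:
        assert abs(y_star[i]) < 1e-9
print("complementary slackness holds")
```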
The second statement implies that the marginal value of resource i is zero (yi = 0) whenever the supply of this resource is not exhausted by the activities (xn+i > 0). In economic terminology, such a resource is a "free good"; the price of goods that are oversupplied must drop to zero by the law of supply and demand. This fact is what justifies interpreting the objective for the dual problem as minimizing the total implicit value of the resources consumed, rather than the resources allocated.

To illustrate these two statements, consider the optimal BF solution (2, 6, 2, 0, 0) for the Wyndor problem. The basic variables are x1, x2, and x3, so their coefficients in row 0 are zero, as shown in the bottom row of Table 6.5. This bottom row also gives the corresponding dual solution: y1* = 0, y2* = 3/2, y3* = 1, with surplus variables (z1 − c1) = 0 and (z2 − c2) = 0. Since x1 > 0 and x2 > 0, both these surplus variables and direct calculations indicate that y1* + 3y3* = c1 = 3 and 2y2* + 2y3* = c2 = 5. Therefore, the value of the resources consumed per batch of the respective products produced does indeed equal the respective unit profits. The slack variable for the constraint on the amount of Plant 1 capacity used is x3 > 0, so the marginal value of adding any Plant 1 capacity would be zero (y1* = 0).

Interpretation of the Simplex Method

The interpretation of the dual problem also provides an economic interpretation of what the simplex method does in the primal problem. The goal of the simplex method is to find how to use the available resources in the most profitable feasible way. To attain this goal, we must reach a BF solution that satisfies all the requirements on profitable use of the resources (the constraints of the dual problem). These requirements comprise the condition for optimality for the algorithm. For any given BF solution, the requirements (dual constraints) associated with the basic variables are automatically satisfied (with equality).
However, those associated with nonbasic variables may or may not be satisfied. In particular, if an original variable xj is nonbasic so that activity j is not used, then the current contribution to profit of the resources that would be required to undertake each unit of activity j,

Σ_{i=1}^{m} aij yi,

may be smaller than, larger than, or equal to the unit profit cj obtainable from the activity. If it is smaller, so that zj − cj < 0 in row 0 of the simplex tableau, then these resources can be used more profitably by initiating this activity. If it is larger (zj − cj > 0), then these resources already are being assigned elsewhere in a more profitable way, so they should not be diverted to activity j. If zj − cj = 0, there would be no change in profitability by initiating activity j.

Similarly, if a slack variable xn+i is nonbasic so that the total allocation bi of resource i is being used, then yi is the current contribution to profit of this resource on a marginal basis. Hence, if yi < 0, profit can be increased by cutting back on the use of this resource (i.e., increasing xn+i). If yi > 0, it is worthwhile to continue fully using this resource, whereas this decision does not affect profitability if yi = 0.

Therefore, what the simplex method does is to examine all the nonbasic variables in the current BF solution to see which ones can provide a more profitable use of the resources by being increased. If none can, so that no feasible shifts or reductions in the current proposed use of the resources can increase profit, then the current solution must be optimal. If one or more can, the simplex method selects the variable that, if increased by 1, would improve the profitability of the use of the resources the most. It then actually increases this variable (the entering basic variable) as much as it can until the marginal values of the resources change.
This increase results in a new BF solution with a new row 0 (dual solution), and the whole process is repeated.

The economic interpretation of the dual problem considerably expands our ability to analyze the primal problem. However, you already have seen in Sec. 6.1 that this interpretation is just one ramification of the relationships between the two problems. In Sec. 6.3, we delve into these relationships more deeply.

■ 6.3 PRIMAL-DUAL RELATIONSHIPS

Because the dual problem is a linear programming problem, it also has corner-point solutions. Furthermore, by using the augmented form of the problem, we can express these corner-point solutions as basic solutions. Because the functional constraints have the ≥ form, this augmented form is obtained by subtracting the surplus (rather than adding the slack) from the left-hand side of each constraint j (j = 1, 2, . . . , n).² This surplus is

zj − cj = Σ_{i=1}^{m} aij yi − cj,   for j = 1, 2, . . . , n.

Thus, zj − cj plays the role of the surplus variable for constraint j (or its slack variable if the constraint is multiplied through by −1). Therefore, augmenting each corner-point solution (y1, y2, . . . , ym) yields a basic solution (y1, y2, . . . , ym, z1 − c1, z2 − c2, . . . , zn − cn) by using this expression for zj − cj. Since the augmented form of the dual problem has n functional constraints and n + m variables, each basic solution has n basic variables and m nonbasic variables. (Note how m and n reverse their previous roles here because, as Table 6.3 indicates, dual constraints correspond to primal variables and dual variables correspond to primal constraints.)

²You might wonder why we do not also introduce artificial variables into these constraints as discussed in Sec. 4.6. The reason is that these variables have no purpose other than to change the feasible region temporarily as a convenience in starting the simplex method.
We are not interested now in applying the simplex method to the dual problem, and we do not want to change its feasible region.

Complementary Basic Solutions

One of the important relationships between the primal and dual problems is a direct correspondence between their basic solutions. The key to this correspondence is row 0 of the simplex tableau for the primal basic solution, such as shown in Table 6.4 or 6.5. Such a row 0 can be obtained for any primal basic solution, feasible or not, by using the formulas given in the bottom part of Table 5.8.

Note again in Tables 6.4 and 6.5 how a complete solution for the dual problem (including the surplus variables) can be read directly from row 0. Thus, because of its coefficient in row 0, each variable in the primal problem has an associated variable in the dual problem, as summarized in Table 6.7, first for any problem and then for the Wyndor problem.

A key insight here is that the dual solution read from row 0 must also be a basic solution! The reason is that the m basic variables for the primal problem are required to have a coefficient of zero in row 0, which thereby requires the m associated dual variables to be zero, i.e., nonbasic variables for the dual problem. The values of the remaining n (basic) variables then will be the simultaneous solution to the system of equations given at the beginning of this section. In matrix form, this system of equations is z − c = yA − c, and the fundamental insight of Sec. 5.3 actually identifies its solution for z − c and y as being the corresponding entries in row 0.

Because of the symmetry property quoted in Sec. 6.1 (and the direct association between variables shown in Table 6.7), the correspondence between basic solutions in the primal and dual problems is a symmetric one. Furthermore, a pair of complementary basic solutions has the same objective function value, shown as W in Table 6.4.
Let us now summarize our conclusions about the correspondence between primal and dual basic solutions, where the first property extends the complementary solutions property of Sec. 6.1 to the augmented forms of the two problems and then to any basic solution (feasible or not) in the primal problem.

Complementary basic solutions property: Each basic solution in the primal problem has a complementary basic solution in the dual problem, where their respective objective function values (Z and W) are equal. Given row 0 of the simplex tableau for the primal basic solution, the complementary dual basic solution (y, z − c) is found as shown in Table 6.4.

The next property shows how to identify the basic and nonbasic variables in this complementary basic solution.

Complementary slackness property: Given the association between variables in Table 6.7, the variables in the primal basic solution and the complementary dual basic solution satisfy the complementary slackness relationship shown in Table 6.8. Furthermore, this relationship is a symmetric one, so that these two basic solutions are complementary to each other.

■ TABLE 6.7 Association between variables in primal and dual problems

                  Primal Variable                  Associated Dual Variable
Any problem       (Decision variable) xj           zj − cj (surplus variable), j = 1, 2, . . . , n
                  (Slack variable) xn+i            yi (decision variable), i = 1, 2, . . . , m
Wyndor problem    Decision variables: x1, x2       Surplus variables: z1 − c1, z2 − c2
                  Slack variables: x3, x4, x5      Decision variables: y1, y2, y3

■ TABLE 6.8 Complementary slackness relationship for complementary basic solutions

Primal Variable            Associated Dual Variable
Basic (m variables)        Nonbasic (m variables)
Nonbasic (n variables)     Basic (n variables)

■ TABLE 6.9 Complementary basic solutions for the Wyndor Glass Co. example

            Primal Problem                              Dual Problem
No.    Basic Solution        Feasible?   Z = W   Feasible?   Basic Solution
1      (0, 0, 4, 12, 18)     Yes            0       No       (0, 0, 0, −3, −5)
2      (4, 0, 0, 12, 6)      Yes           12       No       (3, 0, 0, 0, −5)
3      (6, 0, −2, 12, 0)     No            18       No       (0, 0, 1, 0, −3)
4      (4, 3, 0, 6, 0)       Yes           27       No       (−9/2, 0, 5/2, 0, 0)
5      (0, 6, 4, 0, 6)       Yes           30       No       (0, 5/2, 0, −3, 0)
6      (2, 6, 2, 0, 0)       Yes           36       Yes      (0, 3/2, 1, 0, 0)
7      (4, 6, 0, 0, −6)      No            42       Yes      (3, 5/2, 0, 0, 0)
8      (0, 9, 4, −6, 0)      No            45       Yes      (0, 0, 5/2, 9/2, 0)

The reason for using the name complementary slackness for this latter property is that it says (in part) that for each pair of associated variables, if one of them has slack in its nonnegativity constraint (a basic variable > 0), then the other one must have no slack (a nonbasic variable = 0). We mentioned in Sec. 6.2 that this property has a useful economic interpretation for linear programming problems.

Example. To illustrate these two properties, again consider the Wyndor Glass Co. problem of Sec. 3.1. All eight of its basic solutions (five feasible and three infeasible) are shown in Table 6.9. Thus, its dual problem (see Table 6.1) also must have eight basic solutions, each complementary to one of these primal solutions, as shown in Table 6.9. The three BF solutions obtained by the simplex method for the primal problem are the first, fifth, and sixth primal solutions shown in Table 6.9. You already saw in Table 6.5 how the complementary basic solutions for the dual problem can be read directly from row 0, starting with the coefficients of the slack variables and then the original variables. The other dual basic solutions also could be identified in this way by constructing row 0 for each of the other primal basic solutions, using the formulas given in the bottom part of Table 5.8.
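The complementary slackness property can also be applied computationally: zero out the dual variables associated with the basic primal variables (Tables 6.7 and 6.8) and solve the two remaining augmented dual constraints. The function below, including its variable ordering (y1, y2, y3, z1 − c1, z2 − c2), is our own illustration for the Wyndor problem:

```python
def complementary_dual(primal_basic):
    """Complementary dual basic solution for the Wyndor problem, ordered
    (y1, y2, y3, z1 - c1, z2 - c2).  The augmented dual constraints are
    y1 + 3y3 - (z1 - c1) = 3  and  2y2 + 2y3 - (z2 - c2) = 5."""
    D = [[1, 0, 3, -1, 0],
         [0, 2, 2, 0, -1]]
    rhs = [3, 5]
    assoc = {0: 3, 1: 4, 2: 0, 3: 1, 4: 2}    # primal var -> associated dual var (Table 6.7)
    zeroed = {assoc[j] for j in primal_basic}  # Table 6.8: these duals are nonbasic (= 0)
    unknowns = [k for k in range(5) if k not in zeroed]   # exactly two remain
    # Solve the resulting 2x2 system by Cramer's rule
    M = [[D[r][k] for k in unknowns] for r in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    dual = [0.0] * 5
    dual[unknowns[0]] = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
    dual[unknowns[1]] = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
    return dual

# Primal basic solution (4, 6, 0, 0, -6): basic variables x1, x2, x5
print(complementary_dual([0, 1, 4]))   # [3.0, 2.5, 0.0, 0.0, 0.0]
```

This reproduces the entry (3, 5/2, 0, 0, 0) in the next-to-last row of Table 6.9, and applying it to the optimal basis {x1, x2, x3} yields (0, 3/2, 1, 0, 0).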
Alternatively, for each primal basic solution, the complementary slackness property can be used to identify the basic and nonbasic variables for the complementary dual basic solution, so that the system of equations given at the beginning of the section can be solved directly to obtain this complementary solution. For example, consider the next-to-last primal basic solution in Table 6.9, (4, 6, 0, 0, −6). Note that x1, x2, and x5 are basic variables, since these variables are not equal to 0. Table 6.7 indicates that the associated dual variables are (z1 − c1), (z2 − c2), and y3. Table 6.8 specifies that these associated dual variables are nonbasic variables in the complementary basic solution, so

    z1 − c1 = 0,    z2 − c2 = 0,    y3 = 0.

Consequently, the augmented form of the functional constraints in the dual problem,

    y1        + 3y3 − (z1 − c1) = 3
         2y2 + 2y3 − (z2 − c2) = 5,

reduce to

    y1        + 0 − 0 = 3
         2y2 + 0 − 0 = 5,

so that y1 = 3 and y2 = 5/2. Combining these values with the values of 0 for the nonbasic variables gives the basic solution (3, 5/2, 0, 0, 0), shown in the rightmost column and next-to-last row of Table 6.9. Note that this dual solution is feasible for the dual problem because all five variables satisfy the nonnegativity constraints. Finally, notice that Table 6.9 demonstrates that (0, 3/2, 1, 0, 0) is the optimal solution for the dual problem, because it is the basic feasible solution with minimal W (36).

Relationships between Complementary Basic Solutions

We now turn our attention to the relationships between complementary basic solutions, beginning with their feasibility relationships. The middle columns in Table 6.9 provide some valuable clues. For the pairs of complementary solutions, notice how the yes or no answers on feasibility also satisfy a complementary relationship in most cases. In particular, with one exception, whenever one solution is feasible, the other is not.
(It also is possible for neither solution to be feasible, as happened with the third pair.) The one exception is the sixth pair, where the primal solution is known to be optimal. The explanation is suggested by the Z = W column. Because the sixth dual solution also is optimal (by the complementary optimal solutions property), with W = 36, the first five dual solutions cannot be feasible because W < 36 (remember that the dual problem objective is to minimize W). By the same token, the last two primal solutions cannot be feasible because Z > 36. This explanation is further supported by the strong duality property that optimal primal and dual solutions have Z = W.

Next, let us state the extension of the complementary optimal solutions property of Sec. 6.1 for the augmented forms of the two problems.

Complementary optimal basic solutions property: An optimal basic solution in the primal problem has a complementary optimal basic solution in the dual problem, where their respective objective function values (Z and W) are equal. Given row 0 of the simplex tableau for the optimal primal solution, the complementary optimal dual solution (y*, z* − c) is found as shown in Table 6.4.

To review the reasoning behind this property, note that the dual solution (y*, z* − c) must be feasible for the dual problem because the condition for optimality for the primal problem requires that all these dual variables (including surplus variables) be nonnegative. Since this solution is feasible, it must be optimal for the dual problem by the weak duality property (since W = Z, so y*b = cx*, where x* is optimal for the primal problem).

Basic solutions can be classified according to whether they satisfy each of two conditions. One is the condition for feasibility, namely, whether all the variables (including slack variables) in the augmented solution are nonnegative.
The other is the condition for optimality, namely, whether all the coefficients in row 0 (i.e., all the variables in the complementary basic solution) are nonnegative. Our names for the different types of basic solutions are summarized in Table 6.10. For example, in Table 6.9, primal basic solutions 1, 2, 4, and 5 are suboptimal, 6 is optimal, 7 and 8 are superoptimal, and 3 is neither feasible nor superoptimal.

■ FIGURE 6.1 Range of possible values of Z = W for certain types of complementary basic solutions. [Figure: along the Z = Σ cj xj axis for the primal problem, suboptimal solutions lie below Z* and superoptimal solutions above it; along the W = Σ bi yi axis for the dual problem, superoptimal solutions lie below W* and suboptimal solutions above it, with Z* = W* at the optimal pair.]

■ TABLE 6.10 Classification of basic solutions

                Satisfies Condition for Optimality?
Feasible?       Yes              No
Yes             Optimal          Suboptimal
No              Superoptimal     Neither feasible nor superoptimal

■ TABLE 6.11 Relationships between complementary basic solutions

                                                                            Both Basic Solutions
Primal Basic Solution                 Complementary Dual Basic Solution     Primal Feasible?   Dual Feasible?
Suboptimal                            Superoptimal                          Yes                No
Optimal                               Optimal                               Yes                Yes
Superoptimal                          Suboptimal                            No                 Yes
Neither feasible nor superoptimal     Neither feasible nor superoptimal     No                 No

Given these definitions, the general relationships between complementary basic solutions are summarized in Table 6.11. The resulting range of possible (common) values for the objective functions (Z = W) for the first three pairs given in Table 6.11 (the last pair can have any value) is shown in Fig. 6.1. Thus, while the simplex method is dealing directly with suboptimal basic solutions and working toward optimality in the primal problem, it is simultaneously dealing indirectly with complementary superoptimal solutions and working toward feasibility in the dual problem.
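These classifications can be checked mechanically. The snippet below (our own illustration) transcribes the eight complementary pairs from Table 6.9 and applies the definitions in Table 6.10: feasibility means all components are nonnegative, and the optimality condition holds exactly when the complementary dual solution is feasible.

```python
# The eight complementary basic solution pairs transcribed from Table 6.9
pairs = [
    ((0, 0, 4, 12, 18), (0, 0, 0, -3, -5)),
    ((4, 0, 0, 12, 6),  (3, 0, 0, 0, -5)),
    ((6, 0, -2, 12, 0), (0, 0, 1, 0, -3)),
    ((4, 3, 0, 6, 0),   (-4.5, 0, 2.5, 0, 0)),
    ((0, 6, 4, 0, 6),   (0, 2.5, 0, -3, 0)),
    ((2, 6, 2, 0, 0),   (0, 1.5, 1, 0, 0)),
    ((4, 6, 0, 0, -6),  (3, 2.5, 0, 0, 0)),
    ((0, 9, 4, -6, 0),  (0, 0, 2.5, 4.5, 0)),
]

def classify(primal, dual):
    """Table 6.10: feasible iff all components nonnegative; the optimality
    condition holds iff the complementary dual solution is feasible."""
    feasible = all(v >= 0 for v in primal)
    optimality = all(v >= 0 for v in dual)
    if feasible and optimality:
        return "optimal"
    if feasible:
        return "suboptimal"
    if optimality:
        return "superoptimal"
    return "neither feasible nor superoptimal"

labels = [classify(p, d) for p, d in pairs]
print(labels)   # solutions 1, 2, 4, 5 suboptimal; 3 neither; 6 optimal; 7, 8 superoptimal

# Every pair has equal objective values: Z = 3x1 + 5x2 and W = 4y1 + 12y2 + 18y3
for p, d in pairs:
    assert 3 * p[0] + 5 * p[1] == 4 * d[0] + 12 * d[1] + 18 * d[2]
```

The loop at the end verifies the complementary basic solutions property (Z = W) for all eight pairs.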
Conversely, it sometimes is more convenient (or necessary) to work directly with superoptimal basic solutions and to move toward feasibility in the primal problem, which is the purpose of the dual simplex method described in Sec. 8.1.

The third and fourth columns of Table 6.11 introduce two other common terms that are used to describe a pair of complementary basic solutions. The two solutions are said to be primal feasible if the primal basic solution is feasible, whereas they are called dual feasible if the complementary dual basic solution is feasible for the dual problem. Using this terminology, the simplex method deals with primal feasible solutions and strives toward achieving dual feasibility as well. When this is achieved, the two complementary basic solutions are optimal for their respective problems. These relationships prove very useful, particularly in sensitivity analysis, as you will see in the next chapter.

■ 6.4 ADAPTING TO OTHER PRIMAL FORMS

Thus far it has been assumed that the model for the primal problem is in our standard form. However, we indicated at the beginning of the chapter that any linear programming problem, whether in our standard form or not, possesses a dual problem. Therefore, this section focuses on how the dual problem changes for other primal forms. Each nonstandard form was discussed in Sec. 4.6, and we pointed out how it is possible to convert each one to an equivalent standard form if so desired. These conversions are summarized in Table 6.12. Hence, you always have the option of converting any model to our standard form and then constructing its dual problem in the usual way. To illustrate, we do this for our standard dual problem (it must have a dual also) in Table 6.13. Note that what we end up with is just our standard primal problem! Since any pair of primal and dual problems can be converted to these forms, this fact implies that the dual of the dual problem always is the primal problem.
Therefore, for any primal problem and its dual problem, all relationships between them must be symmetric. This is just the symmetry property already stated in Sec. 6.1 (without proof), but now Table 6.13 demonstrates why it holds. One consequence of the symmetry property is that all the statements made earlier in the chapter about the relationships of the dual problem to the primal problem also hold in reverse.

■ TABLE 6.12 Conversions to standard form for linear programming models

Nonstandard Form                     Equivalent Standard Form
Minimize Z                           Maximize (−Z)
Σj aij xj ≥ bi                       −Σj aij xj ≤ −bi
Σj aij xj = bi                       Σj aij xj ≤ bi  and  −Σj aij xj ≤ −bi
xj unconstrained in sign             xj = xj⁺ − xj⁻,  with xj⁺ ≥ 0 and xj⁻ ≥ 0

■ TABLE 6.13 Constructing the dual of the dual problem

Dual Problem                         Converted to Standard Form
Minimize  W = yb,               →    Maximize  (−W) = −yb,
subject to   yA ≥ c                  subject to   −yA ≤ −c
and   y ≥ 0.                         and   y ≥ 0.

Its Dual Problem                     Converted to Standard Form
Minimize  (−Z) = −cx,           →    Maximize  Z = cx,
subject to   −Ax ≥ −b                subject to   Ax ≤ b
and   x ≥ 0.                         and   x ≥ 0.

Another consequence is that it is immaterial which problem is called the primal and which is called the dual. In practice, you might see a linear programming problem fitting our standard form being referred to as the dual problem. The convention is that the model formulated to fit the actual problem is called the primal problem, regardless of its form.

Our illustration of how to construct the dual problem for a nonstandard primal problem did not involve either equality constraints or variables unconstrained in sign. Actually, for these two forms, a shortcut is available. It is possible to show (see Probs. 6.4-7 and 6.4-2a) that an equality constraint in the primal problem should be treated just like a ≤ constraint in constructing the dual problem except that the nonnegativity constraint for the corresponding dual variable should be deleted (i.e., this variable is unconstrained in sign).
By the symmetry property, deleting a nonnegativity constraint in the primal problem affects the dual problem only by changing the corresponding inequality constraint to an equality constraint.

Another shortcut involves functional constraints in ≥ form for a maximization problem. The straightforward (but longer) approach would begin by converting each such constraint to ≤ form:

    Σj aij xj ≥ bi   →   −Σj aij xj ≤ −bi.

Constructing the dual problem in the usual way then gives −aij as the coefficient of yi in functional constraint j (which has ≥ form) and a coefficient of −bi in the objective function (which is to be minimized), where yi also has a nonnegativity constraint yi ≥ 0. Now suppose we define a new variable yi′ = −yi. The changes caused by expressing the dual problem in terms of yi′ instead of yi are that (1) the coefficients of the variable become aij for functional constraint j and bi for the objective function and (2) the constraint on the variable becomes yi′ ≤ 0 (a nonpositivity constraint). The shortcut is to use yi′ instead of yi as a dual variable so that the parameters in the original constraint (aij and bi) immediately become the coefficients of this variable in the dual problem.

Here is a useful mnemonic device for remembering what the forms of dual constraints should be. With a maximization problem, it might seem sensible for a functional constraint to be in ≤ form, slightly odd to be in = form, and somewhat bizarre to be in ≥ form. Similarly, for a minimization problem, it might seem sensible to be in ≥ form, slightly odd to be in = form, and somewhat bizarre to be in ≤ form. For the constraint on an individual variable in either kind of problem, it might seem sensible to have a nonnegativity constraint, somewhat odd to have no constraint (so the variable is unconstrained in sign), and quite bizarre for the variable to be restricted to be less than or equal to zero.
Now recall the correspondence between entities in the primal and dual problems indicated in Table 6.3; namely, functional constraint i in one problem corresponds to variable i in the other problem, and vice versa. The sensible-odd-bizarre method, or SOB method for short, says that the form of a functional constraint or the constraint on a variable in the dual problem should be sensible, odd, or bizarre, depending on whether the form for the corresponding entity in the primal problem is sensible, odd, or bizarre. Here is a summary.

The SOB Method for Determining the Form of Constraints in the Dual.³

1. Formulate the primal problem in either maximization form or minimization form, and then the dual problem automatically will be in the other form.
2. Label the different forms of functional constraints and of constraints on individual variables in the primal problem as being sensible, odd, or bizarre according to Table 6.14. The labeling of the functional constraints depends on whether the problem is a maximization problem (use the second column) or a minimization problem (use the third column).
3. For each constraint on an individual variable in the dual problem, use the form that has the same label as for the functional constraint in the primal problem that corresponds to this dual variable (as indicated by Table 6.3).
4. For each functional constraint in the dual problem, use the form that has the same label as for the constraint on the corresponding individual variable in the primal problem (as indicated by Table 6.3).

The arrows between the second and third columns of Table 6.14 spell out the correspondence between the forms of constraints in the primal and dual. Note that the correspondence always is between a functional constraint in one problem and a constraint on an individual variable in the other problem.
Since the primal problem can be either a maximization or minimization problem, where the dual then will be of the opposite type, the second column of the table gives the form for whichever is the maximization problem and the third column gives the form for the other problem (a minimization problem). To illustrate, consider the radiation therapy example presented at the beginning of Sec. 3.4. To show the conversion in both directions in Table 6.14, we begin with the maximization form of this model as the primal problem, before using the (original) minimization form. The primal problem in maximization form is shown on the left side of Table 6.15. By using the second column of Table 6.14 to represent this problem, the arrows in this table indicate the form of the dual problem in the third column. These same arrows are used in Table 6.15 to show the resulting dual problem. (Because of these arrows, we have placed the functional constraints last in the dual problem rather than in their usual top position.)

³This particular mnemonic device (and a related one) for remembering what the forms of the dual constraints should be has been suggested by Arthur T. Benjamin, a mathematics professor at Harvey Mudd College. An interesting and wonderfully bizarre fact about Professor Benjamin himself is that he is one of the world's great human calculators who can perform such feats as quickly multiplying six-digit numbers in his head. For a further discussion and derivation of the SOB method, see A. T. Benjamin: "Sensible Rules for Remembering Duals—The S-O-B Method," SIAM Review, 37(1): 85–87, 1995.
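The four SOB steps can be written out as a short routine. The sketch below is our own encoding of the correspondences in Table 6.14, with constraint and variable forms represented as strings; it is checked against both versions of the radiation therapy example:

```python
# Labels from Table 6.14.  For the maximization side: constraints <= (S), = (O),
# >= (B); variables >= 0 (S), unconstrained (O), <= 0 (B).  The dual entity
# always carries the same label as the primal entity it corresponds to.
MAX_CON = {"<=": "S", "=": "O", ">=": "B"}
MIN_CON = {">=": "S", "=": "O", "<=": "B"}
VAR     = {">=0": "S", "free": "O", "<=0": "B"}

def sob_dual(sense, con_forms, var_forms):
    """Return (dual sense, dual variable forms, dual constraint forms) for a
    primal problem with the given functional-constraint and variable forms."""
    con_label = MAX_CON if sense == "max" else MIN_CON       # step 2
    dual_con_label = MIN_CON if sense == "max" else MAX_CON  # dual is the other type
    inv_var = {lab: form for form, lab in VAR.items()}
    inv_con = {lab: form for form, lab in dual_con_label.items()}
    dual_vars = [inv_var[con_label[f]] for f in con_forms]   # step 3
    dual_cons = [inv_con[VAR[f]] for f in var_forms]         # step 4
    return ("min" if sense == "max" else "max", dual_vars, dual_cons)

# Radiation therapy example, maximization form (Table 6.15):
print(sob_dual("max", ["<=", "=", ">="], [">=0", ">=0"]))
# -> ('min', ['>=0', 'free', '<=0'], ['>=', '>='])

# Original minimization form (Table 6.16):
print(sob_dual("min", ["<=", "=", ">="], [">=0", ">=0"]))
# -> ('max', ['<=0', 'free', '>=0'], ['<=', '<='])
```

The routine only determines the forms of the dual constraints; the coefficients themselves come from the primal data in the usual way.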
■ TABLE 6.14 Corresponding primal-dual forms

Label       Primal Problem (or Dual Problem)         Dual Problem (or Primal Problem)
            Maximize Z (or W)                        Minimize W (or Z)
Sensible    Constraint i: ≤ form               ←→    Variable yi (or xi): yi ≥ 0
Odd         Constraint i: = form               ←→    Variable yi (or xi): unconstrained
Bizarre     Constraint i: ≥ form               ←→    Variable yi (or xi): yi′ ≤ 0
Sensible    Variable xj (or yj): xj ≥ 0        ←→    Constraint j: ≥ form
Odd         Variable xj (or yj): unconstrained ←→    Constraint j: = form
Bizarre     Variable xj (or yj): xj′ ≤ 0       ←→    Constraint j: ≤ form

■ TABLE 6.15 One primal-dual form for the radiation therapy example

Primal Problem                               Dual Problem
Maximize  −Z = −0.4x1 − 0.5x2,               Minimize  W = 2.7y1 + 6y2 + 6y3′,
subject to                                   subject to
  (S) 0.3x1 + 0.1x2 ≤ 2.7    ←→    y1 ≥ 0 (S)
  (O) 0.5x1 + 0.5x2 = 6      ←→    y2 unconstrained in sign (O)
  (B) 0.6x1 + 0.4x2 ≥ 6      ←→    y3′ ≤ 0 (B)
and                                          and
  (S) x1 ≥ 0    ←→    0.3y1 + 0.5y2 + 0.6y3′ ≥ −0.4 (S)
  (S) x2 ≥ 0    ←→    0.1y1 + 0.5y2 + 0.4y3′ ≥ −0.5 (S)

Beside each constraint in both problems, we have inserted (in parentheses) an S, O, or B to label the form as sensible, odd, or bizarre. As prescribed by the SOB method, the label for each dual constraint always is the same as for the corresponding primal constraint. However, there was no need (other than for illustrative purposes) to convert the primal problem to maximization form. Using the original minimization form, the equivalent primal problem is shown on the left side of Table 6.16. Now we use the third column of Table 6.14 to represent this primal problem, where the arrows indicate the form of the dual problem in the second column. These same arrows in Table 6.16 show the resulting dual problem on the right side. Again, the labels on the constraints show the application of the SOB method. Just as the primal problems in Tables 6.15 and 6.16 are equivalent, the two dual problems also are completely equivalent.
The key to recognizing this equivalency lies in the fact that the variables in each version of the dual problem are the negative of those in the other version (y1′ = −y1, y2′ = −y2, y3 = −y3′). Therefore, for each version, if the variables in the other version are used instead, and if both the objective function and the constraints are multiplied through by −1, then the other version is obtained. (Problem 6.4-5 asks you to verify this.) If you would like to see another example of using the SOB method to construct a dual problem, one is given in the Solved Examples section of the book's website.

If the simplex method is to be applied to either a primal or a dual problem that has any variables constrained to be nonpositive (for example, y3′ ≤ 0 in the dual problem of Table 6.15), this variable may be replaced by its nonnegative counterpart (for example, y3 = −y3′).

■ TABLE 6.16 The other primal-dual form for the radiation therapy example

Primal Problem                               Dual Problem
Minimize  Z = 0.4x1 + 0.5x2,                 Maximize  W = 2.7y1′ + 6y2′ + 6y3,
subject to                                   subject to
  (B) 0.3x1 + 0.1x2 ≤ 2.7    ←→    y1′ ≤ 0 (B)
  (O) 0.5x1 + 0.5x2 = 6      ←→    y2′ unconstrained in sign (O)
  (S) 0.6x1 + 0.4x2 ≥ 6      ←→    y3 ≥ 0 (S)
and                                          and
  (S) x1 ≥ 0    ←→    0.3y1′ + 0.5y2′ + 0.6y3 ≤ 0.4 (S)
  (S) x2 ≥ 0    ←→    0.1y1′ + 0.5y2′ + 0.4y3 ≤ 0.5 (S)

When artificial variables are used to help the simplex method solve a primal problem, the duality interpretation of row 0 of the simplex tableau is the following: Since artificial variables play the role of slack variables, their coefficients in row 0 now provide the values of the corresponding dual variables in the complementary basic solution for the dual problem. Since artificial variables are used to replace the real problem with a more convenient artificial problem, this dual problem actually is the dual of the artificial problem.
However, after all the artificial variables become nonbasic, we are back to the real primal and dual problems. With the two-phase method, the artificial variables would need to be retained in phase 2 in order to read off the complete dual solution from row 0. With the Big M method, since M has been added initially to the coefficient of each artificial variable in row 0, the current value of each corresponding dual variable is the current coefficient of this artificial variable minus M. For example, look at row 0 in the final simplex tableau for the radiation therapy example, given at the bottom of Table 4.12. After M is subtracted from the coefficients of the artificial variables x̄4 and x̄6, the optimal solution for the corresponding dual problem given in Table 6.15 is read from the coefficients of x3, x̄4, and x̄6 as (y1, y2, y3′) = (0.5, −1.1, 0). As usual, the surplus variables for the two functional constraints are read from the coefficients of x1 and x2 as z1 − c1 = 0 and z2 − c2 = 0.

■ 6.5 THE ROLE OF DUALITY THEORY IN SENSITIVITY ANALYSIS

As described further in the next chapter, sensitivity analysis basically involves investigating the effect on the optimal solution if changes occur in the values of the model parameters aij, bi, and cj. However, changing parameter values in the primal problem also changes the corresponding values in the dual problem. Therefore, you have your choice of which problem to use to investigate each change. Because of the primal-dual relationships presented in Secs. 6.1 and 6.3 (especially the complementary basic solutions property), it is easy to move back and forth between the two problems as desired. In some cases, it is more convenient to analyze the dual problem directly in order to determine the complementary effect on the primal problem. We begin by considering two such cases.
Changes in the Coefficients of a Nonbasic Variable

Suppose that the changes made in the original model occur in the coefficients of a variable that was nonbasic in the original optimal solution. What is the effect of these changes on this solution? Is it still feasible? Is it still optimal?

Because the variable involved is nonbasic (value of zero), changing its coefficients cannot affect the feasibility of the solution. Therefore, the open question in this case is whether it is still optimal. As Tables 6.10 and 6.11 indicate, an equivalent question is whether the complementary basic solution for the dual problem is still feasible after these changes are made. Since these changes affect the dual problem by changing only one constraint, this question can be answered simply by checking whether this complementary basic solution still satisfies this revised constraint. We shall illustrate this case in the corresponding subsection of Sec. 7.2 after developing a relevant example. The Solved Examples section of the book's website also gives another example for both this case and the next one.

Introduction of a New Variable

As indicated in Table 6.6, the decision variables in the model typically represent the levels of the various activities under consideration. In some situations, these activities were selected from a larger group of possible activities, where the remaining activities were not included in the original model because they seemed less attractive. Or perhaps these other activities did not come to light until after the original model was formulated and solved. Either way, the key question is whether any of these previously unconsidered activities are sufficiently worthwhile to warrant initiation. In other words, would adding any of these activities to the model change the original optimal solution?
Adding another activity amounts to introducing a new variable, with the appropriate coefficients in the functional constraints and objective function, into the model. The only resulting change in the dual problem is to add a new constraint (see Table 6.3).

After these changes are made, would the original optimal solution, along with the new variable equal to zero (nonbasic), still be optimal for the primal problem? As for the preceding case, an equivalent question is whether the complementary basic solution for the dual problem is still feasible. And, as before, this question can be answered simply by checking whether this complementary basic solution satisfies one constraint, which in this case is the new constraint for the dual problem.

To illustrate, suppose for the Wyndor Glass Co. problem introduced in Sec. 3.1 that a possible third new product now is being considered for inclusion in the product line. Letting xnew represent the production rate for this product, we show the resulting revised model as follows:

Maximize    Z = 3x1 + 5x2 + 4xnew,

subject to

     x1          + 2xnew ≤ 4
           2x2 + 3xnew ≤ 12
    3x1 + 2x2 +  xnew ≤ 18

and

    x1 ≥ 0,    x2 ≥ 0,    xnew ≥ 0.

After we introduced slack variables, the original optimal solution for this problem without xnew (given by Table 4.8) was (x1, x2, x3, x4, x5) = (2, 6, 2, 0, 0). Is this solution, along with xnew = 0, still optimal?

To answer this question, we need to check the complementary basic solution for the dual problem. As indicated by the complementary optimal basic solutions property in Sec. 6.3, this solution is given in row 0 of the final simplex tableau for the primal problem, using the locations shown in Table 6.4 and illustrated in Table 6.5. Therefore, as given in both the bottom row of Table 6.5 and the sixth row of Table 6.9, the solution is

    (y1, y2, y3, z1 − c1, z2 − c2) = (0, 3/2, 1, 0, 0).
(Alternatively, this complementary basic solution can be derived in the way that was illustrated in Sec. 6.3 for the complementary basic solution in the next-to-last row of Table 6.9.) Since this solution was optimal for the original dual problem, it certainly satisfies the original dual constraints shown in Table 6.1. But does it satisfy this new dual constraint?

    2y1 + 3y2 + y3 ≥ 4

Plugging in this solution, we see that

    2(0) + 3(3/2) + (1) ≥ 4

is satisfied, so this dual solution is still feasible (and thus still optimal). Consequently, the original primal solution (2, 6, 2, 0, 0), along with xnew = 0, is still optimal, so this third possible new product should not be added to the product line.

This approach also makes it very easy to conduct sensitivity analysis on the coefficients of the new variable added to the primal problem. By simply checking the new dual constraint, you can immediately see how far any of these parameter values can be changed before they affect the feasibility of the dual solution and so the optimality of the primal solution.

Other Applications

Already we have discussed two other key applications of duality theory to sensitivity analysis, namely, shadow prices and the dual simplex method. As described in Secs. 4.7 and 6.2, the optimal dual solution (y1*, y2*, . . . , ym*) provides the shadow prices for the respective resources that indicate how Z would change if (small) changes were made in the bi (the resource amounts). The resulting analysis will be illustrated in some detail in Sec. 7.2.

In more general terms, the economic interpretation of the dual problem and of the simplex method presented in Sec. 6.2 provides some useful insights for sensitivity analysis. When we investigate the effect of changing the bi or the aij values (for basic variables), the original optimal solution may become a superoptimal basic solution (as defined in Table 6.10) instead.
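This pricing-out check is easy to automate. The sketch below (the function name is our own) tests whether a proposed activity's dual constraint is violated, that is, whether adding the activity could improve Z. It reproduces the conclusion above, along with the sensitivity observation that the new product's unit profit would have to exceed 2(0) + 3(3/2) + 1(1) = 5.5 before adding it would pay:

```python
def worth_adding(y_opt, new_col, new_c):
    """A new activity can improve Z only if its dual constraint is violated,
    i.e., if sum_i a_i * y_i < c_new at the optimal dual solution."""
    return sum(a * y for a, y in zip(new_col, y_opt)) < new_c

y_opt = [0, 1.5, 1]    # optimal dual solution for Wyndor (Table 6.9, solution 6)

# Proposed third product: constraint column (2, 3, 1), unit profit 4
print(worth_adding(y_opt, [2, 3, 1], 4))   # False: 2(0) + 3(1.5) + 1(1) = 5.5 >= 4

# The unit profit would have to exceed 5.5 before the product becomes attractive
print(worth_adding(y_opt, [2, 3, 1], 6))   # True
```

The same check handles changes in the coefficients of an existing nonbasic variable: pass the revised column and objective coefficient instead.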
If we then want to reoptimize to identify the new optimal solution, the dual simplex method (discussed at the end of Secs. 6.1 and 6.3) should be applied, starting from this basic solution. (This important variant of the simplex method will be described in Sec. 8.1.)

We mentioned in Sec. 6.1 that sometimes it is more efficient to solve the dual problem directly by the simplex method in order to identify an optimal solution for the primal problem. When the solution has been found in this way, sensitivity analysis for the primal problem then is conducted by applying the procedure described in Sections 7.1 and 7.2 directly to the dual problem and then inferring the complementary effects on the primal problem (e.g., see Table 6.11). This approach to sensitivity analysis is relatively straightforward because of the close primal-dual relationships described in Secs. 6.1 and 6.3.

■ 6.6 CONCLUSIONS

Every linear programming problem has associated with it a dual linear programming problem. There are a number of very useful relationships between the original (primal) problem and its dual problem that enhance our ability to analyze the primal problem. For example, the economic interpretation of the dual problem gives shadow prices that measure the marginal value of the resources in the primal problem and provides an interpretation of the simplex method. Because the simplex method can be applied directly to either problem in order to solve both of them simultaneously, considerable computational effort sometimes can be saved by dealing directly with the dual problem. Duality theory, including the dual simplex method (Sec. 8.1) for working with superoptimal basic solutions, also plays a major role in sensitivity analysis.

■ SELECTED REFERENCES

1. Dantzig, G. B., and M. N. Thapa: Linear Programming 1: Introduction, Springer, New York, 1997.
2. Denardo, E.
V.: Linear Programming and Generalizations: A Problem-based Introduction with Spreadsheets, Springer, New York, 2011, chap. 12.
3. Luenberger, D. G., and Y. Ye: Linear and Nonlinear Programming, 3rd ed., Springer, New York, 2008, chap. 4.
4. Murty, K. G.: Optimization for Decision Making: Linear and Quadratic Models, Springer, New York, 2010, chap. 5.
5. Nazareth, J. L.: An Optimization Primer: On Models, Algorithms, and Duality, Springer-Verlag, New York, 2004.
6. Vanderbei, R. J.: Linear Programming: Foundations and Extensions, 4th ed., Springer, New York, 2014, chap. 5.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples: Examples for Chapter 6
Interactive Procedure in IOR Tutorial: Interactive Graphical Method
Automatic Procedures in IOR Tutorial: Solve Automatically by the Simplex Method; Graphical Method and Sensitivity Analysis
Glossary for Chapter 6
See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
I: We suggest that you use the corresponding interactive procedure just listed (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem automatically.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

6.1-1.* Construct the dual problem for each of the following linear programming models fitting our standard form.
(a) Model in Prob. 3.1-6
(b) Model in Prob. 4.7-5

6.1-2. Consider the linear programming model in Prob. 4.5-4.
(a) Construct the primal-dual table and the dual problem for this model.
(b) What does the fact that Z is unbounded for this model imply about its dual problem?

I 6.1-3. For each of the following linear programming models, give your recommendation on which is the more efficient way (probably) to obtain an optimal solution: by applying the simplex method directly to this primal problem or by applying the simplex method directly to the dual problem instead. Explain.
(a) Maximize Z = 10x1 − 4x2 + 7x3,
subject to
3x1 − x2 + 2x3 ≤ 25
x1 − 2x2 + 3x3 ≤ 25
5x1 + x2 + 2x3 ≤ 40
x1 + x2 + x3 ≤ 90
2x1 − x2 + x3 ≤ 20
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
(b) Maximize Z = 2x1 + 5x2 + 3x3 + 4x4 + x5,
subject to
x1 + 3x2 + 2x3 + 3x4 + x5 ≤ 6
4x1 + 6x2 + 5x3 + 7x4 + x5 ≤ 15
and xj ≥ 0, for j = 1, 2, 3, 4, 5.

6.1-4. Consider the following problem.
Maximize Z = −x1 − 2x2 − x3,
subject to
x1 + x2 + 2x3 ≤ 12
x1 + x2 − x3 ≤ 1
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
(a) Construct the dual problem.
(b) Use duality theory to show that the optimal solution for the primal problem has Z ≤ 0.

6.1-5. Consider the following problem.
Maximize Z = 2x1 + 6x2 + 9x3,
subject to
x1 + x3 ≤ 3 (resource 1)
x2 + 2x3 ≤ 5 (resource 2)
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
(a) Construct the dual problem for this primal problem.
(b) Solve the dual problem graphically. Use this solution to identify the shadow prices for the resources in the primal problem.
C (c) Confirm your results from part (b) by solving the primal problem automatically by the simplex method and then identifying the shadow prices.

6.1-6. Follow the instructions of Prob. 6.1-5 for the following problem.
Maximize Z = x1 − 3x2 + 2x3,
subject to
2x1 + 2x2 − 2x3 ≤ 6 (resource 1)
2x1 − x2 + 2x3 ≤ 4 (resource 2)
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

6.1-7. Consider the following problem.
Maximize Z = x1 + 2x2,
subject to
−x1 + x2 ≤ −2
4x1 + x2 ≤ 4
and x1 ≥ 0, x2 ≥ 0.
(a) Demonstrate graphically that this problem has no feasible solutions.
(b) Construct the dual problem.
I (c) Demonstrate graphically that the dual problem has an unbounded objective function.

I 6.1-8. Construct and graph a primal problem with two decision variables and two functional constraints that has feasible solutions and an unbounded objective function.
Then construct the dual problem and demonstrate graphically that it has no feasible solutions.

I 6.1-9. Construct a pair of primal and dual problems, each with two decision variables and two functional constraints, such that both problems have no feasible solutions. Demonstrate this property graphically.

I 6.1-10. Construct a pair of primal and dual problems, each with two decision variables and two functional constraints, such that the primal problem has no feasible solutions and the dual problem has an unbounded objective function.

6.1-11. Use the weak duality property to prove that if both the primal and the dual problem have feasible solutions, then both must have an optimal solution.

6.1-12. Consider the primal and dual problems in our standard form presented in matrix notation at the beginning of Sec. 6.1. Use only this definition of the dual problem for a primal problem in this form to prove each of the following results.
(a) The weak duality property presented in Sec. 6.1.
(b) If the primal problem has an unbounded feasible region that permits increasing Z indefinitely, then the dual problem has no feasible solutions.

6.1-13. Consider the primal and dual problems in our standard form presented in matrix notation at the beginning of Sec. 6.1. Let y* denote the optimal solution for this dual problem. Suppose that b is then replaced by b̄. Let x̄ denote the optimal solution for the new primal problem. Prove that cx̄ ≤ y*b̄.

6.1-14. For any linear programming problem in our standard form and its dual problem, label each of the following statements as true or false and then justify your answer.
(a) The sum of the number of functional constraints and the number of variables (before augmenting) is the same for both the primal and the dual problems.
(b) At each iteration, the simplex method simultaneously identifies a CPF solution for the primal problem and a CPF solution for the dual problem such that their objective function values are the same.
(c) If the primal problem has an unbounded objective function, then the optimal value of the objective function for the dual problem must be zero.

6.2-1. Consider the simplex tableaux for the Wyndor Glass Co. problem given in Table 4.8. For each tableau, give the economic interpretation of the following items:
(a) Each of the coefficients of the slack variables (x3, x4, x5) in row 0
(b) Each of the coefficients of the decision variables (x1, x2) in row 0
(c) The resulting choice for the entering basic variable (or the decision to stop after the final tableau)

6.3-1.* Consider the following problem.
Maximize Z = 6x1 + 8x2,
subject to
5x1 + 2x2 ≤ 20
x1 + 2x2 ≤ 10
and x1 ≥ 0, x2 ≥ 0.
(a) Construct the dual problem for this primal problem.
(b) Solve both the primal problem and the dual problem graphically. Identify the CPF solutions and corner-point infeasible solutions for both problems. Calculate the objective function values for all these solutions.
(c) Use the information obtained in part (b) to construct a table listing the complementary basic solutions for these problems. (Use the same column headings as for Table 6.9.)
I (d) Work through the simplex method step by step to solve the primal problem. After each iteration (including iteration 0), identify the BF solution for this problem and the complementary basic solution for the dual problem. Also identify the corresponding corner-point solutions.

6.3-2. Consider the model with two functional constraints and two variables given in Prob. 4.1-5. Follow the instructions of Prob. 6.3-1 for this model.

6.3-3. Consider the primal and dual problems for the Wyndor Glass Co. example given in Table 6.1. Using Tables 5.5, 5.6, 6.8, and 6.9, construct a new table showing the eight sets of nonbasic variables for the primal problem in column 1, the corresponding sets of associated variables for the dual problem in column 2, and the set of nonbasic variables for each complementary basic solution in the dual problem in column 3.
Explain why this table demonstrates the complementary slackness property for this example.

6.3-4. Suppose that a primal problem has a degenerate BF solution (one or more basic variables equal to zero) as its optimal solution. What does this degeneracy imply about the dual problem? Why? Is the converse also true?

6.3-5. Consider the following problem.
Maximize Z = 2x1 − 4x2,
subject to
x1 − x2 ≤ 1
and x1 ≥ 0, x2 ≥ 0.
(a) Construct the dual problem, and then find its optimal solution by inspection.
(b) Use the complementary slackness property and the optimal solution for the dual problem to find the optimal solution for the primal problem.
(c) Suppose that c1, the coefficient of x1 in the primal objective function, actually can have any value in the model. For what values of c1 does the dual problem have no feasible solutions? For these values, what does duality theory then imply about the primal problem?

6.3-6. Consider the following problem.
Maximize Z = 2x1 + 7x2 + 4x3,
subject to
x1 + 2x2 + x3 ≤ 10
3x1 + 3x2 + 2x3 ≤ 10
and x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.
(a) Construct the dual problem for this primal problem.
(b) Use the dual problem to demonstrate that the optimal value of Z for the primal problem cannot exceed 25.
(c) It has been conjectured that x2 and x3 should be the basic variables for the optimal solution of the primal problem. Directly derive this basic solution (and Z) by using Gaussian elimination. Simultaneously derive and identify the complementary basic solution for the dual problem by using Eq. (0) for the primal problem. Then draw your conclusions about whether these two basic solutions are optimal for their respective problems.
I (d) Solve the dual problem graphically. Use this solution to identify the basic variables and the nonbasic variables for the optimal solution of the primal problem. Directly derive this solution, using Gaussian elimination.

6.3-7.* Reconsider the model of Prob. 6.1-3b.
(a) Construct its dual problem.
I (b) Solve this dual problem graphically.
(c) Use the result from part (b) to identify the nonbasic variables and basic variables for the optimal BF solution for the primal problem.
(d) Use the results from part (c) to obtain the optimal solution for the primal problem directly by using Gaussian elimination to solve for its basic variables, starting from the initial system of equations [excluding Eq. (0)] constructed for the simplex method and setting the nonbasic variables to zero.
(e) Use the results from part (c) to identify the defining equations (see Sec. 5.1) for the optimal CPF solution for the primal problem, and then use these equations to find this solution.

6.3-8. Consider the model given in Prob. 5.3-10.
(a) Construct the dual problem.
(b) Use the given information about the basic variables in the optimal primal solution to identify the nonbasic variables and basic variables for the optimal dual solution.
(c) Use the results from part (b) to identify the defining equations (see Sec. 5.1) for the optimal CPF solution for the dual problem, and then use these equations to find this solution.
I (d) Solve the dual problem graphically to verify your results from part (c).

6.3-9. Consider the model given in Prob. 3.1-5.
(a) Construct the dual problem for this model.
(b) Use the fact that (x1, x2) = (13, 5) is optimal for the primal problem to identify the nonbasic variables and basic variables for the optimal BF solution for the dual problem.
(c) Identify this optimal solution for the dual problem by directly deriving Eq. (0) corresponding to the optimal primal solution identified in part (b). Derive this equation by using Gaussian elimination.
(d) Use the results from part (b) to identify the defining equations (see Sec. 5.1) for the optimal CPF solution for the dual problem. Verify your optimal dual solution from part (c) by checking to see that it satisfies this system of equations.

6.3-10.
Suppose that you also want information about the dual problem when you apply the matrix form of the simplex method (see Sec. 5.2) to the primal problem in our standard form.
(a) How would you identify the optimal solution for the dual problem?
(b) After obtaining the BF solution at each iteration, how would you identify the complementary basic solution in the dual problem?

6.4-1. Consider the following problem.
Maximize Z = x1 + x2,
subject to
x1 + 2x2 ≤ 10
2x1 + x2 ≤ 2
and x2 ≥ 0 (x1 unconstrained in sign).
(a) Use the SOB method to construct the dual problem.
(b) Use Table 6.12 to convert the primal problem to our standard form given at the beginning of Sec. 6.1, and construct the corresponding dual problem. Then show that this dual problem is equivalent to the one obtained in part (a).

6.4-2. Consider the primal and dual problems in our standard form presented in matrix notation at the beginning of Sec. 6.1. Use only this definition of the dual problem for a primal problem in this form to prove each of the following results.
(a) If the functional constraints for the primal problem Ax ≤ b are changed to Ax = b, the only resulting change in the dual problem is to delete the nonnegativity constraints, y ≥ 0. (Hint: The constraints Ax = b are equivalent to the set of constraints Ax ≤ b and Ax ≥ b.)
(b) If the functional constraints for the primal problem Ax ≤ b are changed to Ax ≥ b, the only resulting change in the dual problem is that the nonnegativity constraints y ≥ 0 are replaced by nonpositivity constraints y ≤ 0, where the current dual variables are interpreted as the negative of the original dual variables. (Hint: The constraints Ax ≥ b are equivalent to −Ax ≤ −b.)
(c) If the nonnegativity constraints for the primal problem x ≥ 0 are deleted, the only resulting change in the dual problem is to replace the functional constraints yA ≥ c by yA = c.
(Hint: A variable unconstrained in sign can be replaced by the difference of two nonnegative variables.)

6.4-3.* Construct the dual problem for the linear programming problem given in Prob. 4.6-3.

6.4-4. Consider the following problem.
Minimize Z = x1 + 2x2,
subject to
−2x1 + x2 ≥ 1
x1 − 2x2 ≥ 1
and x1 ≥ 0, x2 ≥ 0.
(a) Construct the dual problem.
I (b) Use graphical analysis of the dual problem to determine whether the primal problem has feasible solutions and, if so, whether its objective function is bounded.

6.4-5. Consider the two versions of the dual problem for the radiation therapy example that are given in Tables 6.15 and 6.16. Review in Sec. 6.4 the general discussion of why these two versions are completely equivalent. Then fill in the details to verify this equivalency by proceeding step by step to convert the version in Table 6.15 to equivalent forms until the version in Table 6.16 is obtained.

6.4-6. For each of the following linear programming models, use the SOB method to construct its dual problem.
(a) Model in Prob. 4.6-7
(b) Model in Prob. 4.6-16

6.4-7. Consider the model with equality constraints given in Prob. 4.6-2.
(a) Construct its dual problem.
(b) Demonstrate that the answer in part (a) is correct (i.e., equality constraints yield dual variables without nonnegativity constraints) by first converting the primal problem to our standard form (see Table 6.12), then constructing its dual problem, and next converting this dual problem to the form obtained in part (a).

6.4-8.* Consider the model without nonnegativity constraints given in Prob. 4.6-14.
(a) Construct its dual problem.
(b) Demonstrate that the answer in part (a) is correct (i.e., variables without nonnegativity constraints yield equality constraints in the dual problem) by first converting the primal problem to our standard form (see Table 6.12), then constructing its dual problem, and finally converting this dual problem to the form obtained in part (a).

6.4-9. Consider the dual problem for the Wyndor Glass Co. example given in Table 6.1. Demonstrate that its dual problem is the primal problem given in Table 6.1 by going through the conversion steps given in Table 6.13.

6.4-10. Consider the following problem.
Minimize Z = −x1 − 3x2,
subject to
−x1 − 2x2 ≤ 2
−x1 + x2 ≤ 4
and x1 ≥ 0, x2 ≥ 0.
(a) Demonstrate graphically that this problem has an unbounded objective function.
(b) Construct the dual problem.
I (c) Demonstrate graphically that the dual problem has no feasible solutions.

I 6.5-1. Consider the model of Prob. 7.2-2. Use duality theory directly to determine whether the current basic solution remains optimal after each of the following independent changes.
(a) The change in part (e) of Prob. 7.2-2
(b) The change in part (g) of Prob. 7.2-2

6.5-2. Consider the model of Prob. 7.2-4. Use duality theory directly to determine whether the current basic solution remains optimal after each of the following independent changes.
(a) The change in part (b) of Prob. 7.2-4
(b) The change in part (d) of Prob. 7.2-4

6.5-3. Reconsider part (d) of Prob. 7.2-6. Use duality theory directly to determine whether the original optimal solution is still optimal.

CHAPTER 7
Linear Programming under Uncertainty

One of the key assumptions of linear programming described in Sec. 3.3 is the certainty assumption, which says that the value assigned to each parameter of a linear programming model is assumed to be a known constant. This is a convenient assumption, but it seldom is satisfied precisely. These models typically are formulated to select some future course of action, so the parameter values need to be based on a prediction of future conditions. This sometimes results in having a significant amount of uncertainty about what the parameter values actually will turn out to be when the optimal solution from the model is implemented.
We now turn our attention to introducing some techniques for dealing with this uncertainty. The most important of these techniques is sensitivity analysis. As previously mentioned in Secs. 2.3, 3.3, and 4.7, sensitivity analysis is an important part of most linear programming studies. One purpose is to determine the effect on the optimal solution from the model if some of the estimates of the parameter values turn out to be wrong. This analysis often will identify some parameters that need to be estimated more carefully before applying the model. It may also identify a new solution that performs better for most plausible values of the parameters. Furthermore, certain parameter values (such as resource amounts) may represent managerial decisions, in which case the choice of the parameter values may be the main issue to be studied, which can be done through sensitivity analysis.

The basic procedure for sensitivity analysis (which is based on the fundamental insight of Sec. 5.3) is summarized in Sec. 7.1 and illustrated in Sec. 7.2. Section 7.3 focuses on how to use spreadsheets to perform sensitivity analysis in a straightforward way. (If you don't have much time to devote to this chapter, it is feasible to read only Sec. 7.3 to obtain a relatively brief introduction to sensitivity analysis.)

The remainder of the chapter introduces some other important techniques for dealing with linear programming under uncertainty. For problems where there is no latitude at all for violating the constraints even a little bit, the robust optimization approach described in Sec. 7.4 provides a way of obtaining a solution that is virtually guaranteed to be feasible and nearly optimal regardless of reasonable deviations of the parameter values from their estimated values. When there is latitude for violating some constraints a little bit without very serious complications, chance constraints introduced in Sec. 7.5 can be used.
A chance constraint modifies an original constraint by only requiring that there be some very high probability that the original constraint will be satisfied. Finally, some linear programming problems have decisions that are made in two (or more) stages, so the decisions in stage 2 can help compensate for any stage 1 decisions that do not turn out as well as hoped because of errors in estimating some parameter values. Section 7.6 describes stochastic programming with recourse for dealing with such problems.

■ 7.1 THE ESSENCE OF SENSITIVITY ANALYSIS

The work of the operations research team usually is not even nearly done when the simplex method has been successfully applied to identify an optimal solution for the model. As we pointed out at the end of Sec. 3.3, one assumption of linear programming is that all the parameters of the model (aij, bi, and cj) are known constants. Actually, the parameter values used in the model normally are just estimates based on a prediction of future conditions. The data obtained to develop these estimates often are rather crude or nonexistent, so that the parameters in the original formulation may represent little more than quick rules of thumb provided by busy line personnel. The data may even represent deliberate overestimates or underestimates to protect the interests of the estimators. Thus, the successful manager and operations research staff will maintain a healthy skepticism about the original numbers coming out of the computer and will view them in many cases as only a starting point for further analysis of the problem. An "optimal" solution is optimal only with respect to the specific model being used to represent the real problem, and such a solution becomes a reliable guide for action only after it has been verified as performing well for other reasonable representations of the problem.
Furthermore, the model parameters (particularly bi) sometimes are set as a result of managerial policy decisions (e.g., the amount of certain resources to be made available to the activities), and these decisions should be reviewed after their potential consequences are recognized. For these reasons it is important to perform sensitivity analysis to investigate the effect on the optimal solution provided by the simplex method if the parameters take on other possible values. Usually there will be some parameters that can be assigned any reasonable value without the optimality of this solution being affected. However, there may also be parameters with likely alternative values that would yield a new optimal solution. This situation is particularly serious if the original solution would then have a substantially inferior value of the objective function, or perhaps even be infeasible! Therefore, one main purpose of sensitivity analysis is to identify the sensitive parameters (i.e., the parameters whose values cannot be changed without changing the optimal solution). For coefficients in the objective function that are not categorized as sensitive, it is also very helpful to determine the range of values of the coefficient over which the optimal solution will remain unchanged. (We call this range of values the allowable range for that coefficient.) In some cases, changing the right-hand side of a functional constraint can affect the feasibility of the optimal BF solution. For such parameters, it is useful to determine the range of values over which the optimal BF solution (with adjusted values for the basic variables) will remain feasible. (We call this range of values the allowable range for the right-hand side involved.) This range of values also is the range over which the current shadow price for the corresponding constraint remains valid. In the next section, we will describe the specific procedures for obtaining this kind of information. 
Such information is invaluable in two ways. First, it identifies the more important parameters, so that special care can be taken to estimate them closely and to select a solution that performs well for most of their likely values. Second, it identifies the parameters that will need to be monitored particularly closely as the study is implemented. If it is discovered that the true value of a parameter lies outside its allowable range, this immediately signals a need to change the solution.

For small problems, it would be straightforward to check the effect of a variety of changes in parameter values simply by reapplying the simplex method each time to see if the optimal solution changes. This is particularly convenient when using a spreadsheet formulation. Once Solver has been set up to obtain an optimal solution, all you have to do is make any desired change on the spreadsheet and then click on the Solve button again.

However, for larger problems of the size typically encountered in practice, sensitivity analysis would require an exorbitant computational effort if it were necessary to reapply the simplex method from the beginning to investigate each new change in a parameter value. Fortunately, the fundamental insight discussed in Sec. 5.3 virtually eliminates this computational effort. The basic idea is that the fundamental insight immediately reveals just how any changes in the original model would change the numbers in the final simplex tableau (assuming that the same sequence of algebraic operations originally performed by the simplex method were to be duplicated). Therefore, after making a few simple calculations to revise this tableau, we can check easily whether the original optimal BF solution is now nonoptimal (or infeasible). If so, this solution would be used as the initial basic solution to restart the simplex method (or dual simplex method) to find the new optimal solution, if desired.
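For a model as small as the Wyndor problem, the re-solve approach just described can also be scripted directly. A sketch (my own, using SciPy's linprog rather than a spreadsheet) that re-solves after increasing b2 from 12 to 13 and reads the change in Z off as an estimate of the shadow price of resource 2:

```python
from scipy.optimize import linprog

def solve_wyndor(b2):
    """Maximize 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= b2, 3x1 + 2x2 <= 18, x >= 0."""
    res = linprog([-3.0, -5.0],
                  A_ub=[[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]],
                  b_ub=[4.0, b2, 18.0], method="highs")
    return -res.fun  # optimal Z (sign flipped back to a maximum)

z_base = solve_wyndor(12.0)   # original model
z_new = solve_wyndor(13.0)    # one more unit of resource 2
print(z_new - z_base)         # increase in Z per unit of resource 2
```

The increase in Z per added unit of b2 matches the shadow price y2* = 3/2, as long as the change stays within the allowable range for that right-hand side.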
If the changes in the model are not major, only a very few iterations should be required to reach the new optimal solution from this "advanced" initial basic solution.

To describe this procedure more specifically, consider the following situation. The simplex method already has been used to obtain an optimal solution for a linear programming model with specified values for the bi, cj, and aij parameters. To initiate sensitivity analysis, at least one of the parameters is changed. After the changes are made, let b̄i, c̄j, and āij denote the new values of the various parameters. Thus, in matrix notation, b → b̄, c → c̄, A → Ā, for the revised model.

The first step is to revise the final simplex tableau to reflect these changes. In particular, we want to find the revised final tableau that would result if exactly the same algebraic operations (including the same multiples of rows being added to or subtracted from other rows) that led from the initial tableau to the final tableau were repeated when starting from the new initial tableau. (This isn't necessarily the same as reapplying the simplex method since the changes in the initial tableau might cause the simplex method to change some of the algebraic operations being used.) Continuing to use the notation presented in Table 5.9, as well as the accompanying formulas for the fundamental insight [(1) t* = t + y*T and (2) T* = S*T], the revised final tableau is calculated from y* and S* (which have not changed) and the new initial tableau, as shown in Table 7.1. Note that y* and S* together are the coefficients of the slack variables in the final simplex tableau, where the vector y* (the dual variables) equals these coefficients in row 0 and the matrix S* gives these coefficients in the other rows of the tableau.
Thus, simply by using y*, S*, and the revised numbers in the initial tableau, Table 7.1 reveals how the revised numbers in the rest of the final tableau are calculated immediately without having to repeat any algebraic operations.

■ TABLE 7.1 Revised final simplex tableau resulting from changes in original model

                                          Coefficient of:
                   Eq.              Z   Original Variables   Slack Variables   Right Side
New initial        (0)              1   −c̄                   0                 0
tableau            (1, 2, . . . , m)  0   Ā                    I                 b̄

Revised final      (0)              1   z̄* − c̄ = y*Ā − c̄     y*                Z̄* = y*b̄
tableau            (1, 2, . . . , m)  0   Ā* = S*Ā             S*                b̄* = S*b̄

Example (Variation 1 of the Wyndor Model). To illustrate, suppose that the first revision in the model for the Wyndor Glass Co. problem of Sec. 3.1 is the one shown in Table 7.2. Thus, the changes from the original model are c1: 3 → 4, a31: 3 → 2, and b2: 12 → 24. Figure 7.1 shows the graphical effect of these changes. For the original model, the simplex method already has identified the optimal CPF solution as (2, 6), lying at the intersection of the two constraint boundaries, shown as dashed lines 2x2 = 12 and 3x1 + 2x2 = 18. Now the revision of the model has shifted both of these constraint boundaries as shown by the dark lines 2x2 = 24 and 2x1 + 2x2 = 18.

■ FIGURE 7.1 Shift of the final corner-point solution from (2, 6) to (−3, 12) for Variation 1 of the Wyndor Glass Co. model, where c1: 3 → 4, a31: 3 → 2, and b2: 12 → 24. (The figure graphs the constraint boundaries x1 = 0, x2 = 0, x1 = 4, 2x2 = 12, 2x2 = 24, 3x1 + 2x2 = 18, and 2x1 + 2x2 = 18, with (0, 9) optimal for the revised model.)

■ TABLE 7.2 The original model and the first revised model (variation 1) for conducting sensitivity analysis on the Wyndor Glass Co. model

Original Model:
  Maximize Z = [3, 5]x,
  subject to
    [1 0]       [ 4]
    [0 2] x ≤   [12]
    [3 2]       [18]
  and x ≥ 0.

Revised Model:
  Maximize Z = [4, 5]x,
  subject to
    [1 0]       [ 4]
    [0 2] x ≤   [24]
    [2 2]       [18]
  and x ≥ 0.
Consequently, the previous CPF solution (2, 6) now shifts to the new intersection (−3, 12), which is a corner-point infeasible solution for the revised model. The procedure described in the preceding paragraphs finds this shift algebraically (in augmented form). Furthermore, it does so in a manner that is very efficient even for huge problems where graphical analysis is impossible.

To carry out this procedure, we begin by displaying the parameters of the revised model in matrix form:

  c̄ = [4, 5],   Ā = [1 0; 0 2; 2 2],   b̄ = [4; 24; 18].

The resulting new initial simplex tableau is shown at the top of Table 7.3. Below this tableau is the original final tableau (as first given in Table 4.8). We have drawn dark boxes around the portions of this final tableau that the changes in the model definitely do not change, namely, the coefficients of the slack variables in both row 0 (y*) and the rest of the rows (S*). Thus,

  y* = [0, 3/2, 1],   S* = [1 1/3 −1/3; 0 1/2 0; 0 −1/3 1/3].

These coefficients of the slack variables necessarily are unchanged with the same algebraic operations originally performed by the simplex method because the coefficients of these same variables in the initial tableau are unchanged. However, because other portions of the initial tableau have changed, there will be changes in the rest of the final tableau as well. Using the formulas in Table 7.1, we calculate the revised numbers in the rest of the final tableau as follows:

  z̄* − c̄ = y*Ā − c̄ = [0, 3/2, 1][1 0; 0 2; 2 2] − [4, 5] = [−2, 0],
  Z̄* = y*b̄ = [0, 3/2, 1][4; 24; 18] = 54,
  Ā* = S*Ā = [1/3 0; 0 1; 2/3 0],
  b̄* = S*b̄ = [6; 12; −2].

■ TABLE 7.3 Obtaining the revised final simplex tableau for Variation 1 of the Wyndor Glass Co. model

                                         Coefficient of:
                 Basic
                 Variable   Eq.   Z    x1    x2    x3    x4     x5    Right Side
New initial      Z         (0)    1   −4    −5     0     0      0         0
tableau          x3        (1)    0    1     0     1     0      0         4
                 x4        (2)    0    0     2     0     1      0        24
                 x5        (3)    0    2     2     0     0      1        18

Final tableau    Z         (0)    1    0     0     0    3/2     1        36
for original     x3        (1)    0    0     0     1    1/3   −1/3        2
model            x2        (2)    0    0     1     0    1/2     0         6
                 x1        (3)    0    1     0     0   −1/3    1/3        2

Revised final    Z         (0)    1   −2     0     0    3/2     1        54
tableau          x3        (1)    0   1/3    0     1    1/3   −1/3        6
                 x2        (2)    0    0     1     0    1/2     0        12
                 x1        (3)    0   2/3    0     0   −1/3    1/3       −2

The resulting revised final tableau is shown at the bottom of Table 7.3. Actually, we can substantially streamline these calculations for obtaining the revised final tableau. Because none of the coefficients of x2 changed in the original model (tableau), none of them can change in the final tableau, so we can delete their calculation. Several other original parameters (a11, a21, b1, b3) also were not changed, so another shortcut is to calculate only the incremental changes in the final tableau in terms of the incremental changes in the initial tableau, ignoring those terms in the vector or matrix multiplication that involve zero change in the initial tableau. In particular, the only incremental changes in the initial tableau are Δc1 = 1, Δa31 = −1, and Δb2 = 12, so these are the only terms that need be considered. This streamlined approach is shown below, where a zero or dash appears in each spot where no calculation is needed.

  Δ(z* − c) = y* ΔĀ − Δc̄ = [0, 3/2, 1][0 —; 0 —; −1 —] − [1, —] = [−1, —] − [1, —] = [−2, —].
  ΔZ* = y* Δb̄ = [0, 3/2, 1][0; 12; 0] = 18.
  ΔĀ* = S* ΔĀ = [1/3 —; 0 —; −1/3 —].
  Δb̄* = S* Δb̄ = [4; 6; −4].
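The matrix computations for this example reduce to a few matrix products. A sketch (using NumPy; the variable names are mine) that recomputes the revised final tableau quantities of Table 7.3 from y*, S*, and the revised initial data:

```python
import numpy as np

# Unchanged slack-variable coefficients from the original final tableau:
y_star = np.array([0.0, 1.5, 1.0])                 # row 0 (dual variables)
S_star = np.array([[1.0,  1/3, -1/3],
                   [0.0,  1/2,  0.0],
                   [0.0, -1/3,  1/3]])             # other rows

# Revised initial data for Variation 1 of the Wyndor model:
c_bar = np.array([4.0, 5.0])
A_bar = np.array([[1.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
b_bar = np.array([4.0, 24.0, 18.0])

# Formulas from Table 7.1:
z_minus_c = y_star @ A_bar - c_bar   # row-0 coefficients of x1, x2
Z_star = y_star @ b_bar              # new row-0 right side
A_star = S_star @ A_bar              # revised body of the tableau
b_star = S_star @ b_bar              # revised right-side column
```

Printing these quantities reproduces the revised final tableau at the bottom of Table 7.3, including the telltale negative right side b̄* = [6, 12, −2] that signals infeasibility.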
Adding these increments to the original quantities in the final tableau (middle of Table 7.3) then yields the revised final tableau (bottom of Table 7.3). This incremental analysis also provides a useful general insight, namely, that changes in the final tableau must be proportional to each change in the initial tableau. We illustrate in the next section how this property enables us to use linear interpolation or extrapolation to determine the range of values for a given parameter over which the final basic solution remains both feasible and optimal.

After obtaining the revised final simplex tableau, we next convert the tableau to proper form from Gaussian elimination (as needed). In particular, the basic variable for row i must have a coefficient of 1 in that row and a coefficient of 0 in every other row (including row 0) for the tableau to be in the proper form for identifying and evaluating the current basic solution. Therefore, if the changes have violated this requirement (which can occur only if the original constraint coefficients of a basic variable have been changed), further changes must be made to restore this form. This restoration is done by using Gaussian elimination, i.e., by successively applying step 3 of an iteration for the simplex method (see Chap. 4) as if each violating basic variable were an entering basic variable. Note that these algebraic operations may also cause further changes in the right-side column, so that the current basic solution can be read from this column only when the proper form from Gaussian elimination has been fully restored.

For the example, the revised final simplex tableau shown in the top half of Table 7.4 is not in proper form from Gaussian elimination because of the column for the basic variable x1. Specifically, the coefficient of x1 in its row (row 3) is 2/3 instead of 1, and it has nonzero coefficients (−2 and 1/3) in rows 0 and 1.
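Restoring proper form here amounts to three elementary row operations on the revised final tableau. A sketch (my own, with NumPy; the tableau columns are x1, x2, x3, x4, x5, right side) that makes the x1 column a unit vector:

```python
import numpy as np

# Revised final tableau for Variation 1 (columns x1, x2, x3, x4, x5, RHS).
row0 = np.array([-2.0, 0.0, 0.0, 1.5, 1.0, 54.0])     # Eq. (0)
rows = np.array([[1/3, 0.0, 1.0,  1/3, -1/3,  6.0],   # x3 row
                 [0.0, 1.0, 0.0,  1/2,  0.0, 12.0],   # x2 row
                 [2/3, 0.0, 0.0, -1/3,  1/3, -2.0]])  # x1 row

# Restore proper form: make the x1 column a unit vector.
rows[2] *= 3/2                # coefficient of x1 in its own row becomes 1
row0 += 2 * rows[2]           # eliminate the -2 in row 0
rows[0] -= (1/3) * rows[2]    # eliminate the 1/3 in the x3 row

print(row0)         # row 0 of the proper-form tableau
print(rows[:, -1])  # right sides for x3, x2, x1
```

The right-side column then gives the current basic solution, with x1 = −3 negative, which is exactly the infeasibility discussed in the text.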
To restore proper form, row 3 is multiplied by 3/2; then 2 times this new row 3 is added to row 0, and 1/3 times this new row 3 is subtracted from row 1. This yields the proper form from Gaussian elimination shown in the bottom half of Table 7.4, which now can be used to identify the new values for the current (previously optimal) basic solution:

(x1, x2, x3, x4, x5) = (−3, 12, 7, 0, 0).

Because x1 is negative, this basic solution no longer is feasible. However, it is superoptimal (as defined in Table 6.10), and so dual feasible, because all the coefficients in row 0 still are nonnegative. Therefore, the dual simplex method (presented in Sec. 8.1) can be used to reoptimize (if desired), by starting from this basic solution. (The sensitivity analysis procedure in IOR Tutorial includes this option.)

CHAPTER 7 LINEAR PROGRAMMING UNDER UNCERTAINTY

■ TABLE 7.4 Converting the revised final simplex tableau to proper form from Gaussian elimination for Variation 1 of the Wyndor Glass Co. model

                                            Coefficient of:
                   Basic
                   Variable  Eq.   Z    x1    x2    x3    x4    x5   Right Side
Revised final      Z         (0)   1   −2     0     0    3/2    1       54
tableau            x3        (1)   0   1/3    0     1    1/3  −1/3       6
                   x2        (2)   0    0     1     0    1/2    0       12
                   x1        (3)   0   2/3    0     0   −1/3   1/3      −2
Converted to       Z         (0)   1    0     0     0    1/2    2       48
proper form        x3        (1)   0    0     0     1    1/2  −1/2       7
                   x2        (2)   0    0     1     0    1/2    0       12
                   x1        (3)   0    1     0     0   −1/2   1/2      −3

Referring to Fig. 7.1 (and ignoring slack variables), the dual simplex method uses just one iteration to move from the corner-point solution (−3, 12) to the optimal CPF solution (0, 9). (It is often useful in sensitivity analysis to identify the solutions that are optimal for some set of likely values of the model parameters and then to determine which of these solutions most consistently performs well for the various likely parameter values.)
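The row operations that restore proper form can be sketched as follows (a Python illustration of ours, not part of the text; the tableau is stored as a list of rows with exact fractions):

```python
# Sketch of the row operations that restore proper form from Gaussian
# elimination for Variation 1 (Table 7.4). Rows are [x1, x2, x3, x4, x5, RHS];
# row 0 is the objective row. Exact fractions keep the arithmetic clean.
from fractions import Fraction as F

rows = [
    [F(-2), F(0), F(0), F(3, 2), F(1), F(54)],        # row 0 (Z)
    [F(1, 3), F(0), F(1), F(1, 3), F(-1, 3), F(6)],   # row 1 (x3)
    [F(0), F(1), F(0), F(1, 2), F(0), F(12)],         # row 2 (x2)
    [F(2, 3), F(0), F(0), F(-1, 3), F(1, 3), F(-2)],  # row 3 (x1)
]

# x1 is basic in row 3, so its column must become a unit vector there.
pivot_row, pivot_col = 3, 0
p = rows[pivot_row][pivot_col]
rows[pivot_row] = [a / p for a in rows[pivot_row]]    # scale row 3 by 3/2
for i in range(len(rows)):
    if i != pivot_row and rows[i][pivot_col] != 0:    # eliminate elsewhere
        m = rows[i][pivot_col]
        rows[i] = [a - m * b for a, b in zip(rows[i], rows[pivot_row])]

print(rows[0])  # [0, 0, 0, 1/2, 2, 48]
print(rows[3])  # [1, 0, 0, -1/2, 1/2, -3]
```

The final state of `rows` matches the bottom half of Table 7.4, including the right-side column (48, 7, 12, −3).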
If the basic solution (−3, 12, 7, 0, 0) had been neither primal feasible nor dual feasible (i.e., if the tableau had negative entries in both the right-side column and row 0), artificial variables could have been introduced to convert the tableau to the proper form for an initial simplex tableau.¹

The General Procedure. When one is testing to see how sensitive the original optimal solution is to the various parameters of the model, the common approach is to check each parameter (or at least cj and bi) individually. In addition to finding allowable ranges as described in the next section, this check might include changing the value of the parameter from its initial estimate to other possibilities in the range of likely values (including the endpoints of this range). Then some combinations of simultaneous changes of parameter values (such as changing an entire functional constraint) may be investigated. Each time one (or more) of the parameters is changed, the procedure described and illustrated here would be applied. Let us now summarize this procedure.

Summary of Procedure for Sensitivity Analysis

1. Revision of model: Make the desired change or changes in the model to be investigated next.
2. Revision of final tableau: Use the fundamental insight (as summarized by the formulas on the bottom of Table 7.1) to determine the resulting changes in the final simplex tableau. (See Table 7.3 for an illustration.)
3. Conversion to proper form from Gaussian elimination: Convert this tableau to the proper form for identifying and evaluating the current basic solution by applying (as necessary) Gaussian elimination. (See Table 7.4 for an illustration.)
4. Feasibility test: Test this solution for feasibility by checking whether all its basic variable values in the right-side column of the tableau still are nonnegative.

¹There also exists a primal-dual algorithm that can be directly applied to such a simplex tableau without any conversion.
5. Optimality test: Test this solution for optimality (if feasible) by checking whether all its nonbasic variable coefficients in row 0 of the tableau still are nonnegative.
6. Reoptimization: If this solution fails either test, the new optimal solution can be obtained (if desired) by using the current tableau as the initial simplex tableau (and making any necessary conversions) for the simplex method or dual simplex method.

The interactive routine entitled sensitivity analysis in IOR Tutorial will enable you to efficiently practice applying this procedure. In addition, a demonstration in OR Tutor (also entitled sensitivity analysis) provides you with another example. For problems with only two decision variables, graphical analysis provides an alternative to the above algebraic procedure for performing sensitivity analysis. IOR Tutorial includes a procedure called Graphical Method and Sensitivity Analysis for performing such graphical analysis efficiently.

In the next section, we shall discuss and illustrate the application of the above algebraic procedure to each of the major categories of revisions in the original model. We also will use graphical analysis to illuminate what is being accomplished algebraically. This discussion will involve, in part, expanding upon the example introduced in this section for investigating changes in the Wyndor Glass Co. model. In fact, we shall begin by individually checking each of the preceding changes. At the same time, we shall integrate some of the applications of duality theory to sensitivity analysis discussed in Sec. 6.5.

■ 7.2 APPLYING SENSITIVITY ANALYSIS

Sensitivity analysis often begins with the investigation of changes in the values of bi, the amount of resource i (i = 1, 2, . . . , m) being made available for the activities under consideration. The reason is that there generally is more flexibility in setting and adjusting these values than there is for the other parameters of the model. As already discussed in Secs.
4.7 and 6.2, the economic interpretation of the dual variables (the yi) as shadow prices is extremely useful for deciding which changes should be considered.

Case 1—Changes in bi

Suppose that the only changes in the current model are that one or more of the bi parameters (i = 1, 2, . . . , m) has been changed. In this case, the only resulting changes in the final simplex tableau are in the right-side column. Consequently, the tableau still will be in proper form from Gaussian elimination and all the nonbasic variable coefficients in row 0 still will be nonnegative. Therefore, both the conversion to proper form from Gaussian elimination and the optimality test steps of the general procedure can be skipped. After revising the right-side column of the tableau, the only question will be whether all the basic variable values in this column still are nonnegative (the feasibility test).

As shown in Table 7.1, when the vector of the bi values is changed from b to b̄, the formulas for calculating the new right-side column in the final tableau are

Right side of final row 0:                  Z* = y*b̄,
Right side of final rows 1, 2, . . . , m:   b* = S*b̄.

(See the bottom of Table 7.1 for the location of the unchanged vector y* and matrix S* in the final tableau.) The first equation has a natural economic interpretation that relates to the economic interpretation of the dual variables presented at the beginning of Sec. 6.2. The vector y* gives the optimal values of the dual variables, where these values are interpreted as the shadow prices of the respective resources. In particular, when Z* represents the profit from using the optimal primal solution x* and each bi represents the amount of resource i being made available, yi* indicates how much the profit could be increased per unit increase in bi (for small increases in bi).

Example (Variation 2 of the Wyndor Model). Sensitivity analysis is begun for the original Wyndor Glass Co.
problem introduced in Sec. 3.1 by examining the optimal values of the yi dual variables (y1* = 0, y2* = 3/2, y3* = 1). These shadow prices give the marginal value of each resource i (the available production capacity of Plant i) for the activities (two new products) under consideration, where marginal value is expressed in the units of Z (thousands of dollars of profit per week). As discussed in Sec. 4.7 (see Fig. 4.8), the total profit from these activities can be increased $1,500 per week (y2* times $1,000 per week) for each additional unit of resource 2 (hour of production time per week in Plant 2) that is made available. This increase in profit holds for relatively small changes that do not affect the feasibility of the current basic solution (and so do not affect the yi* values).

Consequently, the OR team has investigated the marginal profitability from the other current uses of this resource to determine if any are less than $1,500 per week. This investigation reveals that one old product is far less profitable. The production rate for this product already has been reduced to the minimum amount that would justify its marketing expenses. However, it can be discontinued altogether, which would provide an additional 12 units of resource 2 for the new products. Thus, the next step is to determine the profit that could be obtained from the new products if this shift were made. This shift changes b2 from 12 to 24 in the linear programming model.

■ FIGURE 7.2 Feasible region for Variation 2 of the Wyndor Glass Co. model where b2 = 12 → 24. (The figure plots the constraint boundaries x1 = 4, 2x2 = 24, and 3x1 + 2x2 = 18, the objective line Z = 45 = 3x1 + 5x2 through the optimal solution (0, 9), and the corner-point solutions (2, 6) and (−2, 12).)

Figure 7.2 shows the graphical effect of this change, including the shift in the final corner-point solution from (2, 6) to (−2, 12). (Note that this figure differs from Fig.
7.1, which depicts Variation 1 of the Wyndor model, because the constraint 3x1 + 2x2 ≤ 18 has not been changed here.) Thus, for Variation 2 of the Wyndor model, the only revision in the original model is the following change in the vector of the bi values:

b = [4, 12, 18]ᵀ → b̄ = [4, 24, 18]ᵀ,

so only b2 has a new value.

Analysis of Variation 2. When the fundamental insight (Table 7.1) is applied, the effect of this change in b2 on the original final simplex tableau (middle of Table 7.3) is that the entries in the right-side column change to the following values:

Z* = y*b̄ = [0, 3/2, 1][4, 24, 18]ᵀ = 54,

b* = S*b̄ = [1, 1/3, −1/3; 0, 1/2, 0; 0, −1/3, 1/3][4, 24, 18]ᵀ = [6, 12, −2]ᵀ,   so x3 = 6, x2 = 12, x1 = −2.

Equivalently, because the only change in the original model is Δb2 = 24 − 12 = 12, incremental analysis can be used to calculate these same values more quickly. Incremental analysis involves calculating just the increments in the tableau values caused by the change (or changes) in the original model, and then adding these increments to the original values. In this case, the increments in Z* and b* are

ΔZ* = y*Δb̄ = y*[0, 12, 0]ᵀ,    Δb* = S*Δb̄ = S*[0, 12, 0]ᵀ.

Therefore, using the second component of y* and the second column of S*, the only calculations needed are

ΔZ* = (3/2)(12) = 18,    so Z* = 36 + 18 = 54,
Δb1* = (1/3)(12) = 4,    so b1* = 2 + 4 = 6,
Δb2* = (1/2)(12) = 6,    so b2* = 6 + 6 = 12,
Δb3* = (−1/3)(12) = −4,  so b3* = 2 − 4 = −2,

where the original values of these quantities are obtained from the right-side column in the original final tableau (middle of Table 7.3). The resulting revised final tableau corresponds completely to this original final tableau except for replacing the right-side column with these new values.
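As a numerical check, the following Python sketch (ours, not the book's) carries out both the full recomputation and the incremental version for Variation 2:

```python
# Sketch of the Case 1 calculation for Variation 2 (b2: 12 -> 24): only the
# right-side column of the final tableau changes. y* and S* are unchanged.
from fractions import Fraction as F

y_star = [F(0), F(3, 2), F(1)]
S_star = [[F(1), F(1, 3), F(-1, 3)],
          [F(0), F(1, 2), F(0)],
          [F(0), F(-1, 3), F(1, 3)]]

b_new = [4, 24, 18]
Z_star = sum(y * b for y, b in zip(y_star, b_new))                   # 54
b_star = [sum(s * b for s, b in zip(row, b_new)) for row in S_star]  # [6, 12, -2]

# Incremental version: only delta_b2 = 12 is nonzero, so just the second
# component of y* and the second column of S* matter.
delta_b2 = 12
dZ = y_star[1] * delta_b2                    # 18, so Z* = 36 + 18 = 54
db = [row[1] * delta_b2 for row in S_star]   # [4, 6, -4]
```

Both routes give the same revised right-side column (54; 6, 12, −2), with the incremental route touching only one column of S*.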
Therefore, the current (previously optimal) basic solution has become

(x1, x2, x3, x4, x5) = (−2, 12, 6, 0, 0),

which fails the feasibility test because of the negative value. The dual simplex method described in Sec. 8.1 now can be applied, starting with this revised simplex tableau, to find the new optimal solution. This method leads in just one iteration to the new final simplex tableau shown in Table 7.5. (Alternatively, the simplex method could be applied from the beginning, which also would lead to this final tableau in just one iteration in this case.) This tableau indicates that the new optimal solution is

(x1, x2, x3, x4, x5) = (0, 9, 4, 6, 0),

with Z = 45, thereby providing an increase in profit from the new products of 9 units ($9,000 per week) over the previous Z = 36. The fact that x4 = 6 indicates that 6 of the 12 additional units of resource 2 are unused by this solution. Based on the results with b2 = 24, the relatively unprofitable old product will be discontinued and the unused 6 units of resource 2 will be saved for some future use. Since y3* still is positive, a similar study is made of the possibility of changing the allocation of resource 3, but the resulting decision is to retain the current allocation. Therefore, the current linear programming model at this point (Variation 2) has the parameter values and optimal solution shown in Table 7.5. This model will be used as the starting point for investigating other types of changes in the model later in this section. However, before turning to these other cases, let us take a broader look at the current case.

The Allowable Range for a Right-Hand Side. Although Δb2 = 12 proved to be too large an increase in b2 to retain feasibility (and so optimality) with the basic solution where x1, x2, and x3 are the basic variables (middle of Table 7.3), the above incremental analysis shows immediately just how large an increase is feasible.
In particular, note that

b1* = 2 + (1/3)Δb2,
b2* = 6 + (1/2)Δb2,
b3* = 2 − (1/3)Δb2,

where these three quantities are the values of x3, x2, and x1, respectively, for this basic solution. The solution remains feasible, and so optimal, as long as all three quantities remain nonnegative.

2 + (1/3)Δb2 ≥ 0  ⇒  (1/3)Δb2 ≥ −2  ⇒  Δb2 ≥ −6,

■ TABLE 7.5 Data for Variation 2 of the Wyndor Glass Co. model

Model Parameters:
c1 = 3, a11 = 1, a21 = 0, a31 = 3, b1 = 4
c2 = 5, a12 = 0, a22 = 2, a32 = 2, b2 = 24
(n = 2)                            b3 = 18

Final Simplex Tableau after Reoptimization:
                               Coefficient of:
Basic Variable  Eq.   Z    x1    x2    x3    x4    x5   Right Side
Z               (0)   1   9/2    0     0     0    5/2       45
x3              (1)   0    1     0     1     0     0         4
x2              (2)   0   3/2    1     0     0    1/2        9
x4              (3)   0   −3     0     0     1    −1         6

An Application Vignette

Prior to its acquisition by the Humboldt Redwood Company in 2008, the Pacific Lumber Company (PALCO) was a large timber-holding company with headquarters in Scotia, California. The company had over 200,000 acres of highly productive forest lands that supported five mills located in Humboldt County in northern California. The lands included some of the most spectacular redwood groves in the world that have been given or sold at low cost to be preserved as parks. PALCO managed the remaining lands intensively for sustained timber production, subject to strong forest practice laws. Since PALCO’s forests were home to many species of wildlife, including endangered species such as spotted owls and marbled murrelets, the provisions of the federal Endangered Species Act also needed to be carefully observed. To obtain a sustained yield plan for the entire landholding, PALCO management contracted with a team of OR consultants to develop a 120-year, 12-period, long-term forest ecosystem management plan. The OR team performed this task by formulating and applying a linear programming model to optimize the company’s overall timberland operations and profitability after satisfying the various constraints.
The model was a huge one with approximately 8,500 functional constraints and 353,000 decision variables. A major challenge in applying the linear programming model was the many uncertainties in estimating what the parameters of the model should be. The major factors causing these uncertainties were the continuing fluctuations in market supply and demand, logging costs, and environmental regulations. Therefore, the OR team made extensive use of detailed sensitivity analysis. The resulting sustained yield plan increased the company’s present net worth by over $398 million while also generating a better mix of wildlife habitat acres.

Source: L. R. Fletcher, H. Alden, S. P. Holmen, D. P. Angelis, and M. J. Etzenhouser: “Long-Term Forest Ecosystem Planning at Pacific Lumber,” Interfaces, 29(1): 90–112, Jan.–Feb. 1999. (A link to this article is provided on our website, www.mhhe.com/hillier.)

6 + (1/2)Δb2 ≥ 0  ⇒  (1/2)Δb2 ≥ −6  ⇒  Δb2 ≥ −12,
2 − (1/3)Δb2 ≥ 0  ⇒  2 ≥ (1/3)Δb2   ⇒  Δb2 ≤ 6.

Therefore, since b̄2 = 12 + Δb2, the solution remains feasible only if −6 ≤ Δb2 ≤ 6, that is, 6 ≤ b̄2 ≤ 18. (Verify this graphically in Fig. 7.2.) As introduced in Sec. 4.7, this range of values for b2 is referred to as its allowable range.

For any bi, recall from Sec. 4.7 that its allowable range is the range of values over which the current optimal BF solution² (with adjusted values for the basic variables) remains feasible. Thus, the shadow price for bi remains valid for evaluating the effect on Z of changing bi only as long as bi remains within this allowable range. (It is assumed that the change in this one bi value is the only change in the model.) The adjusted values for the basic variables are obtained from the formula b* = S*b̄. The calculation of the allowable range then is based on finding the range of values of bi such that b* ≥ 0. Many linear programming software packages use this same technique for automatically generating the allowable range for each bi.
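This ratio-test calculation can be sketched in Python (an illustration with our own variable names; the bounds ±10⁹ merely stand in for ±∞):

```python
# Sketch: deriving the allowable range for b2 by requiring b* = S*·b >= 0.
# Each basic-variable value is linear in delta = b2 - 12, so the range
# follows from a ratio test on the second column of S*.
from fractions import Fraction as F

base = [F(2), F(6), F(2)]            # b* for the original b = (4, 12, 18)
col = [F(1, 3), F(1, 2), F(-1, 3)]   # second column of S*

lo, hi = F(-10**9), F(10**9)         # stand-ins for -inf, +inf
for b0, s in zip(base, col):
    # need b0 + s*delta >= 0 for every basic variable
    if s > 0:
        lo = max(lo, -b0 / s)
    elif s < 0:
        hi = min(hi, -b0 / s)

print(lo, hi)    # -6 6, i.e., 6 <= b2 <= 18
```

The resulting bounds −6 ≤ Δb2 ≤ 6 reproduce the allowable range 6 ≤ b2 ≤ 18 derived above.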
(A similar technique, discussed under Cases 2a and 3, also is used to generate an allowable range for each cj.) In Chap. 4, we showed the corresponding output for Solver and LINDO in Figs. 4.10 and A4.2, respectively. Table 7.6 summarizes this same output with respect to the bi for the original Wyndor Glass Co. model. For example, both the allowable increase and allowable decrease for b2 are 6, that is, −6 ≤ Δb2 ≤ 6. The analysis in the preceding paragraph shows how these quantities were calculated.

Analyzing Simultaneous Changes in Right-Hand Sides. When multiple bi values are changed simultaneously, the formula b* = S*b̄ can again be used to see how the right-hand sides change in the final tableau. If all these right-hand sides still are nonnegative, the feasibility test will indicate that the revised solution provided by this tableau still is feasible. Since row 0 has not changed, being feasible implies that this solution also is optimal.

Although this approach works fine for checking the effect of a specific set of changes in the bi, it does not give much insight into how far the bi can be simultaneously changed from their original values before the revised solution will no longer be feasible. As part of postoptimality analysis, the management of an organization often is interested in investigating the effect of various changes in policy decisions (e.g., the amounts of resources being made available to the activities under consideration) that determine the right-hand sides. Rather than considering just one specific set of changes, management may want to explore directions of changes where some right-hand sides increase while others decrease. Shadow prices are invaluable for this kind of exploration. However, shadow prices remain valid for evaluating the effect of such changes on Z only within certain ranges of changes.
For each bi, the allowable range gives this range if none of the other bi are changing at the same time. What do these allowable ranges become when some of the bi are changing simultaneously? A partial answer to this question is provided by the following 100 percent rule, which combines the allowable changes (increase or decrease) for the individual bi that are given by the last two columns of a table like Table 7.6.

The 100 Percent Rule for Simultaneous Changes in Right-Hand Sides: The shadow prices remain valid for predicting the effect of simultaneously changing the right-hand sides of some of the functional constraints as long as the changes are not too large. To check whether the changes are small enough, calculate for each change the percentage of the allowable change (increase or decrease) for that right-hand side to remain within its allowable range. If the sum of the percentage changes does not exceed 100 percent, the shadow prices definitely will still be valid. (If the sum does exceed 100 percent, then we cannot be sure.)

■ TABLE 7.6 Typical software output for sensitivity analysis of the right-hand sides for the original Wyndor Glass Co. model

Constraint   Shadow Price   Current RHS   Allowable Increase   Allowable Decrease
Plant 1          0.0             4                ∞                    2
Plant 2          1.5            12                6                    6
Plant 3          1.0            18                6                    6

²When there is more than one optimal BF solution for the current model (before changing bi), we are referring here to the one obtained by the simplex method.

Example (Variation 3 of the Wyndor Model). To illustrate this rule, consider Variation 3 of the Wyndor Glass Co. model, which revises the original model by changing the right-hand side vector as follows:

b = [4, 12, 18]ᵀ → b̄ = [4, 15, 15]ᵀ.

The calculations for the 100 percent rule in this case are

b2: 12 → 15.  Percentage of allowable increase = 100 × (15 − 12)/6 = 50%.
b3: 18 → 15.
Percentage of allowable decrease = 100 × (18 − 15)/6 = 50%.
Sum = 100%.

Since the sum of 100 percent barely does not exceed 100 percent, the shadow prices definitely are valid for predicting the effect of these changes on Z. In particular, since the shadow prices of b2 and b3 are 1.5 and 1, respectively, the resulting change in Z would be

ΔZ = 1.5(3) + 1(−3) = 1.5,

so Z* would increase from 36 to 37.5.

Figure 7.3 shows the feasible region for this revised model. (The dashed lines show the original locations of the revised constraint boundary lines.) The optimal solution now is the CPF solution (0, 7.5), which gives

Z = 3x1 + 5x2 = 0 + 5(7.5) = 37.5,

just as predicted by the shadow prices.

■ FIGURE 7.3 Feasible region for Variation 3 of the Wyndor Glass Co. model where b2 = 12 → 15 and b3 = 18 → 15. (The figure shows the constraint boundaries x1 = 4, 2x2 = 15, and 3x1 + 2x2 = 15, with the optimal solution at (0, 7.5).)

However, note what would happen if either b2 were further increased above 15 or b3 were further decreased below 15, so that the sum of the percentages of allowable changes would exceed 100 percent. This would cause the previously optimal corner-point solution to slide to the left of the x2 axis (x1 < 0), so this infeasible solution would no longer be optimal. Consequently, the old shadow prices would no longer be valid for predicting the new value of Z*.

Case 2a—Changes in the Coefficients of a Nonbasic Variable

Consider a particular variable xj (fixed j) that is a nonbasic variable in the optimal solution shown by the final simplex tableau. In Case 2a, the only change in the current model is that one or more of the coefficients of this variable—cj, a1j, a2j, . . . , amj—have been changed. Thus, letting c̄j and āij denote the new values of these parameters, with Āj (column j of matrix Ā) as the vector containing the āij, we have

cj → c̄j,    Aj → Āj

for the revised model. As described at the beginning of Sec.
6.5, duality theory provides a very convenient way of checking these changes. In particular, if the complementary basic solution y* in the dual problem still satisfies the single dual constraint that has changed, then the original optimal solution in the primal problem remains optimal as is. Conversely, if y* violates this dual constraint, then this primal solution is no longer optimal.

If the optimal solution has changed and you wish to find the new one, you can do so rather easily. Simply apply the fundamental insight to revise the xj column (the only one that has changed) in the final simplex tableau. Specifically, the formulas in Table 7.1 reduce to the following:

Coefficient of xj in final row 0:         z̄j* − c̄j = y*Āj − c̄j,
Coefficient of xj in final rows 1 to m:   Āj* = S*Āj.

With the current basic solution no longer optimal, the new value of z̄j* − c̄j now will be the one negative coefficient in row 0, so restart the simplex method with xj as the initial entering basic variable.

Note that this procedure is a streamlined version of the general procedure summarized at the end of Sec. 7.1. Steps 3 and 4 (conversion to proper form from Gaussian elimination and the feasibility test) have been deleted as irrelevant, because the only column being changed in the revision of the final tableau (before reoptimization) is for the nonbasic variable xj. Step 5 (optimality test) has been replaced by a quicker test of optimality to be performed right after step 1 (revision of model). It is only if this test reveals that the optimal solution has changed, and you wish to find the new one, that steps 2 and 6 (revision of final tableau and reoptimization) are needed.

Example (Variation 4 of the Wyndor Model). Since x1 is nonbasic in the current optimal solution (see Table 7.5) for Variation 2 of the Wyndor Glass Co.
model, the next step in its sensitivity analysis is to check whether any reasonable changes in the estimates of the coefficients of x1 could still make it advisable to introduce product 1. The set of changes that goes as far as realistically possible to make product 1 more attractive would be to reset c1 = 4 and a31 = 2. Rather than exploring each of these changes independently (as is often done in sensitivity analysis), we will consider them together. Thus, the changes under consideration are

c1 = 3 → c̄1 = 4,    A1 = [1, 0, 3]ᵀ → Ā1 = [1, 0, 2]ᵀ.

These two changes in Variation 2 give us Variation 4 of the Wyndor model. Variation 4 actually is equivalent to Variation 1 considered in Sec. 7.1 and depicted in Fig. 7.1, since Variation 1 combined these two changes with the change in the original Wyndor model (b2 = 12 → 24) that gave Variation 2. However, the key difference from the treatment of Variation 1 in Sec. 7.1 is that the analysis of Variation 4 treats Variation 2 as being the original model, so our starting point is the final simplex tableau given in Table 7.5, where x1 now is a nonbasic variable.

■ FIGURE 7.4 Feasible region for Variation 4 of the Wyndor model, where Variation 2 (Fig. 7.2) has been revised so a31 = 3 → 2 and c1 = 3 → 4. (The figure shows the constraint boundaries x1 = 4, 2x2 = 24, and 2x1 + 2x2 = 18, with the objective line Z = 45 = 4x1 + 5x2 through the optimal solution (0, 9).)

The change in a31 revises the feasible region from that shown in Fig. 7.2 to the corresponding region in Fig. 7.4. The change in c1 revises the objective function from Z = 3x1 + 5x2 to Z = 4x1 + 5x2. Figure 7.4 shows that the optimal objective function line Z = 45 = 4x1 + 5x2 still passes through the current optimal solution (0, 9), so this solution remains optimal after these changes in a31 and c1.
To use duality theory to draw this same conclusion, observe that the changes in c1 and a31 lead to a single revised constraint for the dual problem, namely, the constraint that a11y1 + a21y2 + a31y3 ≥ c1. Both this revised constraint and the current y* (coefficients of the slack variables in row 0 of Table 7.5) are shown below:

y1* = 0,  y2* = 0,  y3* = 5/2,
y1 + 3y3 ≥ 3  →  y1 + 2y3 ≥ 4,
0 + 2(5/2) = 5 ≥ 4.

Since y* still satisfies the revised constraint, the current primal solution (Table 7.5) is still optimal. Because this solution is still optimal, there is no need to revise the xj column in the final tableau (step 2). Nevertheless, we do so below for illustrative purposes:

z̄1* − c̄1 = y*Ā1 − c̄1 = [0, 0, 5/2][1, 0, 2]ᵀ − 4 = 1.

Ā1* = S*Ā1 = [1, 0, 0; 0, 0, 1/2; 0, 1, −1][1, 0, 2]ᵀ = [1, 1, −2]ᵀ.

The fact that z̄1* − c̄1 = 1 ≥ 0 again confirms the optimality of the current solution. Since z̄1* − c̄1 is the surplus variable for the revised constraint in the dual problem, this way of testing for optimality is equivalent to the one used above.

This completes the analysis of the effect of changing the current model (Variation 2) to Variation 4. Because any larger changes in the original estimates of the coefficients of x1 would be unrealistic, the OR team concludes that these coefficients are insensitive parameters in the current model. Therefore, they will be kept fixed at their best estimates shown in Table 7.5—c1 = 3 and a31 = 3—for the remainder of the sensitivity analysis.

The Allowable Range for an Objective Function Coefficient of a Nonbasic Variable. We have just described and illustrated how to analyze simultaneous changes in the coefficients of a nonbasic variable xj. It is common practice in sensitivity analysis to also focus on the effect of changing just one parameter, cj. As introduced in Sec. 4.7, this involves streamlining the above approach to find the allowable range for cj.
For any cj, recall from Sec. 4.7 that its allowable range is the range of values over which the current optimal solution (as obtained by the simplex method for the current model before cj is changed) remains optimal. (It is assumed that the change in this one cj is the only change in the current model.) When xj is a nonbasic variable for this solution, the solution remains optimal as long as zj* − cj ≥ 0, where zj* = y*Aj is a constant unaffected by any change in the value of cj. Therefore, the allowable range for cj can be calculated as cj ≤ y*Aj.

For example, consider the current model (Variation 2) for the Wyndor Glass Co. problem summarized on the left side of Table 7.5, where the current optimal solution (with c1 = 3) is given on the right side. When considering only the decision variables, x1 and x2, this optimal solution is (x1, x2) = (0, 9), as displayed in Fig. 7.2. When just c1 is changed, this solution remains optimal as long as

c1 ≤ y*A1 = [0, 0, 5/2][1, 0, 3]ᵀ = 7.5,

so c1 ≤ 7.5 is the allowable range. An alternative to performing this vector multiplication is to note in Table 7.5 that z1* − c1 = 9/2 (the coefficient of x1 in row 0) when c1 = 3, so z1* = 3 + 9/2 = 7.5. Since z1* = y*A1, this immediately yields the same allowable range.

Figure 7.2 provides graphical insight into why c1 ≤ 7.5 is the allowable range. At c1 = 7.5, the objective function becomes Z = 7.5x1 + 5x2 = 2.5(3x1 + 2x2), so the optimal objective line will lie on top of the constraint boundary line 3x1 + 2x2 = 18 shown in the figure. Thus, at this endpoint of the allowable range, we have multiple optimal solutions consisting of the line segment between (0, 9) and (4, 3). If c1 were to be increased any further (c1 > 7.5), only (4, 3) would be optimal. Consequently, we need c1 ≤ 7.5 for (0, 9) to remain optimal. IOR Tutorial includes a procedure called Graphical Method and Sensitivity Analysis that enables you to perform this kind of graphical analysis very efficiently.
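A small Python sketch (ours, not the book's) of this pricing-out calculation for Variation 2:

```python
# Sketch: pricing out the nonbasic variable x1 in Variation 2 and deriving
# the allowable range for c1. The solution (0, 9) stays optimal while
# c1 <= y*·A1.
from fractions import Fraction as F

y_star = [F(0), F(0), F(5, 2)]   # dual solution read from row 0 of Table 7.5
A1 = [1, 0, 3]                   # original column of x1

z1 = sum(y * a for y, a in zip(y_star, A1))   # y*·A1 = 15/2

c1 = 3
reduced_cost = z1 - c1   # 9/2, the x1 coefficient in row 0 of Table 7.5
upper_limit = z1         # allowable range: c1 <= 7.5
```

Note that the reduced cost 9/2 and the upper limit 7.5 are tied together: the limit is just the current c1 plus the reduced cost.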
For any nonbasic decision variable xj, the value of zj* − cj sometimes is referred to as the reduced cost for xj, because it is the minimum amount by which the unit cost of activity j would have to be reduced to make it worthwhile to undertake activity j (increase xj from zero). Interpreting cj as the unit profit of activity j (so reducing the unit cost increases cj by the same amount), the value of zj* − cj thereby is the maximum allowable increase in cj to keep the current BF solution optimal.

The sensitivity analysis information generated by linear programming software packages normally includes both the reduced cost and the allowable range for each coefficient in the objective function (along with the types of information displayed in Table 7.6). This was illustrated in Fig. 4.10 for Solver and in Figs. A4.1 and A4.2 for LINGO and LINDO. Table 7.7 displays this information in a typical form for our current model (Variation 2 of the Wyndor Glass Co. model). The last three columns are used to calculate the allowable range for each coefficient, so these allowable ranges are

c1 ≤ 3 + 4.5 = 7.5,
c2 ≥ 5 − 3 = 2.

As was discussed in Sec. 4.7, if any of the allowable increases or decreases had turned out to be zero, this would have been a signpost that the optimal solution given in the table is only one of multiple optimal solutions. In this case, changing the corresponding coefficient a tiny amount beyond the zero allowed and re-solving would provide another optimal CPF solution for the original model.

Thus far, we have described how to calculate the type of information in Table 7.7 for only nonbasic variables. For a basic variable like x2, the reduced cost automatically is 0. We will discuss how to obtain the allowable range for cj when xj is a basic variable under Case 3.

Analyzing Simultaneous Changes in Objective Function Coefficients.
Regardless of whether xj is a basic or nonbasic variable, the allowable range for cj is valid only if this objective function coefficient is the only one being changed. However, when simultaneous changes are made in the coefficients of the objective function, a 100 percent rule is available for checking whether the original solution must still be optimal. Much like the 100 percent rule for simultaneous changes in right-hand sides, this 100 percent rule combines the allowable changes (increase or decrease) for the individual cj that are given by the last two columns of a table like Table 7.7, as described below.

The 100 Percent Rule for Simultaneous Changes in Objective Function Coefficients: If simultaneous changes are made in the coefficients of the objective function, calculate for each change the percentage of the allowable change (increase or decrease) for that coefficient to remain within its allowable range. If the sum of the percentage changes does not exceed 100 percent, the original optimal solution definitely will still be optimal. (If the sum does exceed 100 percent, then we cannot be sure.)

■ TABLE 7.7 Typical software output for sensitivity analysis of the objective function coefficients for Variation 2 of the Wyndor Glass Co. model

Variable   Value   Reduced Cost   Current Coefficient   Allowable Increase   Allowable Decrease
x1         0       4.5            3                     4.5                  ∞
x2         9       0.0            5                     ∞                    3

Using Table 7.7 (and referring to Fig. 7.2 for visualization), this 100 percent rule says that (0, 9) will remain optimal for Variation 2 of the Wyndor Glass Co. model even if we simultaneously increase c1 from 3 and decrease c2 from 5 as long as these changes are not too large. For example, if c1 is increased by 1.5 (33 1/3 percent of the allowable change), then c2 can be decreased by as much as 2 (66 2/3 percent of the allowable change).
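The 100 percent rule is straightforward to check in code. This sketch uses exact rational arithmetic and the allowable changes from Table 7.7 (allowable increase of 4.5 for c1 and allowable decrease of 3 for c2):

```python
from fractions import Fraction

def hundred_percent_rule(changes):
    """changes: list of (proposed change, allowable change in that direction).
    Returns (total percentage of allowable changes used, True if the
    current optimal solution is guaranteed to remain optimal)."""
    total = sum(abs(delta) / allowable for delta, allowable in changes) * 100
    return total, total <= 100

# Increase c1 by 1.5 (allowable increase 4.5) and decrease c2 by 2
# (allowable decrease 3), as in the example above:
total, still_optimal = hundred_percent_rule(
    [(Fraction(3, 2), Fraction(9, 2)), (Fraction(-2), Fraction(3))])
print(total, still_optimal)   # 100 True -> (0, 9) must remain optimal
```

Note that exceeding 100 percent does not prove the solution changes; the rule is only a sufficient condition for optimality to be preserved.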
Similarly, if c1 is increased by 3 (66 2/3 percent of the allowable change), then c2 can only be decreased by as much as 1 (33 1/3 percent of the allowable change). These maximum changes revise the objective function to either Z = 4.5x1 + 3x2 or Z = 6x1 + 4x2, which causes the optimal objective function line in Fig. 7.2 to rotate clockwise until it coincides with the constraint boundary equation 3x1 + 2x2 = 18.

In general, when objective function coefficients change in the same direction, it is possible for the percentages of allowable changes to sum to more than 100 percent without changing the optimal solution. We will give an example at the end of the discussion of Case 3.

Case 2b—Introduction of a New Variable

After solving for the optimal solution, we may discover that the linear programming formulation did not consider all the attractive alternative activities. Considering a new activity requires introducing a new variable with the appropriate coefficients into the objective function and constraints of the current model—which is Case 2b.

The convenient way to deal with this case is to treat it just as if it were Case 2a! This is done by pretending that the new variable xj actually was in the original model with all its coefficients equal to zero (so that they still are zero in the final simplex tableau) and that xj is a nonbasic variable in the current BF solution. Therefore, if we change these zero coefficients to their actual values for the new variable, the procedure (including any reoptimization) does indeed become identical to that for Case 2a.

In particular, all you have to do to check whether the current solution still is optimal is to check whether the complementary basic solution y* satisfies the one new dual constraint that corresponds to the new variable in the primal problem. We already have described this approach and then illustrated it for the Wyndor Glass Co. problem in Sec. 6.5.
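The Case 2b check therefore reduces to testing one dual constraint. The sketch below uses y* = (0, 0, 5/2) from Variation 2 together with a hypothetical new product; the unit profit and the constraint column of that product are illustrative assumptions, not data from the book:

```python
from fractions import Fraction as F

y_star = [F(0), F(0), F(5, 2)]   # optimal dual solution for Variation 2

def current_solution_stays_optimal(c_new, A_new):
    """Case 2b test: the current solution stays optimal if and only if
    the new dual constraint y*A_new >= c_new is satisfied."""
    lhs = sum(y * a for y, a in zip(y_star, A_new))
    return lhs >= c_new

# Hypothetical new product using 2 hours in Plant 3, with unit profit 4:
print(current_solution_stays_optimal(F(4), [F(1), F(0), F(2)]))   # True (5 >= 4)
# The same product with unit profit 6 would violate the dual constraint,
# so the new activity would be worth undertaking and reoptimization is needed:
print(current_solution_stays_optimal(F(6), [F(1), F(0), F(2)]))   # False (5 < 6)
```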
Case 3—Changes in the Coefficients of a Basic Variable

Now suppose that the variable xj (fixed j) under consideration is a basic variable in the optimal solution shown by the final simplex tableau. Case 3 assumes that the only changes in the current model are made to the coefficients of this variable.

Case 3 differs from Case 2a because of the requirement that a simplex tableau be in proper form from Gaussian elimination. This requirement allows the column for a nonbasic variable to be anything, so it does not affect Case 2a. However, for Case 3, the basic variable xj must have a coefficient of 1 in its row of the simplex tableau and a coefficient of 0 in every other row (including row 0). Therefore, after the changes in the xj column of the final simplex tableau have been calculated,3 it probably will be necessary to apply Gaussian elimination to restore this form, as illustrated in Table 7.4. In turn, this step probably will change the value of the current basic solution and may make it either infeasible or nonoptimal (so reoptimization may be needed). Consequently, all the steps of the overall procedure summarized at the end of Sec. 7.1 are required for Case 3.

3 For the relatively sophisticated reader, we should point out a possible pitfall for Case 3 that would be discovered at this point. Specifically, the changes in the initial tableau can destroy the linear independence of the columns of coefficients of basic variables. This event occurs only if the unit coefficient of the basic variable xj in the final tableau has been changed to zero at this point, in which case more extensive simplex method calculations must be used for Case 3.

Before Gaussian elimination is applied, the formulas for revising the xj column are the same as for Case 2a, as summarized below:

Coefficient of xj in final row 0:        zj* − cj = y*Aj − cj.
Coefficient of xj in final rows 1 to m:  Aj* = S*Aj.

Example (Variation 5 of the Wyndor Model).
Because x2 is a basic variable in Table 7.5 for Variation 2 of the Wyndor Glass Co. model, sensitivity analysis of its coefficients fits Case 3. Given the current optimal solution (x1 = 0, x2 = 9), product 2 is the only new product that should be introduced, and its production rate should be relatively large. Therefore, the key question now is whether the initial estimates that led to the coefficients of x2 in the current model (Variation 2) could have overestimated the attractiveness of product 2 so much as to invalidate this conclusion. This question can be tested by checking the most pessimistic set of reasonable estimates for these coefficients, which turns out to be c2 = 3, a22 = 3, and a32 = 4. Consequently, the changes to be investigated (Variation 5 of the Wyndor model) are

c2: 5 → 3,    A2: [0, 2, 2]^T → [0, 3, 4]^T.

The graphical effect of these changes is that the feasible region changes from the one shown in Fig. 7.2 to the one in Fig. 7.5.

■ FIGURE 7.5 Feasible region for Variation 5 of the Wyndor model where Variation 2 (Fig. 7.2) has been revised so c2 = 5 → 3, a22 = 2 → 3, and a32 = 2 → 4.

The optimal solution in Fig. 7.2 is (x1, x2) = (0, 9), which is the corner-point solution lying at the intersection of the x1 = 0 and 3x1 + 2x2 = 18 constraint boundaries. With the revision of the constraints, the corresponding corner-point solution in Fig. 7.5 is (0, 9/2). However, this solution no longer is optimal, because the revised objective function of Z = 3x1 + 3x2 now yields a new optimal solution of (x1, x2) = (4, 3/2).

Analysis of Variation 5. Now let us see how we draw these same conclusions algebraically.
Because the only changes in the model are in the coefficients of x2, the only resulting changes in the final simplex tableau (Table 7.5) are in the x2 column. Therefore, the above formulas for Case 3 are used to recompute just this column:

z2* − c2 = y*A2 − c2 = [0, 0, 5/2][0, 3, 4]^T − 3 = 7.

A2* = S*A2 = [1  0   0 ] [0]   [ 0]
             [0  0  1/2] [3] = [ 2].
             [0  1  −1 ] [4]   [−1]

(Equivalently, incremental analysis with Δc2 = −2, Δa22 = 1, and Δa32 = 2 can be used in the same way to obtain this column.) The resulting revised final tableau is shown at the top of Table 7.8.

Note that the new coefficients of the basic variable x2 do not have the required values, so the conversion to proper form from Gaussian elimination must be applied next. This step involves dividing row 2 by 2, subtracting 7 times the new row 2 from row 0, and adding the new row 2 to row 3. The resulting second tableau in Table 7.8 gives the new value of the current basic solution, namely, x3 = 4, x2 = 9/2, x4 = 21/2 (x1 = 0, x5 = 0). Since all these variables are nonnegative, the solution is still feasible. However, because of the negative coefficient of x1 in row 0, we know that it is no longer optimal. Therefore, the simplex method would be applied to this tableau, with this solution as the initial BF solution, to find the new optimal solution. The initial entering basic variable is x1, with x3 as the leaving basic variable. Just one iteration is needed in this case to reach the new optimal solution x1 = 4, x2 = 3/2, x4 = 39/2 (x3 = 0, x5 = 0), as shown in the last tableau of Table 7.8.

All this analysis suggests that c2, a22, and a32 are relatively sensitive parameters. However, additional data for estimating them more closely can be obtained only by conducting a pilot run.
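The two Case 3 formulas used above are just matrix-vector products. Here is a minimal sketch that reproduces the revised x2 column for Variation 5, with y* and S* (the coefficients of the slack variables x3, x4, x5 in the final tableau) taken from Table 7.5:

```python
from fractions import Fraction as F

y_star = [F(0), F(0), F(5, 2)]        # optimal dual solution for Variation 2
S_star = [[F(1), F(0), F(0)],         # S*: coefficients of the slack variables
          [F(0), F(0), F(1, 2)],      # x3, x4, x5 in rows 1 to 3 of the
          [F(0), F(1), F(-1)]]        # final tableau (Table 7.5)

A2_new = [F(0), F(3), F(4)]           # revised column of x2 for Variation 5
c2_new = F(3)                         # revised objective coefficient of x2

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

z2_minus_c2 = dot(y_star, A2_new) - c2_new        # coefficient of x2 in row 0
A2_star = [dot(row, A2_new) for row in S_star]    # coefficients in rows 1 to m

print(int(z2_minus_c2))               # 7
print([int(v) for v in A2_star])      # [0, 2, -1]
```

These are exactly the entries of the revised x2 column at the top of Table 7.8, after which Gaussian elimination restores proper form.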
Therefore, the OR team recommends that production of product 2 be initiated immediately on a small scale (x2 = 3/2) and that this experience be used to guide the decision on whether the remaining production capacity should be allocated to product 2 or product 1.

The Allowable Range for an Objective Function Coefficient of a Basic Variable. For Case 2a, we described how to find the allowable range for any cj such that xj is a nonbasic variable for the current optimal solution (before cj is changed). When xj is a basic variable instead, the procedure is somewhat more involved because of the need to convert to proper form from Gaussian elimination before testing for optimality.

To illustrate the procedure, consider Variation 5 of the Wyndor Glass Co. model (with c2 = 3, a22 = 3, a32 = 4) that is graphed in Fig. 7.5 and solved in Table 7.8.

■ TABLE 7.8 Sensitivity analysis procedure applied to Variation 5 of the Wyndor Glass Co. model

                                                 Coefficient of:
                        Basic
                        Variable   Eq.   Z    x1     x2    x3    x4    x5    Right Side
Revised final           Z          (0)   1    9/2    7     0     0    5/2       45
tableau                 x3         (1)   0     1     0     1     0     0         4
                        x2         (2)   0    3/2    2     0     0    1/2        9
                        x4         (3)   0    −3    −1     0     1    −1         6

Converted to            Z          (0)   1   −3/4    0     0     0    3/4      27/2
proper form             x3         (1)   0     1     0     1     0     0         4
                        x2         (2)   0    3/4    1     0     0    1/4       9/2
                        x4         (3)   0   −9/4    0     0     1   −3/4      21/2

New final tableau       Z          (0)   1     0     0    3/4    0    3/4      33/2
after reoptimization    x1         (1)   0     1     0     1     0     0         4
(only one iteration     x2         (2)   0     0     1   −3/4    0    1/4       3/2
of the simplex method   x4         (3)   0     0     0    9/4    1   −3/4      39/2
needed in this case)

Since x2 is a basic variable for the optimal solution (with c2 = 3) given at the bottom of this table, the steps needed to find the allowable range for c2 are the following:

1. Since x2 is a basic variable, note that its coefficient in the new final row 0 (see the bottom tableau in Table 7.8) is automatically z2* − c2 = 0 before c2 is changed from its current value of 3.
2. Now increment c2 = 3 by Δc2 (so c2 = 3 + Δc2).
This changes the coefficient noted in step 1 to z2* − c2 = −Δc2, which changes row 0 to

Row 0 = [0, −Δc2, 3/4, 0, 3/4 | 33/2].

3. With this coefficient now not zero, we must perform elementary row operations to restore proper form from Gaussian elimination. In particular, add to row 0 the product, Δc2 times row 2, to obtain the new row 0, as shown below:

       Row 0   = [0, −Δc2,        3/4,       0,       3/4        |      33/2      ]
+ Δc2 (Row 2)  = [0,  Δc2,    −(3/4)Δc2,     0,    (1/4)Δc2      |    (3/2)Δc2    ]
  New row 0    = [0,   0,   3/4 − (3/4)Δc2,  0,  3/4 + (1/4)Δc2  | 33/2 + (3/2)Δc2]

4. Using this new row 0, solve for the range of values of Δc2 that keeps the coefficients of the nonbasic variables (x3 and x5) nonnegative.

3/4 − (3/4)Δc2 ≥ 0  ⇒  3/4 ≥ (3/4)Δc2  ⇒  Δc2 ≤ 1.
3/4 + (1/4)Δc2 ≥ 0  ⇒  (1/4)Δc2 ≥ −3/4  ⇒  Δc2 ≥ −3.

Thus, the range of values is −3 ≤ Δc2 ≤ 1.

5. Since c2 = 3 + Δc2, add 3 to this range of values, which yields

0 ≤ c2 ≤ 4

as the allowable range for c2.

With just two decision variables, this allowable range can be verified graphically by using Fig. 7.5 with an objective function of Z = 3x1 + c2x2. With the current value of c2 = 3, the optimal solution is (4, 3/2). When c2 is increased, this solution remains optimal only for c2 ≤ 4. For c2 > 4, (0, 9/2) becomes optimal (with a tie at c2 = 4), because of the constraint boundary 3x1 + 4x2 = 18. When c2 is decreased instead, (4, 3/2) remains optimal only for c2 ≥ 0. For c2 < 0, (4, 0) becomes optimal because of the constraint boundary x1 = 4.

In a similar manner, the allowable range for c1 (with c2 fixed at 3) can be derived either algebraically or graphically to be c1 ≥ 9/4. (Problem 7.2-10 asks you to verify this both ways.) Thus, the allowable decrease for c1 from its current value of 3 is only 3/4. However, it is possible to decrease c1 by a larger amount without changing the optimal solution if c2 also decreases sufficiently. For example, suppose that both c1 and c2 are decreased by 1 from their current value of 3, so that the objective function changes from Z = 3x1 + 3x2 to Z = 2x1 + 2x2.
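Steps 2 through 5 amount to keeping two row-0 coefficients nonnegative, which is a short exact computation. This sketch uses the row 0 and row 2 coefficients of the nonbasic variables x3 and x5 from the bottom tableau of Table 7.8:

```python
from fractions import Fraction as F

# Row 0 and row 2 coefficients of the nonbasic variables (x3, x5) in the
# final tableau for Variation 5 (bottom of Table 7.8).
row0 = {'x3': F(3, 4), 'x5': F(3, 4)}
row2 = {'x3': F(-3, 4), 'x5': F(1, 4)}

# Each new row 0 coefficient is row0 + dc2 * row2 and must stay
# nonnegative, which bounds the increment dc2 from below and above.
lower, upper = -float('inf'), float('inf')
for var in row0:
    if row2[var] > 0:                 # binds dc2 from below
        lower = max(lower, -row0[var] / row2[var])
    elif row2[var] < 0:               # binds dc2 from above
        upper = min(upper, -row0[var] / row2[var])

c2_current = 3
print(lower, upper)                               # -3 1
print(c2_current + lower, c2_current + upper)     # 0 4 (allowable range for c2)
```

The same loop works for any basic variable's objective coefficient once the relevant row 0 and pivot-row coefficients have been read from the final tableau.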
According to the 100 percent rule for simultaneous changes in objective function coefficients, the percentages of allowable changes are 133 1/3 percent and 33 1/3 percent, respectively, which sum to far over 100 percent. However, the slope of the objective function line has not changed at all, so (4, 3/2) still is optimal.

Case 4—Introduction of a New Constraint

In this case, a new constraint must be introduced to the model after it has already been solved. This case may occur because the constraint was overlooked initially or because new considerations have arisen since the model was formulated. Another possibility is that the constraint was deleted purposely to decrease computational effort because it appeared to be less restrictive than other constraints already in the model, but now this impression needs to be checked with the optimal solution actually obtained.

To see if the current optimal solution would be affected by a new constraint, all you have to do is to check directly whether the optimal solution satisfies the constraint. If it does, then it would still be the best feasible solution (i.e., the optimal solution), even if the constraint were added to the model. The reason is that a new constraint can only eliminate some previously feasible solutions without adding any new ones.

If the new constraint does eliminate the current optimal solution, and if you want to find the new solution, then introduce this constraint into the final simplex tableau (as an additional row) just as if this were the initial tableau, where the usual additional variable (slack variable or artificial variable) is designated to be the basic variable for this new row. Because the new row probably will have nonzero coefficients for some of the other basic variables, the conversion to proper form from Gaussian elimination is applied next, and then the reoptimization step is applied in the usual way.
■ FIGURE 7.6 Feasible region for Variation 6 of the Wyndor model where Variation 2 (Fig. 7.2) has been revised by adding the new constraint, 2x1 + 3x2 ≤ 24.

Just as for some of the preceding cases, this procedure for Case 4 is a streamlined version of the general procedure summarized at the end of Sec. 7.1. The only question to be addressed for this case is whether the previously optimal solution still is feasible, so step 5 (optimality test) has been deleted. Step 4 (feasibility test) has been replaced by a much quicker test of feasibility (does the previously optimal solution satisfy the new constraint?) to be performed right after step 1 (revision of model). It is only if this test provides a negative answer, and you wish to reoptimize, that steps 2, 3, and 6 are used (revision of final tableau, conversion to proper form from Gaussian elimination, and reoptimization).

Example (Variation 6 of the Wyndor Model). To illustrate this case, we consider Variation 6 of the Wyndor Glass Co. model, which simply introduces the new constraint 2x1 + 3x2 ≤ 24 into the Variation 2 model given in Table 7.5. The graphical effect is shown in Fig. 7.6. The previous optimal solution (0, 9) violates the new constraint, so the optimal solution changes to (0, 8).

To analyze this example algebraically, note that (0, 9) yields 2x1 + 3x2 = 27 > 24, so this previous optimal solution is no longer feasible. To find the new optimal solution, add the new constraint to the current final simplex tableau as just described, with the slack variable x6 as its initial basic variable. This step yields the first tableau shown in Table 7.9.
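The Case 4 feasibility test itself is a one-line check. A minimal sketch for Variation 6:

```python
def satisfies_new_constraint(x, coefficients, rhs):
    """Case 4 test: the previous optimal solution remains optimal
    if and only if it satisfies the new <= constraint."""
    return sum(c * v for c, v in zip(coefficients, x)) <= rhs

previous_optimum = (0, 9)   # optimal (x1, x2) for Variation 2
# New constraint 2*x1 + 3*x2 <= 24 from Variation 6:
print(satisfies_new_constraint(previous_optimum, (2, 3), 24))
# False: 2(0) + 3(9) = 27 > 24, so the tableau must be revised and reoptimized
print(satisfies_new_constraint((0, 8), (2, 3), 24))
# True: the new optimal solution (0, 8) satisfies the constraint with equality
```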
The conversion to proper form from Gaussian elimination then requires subtracting from the new row the product, 3 times row 2, which identifies the current basic solution x3 = 4, x2 = 9, x4 = 6, x6 = −3 (x1 = 0, x5 = 0), as shown in the second tableau. Applying the dual simplex method (described in Sec. 8.1) to this tableau then leads in just one iteration (more are sometimes needed) to the new optimal solution in the last tableau of Table 7.9.

■ TABLE 7.9 Sensitivity analysis procedure applied to Variation 6 of the Wyndor Glass Co. model

                                              Coefficient of:
                          Basic
                          Variable   Eq.    Z    x1     x2   x3   x4    x5     x6    Right Side
Revised final tableau     Z          (0)    1   9/2     0    0    0    5/2     0        45
                          x3         (1)    0    1      0    1    0     0      0         4
                          x2         (2)    0   3/2     1    0    0    1/2     0         9
                          x4         (3)    0   −3      0    0    1    −1      0         6
                          x6         (New)  0    2      3    0    0     0      1        24

Converted to              Z          (0)    1   9/2     0    0    0    5/2     0        45
proper form               x3         (1)    0    1      0    1    0     0      0         4
                          x2         (2)    0   3/2     1    0    0    1/2     0         9
                          x4         (3)    0   −3      0    0    1    −1      0         6
                          x6         (New)  0  −5/2     0    0    0   −3/2     1        −3

New final tableau after   Z          (0)    1   1/3     0    0    0     0     5/3       40
reoptimization (only one  x3         (1)    0    1      0    1    0     0      0         4
iteration of dual simplex x2         (2)    0   2/3     1    0    0     0     1/3        8
method needed in          x4         (3)    0  −4/3     0    0    1     0    −2/3        8
this case)                x5         (New)  0   5/3     0    0    0     1    −2/3        2

So far we have described how to test specific changes in the model parameters. Another common approach to sensitivity analysis, called parametric linear programming, is to vary one or more parameters continuously over some interval(s) to see when the optimal solution changes. We shall describe the algorithms for performing parametric linear programming in Sec. 8.2.

■ 7.3 PERFORMING SENSITIVITY ANALYSIS ON A SPREADSHEET4

With the help of Solver, spreadsheets provide an alternative, relatively straightforward way of performing much of the sensitivity analysis described in Secs. 7.1 and 7.2. The spreadsheet approach is basically the same for each of the cases considered in Sec. 7.2 for the types of changes made in the original model.
Therefore, we will focus on only the effect of changes in the coefficients of the variables in the objective function (Cases 2a and 3 in Sec. 7.2). We will illustrate this effect by making changes in the original Wyndor model formulated in Sec. 3.1, where the coefficients of x1 (number of batches of the new door produced per week) and x2 (number of batches of the new window produced per week) in the objective function are

c1 = 3 = profit (in thousands of dollars) per batch of the new type of door,
c2 = 5 = profit (in thousands of dollars) per batch of the new type of window.

4 We have written this section in a way that can be understood without first reading either of the preceding sections in this chapter. However, Sec. 4.7 is important background for the latter part of this section.

For your convenience, the spreadsheet formulation of this model (Fig. 3.22) is repeated here as Fig. 7.7. Note that the cells containing the quantities to be changed are ProfitPerBatch (C4:D4).

Spreadsheets actually provide three methods of performing sensitivity analysis. One is to check the effect of an individual change in the model by simply making the change on the spreadsheet and re-solving. A second is to systematically generate a table on a single spreadsheet that shows the effect of a series of changes in one or two parameters of the model. A third is to obtain and apply Excel's sensitivity report. We describe each of these methods in turn below.

Checking Individual Changes in the Model

One of the great strengths of a spreadsheet is the ease with which it can be used interactively to perform various kinds of sensitivity analysis. Once Solver has been set up to obtain an optimal solution, you can immediately find out what would happen if one of the parameters of the model were changed to some other value. All you have to do is make this change on the spreadsheet and then click on the Solve button again.
To illustrate, suppose that Wyndor management is quite uncertain about what the profit per batch of doors (c1) will turn out to be. Although the figure of 3 given in Fig. 7.7 is considered to be a reasonable initial estimate, management feels that the true profit could end up deviating substantially from this figure in either direction. However, the range between c1 = 2 and c1 = 5 is considered fairly likely.

■ FIGURE 7.7 The spreadsheet model and the optimal solution obtained for the original Wyndor problem before performing sensitivity analysis. (Solver Parameters: Set Objective Cell: TotalProfit; To: Max; By Changing Variable Cells: BatchesProduced; Subject to the Constraints: HoursUsed <= HoursAvailable; Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP. Range names: BatchesProduced C12:D12, HoursAvailable G7:G9, HoursUsed E7:E9, HoursUsedPerBatchProduced C7:D9, ProfitPerBatch C4:D4, TotalProfit G12.)

■ FIGURE 7.8 The revised Wyndor problem where the estimate of the profit per batch of doors has been decreased from c1 = 3 to c1 = 2, but no change occurs in the optimal solution for the product mix.
Figure 7.8 shows what would happen if the profit per batch of doors were to drop from c1 = 3 to c1 = 2. Comparing with Fig. 7.7, there is no change at all in the optimal solution for the product mix. In fact, the only changes in the new spreadsheet are the new value of c1 in cell C4 and a decrease of 2 (in thousands of dollars) in the total profit shown in cell G12 (because each of the two batches of doors produced per week provides 1 thousand dollars less profit).

Because the optimal solution does not change, we now know that the original estimate of c1 = 3 can be considerably too high without invalidating the model's optimal solution. But what happens if this estimate is too low instead? Figure 7.9 shows what would happen if c1 were increased to c1 = 5. Again, there is no change in the optimal solution. Therefore, we now know that the range of values of c1 over which the current optimal solution remains optimal (i.e., the allowable range discussed in Sec. 7.2) includes the range from 2 to 5 and may extend further.

Because the original value of c1 = 3 can be changed considerably in either direction without changing the optimal solution, c1 is a relatively insensitive parameter. It is not necessary to pin down this estimate with great accuracy in order to have confidence that the model is providing the correct optimal solution.

This may be all the information that is needed about c1. However, if there is a good possibility that the true value of c1 will turn out to be even outside this broad range from 2 to 5, further investigation would be desirable. How much higher or lower can c1 be before the optimal solution would change? Figure 7.10 demonstrates that the optimal solution would indeed change if c1 is increased all the way up to c1 = 10.
Thus, we now know that this change occurs somewhere between 5 and 10 during the process of increasing c1.

■ FIGURE 7.9 The revised Wyndor problem where the estimate of the profit per batch of doors has been increased from c1 = 3 to c1 = 5, but no change occurs in the optimal solution for the product mix.

■ FIGURE 7.10 The revised Wyndor problem where the estimate of the profit per batch of doors has been increased from c1 = 3 to c1 = 10, which results in a change in the optimal solution for the product mix.

Using a Parameter Analysis Report to Do Sensitivity Analysis Systematically

To pin down just when the optimal solution will change, we could continue selecting new values of c1 at random. However, a better approach is to systematically consider a range of values of c1. The Analytic Solver Platform for Education (ASPE), first introduced in Sec. 3.5, can generate a parameter analysis report that is designed to do just this sort of analysis. Instructions for installing ASPE are on a supplementary insert included with the book and also on the book's website, www.mhhe.com/hillier.

The data cell containing a parameter that will be systematically varied (ProfitPerBatchOfDoors in cell C4 in this case) is referred to as a parameter cell.
A parameter analysis report is used to show the results in the changing cells and/or the objective cell for various trial values in the parameter cell. For each trial value, these results are obtained by using Solver to re-solve the problem.

To generate a parameter analysis report, the first step is to define the parameter cell. In this case, select cell C4 (the profit per batch of doors) and choose Optimization under the Parameters menu on the ASPE ribbon. In the parameter cell dialog box, shown in Fig. 7.11, enter the range of trial values for the parameter cell. The entries shown specify that c1 will be systematically varied from 1 to 10. If desired, additional parameter cells could be defined in this same way, but we will not do so at this point.

■ FIGURE 7.11 The parameter cell dialog box for c1 (cell C4) specifies here that this parameter cell for the Wyndor problem will be systematically varied from 1 to 10.

Next choose Optimization > Parameter Analysis under the Reports menu on the ASPE ribbon. This brings up the dialog box shown in Fig. 7.12 that allows you to specify which parameter cells to vary and which results to show. The choice of which parameter cells to vary is made under Parameters in the bottom half of the dialog box. Clicking on (>>) will select all of the parameter cells defined so far (moving them to the box on the right). In the Wyndor example, only one parameter has been defined, so this causes the single parameter cell (ProfitPerBatchOfDoors or cell C4) to appear on the right. If more parameter cells had been defined, particular parameter cells can be chosen for immediate analysis by clicking on the + next to Wyndor to reveal the list of parameter cells that have been defined in the Wyndor spreadsheet. Clicking on (>) then moves individual parameter cells to the list on the right.

The choice of which results to show as the parameter cell is varied is made in the upper half of the dialog box.
Clicking on (>) will cause all of the changing cells (DoorBatchesProduced or C12, and WindowBatchesProduced or D12) and the objective cell (TotalProfit or G12) to appear in the list on the right. To instead choose a subset of these cells, click on the small + next to Variables (or Objective) to reveal a list of all the changing cells (or objective cell) and then click on > to move that changing cell (or objective cell) to the right.

■ FIGURE 7.12 The dialog box for the parameter analysis report specifies here for the Wyndor problem that the ProfitPerBatchOfDoors (C4) parameter cell will be varied and that results from all the changing cells (DoorBatchesProduced and WindowBatchesProduced) and the objective cell (TotalProfit) will be shown.

■ FIGURE 7.13 The parameter analysis report that shows the effect of systematically varying the estimate of the profit per batch of doors for the Wyndor problem.

ProfitPerBatchOfDoors   DoorBatchesProduced   WindowBatchesProduced   TotalProfit
 1                       2                     6                       32
 2                       2                     6                       34
 3                       2                     6                       36
 4                       2                     6                       38
 5                       2                     6                       40
 6                       2                     6                       42
 7                       2                     6                       44
 8                       4                     3                       47
 9                       4                     3                       51
10                       4                     3                       55

Finally, enter the number of Major Axis Points to specify how many different values of the parameter cell will be shown in the parameter analysis report. The values will be spread evenly between the lower and upper values specified in the parameter cell dialog box in Fig. 7.11. With 10 major axis points, a lower value of 1, and an upper value of 10, the parameter analysis report will show results for c1 of 1, 2, 3, . . . , 10. Clicking on the OK button generates the parameter analysis report shown in Fig. 7.13. One at a time, the trial values listed in the first column of the table are put into the parameter cell (ProfitPerBatchOfDoors or C4) and then Solver is called on to re-solve the problem.
The optimal results for that particular trial value of the parameter cell are then shown in the remaining columns—DoorBatchesProduced (C12), WindowBatchesProduced (D12), and TotalProfit (G12). This is repeated automatically for each remaining trial value of the parameter cell. The end result (which happens very quickly for small problems) is the completely filled-in parameter analysis report shown in Fig. 7.13.

The parameter analysis report reveals that the optimal solution remains the same all the way from c1 = 1 (and perhaps lower) to c1 = 7, but that a change occurs somewhere between 7 and 8. We next could systematically consider values of c1 between 7 and 8 to determine more closely where the optimal solution changes. However, this is not necessary since, as discussed a little later, a shortcut is to use the Excel sensitivity report to determine exactly where the optimal solution changes.

Thus far, we have illustrated how to systematically investigate the effect of changing only c1 (cell C4 in Fig. 7.7). The approach is the same for c2 (cell D4). In fact, a parameter analysis report can be used in this way to investigate the effect of changing any single data cell in the model, including any cell in HoursAvailable (G7:G9) or HoursUsedPerBatchProduced (C7:D9). We next will illustrate how to investigate simultaneous changes in two data cells with a spreadsheet, first by itself and then with the help of a parameter analysis report.

Checking Two-Way Changes in the Model

When using the original estimates for c1 (3) and c2 (5), the optimal solution indicated by the model (Fig. 7.7) is heavily weighted toward producing the windows (6 batches per week) rather than the doors (only 2 batches per week). Suppose that Wyndor management is concerned about this imbalance and feels that the problem may be that the estimate for c1 is too low and the estimate for c2 is too high.
This raises the question: If the estimates are indeed off in these directions, would this lead to a more balanced product mix being the most profitable one? (Keep in mind that it is the ratio of c1 to c2 that is relevant for determining the optimal product mix, so having their estimates be off in the same direction with little change in this ratio is unlikely to change the optimal product mix.)

■ FIGURE 7.14 The revised Wyndor problem where the estimates of the profits per batch of doors and windows have been changed to c1 = 4.5 and c2 = 4, respectively, but no change occurs in the optimal product mix of 2 batches of doors and 6 batches of windows (Total Profit = $33,000 per week).

■ FIGURE 7.15 The revised Wyndor problem where the estimates of the profits per batch of doors and windows have been changed to 6 and 3, respectively, which results in a change in the optimal product mix to 4 batches of doors and 3 batches of windows (Total Profit = $33,000 per week).

This question can be answered in a matter of seconds simply by substituting new estimates of the profits per batch in the original spreadsheet in Fig. 7.7 and clicking on the Solve button. Figure 7.14 shows that new estimates of 4.5 for doors and 4 for windows cause no change at all in the solution for the optimal product mix. (The total profit does change, but this occurs only because of the changes in the profits per batch.)
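This kind of what-if re-solving can also be sketched outside the spreadsheet. The sketch below is our own illustration (the function name and tolerances are assumptions, not part of the text or of Solver): a minimal corner-point enumerator for two-variable linear programs, which is just the graphical method expressed in code, re-solving the Wyndor model for any pair of profit estimates.

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize c[0]*x1 + c[1]*x2 subject to A x <= b and x1, x2 >= 0
    by enumerating corner points of the feasible region (the graphical
    method expressed in code)."""
    # Boundary lines a1*x + a2*y = r for each constraint, plus the axes.
    lines = [(row[0], row[1], rhs) for row, rhs in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best, best_z = None, float("-inf")
    for (a1, a2, r), (b1, b2, s) in combinations(lines, 2):
        det = a1 * b2 - b1 * a2
        if abs(det) < 1e-9:
            continue  # parallel boundary lines never intersect
        x = (r * b2 - s * a2) / det
        y = (a1 * s - b1 * r) / det
        if min(x, y) < -1e-9:
            continue  # violates nonnegativity
        if all(row[0] * x + row[1] * y <= rhs + 1e-9 for row, rhs in zip(A, b)):
            z = c[0] * x + c[1] * y
            if z > best_z:
                best, best_z = (round(x, 6), round(y, 6)), round(z, 6)
    return best, best_z

# Wyndor constraints: x1 <= 4, 2 x2 <= 12, 3 x1 + 2 x2 <= 18.
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

print(solve_2var_lp([3, 5], A, b))    # original estimates -> ((2.0, 6.0), 36.0)
print(solve_2var_lp([4.5, 4], A, b))  # revised estimates  -> ((2.0, 6.0), 33.0)
```

Repeating the call for other (c1, c2) pairs reproduces the kind of systematic sweep a parameter analysis report automates.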
Would even larger changes in the estimates of profits per batch finally lead to a change in the optimal product mix? Figure 7.15 shows that this does happen, yielding a relatively balanced product mix of (x1, x2) = (4, 3), when estimates of 6 for doors and 3 for windows are used.

Figures 7.14 and 7.15 don't reveal where the optimal product mix changes as the profit estimates increase from 4.5 to 6 for doors and decrease from 4 to 3 for windows. We next describe how a parameter analysis report can systematically help to pin this down better.

Using a Two-Way Parameter Analysis Report (ASPE) for This Analysis

Using ASPE, a two-way parameter analysis report provides a way of systematically investigating the effect when the estimates of both profits per batch are inaccurate. This kind of parameter analysis report shows the results in a single output cell for various trial values in two parameter cells. Therefore, for example, it can be used to show how TotalProfit (G12) in Fig. 7.7 varies over a range of trial values in the two parameter cells, ProfitPerBatchOfDoors (C4) and ProfitPerBatchOfWindows (D4). For each pair of trial values in these data cells, Solver is called on to re-solve the problem.

■ FIGURE 7.16 The dialog box for the parameter analysis report specifies here that the ProfitPerBatchOfDoors (C4) and ProfitPerBatchOfWindows (D4) parameter cells will be varied and that results from the objective cell, TotalProfit (G12), will be shown for the Wyndor problem.

To create such a two-way parameter analysis report for the Wyndor problem, both ProfitPerBatchOfDoors (C4) and ProfitPerBatchOfWindows (D4) need to be defined as parameter cells. In turn, select cells C4 and D4, then choose Optimization under the Parameters menu on the ASPE ribbon, and then enter the range of trial values for each parameter cell (as was done in Fig. 7.11 in the previous section).
For this example, ProfitPerBatchOfDoors (C4) will be varied from 3 to 6 while ProfitPerBatchOfWindows (D4) will be varied from 1 to 5. Next, choose Optimization>Parameter Analysis under the Reports menu on the ASPE ribbon to bring up the dialog box shown in Fig. 7.16. For a two-way parameter analysis report, two parameter cells are chosen, but only a single result can be shown. Under Parameters, clicking on (>>) chooses both of the defined parameter cells, ProfitPerBatchOfDoors (C4) and ProfitPerBatchOfWindows (D4). Under Results, click on (<<) to clear out the list of cells on the right, click on the + next to Objective to reveal the objective cell (TotalProfit or G12), select TotalProfit, and then click on > to move this cell to the right. The next step is to choose the option in the menu at the bottom to Vary Two Selected Parameters Independently. This will allow both parameter cells to be varied independently over their entire ranges. The numbers of different values of the first parameter cell and the second parameter cell to be shown in the parameter analysis report are entered in Major Axis Points and Minor Axis Points, respectively. These values will be spread evenly over the range of values specified in the parameter dialog box for each parameter cell.

■ FIGURE 7.17 The parameter analysis report that shows how the optimal TotalProfit (G12) changes when systematically varying the estimates of both the ProfitPerBatchOfDoors (C4) and the ProfitPerBatchOfWindows (D4) for the Wyndor problem.

■ FIGURE 7.18 The pair of parameter analysis reports that show how the optimal number of doors to produce (top report) and the optimal number of windows to produce (bottom report) change when systematically varying the estimates of both the unit profit for doors and the unit profit for windows for the Wyndor problem.
[Fig. 7.17]
TotalProfit               ProfitPerBatchOfWindows
ProfitPerBatchOfDoors      1    2    3    4    5
          3               15   18   24   30   36
          4               19   22   26   32   38
          5               23   26   29   34   40
          6               27   30   33   36   42

Therefore, choosing 4 and 5 for the respective numbers of values, as shown in Fig. 7.16, will vary ProfitPerBatchOfDoors over the four values of 3, 4, 5, and 6 while simultaneously varying ProfitPerBatchOfWindows over the five values of 1, 2, 3, 4, and 5. Clicking on the OK button generates the parameter analysis report shown in Fig. 7.17. The trial values for the respective parameter cells are listed in the first column and first row of the table. For each combination of a trial value from the first column and from the first row, Solver has solved for the value of the output cell of interest (the objective cell for this example) and entered it into the corresponding column and row of the table.

It also is possible to choose either DoorBatchesProduced (C12) or WindowBatchesProduced (D12), instead of TotalProfit (G12), as the result to show in the dialog box of Fig. 7.16. A similar parameter analysis report then could have been generated to show either the optimal number of doors to produce or the optimal number of windows to produce for each combination of values for the unit profits. These two parameter analysis reports are shown in Fig. 7.18. The upper right-hand corner (cell F3) of both reports, taken together, gives the optimal solution of (x1, x2) = (2, 6) when using the original profit estimates of 3 per batch of doors and 5 per batch of windows. Moving down from this cell corresponds to increasing this estimate for doors while moving to the left amounts to decreasing the estimate for windows. (The cells when moving up or to the right of cell F3 are not shown because these changes would only increase the attractiveness of (x1, x2) = (2, 6) as the optimal solution.) Note that (x1, x2) = (2, 6) continues to be the optimal solution for all the cells near cell F3.
This indicates that the original estimates of profit per batch would need to be very inaccurate indeed before the optimal product mix would change.

[Fig. 7.18, top report]
DoorBatchesProduced       ProfitPerBatchOfWindows
ProfitPerBatchOfDoors      1    2    3    4    5
          3                4    2    2    2    2
          4                4    4    2    2    2
          5                4    4    4    2    2
          6                4    4    4    2    2

[Fig. 7.18, bottom report]
WindowBatchesProduced     ProfitPerBatchOfWindows
ProfitPerBatchOfDoors      1    2    3    4    5
          3                3    6    6    6    6
          4                3    3    6    6    6
          5                3    3    3    6    6
          6                3    3    3    6    6

Using the Sensitivity Report to Perform Sensitivity Analysis

You now have seen how some sensitivity analysis can be performed readily on a spreadsheet, either by interactively making changes in data cells and re-solving or by using a parameter analysis report to generate similar information systematically. However, there is a shortcut. Some of the same information (and more) can be obtained more quickly and precisely by simply using the sensitivity report provided by Solver. (Essentially the same sensitivity report is a standard part of the output available from other linear programming software packages as well, including MPL/Solvers, LINDO, and LINGO.)

Section 4.7 already has discussed the sensitivity report and how it is used to perform sensitivity analysis. Figure 4.10 in that section shows the sensitivity report for the Wyndor problem. Part of this report is shown here in Fig. 7.19. Rather than repeating Sec. 4.7, we will focus here on illustrating how the sensitivity report can efficiently address the specific questions raised in the preceding subsections for the Wyndor problem.

The question considered in the first two subsections was how far the initial estimate of 3 for c1 could be off before the current optimal solution, (x1, x2) = (2, 6), would change. Figures 7.9 and 7.10 showed that the optimal solution would not change until c1 is raised to somewhere between 5 and 10.
Figure 7.13 then narrowed down the gap for where the optimal solution changes to somewhere between 7 and 8. This figure also showed that if the initial estimate of 3 for c1 is too high rather than too low, c1 would need to be dropped to somewhere below 1 before the optimal solution would change.

Now look at how the portion of the sensitivity report in Fig. 7.19 addresses this same question. The DoorBatchesProduced row in this report provides the following information about c1:

Current value of c1: 3.
Allowable increase in c1: 4.5. So c1 ≤ 3 + 4.5 = 7.5.
Allowable decrease in c1: 3. So c1 ≥ 3 - 3 = 0.
Allowable range for c1: 0 ≤ c1 ≤ 7.5.

Therefore, if c1 is changed from its current value (without making any other change in the model), the current solution (x1, x2) = (2, 6) will remain optimal so long as the new value of c1 is within this allowable range, 0 ≤ c1 ≤ 7.5.

Figure 7.20 provides graphical insight into this allowable range. For the original value of c1 = 3, the solid line in the figure shows the slope of the objective function line passing through (2, 6). At the lower end of the allowable range, where c1 = 0, the objective function line that passes through (2, 6) now is line B in the figure, so every point on the line segment between (0, 6) and (2, 6) is an optimal solution. For any value of c1 < 0, the objective function line will have rotated even further so that (0, 6) becomes the only optimal solution. At the upper end of the allowable range, when c1 = 7.5, the objective function line that passes through (2, 6) becomes line C, so every point on the line segment between (2, 6) and (4, 3) becomes an optimal solution. For any value of c1 > 7.5, the objective

■ FIGURE 7.19 Part of the sensitivity report generated by Solver for the original Wyndor problem (Fig. 6.3), where the last three columns identify the allowable ranges for the profits per batch of doors and windows.
Variable Cells

Cell    Name                   Final  Reduced  Objective    Allowable  Allowable
                               Value  Cost     Coefficient  Increase   Decrease
$C$12   DoorBatchesProduced      2      0          3           4.5        3
$D$12   WindowBatchesProduced    6      0          5          1E+30       3

■ FIGURE 7.20 The two dashed lines that pass through solid constraint boundary lines are the objective function lines when c1 (the profit per batch of doors) is at an endpoint of its allowable range, 0 ≤ c1 ≤ 7.5, since either line or any objective function line in between still yields (x1, x2) = (2, 6) as an optimal solution for the Wyndor problem.

function line is even steeper than line C, so (4, 3) becomes the only optimal solution. Consequently, the original optimal solution, (x1, x2) = (2, 6), remains optimal only as long as 0 ≤ c1 ≤ 7.5.

The procedure called Graphical Method and Sensitivity Analysis in IOR Tutorial is designed to help you perform this kind of graphical analysis. After you enter the model for the original Wyndor problem, the module provides you with the graph shown in Fig. 7.20 (without the dashed lines). You then can simply drag one end of the objective line up or down to see how far you can increase or decrease c1 before (x1, x2) = (2, 6) will no longer be optimal.

Conclusion: The allowable range for c1 is 0 ≤ c1 ≤ 7.5, because (x1, x2) = (2, 6) remains optimal over this range but not beyond. (When c1 = 0 or c1 = 7.5, there are multiple optimal solutions, but (x1, x2) = (2, 6) still is one of them.)

With the range this wide around the original estimate of 3 for the profit per batch of doors, we can be quite confident of obtaining the correct optimal solution for the true profit. Now let us turn to the question considered in the preceding two subsections. What would happen if the estimate of c1 (3) were too low and the estimate of c2 (5) were too high simultaneously?
Specifically, how far can the estimates be off in these directions before the current optimal solution, (x1, x2) = (2, 6), would change?

Figure 7.14 showed that if c1 were increased by 1.5 (from 3 to 4.5) and c2 were decreased by 1 (from 5 to 4), the optimal solution would remain the same. Figure 7.15 then indicated that doubling these changes would result in a change in the optimal solution. However, it is unclear where the change in the optimal solution occurs. Figure 7.18 provided further information, but not a definitive answer to this question. Fortunately, additional information can be gleaned from the sensitivity report (Fig. 7.19) by using its allowable increases and allowable decreases in c1 and c2. The key is to apply the following rule (as first stated in Sec. 7.2):

The 100 Percent Rule for Simultaneous Changes in Objective Function Coefficients: If simultaneous changes are made in the coefficients of the objective function, calculate for each change the percentage of the allowable change (increase or decrease) for that coefficient to remain within its allowable range. If the sum of the percentage changes does not exceed 100 percent, the original optimal solution definitely will still be optimal. (If the sum does exceed 100 percent, then we cannot be sure.)

This rule does not spell out what happens if the sum of the percentage changes does exceed 100 percent. The consequence depends on the directions of the changes in the coefficients. Remember that it is the ratios of the coefficients that are relevant for determining the optimal solution, so the original optimal solution might indeed remain optimal even when the sum of the percentage changes greatly exceeds 100 percent if the changes in the coefficients are in the same direction.
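The bookkeeping in this rule can be sketched as a small helper. The sketch below is our own illustration (the function name and argument layout are assumptions, not from the text): it totals the percentages of allowable change used across all the coefficients, taking the allowable increases and decreases from the sensitivity report in Fig. 7.19, and reports whether the rule's guarantee applies.

```python
def hundred_percent_rule(changes):
    """Apply the 100 percent rule for simultaneous changes in objective
    function coefficients.

    changes: list of tuples (current, new, allowable_increase,
    allowable_decrease), with the allowable amounts taken from the
    sensitivity report. Returns the summed percentage of allowable
    change used and whether optimality is still guaranteed."""
    total = 0.0
    for current, new, allow_inc, allow_dec in changes:
        if new >= current:
            total += 100 * (new - current) / allow_inc
        else:
            total += 100 * (current - new) / allow_dec
    return total, total <= 100

# From Fig. 7.19: c1 has allowable increase 4.5 and allowable decrease 3;
# c2 has allowable increase 1E+30 and allowable decrease 3.
print(hundred_percent_rule([(3, 4.5, 4.5, 3), (5, 4, 1e30, 3)]))  # sum 66 2/3, guaranteed
print(hundred_percent_rule([(3, 6, 4.5, 3), (5, 3, 1e30, 3)]))    # sum 133 1/3, no guarantee
```

Note that a sum above 100 percent only withdraws the guarantee; it does not by itself establish that the optimal solution changes.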
Thus, exceeding 100 percent may or may not change the optimal solution, but so long as 100 percent is not exceeded, the original optimal solution definitely will still be optimal. Keep in mind that we can safely use the entire allowable increase or decrease in a single objective function coefficient only if none of the other coefficients have changed at all. With simultaneous changes in the coefficients, we focus on the percentage of the allowable increase or decrease that is being used for each coefficient.

To illustrate, consider the Wyndor problem again, along with the information provided by the sensitivity report in Fig. 7.19. Suppose now that the estimate of c1 has increased from 3 to 4.5 while the estimate of c2 has decreased from 5 to 4. The calculations for the 100 percent rule now are

c1: 3 → 4.5.
    Percentage of allowable increase = 100 ((4.5 - 3)/4.5)% = 33 1/3%
c2: 5 → 4.
    Percentage of allowable decrease = 100 ((5 - 4)/3)% = 33 1/3%
                                                    Sum = 66 2/3%.

Since the sum of the percentages does not exceed 100 percent, the original optimal solution (x1, x2) = (2, 6) definitely is still optimal, just as we found earlier in Fig. 7.14.

Now suppose that the estimate of c1 has increased from 3 to 6 while the estimate of c2 has decreased from 5 to 3. The calculations for the 100 percent rule now are

c1: 3 → 6.
    Percentage of allowable increase = 100 ((6 - 3)/4.5)% = 66 2/3%
c2: 5 → 3.
    Percentage of allowable decrease = 100 ((5 - 3)/3)% = 66 2/3%
                                                   Sum = 133 1/3%.

■ FIGURE 7.21 When the estimates of the profits per batch of doors and windows change to c1 = 5.25 and c2 = 3.5, which lies at the edge of what is allowed by the 100 percent rule, the graphical method shows that (x1, x2) = (2, 6) still is an optimal solution, but now every other point on the line segment between this solution and (4, 3) also is optimal. (The objective function line shown is Profit = 31.5 = 5.25x1 + 3.5x2.)
Since the sum of the percentages now exceeds 100 percent, the 100 percent rule says that we can no longer guarantee that (x1, x2) = (2, 6) is still optimal. In fact, we found earlier in both Figs. 7.15 and 7.18 that the optimal solution has changed to (x1, x2) = (4, 3).

These results suggest how to find just where the optimal solution changes while c1 is being increased and c2 is being decreased by these relative amounts. Since 100 percent is midway between 66 2/3 percent and 133 1/3 percent, the sum of the percentage changes will equal 100 percent when the values of c1 and c2 are midway between their values in the above cases. In particular, c1 = 5.25 is midway between 4.5 and 6 and c2 = 3.5 is midway between 4 and 3. The corresponding calculations for the 100 percent rule are

c1: 3 → 5.25.
    Percentage of allowable increase = 100 ((5.25 - 3)/4.5)% = 50%
c2: 5 → 3.5.
    Percentage of allowable decrease = 100 ((5 - 3.5)/3)% = 50%
                                                     Sum = 100%.

Although the sum of the percentages equals 100 percent, the fact that it does not exceed 100 percent guarantees that (x1, x2) = (2, 6) is still optimal. Figure 7.21 shows graphically that both (2, 6) and (4, 3) are now optimal, as well as all the points on the line segment connecting these two points. However, if c1 and c2 were to be changed any further from their original values (so that the sum of the percentages exceeds 100 percent), the objective function line would be rotated so far toward the vertical that (x1, x2) = (4, 3) would become the only optimal solution.

At the same time, keep in mind that having the sum of the percentages of allowable changes exceed 100 percent does not automatically mean that the optimal solution will change. For example, suppose that the estimates of both unit profits are halved. The resulting calculations for the 100 percent rule are

c1: 3 → 1.5.
    Percentage of allowable decrease = 100 ((3 - 1.5)/3)% = 50%
c2: 5 → 2.5.
    Percentage of allowable decrease = 100 ((5 - 2.5)/3)% = 83 1/3%
                                                    Sum = 133 1/3%.

Even though this sum exceeds 100 percent, Fig. 7.22 shows that the original optimal solution is still optimal. In fact, the objective function line has the same slope as the original objective function line (the solid line in Fig. 7.20). This happens whenever proportional changes are made to all the profit estimates, which will automatically lead to the same optimal solution.

■ FIGURE 7.22 When the estimates of the profits per batch of doors and windows change to c1 = 1.5 and c2 = 2.5 (half of their original values), the graphical method shows that the optimal solution still is (x1, x2) = (2, 6), even though the 100 percent rule says that the optimal solution might change. (The objective function line shown is Profit = 18 = 1.5x1 + 2.5x2.)

Other Types of Sensitivity Analysis

This section has focused on how to use a spreadsheet to investigate the effect of changes in only the coefficients of the variables in the objective function. One often is interested in investigating the effect of changes in the right-hand sides of the functional constraints as well. Occasionally you might even want to check whether the optimal solution would change if changes need to be made in some coefficients in the functional constraints.

The spreadsheet approach for investigating these other kinds of changes in the model is virtually the same as for the coefficients in the objective function. Once again, you can try out any changes in the data cells by simply making these changes on the spreadsheet and using Solver to re-solve the model. And once again, you can systematically check the effect of a series of changes in any one or two data cells by using a parameter analysis report. As already described in Sec.
4.7, the sensitivity report generated by Solver (or any other linear programming software package) also provides some valuable information, including the shadow prices, regarding the effect of changing the right-hand side of any single functional constraint. When changing a number of right-hand sides simultaneously, there also is a "100 percent rule" for this case that is analogous to the 100 percent rule for simultaneous changes in objective function coefficients. (See the Case 1 portion of Sec. 7.2 for details about how to investigate the effect of changes in right-hand sides, including the application of the 100 percent rule for simultaneous changes in right-hand sides.) The Solved Examples section of the book's website includes another example of using a spreadsheet to investigate the effect of changing individual right-hand sides.

■ 7.4 ROBUST OPTIMIZATION

As described in the preceding sections, sensitivity analysis provides an important way of dealing with uncertainty about the true values of the parameters in a linear programming model. The main purpose of sensitivity analysis is to identify the sensitive parameters, namely, those parameters that cannot be changed without changing the optimal solution. This is valuable information, since these are the parameters that need to be estimated with special care to minimize the risk of obtaining an erroneous optimal solution.

However, this is not the end of the story for dealing with linear programming under uncertainty. The true values of the parameters may not become known until considerably later, when the optimal solution (according to the model) is actually implemented. Therefore, even after estimating the sensitive parameters as carefully as possible, significant estimation errors can occur for these parameters along with even larger estimation errors for the other parameters. This can lead to unfortunate consequences. Perhaps the optimal solution (according to the model) will not be optimal after all.
In fact, it may not even be feasible. The seriousness of these unfortunate consequences depends somewhat on whether there is any latitude in the functional constraints in the model. It is useful to make the following distinction between these constraints.

A soft constraint is a constraint that actually can be violated a little bit without very serious complications. By contrast, a hard constraint is a constraint that must be satisfied. Robust optimization is especially designed for dealing with problems with hard constraints.

For very small linear programming problems, it often is not difficult to work around the complications that the optimal solution with respect to the model may no longer be optimal, and may not even be feasible, when the time comes to implement the solution. If the model contains only soft constraints, it may be OK to use a solution that is not quite feasible (according to the model). Even if some or all of the constraints are hard constraints, the situation depends upon whether it is possible to make a last-minute adjustment in the solution being implemented. (In some cases, the solution to be implemented will be locked into place well in advance.) If this is possible, it may be easy to see how to make a small adjustment in the solution to make it feasible. It may even be easy to see how to adjust the solution a little bit to make it optimal.

However, the situation is quite different when dealing with the larger linear programming problems that are typically encountered in practice. For example, Selected Reference 1 at the end of the chapter describes what happened when dealing with the problems in a library of 94 large linear programming problems (hundreds or thousands of constraints and variables). It was assumed that the parameters could be randomly in error by as much as 0.01 percent.
Even with such tiny errors throughout the model, the optimal solution was found to be infeasible in 13 of these problems, and badly so for 6 of the problems. Furthermore, it was not possible to see how the solution could be adjusted to make it feasible. If all the constraints in the model are hard constraints, this is a serious problem. Therefore, considering that the estimation errors for the parameters in many realistic linear programming problems often would be much larger than 0.01 percent (perhaps even 1 percent or more), there clearly is a need for a technique that will find a very good solution that is virtually guaranteed to be feasible. This is where the technique of robust optimization can play a key role.

The goal of robust optimization is to find a solution for the model that is virtually guaranteed to remain feasible and near optimal for all plausible combinations of the actual values for the parameters. This is a daunting goal, but an elaborate theory of robust optimization now has been developed, as presented in Selected References 1 and 3. Much of this theory (including various extensions of linear programming) is beyond the scope of this book, but we will introduce the basic concept by considering the following straightforward case of independent parameters.

Robust Optimization with Independent Parameters

This case makes four basic assumptions:

1. Each parameter has a range of uncertainty surrounding its estimated value.
2. This parameter can take any value between the minimum and maximum specified by this range of uncertainty.
3. This value is uninfluenced by the values taken on by the other parameters.
4. All the functional constraints are in either ≤ or ≥ form.
To guarantee that the solution will remain feasible regardless of the values taken on by these parameters within their ranges of uncertainty, we simply assign the most conservative value to each parameter as follows:

• For each functional constraint in ≤ form, use the maximum value of each aij and the minimum value of bi.
• For each functional constraint in ≥ form, do the opposite of the above.
• For an objective function in maximization form, use the minimum value of each cj.
• For an objective function in minimization form, use the maximum value of each cj.

We now will illustrate this approach by returning again to the Wyndor example.

Example

Continuing the prototype example for linear programming first introduced in Sec. 3.1, the management of the Wyndor Glass Company now is negotiating with a wholesale distributor that specializes in the distribution of doors and windows. The goal is to arrange with this distributor to sell all of the special new doors and windows (referred to as Products 1 and 2 in Sec. 3.1) after their production begins in the near future. The distributor is interested, but also is concerned that the volume of these doors and windows may be too small to justify this special arrangement. Therefore, the distributor has asked Wyndor to specify the minimum production rates of these products (measured by the number of batches produced per week) that Wyndor will guarantee, where Wyndor would need to pay a penalty if the rates fall below these minimum amounts.

Because these special new doors and windows have never been produced before, Wyndor management realizes that the parameters of their linear programming model formulated in Sec. 3.2 (and based on Table 3.1) are only estimates. For each product, the production time per batch in each plant (the aij) may turn out to be significantly different from the estimates given in Table 3.1. The same is true for the estimates of the profit per batch (the cj).
Arrangements currently are being made to reduce the production rates of certain current products in order to free up production time in each plant for the two new products. Therefore, there also is some uncertainty about how much production time will be available in each of the plants (the bi) for the new products. After further investigation, Wyndor staff now feels confident that they have identified the minimum and maximum quantities that could be realized for each of the parameters of the model after production begins. For each parameter, the range between this minimum and maximum quantity is referred to as its range of uncertainty. Table 7.10 shows the range of uncertainty for the respective parameters.

■ Table 7.10 Range of uncertainty for the parameters of the Wyndor Glass Co. model

Parameter   Range of Uncertainty
  a11           0.8 – 1.2
  a22           1.8 – 2.2
  a31           2.5 – 3.5
  a32           1.5 – 2.5
  b1            3.6 – 4.4
  b2            11  – 13
  b3            16  – 20
  c1            2.5 – 3.5
  c2            4.5 – 5.5

Applying the procedure for robust optimization with independent parameters outlined in the preceding subsection, we now refer to these ranges of uncertainty to determine the value of each parameter to use in the new linear programming model. In particular, we choose the maximum value of each aij and the minimum value of each bi and cj. The resulting model is shown below:

Maximize Z = 2.5x1 + 4.5x2,

subject to

1.2x1 ≤ 3.6
2.2x2 ≤ 11
3.5x1 + 2.5x2 ≤ 16

and

x1 ≥ 0, x2 ≥ 0.

This model can be solved readily, including by the graphical method. Its optimal solution is x1 = 1 and x2 = 5, with Z = 25 (a total profit of $25,000 per week). Therefore, Wyndor management now can give the wholesale distributor a guarantee that Wyndor can provide the distributor with a minimum of one batch of the special new door (Product 1) and five batches of the special new window (Product 2) for sale per week.
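The worst-case model above can be checked with a few lines of code. The sketch below is our own illustration (a corner-point enumeration for two-variable linear programs; the function name and tolerances are assumptions, not from the text or from any solver): it builds the model from the most conservative parameter values in Table 7.10 and confirms the optimal solution x1 = 1, x2 = 5 with Z = 25.

```python
from itertools import combinations

def corner_point_max(c, A, b):
    """Maximize c[0]*x1 + c[1]*x2 subject to A x <= b and x >= 0 by
    checking every feasible corner point (intersections of the
    constraint boundary lines and the axes)."""
    lines = [(row[0], row[1], rhs) for row, rhs in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # the axes x1 = 0, x2 = 0
    best, best_z = None, float("-inf")
    for (a1, a2, r), (b1, b2, s) in combinations(lines, 2):
        det = a1 * b2 - b1 * a2
        if abs(det) < 1e-9:
            continue  # parallel boundary lines
        x = (r * b2 - s * a2) / det
        y = (a1 * s - b1 * r) / det
        if min(x, y) < -1e-9:
            continue  # violates nonnegativity
        if all(row[0] * x + row[1] * y <= rhs + 1e-9 for row, rhs in zip(A, b)):
            z = c[0] * x + c[1] * y
            if z > best_z:
                best, best_z = (round(x, 6), round(y, 6)), round(z, 6)
    return best, best_z

# Most conservative parameter values from Table 7.10: maximum a_ij,
# minimum b_i and c_j (all constraints in <= form, objective maximized).
c = [2.5, 4.5]
A = [[1.2, 0.0], [0.0, 2.2], [3.5, 2.5]]
b = [3.6, 11.0, 16.0]
print(corner_point_max(c, A, b))  # -> ((1.0, 5.0), 25.0)
```

Because every parameter sits at its least favorable bound, any realization within the ranges of uncertainty leaves this solution feasible.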
Extensions

Although it is straightforward to use robust optimization when the parameters are independent, it frequently is necessary to extend the robust optimization approach to other cases where the final values of some parameters are influenced by the values taken on by other parameters. Two such cases commonly arise.

One case is where the parameters in each column of the model (the coefficients of a single variable or the right-hand sides) are not independent of each other but are independent of the other parameters. For example, the profit per batch of each product (the cj) in the Wyndor problem might be influenced by the production time per batch in each plant (the aij) that is realized when production begins. Therefore, a number of scenarios regarding the values of the coefficients of a single variable need to be considered. Similarly, by shifting some personnel from one plant to another, it might be possible to increase the production time available per week in one plant by decreasing this quantity in another plant. This again could lead to a number of possible scenarios to be considered regarding the different sets of values of the bi. Fortunately, linear programming still can be used to solve the resulting robust optimization model.

The other common case is where the parameters in each row of the model are not independent of each other but are independent of the other parameters. For example, by shifting personnel and equipment in Plant 3 for the Wyndor problem, it might be possible to decrease either a31 or a32 by increasing the other one (and perhaps even change b3 in the process). This would lead to considering a number of scenarios regarding the values of the parameters in that row of the model. Unfortunately, solving the resulting robust optimization model requires using something more complicated than linear programming. We will not delve further into these or other cases.
Selected References 1 and 3 provide details (including even how to apply robust optimization when the original model is something more complicated than a linear programming model). One drawback of the robust optimization approach is that it can be extremely conservative in tightening the model far more than is realistically necessary. This is especially true when dealing with large models with hundreds or thousands (perhaps even millions) of parameters. However, Selected Reference 4 provides a good way of largely overcoming this drawback when the uncertain parameters are some of the aij and either all these parameters are independent or the only dependencies are within single columns of aij. The basic idea is to recognize that the random variations from the estimated values of the uncertain aij shouldn't result in every variation going fully in the direction of making it more difficult to achieve feasibility. Some of the variations will be negligible (or even zero), some will go in the direction of making it easier to achieve feasibility, and only some will go very far in the opposite direction. Therefore, it should be safe to assume that only a modest number of the parameters will go strongly in the direction of making it more difficult to achieve feasibility. Doing so will still lead to a feasible solution with very high probability. Being able to choose this modest number also provides the flexibility to achieve the desired trade-off between obtaining a very good solution and virtually ensuring that this solution will turn out to be feasible when the solution is implemented.

268 CHAPTER 7 LINEAR PROGRAMMING UNDER UNCERTAINTY

■ 7.5 CHANCE CONSTRAINTS

The parameters of a linear programming model typically remain uncertain until the actual values of these parameters can be observed at some later time when the adopted solution is implemented for the first time.
The preceding section describes how robust optimization deals with this uncertainty by revising the values of the parameters in the model to ensure that the resulting solution actually will be feasible when it finally is implemented. This involves identifying an upper and lower bound on the possible value of each uncertain parameter. The estimated value of the parameter then is replaced by whichever of these two bounds makes it more difficult to achieve feasibility. This is a useful approach when dealing with hard constraints, i.e., those constraints that must be satisfied. However, it does have certain shortcomings. One is that it might not be possible to accurately identify an upper and lower bound for an uncertain parameter. In fact, it might not even have an upper and lower bound. This is the case, for example, when the underlying probability distribution for a parameter is a normal distribution, which has long tails with no bounds. A related shortcoming is that when the underlying probability distribution has long tails with no bounds, the tendency would be to assign values to the bounds that are so wide that they would lead to overly conservative solutions. Chance constraints are designed largely to deal with parameters whose distribution has long tails with no bounds. For simplicity, we will deal with the relatively straightforward case where the only uncertain parameters are the right-hand sides (the bi), where these bi are independent random variables with a normal distribution. We will denote the mean and standard deviation of this distribution for each bi by μi and σi, respectively. To be specific, we also assume that all the functional constraints are in ≤ form. (The ≥ form would be treated similarly, but chance constraints aren't applicable when the original constraint is in = form.)
The Form of a Chance Constraint

When the original constraint is

     n
     Σ aij xj ≤ bi ,
    j=1

the corresponding chance constraint says that we will only require the original constraint to be satisfied with some very high probability. Let

    α = minimum acceptable probability that the original constraint will hold.

In other words, the chance constraint is

       n
    P{ Σ aij xj ≤ bi } ≥ α,
      j=1

which says that the probability that the original constraint will hold must be at least α. It next is possible to replace this chance constraint by an equivalent constraint that is simply a linear programming constraint. In particular, because bi is the only random variable in the chance constraint, where bi is assumed to have a normal distribution, this deterministic equivalent of the chance constraint is

     n
     Σ aij xj ≤ μi + Kα σi ,
    j=1

where Kα is the constant in the table for the normal distribution given in Appendix 5 that gives this probability α. For example, K0.90 = -1.28, K0.95 = -1.645, and K0.99 = -2.33.

■ FIGURE 7.23 The underlying distribution of bi is assumed to have the normal distribution shown here. (In the figure, the shaded left-tail area is 1 - α = 0.05, lying to the left of μi + Kασi; the mean is at μi.)

Thus, if α = 0.95, the deterministic equivalent of the chance constraint becomes

     n
     Σ aij xj ≤ μi - 1.645σi .
    j=1

In other words, if μi corresponds to the original estimated value of bi, then reducing this right-hand side by 1.645σi will ensure that the constraint will be satisfied with probability at least 0.95. (This probability will be exactly 0.95 if this deterministic form holds with equality but will be greater than 0.95 if the left-hand side is less than the right-hand side.) Figure 7.23 illustrates what is going on here. This normal distribution represents the probability density function of the actual value of bi that will be realized when the solution is implemented.
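The constants Kα quoted above come from the standard normal quantile function, Kα = Φ⁻¹(1 − α), rather than needing a printed table. A sketch (added here; SciPy assumed; the table values −1.28 and −2.33 are two-decimal roundings):

```python
# K_alpha is the point with upper-tail probability alpha under the
# standard normal: P(N(0,1) >= K_alpha) = alpha, so K_alpha = ppf(1 - alpha).
from scipy.stats import norm

for alpha in (0.90, 0.95, 0.99):
    print(alpha, round(norm.ppf(1 - alpha), 3))   # -1.282, -1.645, -2.326

# Deterministic equivalent of a chance constraint with b_i ~ N(mu, sigma):
mu, sigma, alpha = 12.0, 0.5, 0.95
adjusted_rhs = mu + norm.ppf(1 - alpha) * sigma   # mu - 1.645 sigma
print(round(adjusted_rhs, 3))                     # 11.178
```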
The cross-hatched area (0.05) on the left side of the figure gives the probability that bi will turn out to be less than μi - 1.645σi, so the probability is 0.95 that bi will be greater than this quantity. Therefore, requiring that the left-hand side of the constraint be ≤ this quantity means that this left-hand side will be less than the final value of bi at least 95 percent of the time.

Example

To illustrate the use of chance constraints, we return to the original version of the Wyndor Glass Co. problem and its model as formulated in Sec. 3.1. Suppose now that there is some uncertainty about how much production time will be available for the two new products when their production begins in the three plants a little later. Therefore, b1, b2, and b3 now are uncertain parameters (random variables) in the model. Assuming that these parameters have a normal distribution, the first step is to estimate the mean and standard deviation for each one. Table 3.1 gives the original estimate for how much production time will be available per week in the three plants, so these quantities can be taken to be the mean if they still seem to be the most likely available production times. The standard deviation provides a measure of how much the actual production time available might deviate from this mean. In particular, the normal distribution has the property that approximately two-thirds of the distribution lies within one standard deviation of the mean. Therefore, a good way to estimate the standard deviation of each bi is to ask how much the actual available production time could turn out to deviate from the mean such that there is a 2-in-3 chance that the deviation will not be larger than this. Another important step is to select an appropriate value of α as defined above. This choice depends on how serious it would be if an original constraint ends up being violated when the solution is implemented.
How difficult would it be to make the necessary adjustments if this were to happen? When dealing with soft constraints that actually can be violated a little bit without very serious complications, a value of approximately α = 0.95 would be a common choice and that is what we will use in this example. (We will discuss the case of hard constraints in the next subsection.) Table 7.11 shows the estimates of the mean and standard deviation of each bi for this example. The last two columns also show the original right-hand side (RHS) and the adjusted right-hand side for each of the three functional constraints.

■ TABLE 7.11 The data for the example of using chance constraints to adjust the Wyndor Glass Co. model

Parameter    Mean    Standard Deviation    Original RHS    Adjusted RHS
b1            4           0.2                   4           4 - 1.645(0.2) = 3.671
b2           12           0.5                  12           12 - 1.645(0.5) = 11.178
b3           18           1                    18           18 - 1.645(1) = 16.355

Using the data in Table 7.11 to replace the three chance constraints by their deterministic equivalents leads to the following linear programming model:

Maximize Z = 3x1 + 5x2,

subject to

x1 ≤ 3.671
2x2 ≤ 11.178
3x1 + 2x2 ≤ 16.355

and

x1 ≥ 0, x2 ≥ 0.

Its optimal solution is x1 = 1.726 and x2 = 5.589, with Z = 33.122 (a total profit of $33,122 per week). This total profit per week is a significant reduction from the $36,000 found for the original version of the Wyndor model. However, by reducing the production rates of the two new products from their original values of x1 = 2 and x2 = 6, we now have a high probability that the new production plan actually will be feasible without needing to make any adjustments when production gets under way. We can estimate this high probability if we assume not only that the three bi have normal distributions but also that these three distributions are statistically independent. The new production plan will turn out to be feasible if all three of the original functional constraints are satisfied.
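Both the optimal solution just quoted and this feasibility probability can be verified numerically. The sketch below (added here; SciPy assumed) solves the deterministic-equivalent model and then multiplies the per-constraint probabilities P(bi ≥ left-hand side) under the independence assumption:

```python
# Solve the deterministic-equivalent Wyndor model of Table 7.11 and
# estimate the probability that the resulting plan stays feasible.
from scipy.optimize import linprog
from scipy.stats import norm

means = [4.0, 12.0, 18.0]
sigmas = [0.2, 0.5, 1.0]
K = norm.ppf(0.05)                                   # -1.645 for alpha = 0.95
b_adj = [m + K * s for m, s in zip(means, sigmas)]   # 3.671, 11.178, 16.355

A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
res = linprog([-3.0, -5.0], A_ub=A, b_ub=b_adj, bounds=[(0, None), (0, None)])
x1, x2 = res.x
print(round(x1, 3), round(x2, 3), round(-res.fun, 3))   # 1.726 5.589 33.122

# Probability that all three original constraints hold (independent normals):
lhs = [x1, 2 * x2, 3 * x1 + 2 * x2]
prob = 1.0
for l, m, s in zip(lhs, means, sigmas):
    prob *= 1.0 - norm.cdf((l - m) / s)              # P(b_i >= left-hand side)
print(round(prob, 4))                                # 0.9025
```

The product comes out at essentially (0.95)^2 = 0.9025, since the first constraint is satisfied with probability essentially 1.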
For each of these constraints, the probability is at least 0.95 that it will be satisfied, where the probability will be exactly 0.95 if the deterministic equivalent of the corresponding chance constraint is satisfied with equality by the optimal solution for the linear programming model. Therefore, the probability that all three constraints are satisfied is at least (0.95)^3 = 0.857. However, only the second and third deterministic equivalents are satisfied with equality in this case, so the probability that the first constraint will be satisfied is larger than 0.95. In the best case where this probability is essentially 1, the probability that all three constraints will be satisfied is essentially (0.95)^2 = 0.9025. Consequently, the probability that the new production plan will turn out to be feasible is somewhere between the lower bound of 0.857 and the upper bound of 0.9025. (In this case, x1 = 1.726 is more than 11 standard deviations below 4, the mean of b1, so the probability of satisfying the first constraint is essentially 1, which means that the probability of satisfying all three constraints is essentially 0.9025.)

Dealing with Hard Constraints

Chance constraints are well suited for dealing with soft constraints, i.e., constraints that actually can be violated a little bit without very serious complications. However, they also might have a role to play when dealing with hard constraints, i.e., constraints that must be satisfied. Recall that robust optimization described in the preceding section is especially designed for addressing problems with hard constraints. When bi is the uncertain parameter in a hard constraint, robust optimization begins by estimating the upper bound and the lower bound on bi. However, if the probability distribution of bi has long tails with no bounds, such as with a normal distribution, it becomes impossible to set bounds on bi that will have zero probability of being violated.
Therefore, an attractive alternative approach is to replace such a constraint by a chance constraint with a very high value of α, say, at least 0.99. Since K0.99 = -2.33, this would further reduce the right-hand sides calculated in Table 7.11 to b1 = 3.534, b2 = 10.835, and b3 = 15.67. Although α = 0.99 might seem reasonably safe, there is a hidden danger involved. What we actually want is to have a very high probability that all the original constraints will be satisfied. This probability is somewhat less than the probability that a specific single original constraint will be satisfied and it can be much less if the number of functional constraints is very large. We described in the last paragraph of the preceding subsection how to calculate both a lower bound and an upper bound on the probability that all the original constraints will be satisfied. In particular, if there are M functional constraints with uncertain bi, the lower bound is α^M. After replacing the chance constraints by their deterministic equivalents and solving for the optimal solution for the resulting linear programming problem, the next step is to count the number of these deterministic equivalents that are satisfied with equality by this optimal solution. Denoting this number by N, the upper bound is α^N. Thus,

α^M ≤ Probability that all the constraints will be satisfied ≤ α^N.

When using α = 0.99, these bounds on this probability can be less than desirable if M and N are large. Therefore, for a problem with a large number of uncertain bi, it might be advisable to use a value of α much closer to 1 than 0.99.

Extensions

Thus far, we have only considered the case where the only uncertain parameters are the bi. If the coefficients in the objective function (the cj) also are uncertain parameters, it is quite straightforward to deal with this case as well. In particular, after estimating the probability distribution of each cj, each of these parameters can be replaced by the mean of this distribution.
The quantity to be maximized or minimized then becomes the expected value (in the statistical sense) of the objective function. Furthermore, this expected value is a linear function, so linear programming still can be used to solve the model. The case where the coefficients in the functional constraints (the aij) are uncertain parameters is much more difficult. For each constraint, the deterministic equivalent of the corresponding chance constraint now includes a complicated nonlinear expression. It is not impossible to solve the resulting nonlinear programming model. In fact, LINGO has special features for converting a deterministic model to a chance-constrained model with probabilistic coefficients and then solving it. This can be done with any of the major probability distributions for the parameters of the model.

■ 7.6 STOCHASTIC PROGRAMMING WITH RECOURSE

Stochastic programming provides an important approach to linear programming under uncertainty that (like chance constraints) began being developed as far back as the 1950s and it continues to be widely used today. (By contrast, robust optimization described in Sec. 7.4 only began significant development about the turn of the century.) It addresses linear programming problems where there currently are uncertainties about the data of the problem and about how the situation will evolve when the chosen solution is implemented in the future. It assumes that probability distributions can be estimated for the random variables in the problem and then these distributions are heavily used in the analysis. Chance constraints sometimes are incorporated into the model. The goal often is to optimize the expected value of the objective function over the long run. This approach is quite different from the robust optimization approach described in Sec. 7.4. Robust optimization largely avoids using probability distributions by focusing instead on the worst possible outcomes.
Therefore, it tends to lead to very conservative solutions. Robust optimization is especially designed for dealing with problems with hard constraints (constraints that must be satisfied because there is no latitude for violating the constraint even a little bit). By contrast, stochastic programming seeks solutions that will perform well on the average. There is no effort to play it safe with especially conservative solutions. Thus, stochastic programming is better suited for problems with soft constraints (constraints that actually can be violated a little bit without very serious consequences). If hard constraints are present, it will be important to be able to make last-minute adjustments in the solution being implemented to reach feasibility. Another key feature of stochastic programming is that it commonly addresses problems where some of the decisions can be delayed until later when the experience with the initial decisions has eliminated some or all of the uncertainties in the problem. This is referred to as stochastic programming with recourse because corrective action can be taken later to compensate for any undesirable outcomes with the initial decisions. With a two-stage problem, some decisions are made now in stage 1, more information is obtained, and then additional decisions are made later in stage 2. Multistage problems have multiple stages over time where decisions are made as more information is obtained. This section introduces the basic idea of stochastic programming with recourse for two-stage problems. This idea is illustrated by the following simple version of the Wyndor Glass Co. problem.

Example

The management of the Wyndor Glass Co. now has heard a rumor that a competitor is planning to produce and market a special new product that would compete directly with the company's new 4 × 6 foot double-hung wood-framed window ("product 2").
If this rumor turns out to be true, Wyndor would need to make some changes in the design of product 2 and also reduce its price in order to be competitive. However, if the rumor proves to be false, then no change would be made in product 2 and all the data presented in Table 3.1 of Sec. 3.1 would still apply. Therefore, there now are two alternative scenarios of the future that will affect management's decisions on how to proceed:

Scenario 1: The rumor about the competitor planning a competitive product turns out to be not true, so all the data in Table 3.1 still applies.
Scenario 2: This rumor turns out to be true, so Wyndor will need to modify product 2 and reduce its price.

Table 7.12 shows the new data that will apply under scenario 2.

■ TABLE 7.12 Data for the Wyndor problem under scenario 2

                     Production Time per Batch, Hours
                              Product              Production Times Available
Plant                    1             2           per Week, Hours
  1                      1             0                 4
  2                      0             2                12
  3                      3             6                18
Profit per Batch      $3,000        $1,000

With this in mind, Wyndor management has decided to move ahead soon with producing product 1 but to delay the decision regarding what to do about product 2 until it learns which scenario is occurring. Using a second subscript to indicate the scenario, the relevant decision variables now are

x1 = number of batches of product 1 produced per week,
x21 = number of batches of product 2 produced per week under scenario 1,
x22 = number of batches of the modified product 2 produced per week under scenario 2.

This is a two-stage problem because the production of product 1 will begin right away in stage 1 but the production of some version of product 2 (whichever becomes relevant) will only begin later in stage 2. However, by using stochastic programming with recourse, we can formulate a model and solve now for the optimal value of all three decision variables.
The chosen value of x1 will enable setting up the production facilities to immediately begin production of product 1 at that rate throughout stages 1 and 2. The chosen value of x21 or x22 (whichever becomes relevant) will enable the planning to start regarding the production of some version of product 2 at the indicated rate later in stage 2 when it is learned which scenario is occurring. This small stochastic programming problem only has one probability distribution associated with it, namely, the distribution of which scenario will occur. Based on the information it has been able to acquire, Wyndor management has developed the following estimates:

Probability that scenario 1 will occur = 1/4 = 0.25
Probability that scenario 2 will occur = 3/4 = 0.75

Not knowing which scenario will occur is unfortunate since the optimal solutions under the two scenarios are quite different. In particular, if we knew that scenario 1 definitely will occur, the appropriate model is the original Wyndor linear programming model formulated in Sec. 3.1, which leads to the optimal solution, x1 = 2 and x21 = 6 with Z = 36. On the other hand, if we knew that scenario 2 definitely will occur, then the appropriate model would be the linear programming model,

Maximize Z = 3x1 + x22,

subject to

x1 ≤ 4
2x22 ≤ 12
3x1 + 6x22 ≤ 18

and

x1 ≥ 0, x22 ≥ 0,

which yields its optimal solution, x1 = 4 and x22 = 1 with Z = 13. However, we need to formulate a model that simultaneously considers both scenarios. This model would include all the constraints under either scenario. Given the probabilities of the two scenarios, the expected value (in the statistical sense) of the total profit is calculated by weighting the total profit under each scenario by its probability.
The resulting stochastic programming model is

Maximize Z = 0.25(3x1 + 5x21) + 0.75(3x1 + x22)
           = 3x1 + 1.25x21 + 0.75x22,

subject to

x1 ≤ 4
2x21 ≤ 12
2x22 ≤ 12
3x1 + 2x21 ≤ 18
3x1 + 6x22 ≤ 18

and

x1 ≥ 0, x21 ≥ 0, x22 ≥ 0.

The optimal solution for this model is x1 = 4, x21 = 3, and x22 = 1, with Z = 16.5. In words, the optimal plan is

Produce 4 batches of product 1 per week;
Produce 3 batches of the original version of product 2 per week later only if scenario 1 occurs;
Produce 1 batch of the modified version of product 2 per week later only if scenario 2 occurs.

Note that stochastic programming with recourse has enabled us to find a new optimal plan that is very different from the original plan (produce 2 batches of product 1 per week and 6 batches of the original version of product 2 per week) that was obtained in Sec. 3.1 for the Wyndor problem.

Some Typical Applications

Like the above example, any application of stochastic programming with recourse involves a problem where there are alternative scenarios about what will evolve in the future and this uncertainty affects both immediate decisions and later decisions that are contingent on which scenario is occurring. However, most applications lead to models that are much larger (often vastly larger) than the one above. The example has only two stages, only one decision to be made in stage 1, only two scenarios, and only one decision to be made in stage 2. Many applications must consider a substantial number of possible scenarios, perhaps will have more than two stages, and will require many decisions at each stage. The resulting model might have hundreds or thousands of decision variables and functional constraints. The reasoning though is basically the same as for this tiny example. Stochastic programming with recourse has been widely used for many years. These applications have arisen in a wide variety of areas. We briefly describe a few of these areas of application below.
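The two-stage model formulated above is an ordinary linear program, so it can be checked with any LP code. A sketch (added here; SciPy assumed) with the variables ordered (x1, x21, x22):

```python
# Solve the two-stage stochastic programming model for the Wyndor example.
from scipy.optimize import linprog

c = [-3.0, -1.25, -0.75]          # -(3 x1 + 1.25 x21 + 0.75 x22)
A_ub = [[1, 0, 0],                # x1             <= 4
        [0, 2, 0],                # 2 x21          <= 12
        [0, 0, 2],                # 2 x22          <= 12
        [3, 2, 0],                # 3 x1 + 2 x21   <= 18  (Plant 3, scenario 1)
        [3, 0, 6]]                # 3 x1 + 6 x22   <= 18  (Plant 3, scenario 2)
b_ub = [4, 12, 12, 18, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
x1, x21, x22 = res.x
print(round(x1, 2), round(x21, 2), round(x22, 2), round(-res.fun, 2))
# 4.0 3.0 1.0 16.5
```

This reproduces the plan in the text: x1 = 4, x21 = 3, x22 = 1, with expected weekly profit Z = 16.5 ($16,500).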
Production planning often involves developing a plan for how to allocate various limited resources to the production of various products over a number of time periods into the future. There are some uncertainties about how the future will evolve (demands for the products, resource availabilities, etc.) that can be described in terms of a number of possible scenarios. It is important to take these uncertainties into account for developing the production plan, including the product mix in the next time period. This plan also would make the product mix in subsequent time periods contingent upon the information being obtained about which scenario is occurring. The number of stages for the stochastic programming formulation would equal the number of time periods under consideration. Our next application involves a common marketing decision whenever a company develops a new product. Because of the major advertising and marketing expense required to introduce a new product to a national market, it may be unclear whether the product would be profitable. Therefore, the company's marketing department frequently chooses to try out the product in a test market first before making the decision about whether to go ahead with marketing the product nationally. The first decisions involve the plan (production level, advertising level, etc.) for trying out the product in the test market. Then there are various scenarios regarding how well the product is received in this test market. Based on which scenario occurs, decisions next need to be made about whether to go ahead with the product and, if so, what the plan should be for producing and marketing the product nationally. Based on how well this goes, the next decisions might involve marketing the product internationally. If so, this becomes a three-stage problem for stochastic programming with recourse.
When making a series of risky financial investments, the performance of these investments may depend greatly on how some outside factor (the state of the economy, the strength of a certain sector of the economy, the rise of new competitive companies, etc.) evolves over the lives of these investments. If so, a number of possible scenarios for this evolution need to be considered. Decisions need to be made about how much to invest in the first investment and then, contingent upon the information being obtained about which scenario is occurring, how much to invest (if any) in each of the subsequent investment opportunities. This again fits right in with stochastic programming with recourse over a number of stages. The agricultural industry is one which faces great uncertainty as it approaches each growing season. If the weather is favorable, the season can be very profitable. However, if drought occurs, or there is too much rain, or a flood, or an early frost, etc., the crops can be poor. A number of decisions about the number of acres to devote to each crop need to be made early before anything is known about which weather scenario will occur. Then the weather evolves and the crops (good or poor) need to be harvested, at which point additional decisions need to be made about how much of each crop to sell, how much should be retained as feed for livestock, how much seed to retain for the next season, etc. Therefore, this is a two-stage problem to which stochastic programming with recourse can be applied. As these examples illustrate, when initial decisions need to be made in the face of uncertainty, it can be very helpful to be able to make recourse decisions at a later stage when the uncertainty is gone. These recourse decisions can help compensate for any unfortunate decisions made in the first stage. Stochastic programming is not the only technique that can incorporate recourse into the analysis. Robust optimization (described in Sec.
7.4) also can incorporate recourse. Selected Reference 6 (cited at the end of the chapter) describes how a computer package named ROME (an acronym for Robust Optimization Made Easy) can apply robust optimization with recourse. It also describes examples in the areas of inventory management, project management, and portfolio optimization. Other software packages also are available for such techniques. For example, Analytic Solver Platform for Education in your OR Courseware has some functionality in robust optimization, chance constraints, and stochastic programming with recourse. LINGO also has considerable functionality in these areas. For example, it has special features for converting a deterministic model into a stochastic programming model and then solving it. In fact, LINGO can solve multiperiod stochastic programming problems with an arbitrary sequence of "we make a decision, nature makes a random decision, we make a recourse decision, nature makes another random decision, we make another recourse decision, etc." MPL has some functionality for stochastic programming with recourse as well. Selected Reference 9 also provides information on solving very large applications of stochastic programming with recourse.

■ 7.7 CONCLUSIONS

The values used for the parameters of a linear programming model generally are just estimates. Therefore, sensitivity analysis needs to be performed to investigate what happens if these estimates are wrong. The fundamental insight of Sec. 5.3 provides the key to performing this investigation efficiently. The general objectives are to identify the sensitive parameters that affect the optimal solution, to try to estimate these sensitive parameters more closely, and then to select a solution that remains good over the range of likely values of the sensitive parameters.
Sensitivity analysis also can help guide managerial decisions that affect the values of certain parameters (such as the amounts of the resources to make available for the activities under consideration). These various kinds of sensitivity analysis are an important part of most linear programming studies. With the help of Solver, spreadsheets also provide some useful methods of performing sensitivity analysis. One method is to repeatedly enter changes in one or more parameters of the model into the spreadsheet and then click on the Solve button to see immediately if the optimal solution changes. A second is to use ASPE in your OR Courseware to systematically check on the effect of making a series of changes in one or two parameters of the model. A third is to use the sensitivity report provided by Solver to identify the allowable range for the coefficients in the objective function, the shadow prices for the functional constraints, and the allowable range for each right-hand side over which its shadow price remains valid. (Other software that applies the simplex method, including various software in your OR Courseware, also provides such a sensitivity report upon request.) Some other important techniques also are available for dealing with linear programming problems where there is substantial uncertainty about what the true values of the parameters will turn out to be. For problems that have only hard constraints (constraints that must be satisfied), robust optimization will provide a solution that is virtually guaranteed to be feasible and nearly optimal for all plausible combinations of the actual values for the parameters. When dealing with soft constraints (constraints that actually can be violated a little bit without serious complications), each such constraint can be replaced by a chance constraint that only requires a very high probability that the original constraint will be satisfied. 
Stochastic programming with recourse is designed for dealing with problems where decisions are made over two (or more) stages, so later decisions can use updated information about the values of some of the parameters.

■ SELECTED REFERENCES

1. Ben-Tal, A., L. El Ghaoui, and A. Nemirovski: Robust Optimization, Princeton University Press, Princeton, NJ, 2009.
2. Bertsimas, D., D. B. Brown, and C. Caramanis: "Theory and Applications of Robust Optimization," SIAM Review, 53(3): 464–501, 2011.
3. Bertsimas, D., and M. Sim: "The Price of Robustness," Operations Research, 52(1): 35–53, January-February 2004.
4. Birge, J. R., and F. Louveaux: Introduction to Stochastic Programming, 2nd ed., Springer, New York, 2011.
5. Gal, T., and H. Greenberg (eds.): Advances in Sensitivity Analysis and Parametric Analysis, Kluwer Academic Publishers (now Springer), Boston, MA, 1997.
6. Goh, J., and M. Sim: "Robust Optimization Made Easy with ROME," Operations Research, 59(4): 973–985, July-August 2011.
7. Higle, J. L., and S. W. Wallace: "Sensitivity Analysis and Uncertainty in Linear Programming," Interfaces, 33(4): 53–60, July-August 2003.
8. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 5.
9. Infanger, G.: Planning Under Uncertainty: Solving Large-Scale Stochastic Linear Programs, Boyd and Fraser, New York, 1994.
10. Infanger, G. (ed.): Stochastic Programming: The State of the Art in Honor of George B. Dantzig, Springer, New York, 2011.
11. Kall, P., and J. Mayer: Stochastic Linear Programming: Models, Theory, and Computation, 2nd ed., Springer, New York, 2011.
12. Sen, S., and J. L. Higle: "An Introductory Tutorial on Stochastic Linear Programming Models," Interfaces, 29(2): 33–61, March-April 1999.
■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 7

A Demonstration Example in OR Tutor:
Sensitivity Analysis

Interactive Procedures in IOR Tutorial:
Interactive Graphical Method
Enter or Revise a General Linear Programming Model
Solve Interactively by the Simplex Method
Sensitivity Analysis

Automatic Procedures in IOR Tutorial:
Solve Automatically by the Simplex Method
Graphical Method and Sensitivity Analysis

Excel Add-In:
Analytic Solver Platform for Education (ASPE)

Files (Chapter 3) for Solving the Wyndor Example:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 7

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The demonstration example just listed may be helpful.
I: We suggest that you use the corresponding interactive procedure just listed (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem automatically.
E*: Use Excel, perhaps including the ASPE add-in.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

7.1-1.* Consider the following problem.

Maximize Z = 3x1 + x2 + 4x3,

subject to

6x1 + 3x2 + 5x3 ≤ 25
3x1 + 4x2 + 5x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

The corresponding final set of equations yielding the optimal solution is

(0) Z + 2x2 + (1/5)x4 + (3/5)x5 = 17
(1) x1 - (1/3)x2 + (1/3)x4 - (1/3)x5 = 5/3
(2) x2 + x3 - (1/5)x4 + (2/5)x5 = 3.

(a) Identify the optimal solution from this set of equations.
(b) Construct the dual problem.
I (c) Identify the optimal solution for the dual problem from the final set of equations. Verify this solution by solving the dual problem graphically.
(d) Suppose that the original problem is changed to

Maximize Z = 3x1 + 3x2 + 4x3,

subject to

6x1 + 2x2 + 5x3 ≤ 25
3x1 + 3x2 + 5x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Use duality theory to determine whether the previous optimal solution is still optimal.
(e) Use the fundamental insight presented in Sec. 5.3 to identify the new coefficients of x2 in the final set of equations after it has been adjusted for the changes in the original problem given in part (d).
(f) Now suppose that the only change in the original problem is that a new variable xnew has been introduced into the model as follows:

Maximize Z = 3x1 + x2 + 4x3 + 2xnew,

subject to

6x1 + 3x2 + 5x3 + 3xnew ≤ 25
3x1 + 4x2 + 5x3 + 2xnew ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, xnew ≥ 0.

Use duality theory to determine whether the previous optimal solution, along with xnew = 0, is still optimal.
(g) Use the fundamental insight presented in Sec. 5.3 to identify the coefficients of xnew as a nonbasic variable in the final set of equations resulting from the introduction of xnew into the original model as shown in part (f).

D,I 7.1-2. Reconsider the model of Prob. 7.1-1. You are now to conduct sensitivity analysis by independently investigating each of the following six changes in the original model. For each change, use the sensitivity analysis procedure to revise the given final set of equations (in tableau form) and convert it to proper form from Gaussian elimination. Then test this solution for feasibility and for optimality. (Do not reoptimize.)
(a) Change the right-hand side of constraint 1 to b1 = 10.
(b) Change the right-hand side of constraint 2 to b2 = 10.
(c) Change the coefficient of x2 in the objective function to c2 = 3.
(d) Change the coefficient of x3 in the objective function to c3 = 2.
(e) Change the coefficient of x2 in constraint 2 to a22 = 2.
(f) Change the coefficient of x1 in constraint 1 to a11 = 8.

D,I 7.1-3. Consider the following problem.

Minimize W = 5y1 + 4y2,

subject to

4y1 + 3y2 ≥ 4
2y1 + y2 ≥ 3
y1 + 2y2 ≥ 1
y1 + y2 ≥ 2

and

y1 ≥ 0, y2 ≥ 0.

Because this primal problem has more functional constraints than variables, suppose that the simplex method has been applied directly to its dual problem. If we let x5 and x6 denote the slack variables for this dual problem, the resulting final simplex tableau is

Basic              Coefficient of:                Right
Variable  Eq.   Z |  x1   x2   x3   x4   x5   x6 | Side
Z         (0)   1 |   3    0    2    0    1    1 |   9
x2        (1)   0 |   1    1   -1    0    1   -1 |   1
x4        (2)   0 |   2    0    3    1   -1    2 |   3

For each of the following independent changes in the original primal model, you now are to conduct sensitivity analysis by directly investigating the effect on the dual problem and then inferring the complementary effect on the primal problem. For each change, apply the procedure for sensitivity analysis summarized at the end of Sec. 7.1 to the dual problem (do not reoptimize), and then give your conclusions as to whether the current basic solution for the primal problem still is feasible and whether it still is optimal. Then check your conclusions by a direct graphical analysis of the primal problem.
(a) Change the objective function to W = 3y1 + 5y2.
(b) Change the right-hand sides of the functional constraints to 3, 5, 2, and 3, respectively.
(c) Change the first constraint to 2y1 + 4y2 ≥ 7.
(d) Change the second constraint to 5y1 + 2y2 ≥ 10.

7.2-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 7.2. Briefly describe how sensitivity analysis was applied in this study. Then list the various financial and nonfinancial benefits that resulted from the study.

D,I 7.2-2.* Consider the following problem.

Maximize Z = -5x1 + 5x2 + 13x3,

subject to

-x1 + x2 + 3x3 ≤ 20
12x1 + 4x2 + 10x3 ≤ 90

and

xj ≥ 0 (j = 1, 2, 3).

If we let x4 and x5 be the slack variables for the respective constraints, the simplex method yields the following final set of equations:

(0) Z + 2x3 + 5x4 = 100.
(1) -x1 + x2 + 3x3 + x4 = 20.
(2) 16x1 - 2x3 - 4x4 + x5 = 10.

Now you are to conduct sensitivity analysis by independently investigating each of the following nine changes in the original model. For each change, use the sensitivity analysis procedure to revise this set of equations (in tableau form) and convert it to proper form from Gaussian elimination for identifying and evaluating the current basic solution. Then test this solution for feasibility and for optimality. (Do not reoptimize.)
(a) Change the right-hand side of constraint 1 to b1 = 30.
(b) Change the right-hand side of constraint 2 to b2 = 70.
(c) Change the right-hand sides to b1 = 10, b2 = 100.
(d) Change the coefficient of x3 in the objective function to c3 = 8.
(e) Change the coefficients of x1 to c1 = -2, a11 = 0, a21 = 5.
(f) Change the coefficients of x2 to c2 = 6, a12 = 2, a22 = 5.
(g) Introduce a new variable x6 with coefficients c6 = 10, a16 = 3, a26 = 5.
(h) Introduce a new constraint 2x1 + 3x2 + 5x3 ≤ 50. (Denote its slack variable by x6.)
(i) Change constraint 2 to 10x1 + 5x2 + 10x3 ≤ 100.

7.2-3.* Reconsider the model of Prob. 7.2-2. Suppose that the right-hand sides of the functional constraints are changed to 20 + 2θ (for constraint 1) and 90 - θ (for constraint 2), where θ can be assigned any positive or negative values. Express the basic solution (and Z) corresponding to the original optimal solution as a function of θ. Determine the lower and upper bounds on θ before this solution would become infeasible.

D,I 7.2-4. Consider the following problem.

Maximize Z = 2x1 + 7x2 - 3x3,

subject to

x1 + 3x2 + 4x3 ≤ 30
x1 + 4x2 - x3 ≤ 10

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

By letting x4 and x5 be the slack variables for the respective constraints, the simplex method yields the following final set of equations:

(0) Z + x2 + x3 + 2x5 = 20,
(1) -x2 + 5x3 + x4 - x5 = 20,
(2) x1 + 4x2 - x3 + x5 = 10.

Now you are to conduct sensitivity analysis by independently investigating each of the following seven changes in the original model. For each change, use the sensitivity analysis procedure to revise this set of equations (in tableau form) and convert it to proper form from Gaussian elimination for identifying and evaluating the current basic solution. Then test this solution for feasibility and for optimality. If either test fails, reoptimize to find a new optimal solution.
(a) Change the right-hand sides to b1 = 20, b2 = 30.
(b) Change the coefficients of x3 to c3 = -2, a13 = 3, a23 = -2.
(c) Change the coefficients of x1 to c1 = 4, a11 = 3, a21 = 2.
(d) Introduce a new variable x6 with coefficients c6 = 3, a16 = 1, a26 = 2.
(e) Change the objective function to Z = x1 + 5x2 - 2x3.
(f) Introduce a new constraint 3x1 + 2x2 + 3x3 ≤ 25.
(g) Change constraint 2 to x1 + 2x2 - 2x3 ≤ 35.

7.2-5. Reconsider the model of Prob. 7.2-4. Suppose that the right-hand sides of the functional constraints are changed to 30 + 3θ (for constraint 1) and 10 - θ (for constraint 2), where θ can be assigned any positive or negative values. Express the basic solution (and Z) corresponding to the original optimal solution as a function of θ. Determine the lower and upper bounds on θ before this solution would become infeasible.

D,I 7.2-6. Consider the following problem.

Maximize Z = 2x1 - x2 + x3,

subject to

3x1 - 2x2 + 2x3 ≤ 15
-x1 + x2 + x3 ≤ 3
x1 - x2 + x3 ≤ 4

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

If we let x4, x5, and x6 be the slack variables for the respective constraints, the simplex method yields the following final set of equations:

(0) Z + 2x3 + x4 + x5 = 18,
(1) x2 + 5x3 + x4 + 3x5 = 24,
(2) 2x3 + x5 + x6 = 7,
(3) x1 + 4x3 + x4 + 2x5 = 21.

Now you are to conduct sensitivity analysis by independently investigating each of the following eight changes in the original model. For each change, use the sensitivity analysis procedure to revise this set of equations (in tableau form) and convert it to proper form from Gaussian elimination for identifying and evaluating the current basic solution. Then test this solution for feasibility and for optimality. If either test fails, reoptimize to find a new optimal solution.
(a) Change the right-hand sides to b1 = 10, b2 = 4, b3 = 2.
(b) Change the coefficient of x3 in the objective function to c3 = 2.
(c) Change the coefficient of x1 in the objective function to c1 = 3.
(d) Change the coefficients of x3 to c3 = 4, a13 = 3, a23 = 2, a33 = 1.
(e) Change the coefficients of x1 and x2 to c1 = 1, a11 = 1, a21 = -2, a31 = 3 and c2 = -2, a12 = -2, a22 = 3, a32 = -2, respectively.
(f) Change the objective function to Z = 5x1 - x2 + 3x3.
(g) Change constraint 1 to 2x1 - x2 + 4x3 ≤ 12.
(h) Introduce a new constraint 2x1 + x2 + 3x3 ≤ 60.

C 7.2-7. Consider the Distribution Unlimited Co. problem presented in Sec. 3.4 and summarized in Fig. 3.13. Although Fig. 3.13 gives estimated unit costs for shipping through the various shipping lanes, there actually is some uncertainty about what these unit costs will turn out to be. Therefore, before adopting the optimal solution given at the end of Sec. 3.4, management wants additional information about the effect of inaccuracies in estimating these unit costs. Use a computer package based on the simplex method to generate sensitivity analysis information preparatory to addressing the following questions.
(a) Which of the unit shipping costs given in Fig. 3.13 has the smallest margin for error without invalidating the optimal solution given in Sec. 3.4? Where should the greatest effort be placed in estimating the unit shipping costs?
(b) What is the allowable range for each of the unit shipping costs?
(c) How should these allowable ranges be interpreted to management?
(d) If the estimates change for more than one of the unit shipping costs, how can you use the generated sensitivity analysis information to determine whether the optimal solution might change?

7.2-8. Consider the following problem.

Maximize Z = c1x1 + c2x2,

subject to

2x1 + x2 ≤ b1
x1 + x2 ≤ b2

and

x1 ≥ 0, x2 ≥ 0.

Let x3 and x4 denote the slack variables for the respective functional constraints. When c1 = 3, c2 = 2, b1 = 30, and b2 = 10, the simplex method yields the following final simplex tableau.

Basic              Coefficient of:      Right
Variable  Eq.   Z |  x1   x2   x3   x4 | Side
Z         (0)   1 |   0    0    1    1 |  40
x2        (1)   0 |   0    1   -1    2 |  10
x1        (2)   0 |   1    0    1   -1 |  20

I (a) Use graphical analysis to determine the allowable range for c1 and c2.
(b) Use algebraic analysis to derive and verify your answers in part (a).
I (c) Use graphical analysis to determine the allowable range for b1 and b2.
(d) Use algebraic analysis to derive and verify your answers in part (c).
C (e) Use a software package based on the simplex method to find these allowable ranges.

7.2-9. Consider Variation 5 of the Wyndor Glass Co. model (see Fig. 7.5 and Table 7.8), where the changes in the parameter values given in Table 7.5 are c2 = 3, a22 = 3, and a32 = 4. Use the formula b* = S*b to find the allowable range for each bi. Then interpret each allowable range graphically.

I 7.2-10. Consider Variation 5 of the Wyndor Glass Co. model (see Fig. 7.5 and Table 7.8), where the changes in the parameter values given in Table 7.5 are c2 = 3, a22 = 3, and a32 = 4. Verify both algebraically and graphically that the allowable range for c1 is c1 ≥ 9/4.

I 7.2-11. For the problem given in Table 7.5, find the allowable range for c2. Show your work algebraically, using the tableau given in Table 7.5. Then justify your answer from a geometric viewpoint, referring to Fig. 7.2.

7.2-12.* For the original Wyndor Glass Co. problem, use the last tableau in Table 4.8 to do the following.
(a) Find the allowable range for each bi.
(b) Find the allowable range for c1 and c2.
C (c) Use a software package based on the simplex method to find these allowable ranges.

7.2-13. For Variation 6 of the Wyndor Glass Co. model presented in Sec. 7.2, use the last tableau in Table 7.9 to do the following.
(a) Find the allowable range for each bi.
(b) Find the allowable range for c1 and c2.
C (c) Use a software package based on the simplex method to find these allowable ranges.

7.2-14. Consider the following problem.

Maximize Z = 2x1 - x2 + 3x3,

subject to

x1 + x2 + x3 = 3
x1 - 2x2 + x3 ≥ 1
2x2 + x3 ≤ 2

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Suppose that the Big M method (see Sec. 4.6) is used to obtain the initial (artificial) BF solution. Let x4 be the artificial variable for the first constraint, x5 the surplus variable for the second constraint, x6 the artificial variable for the second constraint, and x7 the slack variable for the third constraint. The corresponding final set of equations yielding the optimal solution is

(0) Z + 5x2 + (M + 2)x4 + Mx6 + x7 = 8,
(1) x1 - x2 + x4 - x7 = 1,
(2) 2x2 + x3 + x7 = 2,
(3) 3x2 + x4 + x5 - x6 = 2.

Suppose that the original objective function is changed to Z = 2x1 - 3x2 + 4x3 and that the original third constraint is changed to 2x2 + x3 ≤ 1. Use the sensitivity analysis procedure to revise the final set of equations (in tableau form) and convert it to proper form from Gaussian elimination for identifying and evaluating the current basic solution. Then test this solution for feasibility and for optimality. (Do not reoptimize.)

7.3-1. Consider the following problem.

Maximize Z = 2x1 + 5x2,

subject to

x1 + 2x2 ≤ 10 (resource 1)
x1 + 3x2 ≤ 12 (resource 2)

and

x1 ≥ 0, x2 ≥ 0,

where Z measures the profit in dollars from the two activities. While doing sensitivity analysis, you learn that the estimates of the unit profits are accurate only to within 50 percent. In other words, the ranges of likely values for these unit profits are $1 to $3 for activity 1 and $2.50 to $7.50 for activity 2.
E* (a) Formulate a spreadsheet model for this problem based on the original estimates of the unit profits. Then use Solver to find an optimal solution and to generate the sensitivity report.
E* (b) Use the spreadsheet and Solver to check whether this optimal solution remains optimal if the unit profit for activity 1 changes from $2 to $1. From $2 to $3.
E* (c) Also check whether the optimal solution remains optimal if the unit profit for activity 1 still is $2 but the unit profit for activity 2 changes from $5 to $2.50. From $5 to $7.50.
E* (d) Use a parameter analysis report to systematically generate the optimal solution and total profit as the unit profit of activity 1 increases in 20¢ increments from $1 to $3 (without changing the unit profit of activity 2). Then do the same as the unit profit of activity 2 increases in 50¢ increments from $2.50 to $7.50 (without changing the unit profit of activity 1). Use these results to estimate the allowable range for the unit profit of each activity.
I (e) Use the Graphical Method and Sensitivity Analysis procedure in IOR Tutorial to estimate the allowable range for the unit profit of each activity.
E* (f) Use the sensitivity report provided by Solver to find the allowable range for the unit profit of each activity. Then use these ranges to check your results in parts (b)–(e).
E* (g) Use a two-way parameter analysis report to systematically generate the optimal solution as the unit profits of the two activities are changed simultaneously as described in part (d).
I (h) Use the Graphical Method and Sensitivity Analysis procedure in IOR Tutorial to interpret the results in part (g) graphically.

7.3-2. Reconsider the model given in Prob. 7.3-1. While doing sensitivity analysis, you learn that the estimates of the right-hand sides of the two functional constraints are accurate only to within 50 percent.
In other words, the ranges of likely values for these parameters are 5 to 15 for the first right-hand side and 6 to 18 for the second right-hand side.
E* (a) After solving the original spreadsheet model, determine the shadow price for the first functional constraint by increasing its right-hand side by 1 and solving again.
E* (b) Use a parameter analysis report to generate the optimal solution and total profit as the right-hand side of the first functional constraint is incremented by 1 from 5 to 15. Use this report to estimate the allowable range for this right-hand side, i.e., the range over which the shadow price obtained in part (a) is valid.
E* (c) Repeat part (a) for the second functional constraint.
E* (d) Repeat part (b) for the second functional constraint where its right-hand side is incremented by 1 from 6 to 18.
E* (e) Use Solver's sensitivity report to determine the shadow price for each functional constraint and the allowable range for the right-hand side of each of these constraints.

7.3-3. Consider the following problem.

Maximize Z = x1 + 2x2,

subject to

x1 + 3x2 ≤ 8 (resource 1)
x1 + x2 ≤ 4 (resource 2)

and

x1 ≥ 0, x2 ≥ 0,

where Z measures the profit in dollars from the two activities and the right-hand sides are the number of units available of the respective resources.
I (a) Use the graphical method to solve this model.
I (b) Use graphical analysis to determine the shadow price for each of these resources by solving again after increasing the amount of the resource available by 1.
E* (c) Use the spreadsheet model and Solver instead to do parts (a) and (b).
E* (d) For each resource in turn, use a parameter analysis report to systematically generate the optimal solution and the total profit when the only change is that the amount of that resource available increases in increments of 1 from 4 less than the original value up to 6 more than the current value. Use these results to estimate the allowable range for the amount available of each resource.
E* (e) Use Solver's sensitivity report to obtain the shadow prices. Also use this report to find the range for the amount of each resource available over which the corresponding shadow price remains valid.
(f) Describe why these shadow prices are useful when management has the flexibility to change the amounts of the resources being made available.

7.3-4.* One of the products of the G.A. Tanner Company is a special kind of toy that provides an estimated unit profit of $3. Because of a large demand for this toy, management would like to increase its production rate from the current level of 1,000 per day. However, a limited supply of two subassemblies (A and B) from vendors makes this difficult. Each toy requires two subassemblies of type A, but the vendor providing these subassemblies would only be able to increase its supply rate from the current 2,000 per day to a maximum of 3,000 per day. Each toy requires only one subassembly of type B, but the vendor providing these subassemblies would be unable to increase its supply rate above the current level of 1,000 per day. Because no other vendors currently are available to provide these subassemblies, management is considering initiating a new production process internally that would simultaneously produce an equal number of subassemblies of the two types to supplement the supply from the two vendors. It is estimated that the company's cost for producing one subassembly of each type would be $2.50 more than the cost of purchasing these subassemblies from the two vendors. Management wants to determine both the production rate of the toy and the production rate of each pair of subassemblies (one A and one B) that would maximize the total profit. The following table summarizes the data for the problem.

Resource Usage per Unit of Each Activity:

Resource       | Produce Toys | Produce Subassemblies | Amount of Resource Available
Subassembly A  |      2       |          -1           |    3,000
Subassembly B  |      1       |          -1           |    1,000
Unit profit    |     $3       |        -$2.50         |

E* (a) Formulate and solve a spreadsheet model for this problem.
E* (b) Since the stated unit profits for the two activities are only estimates, management wants to know how much each of these estimates can be off before the optimal solution would change. Begin exploring this question for the first activity (producing toys) by using the spreadsheet and Solver to manually generate a table that gives the optimal solution and total profit as the unit profit for this activity increases in 50¢ increments from $2 to $4. What conclusion can be drawn about how much the estimate of this unit profit can differ in each direction from its original value of $3 before the optimal solution would change?
E* (c) Repeat part (b) for the second activity (producing subassemblies) by generating a table as the unit profit for this activity increases in 50¢ increments from -$3.50 to -$1.50 (with the unit profit for the first activity fixed at $3).
E* (d) Use the parameter analysis report to systematically generate all the data requested in parts (b) and (c), except use 25¢ increments instead of 50¢ increments. Use these data to refine your conclusions in parts (b) and (c).
I (e) Use the Graphical Method and Sensitivity Analysis procedure in IOR Tutorial to determine how much the unit profit of each activity can change in either direction (without changing the unit profit of the other activity) before the optimal solution would change. Use this information to specify the allowable range for the unit profit of each activity.
E* (f) Use Solver's sensitivity report to find the allowable range for the unit profit of each activity.
E* (g) Use a two-way parameter analysis report to systematically generate the optimal solution as the unit profits of the two activities are changed simultaneously as described in parts (b) and (c). (h) Use the information provided by Solver’s sensitivity report to describe how far the unit profits of the two activities can change simultaneously before the optimal solution might change. 7.3-5. Reconsider Prob. 7.3-4. After further negotiations with each vendor, management of the G.A. Tanner Co. has learned that either of them would be willing to consider increasing their supply of their respective subassemblies over the previously stated maxima (3,000 subassemblies of type A per day and 1,000 of type B per day) if the company would pay a small premium over the regular price for the extra subassemblies. The size of the premium for each type of subassembly remains to be negotiated. The demand for the toy being produced is sufficiently high so that 2,500 per day could be sold if the supply of subassemblies could be increased enough to support this production rate. Assume that the original estimates of unit profits given in Prob. 7.3-4 are accurate. (a) Formulate and solve a spreadsheet model for this problem with the original maximum supply levels and the additional constraint that no more than 2,500 toys should be produced per day. (b) Without considering the premium, use the spreadsheet and Solver to determine the shadow price for the subassembly A constraint by solving the model again after increasing the maximum supply by 1. Use this shadow price to determine the maximum premium that the company should be willing to pay for each subassembly of this type. (c) Repeat part (b) for the subassembly B constraint. 
(d) Estimate how much the maximum supply of subassemblies of type A could be increased before the shadow price (and the corresponding premium) found in part (b) would no longer be valid by using a parameter analysis report to generate the optimal solution and total profit (excluding the premium) as the maximum supply increases in increments of 100 from 3,000 to 4,000. (e) Repeat part (d) for subassemblies of type B by using a parameter analysis report as the maximum supply increases in increments of 100 from 1,000 to 2,000. E* (f) Use Solver’s sensitivity report to determine the shadow price for each of the subassembly constraints and the allowable range for the right-hand side of each of these constraints. 7.3-6.* Consider the Union Airways problem presented in Sec. 3.4, including the data given in Table 3.19. The Excel files for Chap. 3 include a spreadsheet that shows the formulation and optimal solution for this problem. You are to use this spreadsheet and Solver to do parts (a) to (g) below. Management is about to begin negotiations on a new contract with the union that represents the company’s customer service agents. This might result in some small changes in the daily costs per agent given in Table 3.19 for the various shifts. Several possible changes listed below are being considered separately. In each case, management would like to know whether the change might result in the solution in the spreadsheet no longer being optimal. Answer this question in parts (a) to (e) by using the spreadsheet and Solver directly. If the optimal solution changes, record the new solution. (a) The daily cost per agent for Shift 2 changes from $160 to $165. (b) The daily cost per agent for Shift 4 changes from $180 to $170. (c) The changes in parts (a) and (b) both occur. (d) The daily cost per agent increases by $4 for shifts 2, 4, and 5, but decreases by $4 for shifts 1 and 3. (e) The daily cost per agent increases by 2 percent for each shift. 
(f) Use Solver to generate the sensitivity report for this problem. Suppose that the above changes are being considered later without having the spreadsheet model immediately available on a computer. Show in each case how the sensitivity report can be used to check whether the original optimal solution must still be optimal.
(g) For each of the five shifts in turn, use a parameter analysis report to systematically generate the optimal solution and total cost when the only change is that the daily cost per agent on that shift increases in $3 increments from $15 less than the current cost up to $15 more than the current cost.

E* 7.3-7. Reconsider the Union Airways problem and its spreadsheet model that was dealt with in Prob. 7.3-6. Management now is considering increasing the level of service provided to customers by increasing one or more of the numbers in the rightmost column of Table 3.19 for the minimum number of agents needed in the various time periods. To guide them in making this decision, they would like to know what impact this change would have on total cost. Use Solver to generate the sensitivity report in preparation for addressing the following questions.
(a) Which of the numbers in the rightmost column of Table 3.19 can be increased without increasing total cost? In each case, indicate how much it can be increased (if it is the only one being changed) without increasing total cost.
(b) For each of the other numbers, how much would the total cost increase per increase of 1 in the number? For each answer, indicate how much the number can be increased (if it is the only one being changed) before the answer is no longer valid.
(c) Do your answers in part (b) definitely remain valid if all the numbers considered in part (b) are simultaneously increased by one?
(d) Do your answers in part (b) definitely remain valid if all 10 numbers are simultaneously increased by one?
(e) How far can all 10 numbers be simultaneously increased by the same amount before your answers in part (b) may no longer be valid?

7.3-8. David, LaDeana, and Lydia are the sole partners and workers in a company which produces fine clocks. David and LaDeana each are available to work a maximum of 40 hours per week at the company, while Lydia is available to work a maximum of 20 hours per week. The company makes two different types of clocks: a grandfather clock and a wall clock. To make a clock, David (a mechanical engineer) assembles the inside mechanical parts of the clock while LaDeana (a woodworker) produces the hand-carved wood casings. Lydia is responsible for taking orders and shipping the clocks. The amount of time required for each of these tasks is shown below.

Time Required:

Task                     | Grandfather Clock | Wall Clock
Assemble clock mechanism |      6 hours      |  4 hours
Carve wood casing        |      8 hours      |  4 hours
Shipping                 |      3 hours      |  3 hours

Each grandfather clock built and shipped yields a profit of $300, while each wall clock yields a profit of $200. The three partners now want to determine how many clocks of each type should be produced per week to maximize the total profit.
(a) Formulate a linear programming model in algebraic form for this problem.
I (b) Use the Graphical Method and Sensitivity Analysis procedure in IOR Tutorial to solve the model. Then use this procedure to check if the optimal solution would change if the unit profit for grandfather clocks is changed from $300 to $375 (with no other changes in the model). Then check if the optimal solution would change if, in addition to this change in the unit profit for grandfather clocks, the estimated unit profit for wall clocks also changes from $200 to $175.
E* (c) Formulate and solve this model on a spreadsheet.
E* (d) Use Solver to check the effect of the changes specified in part (b).
E* (e) Use a parameter analysis report to systematically generate the optimal solution and total profit as the unit profit for grandfather clocks is increased in $20 increments from $150 to $450 (with no change in the unit profit for wall clocks). Then do the same as the unit profit for wall clocks is increased in $20 increments from $50 to $350 (with no change in the unit profit for grandfather clocks). Use this information to estimate the allowable range for the unit profit of each type of clock.
E* (f) Use a two-way parameter analysis report to systematically generate the optimal solution as the unit profits for the two types of clocks are changed simultaneously as specified in part (e), except use $50 increments instead of $20 increments.
E* (g) For each of the three partners in turn, use Solver to determine the effect on the optimal solution and the total profit if that partner alone were to increase the maximum number of hours available to work per week by 5 hours.
E* (h) Use a parameter analysis report to systematically generate the optimal solution and the total profit when the only change is that David's maximum number of hours available to work per week changes to each of the following values: 35, 37, 39, 41, 43, 45. Then do the same when the only change is that LaDeana's number changes in the same way. Then do the same when the only change is that Lydia's number changes to each of the following values: 15, 17, 19, 21, 23, 25.
E* (i) Generate Solver's sensitivity report and use it to determine the allowable range for the unit profit for each type of clock and the allowable range for the maximum number of hours each partner is available to work per week.
(j) To increase the total profit, the three partners have agreed that one of them will slightly increase the maximum number of hours available to work per week. The choice of which one will be based on which one would increase the total profit the most. Use the sensitivity report to make this choice.
(Assume no change in the original estimates of the unit profits.)
(k) Explain why one of the shadow prices is equal to zero.
(l) Can the shadow prices in the sensitivity report be validly used to determine the effect if Lydia were to change her maximum number of hours available to work per week from 20 to 25? If so, what would be the increase in the total profit?
(m) Repeat part (l) if, in addition to the change for Lydia, David also were to change his maximum number of hours available to work per week from 40 to 35.
I (n) Use graphical analysis to verify your answer in part (m).

7.4-1. Reconsider the example illustrating the use of robust optimization that was presented in Sec. 7.4. Wyndor management now feels that the analysis described in this example was overly conservative for three reasons: (1) it is unlikely that the true value of a parameter will turn out to be quite near either end of its range of uncertainty shown in Table 7.10, (2) it is even more unlikely that the true values of all the parameters in a constraint will turn out to simultaneously lean toward the undesirable end of their ranges of uncertainty, and (3) there is a bit of latitude in each constraint to compensate for violating the constraint by a tiny bit. Therefore, Wyndor management has asked its staff (you) to solve the model again while using ranges of uncertainty that are half as wide as those shown in Table 7.10.
(a) What is the resulting optimal solution and how much would this increase the total profit per week?
(b) If Wyndor would need to pay a penalty of $5,000 per week to the distributor if the production rates fall below these new guaranteed minimum amounts, should Wyndor use these new guarantees?

285 PROBLEMS

7.4-2. Consider the following problem.

Maximize Z = c1x1 + c2x2,

subject to

a11x1 + a12x2 ≤ b1
a21x1 + a22x2 ≤ b2

and

x1 ≥ 0, x2 ≥ 0.

The estimates and ranges of uncertainty for the parameters are shown in the next table.

Parameter    Estimate    Range of Uncertainty
a11             1            0.9 – 1.1
a12             2            1.6 – 2.4
a21             2            1.8 – 2.2
a22             1            0.8 – 1.2
b1              9            8.5 – 9.5
b2              8            7.6 – 8.4
c1              3            2.7 – 3.3
c2              4            3.6 – 4.4

(a) Use the graphical method to solve this model when using the estimates of the parameters.
(b) Now use robust optimization to formulate a conservative version of this model. Use the graphical method to solve this model. Show the values of Z obtained in parts (a) and (b) and then calculate the percentage change in Z by replacing the original model by the robust optimization model.

7.4-3. Follow the instructions of Prob. 7.4-2 when considering the following problem and the information provided about its parameters in the table below.

Minimize Z = c1x1 + c2x2,

subject to

a11x1 + a12x2 ≥ b1
a21x1 + a22x2 ≥ b2
a31x1 + a32x2 ≥ b3

and

x1 ≥ 0, x2 ≥ 0.

Parameter    Estimate    Range of Uncertainty
a11            10            6 – 12
a12             5            4 – 6
a21            –2           –3 to –1
a22            10            8 – 12
a31             5            4 – 6
a32             5            3 – 8
b1             50           45 – 60
b2             20           15 – 25
b3             30           27 – 32
c1             20           18 – 24
c2             15           12 – 18

C 7.4-4. Consider the following problem.

Maximize Z = 5x1 + c2x2 + c3x3,

subject to

a11x1 + 3x2 + 2x3 ≤ b1
3x1 + a22x2 + x3 ≤ b2
2x1 + 4x2 + a33x3 ≤ 20

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

The estimates and ranges of uncertainty for the uncertain parameters are shown in the next table.

Parameter    Estimate    Range of Uncertainty
a11             4            3.6 – 4.4
a22            –1           –1.4 to –0.6
a33             3            2.5 – 3.5
b1             30           27 – 33
b2             20           19 – 22
c2             –8           –9 to –7
c3              4            3 – 5

(a) Solve this model when using the estimates of the parameters.
(b) Now use robust optimization to formulate a conservative version of this model. Solve this model. Show the values of Z obtained in parts (a) and (b) and then calculate the percentage decrease in Z by replacing the original model by the robust optimization model.
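As a numerical cross-check for the graphical work in Prob. 7.4-2, both versions can be fed to an LP solver: the conservative (robust) version replaces each parameter by the undesirable end of its range of uncertainty, which for ≤ constraints in a maximization problem means the largest aij, the smallest bi, and the smallest cj. A sketch, with scipy assumed (the helper name `solve` is mine):

```python
from scipy.optimize import linprog

def solve(c, A, b):
    # Maximize c.x subject to A x <= b, x >= 0 (negate c since linprog minimizes).
    res = linprog([-v for v in c], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    return res.x, -res.fun

# Part (a): the estimates of the parameters from the table for Prob. 7.4-2.
x_est, z_est = solve([3, 4], [[1, 2], [2, 1]], [9, 8])

# Part (b): the robust (conservative) version, worst end of every range.
x_rob, z_rob = solve([2.7, 3.6], [[1.1, 2.4], [2.2, 1.2]], [8.5, 7.6])

print(z_est, z_rob, 100 * (z_est - z_rob) / z_est)  # percentage change in Z
```

The gap between the two Z values is the price paid for guaranteeing feasibility under every realization of the uncertain parameters.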
286 CHAPTER 7 LINEAR PROGRAMMING UNDER UNCERTAINTY

The concern is that there is some uncertainty about how much production time will be available for Wyndor's two new products when their production begins in the three plants a little later. Table 7.11 shows the initial estimates of the mean and standard deviation of the available production time per week in each of the three plants. Suppose now that a more careful investigation of these available production times has considerably narrowed down the range of what these times might turn out to be with any significant likelihood. In particular, the means in Table 7.11 remain the same but the standard deviations have been cut in half. However, to add more insurance that the original constraints still will hold when production begins, the value of α has been increased to α = 0.99. It is still assumed that the available production time in each plant has a normal distribution.
(a) Use probability expressions to write the three chance constraints. Then show the deterministic equivalents of these chance constraints.
(b) Solve the resulting linear programming model. How much total profit per week would this solution provide to Wyndor? Compare this total profit per week to what was obtained for the example in Sec. 7.5. What is the increase in total profit per week that was enabled by the more careful investigation that cut the standard deviations in half?

7.5-2. Consider the following constraint whose right-hand side b is assumed to have a normal distribution with a mean of 100 and some standard deviation σ.

30x1 + 20x2 ≤ b

A quick investigation of the possible spread of the random variable b has led to the estimate that σ = 10. However, a subsequent more careful investigation has greatly narrowed down this spread, which has led to the refined estimate that σ = 2. After choosing a minimum acceptable probability that the constraint will hold (denoted by α), this constraint will be treated as a chance constraint.
(a) Use a probability expression to write the resulting chance constraint. Then write its deterministic equivalent in terms of σ and Kα.
(b) Prepare a table that compares the value of the right-hand side of this deterministic equivalent for σ = 10 and σ = 2 when using α = 0.9, 0.95, 0.975, 0.99, and 0.99865.

7.5-3. Suppose that a linear programming problem has 20 functional constraints in inequality form such that their right-hand sides (the bi) are uncertain parameters, so chance constraints with some α are introduced in place of these constraints. After next substituting the deterministic equivalents of these chance constraints and solving the resulting new linear programming model, its optimal solution is found to satisfy 10 of these deterministic equivalents with equality, whereas there is some slack in the other 10 deterministic equivalents. Answer the following questions under the assumption that the 20 uncertain bi have mutually independent normal distributions.
(a) When choosing α = 0.95, what are the lower bound and upper bound on the probability that all of these 20 original constraints will turn out to be satisfied by the optimal solution for the new linear programming problem, so this solution actually will be feasible for the original problem?
(b) Now repeat part (a) with α = 0.99.
(c) Suppose that all 20 of these functional constraints are considered to be hard constraints, i.e., constraints that must be satisfied if at all possible. Therefore, the decision maker desires to use a value of α that will guarantee a probability of at least 0.95 that the optimal solution for the new linear programming problem actually will turn out to be feasible for the original problem. Use trial and error to find the smallest value of α (to three significant digits) that will provide the decision maker with the desired guarantee.

7.5-4. Consider the following problem.

Maximize Z = 20x1 + 30x2 + 25x3,

subject to

3x1 + 2x2 + x3 ≤ b1
2x1 + 4x2 + 2x3 ≤ b2
x1 + 3x2 + 5x3 ≤ b3

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0,

where b1, b2, and b3 are uncertain parameters that have mutually independent normal distributions. The mean and standard deviation of these parameters are (90, 3), (150, 6), and (180, 9), respectively.
(a) The proposal has been made to use the solution (x1, x2, x3) = (7, 22, 19). What are the probabilities that the respective functional constraints will be satisfied by this solution?
C (b) Formulate chance constraints for these three functional constraints where α = 0.975 for the first constraint, α = 0.95 for the second constraint, and α = 0.90 for the third constraint. Then determine the deterministic equivalents of the three chance constraints and solve for the optimal solution for the resulting linear programming model.
(c) Calculate the probability that the optimal solution for this new linear programming model will turn out to be feasible for the original problem.

C 7.6-1. Reconsider the example illustrating the use of stochastic programming with recourse that was presented in Sec. 7.6. Wyndor management now has obtained additional information about the rumor that a competitor is planning to produce and market a special new product that would compete directly with Wyndor's product 2. This information suggests that it is less likely that the rumor is true than was originally thought. Therefore, the estimate of the probability that the rumor is true has been reduced to 0.5. Formulate the revised stochastic programming model and solve for its optimal solution. Then describe the corresponding optimal plan in words.

C 7.6-2. The situation is the same as described in Prob. 7.6-1 except that Wyndor management does not consider the additional information about the rumor to be reliable. Therefore, they haven't yet decided whether their best estimate of the probability that the rumor is true should be 0.5 or 0.75 or something in between.
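The table asked for in Prob. 7.5-2(b) above reduces to evaluating μ − Kα σ for each α. The sketch below assumes scipy and uses one standard sign convention for the deterministic equivalent of a chance constraint with an uncertain right-hand side, Kα = Φ⁻¹(α), so the effective right-hand side tightens as α grows:

```python
from scipy.stats import norm

mu = 100  # mean of the uncertain right-hand side b

# Deterministic equivalent of P(30x1 + 20x2 <= b) >= alpha for b ~ N(mu, sigma^2):
#   30x1 + 20x2 <= mu - K_alpha * sigma,  where K_alpha = Phi^{-1}(alpha).
for alpha in (0.9, 0.95, 0.975, 0.99, 0.99865):
    K = norm.ppf(alpha)
    rhs = [mu - K * sigma for sigma in (10, 2)]
    print(f"alpha = {alpha}:  sigma=10 -> {rhs[0]:.2f},  sigma=2 -> {rhs[1]:.2f}")
```

The point of the comparison is visible immediately: cutting σ from 10 to 2 gives back most of the right-hand side that a high α takes away.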
Consequently, they have asked you to find the break-even point for this probability below which the optimal plan presented in Sec. 7.6 will no longer be optimal. Use trial and error to find this break-even point (rounded to two decimal places). What is the new optimal plan if the probability is a little less than this break-even point?

7.6-3. The Royal Cola Company is considering developing a special new carbonated drink to add to its standard product line of drinks for a couple years or so (after which it probably would be replaced by another special drink). However, it is unclear whether the new drink would be profitable, so analysis is needed to determine whether to go ahead with the development of the drink. If so, once the development is completed, the new drink would be marketed in a small regional test market to assess how popular the drink would become. If the test market suggests that the drink should become profitable, it then would be marketed nationally. Here are the relevant data. The cost of developing the drink and then arranging to test it in the test market is estimated to be $40 million. A total budget of $100 million has been allocated to advertising the drink in both the test market and nationally (if it goes national). A minimum of $5 million is needed for advertising in the test market and the maximum allowed for this purpose would be $10 million, which would leave between $90 million and $95 million for national advertising. To simplify the analysis, sales in either the test market or nationally are assumed to be proportional to the level of advertising there (while recognizing that the rate of additional sales would fall off after the amount of advertising reaches a saturation level). Excluding the fixed cost of $40 million, the net profit in the test market is expected to be half the level of advertising.
To further simplify the analysis, the outcome of testing the drink in the test market would fall into just three categories: (1) very favorable, (2) barely favorable, (3) unfavorable. The probabilities of these outcomes are estimated to be 0.25, 0.25, and 0.50, respectively. If the outcome were very favorable, the net profit after going national would be expected to be about twice the level of advertising. The corresponding net profit if the outcome were barely favorable would be about 0.2 times the level of advertising. If the outcome were unfavorable, the drink would be dropped and so would not be marketed nationally. Use stochastic programming with recourse to formulate a model for this problem. Assuming the company should go ahead with developing the drink, solve the model to determine how much advertising should be done in the test market and then how much advertising should be done nationally (if any) under each of the three possible outcomes in the test market. Finally, calculate the expected value (in the statistical sense) of the total net profit from the drink, including the fixed cost, if the company goes ahead with developing the drink, where the company should indeed go ahead only if the expected total net profit is positive.

C 7.6-4. Consider the following problem.

Minimize Z = 5x1 + c2x2,

subject to

3x1 + a12x2 ≥ 60
2x1 + a22x2 ≥ 60

and

x1 ≥ 0, x2 ≥ 0,

where x1 represents the level of activity 1 and x2 represents the level of activity 2. The values of c2, a12, and a22 have not been determined yet. Only activity 1 needs to be undertaken soon whereas activity 2 will be initiated somewhat later. There are different scenarios that could unfold between now and the time activity 2 is undertaken that would lead to different values for c2, a12, and a22. Therefore, the goal is to use all of this information to choose a value for x1 now and to simultaneously determine a plan for choosing a value of x2 later after seeing which scenario has occurred.
Three scenarios are considered plausible possibilities. They are listed below, along with the values of c2, a12, and a22 that would result from each one:

Scenario 1: c2 = 4, a12 = 2, and a22 = 3
Scenario 2: c2 = 6, a12 = 3, and a22 = 4
Scenario 3: c2 = 3, a12 = 2, and a22 = 1

These three scenarios are considered equally likely. Use stochastic programming with recourse to formulate the appropriate model for this problem and then to solve for the optimal plan.

■ CASES

CASE 7.1 Controlling Air Pollution

Refer to Sec. 3.4 (subsection entitled "Controlling Air Pollution") for the Nori & Leets Co. problem. After the OR team obtained an optimal solution, we mentioned that the team then conducted sensitivity analysis. We now continue this story by having you retrace the steps taken by the OR team, after we provide some additional background. The values of the various parameters in the original formulation of the model are given in Tables 3.12, 3.13, and 3.14. Since the company does not have much prior experience with the pollution abatement methods under consideration, the cost estimates given in Table 3.14 are fairly rough, and each one could easily be off by as much as 10 percent in either direction. There also is some uncertainty about the parameter values given in Table 3.13, but less so than for Table 3.14. By contrast, the values in Table 3.12 are policy standards, and so are prescribed constants. However, there still is considerable debate about where to set these policy standards on the required reductions in the emission rates of the various pollutants. The numbers in Table 3.12 actually are preliminary values tentatively agreed upon before learning what the total cost would be to meet these standards. Both the city and company officials agree that the final decision on these policy standards should be based on the trade-off between costs and benefits.
With this in mind, the city has concluded that each 10 percent increase in the policy standards over the current values (all the numbers in Table 3.12) would be worth $3.5 million to the city. Therefore, the city has agreed to reduce the company's tax payments to the city by $3.5 million for each 10 percent increase in the policy standards (up to 50 percent) that is accepted by the company. Finally, there has been some debate about the relative values of the policy standards for the three pollutants. As indicated in Table 3.12, the required reduction for particulates now is less than half of that for either sulfur oxides or hydrocarbons. Some have argued for decreasing this disparity. Others contend that an even greater disparity is justified because sulfur oxides and hydrocarbons cause considerably more damage than particulates. Agreement has been reached that this issue will be reexamined after information is obtained about which trade-offs in policy standards (increasing one while decreasing another) are available without increasing the total cost.
(a) Use any available linear programming software to solve the model for this problem as formulated in Sec. 3.4. In addition to the optimal solution, obtain a sensitivity report for performing postoptimality analysis. This output provides the basis for the following steps.
(b) Ignoring the constraints with no uncertainty about their parameter values (namely, xj ≤ 1 for j = 1, 2, . . . , 6), identify the parameters of the model that should be classified as sensitive parameters. (Hint: See the subsection "Sensitivity Analysis" in Sec. 4.7.) Make a resulting recommendation about which parameters should be estimated more closely, if possible.
(c) Analyze the effect of an inaccuracy in estimating each cost parameter given in Table 3.14. If the true value is 10 percent less than the estimated value, would this alter the optimal solution? Would it change if the true value were 10 percent more than the estimated value?
Make a resulting recommendation about where to focus further work in estimating the cost parameters more closely.
(d) Consider the case where your model has been converted to maximization form before applying the simplex method. Use Table 6.14 to construct the corresponding dual problem, and use the output from applying the simplex method to the primal problem to identify an optimal solution for this dual problem. If the primal problem had been left in minimization form, how would this affect the form of the dual problem and the sign of the optimal dual variables?
(e) For each pollutant, use your results from part (d) to specify the rate at which the total cost of an optimal solution would change with any small change in the required reduction in the annual emission rate of the pollutant. Also specify how much this required reduction can be changed (up or down) without affecting the rate of change in the total cost.
(f) For each unit change in the policy standard for particulates given in Table 3.12, determine the change in the opposite direction for sulfur oxides that would keep the total cost of an optimal solution unchanged. Repeat this for hydrocarbons instead of sulfur oxides. Then do it for a simultaneous and equal change for both sulfur oxides and hydrocarbons in the opposite direction from particulates.

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 7.2 Farm Management

The Ploughman family has owned and operated a 640-acre farm for several generations. The family now needs to make a decision about the mix of livestock and crops for the coming year. By assuming that normal weather conditions will prevail next year, a linear programming model can be formulated and solved to guide this decision. However, adverse weather conditions would harm the crops and greatly reduce the resulting value.
Therefore, considerable postoptimality analysis is needed to explore the effect of several possible scenarios for the weather next year and the implications for the family's decision.

CASE 7.3 Assigning Students to Schools, Revisited

This case is a continuation of Case 4.3, which involved the Springfield School Board assigning students from six residential areas to the city's three remaining middle schools. After solving a linear programming model for the problem with any software package, that package's sensitivity analysis report now needs to be used for two purposes. One is to check on the effect of an increase in certain bussing costs because of ongoing road construction in one of the residential areas. The other is to explore the advisability of adding portable classrooms to increase the capacity of one or more of the middle schools for a few years.

CASE 7.4 Writing a Nontechnical Memo

After setting goals for how much the sales of three products should increase as a result of an upcoming advertising campaign, the management of the Profit & Gambit Co. now wants to explore the trade-off between advertising cost and increased sales. Your first task is to perform the associated sensitivity analysis. Your main task then is to write a nontechnical memo to Profit & Gambit management presenting your results in the language of management.

CHAPTER 8
Other Algorithms for Linear Programming

The key to the extremely widespread use of linear programming is the availability of an exceptionally efficient algorithm—the simplex method—that will routinely solve the large-size problems that typically arise in practice. However, the simplex method is only part of the arsenal of algorithms regularly used by linear programming practitioners. We now turn to these other algorithms. This chapter begins with three algorithms that are, in fact, variants of the simplex method.
In particular, the next three sections introduce the dual simplex method (a modification particularly useful for sensitivity analysis), parametric linear programming (an extension for systematic sensitivity analysis), and the upper bound technique (a streamlined version of the simplex method for dealing with variables having upper bounds). We will not go into the kind of detail with these algorithms that we did with the simplex method in Chaps. 4 and 5. The goal instead will be to briefly introduce their main ideas. Section 4.9 introduced another algorithmic approach to linear programming—a type of algorithm that moves through the interior of the feasible region. We describe this interior-point approach further in Sec. 8.4. A supplement to this chapter on the book's website also introduces linear goal programming. In this case, rather than having a single objective (maximize or minimize Z) as for linear programming, the problem instead has several goals toward which we must strive simultaneously. Certain formulation techniques enable converting a linear goal programming problem back into a linear programming problem so that solution procedures based on the simplex method can still be used. The supplement describes these techniques and procedures.

■ 8.1 THE DUAL SIMPLEX METHOD

The dual simplex method is based on the duality theory presented in Chap. 6. To describe the basic idea behind this method, it is helpful to use some terminology introduced in Tables 6.10 and 6.11 of Sec. 6.3 for describing any pair of complementary basic solutions in the primal and dual problems. In particular, recall that both solutions are said to be primal feasible if the primal basic solution is feasible, whereas they are called dual feasible if the complementary dual basic solution is feasible for the dual problem.
Also recall (as indicated on the right side of Table 6.11) that each complementary basic solution is optimal for its problem only if it is both primal feasible and dual feasible. The dual simplex method can be thought of as the mirror image of the simplex method. The simplex method deals directly with basic solutions in the primal problem that are primal feasible but not dual feasible. It then moves toward an optimal solution by striving to achieve dual feasibility as well (the optimality test for the simplex method). By contrast, the dual simplex method deals with basic solutions in the primal problem that are dual feasible but not primal feasible. It then moves toward an optimal solution by striving to achieve primal feasibility as well. Furthermore, the dual simplex method deals with a problem as if the simplex method were being applied simultaneously to its dual problem. If we make their initial basic solutions complementary, the two methods move in complete sequence, obtaining complementary basic solutions with each iteration. The dual simplex method is very useful in certain special types of situations. Ordinarily it is easier to find an initial basic solution that is feasible than one that is dual feasible. However, it is occasionally necessary to introduce many artificial variables to construct an initial BF solution artificially. In such cases it may be easier to begin with a dual feasible basic solution and use the dual simplex method. Furthermore, fewer iterations may be required when it is not necessary to drive many artificial variables to zero. When dealing with a problem whose initial basic solutions (without artificial variables) are neither primal feasible nor dual feasible, it also is possible to combine the ideas of the simplex method and dual simplex method into a primal-dual algorithm that strives toward both primal feasibility and dual feasibility. As we mentioned several times in Chaps. 6 and 7, as well as in Sec. 
4.7, another important primary application of the dual simplex method is its use in conjunction with sensitivity analysis. Suppose that an optimal solution has been obtained by the simplex method but that it becomes necessary (or of interest for sensitivity analysis) to make minor changes in the model. If the formerly optimal basic solution is no longer primal feasible (but still satisfies the optimality test), you can immediately apply the dual simplex method by starting with this dual feasible basic solution. (We will illustrate this at the end of this section.) Applying the dual simplex method in this way usually leads to the new optimal solution much more quickly than would solving the new problem from the beginning with the simplex method. The dual simplex method also can be useful in solving certain huge linear programming problems from scratch because it is such an efficient algorithm. Computational experience with the most powerful versions of linear programming solvers indicates that the dual simplex method often is more efficient than the simplex method for solving particularly massive problems encountered in practice. The rules for the dual simplex method are very similar to those for the simplex method. In fact, once the methods are started, the only difference between them is in the criteria used for selecting the entering and leaving basic variables and for stopping the algorithm. To start the dual simplex method (for a maximization problem), we must have all the coefficients in Eq. (0) nonnegative (so that the basic solution is dual feasible). The basic solutions will be infeasible (except for the last one) only because some of the variables are negative. The method continues to decrease the value of the objective function, always retaining nonnegative coefficients in Eq. (0), until all the variables are nonnegative. 
Such a basic solution is feasible (it satisfies all the equations) and is, therefore, optimal by the simplex method criterion of nonnegative coefficients in Eq. (0). The details of the dual simplex method are summarized next.

Summary of the Dual Simplex Method

1. Initialization: After converting any functional constraints in ≥ form to ≤ form (by multiplying through both sides by −1), introduce slack variables as needed to construct a set of equations describing the problem. Find a basic solution such that the coefficients in Eq. (0) are zero for basic variables and nonnegative for nonbasic variables (so the solution is optimal if it is feasible). Go to the feasibility test.
2. Feasibility test: Check to see whether all the basic variables are nonnegative. If they are, then this solution is feasible, and therefore optimal, so stop. Otherwise, go to an iteration.
3. Iteration:
   Step 1 Determine the leaving basic variable: Select the negative basic variable that has the largest absolute value.
   Step 2 Determine the entering basic variable: Select the nonbasic variable whose coefficient in Eq. (0) reaches zero first as an increasing multiple of the equation containing the leaving basic variable is added to Eq. (0). This selection is made by checking the nonbasic variables with negative coefficients in that equation (the one containing the leaving basic variable) and selecting the one with the smallest absolute value of the ratio of the Eq. (0) coefficient to the coefficient in that equation.
   Step 3 Determine the new basic solution: Starting from the current set of equations, solve for the basic variables in terms of the nonbasic variables by Gaussian elimination. When we set the nonbasic variables equal to zero, each basic variable (and Z) equals the new right-hand side of the one equation in which it appears (with a coefficient of +1). Return to the feasibility test.
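The three-step summary can be turned into a compact tableau routine. The sketch below is not the book's code: numpy is assumed, the function name and tolerances are mine, and safeguards against degeneracy and cycling are omitted. It simply applies the rules above, here to the Wyndor Glass Co. dual problem that serves as this section's example.

```python
import numpy as np

def dual_simplex(T, basis):
    """Apply the dual simplex method to tableau T.

    Row 0 of T is Eq. (0) (coefficients all nonnegative at the start, i.e.,
    dual feasible); the remaining rows are the constraint equations; the
    last column is the right-hand side. `basis` lists the basic variable
    (column index) of each constraint row.
    """
    while True:
        rhs = T[1:, -1]
        if np.all(rhs >= -1e-9):      # feasibility test: all basic vars >= 0?
            return T, basis           # feasible, hence optimal
        # Step 1: leaving variable = negative basic var largest in absolute value.
        r = 1 + int(np.argmin(rhs))
        # Step 2: entering variable by the ratio test over negative coefficients.
        cols = [j for j in range(T.shape[1] - 1) if T[r, j] < -1e-9]
        if not cols:
            raise ValueError("no negative coefficient in pivot row: infeasible problem")
        j = min(cols, key=lambda k: abs(T[0, k] / T[r, k]))
        # Step 3: Gaussian elimination on the pivot element.
        T[r] /= T[r, j]
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r - 1] = j

# Wyndor dual problem (columns y1..y5, then the right side), as in Table 8.1.
T = np.array([[4., 12., 18., 0., 0., 0.],    # Eq. (0): Z + 4y1 + 12y2 + 18y3 = 0
              [-1., 0., -3., 1., 0., -3.],   # -y1 - 3y3 + y4 = -3
              [0., -2., -2., 0., 1., -5.]])  # -2y2 - 2y3 + y5 = -5
T, basis = dual_simplex(T, [3, 4])           # initial basic variables: y4, y5
print(basis, T[1:, -1], T[0, -1])  # basic vars (y3, y2), their values, and Z = -36
```

Tracing the two pivots it performs reproduces the iterations shown in Table 8.1.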
To fully understand the dual simplex method, you must realize that the method proceeds just as if the simplex method were being applied to the complementary basic solutions in the dual problem. (In fact, this interpretation was the motivation for constructing the method as it is.) Step 1 of an iteration, determining the leaving basic variable, is equivalent to determining the entering basic variable in the dual problem. The negative variable with the largest absolute value corresponds to the negative coefficient with the largest absolute value in Eq. (0) of the dual problem (see Table 6.3). Step 2, determining the entering basic variable, is equivalent to determining the leaving basic variable in the dual problem. The coefficient in Eq. (0) that reaches zero first corresponds to the variable in the dual problem that reaches zero first. The two criteria for stopping the algorithm are also complementary.

An Example

We shall now illustrate the dual simplex method by applying it to the dual problem for the Wyndor Glass Co. (see Table 6.1). Normally this method is applied directly to the problem of concern (a primal problem). However, we have chosen this problem because you have already seen the simplex method applied to its dual problem (namely, the primal problem¹) in Table 4.8 so you can compare the two. To facilitate the comparison, we shall continue to denote the decision variables in the problem being solved by yi rather than xj. In maximization form, the problem to be solved is

Maximize Z = −4y1 − 12y2 − 18y3,

subject to

y1 + 3y3 ≥ 3
2y2 + 2y3 ≥ 5

and

y1 ≥ 0, y2 ≥ 0, y3 ≥ 0.

¹Recall that the symmetry property in Sec. 6.1 points out that the dual of a dual problem is the original primal problem.

■ TABLE 8.1 Dual simplex method applied to the Wyndor Glass Co. dual problem

                               Coefficient of:
Iteration  Basic     Eq.   Z    y1     y2    y3     y4     y5     Right Side
           Variable
0          Z         (0)   1     4     12    18      0      0         0
           y4        (1)   0    −1      0    −3      1      0        −3
           y5        (2)   0     0     −2    −2      0      1        −5

1          Z         (0)   1     4      0     6      0      6       −30
           y4        (1)   0    −1      0    −3      1      0        −3
           y2        (2)   0     0      1     1      0    −1/2       5/2

2          Z         (0)   1     2      0     0      2      6       −36
           y3        (1)   0    1/3     0     1    −1/3     0         1
           y2        (2)   0   −1/3     1     0     1/3   −1/2       3/2

Since negative right-hand sides are now allowed, we do not need to introduce artificial variables to be the initial basic variables. Instead, we simply convert the functional constraints to ≤ form and introduce slack variables to play this role. The resulting initial set of equations is that shown for iteration 0 in Table 8.1. Notice that all the coefficients in Eq. (0) are nonnegative, so the solution is optimal if it is feasible. The initial basic solution is y1 = 0, y2 = 0, y3 = 0, y4 = −3, y5 = −5, with Z = 0, which is not feasible because of the negative values. The leaving basic variable is y5 (5 > 3), and the entering basic variable is y2 (12/2 < 18/2), which leads to the second set of equations, labeled as iteration 1 in Table 8.1. The corresponding basic solution is y1 = 0, y2 = 5/2, y3 = 0, y4 = −3, y5 = 0, with Z = −30, which is not feasible. The next leaving basic variable is y4, and the entering basic variable is y3 (6/3 < 4/1), which leads to the final set of equations in Table 8.1. The corresponding basic solution is y1 = 0, y2 = 3/2, y3 = 1, y4 = 0, y5 = 0, with Z = −36, which is feasible and therefore optimal. Notice that the optimal solution for the dual of this problem² is x1* = 2, x2* = 6, x3* = 2, x4* = 0, x5* = 0, as was obtained in Table 4.8 by the simplex method. We suggest that you now trace through Tables 8.1 and 4.8 simultaneously and compare the complementary steps for the two mirror-image methods. As mentioned earlier, an important primary application of the dual simplex method is that it frequently can be used to quickly re-solve a problem when sensitivity analysis results in making small changes in the original model.
In particular, if the formerly optimal basic solution is no longer primal feasible (one or more right-hand sides now are negative) but still satisfies the optimality test (no negative coefficients in Row 0), you can immediately apply the dual simplex method by starting with this dual feasible basic solution. For example, this situation arises when a new constraint that violates the formerly optimal solution is added to the original model. To illustrate, suppose that the problem solved in Table 8.1 originally did not include its first functional constraint (y1 + 3y3 ≥ 3).

²The complementary optimal basic solutions property presented in Sec. 6.3 indicates how to read the optimal solution for the dual problem from row 0 of the final simplex tableau for the primal problem. This same conclusion holds regardless of whether the simplex method or the dual simplex method is used to obtain the final tableau.

After deleting Row 1, the iteration 1 tableau in Table 8.1 shows that the resulting optimal solution is y1 = 0, y2 = 5/2, y3 = 0, y5 = 0, with Z = −30. Now suppose that sensitivity analysis leads to adding the originally omitted constraint, y1 + 3y3 ≥ 3, which is violated by the original optimal solution since both y1 = 0 and y3 = 0. To find the new optimal solution, this constraint (including its slack variable y4) now would be added as Row 1 of the middle tableau in Table 8.1. Regardless of whether this tableau had been obtained by applying the simplex method or the dual simplex method to obtain the original optimal solution (perhaps after many iterations), applying the dual simplex method to this tableau leads to the new optimal solution in just one iteration.

If you would like to see another example of applying the dual simplex method, one is provided in the Solved Examples section of the book's website.

■ 8.2 PARAMETRIC LINEAR PROGRAMMING

At the end of Sec.
7.2 we mentioned that parametric linear programming provides another useful way of conducting sensitivity analysis systematically by gradually changing various model parameters simultaneously rather than changing them one at a time. We shall now present the algorithmic procedure, first for the case where the cj parameters are being changed and then where the bi parameters are varied.

Systematic Changes in the cj Parameters

For the case where the cj parameters are being changed, the objective function of the ordinary linear programming model

Z = Σ_{j=1}^{n} cj xj

is replaced by

Z(θ) = Σ_{j=1}^{n} (cj + αj θ) xj,

where the αj are given input constants representing the relative rates at which the coefficients are to be changed. Therefore, gradually increasing θ from zero changes the coefficients at these relative rates.

The values assigned to the αj may represent interesting simultaneous changes of the cj for systematic sensitivity analysis of the effect of increasing the magnitude of these changes. They may also be based on how the coefficients (e.g., unit profits) would change together with respect to some factor measured by θ. This factor might be uncontrollable, e.g., the state of the economy. However, it may also be under the control of the decision maker, e.g., the amount of personnel and equipment to shift from some of the activities to others.

For any given value of θ, the optimal solution of the corresponding linear programming problem can be obtained by the simplex method. This solution may have been obtained already for the original problem where θ = 0. However, the objective is to find the optimal solution of the modified linear programming problem [maximize Z(θ) subject to the original constraints] as a function of θ. Therefore, in the solution procedure you need to be able to determine when and how the optimal solution changes (if it does) as θ increases from zero to any specified positive number.
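Determining when the optimal solution first changes becomes simple once each Eq. (0) coefficient of a nonbasic variable has been written as a linear function of θ, say constant + rate·θ: the current basis remains optimal while every such coefficient is nonnegative, so the first critical value of θ is a minimum-ratio computation. The sketch below illustrates this one step with exact fractions; the function name is our own, and the sample numbers are the two coefficient expressions that arise in the Wyndor example of this section.

```python
from fractions import Fraction as F

def critical_theta(row0):
    """row0: list of (constant, rate) pairs, one per nonbasic
    variable, giving its Eq. (0) coefficient as constant + rate*theta.
    Returns the largest theta >= 0 for which all coefficients stay
    nonnegative, or None if the basis is optimal for all theta >= 0."""
    bounds = [-c / r for c, r in row0 if r < 0]
    return min(bounds) if bounds else None

# Wyndor example: after introducing Z(theta) = (3 + 2t)x1 + (5 - t)x2,
# the nonbasic coefficients are 3/2 - (7/6)t (for x4) and 1 + (2/3)t (for x5).
print(critical_theta([(F(3, 2), F(-7, 6)), (F(1), F(2, 3))]))  # 9/7
```

Beyond the returned value of θ, the variable whose coefficient went negative becomes the entering basic variable for the next simplex iteration.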
Figure 8.1 illustrates how Z*(θ), the objective function value for the optimal solution (given θ), changes as θ increases. In fact, Z*(θ) always has this piecewise linear and convex³ form (see Prob. 8.2-7). [Figure 8.1: The objective function value for an optimal solution as a function of θ for parametric linear programming with systematic changes in the cj parameters.] The corresponding optimal solution changes (as θ increases) just at the values of θ where the slope of the Z*(θ) function changes. Thus, Fig. 8.1 depicts a problem where three different solutions are optimal for different values of θ, the first for 0 ≤ θ ≤ θ1, the second for θ1 ≤ θ ≤ θ2, and the third for θ ≥ θ2. Because the value of each xj remains the same within each of these intervals for θ, the value of Z*(θ) varies with θ only because the coefficients of the xj are changing as a linear function of θ.

³See Appendix 2 for a definition and discussion of convex functions.

The solution procedure is based directly upon the sensitivity analysis procedure for investigating changes in the cj parameters (Cases 2a and 3, Sec. 7.2). The only basic difference with parametric linear programming is that the changes now are expressed in terms of θ rather than as specific numbers.

Example. To illustrate the solution procedure, suppose that α1 = 2 and α2 = −1 for the original Wyndor Glass Co. problem presented in Sec. 3.1, so that

Z(θ) = (3 + 2θ)x1 + (5 − θ)x2.

We begin with the final simplex tableau for θ = 0 in Table 4.8, as repeated here in the first tableau of Table 8.2 (after setting θ = 0). We see that its Eq. (0) is

(0)  Z + (3/2)x4 + x5 = 36.

The first step is to have the changes from the original (θ = 0) coefficients added into this Eq. (0) on the left-hand side:

(0)  Z − 2θx1 + θx2 + (3/2)x4 + x5 = 36.

Because both x1 and x2 are basic variables [appearing in Eqs. (3) and (2), respectively], they both need to be eliminated algebraically from Eq.
(0):

     Z − 2θx1 + θx2 + (3/2)x4 + x5 = 36
       + 2θ times Eq. (3)
       − θ times Eq. (2)

(0)  Z + (3/2 − (7/6)θ)x4 + (1 + (2/3)θ)x5 = 36 − 2θ.

The optimality test says that the current BF solution will remain optimal as long as these coefficients of the nonbasic variables remain nonnegative:

3/2 − (7/6)θ ≥ 0,  for 0 ≤ θ ≤ 9/7,
1 + (2/3)θ ≥ 0,   for all θ ≥ 0.

■ TABLE 8.2 The cj parametric linear programming procedure applied to the Wyndor Glass Co. example

Range 0 ≤ θ ≤ 9/7 (basic variables x3, x2, x1; optimal solution x4 = 0, x5 = 0, x3 = 2, x2 = 6, x1 = 2):
  Eq. (0): Z(θ) + ((9 − 7θ)/6)x4 + ((3 + 2θ)/3)x5 = 36 − 2θ
  Eq. (1): x3 + (1/3)x4 − (1/3)x5 = 2
  Eq. (2): x2 + (1/2)x4 = 6
  Eq. (3): x1 − (1/3)x4 + (1/3)x5 = 2

Range 9/7 ≤ θ ≤ 5 (basic variables x4, x2, x1; optimal solution x3 = 0, x5 = 0, x4 = 6, x2 = 3, x1 = 4):
  Eq. (0): Z(θ) + ((−9 + 7θ)/2)x3 + ((5 − θ)/2)x5 = 27 + 5θ
  Eq. (1): 3x3 + x4 − x5 = 6
  Eq. (2): −(3/2)x3 + x2 + (1/2)x5 = 3
  Eq. (3): x1 + x3 = 4

Range θ ≥ 5 (basic variables x4, x5, x1; optimal solution x2 = 0, x3 = 0, x4 = 12, x5 = 6, x1 = 4):
  Eq. (0): Z(θ) + (−5 + θ)x2 + (3 + 2θ)x3 = 12 + 8θ
  Eq. (1): 2x2 + x4 = 12
  Eq. (2): 2x2 − 3x3 + x5 = 6
  Eq. (3): x1 + x3 = 4

Therefore, after θ is increased past θ = 9/7, x4 would need to be the entering basic variable for another iteration of the simplex method, which takes us from the first tableau in Table 8.2 to the second tableau. Then θ would be increased further until another coefficient goes negative, which occurs for the coefficient of x5 in the second tableau when θ is increased past θ = 5. Another iteration of the simplex method then takes us to the final tableau of Table 8.2. Increasing θ further past 5 never leads to a negative coefficient in Eq. (0), so the procedure is completed.

Summary of the Parametric Linear Programming Procedure for Systematic Changes in the cj Parameters

1. Solve the problem with θ = 0 by the simplex method.
2. Use the sensitivity analysis procedure (Cases 2a and 3, Sec.
7.2) to introduce the Δcj = αjθ changes into Eq. (0).
3. Increase θ until one of the nonbasic variables has its coefficient in Eq. (0) go negative (or until θ has been increased as far as desired).
4. Use this variable as the entering basic variable for an iteration of the simplex method to find the new optimal solution. Return to step 3.

Note in Table 8.2 how the first two steps of this procedure lead to the first tableau and then steps 3 and 4 lead to the second tableau. Repeating steps 3 and 4 next leads to the final tableau.

Systematic Changes in the bi Parameters

For the case where the bi parameters change systematically, the one modification made in the original linear programming model is that bi is replaced by bi + αiθ, for i = 1, 2, . . . , m, where the αi are given input constants. Thus, the problem becomes

Maximize Z(θ) = Σ_{j=1}^{n} cj xj,

subject to

Σ_{j=1}^{n} aij xj ≤ bi + αiθ,  for i = 1, 2, . . . , m

and

xj ≥ 0,  for j = 1, 2, . . . , n.

The goal is to identify the optimal solution as a function of θ.

With this formulation, the corresponding objective function value Z*(θ) always has the piecewise linear and concave⁴ form shown in Fig. 8.2. (See Prob. 8.2-8.) The set of basic variables in the optimal solution still changes (as θ increases) only where the slope of Z*(θ) changes. However, in contrast to the preceding case, the values of these variables now change as a (linear) function of θ between the slope changes. The reason is that increasing θ changes the right-hand sides in the initial set of equations, which then causes changes in the right-hand sides in the final set of equations, i.e., in the values of the final set of basic variables. Figure 8.2 depicts a problem with three sets of basic variables that are optimal for different values of θ, the first for 0 ≤ θ ≤ θ1, the second for θ1 ≤ θ ≤ θ2, and the third for θ ≥ θ2.
Within each of these intervals of θ, the value of Z*(θ) varies with θ despite the fixed coefficients cj because the xj values are changing.

The following solution procedure summary is very similar to that just presented for systematic changes in the cj parameters. The reason is that changing the bi values is equivalent to changing the coefficients in the objective function of the dual model. Therefore, the procedure for the primal problem is exactly complementary to applying simultaneously the procedure for systematic changes in the cj parameters to the dual problem. Consequently, the dual simplex method (see Sec. 8.1) now would be used to obtain each new optimal solution, and the applicable sensitivity analysis case (see Sec. 7.2) now is Case 1, but these differences are the only major differences.

Summary of the Parametric Linear Programming Procedure for Systematic Changes in the bi Parameters

1. Solve the problem with θ = 0 by the simplex method.
2. Use the sensitivity analysis procedure (Case 1, Sec. 7.2) to introduce the Δbi = αiθ changes to the right side column.
3. Increase θ until one of the basic variables has its value in the right side column go negative (or until θ has been increased as far as desired).
4. Use this variable as the leaving basic variable for an iteration of the dual simplex method to find the new optimal solution. Return to step 3.

[Figure 8.2: The objective function value for an optimal solution as a function of θ for parametric linear programming with systematic changes in the bi parameters.]

⁴See Appendix 2 for a definition and discussion of concave functions.

Example. To illustrate this procedure in a way that demonstrates its duality relationship with the procedure for systematic changes in the cj parameters, we now apply it to the dual problem for the Wyndor Glass Co. (see Table 6.1).
In particular, suppose that α1 = 2 and α2 = −1, so that the functional constraints become

y1 + 3y3 ≥ 3 + 2θ   or   −y1 − 3y3 ≤ −3 − 2θ
2y2 + 2y3 ≥ 5 − θ   or   −2y2 − 2y3 ≤ −5 + θ.

Thus, the dual of this problem is just the example considered in Table 8.2.

This problem with θ = 0 has already been solved in Table 8.1, so we begin with the final simplex tableau given there. Using the sensitivity analysis procedure for Case 1, Sec. 7.2, we find that the entries in the right side column of the tableau change to the values given below:

Z* = y*b̄ = [2, 6] [−3 − 2θ, −5 + θ]ᵀ = −36 + 2θ,

b* = S*b̄ = [[−1/3, 0], [1/3, −1/2]] [−3 − 2θ, −5 + θ]ᵀ = [(3 + 2θ)/3, (9 − 7θ)/6]ᵀ.

Therefore, the two basic variables in this tableau,

y3 = (3 + 2θ)/3   and   y2 = (9 − 7θ)/6,

remain nonnegative for 0 ≤ θ ≤ 9/7. Increasing θ past θ = 9/7 requires making y2 a leaving basic variable for another iteration of the dual simplex method, and so on, as summarized in Table 8.3.

■ TABLE 8.3 The bi parametric linear programming procedure applied to the dual of the Wyndor Glass Co. example

Range 0 ≤ θ ≤ 9/7 (basic variables y3, y2; optimal solution y1 = y4 = y5 = 0, y3 = (3 + 2θ)/3, y2 = (9 − 7θ)/6):
  Eq. (0): Z(θ) + 2y1 + 2y4 + 6y5 = −36 + 2θ
  Eq. (1): (1/3)y1 + y3 − (1/3)y4 = (3 + 2θ)/3
  Eq. (2): −(1/3)y1 + y2 + (1/3)y4 − (1/2)y5 = (9 − 7θ)/6

Range 9/7 ≤ θ ≤ 5 (basic variables y3, y1; optimal solution y2 = y4 = y5 = 0, y3 = (5 − θ)/2, y1 = (−9 + 7θ)/2):
  Eq. (0): Z(θ) + 6y2 + 4y4 + 3y5 = −27 − 5θ
  Eq. (1): y2 + y3 − (1/2)y5 = (5 − θ)/2
  Eq. (2): y1 − 3y2 − y4 + (3/2)y5 = (−9 + 7θ)/2

Range θ ≥ 5 (basic variables y5, y1; optimal solution y2 = y3 = y4 = 0, y5 = −5 + θ, y1 = 3 + 2θ):
  Eq. (0): Z(θ) + 12y2 + 6y3 + 4y4 = −12 − 8θ
  Eq. (1): −2y2 − 2y3 + y5 = −5 + θ
  Eq. (2): y1 + 3y3 − y4 = 3 + 2θ

We suggest that you now trace through Tables 8.2 and 8.3 simultaneously to note the duality relationship between the two procedures.

The Solved Examples section of the book's website includes another example of the procedure for systematic changes in the bi parameters.
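The right-side updates just computed can also be sketched in code. In this illustration (the function names are ours, not the book's), the final-tableau matrix S* multiplies the parametric right-hand sides, each carried as a (constant, rate) pair meaning constant + rate·θ, and the feasible range of θ follows from keeping every basic variable nonnegative, mirroring step 3 of the summary.

```python
from fractions import Fraction as F

def updated_rhs(S_star, b_bar):
    """Multiply S* (final-tableau coefficients of the slack variables)
    by the parametric right-hand sides b_bar, where each entry of
    b_bar is a (constant, rate) pair equal to constant + rate*theta."""
    return [(sum(s * c for s, (c, _) in zip(row, b_bar)),
             sum(s * r for s, (_, r) in zip(row, b_bar)))
            for row in S_star]

def feasible_until(b_star):
    """Largest theta >= 0 keeping every basic variable nonnegative
    (None if they stay nonnegative for all theta >= 0)."""
    bounds = [-c / r for c, r in b_star if r < 0]
    return min(bounds) if bounds else None

# Final tableau of Table 8.1 (basic variables y3, y2) and the
# parametric right sides b_bar = (-3 - 2*theta, -5 + theta).
S_star = [[F(-1, 3), F(0)], [F(1, 3), F(-1, 2)]]
b_bar = [(F(-3), F(-2)), (F(-5), F(1))]
b_star = updated_rhs(S_star, b_bar)   # y3 = (3 + 2t)/3, y2 = (9 - 7t)/6
print(feasible_until(b_star))         # 9/7
```

Past the returned θ, the basic variable whose value went negative becomes the leaving basic variable for the next dual simplex iteration.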
■ 8.3 THE UPPER BOUND TECHNIQUE

It is fairly common in linear programming problems for some or all of the individual xj variables to have upper bound constraints

xj ≤ uj,

where uj is a positive constant representing the maximum feasible value of xj. We pointed out in Sec. 4.8 that the most important determinant of computation time for the simplex method is the number of functional constraints, whereas the number of nonnegativity constraints is relatively unimportant. Therefore, having a large number of upper bound constraints among the functional constraints greatly increases the computational effort required.

The upper bound technique avoids this increased effort by removing the upper bound constraints from the functional constraints and treating them separately, essentially like nonnegativity constraints.⁵ Removing the upper bound constraints in this way causes no problems as long as none of the variables gets increased over its upper bound. The only time the simplex method increases some of the variables is when the entering basic variable is increased to obtain a new BF solution. Therefore, the upper bound technique simply applies the simplex method in the usual way to the remainder of the problem (i.e., without the upper bound constraints) but with the one additional restriction that each new BF solution must satisfy the upper bound constraints in addition to the usual lower bound (nonnegativity) constraints.

To implement this idea, note that a decision variable xj with an upper bound constraint xj ≤ uj can always be replaced by xj = uj − yj, where yj would then be the decision variable. In other words, you have a choice between letting the decision variable be the amount above zero (xj) or the amount below uj (yj = uj − xj). (We shall refer to xj and yj as complementary decision variables.) Because 0 ≤ xj ≤ uj, it also follows that 0 ≤ yj ≤ uj. Thus, at any point during the simplex method, you can either

1. Use xj, where 0 ≤ xj ≤ uj, or
2.
Replace xj by uj − yj, where 0 ≤ yj ≤ uj.

The upper bound technique uses the following rule to make this choice:

Rule: Begin with choice 1. Whenever xj = 0, use choice 1, so xj is nonbasic. Whenever xj = uj, use choice 2, so yj = 0 is nonbasic. Switch choices only when the other extreme value of xj is reached.

⁵The upper bound technique assumes that the variables have the usual nonnegativity constraints in addition to the upper bound constraints. If a variable has a lower bound other than 0, say, xj ≥ Lj, then this constraint can be converted into a nonnegativity constraint by making the change of variables xj′ = xj − Lj, so xj′ ≥ 0.

Therefore, whenever a basic variable reaches its upper bound, you should switch choices and use its complementary decision variable as the new nonbasic variable (the leaving basic variable) for identifying the new BF solution.

Thus, the one substantive modification being made in the simplex method is in the rule for selecting the leaving basic variable. Recall that the simplex method selects as the leaving basic variable the one that would be the first to become infeasible by going negative as the entering basic variable is increased. The modification now made is to select instead the variable that would be the first to become infeasible in any way, either by going negative or by going over the upper bound, as the entering basic variable is increased. (Notice that one possibility is that the entering basic variable may become infeasible first by going over its upper bound, so that its complementary decision variable becomes the leaving basic variable.) If the leaving basic variable reaches zero, then proceed as usual with the simplex method. However, if it reaches its upper bound instead, then switch choices and make its complementary decision variable the leaving basic variable.
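This modified rule for selecting the leaving basic variable amounts to a three-way minimum ratio test: the entering variable's own upper bound, the usual ratios for basic variables driven toward zero, and new ratios for basic variables driven toward their upper bounds. The sketch below illustrates it for a single pivot column; the data layout and names are our own, not from any particular LP implementation.

```python
from fractions import Fraction as F

def ratio_test_with_bounds(entering_ub, rows):
    """Modified minimum ratio test of the upper bound technique.
    entering_ub: upper bound of the entering variable.
    rows: one (coef, value, ub) triple per equation, where coef is
    the entering variable's coefficient, value the current value of
    that row's basic variable, and ub its upper bound.
    Returns (max_increase, blocker): blocker is None if the entering
    variable itself reaches its upper bound first, otherwise a pair
    (row index, 'zero' or 'upper') naming what blocks the increase."""
    best, blocker = entering_ub, None
    for i, (coef, value, ub) in enumerate(rows):
        if coef > 0:                       # basic variable decreases
            t, kind = value / coef, 'zero'
        elif coef < 0:                     # basic variable increases
            t, kind = (ub - value) / -coef, 'upper'
        else:
            continue
        if t < best:
            best, blocker = t, (i, kind)
    return best, blocker

# Example of this section: entering x1 with u1 = 4;
# Eq (1): 4x1 + x2 = 12 with x2 <= 15; Eq (2): -2x1 + x3 = 4 with x3 <= 6.
print(ratio_test_with_bounds(F(4), [(F(4), F(12), F(15)),
                                    (F(-2), F(4), F(6))]))
# x1 can increase only to 1; x3 blocks by reaching its upper bound,
# so x3 is replaced by its complementary variable y3 = 6 - x3.
```

When the blocker kind is 'upper', the switch of choices described in the rule above is applied to that row's basic variable.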
An Example

To illustrate the upper bound technique, consider this problem:

Maximize Z = 2x1 + x2 + 2x3,

subject to

4x1 + x2 = 12
−2x1 + x3 = 4

and

0 ≤ x1 ≤ 4, 0 ≤ x2 ≤ 15, 0 ≤ x3 ≤ 6.

Thus, all three variables have upper bound constraints (u1 = 4, u2 = 15, u3 = 6).

The two equality constraints are already in proper form from Gaussian elimination for identifying the initial BF solution (x1 = 0, x2 = 12, x3 = 4), and none of the variables in this solution exceeds its upper bound, so x2 and x3 can be used as the initial basic variables without artificial variables being introduced. However, these variables then need to be eliminated algebraically from the objective function to obtain the initial Eq. (0), as follows:

     Z − 2x1 − x2 − 2x3 = 0
       + (4x1 + x2 = 12)
       + 2(−2x1 + x3 = 4)

(0)  Z − 2x1 = 20.

To start the first iteration, this initial Eq. (0) indicates that the initial entering basic variable is x1. Since the upper bound constraints are not to be included, the entire initial set of equations and the corresponding calculations for selecting the leaving basic variable are those shown in Table 8.4. The second column shows how much the entering basic variable x1 can be increased from zero before some basic variable (including x1) becomes infeasible. The maximum value given next to Eq. (0) is just the upper bound constraint for x1. For Eq. (1), since the coefficient of x1 is positive, increasing x1 to 3 decreases the basic variable in this equation (x2) from 12 to its lower bound of zero. For Eq. (2), since the coefficient of x1 is negative, increasing x1 to 1 increases the basic variable in this equation (x3) from 4 to its upper bound of 6.
■ TABLE 8.4 Equations and calculations for the initial leaving basic variable in the example for the upper bound technique

  Initial set of equations        Maximum feasible value of x1
  (0) Z − 2x1 = 20                x1 ≤ 4 (since u1 = 4)
  (1) 4x1 + x2 = 12               x1 ≤ 12/4 = 3
  (2) −2x1 + x3 = 4               x1 ≤ (6 − 4)/2 = 1 ← minimum (because u3 = 6)

Because Eq. (2) has the smallest maximum feasible value of x1 in Table 8.4, the basic variable in this equation (x3) provides the leaving basic variable. However, because x3 reached its upper bound, replace x3 by 6 − y3, so that y3 = 0 becomes the new nonbasic variable for the next BF solution and x1 becomes the new basic variable in Eq. (2). This replacement leads to the following changes in this equation:

(2)  −2x1 + x3 = 4
  →  −2x1 + 6 − y3 = 4
  →  −2x1 − y3 = −2
  →  x1 + (1/2)y3 = 1.

Therefore, after we eliminate x1 algebraically from the other equations, the second complete set of equations becomes

(0)  Z + y3 = 22
(1)  x2 − 2y3 = 8
(2)  x1 + (1/2)y3 = 1.

The resulting BF solution is x1 = 1, x2 = 8, y3 = 0. By the optimality test, it also is an optimal solution, so x1 = 1, x2 = 8, x3 = 6 − y3 = 6 is the desired solution for the original problem.

If you would like to see another example of the upper bound technique, the Solved Examples section of the book's website includes one.

■ 8.4 AN INTERIOR-POINT ALGORITHM

In Sec. 4.9 we discussed a dramatic development in linear programming that occurred in 1984, namely, the invention by Narendra Karmarkar of AT&T Bell Laboratories of a powerful algorithm for solving huge linear programming problems with an approach very different from the simplex method. We now introduce the nature of Karmarkar's approach by describing a relatively elementary variant (the "affine" or "affine-scaling" variant) of his algorithm.⁶ (Your IOR Tutorial also includes this variant under the title Solve Automatically by the Interior-Point Algorithm.)
Throughout this section we shall focus on Karmarkar's main ideas on an intuitive level while avoiding mathematical details. In particular, we shall bypass certain details that are needed for the full implementation of the algorithm (e.g., how to find an initial feasible trial solution) but are not central to a basic conceptual understanding. The ideas to be described can be summarized as follows:

Concept 1: Shoot through the interior of the feasible region toward an optimal solution.
Concept 2: Move in a direction that improves the objective function value at the fastest possible rate.
Concept 3: Transform the feasible region to place the current trial solution near its center, thereby enabling a large improvement when concept 2 is implemented.

⁶The basic approach for this variant actually was proposed in 1967 by a Russian mathematician, I. I. Dikin, and then rediscovered soon after the appearance of Karmarkar's work by a number of researchers, including E. R. Barnes, T. M. Cavalier, and A. L. Soyster. Also see R. J. Vanderbei, M. S. Meketon, and B. A. Freedman, "A Modification of Karmarkar's Linear Programming Algorithm," Algorithmica, 1(4) (Special Issue on New Approaches to Linear Programming): 395–407, 1986.

To illustrate these ideas throughout the section, we shall use the following example:

Maximize Z = x1 + 2x2,

subject to

x1 + x2 ≤ 8

and

x1 ≥ 0, x2 ≥ 0.

This problem is depicted graphically in Fig. 8.3, where the optimal solution is seen to be (x1, x2) = (0, 8) with Z = 16. [Figure 8.3: Example for the interior-point algorithm, showing the feasible region, the objective function line Z = 16 = x1 + 2x2, the initial trial solution (2, 2), and an arrow from (2, 2) to (3, 4).] (We will describe the significance of the arrow in the figure shortly.) You will see that our interior-point algorithm requires a considerable amount of work to solve this tiny example.
The reason is that the algorithm is designed to solve huge problems efficiently, but it is much less efficient than the simplex method (or the graphical method in this case) for small problems. However, having an example with only two variables will allow us to depict graphically what the algorithm is doing.

The Relevance of the Gradient for Concepts 1 and 2

The algorithm begins with an initial trial solution that (like all subsequent trial solutions) lies in the interior of the feasible region, i.e., inside the boundary of the feasible region. Thus, for the example, the solution must not lie on any of the three lines (x1 = 0, x2 = 0, x1 + x2 = 8) that form the boundary of this region in Fig. 8.3. (A trial solution that lies on the boundary cannot be used because this would lead to the undefined mathematical operation of division by zero at one point in the algorithm.) We have arbitrarily chosen (x1, x2) = (2, 2) to be the initial trial solution.

To begin implementing concepts 1 and 2, note in Fig. 8.3 that the direction of movement from (2, 2) that increases Z at the fastest possible rate is perpendicular to (and toward) the objective function line Z = 16 = x1 + 2x2. We have shown this direction by the arrow from (2, 2) to (3, 4). Using vector addition, we have

(3, 4) = (2, 2) + (1, 2),

where the vector (1, 2) is the gradient of the objective function. (We will discuss gradients further in Sec. 13.5 in the broader context of nonlinear programming, where algorithms similar to Karmarkar's have long been used.) The components of (1, 2) are just the coefficients in the objective function. Thus, with one subsequent modification, the gradient (1, 2) defines the ideal direction in which to move, where the question of the distance to move will be considered later.

The algorithm actually operates on linear programming problems after they have been rewritten in augmented form.
Letting x3 be the slack variable for the functional constraint of the example, we see that this form is

Maximize Z = x1 + 2x2,

subject to

x1 + x2 + x3 = 8

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

In matrix notation (slightly different from Chap. 5 because the slack variable now is incorporated into the notation), the augmented form can be written in general as

Maximize Z = cᵀx,

subject to

Ax = b   and   x ≥ 0,

where, for the example,

c = [1, 2, 0]ᵀ, x = [x1, x2, x3]ᵀ, A = [1, 1, 1], b = [8], 0 = [0, 0, 0]ᵀ.

Note that cᵀ = [1, 2, 0] now is the gradient of the objective function.

The augmented form of the example is depicted graphically in Fig. 8.4. The feasible region now consists of the triangle with vertices (8, 0, 0), (0, 8, 0), and (0, 0, 8). Points in the interior of this feasible region are those where x1 > 0, x2 > 0, and x3 > 0. Each of these three xj > 0 conditions has the effect of forcing (x1, x2) away from one of the three lines forming the boundary of the feasible region in Fig. 8.3. [Figure 8.4: Example in augmented form for the interior-point algorithm, showing the feasible triangle, the points (2, 2, 4), (3, 4, 4), and (2, 3, 3), and the optimal solution (0, 8, 0).]

Using the Projected Gradient to Implement Concepts 1 and 2

In augmented form, the initial trial solution for the example is (x1, x2, x3) = (2, 2, 4). Adding the gradient (1, 2, 0) leads to

(3, 4, 4) = (2, 2, 4) + (1, 2, 0).

However, now there is a complication. The algorithm cannot move from (2, 2, 4) to (3, 4, 4), because (3, 4, 4) is infeasible! When x1 = 3 and x2 = 4, then x3 = 8 − x1 − x2 = 1 instead of 4. The point (3, 4, 4) lies on the near side as you look down on the feasible triangle in Fig. 8.4. Therefore, to remain feasible, the algorithm (indirectly) projects the point (3, 4, 4) down onto the feasible triangle by dropping a line that is perpendicular to this triangle.
A vector from (0, 0, 0) to (1, 1, 1) is perpendicular to this triangle, so the perpendicular line through (3, 4, 4) is given by the equation

(x1, x2, x3) = (3, 4, 4) − θ(1, 1, 1),

where θ is a scalar. Since the triangle satisfies the equation x1 + x2 + x3 = 8, this perpendicular line intersects the triangle at (2, 3, 3). Because

(2, 3, 3) = (2, 2, 4) + (0, 1, −1),

the projected gradient of the objective function (the gradient projected onto the feasible region) is (0, 1, −1). It is this projected gradient that defines the direction of movement from (2, 2, 4) for the algorithm, as shown by the arrow in Fig. 8.4.

A formula is available for computing the projected gradient directly. By defining the projection matrix P as

P = I − Aᵀ(AAᵀ)⁻¹A,

the projected gradient (in column form) is

cp = Pc.

Thus, for the example,

P = I − [1, 1, 1]ᵀ([1, 1, 1][1, 1, 1]ᵀ)⁻¹[1, 1, 1]
  = I − (1/3)[1, 1, 1]ᵀ[1, 1, 1]
  = [[2/3, −1/3, −1/3],
     [−1/3, 2/3, −1/3],
     [−1/3, −1/3, 2/3]],

so

cp = [[2/3, −1/3, −1/3], [−1/3, 2/3, −1/3], [−1/3, −1/3, 2/3]] [1, 2, 0]ᵀ = [0, 1, −1]ᵀ.

Moving from (2, 2, 4) in the direction of the projected gradient (0, 1, −1) involves increasing α from zero in the formula

x = [2, 2, 4]ᵀ + 4α cp = [2, 2, 4]ᵀ + 4α [0, 1, −1]ᵀ,

where the coefficient 4 is used simply to give an upper bound of 1 for α to maintain feasibility (all xj ≥ 0). Note that increasing α to α = 1 would cause x3 to decrease to x3 = 4 + 4(1)(−1) = 0, where α > 1 yields x3 < 0. Thus, α measures the fraction used of the distance that could be moved before the feasible region is left.

How large should α be made for moving to the next trial solution?
Because the increase in Z is proportional to α, a value close to the upper bound of 1 is good for giving a relatively large step toward optimality on the current iteration. However, the problem with a value too close to 1 is that the next trial solution then is jammed against a constraint boundary, thereby making it difficult to take large improving steps during subsequent iterations. Therefore, it is very helpful for trial solutions to be near the center of the feasible region (or at least near the center of the portion of the feasible region in the vicinity of an optimal solution), and not too close to any constraint boundary. With this in mind, Karmarkar has stated for his algorithm that a value as large as α = 0.25 should be "safe." In practice, much larger values (for example, α = 0.9) sometimes are used. For the purposes of this example (and the problems at the end of the chapter), we have chosen α = 0.5. (Your IOR Tutorial uses α = 0.5 as the default value, but also has α = 0.9 available.)

A Centering Scheme for Implementing Concept 3

We now have just one more step to complete the description of the algorithm, namely, a special scheme for transforming the feasible region to place the current trial solution near its center. We have just described the benefit of having the trial solution near the center, but another important benefit of this centering scheme is that it keeps turning the direction of the projected gradient to point more nearly toward an optimal solution as the algorithm converges toward this solution.

The basic idea of the centering scheme is straightforward: simply change the scale (units) for each of the variables so that the trial solution becomes equidistant from the constraint boundaries in the new coordinate system. (Karmarkar's original algorithm uses a more sophisticated centering scheme.) For the example, there are three constraint boundaries in Fig.
8.3, each one corresponding to a zero value for one of the three variables of the problem in augmented form, namely, x1 = 0, x2 = 0, and x3 = 0. In Fig. 8.4, see how these three constraint boundaries intersect the Ax = b (x1 + x2 + x3 = 8) plane to form the boundary of the feasible region. The initial trial solution is (x1, x2, x3) = (2, 2, 4), so this solution is 2 units away from the x1 = 0 and x2 = 0 constraint boundaries and 4 units away from the x3 = 0 constraint boundary, when the units of the respective variables are used. However, whatever these units are in each case, they are quite arbitrary and can be changed as desired without changing the problem. Therefore, let us rescale the variables as follows:

x̃1 = x1/2, x̃2 = x2/2, x̃3 = x3/4

in order to make the current trial solution of (x1, x2, x3) = (2, 2, 4) become

(x̃1, x̃2, x̃3) = (1, 1, 1).

In these new coordinates (substituting 2x̃1 for x1, 2x̃2 for x2, and 4x̃3 for x3), the problem becomes

Maximize Z = 2x̃1 + 4x̃2,

subject to

2x̃1 + 2x̃2 + 4x̃3 = 8

and

x̃1 ≥ 0, x̃2 ≥ 0, x̃3 ≥ 0,

as depicted graphically in Fig. 8.5. Note that the trial solution (1, 1, 1) in Fig. 8.5 is equidistant from the three constraint boundaries x̃1 = 0, x̃2 = 0, x̃3 = 0. For each subsequent iteration as well, the problem is rescaled again to achieve this same property, so that the current trial solution always is (1, 1, 1) in the current coordinates.

Summary and Illustration of the Algorithm

Now let us summarize and illustrate the algorithm by going through the first iteration for the example, then giving a summary of the general procedure, and finally applying this summary to a second iteration.

Iteration 1. Given the initial trial solution (x1, x2, x3) = (2, 2, 4), let D be the corresponding diagonal matrix such that x = Dx̃, so that
D = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  4 ]

The rescaled variables then are the components of

x̃ = D⁻¹x = [ 1/2   0    0  ] [ x1 ]   [ x1/2 ]
            [  0   1/2   0  ] [ x2 ] = [ x2/2 ]
            [  0    0   1/4 ] [ x3 ]   [ x3/4 ]

In these new coordinates, A and c have become

Ã = AD = [1  1  1] [ 2  0  0 ] = [2  2  4],
                   [ 0  2  0 ]
                   [ 0  0  4 ]

c̃ = Dc = [ 2  0  0 ] [ 1 ]   [ 2 ]
          [ 0  2  0 ] [ 2 ] = [ 4 ]
          [ 0  0  4 ] [ 0 ]   [ 0 ]

Therefore, the projection matrix is

P = I − Ãᵀ(ÃÃᵀ)⁻¹Ã

  = [ 1  0  0 ]   [ 2 ]                      ⁻¹
    [ 0  1  0 ] − [ 2 ] ( [2  2  4] [ 2 ] )    [2  2  4]
    [ 0  0  1 ]   [ 4 ]             [ 2 ]
                                    [ 4 ]

  = [ 1  0  0 ]        [ 4   4   8 ]   [  5/6  −1/6  −1/3 ]
    [ 0  1  0 ] − 1/24 [ 4   4   8 ] = [ −1/6   5/6  −1/3 ],
    [ 0  0  1 ]        [ 8   8  16 ]   [ −1/3  −1/3   1/3 ]

so that the projected gradient is

cp = Pc̃ = [  5/6  −1/6  −1/3 ] [ 2 ]   [  1 ]
           [ −1/6   5/6  −1/3 ] [ 4 ] = [  3 ]
           [ −1/3  −1/3   1/3 ] [ 0 ]   [ −2 ]

Define v as the absolute value of the negative component of cp having the largest absolute value, so that v = |−2| = 2 in this case. Consequently, in the current coordinates, the algorithm now moves from the current trial solution (x̃1, x̃2, x̃3) = (1, 1, 1) to the next trial solution

x̃ = [ 1 ] + (α/v) cp = [ 1 ] + (0.5/2) [  1 ]   [ 5/4 ]
     [ 1 ]              [ 1 ]           [  3 ] = [ 7/4 ]
     [ 1 ]              [ 1 ]           [ −2 ]   [ 1/2 ]

as shown in Fig. 8.5. (The definition of v has been chosen to make the smallest component of x̃ equal to zero when α = 1 in this equation for the next trial solution.) In the original coordinates, this solution is

[ x1 ]        [ 2  0  0 ] [ 5/4 ]   [ 5/2 ]
[ x2 ] = Dx̃ = [ 0  2  0 ] [ 7/4 ] = [ 7/2 ]
[ x3 ]        [ 0  0  4 ] [ 1/2 ]   [  2  ]

This completes the iteration, and this new solution will be used to start the next iteration.
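The matrix arithmetic of iteration 1 is easy to check numerically. The sketch below (plain Python, for illustration only) recomputes P and cp; with a single functional constraint, (ÃÃᵀ)⁻¹ is just the reciprocal of the scalar ÃÃᵀ = 24, so no matrix inversion is needed.

```python
A_tilde = [2, 2, 4]   # A~ = A D for iteration 1
c_tilde = [2, 4, 0]   # c~ = D c for iteration 1

denom = sum(a * a for a in A_tilde)   # A~ A~^T = 24 (a scalar here)

# P = I - A~^T (A~ A~^T)^-1 A~, built entry by entry.
P = [[(1.0 if i == j else 0.0) - A_tilde[i] * A_tilde[j] / denom
      for j in range(3)] for i in range(3)]

# Projected gradient cp = P c~; works out to (1, 3, -2).
cp = [sum(P[i][j] * c_tilde[j] for j in range(3)) for i in range(3)]
print(cp)
```

The first row of P comes out as (5/6, −1/6, −1/3) and cp as (1, 3, −2), matching the hand computation above.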
These steps can be summarized as follows for any iteration.

Summary of the Interior-Point Algorithm

1. Given the current trial solution (x1, x2, . . . , xn), set

D = [ x1  0   0   ...  0  ]
    [ 0   x2  0   ...  0  ]
    [ 0   0   x3  ...  0  ]
    [ .................... ]
    [ 0   0   0   ...  xn ]

2. Calculate Ã = AD and c̃ = Dc.
3. Calculate P = I − Ãᵀ(ÃÃᵀ)⁻¹Ã and cp = Pc̃.
4. Identify the negative component of cp having the largest absolute value, and set v equal to this absolute value. Then calculate

x̃ = [ 1 ]
    [ 1 ] + (α/v) cp,
    [ ⋮ ]
    [ 1 ]

where α is a selected constant between 0 and 1 (for example, α = 0.5).
5. Calculate x = Dx̃ as the trial solution for the next iteration (step 1). (If this trial solution is virtually unchanged from the preceding one, then the algorithm has virtually converged to an optimal solution, so stop.)

Now let us apply this summary to iteration 2 for the example.

Iteration 2.

Step 1: Given the current trial solution (x1, x2, x3) = (5/2, 7/2, 2), set

D = [ 5/2   0   0 ]
    [  0   7/2  0 ]
    [  0    0   2 ]

(Note that the rescaled variables are

[ x̃1 ]          [ 2/5   0    0  ] [ x1 ]   [ 2x1/5 ]
[ x̃2 ] = D⁻¹x = [  0   2/7   0  ] [ x2 ] = [ 2x2/7 ]
[ x̃3 ]          [  0    0   1/2 ] [ x3 ]   [ x3/2  ]

so that the BF solutions in these new coordinates are

x̃ = D⁻¹(8, 0, 0)ᵀ = (16/5, 0, 0)ᵀ,   x̃ = D⁻¹(0, 8, 0)ᵀ = (0, 16/7, 0)ᵀ,   and   x̃ = D⁻¹(0, 0, 8)ᵀ = (0, 0, 4)ᵀ,

as depicted in Fig. 8.6.)

Step 2: Ã = AD = [5/2  7/2  2] and c̃ = Dc = (5/2, 7, 0)ᵀ.

■ FIGURE 8.6 Example after rescaling for iteration 2. [Shows the trial solution (1, 1, 1), the next trial solution (0.83, 1.40, 0.5), and the BF solutions (16/5, 0, 0), (0, 16/7, 0) (optimal), and (0, 0, 4).]

Step 3:

P = [ 13/18  −7/18   −2/9  ]
    [ −7/18  41/90  −14/45 ]
    [ −2/9  −14/45   37/45 ]

and

cp = (−11/12, 133/60, −41/15)ᵀ.
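The five-step summary translates directly into code. The following sketch (plain Python, illustrative rather than the book's IOR Tutorial implementation) performs one iteration for a problem whose A has a single row, so that (ÃÃᵀ)⁻¹ reduces to a scalar reciprocal; with several functional constraints you would instead invert the matrix ÃÃᵀ (for example, with numpy.linalg.inv). Two calls reproduce the trial solutions computed above for the example.

```python
def interior_point_iteration(x, A, c, alpha=0.5):
    """One iteration of the interior-point algorithm for a problem with a
    single functional constraint row A (so (A~ A~^T)^-1 is a scalar)."""
    n = len(x)
    A_t = [A[j] * x[j] for j in range(n)]     # Step 2: A~ = A D
    c_t = [c[j] * x[j] for j in range(n)]     #         c~ = D c
    denom = sum(a * a for a in A_t)           # A~ A~^T
    proj = sum(A_t[j] * c_t[j] for j in range(n)) / denom
    cp = [c_t[j] - A_t[j] * proj for j in range(n)]   # Step 3: cp = P c~
    v = max(-comp for comp in cp if comp < 0)         # Step 4
    x_t = [1 + (alpha / v) * cp[j] for j in range(n)]
    return [x[j] * x_t[j] for j in range(n)]          # Step 5: x = D x~

# Example of Sec. 8.4: maximize Z = x1 + 2 x2 s.t. x1 + x2 + x3 = 8.
x = [2, 2, 4]
x = interior_point_iteration(x, [1, 1, 1], [1, 2, 0])  # iteration 1: (5/2, 7/2, 2)
print(x)
x = interior_point_iteration(x, [1, 1, 1], [1, 2, 0])  # iteration 2: about (2.08, 4.92, 1.00)
print(x)
```

The stopping rule of step 5 would simply compare successive trial solutions and stop when they are virtually identical.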
Step 4: |−41/15| > |−11/12|, so v = 41/15 and

x̃ = [ 1 ]                 [ −11/12 ]   [ 273/328 ]   [ 0.83 ]
    [ 1 ] + (0.5/(41/15)) [ 133/60 ] = [ 461/328 ] ≈ [ 1.40 ]
    [ 1 ]                 [ −41/15 ]   [   1/2   ]   [ 0.50 ]

Step 5:

x = Dx̃ = (1365/656, 3227/656, 1)ᵀ ≈ (2.08, 4.92, 1.00)ᵀ

is the trial solution for iteration 3.

Since there is little to be learned by repeating these calculations for additional iterations, we shall stop here. However, we do show in Fig. 8.7 the reconfigured feasible region after rescaling based on the trial solution just obtained for iteration 3.

■ FIGURE 8.7 Example after rescaling for iteration 3. [Shows the trial solution (1, 1, 1), the optimal solution (0, 1.63, 0), and the other BF solutions near 3.85 on the x̃1 axis and 8 on the x̃3 axis.]

As always, the rescaling has placed the trial solution at (x̃1, x̃2, x̃3) = (1, 1, 1), equidistant from the x̃1 = 0, x̃2 = 0, and x̃3 = 0 constraint boundaries. Note in Figs. 8.5, 8.6, and 8.7 how the sequence of iterations and rescaling have the effect of "sliding" the optimal solution toward (1, 1, 1) while the other BF solutions tend to slide away. Eventually, after enough iterations, the optimal solution will lie very near (x̃1, x̃2, x̃3) = (0, 1, 0) after rescaling, while the other two BF solutions will be very far from the origin on the x̃1 and x̃3 axes. Step 5 of that iteration then will yield a solution in the original coordinates very near the optimal solution of (x1, x2, x3) = (0, 8, 0).

Figure 8.8 shows the progress of the algorithm in the original x1-x2 coordinate system before the problem is augmented. The three points, (x1, x2) = (2, 2), (2.5, 3.5), and (2.08, 4.92), are the trial solutions for initiating iterations 1, 2, and 3, respectively. We then have drawn a smooth curve through and beyond these points to show the trajectory of the algorithm in subsequent iterations as it approaches (x1, x2) = (0, 8).
The functional constraint for this particular example happened to be an inequality constraint. However, equality constraints cause no difficulty for the algorithm, since it deals with the constraints only after any necessary augmenting has been done to convert them to equality form (Ax = b) anyway. To illustrate, suppose that the only change in the example is that the constraint x1 + x2 ≤ 8 is changed to x1 + x2 = 8. Thus, the feasible region in Fig. 8.3 changes to just the line segment between (8, 0) and (0, 8). Given an initial feasible trial solution in the interior (x1 > 0 and x2 > 0) of this line segment, say, (x1, x2) = (4, 4), the algorithm can proceed just as presented in the five-step summary with just the two variables and A = [1  1]. For each iteration, the projected gradient points along this line segment in the direction of (0, 8). With α = 1/2, iteration 1 leads from (4, 4) to (2, 6), iteration 2 leads from (2, 6) to (1, 7), etc. (Problem 8.4-3 asks you to verify these results.)

Although either version of the example has only one functional constraint, having more than one leads to just one change in the procedure as already illustrated (other than more extensive calculations). Having a single functional constraint in the example meant that A had only a single row, so the (ÃÃᵀ)⁻¹ term in step 3 only involved taking the reciprocal of the number obtained from the vector product ÃÃᵀ. Multiple functional constraints mean that A has multiple rows, so then the (ÃÃᵀ)⁻¹ term involves finding the inverse of the matrix obtained from the matrix product ÃÃᵀ.

■ FIGURE 8.8 Trajectory of the interior-point algorithm for the example in the original x1-x2 coordinate system. [Shows the trial solutions (2, 2), (2.5, 3.5), and (2.08, 4.92) on a smooth curve approaching the optimal solution (0, 8).]

To conclude, we need to add a comment to place the algorithm into better perspective.
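The equality-constrained variant is easy to verify numerically. A minimal sketch in plain Python (illustrative only) that runs the five-step iteration with just the two variables, A = [1  1], and the example's objective coefficients c = (1, 2), starting from (4, 4):

```python
def iterate(x, A, c, alpha=0.5):
    """One interior-point iteration for a single (equality) constraint row A."""
    A_t = [a * d for a, d in zip(A, x)]               # A~ = A D
    c_t = [cj * d for cj, d in zip(c, x)]             # c~ = D c
    proj = sum(a * cj for a, cj in zip(A_t, c_t)) / sum(a * a for a in A_t)
    cp = [cj - a * proj for cj, a in zip(c_t, A_t)]   # projected gradient
    v = max(-comp for comp in cp if comp < 0)
    return [d * (1 + (alpha / v) * g) for d, g in zip(x, cp)]

x = [4, 4]
x = iterate(x, [1, 1], [1, 2])   # iteration 1 takes (4, 4) to (2, 6)
x = iterate(x, [1, 1], [1, 2])   # iteration 2 takes (2, 6) to (1, 7)
print(x)
```

Each trial solution stays on the segment x1 + x2 = 8 and halves the remaining distance in x1 toward the optimal solution (0, 8), exactly as stated above.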
For our extremely small example, the algorithm requires relatively extensive calculations and then, after many iterations, obtains only an approximation of the optimal solution. By contrast, the graphical procedure of Sec. 3.1 finds the optimal solution in Fig. 8.3 immediately, and the simplex method requires only one quick iteration. However, do not let this contrast fool you into downgrading the efficiency of the interior-point algorithm. This algorithm is designed for dealing with big problems that may have many thousands of functional constraints. The simplex method typically requires thousands of iterations on such problems. By "shooting" through the interior of the feasible region, the interior-point algorithm tends to require a substantially smaller number of iterations (although with considerably more work per iteration). This sometimes enables an interior-point algorithm to efficiently solve huge linear programming problems that might even be beyond the reach of either the simplex method or the dual simplex method. Therefore, interior-point algorithms similar to the one presented here play an important role in linear programming. See Sec. 4.9 for a comparison of the interior-point approach with the simplex method. Section 4.9 also discusses the complementary roles of the interior-point approach and the simplex method, including how they can even be combined into a hybrid algorithm.

Finally, we should emphasize that this section has provided only a conceptual introduction to the interior-point approach to linear programming by describing a relatively elementary variant of Karmarkar's path-breaking 1984 algorithm. Over the many subsequent years, a number of top-notch researchers have developed many key advances in the interior-point approach. The resulting interior-point algorithms now are commonly referred to as barrier algorithms (or barrier methods). Further coverage of this advanced topic is beyond the scope of this book.
However, the interested reader can find many details in the selected references listed at the end of this chapter.

■ 8.5 CONCLUSIONS

The dual simplex method and parametric linear programming are especially valuable for postoptimality analysis, although they also can be very useful in other contexts. The upper bound technique provides a way of streamlining the simplex method for the common situation in which many or all of the variables have explicit upper bounds. It can greatly reduce the computational effort for large problems. Mathematical-programming computer packages usually include all three of these procedures, and they are widely used. Because their basic structure is based largely upon the simplex method as presented in Chap. 4, they retain the exceptional computational efficiency possessed by the simplex method.

Various other special-purpose algorithms also have been developed to exploit the special structure of particular types of linear programming problems (such as those to be discussed in Chaps. 9 and 10). Much research continues to be done in this area.

Karmarkar's interior-point algorithm initiated another key line of research into how to solve linear programming problems. Variants of this algorithm now provide a powerful approach for efficiently solving some very large problems.

■ SELECTED REFERENCES

1. Hooker, J. N.: "Karmarkar's Linear Programming Algorithm," Interfaces, 16: 75–90, July–August 1986.
2. Jones, D., and M. Tamiz: Practical Goal Programming, Springer, New York, 2010.
3. Luenberger, D., and Y. Ye: Linear and Nonlinear Programming, 3rd ed., Springer, New York, 2008.
4. Marsten, R., R. Subramanian, M. Saltzman, I. Lustig, and D. Shanno: "Interior-Point Methods for Linear Programming: Just Call Newton, Lagrange, and Fiacco and McCormick!," Interfaces, 20: 105–116, July–August 1990.
5. Murty, K. G.: Optimization for Decision Making: Linear and Quadratic Models, Springer, New York, 2010.
6.
Vanderbei, R. J.: "Affine-Scaling for Linear Programs with Free Variables," Mathematical Programming, 43: 31–44, 1989.
7. Vanderbei, R. J.: Linear Programming: Foundations and Extensions, 4th ed., Springer, New York, 2014.
8. Ye, Y.: Interior-Point Algorithms: Theory and Analysis, Wiley, Hoboken, NJ, 1997.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples: Examples for Chapter 8

Interactive Procedures in IOR Tutorial:
Enter or Revise a General Linear Programming Model
Set Up for the Simplex Method (Interactive Only)
Solve Interactively by the Simplex Method
Interactive Graphical Method

Automatic Procedures in IOR Tutorial:
Solve Automatically by the Simplex Method
Solve Automatically by the Interior-Point Algorithm
Graphical Method and Sensitivity Analysis

An Excel Add-In: Analytic Solver Platform for Education (ASPE)

"Ch. 8 - Other Algorithms for LP" Files for Solving the Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 8

Supplement to This Chapter: Linear Goal Programming and Its Solution Procedures (includes two accompanying cases: A Cure for Cuba and Airport Security)

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:

I: We suggest that you use one of the procedures in IOR Tutorial (the printout records your work). For parametric linear programming, this only applies to θ = 0, after which you should proceed manually.
C: Use the computer to solve the problem by using the automatic procedure for the interior-point algorithm in IOR Tutorial.

An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

8.1-1. Consider the following problem.

Maximize Z = −x1 − x2,

subject to

x1 + x2 ≤ 8
x2 ≥ 3
−x1 + x2 ≤ 2

and

x1 ≥ 0,   x2 ≥ 0.

(a) Solve this problem graphically.
(b) Use the dual simplex method manually to solve this problem.
(c) Trace graphically the path taken by the dual simplex method.

I 8.1-2.* Use the dual simplex method manually to solve the following problem.

Minimize Z = 5x1 + 2x2 + 4x3,

subject to

3x1 + x2 + 2x3 ≥ 4
6x1 + 3x2 + 5x3 ≥ 10

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0.

I 8.1-3. Use the dual simplex method manually to solve the following problem.

Minimize Z = 7x1 + 2x2 + 5x3 + 4x4,

subject to

2x1 + 4x2 + 7x3 + x4 ≥ 5
8x1 + 4x2 + 6x3 + 4x4 ≥ 8
3x1 + 8x2 + x3 + 4x4 ≥ 4

and

xj ≥ 0,   for j = 1, 2, 3, 4.

8.1-4. Consider the following problem.

Maximize Z = 3x1 + 2x2,

subject to

3x1 + x2 ≤ 12
x1 + x2 ≤ 6
5x1 + 3x2 ≤ 27

and

x1 ≥ 0,   x2 ≥ 0.

(a) Solve by the original simplex method (in tabular form). Identify the complementary basic solution for the dual problem obtained at each iteration.
(b) Solve the dual of this problem manually by the dual simplex method. Compare the resulting sequence of basic solutions with the complementary basic solutions obtained in part (a).

8.1-5. Consider the example for case 1 of sensitivity analysis given in Sec. 7.2, where the initial simplex tableau of Table 4.8 is modified by changing b2 from 12 to 24, thereby changing the respective entries in the right-side column of the final simplex tableau to 54, 6, 12, and −2. Starting from this revised final simplex tableau, use the dual simplex method to obtain the new optimal solution shown in Table 7.5. Show your work.

8.1-6.* Consider part (a) of Prob. 7.2-2. Use the dual simplex method manually to reoptimize, starting from the revised final tableau.

8.2-1.* Consider the following problem.

Maximize Z = 8x1 + 24x2,

subject to

x1 + 2x2 ≤ 10
2x1 + x2 ≤ 10

and

x1 ≥ 0,   x2 ≥ 0.

Suppose that Z represents profit and that it is possible to modify the objective function somewhat by an appropriate shifting of key personnel between the two activities. In particular, suppose that the unit profit of activity 1 can be increased above 8 (to a maximum of 18) at the expense of decreasing the unit profit of activity 2 below 24 by twice the amount. Thus, Z can actually be represented as

Z(θ) = (8 + θ)x1 + (24 − 2θ)x2,

where θ is also a decision variable such that 0 ≤ θ ≤ 10.

I (a) Solve the original form of this problem graphically. Then extend this graphical procedure to solve the parametric extension of the problem; i.e., find the optimal solution and the optimal value of Z(θ) as a function of θ, for 0 ≤ θ ≤ 10.
I (b) Find an optimal solution for the original form of the problem by the simplex method. Then use parametric linear programming to find an optimal solution and the optimal value of Z(θ) as a function of θ, for 0 ≤ θ ≤ 10. Plot Z(θ).
(c) Determine the optimal value of θ. Then indicate how this optimal value could have been identified directly by solving only two ordinary linear programming problems. (Hint: A convex function achieves its maximum at an endpoint.)

I 8.2-2. Use parametric linear programming to find the optimal solution for the following problem as a function of θ, for 0 ≤ θ ≤ 20.

Maximize Z(θ) = (20 + 4θ)x1 + (30 − 3θ)x2 + 5x3,

subject to

3x1 + 3x2 + x3 ≤ 30
8x1 + 6x2 + 4x3 ≤ 75
6x1 + x2 + x3 ≤ 45

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0.

8.2-3. Consider the following problem.

Maximize Z(θ) = (10 − θ)x1 + (12 + θ)x2 + (7 + 2θ)x3,

subject to

x1 + 2x2 + 2x3 ≤ 30
x1 + x2 + x3 ≤ 20

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0.

(a) Use parametric linear programming to find an optimal solution for this problem as a function of θ, for θ ≥ 0.
(b) Construct the dual model for this problem. Then find an optimal solution for this dual problem as a function of θ, for θ ≥ 0, by the method described in the latter part of Sec. 8.2. Indicate graphically what this algebraic procedure is doing. Compare the basic solutions obtained with the complementary basic solutions obtained in part (a).

I 8.2-4.* Use the parametric linear programming procedure for making systematic changes in the bi parameters to find an optimal solution for the following problem as a function of θ, for 0 ≤ θ ≤ 25.

Maximize Z(θ) = 2x1 + x2,

subject to

x1 ≤ 10 + 2θ
x1 + x2 ≤ 25 − θ
x2 ≤ 10 + 2θ

and

x1 ≥ 0,   x2 ≥ 0.

Indicate graphically what this algebraic procedure is doing.

I 8.2-5. Use parametric linear programming to find an optimal solution for the following problem as a function of θ, for 0 ≤ θ ≤ 30.

Maximize Z(θ) = 5x1 + 6x2 + 4x3 + 7x4,

subject to

3x1 − 2x2 + x3 + 3x4 ≤ 135 − 2θ
2x1 + 4x2 − x3 + 2x4 ≤ 78 − θ
x1 + 2x2 + x3 + 2x4 ≤ 30 + θ

and

xj ≥ 0,   for j = 1, 2, 3, 4.

Then identify the value of θ that gives the largest optimal value of Z(θ).

8.2-6. Consider Prob. 7.2-3. Use parametric linear programming to find an optimal solution as a function of θ for −20 ≤ θ ≤ 0. (Hint: Substitute −θ′ for θ, and then increase θ′ from zero.)

8.2-7. Consider the Z*(θ) function shown in Fig. 8.1 for parametric linear programming with systematic changes in the cj parameters.
(a) Explain why this function is piecewise linear.
(b) Show that this function must be convex.

8.2-8. Consider the Z*(θ) function shown in Fig. 8.2 for parametric linear programming with systematic changes in the bi parameters.
(a) Explain why this function is piecewise linear.
(b) Show that this function must be concave.

8.2-9. Let

Z* = max Σ (j = 1 to n) cj xj,

subject to

Σ (j = 1 to n) aij xj ≤ bi,   for i = 1, 2, . . . , m,

and

xj ≥ 0,   for j = 1, 2, . . . , n

(where the aij, bi, and cj are fixed constants), and let (y1*, y2*, . . . , ym*) be the corresponding optimal dual solution. Then let

Z** = max Σ (j = 1 to n) cj xj,

subject to

Σ (j = 1 to n) aij xj ≤ bi + ki,   for i = 1, 2, . . . , m,

and

xj ≥ 0,   for j = 1, 2, . . . , n,

where k1, k2, . . . , km are given constants. Show that

Z** ≤ Z* + Σ (i = 1 to m) ki yi*.

8.3-1. Consider the following problem.
Maximize Z = 2x1 + x2,

subject to

x1 − x2 ≤ 5
x1 ≤ 10
x2 ≤ 10

and

x1 ≥ 0,   x2 ≥ 0.

(a) Solve this problem graphically.
(b) Use the upper bound technique manually to solve this problem.
(c) Trace graphically the path taken by the upper bound technique.

I 8.3-2.* Use the upper bound technique manually to solve the following problem.

Maximize Z = x1 + 3x2 − 2x3,

subject to

x2 − 2x3 ≤ 1
2x1 + x2 + 2x3 ≤ 8

and

0 ≤ x1 ≤ 1,   0 ≤ x2 ≤ 3,   0 ≤ x3 ≤ 2.

8.3-3. Use the upper bound technique manually to solve the following problem.

Maximize Z = 2x1 + 3x2 − 2x3 + 5x4,

subject to

2x1 + 2x2 + x3 + 2x4 ≤ 5
x1 + 2x2 − 3x3 + 4x4 ≤ 5

and

0 ≤ xj ≤ 1,   for j = 1, 2, 3, 4.

8.3-4. Use the upper bound technique manually to solve the following problem.

Maximize Z = 2x1 + 5x2 + 3x3 + 4x4 + x5,

subject to

x1 + 3x2 + 2x3 + 3x4 + x5 ≤ 6
4x1 + 6x2 + 5x3 + 7x4 + x5 ≤ 15

and

0 ≤ xj ≤ 1,   for j = 1, 2, 3, 4, 5.

8.3-5. Simultaneously use the upper bound technique and the dual simplex method manually to solve the following problem.

Minimize Z = 3x1 + 4x2 + 2x3,

subject to

x1 + x2 + x3 ≥ 15
x2 + x3 ≥ 10

and

0 ≤ x1 ≤ 25,   0 ≤ x2 ≤ 5,   0 ≤ x3 ≤ 15.

C 8.4-1. Reconsider the example used to illustrate the interior-point algorithm in Sec. 8.4. Suppose that (x1, x2) = (1, 3) were used instead as the initial feasible trial solution. Perform two iterations manually, starting from this solution. Then use the automatic procedure in your IOR Tutorial to check your work.

8.4-2. Consider the following problem.

Maximize Z = 3x1 + x2,

subject to

x1 + x2 ≤ 4

and

x1 ≥ 0,   x2 ≥ 0.

(a) Solve this problem graphically. Also identify all CPF solutions.
C (b) Starting from the initial trial solution (x1, x2) = (1, 1), perform four iterations of the interior-point algorithm presented in Sec. 8.4 manually. Then use the automatic procedure in your IOR Tutorial to check your work.
(c) Draw figures corresponding to Figs. 8.4, 8.5, 8.6, 8.7, and 8.8 for this problem. In each case, identify the basic (or corner-point) feasible solutions in the current coordinate system. (Trial solutions can be used to determine projected gradients.)

8.4-3. Consider the following problem.

Maximize Z = x1 + 2x2,

subject to

x1 + x2 = 8

and

x1 ≥ 0,   x2 ≥ 0.

C (a) Near the end of Sec. 8.4, there is a discussion of what the interior-point algorithm does on this problem when starting from the initial feasible trial solution (x1, x2) = (4, 4). Verify the results presented there by performing two iterations manually. Then use the automatic procedure in your IOR Tutorial to check your work.
(b) Use these results to predict what subsequent trial solutions would be if additional iterations were to be performed.
(c) Suppose that the stopping rule adopted for the algorithm in this application is that the algorithm stops when two successive trial solutions differ by no more than 0.01 in any component. Use your predictions from part (b) to predict the final trial solution and the total number of iterations required to get there. How close would this solution be to the optimal solution (x1, x2) = (0, 8)?

8.4-4. Consider the following problem.

Maximize Z = 2x1 + 5x2 + 7x3,

subject to

x1 + 2x2 + 3x3 = 6

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0.

(a) Graph the feasible region.
(b) Find the gradient of the objective function, and then find the projected gradient onto the feasible region.
(c) Starting from the initial trial solution (x1, x2, x3) = (1, 1, 1), perform two iterations of the interior-point algorithm presented in Sec. 8.4 manually.
C (d) Starting from this same initial trial solution, use your IOR Tutorial to perform 10 iterations of this algorithm.

I 8.4-5. Consider the following problem.

Maximize Z = x1 + x2,

subject to

x1 + 2x2 ≤ 9
2x1 + x2 ≤ 9

and

x1 ≥ 0,   x2 ≥ 0.

(a) Solve the problem graphically.
(b) Find the gradient of the objective function in the original x1-x2 coordinate system. If you move from the origin in the direction of the gradient until you reach the boundary of the feasible region, where does it lead relative to the optimal solution?
C (c) Starting from the initial trial solution (x1, x2) = (1, 1), use your IOR Tutorial to perform 10 iterations of the interior-point algorithm presented in Sec. 8.4.
C (d) Repeat part (c) with α = 0.9.

C 8.4-6. Starting from the initial trial solution (x1, x2) = (2, 2), use your IOR Tutorial to apply 15 iterations of the interior-point algorithm presented in Sec. 8.4 to the Wyndor Glass Co. problem presented in Sec. 3.1. Also draw a figure like Fig. 8.8 to show the trajectory of the algorithm in the original x1-x2 coordinate system.

CHAPTER 9

The Transportation and Assignment Problems

Chapter 3 emphasized the wide applicability of linear programming. We continue to broaden our horizons in this chapter by discussing two particularly important (and related) types of linear programming problems. One type, called the transportation problem, received this name because many of its applications involve determining how to optimally transport goods. However, some of its important applications (e.g., production scheduling) actually have nothing to do with transportation.

The second type, called the assignment problem, involves such applications as assigning people to tasks. Although its applications appear to be quite different from those for the transportation problem, we shall see that the assignment problem can be viewed as a special type of transportation problem.

The next chapter will introduce additional special types of linear programming problems involving networks, including the minimum cost flow problem (Sec. 10.6). There we shall see that both the transportation and assignment problems actually are special cases of the minimum cost flow problem. We introduce the network representation of the transportation and assignment problems in this chapter.
Applications of the transportation and assignment problems tend to require a very large number of constraints and variables, so a straightforward computer application of the simplex method may require an exorbitant computational effort. Fortunately, a key characteristic of these problems is that most of the aij coefficients in the constraints are zeros, and the relatively few nonzero coefficients appear in a distinctive pattern. As a result, it has been possible to develop special streamlined algorithms that achieve dramatic computational savings by exploiting this special structure of the problem. Therefore, it is important to become sufficiently familiar with these special types of problems that you can recognize them when they arise and apply the proper computational procedure.

To describe special structures, we shall introduce the table (matrix) of constraint coefficients shown in Table 9.1, where aij is the coefficient of the jth variable in the ith functional constraint. Later, portions of the table containing only coefficients equal to zero will be indicated by leaving them blank, whereas blocks containing nonzero coefficients will be shaded.

■ TABLE 9.1 Table of constraint coefficients for linear programming

A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [ ................... ]
    [ am1  am2  ...  amn ]

After presenting a prototype example for the transportation problem, we describe the special structure in its model and give additional examples of its applications. Section 9.2 presents the transportation simplex method, a special streamlined version of the simplex method for efficiently solving transportation problems. (You will see in Sec. 10.7 that this algorithm is related to the network simplex method, another streamlined version of the simplex method for efficiently solving any minimum cost flow problem, including both transportation and assignment problems.) Section 9.3 focuses on the assignment problem.
Section 9.4 then presents a specialized algorithm, called the Hungarian algorithm, for solving only assignment problems very efficiently.

The book's website also provides a supplement to this chapter. It is a complete case study (including the analysis) that illustrates how a corporate decision regarding where to locate a new facility (an oil refinery in this case) may require solving many transportation problems. (One of the cases for this chapter asks you to continue the analysis for an extension of this case study.)

■ 9.1 THE TRANSPORTATION PROBLEM

Prototype Example

One of the main products of the P & T COMPANY is canned peas. The peas are prepared at three canneries (near Bellingham, Washington; Eugene, Oregon; and Albert Lea, Minnesota) and then shipped by truck to four distributing warehouses in the western United States (Sacramento, California; Salt Lake City, Utah; Rapid City, South Dakota; and Albuquerque, New Mexico), as shown in Fig. 9.1. Because the shipping costs are a major expense, management is initiating a study to reduce them as much as possible. For the upcoming season, an estimate has been made of the output from each cannery, and each warehouse has been allocated a certain amount from the total supply of peas. This information (in units of truckloads), along with the shipping cost per truckload for each cannery-warehouse combination, is given in Table 9.2. Thus, there are a total of 300 truckloads to be shipped. The problem now is to determine which plan for assigning these shipments to the various cannery-warehouse combinations would minimize the total shipping cost.

By ignoring the geographical layout of the canneries and warehouses, we can provide a network representation of this problem in a simple way by lining up all the canneries in one column on the left and all the warehouses in one column on the right. This representation is shown in Fig. 9.2.
The arrows show the possible routes for the truckloads, where the number next to each arrow is the shipping cost per truckload for that route. A square bracket next to each location gives the number of truckloads to be shipped out of that location (so that the allocation into each warehouse is given as a negative number). The problem depicted in Fig. 9.2 is actually a linear programming problem of the transportation problem type. To formulate the model, let Z denote total shipping cost, and let xij (i = 1, 2, 3; j = 1, 2, 3, 4) be the number of truckloads to be shipped from cannery

An Application Vignette

Procter & Gamble (P&G) is the world's largest and most profitable consumer products company. It makes and markets hundreds of brands of consumer goods worldwide and had over $83 billion in sales in 2012. Fortune magazine ranked the company at 5th place in its "World's Most Admired Companies" list in 2011. The company has grown continuously over its long history tracing back to the 1830s. To maintain and accelerate that growth, a major OR study was undertaken to strengthen P&G's global effectiveness.

Prior to the study, the company's supply chain consisted of hundreds of suppliers, over 50 product categories, over 60 plants, 15 distribution centers, and over 1,000 customer zones. However, as the company moved toward global brands, management realized that it needed to consolidate plants to reduce manufacturing expenses, improve speed to market, and reduce capital investment. Therefore, the study focused on redesigning the company's production and distribution system for its North American operations. The result was a reduction in the number of North American plants by almost 20 percent, saving over $200 million in pretax costs per year.

A major part of the study revolved around formulating and solving transportation problems for individual product categories.
For each option regarding the plants to keep open, and so forth, solving the corresponding transportation problem for a product category showed what the distribution cost would be for shipping the product category from those plants to the distribution centers and customer zones.

Source: J. D. Camm, T. E. Chorman, F. A. Dill, J. R. Evans, D. J. Sweeney, and G. W. Wegryn: "Blending OR/MS, Judgment, and GIS: Restructuring P & G's Supply Chain," Interfaces, 27(1): 128–142, Jan.–Feb. 1997. (A link to this article is provided on our website, www.mhhe.com/hillier.)

■ FIGURE 9.1 Location of canneries and warehouses for the P & T Co. problem. [Shows Cannery 1 (Bellingham), Cannery 2 (Eugene), and Cannery 3 (Albert Lea), along with Warehouse 1 (Sacramento), Warehouse 2 (Salt Lake City), Warehouse 3 (Rapid City), and Warehouse 4 (Albuquerque).]

■ TABLE 9.2 Shipping data for P & T Co.

                    Shipping Cost ($) per Truckload
                             Warehouse
                   1        2        3        4      Output
Cannery 1         464      513      654      867       75
Cannery 2         352      416      690      791      125
Cannery 3         995      682      388      685      100
Allocation         80       65       70       85

i to warehouse j. Thus, the objective is to choose the values of these 12 decision variables (the xij) so as to

Minimize Z = 464x11 + 513x12 + 654x13 + 867x14 + 352x21 + 416x22 + 690x23 + 791x24 + 995x31 + 682x32 + 388x33 + 685x34,

subject to the constraints

x11 + x12 + x13 + x14                                               =  75
                      x21 + x22 + x23 + x24                         = 125
                                            x31 + x32 + x33 + x34   = 100
x11                 + x21                 + x31                     =  80
      x12                 + x22                 + x32               =  65
            x13                 + x23                 + x33         =  70
                  x14                 + x24                 + x34   =  85

and

xij ≥ 0   (i = 1, 2, 3; j = 1, 2, 3, 4).

■ FIGURE 9.2 Network representation of the P & T Co. problem.
■ TABLE 9.3 Constraint coefficients for P & T Co.

                        Coefficient of:
              x11  x12  x13  x14  x21  x22  x23  x24  x31  x32  x33  x34
Cannery 1      1    1    1    1
Cannery 2                          1    1    1    1
Cannery 3                                              1    1    1    1
Warehouse 1    1                   1                   1
Warehouse 2         1                   1                   1
Warehouse 3              1                   1                   1
Warehouse 4                   1                   1                   1

Table 9.3 shows the constraint coefficients. As you will see later in this section, it is the special structure in the pattern of these coefficients that distinguishes this problem as a transportation problem, not its context. However, we first will describe the various other characteristics of the transportation problem model.

The Transportation Problem Model

To describe the general model for the transportation problem, we need to use terms that are considerably less specific than those for the components of the prototype example. In particular, the general transportation problem is concerned (literally or figuratively) with distributing any commodity from any group of supply centers, called sources, to any group of receiving centers, called destinations, in such a way as to minimize the total distribution cost. The correspondence in terminology between the prototype example and the general problem is summarized in Table 9.4. As indicated by the fourth and fifth rows of the table, each source has a certain supply of units to distribute to the destinations, and each destination has a certain demand for units to be received from the sources. The model for a transportation problem makes the following assumption about these supplies and demands:

The requirements assumption: Each source has a fixed supply of units, where this entire supply must be distributed to the destinations. (We let si denote the number of units being supplied by source i, for i = 1, 2, . . . , m.)
Similarly, each destination has a fixed demand for units, where this entire demand must be received from the sources. (We let dj denote the number of units being received by destination j, for j = 1, 2, . . . , n.)

This assumption holds for the P & T Co. problem since each cannery (source) has a fixed output and each warehouse (destination) has a fixed allocation.

■ TABLE 9.4 Terminology for the transportation problem

Prototype Example                                General Problem
Truckloads of canned peas                        Units of a commodity
Three canneries                                  m sources
Four warehouses                                  n destinations
Output from cannery i                            Supply si from source i
Allocation to warehouse j                        Demand dj at destination j
Shipping cost per truckload from                 Cost cij per unit distributed from
  cannery i to warehouse j                         source i to destination j

This assumption that there is no leeway in the amounts to be sent or received means that there needs to be a balance between the total supply from all sources and the total demand at all destinations.

The feasible solutions property: A transportation problem will have feasible solutions if and only if

    s1 + s2 + . . . + sm = d1 + d2 + . . . + dn.

Fortunately, these sums are equal for the P & T Co. problem since Table 9.2 indicates that the supplies (outputs) sum to 300 truckloads and so do the demands (allocations).

In some real problems, the supplies actually represent maximum amounts (rather than fixed amounts) to be distributed. Similarly, in other cases, the demands represent maximum amounts (rather than fixed amounts) to be received. Such problems do not quite fit the model for a transportation problem because they violate the requirements assumption. However, it is possible to reformulate the problem so that it then fits this model by introducing a dummy destination or a dummy source to take up the slack between the actual amounts and maximum amounts being distributed. We will illustrate how this is done with two examples at the end of this section.
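This dummy-source/dummy-destination device is mechanical enough to automate. The sketch below is our own illustration (the function name `balance` is not from the book); it appends a zero-cost dummy row or column so that total supply equals total demand:

```python
def balance(supply, demand, cost):
    """Append a dummy source or destination (with zero unit costs) so that
    total supply equals total demand, as the requirements assumption needs."""
    gap = sum(supply) - sum(demand)
    # Work on copies so the caller's data are not modified.
    supply, demand = list(supply), list(demand)
    cost = [list(row) for row in cost]
    if gap > 0:            # excess supply: add a dummy destination
        demand.append(gap)
        for row in cost:
            row.append(0)
    elif gap < 0:          # excess demand: add a dummy source
        supply.append(-gap)
        cost.append([0] * len(demand))
    return supply, demand, cost
```

For instance, supplies of (25, 35, 30, 10) against demands of (10, 15, 25, 20) gain a dummy destination with demand 30, exactly as will happen in the first example at the end of this section.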
The last row of Table 9.4 refers to a cost per unit distributed. This reference to a unit cost implies the following basic assumption for any transportation problem:

The cost assumption: The cost of distributing units from any particular source to any particular destination is directly proportional to the number of units distributed. Therefore, this cost is just the unit cost of distribution times the number of units distributed. (We let cij denote this unit cost for source i and destination j.)

This assumption holds for the P & T Co. problem since the cost of shipping peas from any cannery to any warehouse is directly proportional to the number of truckloads being shipped.

The only data needed for a transportation problem model are the supplies, demands, and unit costs. These are the parameters of the model. All these parameters can be summarized conveniently in a single parameter table as shown in Table 9.5.

The model: Any problem (whether involving transportation or not) fits the model for a transportation problem if it can be described completely in terms of a parameter table like Table 9.5 and it satisfies both the requirements assumption and the cost assumption. The objective is to minimize the total cost of distributing the units. All the parameters of the model are included in this parameter table.

■ TABLE 9.5 Parameter table for the transportation problem

                  Cost per Unit Distributed
                        Destination
              1      2     . . .     n      Supply
Source 1     c11    c12    . . .    c1n       s1
Source 2     c21    c22    . . .    c2n       s2
   ⋮          ⋮      ⋮               ⋮         ⋮
Source m     cm1    cm2    . . .    cmn       sm
Demand        d1     d2    . . .     dn

Therefore, formulating a problem as a transportation problem only requires filling out a parameter table in the format of Table 9.5. (The parameter table for the P & T Co. problem is shown in Table 9.2.) Alternatively, the same information can be provided by using the network representation of the problem shown in Fig. 9.3 (as was done in Fig. 9.2 for the P & T Co. problem).
Some problems that have nothing to do with transportation also can be formulated as a transportation problem in either of these two ways. The Solved Examples section of the book’s website includes another example of such a problem.

Since a transportation problem can be formulated simply by either filling out a parameter table or drawing its network representation, it is not necessary to write out a formal mathematical model for the problem. However, we will go ahead and show you this model once for the general transportation problem just to emphasize that it is indeed a special type of linear programming problem. Letting Z be the total distribution cost and xij (i = 1, 2, . . . , m; j = 1, 2, . . . , n) be the number of units to be distributed from source i to destination j, the linear programming formulation of this problem is

                 m    n
    Minimize Z = Σ    Σ  cij xij,
                i=1  j=1

subject to

     n
     Σ  xij = si    for i = 1, 2, . . . , m,
    j=1

     m
     Σ  xij = dj    for j = 1, 2, . . . , n,
    i=1

and

    xij ≥ 0,    for all i and j.

■ FIGURE 9.3 Network representation of the transportation problem: each source Si (with supply si) is linked to each destination Dj (with demand dj) by an arc with unit cost cij.

Note that the resulting table of constraint coefficients has the special structure shown in Table 9.6. Any linear programming problem that fits this special formulation is of the transportation problem type, regardless of its physical context. In fact, there have been numerous applications unrelated to transportation that have been fitted to this special structure, as we shall illustrate with an example later in this section. (The assignment problem described in Sec. 9.3 is an additional example.) This is one of the reasons why the transportation problem is considered such an important special type of linear programming problem.
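The special structure just mentioned is easy to exhibit programmatically. This sketch (our illustration, not from the book) builds the (m + n) × mn constraint matrix of Table 9.6 with NumPy and shows that every column contains exactly two 1s, one in a supply row and one in a demand row:

```python
import numpy as np

def transportation_matrix(m, n):
    """Constraint coefficient matrix for m sources and n destinations,
    with variables ordered x11, ..., x1n, x21, ..., x2n, ..., xmn."""
    supply_rows = np.kron(np.eye(m), np.ones((1, n)))  # row i: 1s for xi1..xin
    demand_rows = np.kron(np.ones((1, m)), np.eye(n))  # row j: 1s for x1j..xmj
    return np.vstack([supply_rows, demand_rows])

A = transportation_matrix(3, 4)  # the P & T Co. dimensions: 7 rows, 12 columns
```

Each variable xij appears in exactly one supply constraint and one demand constraint, which is precisely the pattern of Table 9.6.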
For many applications, the supply and demand quantities in the model (the si and dj) have integer values, and implementation will require that the distribution quantities (the xij) also have integer values. Fortunately, because of the special structure shown in Table 9.6, all such problems have the following property: Integer solutions property: For transportation problems where every si and dj have an integer value, all the basic variables (allocations) in every basic feasible (BF) solution (including an optimal one) also have integer values. The solution procedure described in Sec. 9.2 deals only with BF solutions, so it automatically will obtain an integer optimal solution for this case. (You will be able to see why this solution procedure actually gives a proof of the integer solutions property after you learn the procedure; Prob. 9.2-20 guides you through the reasoning involved.) Therefore, it is unnecessary to add a constraint to the model that the xij must have integer values. As with other linear programming problems, the usual software options (Excel with either the standard Solver or ASPE, LINGO/LINDO, MPL/Solvers) are available to you for setting up and solving transportation problems (and assignment problems), as demonstrated in the files for this chapter in your OR Courseware. However, because the Excel approach now is somewhat different from what you have seen previously, we next describe this approach. Using Excel to Formulate and Solve Transportation Problems As described in Sec. 3.5, the process of using a spreadsheet to formulate a linear programming model for a problem begins by developing answers to three questions. What are the decisions to be made? What are the constraints on these decisions? What is the overall measure of performance for these decisions? 
Since a transportation problem is a special type of linear programming problem, addressing these questions also is a suitable starting point for formulating this kind of problem on a spreadsheet. The design of the spreadsheet then revolves around laying out this information and the associated data in a logical way.

■ TABLE 9.6 Constraint coefficients for the transportation problem

                         Coefficient of:
            x11 x12 . . . x1n   x21 x22 . . . x2n   . . .   xm1 xm2 . . . xmn
Supply 1     1   1  . . .  1
Supply 2                         1   1  . . .  1
  ⋮                                                  . . .
Supply m                                                     1   1  . . .  1
Demand 1     1                   1                           1
Demand 2         1                   1                           1
  ⋮                                                  . . .
Demand n                   1                   1                           1

To illustrate, consider the P & T Co. problem again. The decisions to be made are the number of truckloads of peas to ship from each cannery to each warehouse. The constraints on these decisions are that the total amount shipped from each cannery must equal its output (the supply) and the total amount received at each warehouse must equal its allocation (the demand). The overall measure of performance is the total shipping cost, so the objective is to minimize this quantity. This information leads to the spreadsheet model shown in Fig. 9.4. All the data provided in Table 9.2 are displayed in the following data cells: UnitCost (D5:G7), Supply (J12:J14), and Demand (D17:G17). The decisions on shipping quantities are given by the changing cells, ShipmentQuantity (D12:G14). The output cells are TotalShipped (H12:H14) and TotalReceived (D15:G15), where the SUM functions entered into these cells are shown near the bottom of Fig. 9.4. The constraints, TotalShipped (H12:H14) = Supply (J12:J14) and TotalReceived (D15:G15) = Demand (D17:G17), have been specified on the spreadsheet and entered into Solver. The objective cell is TotalCost (J17), where its SUMPRODUCT function is shown in the lower right-hand corner of Fig. 9.4. The Solver parameters box specifies that the objective is to minimize this objective cell.
Choosing the Make Variables Nonnegative option specifies that all shipment quantities must be nonnegative. The Simplex LP solving method is chosen because this is a linear programming problem. To begin the process of solving the problem, any value (such as 0) can be entered in each of the changing cells. After clicking on the Solve button, Solver will use the simplex method to solve the transportation problem and determine the best value for each of the decision variables. This optimal solution is shown in ShipmentQuantity (D12:G14) in Fig. 9.4, along with the resulting value $152,535 in the objective cell TotalCost (J17).

■ FIGURE 9.4 A spreadsheet formulation of the P & T Co. problem as a transportation problem, including the objective cell TotalCost (J17) and the other output cells TotalShipped (H12:H14) and TotalReceived (D15:G15), as well as the specifications needed to set up the model. The changing cells ShipmentQuantity (D12:G14) show the optimal shipping plan obtained by Solver:

                 Sacramento  Salt Lake City  Rapid City  Albuquerque  Total Shipped  Supply
Bellingham            0            20             0           55            75          75
Eugene               80            45             0            0           125         125
Albert Lea            0             0            70           30           100         100
Total Received       80            65            70           85
Demand               80            65            70           85       Total Cost: $152,535

Solver Parameters: Set Objective Cell: TotalCost; To: Min; By Changing Variable Cells: ShipmentQuantity; Subject to the Constraints: TotalReceived = Demand, TotalShipped = Supply; Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP.
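Outside a spreadsheet, the same model can be handed to any linear programming library. As a sketch (our illustration; the book itself uses Excel), SciPy’s `linprog` with the HiGHS solver reproduces the $152,535 optimum:

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[464, 513, 654, 867],    # Bellingham
                 [352, 416, 690, 791],    # Eugene
                 [995, 682, 388, 685]])   # Albert Lea
supply = [75, 125, 100]
demand = [80, 65, 70, 85]
m, n = cost.shape

# One equality constraint per cannery (row sums) and per warehouse (column sums).
A_eq = np.vstack([np.kron(np.eye(m), np.ones((1, n))),
                  np.kron(np.ones((1, m)), np.eye(n))])
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=supply + demand, method="highs")
shipments = res.x.reshape(m, n)   # res.fun is 152535.0, matching Fig. 9.4
```

Because every supply and demand is an integer, the vertex solution returned by the solver is automatically integer-valued, in line with the integer solutions property discussed earlier.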
Note that Solver simply uses the general simplex method to solve a transportation problem rather than a streamlined version that is specially designed for solving transportation problems very efficiently, such as the transportation simplex method presented in the next section. Therefore, a software package that includes such a streamlined version should solve a large transportation problem much faster than Solver.

We mentioned earlier that some problems do not quite fit the model for a transportation problem because they violate the requirements assumption, but that it is possible to reformulate such a problem to fit this model by introducing a dummy destination or a dummy source. When using Solver, it is not necessary to do this reformulation since the simplex method can solve the original model where the supply constraints are in ≤ form or the demand constraints are in ≥ form. (The Excel files for the next two examples in your OR Courseware illustrate spreadsheet formulations that retain either the supply constraints or the demand constraints in their original inequality form.) However, the larger the problem, the more worthwhile it becomes to do the reformulation and use the transportation simplex method (or equivalent) instead with another software package. The next two examples illustrate how to do this kind of reformulation.

An Example with a Dummy Destination

The NORTHERN AIRPLANE COMPANY builds commercial airplanes for various airline companies around the world. The last stage in the production process is to produce the jet engines and then to install them (a very fast operation) in the completed airplane frame. The company has been working under some contracts to deliver a considerable number of airplanes in the near future, and the production of the jet engines for these planes must now be scheduled for the next four months.
To meet the contracted dates for delivery, the company must supply engines for installation in the quantities indicated in the second column of Table 9.7. Thus, the cumulative number of engines produced by the end of months 1, 2, 3, and 4 must be at least 10, 25, 50, and 70, respectively. The facilities that will be available for producing the engines vary according to other production, maintenance, and renovation work scheduled during this period. The resulting monthly differences in the maximum number that can be produced and the cost (in millions of dollars) of producing each one are given in the third and fourth columns of Table 9.7. Because of the variations in production costs, it may well be worthwhile to produce some of the engines a month or more before they are scheduled for installation, and this possibility is being considered. The drawback is that such engines must be stored until the scheduled installation (the airplane frames will not be ready early) at a storage cost of $15,000 per month (including interest on expended capital) for each engine,1 as shown in the rightmost column of Table 9.7.

The production manager wants a schedule developed for the number of engines to be produced in each of the four months so that the total of the production and storage costs will be minimized.

1 For modeling purposes, assume that this storage cost is incurred at the end of the month for just those engines that are being held over into the next month. Thus, engines that are produced in a given month for installation in the same month are assumed to incur no storage cost.

Formulation. One way to formulate a mathematical model for this problem is to let xj be the number of jet engines to be produced in month j, for j = 1, 2, 3, 4. By using only

■ TABLE 9.7 Production scheduling data for Northern Airplane Co.
        Scheduled       Maximum      Unit Cost* of   Unit Cost* of
Month   Installations   Production   Production      Storage
  1          10             25          1.08            0.015
  2          15             35          1.11            0.015
  3          25             30          1.10            0.015
  4          20             10          1.13

*Cost is expressed in millions of dollars.

these four decision variables, the problem can be formulated as a linear programming problem that does not fit the transportation problem type. (See Prob. 9.2-18.) On the other hand, by adopting a different viewpoint, we can instead formulate the problem as a transportation problem that requires much less effort to solve. This viewpoint will describe the problem in terms of sources and destinations and then identify the corresponding xij, cij, si, and dj. (See if you can do this before reading further.) Because the units being distributed are jet engines, each of which is to be scheduled for production in a particular month and then installed in a particular (perhaps different) month,

Source i = production of jet engines in month i (i = 1, 2, 3, 4)
Destination j = installation of jet engines in month j (j = 1, 2, 3, 4)
xij = number of engines produced in month i for installation in month j
cij = cost associated with each unit of xij
    = cost per unit for production and any storage, if i ≤ j
    = ?, if i > j
si = ?
dj = number of scheduled installations in month j.

The corresponding (incomplete) parameter table is given in Table 9.8. Thus, it remains to identify the missing costs and the supplies. Since it is impossible to produce engines in one month for installation in an earlier month, xij must be zero if i > j. Therefore, there is no real cost that can be associated with such xij. Nevertheless, in order to have a well-defined transportation problem to which the solution procedure of Sec. 9.2 can be applied, it is necessary to assign some value to the unidentified costs. Fortunately, we can use the Big M method introduced in Sec. 4.6 to

■ TABLE 9.8 Incomplete parameter table for Northern Airplane Co.
                 Cost per Unit Distributed
                       Destination
              1        2        3        4      Supply
Source 1    1.080    1.095    1.110    1.125      ?
Source 2      ?      1.110    1.125    1.140      ?
Source 3      ?        ?      1.100    1.115      ?
Source 4      ?        ?        ?      1.130      ?
Demand       10       15       25       20

assign this value. Thus, we assign a very large number (denoted by M for convenience) to the unidentified cost entries in Table 9.8 to force the corresponding values of xij to be zero in the final solution.

The numbers that need to be inserted into the supply column of Table 9.8 are not obvious because the “supplies,” the amounts produced in the respective months, are not fixed quantities. In fact, the objective is to solve for the most desirable values of these production quantities. Nevertheless, it is necessary to assign some fixed number to every entry in the table, including those in the supply column, to have a transportation problem. A clue is provided by the fact that although the supply constraints are not present in the usual form, these constraints do exist in the form of upper bounds on the amount that can be supplied, namely,

x11 + x12 + x13 + x14 ≤ 25,
x21 + x22 + x23 + x24 ≤ 35,
x31 + x32 + x33 + x34 ≤ 30,
x41 + x42 + x43 + x44 ≤ 10.

The only change from the standard model for the transportation problem is that these constraints are in the form of inequalities instead of equalities.

To convert these inequalities to equations in order to fit the transportation problem model, we use the familiar device of slack variables, introduced in Sec. 4.2. In this context, the slack variables are allocations to a single dummy destination that represent the unused production capacity in the respective months. This change permits the supply in the transportation problem formulation to be the total production capacity in the given month. Furthermore, because the demand for the dummy destination is the total unused capacity, this demand is

(25 + 35 + 30 + 10) − (10 + 15 + 25 + 20) = 30.
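Once the dummy destination is appended, the completed formulation can be solved numerically. The following sketch is our own illustration (the book solves it by hand in Sec. 9.2); it builds the completed parameter table of Table 9.9, with 10^6 standing in for the Big M cost, and solves it with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

M = 1e6  # stands in for the Big M cost on the impossible cells (i > j)
cost = np.array([             # columns: install in months 1-4, plus dummy 5(D)
    [1.080, 1.095, 1.110, 1.125, 0.0],
    [M,     1.110, 1.125, 1.140, 0.0],
    [M,     M,     1.100, 1.115, 0.0],
    [M,     M,     M,     1.130, 0.0]])
supply = [25, 35, 30, 10]       # monthly production capacities
demand = [10, 15, 25, 20, 30]   # scheduled installations plus unused capacity

m, n = cost.shape
A_eq = np.vstack([np.kron(np.eye(m), np.ones((1, n))),
                  np.kron(np.ones((1, m)), np.eye(n))])
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=supply + demand, method="highs")
plan = res.x.reshape(m, n)   # plan[i, j]: engines made in month i+1 for month j+1
```

The minimum total cost works out to 77.30 (in millions of dollars), and no engines are produced after their installation month (all cells with i > j stay at zero).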
With this demand included, the sum of the supplies now equals the sum of the demands, which is the condition given by the feasible solutions property for having feasible solutions. The cost entries associated with the dummy destination should be zero because there is no cost incurred by a fictional allocation. (Cost entries of M would be inappropriate for this column because we do not want to force the corresponding values of xij to be zero. In fact, these values need to sum to 30.) The resulting final parameter table is given in Table 9.9, with the dummy destination labeled as destination 5(D). By using this formulation, it is quite easy to find the optimal production schedule by the solution procedure described in Sec. 9.2. (See Prob. 9.2-10 and its answer in the back of the book.)

■ TABLE 9.9 Complete parameter table for Northern Airplane Co.

                 Cost per Unit Distributed
                       Destination
              1        2        3        4      5(D)    Supply
Source 1    1.080    1.095    1.110    1.125      0       25
Source 2      M      1.110    1.125    1.140      0       35
Source 3      M        M      1.100    1.115      0       30
Source 4      M        M        M      1.130      0       10
Demand       10       15       25       20       30

An Example with a Dummy Source

METRO WATER DISTRICT is an agency that administers water distribution in a large geographic region. The region is fairly arid, so the district must purchase and bring in water from outside the region. The sources of this imported water are the Colombo, Sacron, and Calorie rivers. The district then resells the water to users in the region. Its main customers are the water departments of the cities of Berdoo, Los Devils, San Go, and Hollyglass. It is possible to supply any of these cities with water brought in from any of the three rivers, with the exception that no provision has been made to supply Hollyglass with Calorie River water. However, because of the geographic layouts of the aqueducts and the cities in the region, the cost to the district of supplying water depends upon both the source of the water and the city being supplied.
The variable cost per acre foot of water (in tens of dollars) for each combination of river and city is given in Table 9.10. Despite these variations, the price per acre foot charged by the district is independent of the source of the water and is the same for all cities.

The management of the district is now faced with the problem of how to allocate the available water during the upcoming summer season. In units of 1 million acre feet, the amounts available from the three rivers are given in the rightmost column of Table 9.10. The district is committed to providing a certain minimum amount to meet the essential needs of each city (with the exception of San Go, which has an independent source of water), as shown in the minimum needed row of the table. The requested row indicates that Los Devils desires no more than the minimum amount, but that Berdoo would like to buy as much as 20 more, San Go would buy up to 30 more, and Hollyglass will take as much as it can get.

Management wishes to allocate all the available water from the three rivers to the four cities in such a way as to at least meet the essential needs of each city while minimizing the total cost to the district.

Formulation. Table 9.10 already is close to the proper form for a parameter table, with the rivers being the sources and the cities being the destinations. However, the one basic difficulty is that it is not clear what the demands at the destinations should be. The amount to be received at each destination (except Los Devils) actually is a decision variable, with both a lower bound and an upper bound. This upper bound is the amount requested unless the request exceeds the total supply remaining after the minimum needs of the other cities are met, in which case this remaining supply becomes the upper bound. Thus, insatiably thirsty Hollyglass has an upper bound of

(50 + 60 + 50) − (30 + 70 + 0) = 60.
■ TABLE 9.10 Water resources data for Metro Water District

             Cost (Tens of Dollars) per Acre Foot
                 Berdoo   Los Devils   San Go   Hollyglass   Supply
Colombo River      16         13         22         17         50
Sacron River       14         13         19         15         60
Calorie River      19         20         23         —          50
Minimum needed     30         70          0         10
Requested          50         70         30         ∞

(in units of 1 million acre feet)

Unfortunately, just like the other numbers in the parameter table of a transportation problem, the demand quantities must be constants, not bounded decision variables. To begin resolving this difficulty, temporarily suppose that it is not necessary to satisfy the minimum needs, so that the upper bounds are the only constraints on amounts to be allocated to the cities. In this circumstance, can the requested allocations be viewed as the demand quantities for a transportation problem formulation? After one adjustment, yes! (Do you see already what the needed adjustment is?) The situation is analogous to Northern Airplane Co.’s production scheduling problem, where there was excess supply capacity. Now there is excess demand capacity. Consequently, rather than introducing a dummy destination to “receive” the unused supply capacity, the adjustment needed here is to introduce a dummy source to “send” the unused demand capacity. The imaginary supply quantity for this dummy source would be the amount by which the sum of the demands exceeds the sum of the real supplies:

(50 + 70 + 30 + 60) − (50 + 60 + 50) = 50.

This formulation yields the parameter table shown in Table 9.11, which uses units of millions of acre feet and tens of millions of dollars. The cost entries in the dummy row are zero because there is no cost incurred by the fictional allocations from this dummy source. On the other hand, a huge unit cost of M is assigned to the Calorie River–Hollyglass spot. The reason is that Calorie River water cannot be used to supply Hollyglass, and assigning a cost of M will prevent any such allocation.
Now let us see how we can take each city’s minimum needs into account in this kind of formulation. Because San Go has no minimum need, it is all set. Similarly, the formulation for Hollyglass does not require any adjustments because its demand (60) exceeds the dummy source’s supply (50) by 10, so the amount supplied to Hollyglass from the real sources will be at least 10 in any feasible solution. Consequently, its minimum need of 10 from the rivers is guaranteed. (If this coincidence had not occurred, Hollyglass would need the same adjustments that we shall have to make for Berdoo.) Los Devils’ minimum need equals its requested allocation, so its entire demand of 70 must be filled from the real sources rather than the dummy source. This requirement calls for the Big M method! Assigning a huge unit cost of M to the allocation from the dummy source to Los Devils ensures that this allocation will be zero in an optimal solution. Finally, consider Berdoo. In contrast to Hollyglass, the dummy source has an adequate (fictional) supply to “provide” at least some of Berdoo’s minimum need in addition to its extra requested amount. Therefore, since Berdoo’s minimum need is 30, adjustments must be made to prevent the dummy source from contributing more than 20 to Berdoo’s total demand of 50. This adjustment is accomplished by splitting Berdoo into two destinations, one having a demand of 30 with a unit cost of M for any allocation from the dummy source and the other having a demand of 20 with a unit cost of zero for the dummy source allocation. This formulation gives the final parameter table shown in Table 9.12. 
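The final formulation in Table 9.12 can likewise be checked with an LP solver. This sketch is our own illustration (again with 10^6 standing in for the Big M cost); it anticipates the solution to be found in Sec. 9.2:

```python
import numpy as np
from scipy.optimize import linprog

M = 1e6  # stands in for the Big M cost on forbidden allocations
# Columns: Berdoo (min.), Berdoo (extra), Los Devils, San Go, Hollyglass
cost = np.array([
    [16, 16, 13, 22, 17],    # Colombo River
    [14, 14, 13, 19, 15],    # Sacron River
    [19, 19, 20, 23,  M],    # Calorie River (cannot serve Hollyglass)
    [ M,  0,  M,  0,  0]])   # dummy source (cannot serve the minimum needs)
supply = [50, 60, 50, 50]
demand = [30, 20, 70, 30, 60]

m, n = cost.shape
A_eq = np.vstack([np.kron(np.eye(m), np.ones((1, n))),
                  np.kron(np.ones((1, m)), np.eye(n))])
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=supply + demand, method="highs")
alloc = res.x.reshape(m, n)
```

The minimum total cost comes out to 2,460 (tens of millions of dollars), and none of the Big M cells receive an allocation, so Los Devils, Hollyglass, and Berdoo’s first 30 units are served as intended.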
■ TABLE 9.11 Parameter table without minimum needs for Metro Water District

        Cost (Tens of Millions of Dollars) per Unit Distributed
                 Berdoo   Los Devils   San Go   Hollyglass   Supply
Colombo River      16         13         22         17         50
Sacron River       14         13         19         15         60
Calorie River      19         20         23          M         50
Dummy               0          0          0          0         50
Demand             50         70         30         60

■ TABLE 9.12 Parameter table for Metro Water District

        Cost (Tens of Millions of Dollars) per Unit Distributed
                      Berdoo   Berdoo    Los
                      (min.)   (extra)  Devils   San Go   Hollyglass   Supply
                        1         2       3         4          5
Colombo River    1     16        16      13        22         17         50
Sacron River     2     14        14      13        19         15         60
Calorie River    3     19        19      20        23          M         50
Dummy          4(D)     M         0       M         0          0         50
Demand                 30        20      70        30         60

This problem will be solved in Sec. 9.2 to illustrate the solution procedure presented there.

Generalizations of the Transportation Problem

Even after the kinds of reformulations illustrated by the two preceding examples, some problems involving the distribution of units from sources to destinations fail to satisfy the model for the transportation problem. One reason may be that the distribution does not go directly from the sources to the destinations but instead passes through transfer points along the way. The Distribution Unlimited Co. example in Sec. 3.4 (see Fig. 3.13) illustrates such a problem. In this case, the sources are the two factories and the destinations are the two warehouses. However, a shipment from a particular factory to a particular warehouse may first get transferred at a distribution center, or even at the other factory or the other warehouse, before reaching its destination. The unit shipping costs differ for these different shipping lanes. Furthermore, there are upper limits on how much can be shipped through some of the shipping lanes.
Although it is not a transportation problem, this kind of problem still is a special type of linear programming problem, called the minimum cost flow problem, that will be discussed in Sec. 10.6. The network simplex method described in Sec. 10.7 provides an efficient way of solving minimum cost flow problems. A minimum cost flow problem that does not impose any upper limits on how much can be shipped through the shipping lanes is referred to as a transshipment problem. Section 23.1 on the book’s website is devoted to discussing transshipment problems.

In other cases, the distribution may go directly from sources to destinations, but other assumptions of the transportation problem may be violated. The cost assumption will be violated if the cost of distributing units from any particular source to any particular destination is a nonlinear function of the number of units distributed. The requirements assumption will be violated if either the supplies from the sources or the demands at the destinations are not fixed. For example, the final demand at a destination may not become known until after the units have arrived, and then a nonlinear cost is incurred if the amount received deviates from the final demand. If the supply at a source is not fixed, the cost of producing the amount supplied may be a nonlinear function of this amount. For example, a fixed cost may be part of the cost associated with a decision to open up a new source. Considerable research has been done to generalize the transportation problem and its solution procedure in these kinds of directions.2

2 For example, see K. Holmberg and H. Tuy: “A Production-Transportation Problem with Stochastic Demand and Concave Production Costs,” Mathematical Programming Series A, 85: 157–179, 1999.
■ 9.2 A STREAMLINED SIMPLEX METHOD FOR THE TRANSPORTATION PROBLEM

Because the transportation problem is just a special type of linear programming problem, it can be solved by applying the simplex method as described in Chap. 4. However, you will see in this section that some tremendous computational shortcuts can be taken in this method by exploiting the special structure shown in Table 9.6. We shall refer to this streamlined procedure as the transportation simplex method. As you read on, note particularly how the special structure is exploited to achieve great computational savings. This will illustrate an important OR technique—streamlining an algorithm to exploit the special structure in the problem at hand.

Setting Up the Transportation Simplex Method

To highlight the streamlining achieved by the transportation simplex method, let us first review how the general (unstreamlined) simplex method would set up a transportation problem in tabular form. After constructing the table of constraint coefficients (see Table 9.6), converting the objective function to maximization form, and using the Big M method to introduce artificial variables z1, z2, . . . , zm+n into the m + n respective equality constraints (see Sec. 4.6), typical columns of the simplex tableau would have the form shown in Table 9.13, where all entries not shown in these columns are zeros. [The one remaining adjustment to be made before the first iteration of the simplex method is to algebraically eliminate the nonzero coefficients of the initial (artificial) basic variables in row 0.]

■ TABLE 9.13 Original simplex tableau before simplex method is applied to transportation problem

                              Coefficient of:
Basic
Variable    Eq.       Z    . . .  xij  . . .   zi   . . .  zm+j  . . .   Right Side
Z           (0)       1           cij           M           M                0
 ⋮           ⋮
zi          (i)       0            1            1                           si
 ⋮           ⋮
zm+j        (m+j)     0            1                        1               dj
 ⋮           ⋮

After any subsequent iteration, row 0 then would have the form shown in Table 9.14. Because of the pattern of 0s and 1s for the coefficients in Table 9.13, by the fundamental insight presented in Sec.
5.3, ui and vj would have the following interpretation:

  ui = multiple of original row i that has been subtracted (directly or indirectly) from original row 0 by the simplex method during all iterations leading to the current simplex tableau.

  vj = multiple of original row m + j that has been subtracted (directly or indirectly) from original row 0 by the simplex method during all iterations leading to the current simplex tableau.

■ TABLE 9.13 Original simplex tableau before simplex method is applied to transportation problem

                              Coefficient of:
  Basic Variable   Eq.      Z     xij    zi    zm+j    Right Side
  Z                (0)      1     cij    M     M          0
  ⯗                 ⯗
  zi               (i)      0     1      1                si
  ⯗                 ⯗
  zm+j             (m + j)  0     1            1          dj
  ⯗                 ⯗

(All entries not shown in these columns are zeros.)

CHAPTER 9 THE TRANSPORTATION AND ASSIGNMENT PROBLEMS

■ TABLE 9.14 Row 0 of simplex tableau when simplex method is applied to transportation problem

  Basic Variable   Eq.   Z   . . . xij . . .      . . . zi . . .   . . . zm+j . . .   Right Side
  Z                (0)   1   cij − ui − vj        M − ui           M − vj             −Σ(i=1..m) si ui − Σ(j=1..n) dj vj

Using the duality theory introduced in Chap. 6, another property of the ui and vj is that they are the dual variables.3 If xij is a nonbasic variable, cij − ui − vj is interpreted as the rate at which Z will change as xij is increased.

The Needed Information. To lay the groundwork for simplifying this setup, recall what information is needed by the simplex method. In the initialization, an initial BF solution must be obtained, which is done artificially by introducing artificial variables as the initial basic variables and setting them equal to si and dj. The optimality test and step 1 of an iteration (selecting an entering basic variable) require knowing the current row 0, which is obtained by subtracting a certain multiple of another row from the preceding row 0. Step 2 (determining the leaving basic variable) must identify the basic variable that reaches zero first as the entering basic variable is increased, which is done by comparing the current coefficients of the entering basic variable and the corresponding right side.
Step 3 must determine the new BF solution, which is found by subtracting certain multiples of one row from the other rows in the current simplex tableau.

Greatly Streamlined Ways of Obtaining This Information. Now, how does the transportation simplex method obtain the same information in much simpler ways? This story will unfold fully in the coming pages, but here are some preliminary answers.

First, no artificial variables are needed, because a simple and convenient procedure (with several variations) is available for constructing an initial BF solution.

Second, the current row 0 can be obtained without using any other row simply by calculating the current values of ui and vj directly. Since each basic variable must have a coefficient of zero in row 0, the current ui and vj are obtained by solving the set of equations cij − ui − vj = 0 for each i and j such that xij is a basic variable. (We will illustrate this straightforward procedure later when discussing the optimality test for the transportation simplex method.) The special structure in Table 9.13 makes this convenient way of obtaining row 0 possible by yielding cij − ui − vj as the coefficient of xij in Table 9.14.

Third, the leaving basic variable can be identified in a simple way without (explicitly) using the coefficients of the entering basic variable. The reason is that the special structure of the problem makes it easy to see how the solution must change as the entering basic variable is increased. As a result, the new BF solution also can be identified immediately without any algebraic manipulations on the rows of the simplex tableau. (You will see the details when we describe how the transportation simplex method performs an iteration.)

The grand conclusion is that almost the entire simplex tableau (and the work of maintaining it) can be eliminated!
Besides the input data (the cij, si, and dj values), the only information needed by the transportation simplex method is the current BF solution,4 the current values of ui and vj, and the resulting values of cij − ui − vj for nonbasic variables xij.

3 It would be easier to recognize these variables as dual variables by relabeling all these variables as yi and then changing all the signs in row 0 of Table 9.14 by converting the objective function back to its original minimization form.

When you solve a problem by hand, it is convenient to record this information for each iteration in a transportation simplex tableau, such as shown in Table 9.15. (Note carefully that the values of xij and cij − ui − vj are distinguished in these tableaux by circling the former but not the latter.)

The Resulting Great Improvement in Efficiency. You can gain a fuller appreciation for the great difference in efficiency and convenience between the simplex and the transportation simplex methods by applying both to the same small problem (see Prob. 9.2-17). However, the difference becomes even more pronounced for large problems that must be solved on a computer. This pronounced difference is suggested somewhat by comparing the sizes of the simplex and the transportation simplex tableaux. Thus, for a transportation problem having m sources and n destinations, the simplex tableau would have m + n + 1 rows and (m + 1)(n + 1) columns (excluding those to the left of the xij columns), and the transportation simplex tableau would have m rows and n columns (excluding the two extra informational rows and columns). Now try plugging in various values for m and n (for example, m = 10 and n = 100 would be a rather typical medium-size transportation problem), and note how the ratio of the number of cells in the simplex tableau to the number in the transportation simplex tableau increases as m and n increase.
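To make the comparison concrete, here is a small sketch (using the tableau dimensions just stated) that computes the ratio of cell counts for a few problem sizes:

```python
# Compare the number of cells in the full simplex tableau, which has
# (m + n + 1) rows and (m + 1)(n + 1) columns, with the m x n cells
# of the transportation simplex tableau.
for m, n in [(3, 4), (10, 100), (100, 1000)]:
    simplex_cells = (m + n + 1) * (m + 1) * (n + 1)
    transport_cells = m * n
    print(m, n, simplex_cells, transport_cells,
          round(simplex_cells / transport_cells, 1))
```

For the medium-size case m = 10 and n = 100, the simplex tableau has 111 × 1,111 = 123,321 cells versus only 1,000 cells in the transportation simplex tableau, a ratio of more than 120, and the ratio keeps growing with m and n.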
Initialization

Recall that the objective of the initialization is to obtain an initial BF solution. Because all the functional constraints in the transportation problem are equality constraints, the simplex method would obtain this solution by introducing artificial variables and using them as the initial basic variables, as described in Sec. 4.6. The resulting basic solution actually is feasible only for a revised version of the problem, so a number of iterations are needed to drive these artificial variables to zero in order to reach the real BF solutions. The transportation simplex method bypasses all this by instead using a simpler procedure to directly construct a real BF solution on a transportation simplex tableau.

■ TABLE 9.15 Format of a transportation simplex tableau

                           Destination
                  1     2    . . .    n      Supply   ui
  Source   1     c11   c12   . . .   c1n       s1
           2     c21   c22   . . .   c2n       s2
           ⯗                                   ⯗
           m     cm1   cm2   . . .   cmn       sm
  Demand         d1    d2    . . .   dn              Z =
  vj

  Additional information to be added to each cell:
    If xij is a basic variable:      cij with the allocation xij circled
    If xij is a nonbasic variable:   cij with the value of cij − ui − vj

4 Since nonbasic variables are automatically zero, the current BF solution is fully identified by recording just the values of the basic variables. We shall use this convention from now on.

Before outlining this procedure, we need to point out that the number of basic variables in any basic solution of a transportation problem is one fewer than you might expect. Ordinarily, there is one basic variable for each functional constraint in a linear programming problem. For transportation problems with m sources and n destinations, the number of functional constraints is m + n. However,

  Number of basic variables = m + n − 1.
The reason is that the functional constraints are equality constraints, and this set of m + n equations has one extra (or redundant) equation that can be deleted without changing the feasible region; i.e., any one of the constraints is automatically satisfied whenever the other m + n − 1 constraints are satisfied. (This fact can be verified by showing that any supply constraint exactly equals the sum of the demand constraints minus the sum of the other supply constraints, and that any demand equation also can be reproduced by summing the supply equations and subtracting the other demand equations. See Prob. 9.2-19.) Therefore, any BF solution appears on a transportation simplex tableau with exactly m + n − 1 circled nonnegative allocations, where the sum of the allocations for each row or column equals its supply or demand.5

The procedure for constructing an initial BF solution selects the m + n − 1 basic variables one at a time. After each selection, a value that will satisfy one additional constraint (thereby eliminating that constraint’s row or column from further consideration for providing allocations) is assigned to that variable. Thus, after m + n − 1 selections, an entire basic solution has been constructed in such a way as to satisfy all the constraints. A number of different criteria have been proposed for selecting the basic variables. We present and illustrate three of these criteria here, after outlining the general procedure.

General Procedure6 for Constructing an Initial BF Solution. To begin, all source rows and destination columns of the transportation simplex tableau are initially under consideration for providing a basic variable (allocation).

1. From the rows and columns still under consideration, select the next basic variable (allocation) according to some criterion.
2. Make that allocation large enough to exactly use up the remaining supply in its row or the remaining demand in its column (whichever is smaller).
3.
Eliminate that row or column (whichever had the smaller remaining supply or demand) from further consideration. (If the row and column have the same remaining supply and demand, then arbitrarily select the row as the one to be eliminated. The column will be used later to provide a degenerate basic variable, i.e., a circled allocation of zero.)
4. If only one row or only one column remains under consideration, then the procedure is completed by selecting every remaining variable (i.e., those variables that were neither previously selected to be basic nor eliminated from consideration by eliminating their row or column) associated with that row or column to be basic with the only feasible allocation. Otherwise, return to step 1.

5 However, note that any feasible solution with m + n − 1 nonzero variables is not necessarily a basic solution because it might be the weighted average of two or more degenerate BF solutions (i.e., BF solutions having some basic variables equal to zero). We need not be concerned about mislabeling such solutions as being basic, however, because the transportation simplex method constructs only legitimate BF solutions.

6 In Sec. 4.1 we pointed out that the simplex method is an example of the algorithms (systematic solution procedures) so prevalent in OR work. Note that this procedure also is an algorithm, where each successive execution of the (four) steps constitutes an iteration.

Alternative Criteria for Step 1

1. Northwest corner rule: Begin by selecting x11 (that is, start in the northwest corner of the transportation simplex tableau). Thereafter, if xij was the last basic variable selected, then next select xi,j+1 (that is, move one column to the right) if source i has any supply remaining. Otherwise, next select xi+1,j (that is, move one row down).

Example.
To make this description more concrete, we now illustrate the general procedure on the Metro Water District problem (see Table 9.12) with the northwest corner rule being used in step 1. Because m = 4 and n = 5 in this case, the procedure would find an initial BF solution having m + n − 1 = 8 basic variables.

As shown in Table 9.16, the first allocation is x11 = 30, which exactly uses up the demand in column 1 (and eliminates this column from further consideration). This first iteration leaves a supply of 20 remaining in row 1, so next select x1,1+1 = x12 to be a basic variable. Because this supply is no larger than the demand of 20 in column 2, all of it is allocated, x12 = 20, and this row is eliminated from further consideration. (Row 1 is chosen for elimination rather than column 2 because of the parenthetical instruction in step 3.) Therefore, select x1+1,2 = x22 next. Because the remaining demand of 0 in column 2 is less than the supply of 60 in row 2, allocate x22 = 0 and eliminate column 2.

Continuing in this manner, we eventually obtain the entire initial BF solution shown in Table 9.16, where the circled numbers are the values of the basic variables (x11 = 30, . . . , x45 = 50) and all the other variables (x13, etc.) are nonbasic variables equal to zero. Arrows have been added to show the order in which the basic variables (allocations) were selected. The value of Z for this solution is

  Z = 16(30) + 16(20) + . . . + 0(50) = 2,470 + 10M.

2. Vogel’s approximation method: For each row and column remaining under consideration, calculate its difference, which is defined as the arithmetic difference between the smallest and next-to-the-smallest unit cost cij still remaining in that row or column.
(If two unit costs tie for being the smallest remaining in a row or column, then the difference is 0.) In that row or column having the largest difference, select the variable having the smallest remaining unit cost. (Ties for the largest difference, or for the smallest remaining unit cost, may be broken arbitrarily.)

■ TABLE 9.16 Initial BF solution from the Northwest Corner Rule

                            Destination
                  1     2     3     4     5    Supply
  Source   1     16    16    13    22    17      50
                (30)  (20)
           2     14    14    13    19    15      60
                       (0)  (60)
           3     19    19    20    23     M      50
                            (10)  (30)  (10)
           4(D)   M     0     M     0     0      50
                                        (50)
  Demand         30    20    70    30    60
                                               Z = 2,470 + 10M

(Allocations, circled in the book, are shown here in parentheses; they were selected in the order x11 → x12 → x22 → x23 → x33 → x34 → x35 → x45.)

Example. Now let us apply the general procedure to the Metro Water District problem by using the criterion for Vogel’s approximation method to select the next basic variable in step 1. With this criterion, it is more convenient to work with parameter tables (rather than with complete transportation simplex tableaux), beginning with the one shown in Table 9.12. At each iteration, after the difference for every row and column remaining under consideration is calculated and displayed, the largest difference is circled and the smallest unit cost in its row or column is enclosed in a box. The resulting selection (and value) of the variable having this unit cost as the next basic variable is indicated in the lower right-hand corner of the current table, along with the row or column thereby being eliminated from further consideration (see steps 2 and 3 of the general procedure). The table for the next iteration is exactly the same except for deleting this row or column and subtracting the last allocation from its supply or demand (whichever remains). Applying this procedure to the Metro Water District problem yields the sequence of parameter tables shown in Table 9.17, where the resulting initial BF solution consists of the eight basic variables (allocations) given in the lower right-hand corner of the respective parameter tables.
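For readers who like to see the procedure in code, here is a minimal Python sketch of the general procedure with the northwest corner rule as the step 1 criterion, applied to the Metro Water District data. The function name and the representation of the basis as a dictionary are our own choices, not the book's, and the prohibitive cost M is represented by a large number:

```python
M = 10**6  # stands in for the prohibitively large unit cost M

cost = [[16, 16, 13, 22, 17],     # source 1
        [14, 14, 13, 19, 15],     # source 2
        [19, 19, 20, 23,  M],     # source 3
        [ M,  0,  M,  0,  0]]     # source 4(D)
supply = [50, 60, 50, 50]
demand = [30, 20, 70, 30, 60]

def northwest_corner(supply, demand):
    """Construct an initial BF solution with m + n - 1 basic variables."""
    m, n = len(supply), len(demand)
    s, d = supply[:], demand[:]
    i = j = 0
    basis = {}                       # (row, column) -> allocation
    while len(basis) < m + n - 1:
        q = min(s[i], d[j])          # step 2: largest feasible allocation
        basis[(i, j)] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0 and i < m - 1:  # supply used up (on a tie, step 3
            i += 1                   # eliminates the row): move down
        else:
            j += 1                   # otherwise move one column right
    return basis

basis = northwest_corner(supply, demand)
z = sum(cost[i][j] * x for (i, j), x in basis.items())
print(basis)   # the allocations of Table 9.16, rows/columns numbered from 0
print(z)       # 2470 + 10*M, matching Z = 2,470 + 10M
```

Note that the degenerate allocation x22 = 0 appears in the dictionary just like any other basic variable, which is exactly what the method needs later.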
This example illustrates two relatively subtle features of the general procedure that warrant special attention. First, note that the final iteration selects three variables (x31, x32, and x33) to become basic instead of the single selection made at the other iterations. The reason is that only one row (row 3) remains under consideration at this point. Therefore, step 4 of the general procedure says to select every remaining variable associated with row 3 to be basic.

Second, note that the allocation of x23 = 20 at the next-to-last iteration exhausts both the remaining supply in its row and the remaining demand in its column. However, rather than eliminate both the row and column from further consideration, step 3 says to eliminate only the row, saving the column to provide a degenerate basic variable later. Column 3 is, in fact, used for just this purpose at the final iteration when x33 = 0 is selected as one of the basic variables. For another illustration of this same phenomenon, see Table 9.16, where the allocation of x12 = 20 results in eliminating only row 1, so that column 2 is saved to provide a degenerate basic variable, x22 = 0, at the next iteration. Although a zero allocation might seem irrelevant, it actually plays an important role. You will see soon that the transportation simplex method must know all m + n − 1 basic variables, including those with value zero, in the current BF solution.

3. Russell’s approximation method: For each source row i remaining under consideration, determine its ūi, which is the largest unit cost cij still remaining in that row. For each destination column j remaining under consideration, determine its v̄j, which is the largest unit cost cij still remaining in that column. For each variable xij not previously selected in these rows and columns, calculate Δij = cij − ūi − v̄j. Select the variable having the largest (in absolute terms) negative value of Δij. (Ties may be broken arbitrarily.)

Example.
Using the criterion for Russell’s approximation method in step 1, we again apply the general procedure to the Metro Water District problem (see Table 9.12). The results, including the sequence of basic variables (allocations), are shown in Table 9.18. At iteration 1, the largest unit cost in row 1 is ū1 = 22, the largest in column 1 is v̄1 = M, and so forth. Thus,

  Δ11 = c11 − ū1 − v̄1 = 16 − 22 − M = −6 − M.

■ TABLE 9.17 Initial BF solution from Vogel’s approximation method

(The sequence of parameter tables is summarized here; at each iteration the largest row or column difference determines the selection.)

  Iteration 1: Row differences 3, 1, 0, 0; column differences 2, 14, 0, 19, 15.
               Largest is 19 (column 4); smallest unit cost there is c44 = 0.
               Select x44 = 30. Eliminate column 4.
  Iteration 2: Row differences 3, 1, 0, 0; column differences 2, 14, 0, 15.
               Largest is 15 (column 5); smallest unit cost there is c45 = 0.
               Select x45 = 20. Eliminate row 4(D).
  Iteration 3: Row differences 3, 1, 0; column differences 2, 2, 0, 2.
               Largest is 3 (row 1); smallest unit cost there is c13 = 13.
               Select x13 = 50. Eliminate row 1.
  Iteration 4: Row differences 1, 0; column differences 5, 5, 7, M − 15.
               Largest is M − 15 (column 5); smallest unit cost there is c25 = 15.
               Select x25 = 40. Eliminate column 5.
  Iteration 5: Row differences 1, 0; column differences 5, 5, 7.
               Largest is 7 (column 3); smallest unit cost there is c23 = 13.
               Select x23 = 20. Eliminate row 2.
  Iteration 6: Only row 3 remains, so select x31 = 30, x32 = 20, x33 = 0.

  Resulting initial BF solution: Z = 2,460.

Calculating all the Δij values for i = 1, 2, 3, 4 and j = 1, 2, 3, 4, 5 shows that Δ45 = 0 − 2M has the largest negative value, so x45 = 50 is selected as the first basic variable (allocation). This allocation exactly uses up the supply in row 4, so this row is eliminated from further consideration.
Note that eliminating this row changes v̄1 and v̄3 for the next iteration. Therefore, the second iteration requires recalculating the Δij with j = 1, 3 as well as eliminating i = 4. The largest negative value now is

  Δ15 = 17 − 22 − M = −5 − M,

so x15 = 10 becomes the second basic variable (allocation), eliminating column 5 from further consideration.

The subsequent iterations proceed similarly, but you may want to test your understanding by verifying the remaining allocations given in Table 9.18. As with the other procedures in this (and other) section(s), you should find your IOR Tutorial useful for doing the calculations involved and illuminating the approach. (See the interactive procedure for finding an initial BF solution.)

Comparison of Alternative Criteria for Step 1. Now let us compare these three criteria for selecting the next basic variable. The main virtue of the northwest corner rule is that it is quick and easy. However, because it pays no attention to unit costs cij, usually the solution obtained will be far from optimal. (Note in Table 9.16 that x35 = 10 even though c35 = M.) Expending a little more effort to find a good initial BF solution might greatly reduce the number of iterations then required by the transportation simplex method to reach an optimal solution (see Probs. 9.2-7 and 9.2-9). Finding such a solution is the objective of the other two criteria.

Vogel’s approximation method has been a popular criterion for many years,7 partially because it is relatively easy to implement by hand. Because the difference represents the minimum extra unit cost incurred by failing to make an allocation to the cell having the smallest unit cost in that row or column, this criterion does take costs into account in an effective way. Russell’s approximation method provides another excellent criterion8 that is still quick to implement on a computer (but not manually).
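Vogel's criterion can be sketched in code as well. The rendering below is our own, not the book's; ties are broken by lowest index, which happens to reproduce the book's sequence of selections for this example:

```python
M = 10**6  # stands in for the prohibitively large unit cost M

cost = [[16, 16, 13, 22, 17],
        [14, 14, 13, 19, 15],
        [19, 19, 20, 23,  M],
        [ M,  0,  M,  0,  0]]
supply = [50, 60, 50, 50]
demand = [30, 20, 70, 30, 60]

def vogel(cost, supply, demand):
    m, n = len(supply), len(demand)
    s, d = supply[:], demand[:]
    rows, cols = list(range(m)), list(range(n))
    alloc = {}

    def diff(costs):
        # difference between the smallest and next-to-smallest remaining cost
        a = sorted(costs)
        return a[1] - a[0]

    while len(rows) > 1 and len(cols) > 1:
        # step 1: the row or column with the largest difference
        cand = [(diff([cost[i][j] for j in cols]), 'row', i) for i in rows]
        cand += [(diff([cost[i][j] for i in rows]), 'col', j) for j in cols]
        _, kind, k = max(cand, key=lambda t: t[0])
        if kind == 'row':
            i, j = k, min(cols, key=lambda j: cost[k][j])
        else:
            i, j = min(rows, key=lambda i: cost[i][k]), k
        q = min(s[i], d[j])             # step 2
        alloc[(i, j)] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:                   # step 3 (on a tie, eliminate the row)
            rows.remove(i)
        else:
            cols.remove(j)
    # step 4: only one row or one column remains
    if len(rows) == 1:
        for j in cols:
            alloc[(rows[0], j)] = d[j]
    else:
        for i in rows:
            alloc[(i, cols[0])] = s[i]
    return alloc

alloc = vogel(cost, supply, demand)
z = sum(cost[i][j] * x for (i, j), x in alloc.items())
print(z)   # 2460 for this example, matching Table 9.17
```

Run on the Metro Water District data, this sketch selects x44, x45, x13, x25, x23, and then the three remaining row 3 variables, just as in Table 9.17.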
Although it is unclear as to which is more effective on average, this criterion frequently does obtain a better solution than Vogel’s. (For the example, Vogel’s approximation method happened to find the optimal solution with Z = 2,460, whereas Russell’s misses slightly with Z = 2,570.) For a large problem, it may be worthwhile to apply both criteria and then use the better solution to start the iterations of the transportation simplex method.

One distinct advantage of Russell’s approximation method is that it is patterned directly after step 1 for the transportation simplex method (as you will see soon), which somewhat simplifies the overall computer code. In particular, the ūi and v̄j values have been defined in such a way that the relative values of the cij − ūi − v̄j estimate the relative values of cij − ui − vj that will be obtained when the transportation simplex method reaches an optimal solution.

■ TABLE 9.18 Initial BF solution from Russell’s approximation method

  Iteration   Largest Negative Δij     Allocation
  1           Δ45 = −2M                x45 = 50
  2           Δ15 = −5 − M             x15 = 10
  3           Δ13 = −29                x13 = 40
  4           Δ23 = −26                x23 = 30
  5           Δ21 = −24*               x21 = 30
  6           Irrelevant               x31 = 0, x32 = 20, x34 = 30

                                       Z = 2,570

  *Tie with Δ22 = −24 broken arbitrarily.

7 N. V. Reinfeld and W. R. Vogel: Mathematical Programming, Prentice-Hall, Englewood Cliffs, NJ, 1958.
8 E. J. Russell: “Extension of Dantzig’s Algorithm to Finding an Initial Near-Optimal Basis for the Transportation Problem,” Operations Research, 17: 187–191, 1969.

We now shall use the initial BF solution obtained in Table 9.18 by Russell’s approximation method to illustrate the remainder of the transportation simplex method. Thus, our initial transportation simplex tableau (before we solve for ui and vj) is shown in Table 9.19.
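Russell's criterion fits the same skeleton as the other two. The sketch below is again our own rendering, not the book's code; with ties broken by lowest index it reproduces the book's allocations, including the tie at iteration 5:

```python
M = 10**6  # stands in for the prohibitively large unit cost M

cost = [[16, 16, 13, 22, 17],
        [14, 14, 13, 19, 15],
        [19, 19, 20, 23,  M],
        [ M,  0,  M,  0,  0]]
supply = [50, 60, 50, 50]
demand = [30, 20, 70, 30, 60]

def russell(cost, supply, demand):
    m, n = len(supply), len(demand)
    s, d = supply[:], demand[:]
    rows, cols = list(range(m)), list(range(n))
    alloc = {}
    while len(rows) > 1 and len(cols) > 1:
        # u_bar[i] / v_bar[j]: largest unit cost remaining in row i / column j
        u_bar = {i: max(cost[i][j] for j in cols) for i in rows}
        v_bar = {j: max(cost[i][j] for i in rows) for j in cols}
        # select the cell with the largest (in absolute terms) negative Delta_ij
        i, j = min(((i, j) for i in rows for j in cols),
                   key=lambda c: cost[c[0]][c[1]] - u_bar[c[0]] - v_bar[c[1]])
        q = min(s[i], d[j])
        alloc[(i, j)] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:                 # on a tie, eliminate the row
            rows.remove(i)
        else:
            cols.remove(j)
    if len(rows) == 1:
        for j in cols:
            alloc[(rows[0], j)] = d[j]
    else:
        for i in rows:
            alloc[(i, cols[0])] = s[i]
    return alloc

alloc = russell(cost, supply, demand)
z = sum(cost[i][j] * x for (i, j), x in alloc.items())
print(z)   # 2570 for this example, matching Table 9.18
```

Because the row or column of an allocated cell is eliminated immediately, no cell is ever reconsidered, which implements the "not previously selected" condition in the criterion.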
The next step is to check whether this initial solution is optimal by applying the optimality test.

Optimality Test

Using the notation of Table 9.14, we can reduce the standard optimality test for the simplex method (see Sec. 4.3) to the following for the transportation problem:

Optimality test: A BF solution is optimal if and only if cij − ui − vj ≥ 0 for every (i, j) such that xij is nonbasic.9

Thus, the only work required by the optimality test is the derivation of the values of ui and vj for the current BF solution and then the calculation of these cij − ui − vj, as described below.

■ TABLE 9.19 Initial transportation simplex tableau (before we obtain cij − ui − vj) from Russell’s approximation method

                            Destination
  Iteration 0     1     2     3     4     5    Supply   ui
  Source   1     16    16    13    22    17      50
                            (40)        (10)
           2     14    14    13    19    15      60
                (30)        (30)
           3     19    19    20    23     M      50
                 (0)  (20)        (30)
           4(D)   M     0     M     0     0      50
                                        (50)
  Demand         30    20    70    30    60
  vj                                           Z = 2,570

(Allocations are shown here in parentheses.)

9 The one exception is that two or more equivalent degenerate BF solutions (i.e., identical solutions having different degenerate basic variables equal to zero) can be optimal with only some of these basic solutions satisfying the optimality test. This exception is illustrated later in the example (see the identical solutions in the last two tableaux of Table 9.23, where only the latter solution satisfies the criterion for optimality).

Since cij − ui − vj is required to be zero if xij is a basic variable, ui and vj satisfy the set of equations

  cij = ui + vj    for each (i, j) such that xij is basic.

There are m + n − 1 basic variables, and so there are m + n − 1 of these equations. Since the number of unknowns (the ui and vj) is m + n, one of these variables can be assigned a value arbitrarily without violating the equations. The choice of this one variable and its value does not affect the value of any cij − ui − vj, even when xij is nonbasic, so the only (minor) difference it makes is in the ease of solving these equations.
A convenient choice for this purpose is to select the ui that has the largest number of allocations in its row (break any tie arbitrarily) and to assign to it the value zero. Because of the simple structure of these equations, it is then very simple to solve for the remaining variables algebraically. To demonstrate, we give each equation that corresponds to a basic variable in our initial BF solution.

  x31:  19 = u3 + v1.    Set u3 = 0, so v1 = 19.
  x32:  19 = u3 + v2.    Set u3 = 0, so v2 = 19.
  x34:  23 = u3 + v4.    Set u3 = 0, so v4 = 23.
  x21:  14 = u2 + v1.    Know v1 = 19, so u2 = −5.
  x23:  13 = u2 + v3.    Know u2 = −5, so v3 = 18.
  x13:  13 = u1 + v3.    Know v3 = 18, so u1 = −5.
  x15:  17 = u1 + v5.    Know u1 = −5, so v5 = 22.
  x45:   0 = u4 + v5.    Know v5 = 22, so u4 = −22.

Setting u3 = 0 (since row 3 of Table 9.19 has the largest number of allocations—3) and moving down the equations one at a time immediately gives the derivation of values for the unknowns shown to the right of the equations. (Note that this derivation of the ui and vj values depends on which xij variables are basic variables in the current BF solution, so this derivation will need to be repeated each time a new BF solution is obtained.)

Once you get the hang of it, you probably will find it even more convenient to solve these equations without writing them down by working directly on the transportation simplex tableau. Thus, in Table 9.19 you begin by writing in the value u3 = 0 and then picking out the circled allocations (x31, x32, x34) in that row. For each one you set vj = c3j and then look for circled allocations (except in row 3) in these columns (x21). Mentally calculate u2 = c21 − v1, pick out x23, set v3 = c23 − u2, and so on until you have filled in all the values for ui and vj. (Try it.) Then calculate and fill in the value of cij − ui − vj for each nonbasic variable xij (that is, for each cell without a circled allocation), and you will have the completed initial transportation simplex tableau shown in Table 9.20.
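This bookkeeping is easy to automate. The sketch below is our own code, with the basic cells of Table 9.19 indexed from 0; it sets ui = 0 for the row with the most allocations and then propagates cij = ui + vj across the basic cells:

```python
M = 10**6  # stands in for the prohibitively large unit cost M

cost = [[16, 16, 13, 22, 17],
        [14, 14, 13, 19, 15],
        [19, 19, 20, 23,  M],
        [ M,  0,  M,  0,  0]]

# basic cells of the initial BF solution from Russell's approximation method
basis = {(2, 0): 0, (2, 1): 20, (2, 3): 30, (1, 0): 30,
         (1, 2): 30, (0, 2): 40, (0, 4): 10, (3, 4): 50}

def potentials(cost, basis, m, n):
    u, v = [None] * m, [None] * n
    counts = [0] * m
    for (i, j) in basis:
        counts[i] += 1
    u[counts.index(max(counts))] = 0   # row with most allocations gets u_i = 0
    changed = True
    while changed:                     # propagate c_ij = u_i + v_j
        changed = False
        for (i, j) in basis:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
                changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
                changed = True
    return u, v

u, v = potentials(cost, basis, 4, 5)
print(u)   # [-5, -5, 0, -22], as derived above
print(v)   # [19, 19, 18, 23, 22]

# reduced costs c_ij - u_i - v_j for two of the nonbasic cells
print(cost[1][4] - u[1] - v[4])   # -2, cell (2, 5) in the book's 1-indexed numbering
print(cost[3][3] - u[3] - v[3])   # -1, cell (4, 4)
```

The propagation loop always terminates because the m + n − 1 basic cells connect every row and column, just as the hand derivation moves from one known value to the next.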
We are now in a position to apply the optimality test by checking the values of cij − ui − vj given in Table 9.20. Because two of these values are negative, namely, c25 − u2 − v5 = −2 and c44 − u4 − v4 = −1, we conclude that the current BF solution is not optimal. Therefore, the transportation simplex method must next go to an iteration to find a better BF solution.

An Iteration

As with the full-fledged simplex method, an iteration for this streamlined version must determine an entering basic variable (step 1), a leaving basic variable (step 2), and then identify the resulting new BF solution (step 3).

■ TABLE 9.20 Completed initial transportation simplex tableau

                             Destination
  Iteration 0     1      2      3      4       5      Supply   ui
  Source   1     16     16     13     22      17        50     −5
                  2      2    (40)     4     (10)
           2     14     14     13     19      15        60     −5
                (30)     0    (30)     1      −2
           3     19     19     20     23       M        50      0
                 (0)   (20)     2    (30)   M − 22
           4(D)   M      0      M      0       0        50    −22
               M + 3     3   M + 4    −1     (50)
  Demand         30     20     70     30      60
  vj             19     19     18     23      22      Z = 2,570

(Allocations are shown in parentheses; the other number in each cell is cij − ui − vj.)

Step 1: Find the Entering Basic Variable. Since cij − ui − vj represents the rate at which the objective function will change as the nonbasic variable xij is increased, the entering basic variable must have a negative cij − ui − vj value to decrease the total cost Z. Thus, the candidates in Table 9.20 are x25 and x44. To choose between the candidates, select the one having the larger (in absolute terms) negative value of cij − ui − vj to be the entering basic variable, which is x25 in this case.

Step 2: Find the Leaving Basic Variable. Increasing the entering basic variable from zero sets off a chain reaction of compensating changes in other basic variables (allocations), in order to continue satisfying the supply and demand constraints. The first basic variable to be decreased to zero then becomes the leaving basic variable. With x25 as the entering basic variable, the chain reaction in Table 9.20 is the relatively simple one summarized in Table 9.21.
(We shall always indicate the entering basic variable by placing a boxed plus sign in the center of its cell while leaving the corresponding value of cij − ui − vj in the lower right-hand corner of this cell.)

Increasing x25 by some amount requires decreasing x15 by the same amount to restore the demand of 60 in column 5. This change then requires increasing x13 by this same amount to restore the supply of 50 in row 1. This change then requires decreasing x23 by this amount to restore the demand of 70 in column 3. This decrease in x23 successfully completes the chain reaction because it also restores the supply of 60 in row 2. (Equivalently, we could have started the chain reaction by restoring this supply in row 2 with the decrease in x23, and then the chain reaction would continue with the increase in x13 and decrease in x15.)

■ TABLE 9.21 Part of initial transportation simplex tableau showing the chain reaction caused by increasing the entering basic variable x25

                        Destination
                3         4          5       Supply
  Source  1    13        22         17         50
              (40) +      4        (10) −
          2    13        19         15         60
              (30) −      1         −2 +
  Demand       70        30         60

(A plus sign marks a recipient cell, a minus sign a donor cell.)

The net result is that cells (2, 5) and (1, 3) become recipient cells, each receiving its additional allocation from one of the donor cells, (1, 5) and (2, 3). (These cells are indicated in Table 9.21 by the plus and minus signs.) Note that cell (1, 5) had to be the donor cell for column 5 rather than cell (4, 5), because cell (4, 5) would have no recipient cell in row 4 to continue the chain reaction. [Similarly, if the chain reaction had been started in row 2 instead, cell (2, 1) could not be the donor cell for this row because the chain reaction could not then be completed successfully after necessarily choosing cell (3, 1) as the next recipient cell and either cell (3, 2) or (3, 4) as its donor cell.]
Also note that, except for the entering basic variable, all recipient cells and donor cells in the chain reaction must correspond to basic variables in the current BF solution.

Each donor cell decreases its allocation by exactly the same amount as the entering basic variable (and other recipient cells) is increased. Therefore, the donor cell that starts with the smallest allocation—cell (1, 5) in this case (since 10 < 30 in Table 9.21)—must reach a zero allocation first as the entering basic variable x25 is increased. Thus, x15 becomes the leaving basic variable.

In general, there always is just one chain reaction (in either direction) that can be completed successfully to maintain feasibility when the entering basic variable is increased from zero. This chain reaction can be identified by selecting from the cells having a basic variable: first the donor cell in the column having the entering basic variable, then the recipient cell in the row having this donor cell, then the donor cell in the column having this recipient cell, and so on until the chain reaction yields a donor cell in the row having the entering basic variable. When a column or row has more than one additional basic variable cell, it may be necessary to trace them all further to see which one must be selected to be the donor or recipient cell. (All but this one eventually will reach a dead end in a row or column having no additional basic variable cell.) After the chain reaction is identified, the donor cell having the smallest allocation automatically provides the leaving basic variable. (In the case of a tie for the donor cell having the smallest allocation, any one can be chosen arbitrarily to provide the leaving basic variable.)

Step 3: Find the New BF Solution. The new BF solution is identified simply by adding the value of the leaving basic variable (before any change) to the allocation for each recipient cell and subtracting this same amount from the allocation for each donor cell.
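The chain reaction can also be traced mechanically. The sketch below is our own depth-first search over the basic cells (0-indexed); it locates the loop for the entering variable x25 in the example and then performs the step 3 update:

```python
# basic cells of Tables 9.19/9.20 (0-indexed) and the entering cell x25 -> (1, 4)
basis = {(2, 0): 0, (2, 1): 20, (2, 3): 30, (1, 0): 30,
         (1, 2): 30, (0, 2): 40, (0, 4): 10, (3, 4): 50}
entering = (1, 4)

def find_cycle(cells, start):
    """Closed loop through `cells` alternating row moves and column moves."""
    def extend(path, along_row):
        tail = path[-1]
        for cell in cells:
            if cell == tail:
                continue
            same = cell[0] == tail[0] if along_row else cell[1] == tail[1]
            if not same:
                continue
            if cell == start and len(path) >= 4 and len(path) % 2 == 0:
                return path          # loop closed
            if cell not in path:
                found = extend(path + [cell], not along_row)
                if found:
                    return found
        return None
    return extend([start], True) or extend([start], False)

cycle = find_cycle(list(basis) + [entering], entering)
print(cycle)   # [(1, 4), (1, 2), (0, 2), (0, 4)]

# even positions around the loop are recipient cells (+), odd positions donors (-)
donors = cycle[1::2]
theta = min(basis[c] for c in donors)          # smallest donor allocation
leaving = min(donors, key=lambda c: basis[c])  # the leaving basic variable
for k, c in enumerate(cycle):
    basis[c] = basis.get(c, 0) + (theta if k % 2 == 0 else -theta)
del basis[leaving]
print(leaving, theta)   # (0, 4) 10: x15 leaves and 10 units shift around the loop
```

The loop found is x25 → x23 → x13 → x15, exactly the chain reaction of Table 9.21, and the update yields the new BF solution of Table 9.22.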
In Table 9.21 the value of the leaving basic variable x15 is 10, so the portion of the transportation simplex tableau in this table changes as shown in Table 9.22 for the new solution. (Since x15 is nonbasic in the new solution, its new allocation of zero is no longer shown in this new tableau.)

■ TABLE 9.22 Part of second transportation simplex tableau showing the changes in the BF solution [tableau not reproduced here]

We can now highlight a useful interpretation of the cij − ui − vj quantities derived during the optimality test. Because of the shift of 10 allocation units from the donor cells to the recipient cells (shown in Tables 9.21 and 9.22), the total cost changes by

    ΔZ = 10(15 − 17 + 13 − 13) = 10(−2) = 10(c25 − u2 − v5).

Thus, the effect of increasing the entering basic variable x25 from zero has been a cost change at the rate of −2 per unit increase in x25. This is precisely what the value of c25 − u2 − v5 = −2 in Table 9.20 indicates would happen. In fact, another (but less efficient) way of deriving cij − ui − vj for each nonbasic variable xij is to identify the chain reaction caused by increasing this variable from 0 to 1 and then to calculate the resulting cost change. This intuitive interpretation sometimes is useful for checking calculations during the optimality test.

9.2 A STREAMLINED SIMPLEX METHOD FOR THE TRANSPORTATION PROBLEM

Before completing the solution of the Metro Water District problem, we now summarize the rules for the transportation simplex method.

Summary of the Transportation Simplex Method

Initialization: Construct an initial BF solution by the procedure outlined earlier in this section. Go to the optimality test.

Optimality test: Derive ui and vj by selecting the row having the largest number of allocations, setting its ui = 0, and then solving the set of equations cij = ui + vj for each (i, j) such that xij is basic.
If cij − ui − vj ≥ 0 for every (i, j) such that xij is nonbasic, then the current solution is optimal, so stop. Otherwise, go to an iteration.

Iteration:
1. Determine the entering basic variable: Select the nonbasic variable xij having the largest (in absolute terms) negative value of cij − ui − vj.
2. Determine the leaving basic variable: Identify the chain reaction required to retain feasibility when the entering basic variable is increased. From the donor cells, select the basic variable having the smallest value.
3. Determine the new BF solution: Add the value of the leaving basic variable to the allocation for each recipient cell. Subtract this value from the allocation for each donor cell.

Continuing to apply this procedure to the Metro Water District problem yields the complete set of transportation simplex tableaux shown in Table 9.23. Since all the cij − ui − vj values are nonnegative in the fourth tableau, the optimality test identifies the set of allocations in this tableau as being optimal, which concludes the algorithm.

It would be good practice for you to derive the values of ui and vj given in the second, third, and fourth tableaux. Try doing this by working directly on the tableaux. Also check out the chain reactions in the second and third tableaux, which are somewhat more complicated than the one you have seen in Table 9.21.

Special Features of This Example

Note three special points that are illustrated by this example. First, the initial BF solution is degenerate because the basic variable x31 = 0. However, this degenerate basic variable causes no complication, because cell (3, 1) becomes a recipient cell in the second tableau, which increases x31 to a value greater than zero.
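The optimality test summarized above can be sketched in code. The sketch below is not from the book: it solves cij = ui + vj over the basic cells and then prices out the nonbasic cells. The 3×3 cost data and basis are hypothetical, and for simplicity the sketch anchors u1 = 0 in row 0 rather than in the row with the most allocations.

```python
# A sketch (not from the book) of the optimality test: derive ui and vj from
# cij = ui + vj over the basic cells, then compute cij - ui - vj for every
# nonbasic cell.  The cost data and basis below are hypothetical.

def optimality_test(costs, basic):
    m, n = len(costs), len(costs[0])
    u, v = [None] * m, [None] * n
    u[0] = 0   # simplification; the book anchors the row with most allocations
    while any(x is None for x in u + v):
        progress = False
        for (i, j) in basic:
            if u[i] is not None and v[j] is None:
                v[j] = costs[i][j] - u[i]
                progress = True
            elif v[j] is not None and u[i] is None:
                u[i] = costs[i][j] - v[j]
                progress = True
        if not progress:
            raise ValueError("basis does not span all rows and columns")
    # Reduced costs cij - ui - vj for the nonbasic cells.
    reduced = {(i, j): costs[i][j] - u[i] - v[j]
               for i in range(m) for j in range(n) if (i, j) not in basic}
    return u, v, reduced

costs = [[4, 6, 9],
         [5, 7, 8],
         [6, 9, 7]]
basic = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)}
u, v, reduced = optimality_test(costs, basic)
optimal = all(rc >= 0 for rc in reduced.values())   # True here: stop
```

Because the basic cells form a spanning tree of the rows and columns, repeated passes over them always pin down every ui and vj once one value is anchored.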
■ TABLE 9.23 Complete set of transportation simplex tableaux for the Metro Water District problem [tableaux for iterations 0 through 3 not reproduced here; their objective values are Z = 2,570, Z = 2,550, Z = 2,460, and Z = 2,460, respectively]

Second, another degenerate basic variable (x34) arises in the third tableau because the basic variables for two donor cells in the second tableau, cells (2, 1) and (3, 4), tie for having the smallest value (30). (This tie is broken arbitrarily by selecting x21 as the leaving basic variable; if x34 had been selected instead, then x21 would have become the degenerate basic variable.) This degenerate basic variable does appear to create a complication subsequently, because cell (3, 4) becomes a donor cell in the third tableau but has nothing to donate! Fortunately, such an event actually gives no cause for concern. Since zero is the amount to be added to or subtracted from the allocations for the recipient and donor cells, these allocations do not change.
However, the degenerate basic variable does become the leaving basic variable, so it is replaced by the entering basic variable as the circled allocation of zero in the fourth tableau. This change in the set of basic variables changes the values of ui and vj. Therefore, if any of the cij − ui − vj had been negative in the fourth tableau, the algorithm would have gone on to make real changes in the allocations (whenever all donor cells have nondegenerate basic variables).

Third, because none of the cij − ui − vj turned out to be negative in the fourth tableau, the equivalent set of allocations in the third tableau is optimal also. Thus, the algorithm executed one more iteration than was necessary. This extra iteration is a flaw that occasionally arises in both the transportation simplex method and the simplex method because of degeneracy, but it is not sufficiently serious to warrant any adjustments to these algorithms.

If you would like to see additional (smaller) examples of the application of the transportation simplex method, two are available. One is the demonstration provided for the transportation problem area in your OR Tutor. In addition, the Solved Examples section of the book's website includes another example of this type. Also provided in your IOR Tutorial are both an interactive procedure and an automatic procedure for the transportation simplex method.

Now that you have studied the transportation simplex method, you are in a position to check for yourself how the algorithm actually provides a proof of the integer solutions property presented in Sec. 9.1. Problem 9.2-20 helps to guide you through the reasoning.

■ 9.3 THE ASSIGNMENT PROBLEM

The assignment problem is a special type of linear programming problem where assignees are being assigned to perform tasks. For example, the assignees might be employees who need to be given work assignments.
Assigning people to jobs is a common application of the assignment problem.10 However, the assignees need not be people. They also could be machines, or vehicles, or plants, or even time slots to be assigned tasks. The first example below involves machines being assigned to locations, so the tasks in this case simply involve holding a machine. A subsequent example involves plants being assigned products to be produced.

To fit the definition of an assignment problem, these kinds of applications need to be formulated in a way that satisfies the following assumptions.

1. The number of assignees and the number of tasks are the same. (This number is denoted by n.)
2. Each assignee is to be assigned to exactly one task.
3. Each task is to be performed by exactly one assignee.
4. There is a cost cij associated with assignee i (i = 1, 2, . . . , n) performing task j (j = 1, 2, . . . , n).
5. The objective is to determine how all n assignments should be made to minimize the total cost.

Any problem satisfying all these assumptions can be solved extremely efficiently by algorithms designed specifically for assignment problems.

The first three assumptions are fairly restrictive. Many potential applications do not quite satisfy these assumptions. However, it often is possible to reformulate the problem to make it fit. For example, dummy assignees or dummy tasks frequently can be used for this purpose. We illustrate these formulation techniques in the examples.

Prototype Example

The JOB SHOP COMPANY has purchased three new machines of different types. There are four available locations in the shop where a machine could be installed. Some of these locations are more desirable than others for particular machines because of their proximity to work centers that will have a heavy work flow to and from these machines. (There will be no work flow between the new machines.)
Therefore, the objective is to assign the new machines to the available locations to minimize the total cost of materials handling. The estimated cost in dollars per hour of materials handling involving each of the machines is given in Table 9.24 for the respective locations. Location 2 is not considered suitable for machine 2, so no cost is given for this case.

To formulate this problem as an assignment problem, we must introduce a dummy machine for the extra location. Also, an extremely large cost M should be attached to the assignment of machine 2 to location 2 to prevent this assignment in the optimal solution. The resulting assignment problem cost table is shown in Table 9.25. This cost table contains all the necessary data for solving the problem. The optimal solution is to assign machine 1 to location 4, machine 2 to location 3, and machine 3 to location 1, for a total cost of $29 per hour. The dummy machine is assigned to location 2, so this location is available for some future real machine.

10 For example, see L. J. LeBlanc, D. Randels, Jr., and T. K. Swann: "Heery International's Spreadsheet Optimization Model for Assigning Managers to Construction Projects," Interfaces, 30(6): 95–106, Nov.–Dec. 2000. Page 98 of this article also cites seven other applications of the assignment problem.

■ TABLE 9.24 Materials-handling cost data ($) for Job Shop Co.

                          Location
                    1     2     3     4
            1      13    16    12    11
Machine     2      15    —     13    20
            3       5     7    10     6

■ TABLE 9.25 Cost table for the Job Shop Co. assignment problem

                        Task (Location)
                    1     2     3     4
            1      13    16    12    11
Assignee    2      15    M     13    20
(Machine)   3       5     7    10     6
            4(D)    0     0     0     0

We shall discuss how this solution is obtained after we formulate the mathematical model for the general assignment problem.

The Assignment Problem Model

The mathematical model for the assignment problem uses the following decision variables:

    xij = 1    if assignee i performs task j,
        = 0    if not,

for i = 1, 2, . . . , n and j = 1, 2, . . . , n.
Thus, each xij is a binary variable (it has value 0 or 1). As discussed at length in the chapter on integer programming (Chap. 12), binary variables are important in OR for representing yes/no decisions. In this case, the yes/no decision is: Should assignee i perform task j?

By letting Z denote the total cost, the assignment problem model is

    Minimize    Z = Σ(i=1..n) Σ(j=1..n) cij xij,

subject to

    Σ(j=1..n) xij = 1      for i = 1, 2, . . . , n,

    Σ(i=1..n) xij = 1      for j = 1, 2, . . . , n,

and

    xij ≥ 0,    for all i and j
    (xij binary, for all i and j).

The first set of functional constraints specifies that each assignee is to perform exactly one task, whereas the second set requires each task to be performed by exactly one assignee. If we delete the parenthetical restriction that the xij be binary, the model clearly is a special type of linear programming problem and so can be readily solved. Fortunately, for reasons about to unfold, we can delete this restriction. (This deletion is the reason that the assignment problem appears in this chapter rather than in the integer programming chapter.)

Now compare this model (without the binary restriction) with the transportation problem model presented in the third subsection of Sec. 9.1 (including Table 9.6). Note how similar their structures are. In fact, the assignment problem is just a special type of transportation problem where the sources now are assignees and the destinations now are tasks and where

    Number of sources m = number of destinations n,
    Every supply si = 1,
    Every demand dj = 1.

Now focus on the integer solutions property in the subsection on the transportation problem model. Because si and dj are integers (= 1) now, this property implies that every BF solution (including an optimal one) is an integer solution for an assignment problem.
The functional constraints of the assignment problem model prevent any variable from being greater than 1, and the nonnegativity constraints prevent values less than 0. Therefore, by deleting the binary restriction to enable us to solve an assignment problem as a linear programming problem, the resulting BF solutions obtained (including the final optimal solution) automatically will satisfy the binary restriction anyway.

Just as the transportation problem has a network representation (see Fig. 9.3), the assignment problem can be depicted in a very similar way, as shown in Fig. 9.5. The first column now lists the n assignees and the second column the n tasks. Each number in square brackets indicates the number of assignees being provided at that location in the network, so the values are automatically 1 on the left, whereas the values of −1 on the right indicate that each task is using up one assignee.

For any particular assignment problem, practitioners normally do not bother writing out the full mathematical model. It is simpler to formulate the problem by filling out a cost table (e.g., Table 9.25), including identifying the assignees and tasks, since this table contains all the essential data in a far more compact form.

Problems occasionally arise that do not quite fit the model for an assignment problem because certain assignees will be assigned to more than one task. In this case, the problem can be reformulated to fit the model by splitting each such assignee into separate (but identical) new assignees where each new assignee will be assigned to exactly one task. (Table 9.29 will illustrate this for a subsequent example.) Similarly, if a task is to be performed by multiple assignees, that task can be split into separate (but identical) new tasks where each new task is to be performed by exactly one assignee according to the reformulated model.
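As a quick sanity check on the model just described, the Job Shop Co. instance is small enough to enumerate every complete assignment by brute force. The sketch below is not one of the solution procedures discussed in this chapter; it simply confirms the $29 optimal solution from the cost table of Table 9.25.

```python
# Brute-force check (not a method from the book) of the Job Shop Co. instance:
# with the dummy machine added, there are only 4! = 24 complete assignments.
from itertools import permutations

M = 10**6                     # Big M: forbids machine 2 at location 2
cost = [[13, 16, 12, 11],     # machine 1
        [15,  M, 13, 20],     # machine 2
        [ 5,  7, 10,  6],     # machine 3
        [ 0,  0,  0,  0]]     # dummy machine 4(D)

best = min(permutations(range(4)),
           key=lambda p: sum(cost[i][p[i]] for i in range(4)))
best_cost = sum(cost[i][best[i]] for i in range(4))
# best == (3, 2, 0, 1): machine 1 -> location 4, machine 2 -> location 3,
# machine 3 -> location 1, dummy -> location 2, with best_cost == 29.
```

Enumeration is only practical for tiny instances (n! grows explosively), which is exactly why the specialized algorithms discussed next matter.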
The Solved Examples section of the book's website provides another example that illustrates both cases and the resulting reformulation to fit the model for an assignment problem. An alternative formulation as a transportation problem also is shown.

■ FIGURE 9.5 Network representation of the assignment problem. [diagram not reproduced here: assignees A1, A2, . . . , An on the left, tasks T1, T2, . . . , Tn on the right, with an arc of cost cij from each Ai to each Tj]

Solution Procedures for Assignment Problems

Alternative solution procedures are available for solving assignment problems. Problems that aren't much larger than the Job Shop Co. example can be solved very quickly by the general simplex method, so it may be convenient to simply use a basic software package (such as Excel and its Solver) that only employs this method. If this were done for the Job Shop Co. problem, it would not have been necessary to add the dummy machine to Table 9.25 to make it fit the assignment problem model. The constraints on the number of machines assigned to each location would be expressed instead as

    Σ(i=1..3) xij ≤ 1      for j = 1, 2, 3, 4.

As shown in the Excel files for this chapter, a spreadsheet formulation for this example would be very similar to the formulation for a transportation problem displayed in Fig. 9.4 except now all the supplies and demands would be 1 and the demand constraints would be ≤ 1 instead of = 1. However, large assignment problems can be solved much faster by using more specialized solution procedures, so we recommend using such a procedure instead of the general simplex method for big problems.

Because the assignment problem is a special type of transportation problem, one convenient and relatively fast way to solve any particular assignment problem is to apply the transportation simplex method described in Sec. 9.2. This approach requires converting the cost table to a parameter table for the equivalent transportation problem, as shown in Table 9.26a.
For example, Table 9.26b shows the parameter table for the Job Shop Co. problem that is obtained from the cost table of Table 9.25.

■ TABLE 9.26 Parameter table for the assignment problem formulated as a transportation problem, illustrated by the Job Shop Co. example

(a) General Case

                  Cost per Unit Distributed
                        Destination
                  1      2     . . .    n      Supply
          1      c11    c12    . . .   c1n       1
Source    2      c21    c22    . . .   c2n       1
          ⋮       ⋮      ⋮              ⋮        ⋮
        m = n    cn1    cn2    . . .   cnn       1
Demand            1      1     . . .    1

(b) Job Shop Co. Example

                  Cost per Unit Distributed
                   Destination (Location)
                  1     2     3     4      Supply
          1      13    16    12    11        1
Source    2      15    M     13    20        1
(Machine) 3       5     7    10     6        1
          4(D)    0     0     0     0        1
Demand            1     1     1     1

When the transportation simplex method is applied to this transportation problem formulation, the resulting optimal solution has basic variables x13 = 0, x14 = 1, x23 = 1, x31 = 1, x41 = 0, x42 = 1, x43 = 0. (You are asked to verify this solution in Prob. 9.3-6.) The degenerate basic variables (xij = 0) and the assignment for the dummy machine (x42 = 1) do not mean anything for the original problem, so the real assignments are machine 1 to location 4, machine 2 to location 3, and machine 3 to location 1.

It is no coincidence that this optimal solution provided by the transportation simplex method has so many degenerate basic variables. For any assignment problem with n assignments to be made, the transportation problem formulation shown in Table 9.26a has m = n; that is, both the number of sources (m) and the number of destinations (n) in this formulation equal the number of assignments (n). Transportation problems in general have m + n − 1 basic variables (allocations), so every BF solution for this particular kind of transportation problem has 2n − 1 basic variables, but exactly n of these xij equal 1 (corresponding to the n assignments being made). Therefore, since all the variables are binary variables, there always are n − 1 degenerate basic variables (xij = 0). As discussed at the end of Sec.
9.2, degenerate basic variables do not cause any major complication in the execution of the algorithm. However, they do frequently cause wasted iterations, where nothing changes (same allocations) except for the labeling of which allocations of zero correspond to degenerate basic variables rather than nonbasic variables. These wasted iterations are a major drawback to applying the transportation simplex method in this kind of situation, where there always are so many degenerate basic variables.

Another drawback of the transportation simplex method here is that it is purely a general-purpose algorithm for solving all transportation problems. Therefore, it does nothing to exploit the additional special structure in this special type of transportation problem (m = n, every si = 1, and every dj = 1). Fortunately, specialized algorithms have been developed to fully streamline the procedure for solving just assignment problems. These algorithms operate directly on the cost table and do not bother with degenerate basic variables. When a computer code is available for one of these algorithms, it generally should be used in preference to the transportation simplex method, especially for really big problems.11 Section 9.4 describes one of these specialized algorithms (called the Hungarian algorithm) for solving only assignment problems very efficiently.

11 For an article comparing various algorithms for the assignment problem, see J. L. Kennington and Z. Wang: "An Empirical Analysis of the Dense Assignment Problem: Sequential and Parallel Implementations," ORSA Journal on Computing, 3: 299–306, 1991.

Your IOR Tutorial includes both an interactive procedure and an automatic procedure for applying this algorithm.

Example—Assigning Products to Plants

The BETTER PRODUCTS COMPANY has decided to initiate the production of four new products, using three plants that currently have excess production capacity.
The products require a comparable production effort per unit, so the available production capacity of the plants is measured by the number of units of any product that can be produced per day, as given in the rightmost column of Table 9.27. The bottom row gives the required production rate per day to meet projected sales. Each plant can produce any of these products, except that Plant 2 cannot produce product 3. However, the variable costs per unit of each product differ from plant to plant, as shown in the main body of Table 9.27.

Management now needs to make a decision on how to split up the production of the products among plants. Two kinds of options are available.

Option 1: Permit product splitting, where the same product is produced in more than one plant.

Option 2: Prohibit product splitting.

This second option imposes a constraint that can only increase the cost of an optimal solution based on Table 9.27. On the other hand, the key advantage of Option 2 is that it eliminates some hidden costs associated with product splitting that are not reflected in Table 9.27, including extra setup, distribution, and administration costs. Therefore, management wants both options analyzed before a final decision is made. For Option 2, management further specifies that every plant should be assigned at least one of the products.

We will formulate and solve the model for each option in turn, where Option 1 leads to a transportation problem and Option 2 leads to an assignment problem.

Formulation of Option 1. With product splitting permitted, Table 9.27 can be converted directly to a parameter table for a transportation problem. The plants become the sources, and the products become the destinations (or vice versa), so the supplies are the available production capacities and the demands are the required production rates. Only two changes need to be made in Table 9.27.
First, because Plant 2 cannot produce product 3, such an allocation is prevented by assigning to it a huge unit cost of M. Second, the total capacity (75 + 75 + 45 = 195) exceeds the total required production (20 + 30 + 30 + 40 = 120), so a dummy destination with a demand of 75 is needed to balance these two quantities. The resulting parameter table is shown in Table 9.28.

■ TABLE 9.27 Data for the Better Products Co. problem

                  Unit Cost ($) for Product       Capacity
                  1      2      3      4          Available
          1      41     27     28     24             75
Plant     2      40     29     —      23             75
          3      37     30     27     21             45
Production
rate             20     30     30     40

■ TABLE 9.28 Parameter table for the transportation problem formulation of Option 1 for the Better Products Co. problem

                     Cost per Unit Distributed
                      Destination (Product)
                  1     2     3     4     5(D)    Supply
Source    1      41    27    28    24      0        75
(Plant)   2      40    29    M     23      0        75
          3      37    30    27    21      0        45
Demand           20    30    30    40     75

The optimal solution for this transportation problem has basic variables (allocations) x12 = 30, x13 = 30, x15 = 15, x24 = 15, x25 = 60, x31 = 20, and x34 = 25, so

    Plant 1 produces all of products 2 and 3.
    Plant 2 produces 37.5 percent of product 4.
    Plant 3 produces 62.5 percent of product 4 and all of product 1.

The total cost is Z = $3,260 per day.

Formulation of Option 2. Without product splitting, each product must be assigned to just one plant. Therefore, producing the products can be interpreted as the tasks for an assignment problem, where the plants are the assignees. Management has specified that every plant should be assigned at least one of the products. There are more products (four) than plants (three), so one of the plants will need to be assigned two products. Plant 3 has only enough excess capacity to produce one product (see Table 9.27), so either Plant 1 or Plant 2 will take the extra product. To make this assignment of an extra product possible within an assignment problem formulation, Plants 1 and 2 each are split into two assignees, as shown in Table 9.29.
The number of assignees (now five) must equal the number of tasks, so a dummy task (product) is introduced into Table 9.29 as 5(D) to bring the number of tasks from four up to five. The role of this dummy task is to provide the fictional second product to either Plant 1 or Plant 2, whichever one receives only one real product. There is no cost for producing a fictional product so, as usual, the cost entries for the dummy task are zero. The one exception is the entry of M in the last row of Table 9.29. The reason for M here is that Plant 3 must be assigned a real product (a choice of product 1, 2, 3, or 4), so the Big M method is needed to prevent the assignment of the fictional product to Plant 3 instead. (As in Table 9.28, M also is used to prevent the infeasible assignment of product 3 to Plant 2.)

■ TABLE 9.29 Cost table for the assignment problem formulation of Option 2 for the Better Products Co. problem

                        Task (Product)
                  1      2      3      4      5(D)
          1a     820    810    840    960      0
Assignee  1b     820    810    840    960      0
(Plant)   2a     800    870    M      920      0
          2b     800    870    M      920      0
          3      740    900    810    840      M

The remaining cost entries in Table 9.29 are not the unit costs shown in Tables 9.27 or 9.28. Table 9.28 gives a transportation problem formulation (for Option 1), so unit costs are appropriate there, but now we are formulating an assignment problem (for Option 2). For an assignment problem, the cost cij is the total cost associated with assignee i performing task j. For Table 9.29, the total cost (per day) for Plant i to produce product j is the unit cost of production times the number of units produced (per day), where these two quantities for the multiplication are given separately in Table 9.27. For example, consider the assignment of Plant 1 to product 1.
By using the corresponding unit cost in Table 9.28 ($41) and the corresponding demand (number of units produced per day) in Table 9.28 (20), we obtain

    Cost of Plant 1 producing one unit of product 1      = $41
    Required (daily) production of product 1             = 20 units
    Total (daily) cost of assigning Plant 1 to product 1 = 20($41) = $820

so 820 is entered into Table 9.29 for the cost of either Assignee 1a or 1b performing Task 1.

The optimal solution for this assignment problem is as follows:

    Plant 1 produces products 2 and 3.
    Plant 2 produces product 1.
    Plant 3 produces product 4.

Here the dummy assignment is given to Plant 2. The total cost is Z = $3,290 per day.

As usual, one way to obtain this optimal solution is to convert the cost table of Table 9.29 to a parameter table for the equivalent transportation problem (see Table 9.26) and then apply the transportation simplex method. Because of the identical rows in Table 9.29, this approach can be streamlined by combining the five assignees into three sources with supplies 2, 2, and 1, respectively. (See Prob. 9.3-5.) This streamlining also decreases by two the number of degenerate basic variables in every BF solution. Therefore, even though this streamlined formulation no longer fits the format presented in Table 9.26a for an assignment problem, it is a more efficient formulation for applying the transportation simplex method.

Figure 9.6 shows how Excel and Solver can be used to obtain this optimal solution, which is displayed in the changing cells Assignment (C19:F21) of the spreadsheet. Since the general simplex method is being used, there is no need to fit this formulation into the format for either the assignment problem or transportation problem model. Therefore, the formulation does not bother to split Plants 1 and 2 into two assignees each, or to add a dummy task.
Instead, Plants 1 and 2 are given a supply of 2 each, and then ≤ signs are entered into cells H19 and H20 as well as into the corresponding constraints in the Solver dialogue box. There also is no need to include the Big M method to prohibit assigning product 3 to Plant 2 in cell E20, since this dialogue box includes the constraint that E20 = 0. The objective cell TotalCost (I24) shows the total cost of $3,290 per day.

Now look back and compare this solution to the one obtained for Option 1, which included the splitting of product 4 between Plants 2 and 3. The allocations are somewhat different for the two solutions, but the total daily costs are virtually the same ($3,260 for Option 1 versus $3,290 for Option 2). However, there are hidden costs associated with product splitting (including the cost of extra setup, distribution, and administration) that are not included in the objective function for Option 1. As with any application of OR, the mathematical model used can provide only an approximate representation of the total problem, so management needs to consider factors that cannot be incorporated into the model before it makes a final decision. In this case, after evaluating the disadvantages of product splitting, management decided to adopt the Option 2 solution.

■ FIGURE 9.6 A spreadsheet formulation of Option 2 for the Better Products Co. problem as a variant of an assignment problem. The objective cell is TotalCost (I24) and the other output cells are Cost (C12:F14), TotalAssignments (G19:G21), and TotalAssigned (C22:F22), where the equations entered into these cells are shown below the spreadsheet. The values of 1 in the changing cells Assignment (C19:F21) display the optimal production plan obtained by Solver. [spreadsheet not reproduced here; Solver minimizes TotalCost over the changing cells Assignment, using the Simplex LP method with nonnegative variables, subject to E20 = 0, G19:G20 <= I19:I20, G21 = I21, and a constraint that each product's total assignment equals 1; the displayed total cost is $3,290]

■ 9.4 A SPECIAL ALGORITHM FOR THE ASSIGNMENT PROBLEM

In Sec. 9.3, we pointed out that the transportation simplex method can be used to solve assignment problems but that a specialized algorithm designed for such problems should be more efficient. We now will describe a classic algorithm of this type. It is called the Hungarian algorithm (or Hungarian method) because it was developed by Hungarian mathematicians. We will focus just on the key ideas without filling in all the details needed for a complete computer implementation.

The Role of Equivalent Cost Tables

The algorithm operates directly on the cost table for the problem. More precisely, it converts the original cost table into a series of equivalent cost tables until it reaches one where an optimal solution is obvious. This final equivalent cost table is one consisting of only positive or
More precisely, it converts the original cost table into a series of equivalent cost tables until it reaches one where an optimal solution is obvious. This final equivalent cost table is one consisting of only positive or zero elements where all the assignments can be made to the zero element positions. Since the total cost cannot be negative, this set of assignments with a zero total cost is clearly optimal. The question remaining is how to convert the original cost table into this form.

The key to this conversion is the fact that one can add or subtract any constant from every element of a row or column of the cost table without really changing the problem. That is, an optimal solution for the new cost table must also be optimal for the old one, and conversely.

Therefore, the algorithm begins by subtracting the smallest number in each row from every number in the row. This row reduction process will create an equivalent cost table that has a zero element in every row. If this cost table has any columns without a zero element, the next step is to perform a column reduction process by subtracting the smallest number in each such column from every number in the column.12 The new equivalent cost table will have a zero element in every row and every column. If these zero elements provide a complete set of assignments, these assignments constitute an optimal solution and the algorithm is finished.

To illustrate, consider the cost table for the Job Shop Co. problem given in Table 9.25. To convert this cost table into an equivalent cost table, suppose that we begin the row reduction process by subtracting 11 from every element in row 1, which yields:

            1     2     3     4
  1         2     5     1     0
  2        15     M    13    20
  3         5     7    10     6
  4(D)      0     0     0     0

Since any feasible solution must have exactly one assignment in row 1, the total cost for the new table must always be exactly 11 less than for the old table.
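This row-reduction step, and the equivalence argument behind it, can be sketched in a few lines of Python (a brute-force check that is practical only for small tables; the cost figures are those of Table 9.25, with M represented by infinity):

```python
# Row reduction on the Job Shop Co. cost table (Table 9.25);
# M = infinity marks the prohibited assignment.
from itertools import permutations

M = float("inf")

original = [
    [13, 16, 12, 11],
    [15, M,  13, 20],
    [5,  7,  10, 6],
    [0,  0,  0,  0],   # 4(D): dummy row
]

def reduce_rows(cost):
    """Subtract each row's minimum from every element of that row."""
    return [[c - min(row) for c in row] for row in cost]

def min_total(cost):
    """Brute-force the cheapest complete assignment (fine for tiny tables)."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

reduced = reduce_rows(original)
print(reduced[0])                                # [2, 5, 1, 0], as in the text
# Row reduction subtracts 11 + 13 + 5 + 0 = 29 in total, so every complete
# assignment costs exactly 29 less in the reduced table:
print(min_total(original) - min_total(reduced))  # 29
```

Because the same constant is subtracted from every assignment's total, the reduced table and the original table are minimized by the same assignments.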
Hence, the solution which minimizes total cost for one table must also minimize total cost for the other.

Notice that, whereas the original cost table had only strictly positive elements in the first three rows, the new table has a zero element in row 1. Since the objective is to obtain enough strategically located zero elements to yield a complete set of assignments, this process should be continued on the other rows and columns. Negative elements are to be avoided, so the constant to be subtracted should be the minimum element in the row or column. Doing this for rows 2 and 3 yields the following equivalent cost table:

            1     2     3     4
  1         2     5     1     0
  2         2     M     0     7
  3         0     2     5     1
  4(D)      0     0     0     0

This cost table has all the zero elements required for a complete set of assignments (row 1 to column 4, row 2 to column 3, row 3 to column 1, and the dummy row 4(D) to column 2), so these four assignments constitute an optimal solution (as claimed in Sec. 9.3 for this problem). The total cost for this optimal solution is seen in Table 9.25 to be Z = 29, which is just the sum of the numbers that have been subtracted from rows 1, 2, and 3.

12. The individual rows and columns actually can be reduced in any order, but starting with all the rows and then doing all the columns provides one systematic way of executing the algorithm.

Unfortunately, an optimal solution is not always obtained quite so easily, as we now illustrate with the assignment problem formulation of Option 2 for the Better Products Co. problem shown in Table 9.29. Because this problem's cost table already has zero elements in every row but the last one, suppose we begin the process of converting to equivalent cost tables by subtracting the minimum element in each column from every entry in that column. The result is shown below.
            1     2     3     4    5(D)
  1a       80     0    30   120     0
  1b       80     0    30   120     0
  2a       60    60     M    80     0
  2b       60    60     M    80     0
  3         0    90     0     0     M

Now every row and column has at least one zero element, but a complete set of assignments with zero elements is not possible this time. In fact, the maximum number of assignments that can be made in zero element positions is only 3. (Try it.) Therefore, one more idea must be implemented to finish solving this problem that was not needed for the first example.

The Creation of Additional Zero Elements

This idea involves a new way of creating additional positions with zero elements without creating any negative elements. Rather than subtracting a constant from a single row or column, we now add or subtract a constant from a combination of rows and columns. This procedure begins by drawing a set of lines through some of the rows and columns in such a way as to cover all the zeros. This is done with a minimum number of lines, as in the next cost table, where the three lines pass through row 3 and through columns 2 and 5(D):

            1     2     3     4    5(D)
  1a       80     0    30   120     0
  1b       80     0    30   120     0
  2a       60    60     M    80     0
  2b       60    60     M    80     0
  3         0    90     0     0     M

Notice that the minimum element not crossed out is 30, in the two top positions in column 3. Therefore, subtracting 30 from every element in the entire table (i.e., from every row or from every column) would create a new zero element in these two positions. Then, in order to restore the previous zero elements and eliminate negative elements, we add 30 to each row or column with a line covering it (row 3 and columns 2 and 5(D)). This yields the following equivalent cost table:

            1     2     3     4    5(D)
  1a       50     0     0    90     0
  1b       50     0     0    90     0
  2a       30    60     M    50     0
  2b       30    60     M    50     0
  3         0   120     0     0     M

A shortcut for obtaining this cost table from the preceding one is to subtract 30 from just the elements without a line through them and then add 30 to every element that lies at the intersection of two lines.
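The covering-and-adjusting step just described can be sketched as a short Python fragment, assuming the covering lines found above (row 3 plus columns 2 and 5(D); indices here are 0-based, and M is represented by infinity so that M ± 30 remains M):

```python
# One zero-creating iteration: subtract the smallest uncovered element
# from every uncovered element, add it at intersections of two lines.
M = float("inf")

table = [
    [80, 0,  30, 120, 0],
    [80, 0,  30, 120, 0],
    [60, 60, M,  80,  0],
    [60, 60, M,  80,  0],
    [0,  90, 0,  0,   M],
]
covered_rows = {4}       # the line through row 3 (index 4)
covered_cols = {1, 4}    # the lines through columns 2 and 5(D)

# Smallest element not crossed out by any line (here it is 30).
h = min(table[i][j]
        for i in range(5) for j in range(5)
        if i not in covered_rows and j not in covered_cols)

adjusted = [[table[i][j]
             - (h if i not in covered_rows and j not in covered_cols else 0)
             + (h if i in covered_rows and j in covered_cols else 0)
             for j in range(5)]
            for i in range(5)]

for row in adjusted:
    print(row)
```

Running this reproduces the equivalent cost table shown above: new zeros appear in column 3 of rows 1a and 1b, the doubly covered element 90 in row 3 becomes 120, and all other covered elements carry over unchanged.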
Note that columns 1 and 4 in this new cost table have only a single zero element and they both are in the same row (row 3). Consequently, it now is possible to make four assignments to zero element positions, but still not five. (Try it.) In general, the minimum number of lines needed to cover all zeros equals the maximum number of assignments that can be made to zero element positions. Therefore, we repeat the above procedure, where four lines (the same number as the maximum number of assignments) now are the minimum needed to cover all zeros. One way of doing this is to draw lines through rows 1a, 1b, and 3 and through column 5(D), as represented below:

            1     2     3     4    5(D)
  1a       50     0     0    90     0
  1b       50     0     0    90     0
  2a       30    60     M    50     0
  2b       30    60     M    50     0
  3         0   120     0     0     M

The minimum element not covered by a line is again 30, where this number now appears in the first position in both rows 2a and 2b. Therefore, we subtract 30 from every uncovered element and add 30 to every doubly covered element (except for ignoring elements of M), which gives the following equivalent cost table:

            1     2     3     4    5(D)
  1a       50     0     0    90    30
  1b       50     0     0    90    30
  2a        0    30     M    20     0
  2b        0    30     M    20     0
  3         0   120     0     0     M

This table actually has several ways of making a complete set of assignments to zero element positions (several optimal solutions), including the one that assigns 1a to product 2, 1b to product 3, 2a to product 1, 2b to the dummy product 5(D), and 3 to product 4. The resulting total cost is seen in Table 9.29 to be Z = 810 + 840 + 800 + 0 + 840 = $3,290.

We now have illustrated the entire algorithm, as summarized below.

Summary of the Hungarian Algorithm

1. Subtract the smallest number in each row from every number in the row. (This is called row reduction.) Enter the results in a new table.
2. Subtract the smallest number in each column of the new table from every number in the column. (This is called column reduction.) Enter the results in another table.
3. Test whether an optimal set of assignments can be made. You do this by determining the minimum number of lines needed to cover (i.e., cross out) all zeros.
Since this minimum number of lines equals the maximum number of assignments that can be made to zero element positions, if the minimum number of lines equals the number of rows, an optimal set of assignments is possible; in that case, go to step 6. Otherwise, go on to step 4. (If you find that a complete set of assignments to zero element positions is not possible, this means that you did not reduce the number of lines covering all zeros down to the minimum number.)
4. If the number of lines is less than the number of rows, modify the table in the following way:
   a. Subtract the smallest uncovered number from every uncovered number in the table.
   b. Add the smallest uncovered number to the numbers at intersections of covering lines.
   c. Numbers crossed out but not at the intersections of covering lines carry over unchanged to the next table.
5. Repeat steps 3 and 4 until an optimal set of assignments is possible.
6. Make the assignments one at a time in positions that have zero elements. Begin with rows or columns that have only one zero. Since each row and each column needs to receive exactly one assignment, cross out both the row and the column involved after each assignment is made. Then move on to the rows and columns that are not yet crossed out to select the next assignment, with preference again given to any such row or column that has only one zero that is not crossed out. Continue until every row and every column has exactly one assignment and so has been crossed out. The complete set of assignments made in this way is an optimal solution for the problem.

Your IOR Tutorial provides an interactive procedure for applying this algorithm efficiently. An automatic procedure is included as well.

■ 9.5 CONCLUSIONS

The linear programming model encompasses a wide variety of specific types of problems. The general simplex method is a powerful algorithm that can solve surprisingly large versions of any of these problems.
However, some of these problem types have such simple formulations that they can be solved much more efficiently by streamlined algorithms that exploit their special structure. These streamlined algorithms can cut down tremendously on the computer time required for large problems, and they sometimes make it computationally feasible to solve huge problems. This is particularly true for the two types of linear programming problems studied in this chapter, namely, the transportation problem and the assignment problem. Both types have a number of common applications, so it is important to recognize them when they arise and to use the best available algorithms. These special-purpose algorithms are included in some linear programming software packages.

We shall reexamine the special structure of the transportation and assignment problems in Sec. 10.6. There we shall see that these problems are special cases of an important class of linear programming problems known as the minimum cost flow problem. This problem has the interpretation of minimizing the cost for the flow of goods through a network. A streamlined version of the simplex method called the network simplex method (described in Sec. 10.7) is widely used for solving this type of problem, including its various special cases.

A supplementary chapter (Chap. 23) on the book's website describes various additional special types of linear programming problems. One of these, called the transshipment problem, is a generalization of the transportation problem which allows shipments from any source to any destination to first go through intermediate transfer points. Since the transshipment problem also is a special case of the minimum cost flow problem, we will describe it further in Sec. 10.6. Much research continues to be devoted to developing streamlined algorithms for special types of linear programming problems, including some not discussed here.
At the same time, there is widespread interest in applying linear programming to optimize the operation of complicated large-scale systems. The resulting formulations usually have special structures that can be exploited. Being able to recognize and exploit special structures is an important factor in the successful application of linear programming.

■ SELECTED REFERENCES

1. Dantzig, G. B., and M. N. Thapa: Linear Programming 1: Introduction, Springer, New York, 1997, chap. 8.
2. Hall, R. W.: Handbook of Transportation Science, 2nd ed., Kluwer Academic Publishers (now Springer), Boston, 2003.
3. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 15.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
  Examples for Chapter 9

A Demonstration Example in OR Tutor:
  The Transportation Problem

Interactive Procedures in IOR Tutorial:
  Enter or Revise a Transportation Problem
  Find Initial Basic Feasible Solution—for Interactive Method
  Solve Interactively by the Transportation Simplex Method
  Solve an Assignment Problem Interactively

Automatic Procedures in IOR Tutorial:
  Solve Automatically by the Transportation Simplex Method
  Solve an Assignment Problem Automatically

An Excel Add-in:
  Analytic Solver Platform for Education (ASPE)

"Ch. 9—Transp. & Assignment" Files for Solving the Examples:
  Excel Files
  LINGO/LINDO File
  MPL/Solvers File

Glossary for Chapter 9

Supplement to this Chapter:
  A Case Study with Many Transportation Problems

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:

D: The demonstration example just listed may be helpful.
I: We suggest that you use the relevant interactive procedure in IOR Tutorial (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem.

An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

9.1-1. Read the referenced article that fully describes the OR study summarized in the application vignette in Sec. 9.1. Briefly describe how the model for the transportation problem was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

9.1-2. The Childfair Company has three plants producing child push chairs that are to be shipped to four distribution centers. Plants 1, 2, and 3 produce 12, 17, and 11 shipments per month, respectively. Each distribution center needs to receive 10 shipments per month. The distance from each plant to the respective distribution centers is given below:

                       Distance to Distribution Center
  Plant         1             2             3             4
    1        800 miles   1,300 miles     400 miles     700 miles
    2      1,100 miles   1,400 miles     600 miles   1,000 miles
    3        600 miles   1,200 miles     800 miles     900 miles

The freight cost for each shipment is $100 plus 50 cents per mile. How much should be shipped from each plant to each of the distribution centers to minimize the total shipping cost?
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
(b) Draw the network representation of this problem.
C (c) Obtain an optimal solution.

9.1-3.* Tom would like 3 pints of home brew today and an additional 4 pints of home brew tomorrow. Dick is willing to sell a maximum of 5 pints total at a price of $3.00 per pint today and $2.70 per pint tomorrow. Harry is willing to sell a maximum of 4 pints total at a price of $2.90 per pint today and $2.80 per pint tomorrow. Tom wishes to know what his purchases should be to minimize his cost while satisfying his thirst requirements.
(a) Formulate a linear programming model for this problem, and construct the initial simplex tableau (see Chaps.
3 and 4).
(b) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
C (c) Obtain an optimal solution.

9.1-4. The Versatech Corporation has decided to produce three new products. Five branch plants now have excess product capacity. The unit manufacturing cost of the first product would be $31, $29, $32, $28, and $29 in Plants 1, 2, 3, 4, and 5, respectively. The unit manufacturing cost of the second product would be $45, $41, $46, $42, and $43 in Plants 1, 2, 3, 4, and 5, respectively. The unit manufacturing cost of the third product would be $38, $35, and $40 in Plants 1, 2, and 3, respectively, whereas Plants 4 and 5 do not have the capability for producing this product. Sales forecasts indicate that 600, 1,000, and 800 units of products 1, 2, and 3, respectively, should be produced per day. Plants 1, 2, 3, 4, and 5 have the capacity to produce 400, 600, 400, 600, and 1,000 units daily, respectively, regardless of the product or combination of products involved. Assume that any plant having the capability and capacity to produce them can produce any combination of the products in any quantity. Management wishes to know how to allocate the new products to the plants to minimize total manufacturing cost.
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
C (b) Obtain an optimal solution.

C 9.1-5. Reconsider the P & T Co. problem presented in Sec. 9.1. You now learn that one or more of the shipping costs per truckload given in Table 9.2 may change slightly before shipments begin. Use Solver to generate the Sensitivity Report for this problem. Use this report to determine the allowable range for each of the unit costs. What do these allowable ranges tell P & T management?

9.1-6. The Onenote Co. produces a single product at three plants for four customers. The three plants will produce 60, 80, and 40 units, respectively, during the next time period.
The firm has made a commitment to sell 40 units to customer 1, 60 units to customer 2, and at least 20 units to customer 3. Both customers 3 and 4 also want to buy as many of the remaining units as possible. The net profit associated with shipping a unit from plant i for sale to customer j is given by the following table:

                        Customer
  Plant        1        2        3        4
    1        $800     $700     $500     $200
    2         500      200      100      300
    3         600      400      300      500

Management wishes to know how many units to sell to customers 3 and 4 and how many units to ship from each of the plants to each of the customers to maximize profit.
(a) Formulate this problem as a transportation problem where the objective function is to be maximized by constructing the appropriate parameter table that gives unit profits.
(b) Now formulate this transportation problem with the usual objective of minimizing total cost by converting the parameter table from part (a) into one that gives unit costs instead of unit profits.
(c) Display the formulation in part (a) on an Excel spreadsheet.
C (d) Use this information and the Excel Solver to obtain an optimal solution.
C (e) Repeat parts (c) and (d) for the formulation in part (b). Compare the optimal solutions for the two formulations.

9.1-7. The Move-It Company has two plants producing forklift trucks that then are shipped to three distribution centers. The production costs are the same at the two plants, and the cost of shipping for each truck is shown for each combination of plant and distribution center:

                 Distribution Center
  Plant        1        2        3
    A        $800     $700     $400
    B        $600     $800     $500

A total of 60 forklift trucks are produced and shipped per week. Each plant can produce and ship any amount up to a maximum of 50 trucks per week, so there is considerable flexibility on how to divide the total production between the two plants so as to reduce shipping costs. However, each distribution center must receive exactly 20 trucks per week.
Management's objective is to determine how many forklift trucks should be produced at each plant, and then what the overall shipping pattern should be to minimize total shipping cost.
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
(b) Display the transportation problem on an Excel spreadsheet.
C (c) Use Solver to obtain an optimal solution.

9.1-8. Redo Prob. 9.1-7 when any distribution center may receive any quantity between 10 and 30 forklift trucks per week in order to further reduce total shipping cost, provided only that the total shipped to all three distribution centers must still equal 60 trucks per week.

9.1-9. The MJK Manufacturing Company must produce two products in sufficient quantity to meet contracted sales in each of the next three months. The two products share the same production facilities, and each unit of both products requires the same amount of production capacity. The available production and storage facilities are changing month by month, so the production capacities, unit production costs, and unit storage costs vary by month. Therefore, it may be worthwhile to overproduce one or both products in some months and store them until needed. For each of the three months, the second column of the following table gives the maximum number of units of the two products combined that can be produced on Regular Time (RT) and on Overtime (OT). For each of the two products, the subsequent columns give (1) the number of units needed for the contracted sales, (2) the cost (in thousands of dollars) per unit produced on Regular Time, (3) the cost (in thousands of dollars) per unit produced on Overtime, and (4) the cost (in thousands of dollars) of storing each extra unit that is held over into the next month. In each case, the numbers for the two products are separated by a slash /, with the number for Product 1 on the left and the number for Product 2 on the right.
                                          Product 1/Product 2
           Maximum Combined          Unit Cost of Production   Unit Cost of Storage
             Production      Sales         ($1,000s)                ($1,000s)
  Month      RT      OT                 RT          OT
    1        10       3      5/3      15/16       18/20                1/2
    2         8       2      3/5      17/15       20/18                2/1
    3        10       3      4/4      19/17       22/22

The production manager wants a schedule developed for the number of units of each of the two products to be produced on Regular Time and (if Regular Time production capacity is used up) on Overtime in each of the three months. The objective is to minimize the total of the production and storage costs while meeting the contracted sales for each month. There is no initial inventory, and no final inventory is desired after the three months.
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
C (b) Obtain an optimal solution.

9.2-1. Consider the transportation problem having the following parameter table:

                Destination
            1     2     3    Supply
  1         6     3     5      4
  2         4     M     7      3
  3         3     4     3      2
  Demand    4     2     3

(a) Use Vogel's approximation method manually (don't use the interactive procedure in IOR Tutorial) to select the first basic variable for an initial BF solution.
(b) Use Russell's approximation method manually to select the first basic variable for an initial BF solution.
(c) Use the northwest corner rule manually to construct a complete initial BF solution.

D,I 9.2-2.* Consider the transportation problem having the following parameter table, and use each of the following criteria to obtain an initial BF solution. Compare the values of the objective function for these solutions.
(a) Northwest corner rule.
(b) Vogel's approximation method.
(c) Russell's approximation method.

D,I 9.2-3.
Consider the transportation problem having the following parameter table:

                   Destination
            1     2     3     4     5    6(D)   Supply
  1        13    10    22    29    18     0       5
  2        14    13    16    21     M     0       6
  3         3     0     M    11     6     0       7
  4        18     9    19    23    11     0       4
  5        30    24    34    36    28     0       3
  Demand    3     5     4     5     6     2

Use each of the following criteria to obtain an initial BF solution. Compare the values of the objective function for these solutions.
(a) Northwest corner rule.
(b) Vogel's approximation method.
(c) Russell's approximation method.

The parameter table for Prob. 9.2-2 above is:

                Destination
            1     2     3     4     5    Supply
  1         2     4     6     5     7      4
  2         7     6     3     M     4      6
  3         8     7     5     2     5      6
  4(D)      0     0     0     0     0      4
  Demand    4     4     2     5     5

9.2-4. Consider the transportation problem having the following parameter table:

                Destination
            1     2     3     4    Supply
  1         7     4     1     4      1
  2         4     6     7     2      1
  3         8     5     4     6      1
  4         6     7     6     3      1
  Demand    1     1     1     1

(a) Notice that this problem has three special characteristics: (1) number of sources = number of destinations, (2) each supply = 1, and (3) each demand = 1. Transportation problems with these characteristics are of a special type called the assignment problem (as described in Sec. 9.3). Use the integer solutions property to explain why this type of transportation problem can be interpreted as assigning sources to destinations on a one-to-one basis.
(b) How many basic variables are there in every BF solution? How many of these are degenerate basic variables (= 0)?
D,I (c) Use the northwest corner rule to obtain an initial BF solution.
I (d) Construct an initial BF solution by applying the general procedure for the initialization step of the transportation simplex method. However, rather than using one of the three criteria for step 1 presented in Sec. 9.2, use the minimum cost criterion given next for selecting the next basic variable. (With the corresponding interactive routine in your OR Courseware, choose the Northwest Corner Rule, since this choice actually allows the use of any criterion.)

Minimum cost criterion: From among the rows and columns still under consideration, select the variable xij having the smallest unit cost cij to be the next basic variable.
(Ties may be broken arbitrarily.)
D,I (e) Starting with the initial BF solution from part (c), interactively apply the transportation simplex method to obtain an optimal solution.

9.2-5. Consider the prototype example for the transportation problem (the P & T Co. problem) presented at the beginning of Sec. 9.1. Verify that the solution given there actually is optimal by applying just the optimality test portion of the transportation simplex method to this solution.

9.2-6. Consider the transportation problem having the following parameter table:

                   Destination
            1     2     3     4     5    Supply
  1         8     6     3     7     5      20
  2         5     M     8     4     7      30
  3         6     3     9     6     8      30
  4(D)      0     0     0     0     0      20
  Demand   25    25    20    10    20

After several iterations of the transportation simplex method, a BF solution is obtained that has the following basic variables: x13 = 20, x21 = 25, x24 = 5, x32 = 25, x34 = 5, x42 = 0, x43 = 0, x45 = 20. Continue the transportation simplex method for two more iterations by hand. After two iterations, state whether the solution is optimal and, if so, why.

D,I 9.2-7.* Consider the transportation problem having the following parameter table:

                Destination
            1     2     3     4    Supply
  1         3     7     6     4      5
  2         2     4     3     2      2
  3         4     3     8     5      3
  Demand    3     3     2     2

Use each of the following criteria to obtain an initial BF solution. In each case, interactively apply the transportation simplex method, starting with this initial solution, to obtain an optimal solution. Compare the resulting number of iterations for the transportation simplex method.
(a) Northwest corner rule.
(b) Vogel's approximation method.
(c) Russell's approximation method.

9.2-8. The Cost-Less Corp. supplies its four retail outlets from its four plants. The shipping cost per shipment from each plant to each retail outlet is given below.

                   Unit Shipping Cost
                      Retail Outlet
  Plant        1        2        3        4
    1        $500     $600     $400     $200
    2        $200     $900     $100     $300
    3        $300     $400     $200     $100
    4        $200     $100     $300     $200

Plants 1, 2, 3, and 4 make 10, 20, 20, and 10 shipments per month, respectively.
Retail outlets 1, 2, 3, and 4 need to receive 20, 10, 10, and 20 shipments per month, respectively. The distribution manager, Randy Smith, now wants to determine the best plan for how many shipments to send from each plant to the respective retail outlets each month. Randy's objective is to minimize the total shipping cost.
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
(b) Use the northwest corner rule to construct an initial BF solution.
(c) Starting with the initial basic solution from part (b), interactively apply the transportation simplex method to obtain an optimal solution.

9.2-9. The Energetic Company needs to make plans for the energy systems for a new building. The energy needs in the building fall into three categories: (1) electricity, (2) heating water, and (3) heating space in the building. The daily requirements for these three categories (all measured in the same units) are

  Electricity      20 units
  Water heating    10 units
  Space heating    30 units

The three possible sources of energy to meet these needs are electricity, natural gas, and a solar heating unit that can be installed on the roof. The size of the roof limits the largest possible solar heater to 30 units, but there is no limit to the electricity and natural gas available. Electricity needs can be met only by purchasing electricity (at a cost of $50 per unit). Both other energy needs can be met by any source or combination of sources. The unit costs are

                    Electricity    Natural Gas    Solar Heater
  Water heating         $90            $60            $30
  Space heating          80             50             40

The objective is to minimize the total cost of meeting the energy needs.
(a) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
D,I (b) Use the northwest corner rule to obtain an initial BF solution for this problem.
D,I (c) Starting with the initial BF solution from part (b), interactively apply the transportation simplex method to obtain an optimal solution.
D,I (d) Use Vogel's approximation method to obtain an initial BF solution for this problem.
D,I (e) Starting with the initial BF solution from part (d), interactively apply the transportation simplex method to obtain an optimal solution.
I (f) Use Russell's approximation method to obtain an initial BF solution for this problem.
D,I (g) Starting with the initial BF solution obtained from part (f), interactively apply the transportation simplex method to obtain an optimal solution. Compare the number of iterations required by the transportation simplex method here and in parts (c) and (e).

D,I 9.2-10.* Interactively apply the transportation simplex method to solve the Northern Airplane Co. production scheduling problem as it is formulated in Table 9.9.

D,I 9.2-11.* Reconsider Prob. 9.1-2.
(a) Use the northwest corner rule to obtain an initial BF solution.
(b) Starting with the initial BF solution from part (a), interactively apply the transportation simplex method to obtain an optimal solution.

D,I 9.2-12. Reconsider Prob. 9.1-3b. Starting with the northwest corner rule, interactively apply the transportation simplex method to obtain an optimal solution for this problem.

D,I 9.2-13. Reconsider Prob. 9.1-4. Starting with the northwest corner rule, interactively apply the transportation simplex method to obtain an optimal solution for this problem.

D,I 9.2-14. Reconsider Prob. 9.1-6. Starting with Russell's approximation method, interactively apply the transportation simplex method to obtain an optimal solution for this problem.

9.2-15. Reconsider the transportation problem formulated in Prob. 9.1-7a.
D,I (a) Use each of the three criteria presented in Sec. 9.2 to obtain an initial BF solution, and time how long you spend for each one. Compare both these times and the values of the objective function for these solutions.
C (b) Obtain an optimal solution for this problem. For each of the three initial BF solutions obtained in part (a), calculate the percentage by which its objective function value exceeds the optimal one.
D,I (c) For each of the three initial BF solutions obtained in part (a), interactively apply the transportation simplex method to obtain (and verify) an optimal solution. Time how long you spend in each of the three cases. Compare both these times and the number of iterations needed to reach an optimal solution.

9.2-16. Follow the instructions of Prob. 9.2-15 for the transportation problem formulated in Prob. 9.1-7a.

9.2-17. Consider the transportation problem having the following parameter table:

              Destination
            1     2    Supply
  1         8     5      4
  2         6     4      2
  Demand    3     3

(a) Using your choice of a criterion from Sec. 9.2 for obtaining the initial BF solution, solve this problem manually by the transportation simplex method. (Keep track of your time.)
(b) Reformulate this problem as a general linear programming problem, and then solve it manually by the simplex method. Keep track of how long this takes you, and contrast it with the computation time for part (a).

9.2-18. Consider the Northern Airplane Co. production scheduling problem presented in Sec. 9.1 (see Table 9.7). Formulate this problem as a general linear programming problem by letting the decision variables be xj = number of jet engines to be produced in month j (j = 1, 2, 3, 4). Construct the initial simplex tableau for this formulation, and then contrast the size (number of rows and columns) of this tableau and the corresponding tableaux used to solve the transportation problem formulation of the problem (see Table 9.9).

9.2-19. Consider the general linear programming formulation of the transportation problem (see Table 9.6). Verify the claim in Sec.
9.2 that the set of (m + n) functional constraint equations (m supply constraints and n demand constraints) has one redundant equation; i.e., any one equation can be reproduced from a linear combination of the other (m + n − 1) equations.

9.2-20. When you deal with a transportation problem where the supply and demand quantities have integer values, explain why the steps of the transportation simplex method guarantee that all the basic variables (allocations) in the BF solutions obtained must have integer values. Begin with why this occurs with the initialization step when the general procedure for constructing an initial BF solution is used (regardless of the criterion for selecting the next basic variable). Then given a current BF solution that is integer, next explain why Step 3 of an iteration must obtain a new BF solution that also is integer. Finally, explain how the initialization step can be used to construct any initial BF solution, so the transportation simplex method actually gives a proof of the integer solutions property presented in Sec. 9.1.

9.2-21. A contractor, Susan Meyer, has to haul gravel to three building sites. She can purchase as much as 18 tons at a gravel pit in the north of the city and 14 tons at one in the south. She needs 10, 5, and 10 tons at sites 1, 2, and 3, respectively. The purchase price per ton at each gravel pit and the hauling cost per ton are given in the table below.

            Hauling Cost per Ton at Site
Pit          1       2       3      Price per Ton
North      $100    $190    $160        $300
South      $180    $110    $140        $420

Susan wishes to determine how much to haul from each pit to each site to minimize the total cost for purchasing and hauling gravel.
(a) Formulate a linear programming model for this problem. Using the Big M method, construct the initial simplex tableau ready to apply the simplex method (but do not actually solve).
(b) Now formulate this problem as a transportation problem by constructing the appropriate parameter table.
Compare the size of this table (and the corresponding transportation simplex tableau) used by the transportation simplex method with the size of the simplex tableaux from part (a) that would be needed by the simplex method.
D (c) Susan Meyer notices that she can supply sites 1 and 2 completely from the north pit and site 3 completely from the south pit. Use the optimality test (but no iterations) of the transportation simplex method to check whether the corresponding BF solution is optimal.
D,I (d) Starting with the northwest corner rule, interactively apply the transportation simplex method to solve the problem as formulated in part (b).
(e) As usual, let cij denote the unit cost associated with source i and destination j as given in the parameter table constructed in part (b). For the optimal solution obtained in part (d), suppose that the value of cij for each basic variable xij is fixed at the value given in the parameter table, but that the value of cij for each nonbasic variable xij possibly can be altered through bargaining because the site manager wants to pick up the business. Use sensitivity analysis to determine the allowable range for each of the latter cij, and explain how this information is useful to the contractor.

9.2-22. Consider the transportation problem formulation and solution of the Metro Water District problem presented in Secs. 9.1 and 9.2 (see Tables 9.12 and 9.23).
C The numbers given in the parameter table are only estimates that may be somewhat inaccurate, so management now wishes to do some what-if analysis. Use Solver to generate the Sensitivity Report. Then use this report to address the following questions. (In each case, assume that the indicated change is the only change in the model.)
(a) Would the optimal solution in Table 9.23 remain optimal if the cost per acre foot of shipping Calorie River water to San Go were actually $200 rather than $230?
(b) Would this solution remain optimal if the cost per acre foot of shipping Sacron River water to Los Devils were actually $160 rather than $130?
(c) Must this solution remain optimal if the costs considered in parts (a) and (b) were simultaneously changed from their original values to $215 and $145, respectively?
(d) Suppose that the supply from the Sacron River and the demand at Hollyglass are decreased simultaneously by the same amount. Must the shadow prices for evaluating these changes remain valid if the decrease were 0.5 million acre feet?

9.2-23. Without generating the Sensitivity Report, adapt the sensitivity analysis procedure presented in Secs. 7.1 and 7.2 to conduct the sensitivity analysis specified in the four parts of Prob. 9.2-22.

9.3-1. Consider the assignment problem having the following cost table:

               Task
            A   B   C   D
Assignee 1  8   6   5   7
         2  6   5   3   4
         3  7   8   4   6
         4  6   7   5   6

(a) Draw the network representation of this assignment problem.
(b) Formulate this problem as a transportation problem by constructing the appropriate parameter table.
(c) Display this formulation on an Excel spreadsheet.
C (d) Use Solver to obtain an optimal solution.

9.3-2. Four cargo ships will be used for shipping goods from one port to four other ports (labeled 1, 2, 3, 4). Any ship can be used for making any one of these four trips. However, because of differences in the ships and cargoes, the total cost of loading, transporting, and unloading the goods for the different ship-port combinations varies considerably, as shown in the following table:

             Port
Ship     1     2     3     4
 1     $500  $400  $600  $700
 2      600   600   700   500
 3      700   500   700   600
 4      500   400   600   600

The objective is to assign the four ships to four different ports in such a way as to minimize the total cost for all four shipments.
(a) Describe how this problem fits into the general format for the assignment problem.
C (b) Obtain an optimal solution.
(c) Reformulate this problem as an equivalent transportation problem by constructing the appropriate parameter table.
D,I (d) Use the northwest corner rule to obtain an initial BF solution for the problem as formulated in part (c).
D,I (e) Starting with the initial BF solution from part (d), interactively apply the transportation simplex method to obtain an optimal set of assignments for the original problem.
D,I (f) Are there other optimal solutions in addition to the one obtained in part (e)? If so, use the transportation simplex method to identify them.

9.3-3. Reconsider Prob. 9.1-4. Suppose that the sales forecasts have been revised downward to 240, 400, and 320 units per day of products 1, 2, and 3, respectively, and that each plant now has the capacity to produce all that is required of any one product. Therefore, management has decided that each new product should be assigned to only one plant and that no plant should be assigned more than one product (so that three plants are each to be assigned one product, and two plants are to be assigned none). The objective is to make these assignments so as to minimize the total cost of producing these amounts of the three products.
(a) Formulate this problem as an assignment problem by constructing the appropriate cost table.
C (b) Obtain an optimal solution.
(c) Reformulate this assignment problem as an equivalent transportation problem by constructing the appropriate parameter table.
D,I (d) Starting with Vogel's approximation method, interactively apply the transportation simplex method to solve the problem as formulated in part (c).

9.3-4.* The coach of an age group swim team needs to assign swimmers to a 200-yard medley relay team to send to the Junior Olympics. Since most of his best swimmers are very fast in more than one stroke, it is not clear which swimmer should be assigned to each of the four strokes.
The five fastest swimmers and the best times (in seconds) they have achieved in each of the strokes (for 50 yards) are

Stroke         Carl   Chris   David   Tony   Ken
Backstroke     37.7   32.9    33.8    37.0   35.4
Breaststroke   43.4   33.1    42.2    34.7   41.8
Butterfly      33.3   28.5    38.9    30.4   33.6
Freestyle      29.2   26.4    29.6    28.5   31.1

The coach wishes to determine how to assign four swimmers to the four different strokes to minimize the sum of the corresponding best times.
(a) Formulate this problem as an assignment problem.
C (b) Obtain an optimal solution.

9.3-5. Consider the assignment problem formulation of Option 2 for the Better Products Co. problem presented in Table 9.29.
(a) Reformulate this problem as an equivalent transportation problem with three sources and five destinations by constructing the appropriate parameter table.
(b) Convert the optimal solution given in Sec. 9.3 for this assignment problem into a complete BF solution (including degenerate basic variables) for the transportation problem formulated in part (a). Specifically, apply the "General Procedure for Constructing an Initial BF Solution" given in Sec. 9.2. For each iteration of the procedure, rather than using any of the three alternative criteria presented for step 1, select the next basic variable to correspond to the next assignment of a plant to a product given in the optimal solution. When only one row or only one column remains under consideration, use step 4 to select the remaining basic variables.
(c) Verify that the optimal solution given in Sec. 9.3 for this assignment problem actually is optimal by applying just the optimality test portion of the transportation simplex method to the complete BF solution obtained in part (b).
(d) Now reformulate this assignment problem as an equivalent transportation problem with five sources and five destinations by constructing the appropriate parameter table. Compare this transportation problem with the one formulated in part (a).
(e) Repeat part (b) for the problem as formulated in part (d). Compare the BF solution obtained with the one from part (b).

9.3-6. Starting with Vogel's approximation method, interactively apply the transportation simplex method to solve the Job Shop Co. assignment problem as formulated in Table 9.26b. (As stated in Sec. 9.3, the resulting optimal solution has x14 = 1, x23 = 1, x31 = 1, x42 = 1, and all other xij = 0.)

D,I 9.3-7. Reconsider Prob. 9.1-7. Now assume that distribution centers 1, 2, and 3 must receive exactly 10, 20, and 30 units per week, respectively. For administrative convenience, management has decided that each distribution center will be supplied totally by a single plant, so that one plant will supply one distribution center and the other plant will supply the other two distribution centers. The choice of these assignments of plants to distribution centers is to be made solely on the basis of minimizing total shipping cost.
(a) Formulate this problem as an assignment problem by constructing the appropriate cost table, including identifying the corresponding assignees and tasks.
C (b) Obtain an optimal solution.
(c) Reformulate this assignment problem as an equivalent transportation problem (with four sources) by constructing the appropriate parameter table.
C (d) Solve the problem as formulated in part (c).
(e) Repeat part (c) with just two sources.
C (f) Solve the problem as formulated in part (e).

I 9.3-8. Consider the assignment problem having the following cost table:

             Job
Person    A   B   C
  1       5   3   2
  2       7   6   3
  3       4   5   4

The optimal solution is A-3, B-1, C-2, with Z = 10.
C (a) Use the computer to verify this optimal solution.
(b) Reformulate this problem as an equivalent transportation problem by constructing the appropriate parameter table.
C (c) Obtain an optimal solution for the transportation problem formulated in part (b).
(d) Why does the optimal BF solution obtained in part (c) include some (degenerate) basic variables that are not part of the optimal solution for the assignment problem?
(e) Now consider the nonbasic variables in the optimal BF solution obtained in part (c). For each nonbasic variable xij and the corresponding cost cij, adapt the sensitivity analysis procedure for general linear programming (see Case 2a in Sec. 7.2) to determine the allowable range for cij.

9.3-9. Consider the linear programming model for the general assignment problem given in Sec. 9.3. Construct the table of constraint coefficients for this model. Compare this table with the one for the general transportation problem (Table 9.6). In what ways does the general assignment problem have more special structure than the general transportation problem?

I 9.4-1. Reconsider the assignment problem presented in Prob. 9.3-2. Manually apply the Hungarian algorithm to solve this problem. (You may use the corresponding interactive procedure in your IOR Tutorial.)

I 9.4-2. Reconsider Prob. 9.3-4. See its formulation as an assignment problem in the answers given in the back of the book. Manually apply the Hungarian algorithm to solve this problem. (You may use the corresponding interactive procedure in your IOR Tutorial.)

I 9.4-3. Reconsider the assignment problem formulation of Option 2 for the Better Products Co. problem presented in Table 9.29. Suppose that the cost of having Plant 1 produce product 1 is reduced from 820 to 720. Solve this problem by manually applying the Hungarian algorithm. (You may use the corresponding interactive procedure in your IOR Tutorial.)

I 9.4-4. Manually apply the Hungarian algorithm (perhaps using the corresponding interactive procedure in your IOR Tutorial) to solve the assignment problem having the following cost table:

             Job
Person    1   2   3(D)
  1       M   7    0
  2       8   6    0
  3       7   4    0

I 9.4-5. Manually apply the Hungarian algorithm (perhaps using the corresponding interactive procedure in your IOR Tutorial) to solve the assignment problem having the following cost table:

              Task
Assignee   1   2   3   4
  A        4   1   3   2
  B        1   3   2   2
  C        0   4   1   3
  D        1   0   3   0

I 9.4-6. Manually apply the Hungarian algorithm (perhaps using the corresponding interactive procedure in your IOR Tutorial) to solve the assignment problem having the following cost table:

              Task
Assignee   A   B   C   D
  1        4   7   4   5
  2        6   4   7   3
  3        5   5   6   4
  4        5   6   4   7

■ CASES

CASE 9.1 Shipping Wood to Market

Alabama Atlantic is a lumber company that has three sources of wood and five markets to be supplied. The annual availability of wood at sources 1, 2, and 3 is 15, 20, and 15 million board feet, respectively. The amount that can be sold annually at markets 1, 2, 3, 4, and 5 is 11, 12, 9, 10, and 8 million board feet, respectively.

In the past the company has shipped the wood by train. However, because shipping costs have been increasing, the alternative of using ships to make some of the deliveries is being investigated. This alternative would require the company to invest in some ships. Except for these investment costs, the shipping costs in thousands of dollars per million board feet by rail and by water (when feasible) would be the following for each route:
          Unit Cost by Rail ($1,000s)      Unit Cost by Ship ($1,000s)
                    Market                           Market
Source     1    2    3    4    5            1    2    3    4    5
  1       61   72   45   55   66           31   38   24    —   35
  2       69   78   60   49   56           36   43   28   24   31
  3       59   66   63   61   47            —   33   36   32   26

The capital investment (in thousands of dollars) in ships required for each million board feet to be transported annually by ship along each route is given as follows:

          Investment for Ships ($1,000s)
                    Market
Source     1     2     3     4     5
  1       275   303   238    —    285
  2       293   318   270   250   265
  3        —    283   275   268   240

Considering the expected useful life of the ships and the time value of money, the equivalent uniform annual cost of these investments is one-tenth the amount given in the table. The objective is to determine the overall shipping plan that minimizes the total equivalent uniform annual cost (including shipping costs). You are the head of the OR team that has been assigned the task of determining this shipping plan for each of the following three options.

Option 1: Continue shipping exclusively by rail.
Option 2: Switch to shipping exclusively by water (except where only rail is feasible).
Option 3: Ship by either rail or water, depending on which is less expensive for the particular route.

Present your results for each option. Compare. Finally, consider the fact that these results are based on current shipping and investment costs, so the decision on the option to adopt now should take into account management's projection of how these costs are likely to change in the future. For each option, describe a scenario of future cost changes that would justify adopting that option now. (Note: Data files for this case are provided on the book's website for your convenience.)

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 9.2 Continuation of the Texago Case Study

The supplement to this chapter on the book's website presents a case study of how the Texago Corp.
solved many transportation problems to help make its decision regarding where to locate its new oil refinery. Management now needs to address the question of whether the capacity of the new refinery should be made somewhat larger than originally planned. This will require formulating and solving some additional transportation problems. A key part of the analysis then will involve combining two transportation problems into a single linear programming model that simultaneously considers the shipping of crude oil from the oil fields to the refineries and the shipping of final product from the refineries to the distribution centers. A memo to management summarizing your results and recommendations also needs to be written.

CASE 9.3 Project Pickings

This case focuses on a series of applications of the assignment problem for a pharmaceutical manufacturing company. The decision has been made to undertake five research and development projects to attempt to develop new drugs that will treat five specific types of medical ailments. Five senior scientists are available to lead these projects as project directors. The problem now is to decide how to assign these scientists to the projects on a one-to-one basis. A variety of likely scenarios need to be considered.

CHAPTER 10 Network Optimization Models

Networks arise in numerous settings and in a variety of guises. Transportation, electrical, and communication networks pervade our daily lives. Network representations also are widely used for problems in such diverse areas as production, distribution, project planning, facilities location, resource management, supply chain management, and financial planning, to name just a few examples. In fact, a network representation provides such a powerful visual and conceptual aid for portraying the relationships between the components of systems that it is used in virtually every field of scientific, social, and economic endeavor.
One of the most exciting developments in operations research (OR) in recent decades has been the unusually rapid advance in both the methodology and application of network optimization models. A number of algorithmic breakthroughs have had a major impact, as have ideas from computer science concerning data structures and efficient data manipulation. Consequently, algorithms and software now are available and are being used to solve huge problems on a routine basis that would have been completely intractable three decades ago.

Many network optimization models actually are special types of linear programming problems. For example, both the transportation problem and the assignment problem discussed in the preceding chapter fall into this category because of their network representations presented in Figs. 9.3 and 9.5. One of the linear programming examples presented in Sec. 3.4 also is a network optimization problem. This is the Distribution Unlimited Co. problem of how to distribute its goods through the distribution network shown in Fig. 3.13. This special type of linear programming problem, called the minimum cost flow problem, is presented in Sec. 10.6. We shall return to this specific example in that section and then solve it with network methodology in the following section.

In this one chapter we only scratch the surface of the current state of the art of network methodology. However, we shall introduce you to five important kinds of network problems and some basic ideas of how to solve them (without delving into issues of data structures that are so vital to successful large-scale implementations). Each of the first three problem types—the shortest-path problem, the minimum spanning tree problem, and the maximum flow problem—has a very specific structure that arises frequently in applications. The fourth type—the minimum cost flow problem—provides a unified approach to many other applications because of its far more general structure.
In fact, this structure is so general that it includes as special cases both the shortest-path problem and the maximum flow problem as well as the transportation problem and the assignment problem from Chap. 9. Because the minimum cost flow problem is a special type of linear programming problem, it can be solved extremely efficiently by a streamlined version of the simplex method called the network simplex method. (We shall not discuss even more general network problems that are more difficult to solve.) The fifth kind of network problem considered here involves determining the most economical way to conduct a project so that it can be completed by its deadline. A technique called the CPM method of time-cost trade-offs is used to formulate a network model of the project and the time-cost trade-offs for its activities. Either marginal cost analysis or linear programming then is used to solve for the optimal project plan.

The first section introduces a prototype example that will be used subsequently to illustrate the approach to the first three of the problem types mentioned above. Section 10.2 presents some basic terminology for networks. The next four sections deal with the first four problem types in turn, and Sec. 10.7 then is devoted to the network simplex method. Section 10.8 presents the CPM method of time-cost trade-offs for project management. (Chapter 22 on the website also uses network models to deal with a variety of project management problems.)

■ 10.1 PROTOTYPE EXAMPLE

SEERVADA PARK has recently been set aside for a limited amount of sightseeing and backpack hiking. Cars are not allowed into the park, but there is a narrow, winding road system for trams and for jeeps driven by the park rangers. This road system is shown (without the curves) in Fig. 10.1, where location O is the entrance into the park; other letters designate the locations of ranger stations (and other limited facilities).
The numbers give the distances of these winding roads in miles. The park contains a scenic wonder at station T. A small number of trams are used to transport sightseers from the park entrance to station T and back.

The park management currently faces three problems. One is to determine which route from the park entrance to station T has the smallest total distance for the operation of the trams. (This is an example of the shortest-path problem to be discussed in Sec. 10.3.)

A second problem is that telephone lines must be installed under the roads to establish telephone communication among all the stations (including the park entrance). Because the installation is both expensive and disruptive to the natural environment, lines will be installed under just enough roads to provide some connection between every pair of stations. The question is where the lines should be laid to accomplish this with a minimum total number of miles of line installed. (This is an example of the minimum spanning tree problem to be discussed in Sec. 10.4.)

[■ FIGURE 10.1 The road system for Seervada Park. (Network diagram not reproduced here.)]

The third problem is that more people want to take the tram ride from the park entrance to station T than can be accommodated during the peak season. To avoid unduly disturbing the ecology and wildlife of the region, a strict ration has been placed on the number of tram trips that can be made on each of the roads per day. (These limits differ for the different roads, as we shall describe in detail in Sec. 10.5.) Therefore, during the peak season, various routes might be followed regardless of distance to increase the number of tram trips that can be made each day. The question pertains to how to route the various trips to maximize the number of trips that can be made per day without violating the limits on any individual road. (This is an example of the maximum flow problem to be discussed in Sec. 10.5.)
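The third question above is an instance of the maximum flow problem. As a small preview of the augmenting-path idea behind Sec. 10.5 (this sketch is not the procedure as presented there), the following computes a maximum flow by breadth-first search for augmenting paths, the Edmonds-Karp approach. The capacities below are invented for illustration; they are not the park's actual trip limits.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting
    path in the residual network until none remains."""
    # Copy capacities and add reverse arcs with zero residual capacity.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for an augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: flow is maximal
        # Trace the path back, find its bottleneck, and update residuals.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical daily trip limits on each road (not the park's data).
caps = {'O': {'A': 5, 'B': 7}, 'A': {'T': 4, 'B': 1}, 'B': {'T': 6}, 'T': {}}
print(max_flow(caps, 'O', 'T'))  # 10
```

Note the role of the reverse residual arcs: they are exactly the device, discussed in Sec. 10.2, of assigning flow in the "wrong" direction to cancel a previously assigned flow.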
■ 10.2 THE TERMINOLOGY OF NETWORKS

A relatively extensive terminology has been developed to describe the various kinds of networks and their components. Although we have avoided as much of this special vocabulary as we could, we still need to introduce a considerable number of terms for use throughout the chapter. We suggest that you read through this section once at the outset to understand the definitions and then plan to return to refresh your memory as the terms are used in subsequent sections. To assist you, each term is highlighted in boldface at the point where it is defined.

A network consists of a set of points and a set of lines connecting certain pairs of the points. The points are called nodes (or vertices); e.g., the network in Fig. 10.1 has seven nodes designated by the seven circles. The lines are called arcs (or links or edges or branches); e.g., the network in Fig. 10.1 has 12 arcs corresponding to the 12 roads in the road system. Arcs are labeled by naming the nodes at either end; for example, AB is the arc between nodes A and B in Fig. 10.1.

The arcs of a network may have a flow of some type through them, e.g., the flow of trams on the roads of Seervada Park in Sec. 10.1. Table 10.1 gives several examples of flow in typical networks. If flow through an arc is allowed in only one direction (e.g., a one-way street), the arc is said to be a directed arc. The direction is indicated by adding an arrowhead at the end of the line representing the arc. When a directed arc is labeled by listing the two nodes it connects, the from node always is given before the to node; e.g., an arc that is directed from node A to node B must be labeled as AB rather than BA. Alternatively, this arc may be labeled as A → B. If flow through an arc is allowed in either direction (e.g., a pipeline that can be used to pump fluid in either direction), the arc is said to be an undirected arc.
To help you distinguish between the two kinds of arcs, we shall frequently refer to undirected arcs by the suggestive name of links.

■ TABLE 10.1 Components of typical networks

Nodes               Arcs                        Flow
Intersections       Roads                       Vehicles
Airports            Air lanes                   Aircraft
Switching points    Wires, channels             Messages
Pumping stations    Pipes                       Fluids
Work centers        Materials-handling routes   Jobs

Although the flow through an undirected arc is allowed to be in either direction, we do assume that the flow will be one way in the direction of choice rather than having simultaneous flows in opposite directions. (The latter case requires the use of a pair of directed arcs in opposite directions.) However, in the process of making the decision on the flow through an undirected arc, it is permissible to make a sequence of assignments of flows in opposite directions, but with the understanding that the actual flow will be the net flow (the difference of the assigned flows in the two directions). For example, if a flow of 10 has been assigned in one direction and then a flow of 4 is assigned in the opposite direction, the actual effect is to cancel 4 units of the original assignment by reducing the flow in the original direction from 10 to 6. Even for a directed arc, the same technique sometimes is used as a convenient device to reduce a previously assigned flow. In particular, you are allowed to make a fictional assignment of flow in the "wrong" direction through a directed arc to record a reduction of that amount in the flow in the "right" direction.

A network that has only directed arcs is called a directed network. Similarly, if all its arcs are undirected, the network is said to be an undirected network. A network with a mixture of directed and undirected arcs (or even all undirected arcs) can be converted to a directed network, if desired, by replacing each undirected arc by a pair of directed arcs in opposite directions.
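The two conventions just described, computing the net flow through an undirected arc and converting links into pairs of opposite directed arcs, can be stated in a few lines of code. This is a minimal sketch; the link list is hypothetical.

```python
def to_directed(links):
    """Convert an undirected network to a directed one by replacing each
    link (u, v) with the pair of opposite directed arcs (u, v) and (v, u)."""
    arcs = []
    for u, v in links:
        arcs.extend([(u, v), (v, u)])
    return arcs

def net_flow(assignments):
    """Actual flow through an undirected arc: the difference between the
    totals assigned in the two directions (sign +1 or -1)."""
    return sum(amount * sign for amount, sign in assignments)

links = [('A', 'B'), ('B', 'C')]        # hypothetical links
print(to_directed(links))
# Assigning 10 units one way and then 4 units the opposite way leaves
# an actual flow of 6 in the original direction, as in the text.
print(net_flow([(10, +1), (4, -1)]))    # 6
```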
(You then have the choice of interpreting the flows through each pair of directed arcs as being simultaneous flows in opposite directions or providing a net flow in one direction, depending on which fits your application.)

When two nodes are not connected by an arc, a natural question is whether they are connected by a series of arcs. A path between two nodes is a sequence of distinct arcs connecting these nodes. For example, one of the paths connecting nodes O and T in Fig. 10.1 is the sequence of arcs OB–BD–DT (O → B → D → T), or vice versa. When some or all of the arcs in the network are directed arcs, we then distinguish between directed paths and undirected paths. A directed path from node i to node j is a sequence of connecting arcs whose direction (if any) is toward node j, so that flow from node i to node j along this path is feasible. An undirected path from node i to node j is a sequence of connecting arcs whose direction (if any) can be either toward or away from node j. (Notice that a directed path also satisfies the definition of an undirected path, but not vice versa.) Frequently, an undirected path will have some arcs directed toward node j but others directed away (i.e., toward node i). You will see in Secs. 10.5 and 10.7 that, perhaps surprisingly, undirected paths play a major role in the analysis of directed networks.

To illustrate these definitions, Fig. 10.2 shows a typical directed network. (Its nodes and arcs are the same as in Fig. 3.13, where nodes A and B represent two factories, nodes D and E represent two warehouses, node C represents a distribution center, and the arcs represent shipping lanes.) The sequence of arcs AB–BC–CE (A → B → C → E) is a directed path from node A to E, since flow toward node E along this entire path is feasible. On the other hand, BC–AC–AD (B → C → A → D) is not a directed path from node B to node D, because the direction of arc AC is away from node D (on this path).
However, B → C → A → D is an undirected path from node B to node D, because the sequence of arcs BC–AC–AD does connect these two nodes (even though the direction of arc AC prevents flow through this path). As an example of the relevance of undirected paths, suppose that 2 units of flow from node A to node C had previously been assigned to arc AC. Given this previous assignment, it now is feasible to assign a smaller flow, say, 1 unit, to the entire undirected path B → C → A → D, even though the direction of arc AC prevents positive flow through C → A. The reason is that this assignment of flow in the "wrong" direction for arc AC actually just reduces the flow in the "right" direction by 1 unit. Sections 10.5 and 10.7 make heavy use of this technique of assigning a flow through an undirected path that includes arcs whose direction is opposite to this flow, where the real effect for these arcs is to reduce previously assigned positive flows in the "right" direction.

[■ FIGURE 10.2 The distribution network for Distribution Unlimited Co., first shown in Fig. 3.13, illustrates a directed network. (Diagram not reproduced here.)]

A path that begins and ends at the same node is called a cycle. In a directed network, a cycle is either a directed or an undirected cycle, depending on whether the path involved is a directed or an undirected path. (Since a directed path also is an undirected path, a directed cycle is an undirected cycle, but not vice versa in general.) In Fig. 10.2, for example, DE–ED is a directed cycle. By contrast, AB–BC–AC is not a directed cycle, because the direction of arc AC opposes the direction of arcs AB and BC. On the other hand, AB–BC–AC is an undirected cycle, because A → B → C → A is an undirected path. In the undirected network shown in Fig. 10.1, there are many cycles, for example, OA–AB–BC–CO. However, note that the definition of path (a sequence of distinct arcs) rules out retracing one's steps in forming a cycle.
For example, OB–BO in Fig. 10.1 does not qualify as a cycle, because OB and BO are two labels for the same arc (link). On the other hand, DE–ED is a (directed) cycle in Fig. 10.2, because DE and ED are distinct arcs. Two nodes are said to be connected if the network contains at least one undirected path between them. (Note that the path does not need to be directed even if the network is directed.) A connected network is a network where every pair of nodes is connected. Thus, the networks in Figs. 10.1 and 10.2 are both connected. However, the latter network would not be connected if arcs AD and CE were removed. Consider a connected network with n nodes (e.g., the n = 5 nodes in Fig. 10.2) where all the arcs have been deleted. A "tree" can then be "grown" by adding one arc (or "branch") at a time from the original network in a certain way. The first arc can go anywhere to connect some pair of nodes. Thereafter, each new arc should be between a node that already is connected to other nodes and a new node not previously connected to any other nodes. Adding an arc in this way avoids creating a cycle and ensures that the number of connected nodes is 1 greater than the number of arcs. Each new arc creates a larger tree, which is a connected network (for some subset of the n nodes) that contains no undirected cycles. Once the (n − 1)st arc has been added, the process stops because the resulting tree spans (connects) all n nodes. This tree is called a spanning tree, i.e., a connected network for all n nodes that contains no undirected cycles. Every spanning tree has exactly n − 1 arcs, since this is the minimum number of arcs needed to have a connected network and the maximum number possible without having undirected cycles. Figure 10.3 uses the five nodes and some of the arcs of Fig. 10.2 to illustrate this process of growing a tree one arc (branch) at a time until a spanning tree has been obtained.
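These definitions of connectivity and spanning trees are straightforward to check computationally. Below is a minimal Python sketch, assuming an edge-list encoding of the network of Fig. 10.2 (the function names and the encoding are ours, not the book's):

```python
from collections import defaultdict, deque

def is_connected(nodes, links):
    """Check whether every pair of nodes is joined by an undirected path,
    using breadth-first search from an arbitrary start node."""
    adjacent = defaultdict(set)
    for u, v in links:
        adjacent[u].add(v)
        adjacent[v].add(u)   # ignore arc direction: an undirected path suffices
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adjacent[u] - seen:
            seen.add(v)
            queue.append(v)
    return seen == set(nodes)

def is_spanning_tree(nodes, links):
    """A spanning tree is a connected network with exactly n - 1 arcs."""
    return len(links) == len(nodes) - 1 and is_connected(nodes, links)

# The directed network of Fig. 10.2 (arcs listed tail-to-head).
nodes = {"A", "B", "C", "D", "E"}
arcs = [("A","B"), ("B","C"), ("A","C"), ("A","D"), ("D","E"), ("E","D"), ("C","E")]
print(is_connected(nodes, arcs))                                              # True
print(is_spanning_tree(nodes, [("A","B"), ("B","C"), ("A","D"), ("C","E")]))  # True
```

Treating every arc as undirected in the adjacency sets reflects the definition above: connectivity requires only an undirected path between each pair of nodes.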
There are several alternative choices for the new arc at each stage of the process, so Fig. 10.3 shows only one of many ways to construct a spanning tree in this case. Note, however, how each new added arc satisfies the conditions specified in the preceding paragraph. We shall discuss and illustrate spanning trees further in Sec. 10.4.

■ FIGURE 10.3 Example of growing a tree one arc at a time for the network of Fig. 10.2: (a) The nodes without arcs; (b) a tree with one arc; (c) a tree with two arcs; (d) a tree with three arcs; (e) a spanning tree.

Spanning trees play a key role in the analysis of many networks. For example, they form the basis for the minimum spanning tree problem discussed in Sec. 10.4. Another prime example is that (feasible) spanning trees correspond to the BF solutions for the network simplex method discussed in Sec. 10.7. Finally, we shall need a little additional terminology about flows in networks. The maximum amount of flow (possibly infinity) that can be carried on a directed arc is referred to as the arc capacity. For nodes, a distinction is made among those that are net generators of flow, net absorbers of flow, or neither. A supply node (or source node or source) has the property that the flow out of the node exceeds the flow into the node. The reverse case is a demand node (or sink node or sink), where the flow into the node exceeds the flow out of the node. A transshipment node (or intermediate node) satisfies conservation of flow, so flow in equals flow out.

■ 10.3 THE SHORTEST-PATH PROBLEM

Although several other versions of the shortest-path problem (including some for directed networks) are mentioned at the end of the section, we shall focus on the following simple version. Consider an undirected and connected network with two special nodes called the origin and the destination.
Associated with each of the links (undirected arcs) is a nonnegative distance. The objective is to find the shortest path (the path with the minimum total distance) from the origin to the destination. A relatively straightforward algorithm is available for this problem. The essence of this procedure is that it fans out from the origin, successively identifying the shortest path to each of the nodes of the network in the ascending order of their (shortest) distances from the origin, thereby solving the problem when the destination node is reached. We shall first outline the method and then illustrate it by solving the shortest-path problem encountered by the Seervada Park management in Sec. 10.1.

Algorithm for the Shortest-Path Problem

Objective of nth iteration: Find the nth nearest node to the origin (to be repeated for n = 1, 2, . . . until the nth nearest node is the destination).

Input for nth iteration: n − 1 nearest nodes to the origin (solved for at the previous iterations), including their shortest path and distance from the origin. (These nodes, plus the origin, will be called solved nodes; the others are unsolved nodes.)

Candidates for nth nearest node: Each solved node that is directly connected by a link to one or more unsolved nodes provides one candidate: the unsolved node with the shortest connecting link to this solved node. (Ties provide additional candidates.)

Calculation of nth nearest node: For each such solved node and its candidate, add the distance between them and the distance of the shortest path from the origin to this solved node. The candidate with the smallest such total distance is the nth nearest node (ties provide additional solved nodes), and its shortest path is the one generating this distance.
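This fan-out procedure is what is now commonly known as Dijkstra's algorithm. A minimal Python sketch follows, using the Seervada Park link distances from Fig. 10.1 (the edge-list encoding and function name are ours; ties are broken arbitrarily, so only one of any alternate shortest paths is returned):

```python
import heapq

# Links (undirected arcs) and distances of the Seervada Park network (Fig. 10.1).
links = [("O","A",2), ("O","B",5), ("O","C",4), ("A","B",2), ("A","D",7),
         ("B","C",1), ("B","D",4), ("B","E",3), ("C","E",4),
         ("D","E",1), ("D","T",5), ("E","T",7)]

def shortest_path(links, origin, destination):
    """Fan out from the origin, permanently labeling ('solving') nodes in
    ascending order of their shortest distance from the origin."""
    adjacent = {}
    for u, v, d in links:
        adjacent.setdefault(u, []).append((v, d))
        adjacent.setdefault(v, []).append((u, d))   # links carry flow either way
    best = {origin: 0}
    last = {}                                       # last connection on each shortest path
    heap = [(0, origin)]
    solved = set()
    while heap:
        dist, u = heapq.heappop(heap)
        if u in solved:
            continue
        solved.add(u)
        if u == destination:
            break
        for v, d in adjacent[u]:
            if v not in solved and dist + d < best.get(v, float("inf")):
                best[v] = dist + d
                last[v] = u
                heapq.heappush(heap, (dist + d, v))
    # Trace the path back through the last connections, as in Table 10.2.
    path, node = [destination], destination
    while node != origin:
        node = last[node]
        path.append(node)
    return best[destination], path[::-1]

print(shortest_path(links, "O", "T"))   # (13, ['O', 'A', 'B', 'D', 'T'])
```

Running it on the Seervada Park data returns a 13-mile path, matching the hand calculations with Table 10.2; because this sketch keeps only one "last connection" per node, it reports just one of the two optimal paths.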
Applying This Algorithm to the Seervada Park Shortest-Path Problem

The Seervada Park management needs to find the shortest path from the park entrance (node O) to the scenic wonder (node T) through the road system shown in Fig. 10.1. Applying the above algorithm to this problem yields the results shown in Table 10.2 (where the tie for the second nearest node allows skipping directly to seeking the fourth nearest node next). The first column (n) indicates the iteration count. The second column simply lists the solved nodes for beginning the current iteration after deleting the irrelevant ones (those not connected directly to any unsolved node). The third column then gives the candidates for the nth nearest node (the unsolved nodes with the shortest connecting link to a solved node). The fourth column calculates the distance of the shortest path from the origin to each of these candidates (namely, the distance to the solved node plus the link distance to the candidate). The candidate with the smallest such distance is the nth nearest node to the origin, as listed in the fifth column. The last two columns summarize the information for this newest solved node that is needed to proceed to subsequent iterations (namely, the distance of the shortest path from the origin to this node and the last link on this shortest path).

■ TABLE 10.2 Applying the shortest-path algorithm to the Seervada Park problem

        Solved Nodes Directly   Closest Connected   Total Distance   nth Nearest   Minimum    Last
   n    Connected to            Unsolved Node       Involved         Node          Distance   Connection
        Unsolved Nodes
   1    O                       A                   2                A             2          OA
  2,3   O                       C                   4                C             4          OC
        A                       B                   2 + 2 = 4        B             4          AB
   4    A                       D                   2 + 7 = 9
        B                       E                   4 + 3 = 7        E             7          BE
        C                       E                   4 + 4 = 8
   5    A                       D                   2 + 7 = 9
        B                       D                   4 + 4 = 8        D             8          BD
        E                       D                   7 + 1 = 8        D             8          ED
   6    D                       T                   8 + 5 = 13       T             13         DT
        E                       T                   7 + 7 = 14

Now let us relate these columns directly to the outline given for the algorithm.
The input for nth iteration is provided by the fifth and sixth columns for the preceding iterations, where the solved nodes in the fifth column are then listed in the second column for the current iteration after deleting those that are no longer directly connected to unsolved nodes. The candidates for nth nearest node next are listed in the third column for the current iteration. The calculation of nth nearest node is performed in the fourth column, and the results are recorded in the last three columns for the current iteration. For example, consider the n = 4 iteration in Table 10.2. The objective of this iteration is to find the 4th nearest node to the origin. The input is that we already have found the three nearest nodes to the origin (A, C, and B) and their minimum distances from the origin (2, 4, and 4, respectively), as recorded in the fifth and sixth columns of the table. The next step is to list these solved nodes in the second column of the table for this n = 4 iteration. Node A is directly connected to just one unsolved node (node D), so node D automatically becomes a candidate to be the 4th nearest node to the origin. Its minimum distance from the origin is the minimum distance from the origin to node A (2, as recorded in the sixth column) plus the distance between nodes A and D (7), for a total of 9. Node B is directly connected to two unsolved nodes (D and E), but node E is chosen to be the next candidate to be the 4th nearest node to the origin because it is closer to node B than node D is. The sum of the minimum distance from the origin to node B and the distance between node B and node E is 4 + 3 = 7, as recorded in the fourth column. Finally, node C is directly connected to just one unsolved node (node E), so node E again becomes a candidate to be the 4th nearest node to the origin, but via node C this time. The total distance involved in this case is 4 + 4 = 8.
The smallest of the three total distances just calculated is the middle case, 4 + 3 = 7, so the closest connected unsolved node listed in this middle row of the iteration (node E) has been found to be the 4th nearest node to the origin, via the BE connection. Recording these results in the fifth, sixth, and seventh columns of the table completes the iteration. After the work shown in Table 10.2 is completed, the shortest path from the destination to the origin can be traced back through the last column of Table 10.2 as either T → D → E → B → A → O or T → D → B → A → O. Therefore, the two alternates for the shortest path from the origin to the destination have been identified as O → A → B → E → D → T and O → A → B → D → T, with a total distance of 13 miles on either path.

Using Excel to Formulate and Solve Shortest-Path Problems

This algorithm provides a particularly efficient way of solving large shortest-path problems. However, some mathematical programming software packages do not include this algorithm. If not, they often will include the network simplex method described in Sec. 10.7, which is another good option for these problems. Since the shortest-path problem is a special type of linear programming problem, the general simplex method also can be used when better options are not readily available. Although not nearly as efficient as these specialized algorithms on large shortest-path problems, it is quite adequate for problems of even very substantial size (much larger than the Seervada Park problem). Excel, which relies on the general simplex method, provides a convenient way of formulating and solving shortest-path problems with dozens of arcs and nodes.
■ FIGURE 10.4 A spreadsheet formulation for the Seervada Park shortest-path problem, where the changing cells OnRoute (D4:D17) show the optimal solution obtained by Solver and the objective cell TotalDistance (D19) gives the total distance (in miles) of this shortest path. The network next to the spreadsheet shows the road system for Seervada Park that was originally depicted in Fig. 10.1. Key elements of the spreadsheet:

  From   To   On Route   Distance
   O     A       1          2
   O     B       0          5
   O     C       0          4
   A     B       1          2
   A     D       0          7
   B     C       0          1
   B     D       0          4
   B     E       1          3
   C     B       0          1
   C     E       0          4
   D     E       0          1
   D     T       1          5
   E     D       1          1
   E     T       0          7
  Total Distance: 13

  Node:           O  A  B  C  D  E  T
  Net Flow:       1  0  0  0  0  0  -1
  Supply/Demand:  1  0  0  0  0  0  -1

Solver Parameters:
  Set Objective Cell: TotalDistance (To: Min)
  By Changing Variable Cells: OnRoute
  Subject to the Constraints: NetFlow = SupplyDemand
  Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP

Net flow formulas (column H, one for each node in G4:G10):
  =SUMIF(From,G4,OnRoute)-SUMIF(To,G4,OnRoute)

Total distance formula (D19):
  =SUMPRODUCT(D4:D17,E4:E17)

Range names: Distance (E4:E17), From (B4:B17), NetFlow (H4:H10), Nodes (G4:G10), OnRoute (D4:D17), SupplyDemand (J4:J10), To (C4:C17), TotalDistance (D19)

Figure 10.4 shows an appropriate spreadsheet formulation for the Seervada Park shortest-path problem. Rather than using the kind of formulation presented in Sec. 3.5 that uses a separate row for each functional constraint of the linear programming model, this formulation exploits the special structure by listing the nodes in column G and the arcs in columns B and C, as well as the distance (in miles) along each arc in column E.
An Application Vignette

Incorporated in 1881, Canadian Pacific Railway (CPR) was Canada's first transcontinental railway. CPR transports rail freight over a 14,000-mile network extending across Canada. It also serves a number of major cities in the United States, including Minneapolis, Chicago, and New York. Alliances with other carriers extend CPR's market reach into the major business centers of Mexico as well. Every day CPR receives approximately 7,000 new shipments from its customers going to destinations across North America and for export. It must route and move these shipments in railcars over the network of track, where a railcar may be switched a number of times from one locomotive engine to another before reaching its destination. CPR must coordinate the shipments with its operational plans for 1,600 locomotives, 65,000 railcars, over 5,000 train crew members, and 250 train yards. CPR management turned to an OR consulting firm, MultiModal Applied Systems, to work with CPR employees in developing an operations research approach to this problem. A variety of OR techniques were used to create a new operating strategy. However, the foundation of the approach was to represent the flow of blocks of railcars as flow through a network where each node corresponds to both a location and a point in time. This representation then enabled the application of network optimization techniques. For example, numerous shortest-path problems are solved each day as part of the overall approach. This application of operations research is saving CPR roughly US$100 million per year. Labor productivity, locomotive productivity, fuel consumption, and railcar velocity have improved very substantially. In addition, CPR now provides its customers with reliable delivery times and has received many awards for its improvement in service. This application of network optimization techniques also led to CPR winning the prestigious First Prize in the 2003 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: P. Ireland, R. Case, J. Fallis, C. Van Dyke, J. Kuehn, and M. Meketon: "The Canadian Pacific Railway Transforms Operations by Using Models to Develop Its Operating Plans," Interfaces, 34(1): 5–14, Jan.–Feb. 2004. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Since each link in the network is an undirected arc, whereas travel through the shortest path is in one direction, each link can be replaced by a pair of directed arcs in opposite directions. Thus, columns B and C together list both of the nearly vertical links in Fig. 10.1 (B–C and D–E) twice, once as a downward arc and once as an upward arc, since either direction might be on the chosen path. However, the other links are listed only as left-to-right arcs, since this is the only direction of interest for choosing a shortest path from the origin to the destination. A trip from the origin to the destination is interpreted to be a "flow" of 1 on the chosen path through the network. The decisions to be made are which arcs should be included in the path to be traversed. A flow of 1 is assigned to an arc if it is included, whereas the flow is 0 if it is not. Thus, the decision variables are

  x_ij = 1 if arc i → j is included,
  x_ij = 0 if arc i → j is not included,

for each of the arcs under consideration. The values of these decision variables are entered in the changing cells OnRoute (D4:D17). Each node can be thought of as having a flow of 1 passing through it if it is on the selected path, but no flow otherwise. The net flow generated at a node is the flow out minus the flow in, so the net flow is 1 at the origin, −1 at the destination, and 0 at every other node. These requirements for the net flows are specified in column J of Fig. 10.4.
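These net-flow constraints, together with the distance objective, form a small linear program that can be solved outside Excel as well. Here is a sketch using scipy.optimize.linprog with the same arc encoding as columns B, C, and E of Fig. 10.4 (the scipy usage is our choice; the book's spreadsheet uses Solver instead):

```python
from scipy.optimize import linprog

# Directed arcs and distances, mirroring the From/To/Distance columns of Fig. 10.4.
arcs = [("O","A",2), ("O","B",5), ("O","C",4), ("A","B",2), ("A","D",7),
        ("B","C",1), ("B","D",4), ("B","E",3), ("C","B",1), ("C","E",4),
        ("D","E",1), ("D","T",5), ("E","D",1), ("E","T",7)]
nodes = ["O", "A", "B", "C", "D", "E", "T"]

# Net-flow constraints: flow out minus flow in at each node must equal the
# supply/demand, which is +1 at the origin O, -1 at the destination T, 0 elsewhere.
A_eq = [[(1 if tail == n else 0) - (1 if head == n else 0) for tail, head, _ in arcs]
        for n in nodes]
b_eq = [1, 0, 0, 0, 0, 0, -1]
cost = [d for _, _, d in arcs]

result = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(result.fun)   # 13.0 -- the total distance of a shortest path
chosen = [(tail, head) for (tail, head, _), x in zip(arcs, result.x) if x > 0.5]
print(chosen)       # one of the two optimal O-to-T paths
```

The A_eq rows play the role of the SUMIF net-flow formulas and the cost vector plays the role of the SUMPRODUCT objective; since the problem has two optimal paths, the arcs reported in `chosen` may be either one, depending on the solver.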
Using the equations at the bottom of the figure, each column H cell then calculates the actual net flow at that node by adding the flow out and subtracting the flow in. The corresponding constraints, NetFlow (H4:H10) = SupplyDemand (J4:J10), are specified in the Solver parameters box. The objective cell TotalDistance (D19) gives the total distance in miles of the chosen path by using the equation for this cell given at the bottom of Fig. 10.4. The goal of minimizing this objective cell has been specified in Solver. The solution shown in column D is an optimal solution obtained after running Solver. This solution is, of course, one of the two shortest paths identified earlier by the algorithm for the shortest-path problem.

Other Applications

Not all applications of the shortest-path problem involve minimizing the distance traveled from the origin to the destination. In fact, they might not even involve travel at all. The links (or arcs) might instead represent activities of some other kind, so choosing a path through the network corresponds to selecting the best sequence of activities. The numbers giving the "lengths" of the links might then be, for example, the costs of the activities, in which case the objective would be to determine which sequence of activities minimizes the total cost. The Solved Examples section of the book's website includes another example of this type that illustrates its formulation as a shortest-path problem and then its solution by using either the algorithm for such problems or Solver with a spreadsheet formulation. Here are three categories of applications:

1. Minimize the total distance traveled, as in the Seervada Park example.
2. Minimize the total cost of a sequence of activities. (Problem 10.3-3 is of this type.)
3. Minimize the total time of a sequence of activities. (Problems 10.3-6 and 10.3-7 are of this type.)
It is even possible for all three categories to arise in the same application. For example, suppose you wish to find the best route for driving from one town to another through a number of intermediate towns. You then have the choice of defining the best route as being the one that minimizes the total distance traveled or that minimizes the total cost incurred or that minimizes the total time required. (Problem 10.3-2 illustrates such an application.) Many applications require finding the shortest directed path from the origin to the destination through a directed network. The algorithm already presented can be easily modified to deal just with directed paths at each iteration. In particular, when candidates for the nth nearest node are identified, only directed arcs from a solved node to an unsolved node are considered. Another version of the shortest-path problem is to find the shortest paths from the origin to all the other nodes of the network. Notice that the algorithm already solves for the shortest path to each node that is closer to the origin than the destination. Therefore, when all nodes are potential destinations, the only modification needed in the algorithm is that it does not stop until all nodes are solved nodes. An even more general version of the shortest-path problem is to find the shortest paths from every node to every other node. Another option is to drop the restriction that “distances” (arc values) be nonnegative. Constraints also can be imposed on the paths that can be followed. All these variations occasionally arise in applications and so have been studied by researchers. The algorithms for a wide variety of combinatorial optimization problems, such as certain vehicle routing or network design problems, often call for the solution of a large number of shortest-path problems as subroutines. Although we lack the space to pursue this topic further, this use may now be the most important kind of application of the shortest-path problem. 
■ 10.4 THE MINIMUM SPANNING TREE PROBLEM

The minimum spanning tree problem bears some similarities to the main version of the shortest-path problem presented in the preceding section. In both cases, an undirected and connected network is being considered, where the given information includes some measure of the positive length (distance, cost, time, etc.) associated with each link. Both problems also involve choosing a set of links that have the shortest total length among all sets of links that satisfy a certain property. For the shortest-path problem, this property is that the chosen links must provide a path between the origin and the destination. For the minimum spanning tree problem, the required property is that the chosen links must provide a path between each pair of nodes. The minimum spanning tree problem can be summarized as follows:

1. You are given the nodes of a network but not the links. Instead, you are given the potential links and the positive length for each if it is inserted into the network. (Alternative measures for the length of a link include distance, cost, and time.)
2. You wish to design the network by inserting enough links to satisfy the requirement that there be a path between every pair of nodes.
3. The objective is to satisfy this requirement in a way that minimizes the total length of the links inserted into the network.

■ FIGURE 10.5 Illustrations of the spanning tree concept for the Seervada Park problem: (a) Not a spanning tree; (b) not a spanning tree; (c) a spanning tree.

A network with n nodes requires only (n − 1) links to provide a path between each pair of nodes. No extra links should be used, since this would needlessly increase the total length of the chosen links. The (n − 1) links need to be chosen in such a way that the resulting network (with just the chosen links) forms a spanning tree (as defined in Sec.
10.2). Therefore, the problem is to find the spanning tree with a minimum total length of the links. Figure 10.5 illustrates this concept of a spanning tree for the Seervada Park problem (see Sec. 10.1). Thus, Fig. 10.5a is not a spanning tree because nodes O, A, B, and C are not connected with nodes D, E, and T. It needs another link to make this connection. This network actually consists of two trees, one for each of these two sets of nodes. The links in Fig. 10.5b do span the network (i.e., the network is connected as defined in Sec. 10.2), but it is not a tree because there are two cycles (O–A–B–C–O and D–T–E–D). It has too many links. Because the Seervada Park problem has n = 7 nodes, Sec. 10.2 indicates that the network must have exactly n − 1 = 6 links, with no cycles, to qualify as a spanning tree. This condition is achieved in Fig. 10.5c, so this network is a feasible solution (with a value of 24 miles for the total length of the links) for the minimum spanning tree problem. (You soon will see that this solution is not optimal because it is possible to construct a spanning tree with only 14 miles of links.)

Some Applications

Here is a list of some key types of applications of the minimum spanning tree problem:

1. Design of telecommunication networks (fiber-optic networks, computer networks, leased-line telephone networks, cable television networks, etc.)
2. Design of a lightly used transportation network to minimize the total cost of providing the links (rail lines, roads, etc.)
3. Design of a network of high-voltage electrical power transmission lines
4. Design of a network of wiring on electrical equipment (e.g., a digital computer system) to minimize the total length of the wire
5. Design of a network of pipelines to connect a number of locations

In this age of the information superhighway, applications of this first type have become particularly important.
In a telecommunication network, it is only necessary to insert enough links to provide a path between every pair of nodes, so designing such a network is a classic application of the minimum spanning tree problem. Because some telecommunication networks now cost many millions of dollars, it is very important to optimize their design by finding the minimum spanning tree for each one.

An Algorithm

The minimum spanning tree problem can be solved in a very straightforward way because it happens to be one of the few OR problems where being greedy at each stage of the solution procedure still leads to an overall optimal solution at the end! Thus, beginning with any node, the first stage involves choosing the shortest possible link to another node, without worrying about the effect of this choice on subsequent decisions. The second stage involves identifying the unconnected node that is closest to either of these connected nodes and then adding the corresponding link to the network. This process is repeated, per the following summary, until all the nodes have been connected. (Note that this is the same process already illustrated in Fig. 10.3 for constructing a spanning tree, but now with a specific rule for selecting each new link.) The resulting network is guaranteed to be a minimum spanning tree.

Algorithm for the Minimum Spanning Tree Problem

1. Select any node arbitrarily, and then connect it (i.e., add a link) to the nearest distinct node.
2. Identify the unconnected node that is closest to a connected node, and then connect these two nodes (i.e., add a link between them). Repeat this step until all nodes have been connected.
3. Tie breaking: Ties for the nearest distinct node (step 1) or the closest unconnected node (step 2) may be broken arbitrarily, and the algorithm must still yield an optimal solution. However, such ties are a signal that there may be (but need not be) multiple optimal solutions.
All such optimal solutions can be identified by pursuing all ways of breaking ties to their conclusion. The fastest way of executing this algorithm manually is the graphical approach illustrated next.

Applying This Algorithm to the Seervada Park Minimum Spanning Tree Problem

The Seervada Park management (see Sec. 10.1) needs to determine under which roads telephone lines should be installed to connect all stations with a minimum total length of line. Using the data given in Fig. 10.1, we outline the step-by-step solution of this problem. Nodes and distances for the problem are summarized in Fig. 10.1, where the thin lines now represent potential links.

Arbitrarily select node O to start. The unconnected node closest to node O is node A. Connect node A to node O.

The unconnected node closest to either node O or node A is node B (closest to A). Connect node B to node A.

The unconnected node closest to node O, A, or B is node C (closest to B). Connect node C to node B.

The unconnected node closest to node O, A, B, or C is node E (closest to B). Connect node E to node B.

The unconnected node closest to node O, A, B, C, or E is node D (closest to E). Connect node D to node E.

The only remaining unconnected node is node T. It is closest to node D. Connect node T to node D.

All nodes are now connected, so this solution to the problem is the desired (optimal) one. The total length of the links is 14 miles. Although it may appear at first glance that the choice of the initial node will affect the resulting final solution (and its total link length) with this procedure, it really does not.
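The greedy procedure traced above is commonly known as Prim's algorithm. A minimal Python sketch, using the Seervada Park links of Fig. 10.1 (the dictionary encoding and function name are ours; the min call breaks ties arbitrarily):

```python
# Potential links of the Seervada Park network and their lengths in miles (Fig. 10.1).
links = {("O","A"): 2, ("O","B"): 5, ("O","C"): 4, ("A","B"): 2, ("A","D"): 7,
         ("B","C"): 1, ("B","D"): 4, ("B","E"): 3, ("C","E"): 4,
         ("D","E"): 1, ("D","T"): 5, ("E","T"): 7}

def minimum_spanning_tree(links, start):
    """Grow a tree from `start`, always adding the shortest link that
    connects an unconnected node to some connected node."""
    nodes = {u for link in links for u in link}
    connected = {start}
    tree = []
    while connected != nodes:
        # Candidate links: exactly one endpoint already connected.
        candidates = {link: d for link, d in links.items()
                      if (link[0] in connected) != (link[1] in connected)}
        link = min(candidates, key=candidates.get)
        tree.append(link)
        connected.update(link)
    return tree, sum(links[l] for l in tree)

tree, total = minimum_spanning_tree(links, "O")
print(tree)     # the six chosen links, added in the order described above
print(total)    # 14 -- the minimum total length in miles
```

Starting from any other node also yields a tree of total length 14, which you can confirm by changing the start argument.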
We suggest you verify this fact for the example by reapplying the algorithm, starting with nodes other than node O. The minimum spanning tree problem is the one problem we consider in this chapter that falls into the broad category of network design. In this category, the objective is to design the most appropriate network for the given application (frequently involving transportation systems) rather than analyzing an already designed network.

■ 10.5 THE MAXIMUM FLOW PROBLEM

Now recall that the third problem facing the Seervada Park management (see Sec. 10.1) during the peak season is to determine how to route the various tram trips from the park entrance (station O in Fig. 10.1) to the scenic wonder (station T) to maximize the number of trips per day. (Each tram will return by the same route it took on the outgoing trip, so the analysis focuses on outgoing trips only.) To avoid unduly disturbing the ecology and wildlife of the region, strict upper limits have been imposed on the number of outgoing trips allowed per day in the outbound direction on each individual road. For each road, the direction of travel for outgoing trips is indicated by an arrow in Fig. 10.6. The number at the base of the arrow gives the upper limit on the number of outgoing trips allowed per day. Given the limits, one feasible solution is to send 7 trams per day, with 5 using the route O → B → E → T, 1 using O → B → C → E → T, and 1 using O → B → C → E → D → T. However, because this solution blocks the use of any routes starting with O → C (because the E → T and E → D capacities are fully used), it is easy to find better feasible solutions. Many combinations of routes (and the number of trips to assign to each one) need to be considered to find the one(s) maximizing the number of trips made per day. This kind of problem is called a maximum flow problem. In general terms, the maximum flow problem can be described as follows: 1.
All flow through a directed and connected network originates at one node, called the source, and terminates at one other node, called the sink. (The source and sink in the Seervada Park problem are the park entrance at node O and the scenic wonder at node T, respectively.) 2. All the remaining nodes are transshipment nodes. (These are nodes A, B, C, D, and E in the Seervada Park problem.) 3. Flow through an arc is allowed only in the direction indicated by the arrowhead, where the maximum amount of flow is given by the capacity of that arc. At the source, all arcs point away from the node. At the sink, all arcs point into the node. 4. The objective is to maximize the total amount of flow from the source to the sink. This amount is measured in either of two equivalent ways, namely, either the amount leaving the source or the amount entering the sink. Some Applications Here are some typical kinds of applications of the maximum flow problem: 1. Maximize the flow through a company’s distribution network from its factories to its customers. 2. Maximize the flow through a company’s supply network from its vendors to its factories. 3. Maximize the flow of oil through a system of pipelines. ■ FIGURE 10.6 The Seervada Park maximum flow problem. A 7 2 2 5 O B C D 3 1 4 5 4 4 1 E 7 T An Application Vignette Hewlett-Packard (HP) offers many innovative products to meet the diverse needs of more than one billion customers. The breadth of its product offering has helped the company achieve unparalleled market reach. However, offering multiple similar products also can cause serious problems—including confusing sales representatives and customers—that can adversely affect the revenue and costs for any particular product. Therefore, it is important to find the right balance between too much and too little product variety. With this in mind, HP top management made managing product variety a strategic business priority. 
HP has been a leader in applying operations research to its important business problems for decades, so it was only natural that many of the company's top OR analysts were called on to address this problem as well. The heart of the methodology that was developed to address this problem involved formulating and applying a network optimization model. After excluding proposed products that do not have a sufficiently high return on investment, the remaining proposed products can be envisioned as flows through a network that can help fill some of the projected orders on the right-hand side of the network. The resulting model is a maximum flow problem. Following its implementation by the beginning of 2005, this application of a maximum flow problem had a dramatic impact in enabling HP businesses to increase operational focus on their most critical products. This yielded company-wide profit improvements of over $500 million between 2005 and 2008, and then about $180 million annually thereafter. It also yielded a variety of important qualitative benefits for HP. These dramatic results led to HP winning the prestigious First Prize in the 2009 Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: J. Ward and 20 co-authors, "HP Transforms Product Portfolio Management with Operations Research," Interfaces 40(1): 17–32, Jan.–Feb. 2010. (A link to this article is provided on our website, www.mhhe.com/hillier.)

4. Maximize the flow of water through a system of aqueducts.
5. Maximize the flow of vehicles through a transportation network.

For some of these applications, the flow through the network may originate at more than one node and may also terminate at more than one node, even though a maximum flow problem is allowed to have only a single source and a single sink. For example, a company's distribution network commonly has multiple factories and multiple customers.
A clever reformulation is used to make such a situation fit the maximum flow problem. This reformulation involves expanding the original network to include a dummy source, a dummy sink, and some new arcs. The dummy source is treated as the node that originates all the flow that, in reality, originates from some of the other nodes. For each of these other nodes, a new arc is inserted that leads from the dummy source to this node, where the capacity of this arc equals the maximum flow that, in reality, can originate from this node. Similarly, the dummy sink is treated as the node that absorbs all the flow that, in reality, terminates at some of the other nodes. Therefore, a new arc is inserted from each of these other nodes to the dummy sink, where the capacity of this arc equals the maximum flow that, in reality, can terminate at this node. Because of all these changes, all the nodes in the original network now are transshipment nodes, so the expanded network has the required single source (the dummy source) and single sink (the dummy sink) to fit the maximum flow problem.

An Algorithm

Because the maximum flow problem can be formulated as a linear programming problem (see Prob. 10.5-2), it can be solved by the simplex method, so any of the linear programming software packages introduced in Chaps. 3 and 4 can be used. However, an even more efficient augmenting path algorithm is available for solving this problem. This algorithm is based on two intuitive concepts, a residual network and an augmenting path. After some flows have been assigned to the arcs, the residual network shows the remaining arc capacities (called residual capacities) for assigning additional flows.

■ FIGURE 10.7 The initial residual network for the Seervada Park maximum flow problem.

For example, consider arc O → B in Fig. 10.6, which has an arc capacity of 7. Now suppose
that the assigned flows include a flow of 5 through this arc, which leaves a residual capacity of 7 − 5 = 2 for any additional flow assignment through O → B. This status is depicted as follows in the residual network:

O  2 --------- 5  B

The number on an arc next to a node gives the residual capacity for flow from that node to the other node. Therefore, in addition to the residual capacity of 2 for flow from O to B, the 5 on the right indicates a residual capacity of 5 for assigning some flow from B to O (which actually is canceling some previously assigned flow from O to B).

Initially, before any flows have been assigned, the residual network for the Seervada Park problem has the appearance shown in Fig. 10.7. Every arc in the original network (Fig. 10.6) has been changed from a directed arc to an undirected arc. However, the arc capacity in the original direction remains the same and the arc capacity in the opposite direction is zero, so the constraints on flows are unchanged. Subsequently, whenever some amount of flow is assigned to an arc, that amount is subtracted from the residual capacity in the same direction and added to the residual capacity in the opposite direction.

An augmenting path is a directed path from the source to the sink in the residual network such that every arc on this path has strictly positive residual capacity. The minimum of these residual capacities is called the residual capacity of the augmenting path because it represents the amount of flow that can feasibly be added to the entire path. Therefore, each augmenting path provides an opportunity to further augment the flow through the original network. The augmenting path algorithm repeatedly selects some augmenting path and adds a flow equal to its residual capacity to that path in the original network. This process continues until there are no more augmenting paths, so the flow from the source to the sink cannot be increased further.
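To make this bookkeeping concrete, here is a minimal sketch (in Python; illustrative, not part of the book's presentation) of the residual-capacity update for the O → B example above:

```python
# Residual capacities for arc O -> B (original capacity 7).
residual = {("O", "B"): 7, ("B", "O"): 0}  # before any flow is assigned

flow = 5                       # assign a flow of 5 through O -> B
residual[("O", "B")] -= flow   # 7 - 5 = 2 left in the original direction
residual[("B", "O")] += flow   # 5 now available to cancel assigned flow

print(residual)                # {('O', 'B'): 2, ('B', 'O'): 5}
```

The reverse entry is what later allows an augmenting path to cancel previously assigned flow.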
The key to ensuring that the final solution necessarily is optimal is the fact that augmenting paths can cancel some previously assigned flows in the original network, so an indiscriminate selection of paths for assigning flows cannot prevent the use of a better combination of flow assignments. To summarize, each iteration of the algorithm consists of the following three steps.

The Augmenting Path Algorithm for the Maximum Flow Problem¹

1. Identify an augmenting path by finding some directed path from the source to the sink in the residual network such that every arc on this path has strictly positive residual capacity. (If no augmenting path exists, the net flows already assigned constitute an optimal flow pattern.)
2. Identify the residual capacity c* of this augmenting path by finding the minimum of the residual capacities of the arcs on this path. Increase the flow in this path by c*.
3. Decrease by c* the residual capacity of each arc on this augmenting path. Increase by c* the residual capacity of each arc in the opposite direction on this augmenting path. Return to step 1.

¹It is assumed that the arc capacities are either integers or rational numbers.

When step 1 is carried out, there often will be a number of alternative augmenting paths from which to choose. Although the algorithmic strategy for making this selection is important for the efficiency of large-scale implementations, we shall not delve into this relatively specialized topic. (Later in the section, we do describe a systematic procedure for finding some augmenting path.) Therefore, for the following example (and the problems at the end of the chapter), the selection is just made arbitrarily.

Applying This Algorithm to the Seervada Park Maximum Flow Problem

Applying this algorithm to the Seervada Park problem (see Fig. 10.6 for the original network) yields the results summarized next.
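Before tracing the iterations by hand, the three steps above can be sketched in Python (an illustrative implementation, not the book's; it carries out step 1 by breadth-first search, one common way to find some augmenting path). The arc capacities below are those of Fig. 10.6:

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Augmenting path algorithm: find a path of strictly positive
    residual capacity (step 1, here via breadth-first search), push its
    residual capacity c* (step 2), and update the residual network in
    both directions (step 3), until no augmenting path remains."""
    residual = defaultdict(int)
    neighbors = defaultdict(set)
    for (i, j), u in capacity.items():
        residual[(i, j)] += u    # reverse direction starts at capacity 0
        neighbors[i].add(j)
        neighbors[j].add(i)
    total = 0
    while True:
        # Step 1: breadth-first search for an augmenting path.
        pred = {source: None}
        queue = deque([source])
        while queue and sink not in pred:
            i = queue.popleft()
            for j in neighbors[i]:
                if j not in pred and residual[(i, j)] > 0:
                    pred[j] = i
                    queue.append(j)
        if sink not in pred:
            return total         # no augmenting path exists: optimal
        path = []
        j = sink
        while pred[j] is not None:
            path.append((pred[j], j))
            j = pred[j]
        # Step 2: residual capacity c* of the path.
        c = min(residual[arc] for arc in path)
        # Step 3: update residual capacities in both directions.
        for (i, j) in path:
            residual[(i, j)] -= c
            residual[(j, i)] += c
        total += c

# Arc capacities of the Seervada Park network (Fig. 10.6).
capacity = {("O", "A"): 5, ("O", "B"): 7, ("O", "C"): 4,
            ("A", "B"): 1, ("A", "D"): 3, ("B", "C"): 2,
            ("B", "D"): 4, ("B", "E"): 5, ("C", "E"): 4,
            ("E", "D"): 1, ("D", "T"): 9, ("E", "T"): 6}
print(max_flow(capacity, "O", "T"))   # 14
```

Breadth-first search always finds a shortest augmenting path, which is one of the selection strategies alluded to above for large-scale implementations.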
(Also see the Solved Examples section of the book's website for another example of the application of this algorithm.) Starting with the initial residual network given in Fig. 10.7, we summarize the iterations below, noting the total amount of flow from O to T achieved thus far after each one.

Iteration 1: In Fig. 10.7, one of several augmenting paths is O → B → E → T, which has a residual capacity of min{7, 5, 6} = 5. Assign a flow of 5 to this path (total flow: 5).
Iteration 2: Assign a flow of 3 to the augmenting path O → A → D → T (total flow: 8).
Iteration 3: Assign a flow of 1 to the augmenting path O → A → B → D → T (total flow: 9).
Iteration 4: Assign a flow of 2 to the augmenting path O → B → D → T (total flow: 11).
Iteration 5: Assign a flow of 1 to the augmenting path O → C → E → D → T (total flow: 12).
Iteration 6: Assign a flow of 1 to the augmenting path O → C → E → T (total flow: 13).
Iteration 7: Assign a flow of 1 to the augmenting path O → C → E → B → D → T (total flow: 14).

There are no more augmenting paths, so the current flow pattern is optimal.

■ FIGURE 10.8 Optimal solution for the Seervada Park maximum flow problem.

The current flow pattern may be identified by either cumulating the flow assignments or comparing the final residual capacities with the original arc capacities. If we use the latter method, there is flow along an arc if the final residual capacity is less than the original capacity.
The magnitude of this flow equals the difference in these capacities. Applying this method by comparing the residual network obtained from the last iteration with either Fig. 10.6 or 10.7 yields the optimal flow pattern shown in Fig. 10.8.

This example nicely illustrates the reason for replacing each directed arc i → j in the original network by an undirected arc in the residual network and then increasing the residual capacity for j → i by c* when a flow of c* is assigned to i → j. Without this refinement, the first six iterations would be unchanged. However, at that point it would appear that no augmenting paths remain (because the real unused arc capacity for E → B is zero). Therefore, the refinement permits us to add the flow assignment of 1 for O → C → E → B → D → T in iteration 7. In effect, this additional flow assignment cancels 1 unit of flow assigned at iteration 1 (O → B → E → T) and replaces it by assignments of 1 unit of flow to both O → B → D → T and O → C → E → T.

Finding an Augmenting Path

The most difficult part of this algorithm when large networks are involved is finding an augmenting path. This task may be simplified by the following systematic procedure. Begin by determining all nodes that can be reached from the source along a single arc with strictly positive residual capacity. Then, for each of these nodes that were reached, determine all new nodes (those not yet reached) that can be reached from this node along an arc with strictly positive residual capacity. Repeat this successively with the new nodes as they are reached. The result will be the identification of a tree of all the nodes that can be reached from the source along a path with strictly positive residual flow capacity. Hence, this fanning-out procedure will always identify an augmenting path if one exists. The procedure is illustrated in Fig. 10.9 for the residual network that results from iteration 6 in the preceding example. Although the procedure illustrated in Fig.
10.9 is a relatively straightforward one, it would be helpful to be able to recognize when optimality has been reached without an exhaustive search for a nonexistent path. It is sometimes possible to recognize this event because of an important theorem of network theory known as the max-flow min-cut theorem.

A cut may be defined as any set of directed arcs containing at least one arc from every directed path from the source to the sink. There normally are many ways to slice through a network to form a cut to help analyze the network. For any particular cut, the cut value is the sum of the arc capacities of the arcs (in the specified direction) of the cut. The max-flow min-cut theorem states that, for any network with a single source and sink, the maximum feasible flow from the source to the sink equals the minimum cut value over all cuts of the network. Thus, if we let F denote the amount of flow from the source to the sink for any feasible flow pattern, the value of any cut provides an upper bound to F, and the smallest of the cut values is equal to the maximum value of F. Therefore, if a cut whose

An Application Vignette

The network for transport of natural gas on the Norwegian Continental Shelf, with approximately 5,000 miles of subsea pipelines, is the world's largest offshore pipeline network. Gassco is a company entirely owned by the Norwegian state that operates this network. Another company that is largely state owned, StatoilHydro, is the main Norwegian supplier of natural gas to markets throughout Europe and elsewhere. Gassco and StatoilHydro together use operations research techniques to optimize both the configuration of the network and the routing of the natural gas. The main model used for this routing is a multicommodity network-flow model in which the different hydrocarbons and contaminants in natural gas constitute the commodities.
The objective function for the model is to maximize the total flow of the natural gas from the supply points (the offshore drilling platforms) to the demand points (typically import terminals). However, in addition to the usual supply and demand constraints, the model also includes constraints involving pressure-flow relationships, maximum delivery pressures, and technical pressure bounds on pipelines. Therefore, this model is a generalization of the model for the maximum flow problem described in this section. This key application of operations research, along with a few others, has had a dramatic impact on the efficiency of the operation of this offshore pipeline network. The resulting accumulated savings were estimated to be approximately $2 billion in the period 1995–2008.

Source: F. Rømo, A. Tomasgard, L. Hellemo, M. Fodstad, B. H. Eidesen, and B. Pedersen, "Optimizing the Norwegian Natural Gas Production and Transport," Interfaces 39(1): 46–56, Jan.–Feb. 2009. (A link to this article is provided on our website, www.mhhe.com/hillier.)

■ FIGURE 10.9 Procedure for finding an augmenting path for iteration 7 of the Seervada Park maximum flow problem.

■ FIGURE 10.10 A minimum cut for the Seervada Park maximum flow problem.

value equals the value of F currently attained by the solution procedure can be found in the original network, the current flow pattern must be optimal. Equivalently, optimality has been attained whenever there exists a cut in the residual network whose value is zero. To illustrate, consider the network of Fig. 10.7. One interesting cut through this network is shown in Fig. 10.10. Notice that the value of the cut is 3 + 4 + 1 + 6 = 14, which was found to be the maximum value of F, so this cut is a minimum cut.
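As a sketch (illustrative Python, not from the book), the value of a cut can be computed directly from the arc capacities; the node partition below is our reading of the cut shown in Fig. 10.10:

```python
def cut_value(capacity, source_side):
    """Value of a cut: the sum of the capacities of the arcs crossing
    from the source side of the partition to the sink side. By the
    max-flow min-cut theorem, this bounds any feasible flow F from
    above, and the minimum cut value equals the maximum flow."""
    return sum(u for (i, j), u in capacity.items()
               if i in source_side and j not in source_side)

# Arc capacities of the Seervada Park network (Fig. 10.6).
capacity = {("O", "A"): 5, ("O", "B"): 7, ("O", "C"): 4,
            ("A", "B"): 1, ("A", "D"): 3, ("B", "C"): 2,
            ("B", "D"): 4, ("B", "E"): 5, ("C", "E"): 4,
            ("E", "D"): 1, ("D", "T"): 9, ("E", "T"): 6}

# The cut separates {O, A, B, C, E} from {D, T} (our reading of
# Fig. 10.10), so the crossing arcs are A->D, B->D, E->D, and E->T.
print(cut_value(capacity, {"O", "A", "B", "C", "E"}))   # 3 + 4 + 1 + 6 = 14
```

Any other partition with O on one side and T on the other gives a cut value of at least 14, which is one way to confirm that this cut is a minimum cut.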
Notice also that, in the residual network resulting from iteration 7, where F = 14, the corresponding cut has a value of zero. If this had been noticed, it would not have been necessary to search for additional augmenting paths.

■ FIGURE 10.11 A spreadsheet formulation for the Seervada Park maximum flow problem, where the changing cells Flow (D4:D15) show the optimal solution obtained by Solver and the objective cell MaxFlow (D17) gives the resulting maximum flow through the network. The network next to the spreadsheet shows the Seervada Park maximum flow problem as it was originally depicted in Fig. 10.6.

In Fig. 10.11, the optimal flows shown are O→A 4, O→B 7, O→C 3, A→B 1, A→D 3, B→C 0, B→D 4, B→E 4, C→E 3, D→T 8, E→D 1, and E→T 6, with a maximum flow of 14. The Solver parameters are: Set Objective Cell: MaxFlow; To: Max; By Changing Variable Cells: Flow; Subject to the Constraints: I5:I9 = SupplyDemand, Flow <= Capacity; Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP. The net flow at each node is computed as =SUMIF(From,H4,Flow)-SUMIF(To,H4,Flow) (and so on down the Nodes column), and the Maximum Flow cell is =I4.

Using Excel to Formulate and Solve Maximum Flow Problems

Most maximum flow problems that arise in practice are considerably larger, and occasionally vastly larger, than the Seervada Park problem. Some problems have thousands of nodes and arcs.
The augmenting path algorithm just presented is far more efficient than the general simplex method for solving such large problems. However, for problems of modest size, a reasonable and convenient alternative is to use Excel and Solver based on the general simplex method.

Figure 10.11 shows a spreadsheet formulation for the Seervada Park maximum flow problem. The format is similar to that for the Seervada Park shortest-path problem displayed in Fig. 10.4. The arcs are listed in columns B and C, and the corresponding arc capacities are given in column F. Since the decision variables are the flows through the respective arcs, these quantities are entered in the changing cells Flow (D4:D15). Employing the equations given in the bottom right-hand corner of the figure, these flows then are used to calculate the net flow generated at each of the nodes (see columns H and I). These net flows are required to be 0 for the transshipment nodes (A, B, C, D, and E), as indicated by the first set of constraints (I5:I9 = SupplyDemand) in Solver. The second set of constraints (Flow <= Capacity) specifies the arc capacity constraints. The total amount of flow from the source (node O) to the sink (node T) equals the flow generated at the source (cell I4), so the objective cell MaxFlow (D17) is set equal to I4. After specifying maximization of the objective cell and then running Solver, the optimal solution shown in Flow (D4:D15) is obtained.

■ 10.6 THE MINIMUM COST FLOW PROBLEM

The minimum cost flow problem holds a central position among network optimization models, both because it encompasses such a broad class of applications and because it can be solved extremely efficiently. Like the maximum flow problem, it considers flow through a network with limited arc capacities. Like the shortest-path problem, it considers a cost (or distance) for flow through an arc. Like the transportation problem or assignment problem of Chap.
9, it can consider multiple sources (supply nodes) and multiple destinations (demand nodes) for the flow, again with associated costs. In fact, all four of these previously studied problems are special cases of the minimum cost flow problem, as we will demonstrate shortly.

The reason that the minimum cost flow problem can be solved so efficiently is that it can be formulated as a linear programming problem, so it can be solved by a streamlined version of the simplex method called the network simplex method. We describe this algorithm in the next section.

The minimum cost flow problem is described below:

1. The network is a directed and connected network.
2. At least one of the nodes is a supply node.
3. At least one of the other nodes is a demand node.
4. All the remaining nodes are transshipment nodes.
5. Flow through an arc is allowed only in the direction indicated by the arrowhead, where the maximum amount of flow is given by the capacity of that arc. (If flow can occur in both directions, this would be represented by a pair of arcs pointing in opposite directions.)
6. The network has enough arcs with sufficient capacity to enable all the flow generated at the supply nodes to reach all the demand nodes.
7. The cost of the flow through each arc is proportional to the amount of that flow, where the cost per unit flow is known.
8. The objective is to minimize the total cost of sending the available supply through the network to satisfy the given demand. (An alternative objective is to maximize the total profit from doing this.)

Some Applications

Probably the most important kind of application of minimum cost flow problems is to the operation of a company's distribution network. As summarized in the first row of Table 10.3, this kind of application always involves determining a plan for shipping goods from its sources (factories, etc.) to intermediate storage facilities (as needed) and then on to the customers.
For some applications of minimum cost flow problems, all the transshipment nodes are processing facilities rather than intermediate storage facilities. This is the case for

■ TABLE 10.3 Typical kinds of applications of minimum cost flow problems

Kind of Application                   | Supply Nodes                      | Transshipment Nodes             | Demand Nodes
Operation of a distribution network   | Sources of goods                  | Intermediate storage facilities | Customers
Solid waste management                | Sources of solid waste            | Processing facilities           | Landfill locations
Operation of a supply network         | Vendors                           | Intermediate warehouses         | Processing facilities
Coordinating product mixes at plants  | Plants                            | Production of a specific product| Market for a specific product
Cash flow management                  | Sources of cash at a specific time| Short-term investment options   | Needs for cash at a specific time

An Application Vignette

An especially challenging problem encountered daily by any major airline company is how to compensate effectively for disruptions in the airline's flight schedules. Bad weather can disrupt flight arrivals and departures; so can mechanical problems. Each delay or cancellation involving a particular airplane can then cause subsequent delays or cancellations because that airplane is not available on time for its next scheduled flights. Such delays or cancellations may require both reassigning crews to flights and readjusting the plans for which airplanes will be used to fly the respective flights. The application vignette in Sec. 2.2 describes how Continental Airlines led the way in applying operations research to the problem of quickly reassigning crews to flights in the most cost-effective manner. However, a different approach is needed to address the problem of quickly reassigning airplanes to flights.

An airline has two primary ways of reassigning airplanes to flights to compensate for delays or cancellations. One is to swap aircraft so that an airplane scheduled for a later flight can take the place of the delayed or canceled airplane.
The other is to use a spare airplane (often after flying it in) to replace the delayed or canceled airplane. However, it is a real challenge to quickly make good decisions of these types when a considerable number of delays or cancellations occur throughout the day. United Airlines has led the way in applying operations research to this problem. This is done by formulating and solving the problem as a minimum-cost flow problem where each node in the network represents an airport and each arc represents the route of a flight. The objective of the model then is to keep the airplanes flowing through the network in a way that minimizes the cost incurred by having delays or cancellations. When a status monitor subsystem alerts an operations controller of impending delays or cancellations, the controller provides the necessary input into the model and then solves it in order to provide the updated operating plan in a matter of minutes. This application of the minimum-cost flow problem has resulted in reducing passenger delays by about 50 percent. Source: A. Rakshit, N. Krishnamurthy, and G. Yu: “System Operations Advisor: A Real-Time Decision Support System for Managing Airline Operations at United Airlines,” Interfaces, 26(2): 50–58, Mar.–Apr. 1996. (A link to this article is provided on our website, www.mhhe.com/hillier.) solid waste management, as indicated in the second row of Table 10.3. Here, the flow of materials through the network begins at the sources of the solid waste, then goes to the facilities for processing these waste materials into a form suitable for landfill, and then sends them on to the various landfill locations. However, the objective still is to determine the flow plan that minimizes the total cost, where the cost now is for both shipping and processing. In other applications, the demand nodes might be processing facilities. 
For example, in the third row of Table 10.3, the objective is to find the minimum cost plan for obtaining supplies from various possible vendors, storing these goods in warehouses (as needed), and then shipping the supplies to the company's processing facilities (factories, etc.). Since the total amount that could be supplied by all the vendors is more than the company needs, the network includes a dummy demand node that receives (at zero cost) all the unused supply capacity at the vendors.

The next kind of application in Table 10.3 (coordinating product mixes at plants) illustrates that arcs can represent something other than a shipping lane for a physical flow of materials. This application involves a company with several plants (the supply nodes) that can produce the same products but at different costs. Each arc from a supply node represents the production of one of the possible products at that plant, where this arc leads to the transshipment node that corresponds to this product. Thus, this transshipment node has an arc coming in from each plant capable of producing this product, and then the arcs leading out of this node go to the respective customers (the demand nodes) for this product. The objective is to determine how to divide each plant's production capacity among the products so as to minimize the total cost of meeting the demand for the various products.

The last application in Table 10.3 (cash flow management) illustrates that different nodes can represent some event that occurs at different times. In this case, each supply node represents a specific time (or time period) when some cash will become available to the company (through maturing accounts, notes receivable, sales of securities, borrowing, etc.). The supply at each of these nodes is the amount of cash that will become available then.
Similarly, each demand node represents a specific time (or time period) when the company will need to draw on its cash reserves. The demand at each such node is the amount of cash that will be needed then. The objective is to maximize the company's income from investing the cash between each time it becomes available and when it will be used. Therefore, each transshipment node represents the choice of a specific short-term investment option (e.g., purchasing a certificate of deposit from a bank) over a specific time interval. The resulting network will have a succession of flows representing a schedule for cash becoming available, being invested, and then being used after the maturing of the investment.

Formulation of the Model

Consider a directed and connected network where the n nodes include at least one supply node and at least one demand node. The decision variables are

xij = flow through arc i → j,

and the given information includes

cij = cost per unit flow through arc i → j,
uij = arc capacity for arc i → j,
bi = net flow generated at node i.

The value of bi depends on the nature of node i, where

bi > 0 if node i is a supply node,
bi < 0 if node i is a demand node,
bi = 0 if node i is a transshipment node.

The objective is to minimize the total cost of sending the available supply through the network to satisfy the given demand. By using the convention that summations are taken only over existing arcs, the linear programming formulation of this problem is

Minimize  Z = Σ(i=1 to n) Σ(j=1 to n) cij xij,

subject to

Σ(j=1 to n) xij − Σ(j=1 to n) xji = bi,  for each node i,

and

0 ≤ xij ≤ uij,  for each arc i → j.

The first summation in the node constraints represents the total flow out of node i, whereas the second summation represents the total flow into node i, so the difference is the net flow generated at this node. The pattern of the coefficients in these node constraints is a key characteristic of minimum cost flow problems.
It is not always easy to recognize a minimum cost flow problem, but formulating (or reformulating) a problem so that its constraint coefficients have this pattern is a good way of doing so. This then enables solving the problem extremely efficiently by the network simplex method.

In some applications, it is necessary to have a lower bound Lij > 0 for the flow through each arc i → j. When this occurs, use a translation of variables x′ij = xij − Lij, with x′ij + Lij substituted for xij throughout the model, to convert the model back to the above format with nonnegativity constraints.

It is not guaranteed that the problem actually will possess feasible solutions, depending partially upon which arcs are present in the network and their arc capacities. However, for a reasonably designed network, the main condition needed is the following:

Feasible solutions property: A necessary condition for a minimum cost flow problem to have any feasible solutions is that

Σ(i=1 to n) bi = 0.

That is, the total flow being generated at the supply nodes equals the total flow being absorbed at the demand nodes.

If the values of bi provided for some application violate this condition, the usual interpretation is that either the supplies or the demands (whichever are in excess) actually represent upper bounds rather than exact amounts. When this situation arose for the transportation problem in Sec. 9.1, either a dummy destination was added to receive the excess supply or a dummy source was added to send the excess demand. The analogous step now is that either a dummy demand node should be added to absorb the excess supply (with cij = 0 arcs added from every supply node to this node) or a dummy supply node should be added to generate the flow for the excess demand (with cij = 0 arcs added from this node to every demand node).
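For illustration, here is a self-contained Python sketch (not the book's method; the book develops the network simplex method in the next section) that solves a minimum cost flow problem by the successive-shortest-path approach, using Bellman-Ford for the path search since residual arcs carry negative costs. The data reproduce the Distribution Unlimited Co. example formulated later in this section (Fig. 10.12), with uncapacitated arcs given a capacity equal to the total supply of 90:

```python
from collections import defaultdict

class MinCostFlow:
    """Successive-shortest-path method: repeatedly send flow along the
    cheapest augmenting path in the residual network."""

    def __init__(self):
        self.adj = defaultdict(list)   # node -> indices into the edge lists
        self.to, self.cap, self.cost = [], [], []

    def add_arc(self, u, v, capacity, cost):
        # Forward arc plus a zero-capacity residual partner; the partner
        # of edge e is always edge e ^ 1.
        self.adj[u].append(len(self.to))
        self.to.append(v); self.cap.append(capacity); self.cost.append(cost)
        self.adj[v].append(len(self.to))
        self.to.append(u); self.cap.append(0); self.cost.append(-cost)

    def solve(self, s, t):
        total_cost = 0
        while True:
            # Bellman-Ford: cheapest path from s over positive-capacity arcs.
            dist = defaultdict(lambda: float("inf"))
            dist[s] = 0
            pred = {}                  # node -> edge id of the arc reaching it
            updated = True
            while updated:
                updated = False
                for u in list(self.adj):
                    if dist[u] == float("inf"):
                        continue
                    for e in self.adj[u]:
                        v = self.to[e]
                        if self.cap[e] > 0 and dist[u] + self.cost[e] < dist[v]:
                            dist[v] = dist[u] + self.cost[e]
                            pred[v] = e
                            updated = True
            if dist[t] == float("inf"):
                return total_cost      # no augmenting path left
            # Push as much flow as the cheapest path allows.
            f = float("inf")
            v = t
            while v != s:
                f = min(f, self.cap[pred[v]])
                v = self.to[pred[v] ^ 1]
            v = t
            while v != s:
                e = pred[v]
                self.cap[e] -= f
                self.cap[e ^ 1] += f
                total_cost += f * self.cost[e]
                v = self.to[e ^ 1]

# Distribution Unlimited Co. data (Fig. 10.12); BIG stands in for infinity.
BIG = 90
mcf = MinCostFlow()
for u, v, cost, cap in [("A", "B", 2, 10), ("A", "C", 4, BIG),
                        ("A", "D", 9, BIG), ("B", "C", 3, BIG),
                        ("C", "E", 1, 80), ("D", "E", 3, BIG),
                        ("E", "D", 2, BIG)]:
    mcf.add_arc(u, v, cap, cost)
# A dummy source/sink pair carries the supplies (bA = 50, bB = 40)
# and demands (bD = -30, bE = -60) at zero cost.
mcf.add_arc("s", "A", 50, 0)
mcf.add_arc("s", "B", 40, 0)
mcf.add_arc("D", "t", 30, 0)
mcf.add_arc("E", "t", 60, 0)
total = mcf.solve("s", "t")
print(total)   # 490
```

The dummy source and sink play the same role here as the dummy nodes discussed above: they convert supplies and demands into a single-source, single-sink flow problem.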
For many applications, bi and uij will have integer values, and implementation will require that the flow quantities xij also be integer. Fortunately, just as for the transportation problem, this outcome is guaranteed without explicitly imposing integer constraints on the variables because of the following property.

Integer solutions property: For minimum cost flow problems where every bi and uij have integer values, all the basic variables in every basic feasible (BF) solution (including an optimal one) also have integer values.

An Example

Figure 10.12 shows an example of a minimum cost flow problem. This network actually is the distribution network for the Distribution Unlimited Co. problem presented in Sec. 3.4 (see Fig. 3.13). The quantities given in Fig. 3.13 provide the values of the bi, cij, and uij shown here. The bi values in Fig. 10.12 are shown in square brackets by the nodes, so the supply nodes (bi > 0) are A and B (the company's two factories), the demand nodes (bi < 0) are D and E (two warehouses), and the one transshipment node (bi = 0) is C (a distribution center). The cij values are shown next to the arcs. In this example, all but two of the arcs have arc capacities exceeding the total flow generated (90), so uij = ∞ for all practical purposes. The two exceptions are arc A → B, where uAB = 10, and arc C → E, which has uCE = 80.

■ FIGURE 10.12 The Distribution Unlimited Co. problem formulated as a minimum cost flow problem.

The linear programming model for this example is

Minimize  Z = 2xAB + 4xAC + 9xAD + 3xBC + xCE + 3xDE + 2xED,

subject to

xAB + xAC + xAD = 50
−xAB + xBC = 40
−xAC − xBC + xCE = 0
−xAD + xDE − xED = −30
−xCE − xDE + xED = −60

and

xAB ≤ 10,  xCE ≤ 80,  all xij ≥ 0.

Now note the pattern of coefficients for each variable in the set of five node constraints (the equality constraints).
Each variable has exactly two nonzero coefficients, where one is +1 and the other is −1. This pattern recurs in every minimum cost flow problem, and it is this special structure that leads to the integer solutions property. Another implication of this special structure is that (any) one of the node constraints is redundant. The reason is that summing all these constraint equations yields nothing but zeros on both sides (assuming feasible solutions exist, so the bi values sum to zero), so the negative of any one of these equations equals the sum of the rest of the equations. With just n − 1 nonredundant node constraints, these equations provide just n − 1 basic variables for a BF solution. In the next section, you will see that the network simplex method treats the xij ≤ uij constraints as mirror images of the nonnegativity constraints, so the total number of basic variables is n − 1. This leads to a direct correspondence between the n − 1 arcs of a spanning tree and the n − 1 basic variables—but more about that story later.

Using Excel to Formulate and Solve Minimum Cost Flow Problems

Excel provides a convenient way of formulating and solving small minimum cost flow problems like this one, as well as somewhat larger problems. Figure 10.13 shows how this can be done. The format is almost the same as displayed in Fig. 10.11 for a maximum flow problem. One difference is that the unit costs (cij) now need to be included (in column G). Because bi values are specified for every node, net flow constraints are needed for all the nodes.

■ FIGURE 10.13 A spreadsheet formulation for the Distribution Unlimited Co. minimum cost flow problem, where the changing cells Ship (D4:D10) show the optimal solution obtained by Solver and the objective cell TotalCost (D12) gives the resulting total cost of the flow of shipments through the network.

However, only two of the arcs happen to need arc capacity
constraints. The objective cell TotalCost (D12) now gives the total cost of the flow (shipments) through the network (see its equation at the bottom of the figure), so the goal specified in Solver is to minimize this quantity. The changing cells Ship (D4:D10) in this spreadsheet show the optimal solution obtained after running Solver.

[Details of Fig. 10.13: the optimal shipments are A→B: 0, A→C: 40, A→D: 10, B→C: 40, C→E: 80, D→E: 0, E→D: 20, with Total Cost (D12) = SUMPRODUCT(D4:D10,G4:G10) = 490. Each Net Flow cell is computed as =SUMIF(From,node,Ship) − SUMIF(To,node,Ship). Solver settings: minimize TotalCost by changing Ship, subject to D4 <= F4, D8 <= F8, and NetFlow = SupplyDemand, with Make Variables Nonnegative checked and the Simplex LP solving method.]

For much larger minimum cost flow problems, the network simplex method described in the next section provides a considerably more efficient solution procedure. It also is an attractive option for solving various special cases of the minimum cost flow problem outlined below. This algorithm is commonly included in mathematical programming software packages. We shall soon solve this same example by the network simplex method. However, let us first see how some special cases fit into the network format of the minimum cost flow problem.

Special Cases

The Transportation Problem. To formulate the transportation problem presented in Sec.
9.1 as a minimum cost flow problem, a supply node is provided for each source, as well as a demand node for each destination, but no transshipment nodes are included in the network. All the arcs are directed from a supply node to a demand node, where distributing xij units from source i to destination j corresponds to a flow of xij through arc i → j. The cost cij per unit distributed becomes the cost cij per unit of flow. Since the transportation problem does not impose upper bound constraints on individual xij, all the uij = ∞. Using this formulation for the P & T Co. transportation problem presented in Table 9.2 yields the network shown in Fig. 9.2. The corresponding network for the general transportation problem is shown in Fig. 9.3.

The Assignment Problem. Since the assignment problem discussed in Sec. 9.3 is a special type of transportation problem, its formulation as a minimum cost flow problem fits into the same format. The additional factors are that (1) the number of supply nodes equals the number of demand nodes, (2) bi = 1 for each supply node, and (3) bi = −1 for each demand node. Figure 9.5 shows this formulation for the general assignment problem.

The Transshipment Problem. This special case actually includes all the general features of the minimum cost flow problem except for not having (finite) arc capacities. Thus, any minimum cost flow problem where each arc can carry any desired amount of flow is also called a transshipment problem. For example, the Distribution Unlimited Co. problem shown in Fig. 10.13 would be a transshipment problem if the upper bounds on the flow through arcs A → B and C → E were removed. Transshipment problems frequently arise as generalizations of transportation problems where units being distributed from each source to each destination can first pass through intermediate points.
These intermediate points may include other sources and destinations, as well as additional transfer points that would be represented by transshipment nodes in the network representation of the problem. For example, the Distribution Unlimited Co. problem can be viewed as a generalization of a transportation problem with two sources (the two factories represented by nodes A and B in Fig. 10.13), two destinations (the two warehouses represented by nodes D and E), and one additional intermediate transfer point (the distribution center represented by node C). (The first section in Chap. 23 on the book’s website includes a further discussion of the transshipment problem.)

The Shortest-Path Problem. Now consider the main version of the shortest-path problem presented in Sec. 10.3 (finding the shortest path from one origin to one destination through an undirected network). To formulate this problem as a minimum cost flow problem, one supply node with a supply of 1 is provided for the origin, one demand node with a demand of 1 is provided for the destination, and the rest of the nodes are transshipment nodes. Because the network of our shortest-path problem is undirected, whereas the minimum cost flow problem is assumed to have a directed network, we replace each link with a pair of directed arcs in opposite directions (depicted by a single line with arrowheads at both ends). The only exceptions are that there is no need to bother with arcs into the supply node or out of the demand node. The distance between nodes i and j becomes the unit cost cij or cji for flow in either direction between these nodes. As with the preceding special cases, no arc capacities are imposed, so all uij = ∞. Figure 10.14 depicts this formulation for the Seervada Park shortest-path problem shown in Fig. 10.1, where the numbers next to the lines now represent the unit cost of flow in either direction.

The Maximum Flow Problem.
The last special case we shall consider is the maximum flow problem described in Sec. 10.5. In this case a network already is provided with one supply node (the source), one demand node (the sink), and various transshipment nodes, as well as the various arcs and arc capacities. Only three adjustments are needed to fit this problem into the format for the minimum cost flow problem. First, set cij = 0 for all existing arcs to reflect the absence of costs in the maximum flow problem. Second, select a quantity F̄, which is a safe upper bound on the maximum feasible flow through the network, and then assign a supply and a demand of F̄ to the supply node and the demand node, respectively. (Because all other nodes are transshipment nodes, they automatically have bi = 0.) Third, add an arc going directly from the supply node to the demand node and assign it an arbitrarily large unit cost of cij = M as well as an unlimited arc capacity (uij = ∞). Because of this positive unit cost for this arc and the zero unit cost for all the other arcs, the minimum cost flow problem will send the maximum feasible flow through the other arcs, which achieves the objective of the maximum flow problem. Applying this formulation to the Seervada Park maximum flow problem shown in Fig. 10.6 yields the network given in Fig. 10.15, where the numbers given next to the original arcs are the arc capacities.

■ FIGURE 10.14 Formulation of the Seervada Park shortest-path problem as a minimum cost flow problem. [All uij = ∞; the cij values are given next to the arcs (e.g., cAD = 7 = cDA); the origin O has b = 1 and the destination T has b = −1, with b = 0 elsewhere.]

Final Comments. Except for the transshipment problem, each of these special cases has been the focus of a previous section in either this chapter or Chap. 9. When each was first presented, we talked about a special-purpose algorithm for solving it very efficiently.
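The three adjustments for the maximum flow problem can be sketched as a small transformation routine. This is an illustrative helper written for this discussion (the safe bound F̄ is taken as the total capacity out of the source, and the tiny three-node test network is made up):

```python
import math

def max_flow_as_min_cost_flow(nodes, arcs, caps, source, sink, M=10**6):
    """Recast a maximum flow problem in minimum cost flow format.
    Returns (b, costs, caps) for the equivalent problem. Illustrative sketch."""
    # A safe upper bound F on the maximum feasible flow: the total
    # capacity of the arcs leaving the source.
    F = sum(cap for (i, j), cap in caps.items() if i == source)
    b = {v: 0 for v in nodes}          # transshipment nodes keep b_i = 0
    b[source], b[sink] = F, -F         # supply F at the source, demand F at the sink
    costs = {arc: 0 for arc in arcs}   # first adjustment: c_ij = 0 on existing arcs
    caps = dict(caps)
    costs[(source, sink)] = M          # third adjustment: bypass arc with huge cost
    caps[(source, sink)] = math.inf    # ... and unlimited capacity
    return b, costs, caps

# Tiny made-up network O -> A -> T; the helper adds the O -> T bypass arc.
b, costs, caps = max_flow_as_min_cost_flow(
    ['O', 'A', 'T'],
    [('O', 'A'), ('A', 'T')],
    {('O', 'A'): 5, ('A', 'T'): 3},
    source='O', sink='T')
print(b, costs[('O', 'T')])   # {'O': 5, 'A': 0, 'T': -5} 1000000
```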
Therefore, it certainly is not necessary to reformulate these special cases to fit the format of the minimum cost flow problem in order to solve them. However, when a computer code is not readily available for the special-purpose algorithm, it is very reasonable to use the network simplex method instead. In fact, recent implementations of the network simplex method have become so powerful that it now provides an excellent alternative to the special-purpose algorithm.

■ FIGURE 10.15 Formulation of the Seervada Park maximum flow problem as a minimum cost flow problem. [All cij = 0 except cOT = M; the uij values are given next to the arcs; bO = F̄ and bT = −F̄, with b = 0 elsewhere, and uOT = ∞.]

The fact that these problems are special cases of the minimum cost flow problem is of interest for other reasons as well. One reason is that the underlying theory for the minimum cost flow problem and for the network simplex method provides a unifying theory for all these special cases. Another reason is that some of the many applications of the minimum cost flow problem include features of one or more of the special cases, so it is important to know how to reformulate these features into the broader framework of the general problem.

■ 10.7 THE NETWORK SIMPLEX METHOD

The network simplex method is a highly streamlined version of the simplex method for solving minimum cost flow problems. As such, it goes through the same basic steps at each iteration—finding the entering basic variable, determining the leaving basic variable, and solving for the new BF solution—in order to move from the current BF solution to a better adjacent one. However, it executes these steps in ways that exploit the special network structure of the problem without ever needing a simplex tableau.

You may note some similarities between the network simplex method and the transportation simplex method presented in Sec. 9.2.
In fact, both are streamlined versions of the simplex method that provide alternative algorithms for solving transportation problems in similar ways. The network simplex method extends these ideas to solving other types of minimum cost flow problems as well.

In this section, we provide a somewhat abbreviated description of the network simplex method that focuses just on the main concepts. We omit certain details needed for a full computer implementation, including how to construct an initial BF solution and how to perform certain calculations (such as for finding the entering basic variable) in the most efficient manner. These details are provided in various more specialized textbooks such as Selected Reference 1.

Incorporating the Upper Bound Technique

The first concept is to incorporate the upper bound technique described in Sec. 8.3 to deal efficiently with the arc capacity constraints xij ≤ uij. Thus, rather than these constraints being treated as functional constraints, they are handled just as nonnegativity constraints are. Therefore, they are considered only when the leaving basic variable is determined. In particular, as the entering basic variable is increased from zero, the leaving basic variable is the first basic variable that reaches either its lower bound (0) or its upper bound (uij). A nonbasic variable at its upper bound xij = uij is replaced with xij = uij − yij, so yij = 0 becomes the nonbasic variable. See Sec. 8.3 for further details.

In our current context, yij has an interesting network interpretation. Whenever yij becomes a basic variable with a strictly positive value (≤ uij), this value can be thought of as flow from node j to node i (so in the “wrong” direction through arc i → j) that, in actuality, is canceling that amount of the previously assigned flow (xij = uij) from node i to node j.
Thus, when xij = uij is replaced with xij = uij − yij, we also replace the real arc i → j with the reverse arc j → i, where this new arc has arc capacity uij (the maximum amount of the xij = uij flow that can be canceled) and unit cost −cij (since each unit of flow canceled saves cij). To reflect the flow of xij = uij through the deleted arc, we shift this amount of net flow generated from node i to node j by decreasing bi by uij and increasing bj by uij. Later, if yij becomes the leaving basic variable by reaching its upper bound, then yij = uij is replaced with yij = uij − xij with xij = 0 as the new nonbasic variable, so the above process would be reversed (replace arc j → i by arc i → j, etc.) to return to the original configuration.

To illustrate this process, consider the minimum cost flow problem shown in Fig. 10.12. While the network simplex method is generating a sequence of BF solutions, suppose that xAB has become the leaving basic variable for some iteration by reaching its upper bound of 10. Consequently, xAB = 10 is replaced with xAB = 10 − yAB, so yAB = 0 becomes the new nonbasic variable. At the same time, we replace arc A → B with arc B → A (with yAB as its flow quantity), and we assign this new arc a capacity of 10 and a unit cost of −2. To take xAB = 10 into account, we also decrease bA from 50 to 40 and increase bB from 40 to 50. The resulting adjusted network is shown in Fig. 10.16.

■ FIGURE 10.16 The adjusted network for the example when the upper bound technique leads to replacing xAB = 10 with xAB = 10 − yAB. [bA = 40, bB = 50, bC = 0, bD = −30, bE = −60; cBA = −2 with uBA = 10; the other unit costs and uCE = 80 are unchanged.]

We shall soon illustrate the entire network simplex method with this same example, starting with yAB = 0 (xAB = 10) as a nonbasic variable and so using Fig. 10.16.
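The bookkeeping in this adjustment is easy to mis-apply by hand, so here is the same A → B substitution sketched as a small helper (an illustrative function written for this discussion, not part of any solver):

```python
def reverse_saturated_arc(arc, b, costs, caps):
    """Replace real arc i->j, whose flow is fixed at its upper bound u_ij,
    with the reverse arc j->i (capacity u_ij, unit cost -c_ij), shifting
    u_ij units of generated net flow from node i to node j. Mirrors the
    substitution x_ij = u_ij - y_ij. Illustrative sketch."""
    i, j = arc
    u = caps.pop(arc)
    c = costs.pop(arc)
    caps[(j, i)] = u       # up to u_ij units of the saturated flow can be canceled
    costs[(j, i)] = -c     # each unit canceled saves c_ij
    b[i] -= u              # shift the net flow generated by x_ij = u_ij
    b[j] += u

# Fig. 10.12 data, with arc A->B at its upper bound of 10:
b = {'A': 50, 'B': 40, 'C': 0, 'D': -30, 'E': -60}
costs = {('A', 'B'): 2}
caps = {('A', 'B'): 10}
reverse_saturated_arc(('A', 'B'), b, costs, caps)
print(b['A'], b['B'], costs[('B', 'A')], caps[('B', 'A')])   # 40 50 -2 10, as in Fig. 10.16
```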
A later iteration will show xCE reaching its upper bound of 80 and so being replaced with xCE = 80 − yCE, and so on, and then the next iteration has yAB reaching its upper bound of 10. You will see that all these operations are performed directly on the network, so we will not need to use the xij or yij labels for arc flows or even to keep track of which arcs are real arcs and which are reverse arcs (except when we record the final solution).

Using the upper bound technique leaves the node constraints (flow out minus flow in = bi) as the only functional constraints. Minimum cost flow problems tend to have far more arcs than nodes, so the resulting number of functional constraints generally is only a small fraction of what it would have been if the arc capacity constraints had been included. The computation time for the simplex method goes up relatively rapidly with the number of functional constraints, but only slowly with the number of variables (or the number of bounding constraints on these variables). Therefore, incorporating the upper bound technique here tends to provide a tremendous saving in computation time. However, this technique is not needed for uncapacitated minimum cost flow problems (including all but the last special case considered in the preceding section), where there are no arc capacity constraints.

Correspondence between BF Solutions and Feasible Spanning Trees

The most important concept underlying the network simplex method is its network representation of BF solutions. Recall from Sec. 10.6 that with n nodes, every BF solution has (n − 1) basic variables, where each basic variable xij represents the flow through arc i → j. These (n − 1) arcs are referred to as basic arcs. (Similarly, the arcs corresponding to the nonbasic variables xij = 0 or yij = 0 are called nonbasic arcs.)

A key property of basic arcs is that they never form undirected cycles.
(This property prevents the resulting solution from being a weighted average of another pair of feasible solutions, which would violate one of the general properties of BF solutions.) However, any set of n − 1 arcs that contains no undirected cycles forms a spanning tree. Therefore, any complete set of n − 1 basic arcs forms a spanning tree. Thus, BF solutions can be obtained by “solving” spanning trees, as summarized below.

A spanning tree solution is obtained as follows:

1. For the arcs not in the spanning tree (the nonbasic arcs), set the corresponding variables (xij or yij) equal to zero.
2. For the arcs that are in the spanning tree (the basic arcs), solve for the corresponding variables (xij or yij) in the system of linear equations provided by the node constraints.

(The network simplex method actually solves for the new BF solution from the current one much more efficiently, without solving this system of equations from scratch.) Note that this solution process does not consider either the nonnegativity constraints or the arc capacity constraints for the basic variables, so the resulting spanning tree solution may or may not be feasible with respect to these constraints—which leads to our next definition:

A feasible spanning tree is a spanning tree whose solution from the node constraints also satisfies all the other constraints (0 ≤ xij ≤ uij or 0 ≤ yij ≤ uij).

With these definitions, we now can summarize our key conclusion as follows:

The fundamental theorem for the network simplex method says that basic solutions are spanning tree solutions (and conversely) and that BF solutions are solutions for feasible spanning trees (and conversely).

To begin illustrating the application of this fundamental theorem, consider the network shown in Fig. 10.16 that results from replacing xAB = 10 with xAB = 10 − yAB for our example in Fig. 10.12. One spanning tree for this network is the one shown in Fig. 10.3e, where the arcs are A → D, D → E, C → E, and B → C.
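Step 2 of this recipe can be carried out by repeatedly "peeling off" leaf nodes of the tree, since a leaf's single incident basic arc is determined by that node's constraint alone. The following sketch (an illustrative implementation written for this discussion, not the book's procedure) applies the idea to the spanning tree just named, using the adjusted bi values of Fig. 10.16:

```python
def spanning_tree_solution(b, tree_arcs):
    """Solve the node constraints (flow out minus flow in = b_i) for the
    basic arcs of a spanning tree; nonbasic arcs carry zero flow.
    Illustrative leaf-elimination sketch."""
    flow = {}
    residual = dict(b)            # net supply still to be routed at each node
    remaining = list(tree_arcs)
    active = set(b)
    while remaining:
        for node in list(active):
            incident = [a for a in remaining if node in a]
            if len(incident) != 1:
                continue          # not a leaf of the remaining tree yet
            i, j = incident[0]
            if node == i:         # the arc points out of the leaf
                f = residual[i]
                residual[j] += f  # those f units arrive at node j
            else:                 # the arc points into the leaf
                f = -residual[j]
                residual[i] -= f  # node i must send f units along this arc
            flow[(i, j)] = f
            remaining.remove((i, j))
            active.remove(node)
            break
        else:
            raise ValueError("the arcs do not form a spanning tree")
    return flow

# Adjusted network of Fig. 10.16 (x_AB = 10 - y_AB already substituted):
b = {'A': 40, 'B': 50, 'C': 0, 'D': -30, 'E': -60}
flows = spanning_tree_solution(b, [('A', 'D'), ('D', 'E'), ('C', 'E'), ('B', 'C')])
print(flows)   # x_AD = 40, x_DE = 10, x_CE = 50, x_BC = 50, matching Fig. 10.17
```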
With these as the basic arcs, the process of finding the spanning tree solution is shown below. On the left is the set of node constraints given in Sec. 10.6 after 10 − yAB is substituted for xAB, where the basic variables are shown in boldface. On the right, starting at the top and moving down, is the sequence of steps for setting or calculating the values of the variables.

−yAB + xAC + xAD = 40
 yAB + xBC = 50
−xAC − xBC + xCE = 0
−xAD + xDE − xED = −30
−xCE − xDE + xED = −60

yAB = 0, xAC = 0, xED = 0, so
xAD = 40.
xBC = 50.
xCE = xAC + xBC, so xCE = 50.
xDE = xAD − 30, so xDE = 10.
Redundant.

Since the values of all these basic variables satisfy the nonnegativity constraints and the one relevant arc capacity constraint (xCE ≤ 80), the spanning tree is a feasible spanning tree, so we have a BF solution. We shall use this solution as the initial BF solution for demonstrating the network simplex method. Figure 10.17 shows its network representation, namely, the feasible spanning tree and its solution. Thus, the numbers given next to the arcs now represent flows (values of xij) rather than the unit costs cij previously given. (To help you distinguish, we shall always put parentheses around flows but not around costs.)

■ FIGURE 10.17 The initial feasible spanning tree and its solution for the example. [Basic arcs and flows: xAD = 40, xDE = 10, xCE = 50, xBC = 50; bA = 40, bB = 50, bC = 0, bD = −30, bE = −60.]

Selecting the Entering Basic Variable

To begin an iteration of the network simplex method, recall that the standard simplex method criterion for selecting the entering basic variable is to choose the nonbasic variable which, when increased from zero, will improve Z at the fastest rate. Now let us see how this is done without having a simplex tableau. To illustrate, consider the nonbasic variable xAC in our initial BF solution, i.e., the nonbasic arc A → C.
Increasing xAC from zero to some value θ means that the arc A → C with flow θ must be added to the network shown in Fig. 10.17. Adding a nonbasic arc to a spanning tree always creates a unique undirected cycle, where the cycle in this case is seen in Fig. 10.18 to be AC–CE–DE–AD. Figure 10.18 also shows the effect of adding the flow θ to arc A → C on the other flows in the network. Specifically, the flow is thereby increased by θ for other arcs that have the same direction as A → C in the cycle (arc C → E), whereas the net flow is decreased by θ for other arcs whose direction is opposite to A → C in the cycle (arcs D → E and A → D). In the latter case, the new flow is, in effect, canceling a flow of θ in the opposite direction. Arcs not in the cycle (arc B → C) are unaffected by the new flow. (Check these conclusions by noting the effect of the change in xAC on the values of the other variables in the solution just derived for the initial feasible spanning tree.)

Now what is the incremental effect on Z (total flow cost) from adding the flow θ to arc A → C? Figure 10.19 shows most of the answer by giving the unit cost times the change in the flow for each arc of Fig. 10.18. Therefore, the overall increment in Z is

ΔZ = cACθ + cCEθ + cDE(−θ) + cAD(−θ)
   = 4θ + θ − 3θ − 9θ
   = −7θ.

Setting θ = 1 then gives the rate of change of Z as xAC is increased, namely,

ΔZ = −7, when θ = 1.

Because the objective is to minimize Z, this large rate of decrease in Z by increasing xAC is very desirable, so xAC becomes a prime candidate to be the entering basic variable. We now need to perform the same analysis for the other nonbasic variables before we make the final selection of the entering basic variable. The only other nonbasic variables are yAB and xED, corresponding to the two other nonbasic arcs B → A and E → D in Fig. 10.16. Figure 10.20 shows the incremental effect on costs of adding arc B → A with flow θ to the initial feasible spanning tree given in Fig. 10.17.
Adding this arc creates the undirected cycle BA–AD–DE–CE–BC, so the flow increases by θ for arcs A → D and D → E but decreases by θ for the two arcs in the opposite direction on this cycle, C → E and B → C. These flow increments, θ and −θ, are the multiplicands for the cij values in the figure. Therefore,

ΔZ = −2θ + 9θ + 3θ + 1(−θ) + 3(−θ) = 6θ
   = 6, when θ = 1.

Since the objective is to minimize Z, the fact that Z increases rather than decreases when yAB (flow through the reverse arc B → A) is increased from zero rules out this variable as a candidate to be the entering basic variable. (Remember that increasing yAB from zero really means decreasing xAB, flow through the real arc A → B, from its upper bound of 10.)

■ FIGURE 10.18 The effect on flows of adding arc A → C with flow θ to the initial feasible spanning tree. [xAC = θ, xCE = 50 + θ, xDE = 10 − θ, xAD = 40 − θ, xBC = 50.]

■ FIGURE 10.19 The incremental effect on costs of adding arc A → C with flow θ to the initial feasible spanning tree. [4θ on A → C, 1θ on C → E, 3(−θ) on D → E, 9(−θ) on A → D, 3(0) on B → C.]

■ FIGURE 10.20 The incremental effect on costs of adding arc B → A with flow θ to the initial feasible spanning tree. [−2θ on B → A, 9θ on A → D, 3θ on D → E, 1(−θ) on C → E, 3(−θ) on B → C.]

A similar result is obtained for the last nonbasic arc E → D. Adding this arc with flow θ to the initial feasible spanning tree creates the undirected cycle ED–DE shown in Fig. 10.21, so the flow also increases by θ for arc D → E, but no other arcs are affected. Therefore,

ΔZ = 2θ + 3θ = 5θ
   = 5, when θ = 1,

so xED is ruled out as a candidate to be the entering basic variable.

■ FIGURE 10.21 The incremental effect on costs of adding arc E → D with flow θ to the initial feasible spanning tree. [2θ on E → D, 3θ on D → E; all other arcs unaffected.]
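These cycle calculations can be automated by tracing the unique tree path between the entering arc's endpoints and accumulating signed unit costs. The sketch below is written for this illustration (the book notes that actual implementations use a more efficient algebraic procedure instead):

```python
from collections import defaultdict

def delta_z(tree_arcs, cost, entering):
    """Rate of change of Z when one unit of flow is sent through the
    nonbasic arc `entering`: trace the unique undirected cycle it creates
    with the spanning tree, adding c_ij for arcs pointing with the cycle
    and subtracting c_ij for arcs pointing against it. Illustrative sketch."""
    adj = defaultdict(list)
    for u, v in tree_arcs:
        adj[u].append(v)
        adj[v].append(u)

    def tree_path(u, target, seen):       # the unique path through the tree
        if u == target:
            return [u]
        seen.add(u)
        for w in adj[u]:
            if w not in seen:
                p = tree_path(w, target, seen)
                if p:
                    return [u] + p
        return None

    i, j = entering
    dz = cost[entering]                   # the entering arc itself carries +1 unit
    path = tree_path(j, i, set())         # close the cycle from j back to i
    for u, v in zip(path, path[1:]):
        if (u, v) in tree_arcs:           # same direction as the cycle: flow +1
            dz += cost[(u, v)]
        else:                             # opposite direction: flow -1
            dz -= cost[(v, u)]
    return dz

# Initial feasible spanning tree (Fig. 10.17) and unit costs from Fig. 10.16:
tree = [('A', 'D'), ('D', 'E'), ('C', 'E'), ('B', 'C')]
cost = {('A', 'D'): 9, ('D', 'E'): 3, ('C', 'E'): 1, ('B', 'C'): 3,
        ('A', 'C'): 4, ('B', 'A'): -2, ('E', 'D'): 2}
print(delta_z(tree, cost, ('A', 'C')),   # -7
      delta_z(tree, cost, ('B', 'A')),   # 6
      delta_z(tree, cost, ('E', 'D')))   # 5
```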
To summarize,

ΔZ = −7, if ΔxAC = 1,
ΔZ = 6, if ΔyAB = 1,
ΔZ = 5, if ΔxED = 1,

so the negative value for xAC implies that xAC becomes the entering basic variable for the first iteration. If there had been more than one nonbasic variable with a negative value of ΔZ, then the one having the largest absolute value would have been chosen. (If there had been no nonbasic variables with a negative value of ΔZ, the current BF solution would have been optimal.)

Rather than identifying undirected cycles, etc., the network simplex method actually obtains these ΔZ values by an algebraic procedure that is considerably more efficient (especially for large networks). The procedure is analogous to that used by the transportation simplex method (see Sec. 9.2) to solve for ui and vj in order to obtain the value of cij − ui − vj for each nonbasic variable xij. We shall not describe this procedure further, so you should just use the undirected cycles method when you are doing problems at the end of the chapter.

Finding the Leaving Basic Variable and the Next BF Solution

After selection of the entering basic variable, only one more quick step is needed to simultaneously determine the leaving basic variable and solve for the next BF solution. For the first iteration of the example, the key is Fig. 10.18. Since xAC is the entering basic variable, the flow θ through arc A → C is to be increased from zero as far as possible until one of the basic variables reaches either its lower bound (0) or its upper bound (uij). For those arcs whose flow increases with θ in Fig. 10.18 (arcs A → C and C → E), only the upper bounds (uAC = ∞ and uCE = 80) need to be considered:

xAC = θ ≤ ∞.
xCE = 50 + θ ≤ 80, so θ ≤ 30.

For those arcs whose flow decreases with θ (arcs D → E and A → D), only the lower bound of 0 needs to be considered:

xDE = 10 − θ ≥ 0, so θ ≤ 10.
xAD = 40 − θ ≥ 0, so θ ≤ 40.
Arcs whose flow is unchanged by θ (i.e., those not part of the undirected cycle), which is just arc B → C in Fig. 10.18, can be ignored since no bound will be reached as θ is increased. For the five arcs in Fig. 10.18, the conclusion is that xDE must be the leaving basic variable because it reaches a bound for the smallest value of θ (10). Setting θ = 10 in this figure thereby yields the flows through the basic arcs in the next BF solution:

xAC = θ = 10,
xCE = 50 + θ = 60,
xAD = 40 − θ = 30,
xBC = 50.

The corresponding feasible spanning tree is shown in Fig. 10.22. If the leaving basic variable had reached its upper bound, then the adjustments discussed for the upper bound technique would have been needed at this point (as you will see illustrated during the next two iterations). However, because it was the lower bound of 0 that was reached, nothing more needs to be done.

Completing the Example. For the two remaining iterations needed to reach the optimal solution, the primary focus will be on some features of the upper bound technique they illustrate. The pattern for finding the entering basic variable, the leaving basic variable, and the next BF solution will be very similar to that described for the first iteration, so we only summarize these steps briefly.

Iteration 2: Starting with the feasible spanning tree shown in Fig. 10.22 and referring to Fig. 10.16 for the unit costs cij, we arrive at the calculations for selecting the entering basic variable in Table 10.4. The second column identifies the unique undirected cycle that is created by adding the nonbasic arc in the first column to this spanning tree, and the third column shows the incremental effect on costs because of the changes in flows on this cycle caused by adding a flow of θ = 1 to the nonbasic arc. Arc E → D has the largest (in absolute terms) negative value of ΔZ, so xED is the entering basic variable.
We now make the flow θ through arc E → D as large as possible, while satisfying the following flow bounds:

xED = θ ≤ uED = ∞, so θ ≤ ∞.
xAD = 30 − θ ≥ 0, so θ ≤ 30.
xAC = 10 + θ ≤ uAC = ∞, so θ ≤ ∞.
xCE = 60 + θ ≤ uCE = 80, so θ ≤ 20. ← Minimum

■ FIGURE 10.22 The second feasible spanning tree and its solution for the example. [Basic arcs and flows: xAD = 30, xAC = 10, xCE = 60, xBC = 50.]

■ TABLE 10.4 Calculations for selecting the entering basic variable for iteration 2

Nonbasic Arc | Cycle Created | ΔZ When θ = 1
B → A | BA–AC–BC | −2 + 4 − 3 = −1
D → E | DE–CE–AC–AD | 3 − 1 − 4 + 9 = 7
E → D | ED–AD–AC–CE | 2 − 9 + 4 + 1 = −2 ← Minimum

Because xCE imposes the smallest upper bound (20) on θ, xCE becomes the leaving basic variable. Setting θ = 20 in the above expressions for xED, xAD, and xAC then yields the flow through the basic arcs for the next BF solution (with xBC = 50 unaffected by θ), as shown in Fig. 10.23.

What is of special interest here is that the leaving basic variable xCE was obtained by the variable reaching its upper bound (80). Therefore, by using the upper bound technique, xCE is replaced with 80 − yCE, where yCE = 0 is the new nonbasic variable. At the same time, the original arc C → E with cCE = 1 and uCE = 80 is replaced with the reverse arc E → C with cEC = −1 and uEC = 80. The values of bE and bC also are adjusted by adding 80 to bE and subtracting 80 from bC. The resulting adjusted network is shown in Fig. 10.24, where the nonbasic arcs are shown as dashed lines and the numbers by all the arcs are unit costs.

■ FIGURE 10.23 The third feasible spanning tree and its solution for the example. [Basic arcs and flows: xAD = 10, xAC = 30, xED = 20, xBC = 50.]

■ FIGURE 10.24 The adjusted network with unit costs at the completion of iteration 2.
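The flow-bound scan used in each iteration is a simple ratio test over the cycle. Here is a sketch (the helper and its parameter names are my own) using the iteration 2 data above:

```python
import math

def ratio_test(increasing, decreasing, upper):
    """Largest feasible theta and the basic variable that blocks it.
    increasing: {arc: current flow} for flows that become flow + theta
    (limited by upper bounds); decreasing: likewise for flow - theta
    (limited by the lower bound 0). Illustrative sketch."""
    theta, leaving = math.inf, None
    for arc, f in increasing.items():
        slack = upper.get(arc, math.inf) - f   # room left below the upper bound
        if slack < theta:
            theta, leaving = slack, arc
    for arc, f in decreasing.items():
        if f < theta:                          # room left above zero
            theta, leaving = f, arc
    return theta, leaving

# Iteration 2: entering arc E->D, flows from Fig. 10.22, u_CE = 80.
theta, leaving = ratio_test(
    increasing={('E', 'D'): 0, ('A', 'C'): 10, ('C', 'E'): 60},
    decreasing={('A', 'D'): 30},
    upper={('C', 'E'): 80})
print(theta, leaving)   # 20 ('C', 'E') -- x_CE leaves at its upper bound
```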
[Details of Fig. 10.24: bA = 40, bB = 50, bC = −80, bD = −30, bE = 20; cBA = −2 (uBA = 10) and cEC = −1 (uEC = 80), with the other unit costs unchanged; nonbasic arcs dashed.]

■ TABLE 10.5 Calculations for selecting the entering basic variable for iteration 3

Nonbasic Arc | Cycle Created | ΔZ When θ = 1
B → A | BA–AC–BC | −2 + 4 − 3 = −1 ← Minimum
D → E | DE–ED | 3 + 2 = 5
E → C | EC–AC–AD–ED | −1 − 4 + 9 − 2 = 2

Iteration 3: If Figs. 10.23 and 10.24 are used to initiate the next iteration, Table 10.5 shows the calculations that lead to selecting yAB (reverse arc B → A) as the entering basic variable. We then add as much flow θ through arc B → A as possible while satisfying the flow bounds below:

yAB = θ ≤ uBA = 10, so θ ≤ 10. ← Minimum
xAC = 30 + θ ≤ uAC = ∞, so θ ≤ ∞.
xBC = 50 − θ ≥ 0, so θ ≤ 50.

The smallest upper bound (10) on θ is imposed by yAB, so this variable becomes the leaving basic variable. Setting θ = 10 in these expressions for xAC and xBC (along with the unchanged values of xAD = 10 and xED = 20) then yields the next BF solution, as shown in Fig. 10.25.

As with iteration 2, the leaving basic variable (yAB) was obtained here by the variable reaching its upper bound. In addition, there are two other points of special interest concerning this particular choice. One is that the entering basic variable yAB also became the leaving basic variable on the same iteration! This event occurs occasionally with the upper bound technique whenever increasing the entering basic variable from zero causes its upper bound to be reached first before any of the other basic variables reach a bound. The other interesting point is that the arc B → A that now needs to be replaced by a reverse arc A → B (because of the leaving basic variable reaching an upper bound) already is a reverse arc! This is no problem, because the reverse arc for a reverse arc is simply the original real arc. Therefore, the arc B → A (with cBA = −2 and uBA = 10) in Fig.
10.24 now is replaced by arc A→B (with cAB = 2 and uAB = 10), which is the arc between nodes A and B in the original network shown in Fig. 10.12, and a generated net flow of 10 is shifted from node B (bB = 50 → 40) to node A (bA = 40 → 50). Simultaneously, the variable yAB = 10 is replaced by 10 − xAB, with xAB = 0 as the new nonbasic variable. The resulting adjusted network is shown in Fig. 10.26.

Passing the Optimality Test: At this point, the algorithm would attempt to use Figs. 10.25 and 10.26 to find the next entering basic variable with the usual calculations shown in Table 10.6.

■ FIGURE 10.25 The fourth (and final) feasible spanning tree and its solution for the example.

■ FIGURE 10.26 The adjusted network with unit costs at the completion of iteration 3.

■ TABLE 10.6 Calculations for the optimality test at the end of iteration 3

Nonbasic Arc | Cycle Created | ΔZ When θ = 1
AB | AB–BC–AC | 2 + 3 − 4 = 1
DE | DE–ED | 3 + 2 = 5
EC | EC–AC–AD–ED | −1 − 4 + 9 − 2 = 2

However, none of the nonbasic arcs gives a negative value of ΔZ, so an improvement in Z cannot be achieved by introducing flow through any of them. This means that the current BF solution shown in Fig. 10.25 has passed the optimality test, so the algorithm stops. To identify the flows through real arcs rather than reverse arcs for this optimal solution, the current adjusted network (Fig. 10.26) should be compared with the original network (Fig. 10.12). Note that each of the arcs has the same direction in the two networks with the one exception of the arc between nodes C and E. This means that the only reverse arc in Fig. 10.26 is arc E→C, where its flow is given by the variable yCE. Therefore, calculate xCE = uCE − yCE = 80 − yCE. Arc E→C happens to be a nonbasic arc, so yCE = 0 and xCE = 80 is the flow through the real arc C→E.
All the other flows through real arcs are the flows given in Fig. 10.25. Therefore, the optimal solution is the one shown in Fig. 10.27.

Another complete example of applying the network simplex method is provided by the demonstration in the Network Analysis Area of your OR Tutor. An additional example is given in the Solved Examples section of the book's website as well. Also included in your IOR Tutorial is an interactive procedure for the network simplex method.

■ FIGURE 10.27 The optimal flow pattern in the original network for the Distribution Unlimited Co. example.

■ 10.8 A NETWORK MODEL FOR OPTIMIZING A PROJECT'S TIME-COST TRADE-OFF

Networks provide a natural way of graphically displaying the flow of activities in a major project, such as a construction project or a research-and-development project. Therefore, one of the most important applications of network theory is in aiding the management of such projects. In the late 1950s, two network-based OR techniques, PERT (program evaluation and review technique) and CPM (critical path method), were developed independently to assist project managers in carrying out their responsibilities. These techniques were designed to help plan how to coordinate a project's various activities, develop a realistic schedule for the project, and then monitor the progress of the project after it is under way. Over the years, the better features of these two techniques have tended to be merged into what is now commonly referred to as the PERT/CPM technique. This network approach to project management continues to be widely used today. One of the supplementary chapters on the book's website, Chap. 22 (Project Management with PERT/CPM), provides a complete description of the various features of PERT/CPM. We now will highlight one of these features for two reasons.
First, it is a network optimization model and so fits into the theme of the current chapter. Second, it illustrates the kind of important applications that such models can have. The feature we will highlight is referred to as the CPM method of time-cost trade-offs because it was a key part of the original CPM technique. It addresses the following problem for a project that needs to be completed by a specific deadline. Suppose that this deadline would not be met if all the activities are performed in the normal manner, but that there are various ways of meeting the deadline by spending more money to expedite some of the activities. What is the optimal plan for expediting some activities so as to minimize the total cost of performing the project within the deadline? The general approach begins by using a network to display the various activities and the order in which they need to be performed. An optimization model then is formulated that can be solved by using either marginal analysis or linear programming. As with the other network optimization models considered earlier in this chapter, the special structure of the problem makes it relatively easy to solve efficiently. This approach is illustrated below by using the same prototype example that is carried through Chap. 22.

A Prototype Example: The Reliable Construction Co. Problem

The RELIABLE CONSTRUCTION COMPANY has just made the winning bid of $5.4 million to construct a new plant for a major manufacturer. The manufacturer needs the plant to go into operation within 40 weeks. Reliable is assigning its best construction manager, David Perty, to this project to help ensure that it stays on schedule. Mr. Perty will need to arrange for a number of crews to perform the various construction activities at different times. Table 10.7 shows his list of the various activities. The third column provides important additional information for coordinating the scheduling of the crews.
For any given activity, its immediate predecessors (as given in the third column of Table 10.7) are those activities that must be completed no later than the starting time of the given activity. (Similarly, the given activity is called an immediate successor of each of its immediate predecessors.)

■ TABLE 10.7 Activity list for the Reliable Construction Co. project

Activity | Activity Description | Immediate Predecessors | Estimated Duration
A | Excavate | — | 2 weeks
B | Lay the foundation | A | 4 weeks
C | Put up the rough wall | B | 10 weeks
D | Put up the roof | C | 6 weeks
E | Install the exterior plumbing | C | 4 weeks
F | Install the interior plumbing | E | 5 weeks
G | Put up the exterior siding | D | 7 weeks
H | Do the exterior painting | E, G | 9 weeks
I | Do the electrical work | C | 7 weeks
J | Put up the wallboard | F, I | 8 weeks
K | Install the flooring | J | 4 weeks
L | Do the interior painting | J | 5 weeks
M | Install the exterior fixtures | H | 2 weeks
N | Install the interior fixtures | K, L | 6 weeks

For example, the top entries in this column indicate that

1. Excavation does not need to wait for any other activities.
2. Excavation must be completed before starting to lay the foundation.
3. The foundation must be completely laid before starting to put up the rough wall, and so on.

When a given activity has more than one immediate predecessor, all must be finished before the activity can begin. In order to schedule the activities, Mr. Perty consults with each of the crew supervisors to develop an estimate of how long each activity should take when it is done in the normal way. These estimates are given in the rightmost column of Table 10.7. Adding up these times gives a grand total of 79 weeks, which is far beyond the deadline of 40 weeks for the project. Fortunately, some of the activities can be done in parallel, which substantially reduces the project completion time.
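The activity list in Table 10.7 fits in a small data structure. The sketch below is illustrative only (the dictionary layout and names are our own, not the book's); it also recovers each activity's immediate successors by inverting the predecessor lists.

```python
# Table 10.7 as a dict: activity -> (immediate predecessors, duration in weeks).
activities = {
    "A": ([], 2),         "B": (["A"], 4),       "C": (["B"], 10),
    "D": (["C"], 6),      "E": (["C"], 4),       "F": (["E"], 5),
    "G": (["D"], 7),      "H": (["E", "G"], 9),  "I": (["C"], 7),
    "J": (["F", "I"], 8), "K": (["J"], 4),       "L": (["J"], 5),
    "M": (["H"], 2),      "N": (["K", "L"], 6),
}

# Grand total of normal durations (79 weeks, far beyond the 40-week deadline):
print(sum(d for _, d in activities.values()))  # 79

# Immediate successors follow directly by inverting the predecessor lists:
successors = {a: [b for b, (preds, _) in activities.items() if a in preds]
              for a in activities}
print(successors["E"])  # ['F', 'H']
```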
We will see next how the project can be displayed graphically to better visualize the flow of the activities and to determine the total time required to complete the project if no delays occur. We have seen in this chapter how valuable networks can be to represent and help analyze many kinds of problems. In much the same way, networks play a key role in dealing with projects. They make it possible to show the relationships between the activities and to display the overall plan for the project succinctly. They also are helpful for analyzing the project.

Project Networks

A network used to represent a project is called a project network. A project network consists of a number of nodes (typically shown as small circles or rectangles) and a number of arcs (shown as arrows) that connect two different nodes. As Table 10.7 indicates, three types of information are needed to describe a project:

1. Activity information: Break down the project into its individual activities (at the desired level of detail).
2. Precedence relationships: Identify the immediate predecessor(s) for each activity.
3. Time information: Estimate the duration of each activity.

The project network should convey all this information. Two alternative types of project networks are available for doing this. One type is the activity-on-arc (AOA) project network, where each activity is represented by an arc. A node is used to separate an activity (an outgoing arc) from each of its immediate predecessors (an incoming arc). The sequencing of the arcs thereby shows the precedence relationships between the activities. The second type is the activity-on-node (AON) project network, where each activity is represented by a node. Then the arcs are used just to show the precedence relationships that exist between the activities. In particular, the node for each activity with immediate predecessors has an arc coming in from each of these predecessors.
The original versions of PERT and CPM used AOA project networks, so this was the conventional type for some years. However, AON project networks have some important advantages over AOA project networks for conveying the same information: 1. AON project networks are considerably easier to construct than AOA project networks. 2. AON project networks are easier to understand than AOA project networks for inexperienced users, including many managers. 3. AON project networks are easier to revise than AOA project networks when there are changes in the project. For these reasons, AON project networks have become increasingly popular with practitioners. It appears that they may become the standard format for project networks. Therefore, we will focus solely on AON project networks, and will drop the adjective AON. Figure 10.28 shows the project network for Reliable’s project.2 Referring also to the third column of Table 10.7, note how there is an arc leading to each activity from each of its immediate predecessors. Because activity A has no immediate predecessors, there is an arc leading from the start node to this activity. Similarly, since activities M and N have no immediate successors, arcs lead from these activities to the finish node. Therefore, the project network nicely displays at a glance all the precedence relationships between all the activities (plus the start and finish of the project). Based on the rightmost column of Table 10.7, the number next to the node for each activity then records the estimated duration (in weeks) of that activity. The Critical Path How long should the project take? We noted earlier that summing the durations of all the activities gives a grand total of 79 weeks. However, this isn’t the answer to the question because some of the activities can be performed (roughly) simultaneously. 
What is relevant instead is the length of each path through the network:

A path through a project network is one of the routes following the arcs from the START node to the FINISH node. The length of a path is the sum of the (estimated) durations of the activities on the path.

The six paths through the project network in Fig. 10.28 are given in Table 10.8, along with the calculations of the lengths of these paths. The path lengths range from 31 weeks up to 44 weeks for the longest path (the fourth one in the table).

■ FIGURE 10.28 The project network for the Reliable Construction Co. project.

2 Although project networks often are drawn from left to right, we go from top to bottom to better fit on the printed page.

■ TABLE 10.8 The paths and path lengths through Reliable's project network

Path | Length
START→A→B→C→D→G→H→M→FINISH | 2 + 4 + 10 + 6 + 7 + 9 + 2 = 40 weeks
START→A→B→C→E→H→M→FINISH | 2 + 4 + 10 + 4 + 9 + 2 = 31 weeks
START→A→B→C→E→F→J→K→N→FINISH | 2 + 4 + 10 + 4 + 5 + 8 + 4 + 6 = 43 weeks
START→A→B→C→E→F→J→L→N→FINISH | 2 + 4 + 10 + 4 + 5 + 8 + 5 + 6 = 44 weeks
START→A→B→C→I→J→K→N→FINISH | 2 + 4 + 10 + 7 + 8 + 4 + 6 = 41 weeks
START→A→B→C→I→J→L→N→FINISH | 2 + 4 + 10 + 7 + 8 + 5 + 6 = 42 weeks

So given these path lengths, what should be the (estimated) project duration (the total time required for the project)? Let us reason it out. Since the activities on any given path must be done in sequence with no overlap, the project duration cannot be shorter than the path length. However, the project duration can be longer because some activity on the path with multiple immediate predecessors might have to wait longer for an immediate predecessor not on the path to finish than for the one on the path. For example, consider the second path in Table 10.8 and focus on activity H. This activity has two immediate predecessors, one (activity G) not on the path and one (activity E) that is. After activity C finishes, only 4 more weeks are required for activity E, but 13 weeks will be needed for activity D and then activity G to finish. Therefore, the project duration must be considerably longer than the length of the second path in the table.

However, the project duration will not be longer than one particular path. This is the longest path through the project network. The activities on this path can be performed sequentially without interruption. (Otherwise, this would not be the longest path.) Therefore, the time required to reach the FINISH node equals the length of this path. Furthermore, all the shorter paths will reach the FINISH node no later than this. Here is the key conclusion:

The (estimated) project duration equals the length of the longest path through the project network. This longest path is called the critical path.3 (If more than one path ties for the longest, they all are critical paths.)

Thus, for the Reliable Construction Co. project, we have

Critical path: START→A→B→C→E→F→J→L→N→FINISH
(Estimated) project duration = 44 weeks.

Therefore, if no delays occur, the total time required to complete the project should be about 44 weeks.
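For a project this small, the path enumeration of Table 10.8 can be reproduced directly in code. The sketch below is illustrative only (PERT/CPM normally uses a considerably more efficient procedure, as described in Chap. 22); the function and variable names are our own.

```python
# Enumerate every START-to-FINISH path in the AON network of Fig. 10.28
# and take the longest as the critical path (the view taken in Table 10.8).
durations = {"A": 2, "B": 4, "C": 10, "D": 6, "E": 4, "F": 5, "G": 7,
             "H": 9, "I": 7, "J": 8, "K": 4, "L": 5, "M": 2, "N": 6}
successors = {"A": ["B"], "B": ["C"], "C": ["D", "E", "I"], "D": ["G"],
              "E": ["F", "H"], "F": ["J"], "G": ["H"], "H": ["M"],
              "I": ["J"], "J": ["K", "L"], "K": ["N"], "L": ["N"],
              "M": [], "N": []}

def all_paths(node, path=()):
    """Depth-first enumeration of activity sequences ending at M or N
    (the activities whose arcs lead to the FINISH node)."""
    path = path + (node,)
    if not successors[node]:
        yield path
    for nxt in successors[node]:
        yield from all_paths(nxt, path)

paths = list(all_paths("A"))
lengths = {p: sum(durations[a] for a in p) for p in paths}
critical = max(lengths, key=lengths.get)
print(len(paths), critical, lengths[critical])
# 6 paths; the critical path A-B-C-E-F-J-L-N has length 44 weeks
```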
Furthermore, the activities on this critical path are the critical bottleneck activities, where any delays in their completion must be avoided to prevent delaying project completion. This is valuable information for Mr. Perty, since he now knows that he should focus most of his attention on keeping these particular activities on schedule in striving to keep the overall project on schedule. Furthermore, to reduce the duration of the project (remember that the deadline for completion is 40 weeks), these are the main activities where changes should be made to reduce their durations. Mr. Perty now needs to determine specifically which activities should have their durations reduced, and by how much, in order to meet the deadline of 40 weeks in the least expensive way. He remembers that CPM provides an excellent procedure for investigating such time-cost trade-offs, so he will use this approach to address this question. We begin with some background.

Time-Cost Trade-Offs for Individual Activities

The first key concept for this approach is that of crashing:

Crashing an activity refers to taking special costly measures to reduce the duration of an activity below its normal value. These special measures might include using overtime, hiring additional temporary help, using special time-saving materials, obtaining special equipment, etc. Crashing the project refers to crashing a number of activities in order to reduce the duration of the project below its normal value.

The CPM method of time-cost trade-offs is concerned with determining how much (if any) to crash each of the activities in order to reduce the anticipated duration of the project to a desired value. The data necessary for determining how much to crash a particular activity are given by the time-cost graph for the activity. Figure 10.29 shows a typical time-cost graph.
Note the two key points on this graph labeled Normal and Crash:

The normal point on the time-cost graph for an activity shows the time (duration) and cost of the activity when it is performed in the normal way. The crash point shows the time and cost when the activity is fully crashed, i.e., it is fully expedited with no cost spared to reduce its duration as much as possible.

As an approximation, CPM assumes that these times and costs can be reliably predicted without significant uncertainty. For most applications, it is assumed that partially crashing the activity at any level will give a combination of time and cost that will lie somewhere on the line segment between these two points.4 (For example, this assumption says that half of a full crash will give a point on this line segment that is midway between the normal and crash points.) This simplifying approximation reduces the necessary data gathering to estimating the time and cost for just two situations: normal conditions (to obtain the normal point) and a full crash (to obtain the crash point).

3 Although Table 10.8 illustrates how the enumeration of paths and path lengths can be used to find the critical path for small projects, Chap. 22 describes how PERT/CPM normally uses a considerably more efficient procedure to obtain a variety of useful information, including the critical path.

■ FIGURE 10.29 A typical time-cost graph for an activity.

Using this approach, Mr. Perty has his staff and crew supervisors working on developing these data for each of the activities of Reliable's project. For example, the supervisor of the crew responsible for putting up the wallboard indicates that adding two temporary employees and using overtime would enable him to reduce the duration of this activity from 8 weeks to 6 weeks, which is the minimum possible. Mr.
Perty's staff then estimates the cost of fully crashing the activity in this way as compared to following the normal 8-week schedule, as shown below.

Activity J (put up the wallboard):
Normal point: time = 8 weeks, cost = $430,000.
Crash point: time = 6 weeks, cost = $490,000.
Maximum reduction in time = 8 − 6 = 2 weeks.
Crash cost per week saved = ($490,000 − $430,000) / 2 = $30,000.

After investigating the time-cost trade-off for each of the other activities in the same way, Table 10.9 gives the data obtained for all the activities.

Which Activities Should Be Crashed?

Summing the normal cost and crash cost columns of Table 10.9 gives

Sum of normal costs = $4.55 million,
Sum of crash costs = $6.15 million.

4 This is a convenient assumption, but it often is only a rough approximation since the underlying assumptions of proportionality and divisibility may not hold completely. If the true time-cost graph is convex, linear programming can still be employed by using a piecewise linear approximation and then applying the separable programming technique described in Sec. 13.8.
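The activity-J calculation generalizes to any activity; a one-line helper (ours, for illustration) makes the formula explicit.

```python
# Crash cost per week saved = (crash cost - normal cost) / (normal time - crash time).
def crash_cost_per_week(normal_time, crash_time, normal_cost, crash_cost):
    return (crash_cost - normal_cost) / (normal_time - crash_time)

# Activity J (put up the wallboard): 8 -> 6 weeks, $430,000 -> $490,000.
print(crash_cost_per_week(8, 6, 430_000, 490_000))  # 30000.0
```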
■ TABLE 10.9 Time-cost trade-off data for the activities of Reliable's project

Activity | Normal Time | Crash Time | Normal Cost | Crash Cost | Maximum Reduction in Time | Crash Cost per Week Saved
A | 2 weeks | 1 week | $180,000 | $280,000 | 1 week | $100,000
B | 4 weeks | 2 weeks | $320,000 | $420,000 | 2 weeks | $50,000
C | 10 weeks | 7 weeks | $620,000 | $860,000 | 3 weeks | $80,000
D | 6 weeks | 4 weeks | $260,000 | $340,000 | 2 weeks | $40,000
E | 4 weeks | 3 weeks | $410,000 | $570,000 | 1 week | $160,000
F | 5 weeks | 3 weeks | $180,000 | $260,000 | 2 weeks | $40,000
G | 7 weeks | 4 weeks | $900,000 | $1,020,000 | 3 weeks | $40,000
H | 9 weeks | 6 weeks | $200,000 | $380,000 | 3 weeks | $60,000
I | 7 weeks | 5 weeks | $210,000 | $270,000 | 2 weeks | $30,000
J | 8 weeks | 6 weeks | $430,000 | $490,000 | 2 weeks | $30,000
K | 4 weeks | 3 weeks | $160,000 | $200,000 | 1 week | $40,000
L | 5 weeks | 3 weeks | $250,000 | $350,000 | 2 weeks | $50,000
M | 2 weeks | 1 week | $100,000 | $200,000 | 1 week | $100,000
N | 6 weeks | 3 weeks | $330,000 | $510,000 | 3 weeks | $60,000

Recall that the company will be paid $5.4 million for doing this project. This payment needs to cover some overhead costs in addition to the costs of the activities listed in the table, as well as provide a reasonable profit to the company. When developing the winning bid of $5.4 million, Reliable's management felt that this amount would provide a reasonable profit as long as the total cost of the activities could be held fairly close to the normal level of about $4.55 million. Mr. Perty understands very well that it is his responsibility to keep the project as close to both budget and schedule as possible. As found previously in Table 10.8, if all the activities are performed in the normal way, the anticipated duration of the project would be 44 weeks (if delays can be avoided). If all the activities were to be fully crashed instead, then a similar calculation would find that this duration would be reduced to only 28 weeks. But look at the prohibitive cost ($6.15 million) of doing this!
Fully crashing all activities clearly is not a viable option. However, Mr. Perty still wants to investigate the possibility of partially or fully crashing just a few activities to reduce the anticipated duration of the project to 40 weeks.

The problem: What is the least expensive way of crashing some activities to reduce the (estimated) project duration to the specified level (40 weeks)?

One way of solving this problem is marginal cost analysis, which uses the last column of Table 10.9 (along with Table 10.8) to determine the least expensive way to reduce the project duration 1 week at a time. The easiest way to conduct this kind of analysis is to set up a table like Table 10.10 that lists all the paths through the project network and the current length of each of these paths. To get started, this information can be copied directly from Table 10.8.

■ TABLE 10.10 The initial table for starting marginal cost analysis of Reliable's project

Activity to Crash | Crash Cost | ABCDGHM | ABCEHM | ABCEFJKN | ABCEFJLN | ABCIJKN | ABCIJLN
— | — | 40 | 31 | 43 | 44 | 41 | 42

■ TABLE 10.11 The final table for performing marginal cost analysis on Reliable's project

Activity to Crash | Crash Cost | ABCDGHM | ABCEHM | ABCEFJKN | ABCEFJLN | ABCIJKN | ABCIJLN
— | — | 40 | 31 | 43 | 44 | 41 | 42
J | $30,000 | 40 | 31 | 42 | 43 | 40 | 41
J | $30,000 | 40 | 31 | 41 | 42 | 39 | 40
F | $40,000 | 40 | 31 | 40 | 41 | 39 | 40
F | $40,000 | 40 | 31 | 39 | 40 | 39 | 40

Since the fourth path listed in Table 10.10 has the longest length (44 weeks), the only way to reduce the project duration by a week is to reduce the duration of the activities on this particular path by a week. Comparing the crash cost per week saved given in the last column of Table 10.9 for these activities, the smallest cost is $30,000 for activity J. (Note that activity I with this same cost is not on this path.) Therefore, the first change is to crash activity J enough to reduce its duration by a week.
This change results in reducing the length of each path that includes activity J (the third, fourth, fifth, and sixth paths in Table 10.10) by a week, as shown in the second row of Table 10.11. Because the fourth path still is the longest (43 weeks), the same process is repeated to find the least expensive activity to shorten on this path. This again is activity J, since the next-to-last column in Table 10.9 indicates that a maximum reduction of 2 weeks is allowed for this activity. This second reduction of a week for activity J leads to the third row of Table 10.11. At this point, the fourth path still is the longest (42 weeks), but activity J cannot be shortened any further. Among the other activities on this path, activity F now is the least expensive to shorten ($40,000 per week) according to the last column of Table 10.9. Therefore, this activity is shortened by a week to obtain the fourth row of Table 10.11, and then (because a maximum reduction of 2 weeks is allowed) is shortened by another week to obtain the last row of this table. The longest path (a tie between the first, fourth, and sixth paths) now has the desired length of 40 weeks, so we don’t need to do any more crashing. (If we did need to go further, the next step would require looking at the activities on all three paths to find the least expensive way of shortening all three paths by a week.) The total cost of crashing activities J and F to get down to this project duration of 40 weeks is calculated by adding the costs in the second column of Table 10.11—a total of $140,000. Figure 10.30 shows the resulting project network, where the darker arrows show the critical paths. Figure 10.30 shows that reducing the durations of activities F and J to their crash times has led to now having three critical paths through the network. The reason is that, as we found earlier from the last row of Table 10.11, the three paths tie for being the longest, each with a length of 40 weeks. 
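The marginal cost analysis of Tables 10.10 and 10.11 can be sketched as a greedy loop over the path lengths. This is an illustrative sketch using the data of Table 10.9 (all names are our own); it shortens only the cheapest crashable activity on a single longest path per step, which suffices for this example, whereas a general tool would cut all tied longest paths jointly.

```python
# Greedy marginal cost analysis: while some path exceeds the 40-week target,
# crash the cheapest still-reducible activity on the longest path by 1 week.
paths = [("A","B","C","D","G","H","M"), ("A","B","C","E","H","M"),
         ("A","B","C","E","F","J","K","N"), ("A","B","C","E","F","J","L","N"),
         ("A","B","C","I","J","K","N"), ("A","B","C","I","J","L","N")]
duration = {"A":2,"B":4,"C":10,"D":6,"E":4,"F":5,"G":7,"H":9,"I":7,"J":8,
            "K":4,"L":5,"M":2,"N":6}
max_cut = {"A":1,"B":2,"C":3,"D":2,"E":1,"F":2,"G":3,"H":3,"I":2,"J":2,
           "K":1,"L":2,"M":1,"N":3}
rate = {"A":100,"B":50,"C":80,"D":40,"E":160,"F":40,"G":40,"H":60,"I":30,
        "J":30,"K":40,"L":50,"M":100,"N":60}     # $1,000s per week saved
cut = {a: 0 for a in duration}                    # weeks crashed so far

def length(p):
    return sum(duration[a] - cut[a] for a in p)

total = 0
while max(length(p) for p in paths) > 40:
    longest = max(paths, key=length)
    act = min((a for a in longest if cut[a] < max_cut[a]), key=rate.get)
    cut[act] += 1
    total += rate[act]

print(cut["J"], cut["F"], total)  # 2 2 140: crash J and F fully, $140,000
```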
With larger networks, marginal cost analysis can become quite unwieldy. A more efficient procedure would be desirable for large projects. For this reason, the standard CPM procedure is to apply linear programming instead (commonly with a customized software package that exploits the special structure of this network optimization model).

Using Linear Programming to Make Crashing Decisions

The problem of finding the least expensive way of crashing activities can be rephrased in a form more familiar to linear programming as follows:

Restatement of the problem: Let Z be the total cost of crashing activities. The problem then is to minimize Z, subject to the constraint that the project duration must be less than or equal to the time desired by the project manager.

■ FIGURE 10.30 The project network if activities J and F are fully crashed (with all other activities normal) for Reliable's project. The darker arrows show the various critical paths through the project network.

The natural decision variables are

xj = reduction in the duration of activity j due to crashing this activity, for j = A, B, . . . , N.

By using the last column of Table 10.9, the objective function to be minimized then is

Z = 100,000xA + 50,000xB + . . . + 60,000xN.

Each of the 14 decision variables on the right-hand side needs to be restricted to nonnegative values that do not exceed the maximum given in the next-to-last column of Table 10.9. To impose the constraint that the project duration must be less than or equal to the desired value (40 weeks), let

yFINISH = project duration, i.e., the time at which the FINISH node in the project network is reached.

The constraint then is

yFINISH ≤ 40.

To help the linear programming model assign the appropriate value to yFINISH, given the values of xA, xB, . . . , xN, it is convenient to introduce into the model the following additional variables.
yj = start time of activity j (for j = B, C, . . . , N), given the values of xA, xB, . . . , xN.

(No such variable is needed for activity A, since an activity that begins the project is automatically assigned a value of 0.) By treating the FINISH node as another activity (albeit one with zero duration), as we now will do, this definition of yj for activity FINISH also fits the definition of yFINISH given in the preceding paragraph. The start time of each activity (including FINISH) is directly related to the start time and duration of each of its immediate predecessors, as summarized below.

For each activity (B, C, . . . , N, FINISH) and each of its immediate predecessors,

Start time of this activity ≥ (start time + duration) for this immediate predecessor.

Furthermore, by using the normal times from Table 10.9, the duration of each activity is given by the following formula:

Duration of activity j = its normal time − xj.

To illustrate these relationships, consider activity F in the project network (Fig. 10.28 or 10.30):

Immediate predecessor of activity F: Activity E, which has duration = 4 − xE.
Relationship between these activities: yF ≥ yE + 4 − xE.

Thus, activity F cannot start until activity E starts and then completes its duration of 4 − xE. Now consider activity J, which has two immediate predecessors:

Immediate predecessors of activity J:
Activity F, which has duration = 5 − xF.
Activity I, which has duration = 7 − xI.
Relationships between these activities:
yJ ≥ yF + 5 − xF,
yJ ≥ yI + 7 − xI.

These inequalities together say that activity J cannot start until both of its immediate predecessors finish. By including these relationships for all the activities as constraints, we obtain the complete linear programming model given below:

Minimize Z = 100,000xA + 50,000xB + . . . + 60,000xN,

subject to the following constraints:

1. Maximum reduction constraints: Using the next-to-last column of Table 10.9,
xA ≤ 1, xB ≤ 2, . . .
, xN ≤ 3.

2. Nonnegativity constraints:
xA ≥ 0, xB ≥ 0, . . . , xN ≥ 0,
yB ≥ 0, yC ≥ 0, . . . , yN ≥ 0, yFINISH ≥ 0.

3. Start-time constraints: As described above the objective function, with the exception of activity A (which starts the project), there is one start-time constraint for each activity with a single immediate predecessor (activities B, C, D, E, F, G, I, K, L, M) and two constraints for each activity with two immediate predecessors (activities H, J, N, FINISH), as listed below.

One immediate predecessor:
yB ≥ 0 + 2 − xA
yC ≥ yB + 4 − xB
yD ≥ yC + 10 − xC
⋮
yM ≥ yH + 9 − xH

Two immediate predecessors:
yH ≥ yG + 7 − xG
yH ≥ yE + 4 − xE
⋮
yFINISH ≥ yM + 2 − xM
yFINISH ≥ yN + 6 − xN

(In general, the number of start-time constraints for an activity equals its number of immediate predecessors since each immediate predecessor contributes one start-time constraint.)

4. Project duration constraint:
yFINISH ≤ 40.

Figure 10.31 shows how this problem can be formulated as a linear programming model on a spreadsheet. The decisions to be made are shown in the changing cells, StartTime (I6:I19), TimeReduction (J6:J19), and ProjectFinishTime (I22). Columns B to H correspond to the columns in Table 10.9. As the equations in the bottom half of the figure indicate, columns G and H are calculated in a straightforward way. The equations for column K express the fact that the finish time for each activity is its start time plus its normal time minus its time reduction due to crashing. The equation entered into the objective cell TotalCost (I24) adds all the normal costs plus the extra costs due to crashing to obtain the total cost. The last set of constraints in Solver, TimeReduction (J6:J19) ≤ MaxTimeReduction (G6:G19), specifies that the time reduction for each activity cannot exceed its maximum time reduction given in column G.
The two preceding constraints, ProjectFinishTime (I22) ≥ MFinish (K18) and ProjectFinishTime (I22) ≥ NFinish (K19), indicate that the project cannot finish until each of its two immediate predecessors (activities M and N) finishes. The constraint that ProjectFinishTime (I22) ≤ MaxTime (K22) is a key one that specifies that the project must finish within 40 weeks. The constraints involving StartTime (I6:I19) all are start-time constraints that specify that an activity cannot start until each of its immediate predecessors has finished. For example, the first constraint shown, BStart (I7) ≥ AFinish (K6), says that activity B cannot start until activity A (its immediate predecessor) finishes. When an activity has more than one immediate predecessor, there is one such constraint for each of them. To illustrate, activity H has both activities E and G as immediate predecessors. Consequently, activity H has two start-time constraints, HStart (I13) ≥ EFinish (K10) and HStart (I13) ≥ GFinish (K12). You may have noticed that the ≥ form of the start-time constraints allows a delay in starting an activity after all its immediate predecessors have finished. Although such a delay is feasible in the model, it cannot be optimal for any activity on a critical path, since this needless delay would increase the total cost (by necessitating additional crashing to meet the project duration constraint). Therefore, an optimal solution for the model will not have any such delays, except possibly for activities not on a critical path. Columns I and J in Fig. 10.31 show the optimal solution obtained after having clicked on the Solve button. (Note that this solution involves one delay: activity K starts at 30 even though its only immediate predecessor, activity J, finishes at 29. This doesn't matter, since activity K is not on a critical path.) This solution corresponds to the one displayed in Fig. 10.30 that was obtained by marginal cost analysis.
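The logic of this linear programming model can be mirrored in plain code: a forward pass computes the earliest start times implied by the start-time constraints, which lets us check a proposed crashing plan against the 40-week deadline and price it with the last column of Table 10.9. This is an illustrative sketch (the names are our own); a production model would instead hand the same constraints to an LP solver.

```python
# Forward pass mirroring the start-time constraints: y_j >= y_p + (normal_p - x_p)
# for every immediate predecessor p, with FINISH treated as a zero-duration activity.
preds = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"], "E": ["C"],
         "F": ["E"], "G": ["D"], "H": ["E", "G"], "I": ["C"],
         "J": ["F", "I"], "K": ["J"], "L": ["J"], "M": ["H"],
         "N": ["K", "L"], "FINISH": ["M", "N"]}
normal = {"A": 2, "B": 4, "C": 10, "D": 6, "E": 4, "F": 5, "G": 7, "H": 9,
          "I": 7, "J": 8, "K": 4, "L": 5, "M": 2, "N": 6, "FINISH": 0}
rate = {"A": 100_000, "B": 50_000, "C": 80_000, "D": 40_000, "E": 160_000,
        "F": 40_000, "G": 40_000, "H": 60_000, "I": 30_000, "J": 30_000,
        "K": 40_000, "L": 50_000, "M": 100_000, "N": 60_000}

def finish_time_and_cost(x):
    """x[j] = weeks cut from activity j. Returns (project duration, crash cost Z)."""
    y = {}
    for j in preds:                     # dict order above is topological
        y[j] = max((y[p] + normal[p] - x.get(p, 0) for p in preds[j]),
                   default=0)           # earliest feasible start time y_j
    z = sum(rate[j] * w for j, w in x.items())
    return y["FINISH"], z

print(finish_time_and_cost({}))                # (44, 0): the all-normal plan
print(finish_time_and_cost({"J": 2, "F": 2}))  # (40, 140000): the optimal crash
```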
If you would like to see another example that illustrates both the marginal cost analysis approach and the linear programming approach to applying the CPM method of time-cost trade-offs, the Solved Examples section of the book's website provides one.

Reliable Construction Co. Project Scheduling Problem with Time-Cost Trade-offs

                 Time                  Cost          Maximum    Crash Cost
Activity   Normal  Crash      Normal       Crash     Time Red.  per Week Saved   Start  Red.  Finish
A            2       1       $180,000     $280,000      1        $100,000          0     0      2
B            4       2       $320,000     $420,000      2         $50,000          2     0      6
C           10       7       $620,000     $860,000      3         $80,000          6     0     16
D            6       4       $260,000     $340,000      2         $40,000         16     0     22
E            4       3       $410,000     $570,000      1        $160,000         16     0     20
F            5       3       $180,000     $260,000      2         $40,000         20     2     23
G            7       4       $900,000   $1,020,000      3         $40,000         22     0     29
H            9       6       $200,000     $380,000      3         $60,000         29     0     38
I            7       5       $210,000     $270,000      2         $30,000         16     0     23
J            8       6       $430,000     $490,000      2         $30,000         23     2     29
K            4       3       $160,000     $200,000      1         $40,000         30     0     34
L            5       3       $250,000     $350,000      2         $50,000         29     0     34
M            2       1       $100,000     $200,000      1        $100,000         38     0     40
N            6       3       $330,000     $510,000      3         $60,000         34     0     40

Project Finish Time (I22): 40   <=   Max Time (K22): 40
Total Cost (I24): $4,690,000

Key formulas:
G (Maximum Time Reduction):     =NormalTime-CrashTime
H (Crash Cost per Week Saved):  =(CrashCost-NormalCost)/MaxTimeReduction
K (Finish Time):                =StartTime+NormalTime-TimeReduction
I24 (Total Cost):               =SUM(NormalCost)+SUMPRODUCT(CrashCostPerWeekSaved,TimeReduction)

Solver Parameters:
Set Objective Cell: TotalCost    To: Min
By Changing Variable Cells: StartTime, TimeReduction, ProjectFinishTime
Subject to the Constraints:
BStart >= AFinish, CStart >= BFinish, DStart >= CFinish, EStart >= CFinish, FStart >= EFinish, GStart >= DFinish, HStart >= EFinish, HStart >= GFinish, IStart >= CFinish, JStart >= FFinish, JStart >= IFinish, KStart >= JFinish, LStart >= JFinish, MStart >= HFinish, NStart >= KFinish, NStart >= LFinish, ProjectFinishTime >= MFinish, ProjectFinishTime >= NFinish, ProjectFinishTime <= MaxTime, TimeReduction <= MaxTimeReduction
Solver Options: Make Variables Nonnegative; Solving Method: Simplex LP

Range Names (cells): AFinish (K6), AStart (I6), BFinish (K7), BStart (I7), CFinish (K8), CrashCost (F6:F19), CrashCostPerWeekSaved (H6:H19), CrashTime (D6:D19), CStart (I8), DFinish (K9), DStart (I9), EFinish (K10), EStart (I10), FFinish (K11), FinishTime (K6:K19), FStart (I11), GFinish (K12), GStart (I12), HFinish (K13), HStart (I13), IFinish (K14), IStart (I14), JFinish (K15), JStart (I15), KFinish (K16), KStart (I16), LFinish (K17), LStart (I17), MaxTime (K22), MaxTimeReduction (G6:G19), MFinish (K18), MStart (I18), NFinish (K19), NormalCost (E6:E19), NormalTime (C6:C19), NStart (I19), ProjectFinishTime (I22), StartTime (I6:I19), TimeReduction (J6:J19), TotalCost (I24)

■ FIGURE 10.31
The spreadsheet displays the application of the CPM method of time-cost trade-offs to Reliable's project, where columns I and J show the optimal solution obtained by using Solver with the entries shown in the Solver parameters box.

■ 10.9 CONCLUSIONS

Networks of some type arise in a wide variety of contexts. Network representations are very useful for portraying the relationships and connections between the components of systems. Frequently, flow of some type must be sent through a network, so a decision needs to be made about the best way to do this. The kinds of network optimization models and algorithms introduced in this chapter provide a powerful tool for making such decisions. The minimum cost flow problem plays a central role among these network optimization models, both because it is so broadly applicable and because it can be solved extremely efficiently by the network simplex method.
Two of its special cases included in this chapter, the shortest-path problem and the maximum flow problem, also are basic network optimization models, as are additional special cases discussed in Chap. 9 (the transportation problem and the assignment problem). Whereas all these models are concerned with optimizing the operation of an existing network, the minimum spanning tree problem is a prominent example of a model for optimizing the design of a new network. The CPM method of time-cost trade-offs provides a powerful way of using a network optimization model to design a project so that it can meet its deadline with a minimum total cost. This chapter has only scratched the surface of the current state of the art of network methodology. Because of their combinatorial nature, network problems often are extremely difficult to solve. However, great progress has been made in developing powerful modeling techniques and solution methodologies that have opened up new vistas for important applications. In fact, relatively recent algorithmic advances are enabling us to solve successfully some complex network problems of enormous size.

■ SELECTED REFERENCES

1. Bazaraa, M. S., J. J. Jarvis, and H. D. Sherali: Linear Programming and Network Flows, 4th ed., Wiley, Hoboken, NJ, 2010.
2. Bertsekas, D. P.: Network Optimization: Continuous and Discrete Models, Athena Scientific Publishing, Belmont, MA, 1998.
3. Cai, X., and C. K. Wong: Time Varying Network Optimization, Springer, New York, 2007.
4. Dantzig, G. B., and M. N. Thapa: Linear Programming 1: Introduction, Springer, New York, 1997, chap. 9.
5. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 6.
6. Sierksma, G., and D. Ghosh: Networks in Action: Text and Computer Exercises in Network Optimization, Springer, New York, 2010.
7. Vanderbei, R. J.: Linear Programming: Foundations and Extensions, 4th ed., Springer, New York, 2014, chaps. 14 and 15.
8. Whittle, P.: Networks: Optimization and Evolution, Cambridge University Press, Cambridge, UK, 2007.

Some Award-Winning Applications of Network Optimization Models:
(A link to all these articles is provided on our website, www.mhhe.com/hillier.)

A1. Ben-Khedher, N., J. Kintanar, C. Queille, and W. Stripling: "Schedule Optimization at SNCF: From Conception to Day of Departure," Interfaces, 28(1): 6–23, January–February 1998.
A2. Cosares, S., D. N. Deutsch, I. Saniee, and O. J. Wasem: "SONET Toolkit: A Decision-Support System for Designing Robust and Cost-Effective Fiber-Optic Networks," Interfaces, 25(1): 20–40, January–February 1995.
A3. Fleuren, H., C. Goossens, M. Hendriks, M.-C. Lombard, I. Meuffels, and J. Poppelaars: "Supply Chain-Wide Optimization at TNT Express," Interfaces, 43(1): 5–20, January–February 2013.
A4. Gorman, M. F., D. Acharya, and D. Sellers: "CSX Railway Uses OR to Cash In on Optimized Equipment Distribution," Interfaces, 40(1): 5–16, January–February 2010.
A5. Huisingh, J. L., H. M. Yamauchi, and R. Zimmerman: "Saving Federal Tax Dollars," Interfaces, 31(5): 13–23, September–October 2001.
A6. Klingman, D., N. Phillips, D. Steiger, and W. Young: "The Successful Deployment of Management Science throughout Citgo Petroleum Corporation," Interfaces, 17(1): 4–25, January–February 1987.
A7. Prior, R. C., R. L. Slavens, J. Trimarco, V. Akgun, E. G. Feitzinger, and C.-F. Hong: "Menlo Worldwide Forwarding Optimizes Its Network Routing," Interfaces, 34(1): 26–38, January–February 2004.
A8. Srinivasan, M. M., W. D. Best, and S. Chandrasekaran: "Warner Robins Air Logistics Center Streamlines Aircraft Repair and Overhaul," Interfaces, 37(1): 7–21, January–February 2007.
■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples: Examples for Chapter 10
A Demonstration Example in OR Tutor: Network Simplex Method
An Interactive Procedure in IOR Tutorial: Network Simplex Method—Interactive
An Excel Add-in: Analytic Solver Platform for Education (ASPE)
"Ch. 10—Network Opt Models" Files for Solving the Examples: Excel Files, LINGO/LINDO File, MPL/Solvers File
Glossary for Chapter 10

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The demonstration example just listed in Learning Aids may be helpful.
I: We suggest that you use the interactive procedure just listed (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

10.2-1. Consider the following directed network. (Figure: a directed network with nodes A, B, C, D, E, and F.)
(a) Find a directed path from node A to node F, and then identify three other undirected paths from node A to node F.
(b) Find three directed cycles. Then identify an undirected cycle that includes every node.
(c) Identify a set of arcs that forms a spanning tree.
(d) Use the process illustrated in Fig. 10.3 to grow a tree one arc at a time until a spanning tree has been formed. Then repeat this process to obtain another spanning tree. [Do not duplicate the spanning tree identified in part (c).]

10.3-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 10.3. Briefly describe how network optimization models were applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

10.3-2. You need to take a trip by car to another town that you have never visited before. Therefore, you are studying a map to determine the shortest route to your destination. Depending on which route you choose, there are five other towns (call them A, B, C, D, E) that you might pass through on the way. The map shows the mileage along each road that directly connects two towns without any intervening towns. These numbers are summarized in the following table, where a dash indicates that there is no road directly connecting these two towns without going through any other towns.

Miles between Adjacent Towns

Town        A     B     C     D     E     Destination
Origin     40    60    50     —     —        —
A                10     —    70     —        —
B                      20    55    40        —
C                             —    50        —
D                                  10       60
E                                           80

(a) Formulate this problem as a shortest-path problem by drawing a network where nodes represent towns, links represent roads, and numbers indicate the length of each link in miles.
(b) Use the algorithm described in Sec. 10.3 to solve this shortest-path problem.
C (c) Formulate and solve a spreadsheet model for this problem.
(d) If each number in the table represented your cost (in dollars) for driving your car from one town to the next, would the answer in part (b) or (c) now give your minimum cost route?
(e) If each number in the table represented your time (in minutes) for driving your car from one town to the next, would the answer in part (b) or (c) now give your minimum time route?

10.3-3. At a small but growing airport, the local airline company is purchasing a new tractor for a tractor-trailer train to bring luggage to and from the airplanes. A new mechanized luggage system will be installed in 3 years, so the tractor will not be needed after that. However, because it will receive heavy use, so that the running and maintenance costs will increase rapidly as the tractor ages, it may still be more economical to replace the tractor after 1 or 2 years. The following table gives the total net discounted cost associated with purchasing a tractor (purchase price minus trade-in allowance, plus running and maintenance costs) at the end of year i and trading it in at the end of year j (where year 0 is now).

                    j
 i         1          2          3
 0    $8,000    $18,000    $31,000
 1               10,000     21,000
 2                          12,000

The problem is to determine at what times (if any) the tractor should be replaced to minimize the total cost for the tractors over 3 years.
(a) Formulate this problem as a shortest-path problem.
(b) Use the algorithm described in Sec. 10.3 to solve this shortest-path problem.
C (c) Formulate and solve a spreadsheet model for this problem.

10.3-4.* Use the algorithm described in Sec. 10.3 to find the shortest path through each of the following networks, where the numbers represent actual distances between the corresponding nodes.
(a) (Figure: a network with origin O, intermediate nodes A, B, C, D, and E, and destination T; the distances are shown on the arcs.)

10.3-6. One of Speedy Airlines' flights is about to take off from Seattle for a nonstop flight to London. There is some flexibility in choosing the precise route to be taken, depending upon weather conditions. The following network depicts the possible routes under consideration, where SE and LN are Seattle and London, respectively, and the other nodes represent various intermediate locations. (Figure: a network from SE to LN through intermediate nodes A, B, C, D, E, and F; the flying time in hours is shown next to each arc.) The winds along each arc greatly affect the flying time (and so the fuel consumption). Based on current meteorological reports, the flying times (in hours) for this particular flight are shown next to the arcs. Because the fuel consumed is so expensive, the management of Speedy Airlines has established a policy of choosing the route that minimizes the total flight time.
(a) What plays the role of "distances" in interpreting this problem to be a shortest-path problem?
(b) Use the algorithm described in Sec. 10.3 to solve this shortest-path problem.
C (c) Formulate and solve a spreadsheet model for this problem.
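Prob. 10.3-3 above can also be solved programmatically. The sketch below (illustrative helper code, not part of the book's software) applies Dijkstra's algorithm, essentially the Sec. 10.3 shortest-path algorithm specialized to nonnegative arc lengths, to the tractor network: node i means "a tractor purchased at the end of year i," and the arc from i to j costs the net discounted cost of buying at year i and trading in at year j; the arc costs follow one natural reading of the problem's cost table.

```python
# Dijkstra's algorithm on the tractor-replacement network of Prob. 10.3-3:
# node i = "tractor bought at end of year i"; arc (i, j) = net discounted
# cost of buying at the end of year i and trading in at the end of year j.
import heapq

ARCS = {
    0: {1: 8_000, 2: 18_000, 3: 31_000},
    1: {2: 10_000, 3: 21_000},
    2: {3: 12_000},
    3: {},
}

def dijkstra(arcs, origin, destination):
    """Return (cost, path) of a cheapest route; arc costs must be >= 0."""
    frontier = [(0, origin, [origin])]   # (cost so far, node, path taken)
    settled = {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node in settled:
            continue                     # already settled at a cheaper cost
        settled[node] = cost
        if node == destination:
            return cost, path
        for nxt, arc_cost in arcs[node].items():
            if nxt not in settled:
                heapq.heappush(frontier, (cost + arc_cost, nxt, path + [nxt]))
    return float("inf"), []

cost, path = dijkstra(ARCS, 0, 3)
print(cost, path)  # -> 29000 [0, 1, 3]
```

For these costs the cheapest route is 0 to 1 to 3 at $29,000, i.e., trade the first tractor in at the end of year 1 and keep the second one through year 3.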
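For the minimum spanning tree problems that follow shortly (Probs. 10.4-1 to 10.4-3), the Sec. 10.4 greedy idea can be sketched in a few lines as well. The sketch below uses Kruskal's variant of the greedy rule (the book's algorithm grows a single tree Prim-style; either greedy rule yields a minimum spanning tree) applied to the grove distances of Prob. 10.4-2; the edge list simply restates that problem's distance table.

```python
# Kruskal's greedy algorithm: repeatedly add the shortest remaining edge
# that does not close a cycle. Data: grove distances of Prob. 10.4-2.
EDGES = [  # (miles, grove, grove), one entry per pair
    (1.3, 1, 2), (2.1, 1, 3), (0.9, 1, 4), (0.7, 1, 5), (1.8, 1, 6),
    (2.0, 1, 7), (1.5, 1, 8), (0.9, 2, 3), (1.8, 2, 4), (1.2, 2, 5),
    (2.6, 2, 6), (2.3, 2, 7), (1.1, 2, 8), (2.6, 3, 4), (1.7, 3, 5),
    (2.5, 3, 6), (1.9, 3, 7), (1.0, 3, 8), (0.7, 4, 5), (1.6, 4, 6),
    (1.5, 4, 7), (0.9, 4, 8), (0.9, 5, 6), (1.1, 5, 7), (0.8, 5, 8),
    (0.6, 6, 7), (1.0, 6, 8), (0.5, 7, 8),
]

def minimum_spanning_tree(edges):
    parent = {}                          # union-find forest over the nodes
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree, total = [], 0.0
    for length, a, b in sorted(edges):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:             # shortest edge joining two components
            parent[root_a] = root_b
            tree.append((a, b))
            total += length
    return tree, total

tree, total = minimum_spanning_tree(EDGES)
print(round(total, 1))  # -> 5.2  (total miles of road needed)
```

For this data the greedy passes select seven roads totaling 5.2 miles of road connecting all eight groves.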
(b) (Figure for Prob. 10.3-4: a second network, with origin O, intermediate nodes A through I, and destination T; the distances are shown on the arcs.)

10.3-5. Formulate the shortest-path problem as a linear programming problem.

10.3-7. The Quick Company has learned that a competitor is planning to come out with a new kind of product with a great sales potential. Quick has been working on a similar product that had been scheduled to come to market in 20 months. However, research is nearly complete and Quick's management now wishes to rush the product out to meet the competition. There are four nonoverlapping phases left to be accomplished, including the remaining research that currently is being conducted at a normal pace. However, each phase can instead be conducted at a priority or crash level to expedite completion, and these are the only levels that will be considered for the last three phases. The times required at these levels are given in the following table. (The times in parentheses at the normal level have been ruled out as too long.)

Time

Level      Remaining     Development   Design of        Initiate Production
           Research                    Manufacturing    and Distribution
                                       System
Normal     5 months      (4 months)    (7 months)       (4 months)
Priority   4 months      3 months      5 months         2 months
Crash      2 months      2 months      3 months         1 month

Management has allocated $30 million for these four phases. The cost of each phase at the different levels under consideration is as follows:

Cost

Level      Remaining     Development   Design of        Initiate Production
           Research                    Manufacturing    and Distribution
                                       System
Normal     $3 million    —             —                —
Priority   6 million     $6 million    $ 9 million      $3 million
Crash      9 million     9 million     12 million       6 million

Management wishes to determine at which level to conduct each of the four phases to minimize the total time until the product can be marketed subject to the budget restriction of $30 million.
(a) Formulate this problem as a shortest-path problem.
(b) Use the algorithm described in Sec. 10.3 to solve this shortest-path problem.

10.4-1.* Reconsider the networks shown in Prob. 10.3-4. Use the algorithm described in Sec. 10.4 to find the minimum spanning tree for each of these networks.

10.4-2. The Wirehouse Lumber Company will soon begin logging eight groves of trees in the same general area. Therefore, it must develop a system of dirt roads that makes each grove accessible from every other grove. The distance (in miles) between every pair of groves is as follows:

Distance between Pairs of Groves

Grove    1     2     3     4     5     6     7     8
1        —    1.3   2.1   0.9   0.7   1.8   2.0   1.5
2       1.3    —    0.9   1.8   1.2   2.6   2.3   1.1
3       2.1   0.9    —    2.6   1.7   2.5   1.9   1.0
4       0.9   1.8   2.6    —    0.7   1.6   1.5   0.9
5       0.7   1.2   1.7   0.7    —    0.9   1.1   0.8
6       1.8   2.6   2.5   1.6   0.9    —    0.6   1.0
7       2.0   2.3   1.9   1.5   1.1   0.6    —    0.5
8       1.5   1.1   1.0   0.9   0.8   1.0   0.5    —

Management now wishes to determine between which pairs of groves the roads should be constructed to connect all groves with a minimum total length of road.
(a) Describe how this problem fits the network description of the minimum spanning tree problem.
(b) Use the algorithm described in Sec. 10.4 to solve the problem.

10.4-3. The Premiere Bank soon will be hooking up computer terminals at each of its branch offices to the computer at its main office using special phone lines with telecommunications devices. The phone line from a branch office need not be connected directly to the main office. It can be connected indirectly by being connected to another branch office that is connected (directly or indirectly) to the main office. The only requirement is that every branch office be connected by some route to the main office. The charge for the special phone lines is $100 times the number of miles involved, where the distance (in miles) between every pair of offices is as follows:
Distance between Pairs of Offices

               Main   B.1   B.2   B.3   B.4   B.5
Main office     —     190    70   115   270   160
Branch 1       190     —    100   110   215    50
Branch 2        70    100    —    140   120   220
Branch 3       115    110   140    —    175    80
Branch 4       270    215   120   175    —    310
Branch 5       160     50   220    80   310    —

Management wishes to determine which pairs of offices should be directly connected by special phone lines in order to connect every branch office (directly or indirectly) to the main office at a minimum total cost.
(a) Describe how this problem fits the network description of the minimum spanning tree problem.
(b) Use the algorithm described in Sec. 10.4 to solve the problem.

10.5-1.* For the network shown below, use the augmenting path algorithm described in Sec. 10.5 to find the flow pattern giving the maximum flow from the source to the sink, given that the arc capacity from node i to node j is the number nearest node i along the arc between these nodes. Show your work. (Figure: a network with a source node, a sink node, and numbered intermediate nodes; the arc capacities are shown along the arcs.)

10.5-2. Formulate the maximum flow problem as a linear programming problem.

10.5-3. The next diagram depicts a system of aqueducts that originate at three rivers (nodes R1, R2, and R3) and terminate at a major city (node T), where the other nodes are junction points in the system. (Figure: the aqueduct network, in which R1, R2, and R3 feed junctions A, B, and C; A, B, and C feed junctions D, E, and F; and D, E, and F feed T.) Using units of thousands of acre feet, the tables below the diagram show the maximum amount of water that can be pumped through each aqueduct per day.

From   To:  A    B    C      From   To:  D    E    F      From   To:  T
R1         75   65    —      A          60   45   45      D         120
R2         40   50   60      B          70    —   70      E         190
R3          —   80   70      C           —   55   90      F         130

The city water manager wants to determine a flow plan that will maximize the flow of water to the city.
(a) Formulate this problem as a maximum flow problem by identifying a source, a sink, and the transshipment nodes, and then drawing the complete network that shows the capacity of each arc.
(b) Use the augmenting path algorithm described in Sec. 10.5 to solve this problem.
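As background for part (b), the augmenting path algorithm of Sec. 10.5 can be sketched compactly in its breadth-first-search form (often called Edmonds-Karp). The five-node network and all capacities below are hypothetical illustrations, not the aqueduct data.

```python
# Augmenting path algorithm for maximum flow (BFS variant, Edmonds-Karp):
# repeatedly find a path with remaining residual capacity from source to
# sink and push as much flow as that path allows. Data here is hypothetical.
from collections import deque

def max_flow(capacity, source, sink):
    # residual[u][v] = remaining capacity on u -> v (reverse arcs included
    # so that earlier flow assignments can be undone by later paths).
    nodes = set(capacity) | {v for arcs in capacity.values() for v in arcs}
    residual = {u: {} for u in nodes}
    for u, arcs in capacity.items():
        for v, cap in arcs.items():
            residual[u][v] = residual[u].get(v, 0) + cap
            residual[v].setdefault(u, 0)
    total = 0
    while True:
        # Breadth-first search for an augmenting path in the residual network.
        pred = {source: None}
        queue = deque([source])
        while queue and sink not in pred:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in pred:
                    pred[v] = u
                    queue.append(v)
        if sink not in pred:
            return total             # no augmenting path left: flow is maximal
        # Find the bottleneck residual capacity on the path, then augment.
        path_flow, v = float("inf"), sink
        while pred[v] is not None:
            path_flow = min(path_flow, residual[pred[v]][v])
            v = pred[v]
        v = sink
        while pred[v] is not None:
            residual[pred[v]][v] -= path_flow
            residual[v][pred[v]] += path_flow
            v = pred[v]
        total += path_flow

CAPACITY = {  # hypothetical arc capacities
    "S": {"A": 3, "B": 2},
    "A": {"B": 1, "T": 2},
    "B": {"T": 3},
}
print(max_flow(CAPACITY, "S", "T"))  # -> 5
```

On this small network three augmentations suffice (S-A-T, S-B-T, and S-A-B-T), giving a maximum flow of 5.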
C (c) Formulate and solve a spreadsheet model for this problem.

10.5-4. The Texago Corporation has four oil fields, four refineries, and four distribution centers. A major strike involving the transportation industries now has sharply curtailed Texago's capacity to ship oil from the oil fields to the refineries and to ship petroleum products from the refineries to the distribution centers. Using units of thousands of barrels of crude oil (and its equivalent in refined products), the following tables show the maximum number of units that can be shipped per day from each oil field to each refinery, and from each refinery to each distribution center.

                               Refinery
Oil Field      New Orleans   Charleston   Seattle   St. Louis
Texas              11             7          2          8
California          5             4          8          7
Alaska              7             3         12          6
Middle East         8             9          4         15

                           Distribution Center
Refinery       Pittsburgh   Atlanta   Kansas City   San Francisco
New Orleans         5           9          6              4
Charleston          8           7          9              5
Seattle             4           6          7              8
St. Louis          12          11          9              7

The Texago management now wants to determine a plan for how many units to ship from each oil field to each refinery and from each refinery to each distribution center that will maximize the total number of units reaching the distribution centers.
(a) Draw a rough map that shows the location of Texago's oil fields, refineries, and distribution centers. Add arrows to show the flow of crude oil and then petroleum products through this distribution network.
(b) Redraw this distribution network by lining up all the nodes representing oil fields in one column, all the nodes representing refineries in a second column, and all the nodes representing distribution centers in a third column. Then add arcs to show the possible flow.
(c) Modify the network in part (b) as needed to formulate this problem as a maximum flow problem with a single source, a single sink, and a capacity for each arc.
(d) Use the augmenting path algorithm described in Sec. 10.5 to solve this maximum flow problem.
C (e) Formulate and solve a spreadsheet model for this problem.

10.5-5.
One track of the Eura Railroad system runs from the major industrial city of Faireparc to the major port city of Portstown. This track is heavily used by both express passenger and freight trains. The passenger trains are carefully scheduled and have priority over the slow freight trains (this is a European railroad), so that the freight trains must pull over onto a siding whenever a passenger train is scheduled to pass them soon. It is now necessary to increase the freight service, so the problem is to schedule the freight trains so as to maximize the number that can be sent each day without interfering with the fixed schedule for passenger trains. Consecutive freight trains must maintain a schedule differential of at least 0.1 hour, and this is the time unit used for scheduling them (so that the daily schedule indicates the status of each freight train at times 0.0, 0.1, 0.2, . . . , 23.9). There are S sidings between Faireparc and Portstown, where siding i is long enough to hold ni freight trains (i = 1, . . . , S). It requires ti time units (rounded up to an integer) for a freight train to travel from siding i to siding i + 1 (where t0 is the time from the Faireparc station to siding 1 and tS is the time from siding S to the Portstown station). A freight train is allowed to pass or leave siding i (i = 0, 1, . . . , S) at time j (j = 0.0, 0.1, . . . , 23.9) only if it would not be overtaken by a scheduled passenger train before reaching siding i + 1 (let δij = 1 if it would not be overtaken, and let δij = 0 if it would be). A freight train also is required to stop at a siding if there will not be room for it at all subsequent sidings that it would reach before being overtaken by a passenger train. Formulate this problem as a maximum flow problem by identifying each node (including the supply node and the demand node) as well as each arc and its arc capacity for the network representation of the problem.
(Hint: Use a different set of nodes for each of the 240 times.)

10.5-6. Consider the maximum flow problem shown below, where the source is node A, the sink is node F, and the arc capacities are the numbers shown next to these directed arcs. (Figure: a network with nodes A through F; the arc capacities are shown next to the directed arcs.)
(a) Use the augmenting path algorithm described in Sec. 10.5 to solve this problem.
C (b) Formulate and solve a spreadsheet model for this problem.

10.5-7. Read the referenced article that fully describes the OR study summarized in the first application vignette presented in Sec. 10.5. Briefly describe how the model for the minimum cost flow problem was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

10.5-8. Follow the instructions of Prob. 10.5-7 for the second application vignette presented in Sec. 10.5.

10.6-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 10.6. Briefly describe how the model for the minimum cost flow problem was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

10.6-2. Reconsider the maximum flow problem shown in Prob. 10.5-6. Formulate this problem as a minimum cost flow problem, including adding the arc A → F. Use F̄ = 20.

10.6-3. A company will be producing the same new product at two different factories, and then the product must be shipped to two warehouses. Factory 1 can send an unlimited amount by rail to warehouse 1 only, whereas factory 2 can send an unlimited amount by rail to warehouse 2 only. However, independent truckers can be used to ship up to 50 units from each factory to a distribution center, from which up to 50 units can be shipped to each warehouse. The shipping cost per unit for each alternative is shown in the following table, along with the amounts to be produced at the factories and the amounts needed at the warehouses.

                                 Unit Shipping Cost
From \ To            Distribution Center   Warehouse 1   Warehouse 2   Output
Factory 1                     3                 7             —          80
Factory 2                     4                 —             9          70
Distribution center           —                 2             4
Allocation                                     60            90

(a) Formulate the network representation of this problem as a minimum cost flow problem.
(b) Formulate the linear programming model for this problem.

10.6-4. Reconsider Prob. 10.3-3. Now formulate this problem as a minimum cost flow problem by showing the appropriate network representation.

10.6-5. The Makonsel Company is a fully integrated company that both produces goods and sells them at its retail outlets. After production, the goods are stored in the company's two warehouses until needed by the retail outlets. Trucks are used to transport the goods from the two plants to the warehouses, and then from the warehouses to the three retail outlets. Using units of full truckloads, the following table shows each plant's monthly output, its shipping cost per truckload sent to each warehouse, and the maximum amount that it can ship per month to each warehouse.

             Unit Shipping Cost           Shipping Capacity
From      Warehouse 1  Warehouse 2    Warehouse 1  Warehouse 2    Output
Plant 1      $425         $560            125          150          200
Plant 2       510          600            175          200          300

For each retail outlet (RO), the next table shows its monthly demand, its shipping cost per truckload from each warehouse, and the maximum amount that can be shipped per month from each warehouse.

               Unit Shipping Cost         Shipping Capacity
From          RO1     RO2     RO3       RO1     RO2     RO3
Warehouse 1   $470    $505    $490      100     150     100
Warehouse 2    390     410     440      125     150      75
Demand         150     200     150

Management now wants to determine a distribution plan (number of truckloads shipped per month from each plant to each warehouse and from each warehouse to each retail outlet) that will minimize the total shipping cost.
(a) Draw a network that depicts the company's distribution network. Identify the supply nodes, transshipment nodes, and demand nodes in this network.
(b) Formulate this problem as a minimum cost flow problem by inserting all the necessary data into this network.
C (c) Formulate and solve a spreadsheet model for this problem.
C (d) Use the computer to solve this problem without using Excel.

10.6-6. The Audiofile Company produces boomboxes. However, management has decided to subcontract out the production of the speakers needed for the boomboxes. Three vendors are available to supply the speakers. Their price for each shipment of 1,000 speakers is shown below.

Vendor    Price
1         $22,500
2          22,700
3          22,300

In addition, each vendor would charge a shipping cost. Each shipment would go to one of the company's two warehouses. Each vendor has its own formula for calculating this shipping cost based on the mileage to the warehouse. These formulas and the mileage data are shown below.

Vendor    Charge per Shipment
1         $300 + 40¢/mile
2          200 + 50¢/mile
3          500 + 20¢/mile

Vendor    Warehouse 1    Warehouse 2
1         1,600 miles    1,400 miles
2         1,500 miles    1,600 miles
3         2,000 miles    1,000 miles

Whenever one of the company's two factories needs a shipment of speakers to assemble into the boomboxes, the company hires a trucker to bring the shipment in from one of the warehouses. The cost per shipment is given next, along with the number of shipments needed per month at each factory.

                  Unit Shipping Cost
                Factory 1    Factory 2
Warehouse 1       $200         $700
Warehouse 2       $400         $500
Monthly demand     10            6

Each vendor is able to supply as many as 10 shipments per month. However, because of shipping limitations, each vendor is able to send a maximum of only 6 shipments per month to each warehouse. Similarly, each warehouse is able to send a maximum of only 6 shipments per month to each factory. Management now wants to develop a plan for each month regarding how many shipments (if any) to order from each vendor, how many of those shipments should go to each warehouse, and then how many shipments each warehouse should send to each factory. The objective is to minimize the sum of the purchase costs (including the shipping charge) and the shipping costs from the warehouses to the factories.
(a) Draw a network that depicts the company's supply network. Identify the supply nodes, transshipment nodes, and demand nodes in this network.
(b) Formulate this problem as a minimum cost flow problem by inserting all the necessary data into this network. Also include a dummy demand node that receives (at zero cost) all the unused supply capacity at the vendors.
C (c) Formulate and solve a spreadsheet model for this problem.
C (d) Use the computer to solve this problem without using Excel.

10.7-1. Consider the minimum cost flow problem shown below, where the bi values (net flows generated) are given by the nodes, the cij values (costs per unit flow) are given by the arcs, and the uij values (arc capacities) are given between nodes C and D. (Figure: a network with nodes A, B, C, D, and E; the net flows generated are [20] at A, [10] at B, [0] at C and D, and [−30] at E, with the cij values shown on the arcs. Arc capacities: A → C: 10, B → C: 25, others ∞.) Do the following work manually.
(a) Obtain an initial BF solution by solving the feasible spanning tree with basic arcs A → B, C → E, D → E, and C → A (a reverse arc), where one of the nonbasic arcs (C → B) also is a reverse arc. Show the resulting network (including bi, cij, and uij) in the same format as the above one (except use dashed lines to draw the nonbasic arcs), and add the flows in parentheses next to the basic arcs.
(b) Use the optimality test to verify that this initial BF solution is optimal and that there are multiple optimal solutions. Apply one iteration of the network simplex method to find the other optimal BF solution, and then use these results to identify the other optimal solutions that are not BF solutions.
(c) Now consider the following BF solution.

Basic Arc    Flow      Nonbasic Arc
A → D         20       A → B
B → C         10       A → C
C → E         10       B → D
D → E         20

Starting from this BF solution, apply one iteration of the network simplex method. Identify the entering basic arc, the leaving basic arc, and the next BF solution, but do not proceed further.

10.7-2. Reconsider the minimum cost flow problem formulated in Prob. 10.6-2.
(a) Obtain an initial BF solution by solving the feasible spanning tree with basic arcs A → B, A → C, A → F, B → D, and E → F, where two of the nonbasic arcs (E → C and F → D) are reverse arcs.
D,I (b) Use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem.

10.7-3. Reconsider the minimum cost flow problem formulated in Prob. 10.6-3.
(a) Obtain an initial BF solution by solving the feasible spanning tree that corresponds to using just the two rail lines plus factory 1 shipping to warehouse 2 via the distribution center.
D,I (b) Use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem.

D,I 10.7-4. Reconsider the minimum cost flow problem formulated in Prob. 10.6-4. Starting with the initial BF solution that corresponds to replacing the tractor every year, use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem.

D,I 10.7-5. For the P & T Co. transportation problem given in Table 9.2, consider its network representation as a minimum cost flow problem presented in Fig. 9.2. Use the northwest corner rule to obtain an initial BF solution from Table 9.2. Then use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem (and verify the optimal solution given in Sec. 9.1).

10.7-6. Consider the Metro Water District transportation problem presented in Table 9.12.
(a) Formulate the network representation of this problem as a minimum cost flow problem.
(Hint: Arcs where flow is prohibited should be deleted.)
D,I (b) Starting with the initial BF solution given in Table 9.19, use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem. Compare the sequence of BF solutions obtained with the sequence obtained by the transportation simplex method in Table 9.23.

D,I 10.7-7. Consider the minimum cost flow problem shown below, where the bi values are given by the nodes, the cij values are given by the arcs, and the finite uij values are given in parentheses by the arcs. Obtain an initial BF solution by solving the feasible spanning tree with basic arcs A→C, B→A, C→D, and C→E, where one of the nonbasic arcs (D→A) is a reverse arc. Then use the network simplex method yourself (you may use the interactive procedure in your IOR Tutorial) to solve this problem.

[Network figure: nodes A [50], B [80], C [0], D [−70], E [−60]; arc costs shown on the arcs; capacities uAD = 40 and uBE = 40.]

10.8-1. The Tinker Construction Company is ready to begin a project that must be completed in 12 months. This project has four activities (A, B, C, D) with the project network shown next.

[Project network figure: START, activities A, B, C, D, FINISH.]

The project manager, Sean Murphy, has concluded that he cannot meet the deadline by performing all these activities in the normal way. Therefore, Sean has decided to use the CPM method of time-cost trade-offs to determine the most economical way of crashing the project to meet the deadline. He has gathered the following data for the four activities.

Activity    Normal Time    Crash Time    Normal Cost    Crash Cost
   A         8 months       5 months       $25,000        $40,000
   B         9 months       7 months        20,000         30,000
   C         6 months       4 months        16,000         24,000
   D         7 months       4 months        27,000         45,000

Use marginal cost analysis to solve the problem.

10.8-2. Reconsider the Tinker Construction Co. problem presented in Prob. 10.8-1.
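The marginal cost analysis requested in Prob. 10.8-1 can be sketched computationally. The following Python sketch is a hypothetical illustration, not the book's procedure: it assumes the Tinker network reduces to two independent activity paths (A followed by B, and C followed by D, the upper-path/lower-path structure suggested by Prob. 10.8-2), and it crashes one month at a time using the cheapest crashable activity on each path that still exceeds the deadline.

```python
# Marginal cost analysis for project crashing (hypothetical sketch).
# Each activity: (normal_time, crash_time, normal_cost, crash_cost).
acts = {
    "A": (8, 5, 25000, 40000),
    "B": (9, 7, 20000, 30000),
    "C": (6, 4, 16000, 24000),
    "D": (7, 4, 27000, 45000),
}
paths = [["A", "B"], ["C", "D"]]   # assumed independent paths through the network
deadline = 12

def slope(a):
    """Crash cost per month saved for activity a."""
    normal_t, crash_t, normal_c, crash_c = acts[a]
    return (crash_c - normal_c) / (normal_t - crash_t)

reduction = {a: 0 for a in acts}   # months crashed so far, per activity
for path in paths:
    while sum(acts[a][0] - reduction[a] for a in path) > deadline:
        # crash the cheapest activity on this path that still has room
        candidates = [a for a in path if reduction[a] < acts[a][0] - acts[a][1]]
        reduction[min(candidates, key=slope)] += 1

crash_cost = sum(reduction[a] * slope(a) for a in acts)
print(reduction, crash_cost)
```

Under these assumptions the sketch crashes A by 3 months, B by 2, and C by 1, for a total crashing cost of $29,000; a network with shared activities would require considering joint crashing across paths instead of this per-path greedy rule.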
While in college, Sean Murphy took an OR course that devoted a month to linear programming, so Sean has decided to use linear programming to analyze this problem.
(a) Consider the upper path through the project network. Formulate a two-variable linear programming model for the problem of how to minimize the cost of performing this sequence of activities within 12 months. Use the graphical method to solve this model.
(b) Repeat part (a) for the lower path through the project network.
(c) Combine the models in parts (a) and (b) into a single complete linear programming model for the problem of how to minimize the cost of completing the project within 12 months. What must an optimal solution for this model be?
(d) Use the CPM linear programming formulation presented in Sec. 10.8 to formulate a complete model for this problem. [This model is a little larger than the one in part (c) because this method of formulation is applicable to more complicated project networks as well.]
C (e) Use Excel to solve this problem.
C (f) Use another software option to solve this problem.
C (g) Check the effect of changing the deadline by repeating part (e) or (f) with a deadline of 11 months and then with a deadline of 13 months.

10.8-3.* Good Homes Construction Company is about to begin the construction of a large new home. The company's president, Michael Dean, is currently planning the schedule for this project. Michael has identified the five major activities (labeled A, B, . . . , E) that will need to be performed according to the project network shown next, followed by a table giving the normal point and crash point for each of these activities.
[Project network figure: START, activities A through E, FINISH.]

Activity    Normal Time    Crash Time    Normal Cost    Crash Cost
   A         3 weeks        2 weeks        $54,000        $60,000
   B         4 weeks        3 weeks         62,000         65,000
   C         5 weeks        2 weeks         66,000         70,000
   D         3 weeks        1 week          40,000         43,000
   E         4 weeks        2 weeks         75,000         80,000

These costs reflect the company's direct costs for the material, equipment, and direct labor required to perform the activities. In addition, the company incurs indirect project costs such as supervision and other customary overhead costs, interest charges for capital tied up, and so forth. Michael estimates that these indirect costs run $5,000 per week. He wants to minimize the overall cost of the project. Therefore, to save some of these indirect costs, Michael concludes that he should shorten the project by doing some crashing to the extent that the crashing cost for each additional week saved is less than $5,000.
(a) Use marginal cost analysis to determine which activities should be crashed and by how much to minimize the overall cost of the project. Under this plan, what is the duration and cost of each activity? How much money is saved by doing this crashing?
C (b) Now use the linear programming approach to do part (a) by shortening the deadline 1 week at a time.

10.8-4. The 21st Century Studios is about to begin the production of its most important (and most expensive) movie of the year. The movie's producer, Dusty Hoffmer, has decided to use PERT/CPM to help plan and control this key project. He has identified the eight major activities (labeled A, B, . . . , H) required to produce the movie. Their precedence relationships are shown in the project network below.

[Project network figure: START, activities A through H, FINISH.]

Dusty now has learned that another studio also will be coming out with a blockbuster movie during the middle of the upcoming summer, just when his movie was to be released. This would be very unfortunate timing.
Therefore, he and the top management of 21st Century Studios have concluded that they must accelerate production of their movie and bring it out at the beginning of the summer (15 weeks from now) to establish it as THE movie of the year. Although this will require substantially increasing an already huge budget, management feels that this will pay off in much larger box office earnings both nationally and internationally.

Dusty now wants to determine the least costly way of meeting the new deadline 15 weeks hence. Using the CPM method of time-cost trade-offs, he has obtained the following data.

Activity    Normal Time    Crash Time    Normal Cost    Crash Cost
   A         5 weeks        3 weeks      $20 million    $30 million
   B         3 weeks        2 weeks       10 million     20 million
   C         4 weeks        2 weeks       16 million     24 million
   D         6 weeks        3 weeks       25 million     43 million
   E         5 weeks        4 weeks       22 million     30 million
   F         7 weeks        4 weeks       30 million     48 million
   G         9 weeks        5 weeks       25 million     45 million
   H         8 weeks        6 weeks       30 million     44 million

(a) Formulate a linear programming model for this problem.
C (b) Use Excel to solve the problem.
C (c) Use another software option to solve the problem.

10.8-5. The Lockhead Aircraft Co. is ready to begin a project to develop a new fighter airplane for the U.S. Air Force. The company's contract with the Department of Defense calls for project completion within 92 weeks, with penalties imposed for late delivery. The project involves 10 activities (labeled A, B, . . . , J), where their precedence relationships are shown in the project network below.
[Project network figure: START, activities A through J, FINISH.]

Management would like to avoid the hefty penalties for missing the deadline in the current contract. Therefore, the decision has been made to crash the project, using the CPM method of time-cost trade-offs to determine how to do this in the most economical way. The data needed to apply this method are given next.

Activity    Normal Time    Crash Time    Normal Cost     Crash Cost
   A         32 weeks       28 weeks     $160 million    $180 million
   B         28 weeks       25 weeks      125 million     146 million
   C         36 weeks       31 weeks      170 million     210 million
   D         16 weeks       13 weeks       60 million      72 million
   E         32 weeks       27 weeks      135 million     160 million
   F         54 weeks       47 weeks      215 million     257 million
   G         17 weeks       15 weeks       90 million      96 million
   H         20 weeks       17 weeks      120 million     132 million
   I         34 weeks       30 weeks      190 million     226 million
   J         18 weeks       16 weeks       80 million      84 million

(a) Formulate a linear programming model for this problem.
C (b) Use Excel to solve the problem.
C (c) Use another software option to solve the problem.

10.9-1. From the bottom part of the selected references given at the end of the chapter, select one of these award-winning applications of network optimization models. Read this article and then write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.

10.9-2. From the bottom part of the selected references given at the end of the chapter, select three of these award-winning applications of network optimization models. For each one, read the article and then write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.

■ CASES

CASE 10.1 Money in Motion

Jake Nguyen runs a nervous hand through his once finely combed hair. He loosens his once perfectly knotted silk tie. And he rubs his sweaty hands across his once immaculately pressed trousers. Today has certainly not been a good day.
Over the past few months, Jake had heard whispers circulating from Wall Street—whispers from the lips of investment bankers and stockbrokers famous for their outspokenness. They had whispered about a coming Japanese economic collapse—whispered because they had believed that publicly vocalizing their fears would hasten the collapse. And today, their very fears have come true.

Jake and his colleagues gather round a small television dedicated exclusively to the Bloomberg channel. Jake stares in disbelief as he listens to the horrors taking place in the Japanese market. And the Japanese market is taking the financial markets in all other East Asian countries with it in its tailspin. He goes numb.

As manager of Asian foreign investment for Grant Hill Associates, a small West Coast investment boutique specializing in currency trading, Jake bears personal responsibility for any negative impacts of the collapse. And Grant Hill Associates will experience negative impacts. Jake had not heeded the whispered warnings of a Japanese collapse. Instead, he had greatly increased the stake Grant Hill Associates held in the Japanese market. Because the Japanese market had performed better than expected over the past year, Jake had increased investments in Japan from 2.5 million to 15 million dollars only 1 month ago. At that time, 1 dollar was worth 80 yen.

No longer. Jake realizes that today's devaluation of the yen means that 1 dollar is worth 125 yen. He will be able to liquidate these investments without any loss in yen, but now the dollar loss when converting back into U.S. currency would be huge. He takes a deep breath, closes his eyes, and mentally prepares himself for serious damage control.

Jake's meditation is interrupted by a booming voice calling for him from a large corner office. Grant Hill, the president of Grant Hill Associates, yells, "Nguyen, get the hell in here!" Jake jumps and looks reluctantly toward the corner office hiding the furious Grant Hill.
He smooths his hair, tightens his tie, and walks briskly into the office. Grant Hill meets Jake's eyes upon his entrance and continues yelling, "I don't want one word out of you, Nguyen! No excuses; just fix this debacle! Get all of our money out of Japan! My gut tells me this is only the beginning! Get the money into safe U.S. bonds! NOW! And don't forget to get our cash positions out of Indonesia and Malaysia ASAP with it!"

Jake has enough common sense to say nothing. He nods his head, turns on his heel, and practically runs out of the office.

Safely back at his desk, Jake begins formulating a plan to move the investments out of Japan, Indonesia, and Malaysia. His experiences investing in foreign markets have taught him that when playing with millions of dollars, how he gets money out of a foreign market is almost as important as when he gets money out of the market. The banking partners of Grant Hill Associates charge different transaction fees for converting one currency into another and for wiring large sums of money around the globe.

And now, to make matters worse, the governments in East Asia have imposed very tight limits on the amount of money an individual or a company can exchange from the domestic currency into a particular foreign currency and withdraw from the country. The goal of this dramatic measure is to reduce the outflow of foreign investments out of those countries to prevent a complete collapse of the economies in the region. Because of Grant Hill Associates' cash holdings of 10.5 billion Indonesian rupiahs and 28 million Malaysian ringgits, along with the holdings in yen, it is not clear how these holdings should be converted back into dollars. Jake wants to find the most cost-effective method to convert these holdings into dollars.

On his company's website he always can find up-to-the-minute exchange rates for most currencies in the world (Table 1). The table states that, for example, 1 Japanese yen equals 0.008 U.S. dollars.
By making a few phone calls he discovers the transaction costs his company must pay for large currency transactions during these critical times (Table 2). Jake notes that exchanging one currency for another results in the same transaction cost as the reverse conversion. Finally, Jake finds out the maximum amounts of domestic currencies his company is allowed to convert into other currencies in Japan, Indonesia, and Malaysia (Table 3).

(a) Formulate Jake's problem as a minimum cost flow problem, and draw the network for his problem. Identify the supply and demand nodes for the network.
(b) Which currency transactions must Jake perform in order to convert the investments from yen, rupiah, and ringgit into U.S. dollars to ensure that Grant Hill Associates has the maximum dollar amount after all transactions have occurred? How much money does Jake have to invest in U.S. bonds?
(c) The World Trade Organization forbids transaction limits because they promote protectionism. If no transaction limits exist, what method should Jake use to convert the Asian holdings from the respective currencies into dollars?

TABLE 1 Currency exchange rates (units of the column currency per unit of the row currency)

From \ To            Yen     Rupiah    Ringgit    U.S. Dollar    Can. Dollar    Euro        Pound       Peso
Japanese yen          1      50        0.04       0.008          0.01           0.0064      0.0048      0.0768
Indonesian rupiah            1         0.0008     0.00016        0.0002         0.000128    0.000096    0.001536
Malaysian ringgit                      1          0.2            0.25           0.16        0.12        1.92
U.S. dollar                                       1              1.25           0.8         0.6         9.6
Canadian dollar                                                  1              0.64        0.48        7.68
European euro                                                                   1           0.75        12
English pound                                                                               1           16
Mexican peso                                                                                            1

TABLE 2 Transaction cost, percent

From \ To          Yen    Rupiah    Ringgit    U.S. Dollar    Can. Dollar    Euro    Pound    Peso
Yen                 —     0.5       0.5        0.4            0.4            0.4     0.25     0.5
Rupiah                    —         0.7        0.5            0.3            0.3     0.75     0.75
Ringgit                             —          0.7            0.7            0.4     0.45     0.5
U.S. dollar                                    —              0.05           0.1     0.1      0.1
Canadian dollar                                               —              0.2     0.1      0.1
Euro                                                                         —       0.05     0.5
Pound                                                                                —        0.5
Peso                                                                                          —

TABLE 3 Transaction limits in equivalent of 1,000 dollars

From \ To    Yen      Rupiah    Ringgit    U.S. Dollar    Can. Dollar    Euro     Pound    Peso
Yen           —       5,000     5,000      2,000          2,000          2,000    2,000    4,000
Rupiah       5,000    —         2,000      200            200            1,000    500      200
Ringgit      3,000    4,500     —          1,500          1,500          2,500    1,000    1,000

(d) In response to the World Trade Organization's mandate forbidding transaction limits, the Indonesian government introduces a new tax that increases the transaction costs for rupiah transactions by 500 percent in order to protect its currency. Given these new transaction costs but no transaction limits, what currency transactions should Jake perform in order to convert the Asian holdings from the respective currencies into dollars?
(e) Jake realizes that his analysis is incomplete because he has not included all aspects that might influence his planned currency exchanges. Describe other factors that Jake should examine before he makes his final decision.

(Note: A data file for this case is provided on the book's website for your convenience.)

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 10.2 Aiding Allies

A rebel army is attempting to overthrow the elected government of the Russian Federation. The United States government has decided to assist its ally by quickly sending troops and supplies to the Federation. A plan now needs to be developed for shipping the troops and supplies most effectively. Depending on the choice of the overall measure of performance, the analysis requires formulating and solving a shortest-path problem, a minimum cost flow problem, or a maximum flow problem. Subsequent analysis requires formulating and solving a minimum spanning tree problem.

CASE 10.3 Steps to Success

The management of a privately held company has made the decision to go public. Many interrelated steps need to be completed in the process of making the initial public offering of stock in the company. Management wishes to accelerate this process.
Therefore, after you construct a project network to represent this process, apply the CPM method of time-cost trade-offs.

CHAPTER 11

Dynamic Programming

Dynamic programming is a useful mathematical technique for making a sequence of interrelated decisions. It provides a systematic procedure for determining the optimal combination of decisions. In contrast to linear programming, there does not exist a standard mathematical formulation of "the" dynamic programming problem. Rather, dynamic programming is a general type of approach to problem solving, and the particular equations used must be developed to fit each situation. Therefore, a certain degree of ingenuity and insight into the general structure of dynamic programming problems is required to recognize when and how a problem can be solved by dynamic programming procedures. These abilities can best be developed by an exposure to a wide variety of dynamic programming applications and a study of the characteristics that are common to all these situations. A large number of illustrative examples are presented for this purpose. (Some of these examples are small enough that they also could be solved fairly quickly by exhaustive enumeration, but dynamic programming provides a vastly more efficient way of solving larger versions of these examples.)

■ 11.1 A PROTOTYPE EXAMPLE FOR DYNAMIC PROGRAMMING

EXAMPLE 1 The Stagecoach Problem

The STAGECOACH PROBLEM is a problem specially constructed1 to illustrate the features and to introduce the terminology of dynamic programming. It concerns a mythical fortune seeker in Missouri who decided to go west to join the gold rush in California during the mid-19th century. The journey would require traveling by stagecoach through unsettled country where there was serious danger of attack by marauders. Although his starting point and destination were fixed, he had considerable choice as to which states (or territories that subsequently became states) to travel through en route.
The possible routes are shown in Fig. 11.1, where each state is represented by a circled letter and the direction of travel is always from left to right in the diagram. Thus, four stages (stagecoach runs) were required to travel from his point of embarkation in state A (Missouri) to his destination in state J (California).

This fortune seeker was a prudent man who was quite concerned about his safety. After some thought, he came up with a rather clever way of determining the safest route. Life insurance policies were offered to stagecoach passengers. Because the cost of the policy for taking any given stagecoach run was based on a careful evaluation of the safety of that run, the safest route should be the one with the cheapest total life insurance policy.

[FIGURE 11.1 The road system and costs for the stagecoach problem.]

The cost for the standard policy on the stagecoach run from state i to state j, which will be denoted by cij, is

         B   C   D            E   F   G            H   I            J
    A    2   4   3       B    7   4   6       E    1   4       H    3
                         C    3   2   4       F    6   3       I    4
                         D    4   1   5       G    3   3

These costs are also shown in Fig. 11.1. We shall now focus on the question of which route minimizes the total cost of the policy.

Solving the Problem

First note that the shortsighted approach of selecting the cheapest run offered by each successive stage need not yield an overall optimal decision. Following this strategy would give the route A → B → F → I → J, at a total cost of 13. However, sacrificing a little on one stage may permit greater savings thereafter. For example, A → D → F is cheaper overall than A → B → F.

One possible approach to solving this problem is to use trial and error.2 However, the number of possible routes is large (18), and having to calculate the total cost for each route is not an appealing task.

1 This problem was developed by Professor Harvey M. Wagner while he was at Stanford University.
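That enumeration is easy to mechanize. A quick Python sketch (the code itself is not part of the text) that lists every route using the cost table above and evaluates each:

```python
# Exhaustive enumeration of all routes for the stagecoach problem
# (cost data from Fig. 11.1).
c = {
    "A": {"B": 2, "C": 4, "D": 3},
    "B": {"E": 7, "F": 4, "G": 6},
    "C": {"E": 3, "F": 2, "G": 4},
    "D": {"E": 4, "F": 1, "G": 5},
    "E": {"H": 1, "I": 4},
    "F": {"H": 6, "I": 3},
    "G": {"H": 3, "I": 3},
    "H": {"J": 3},
    "I": {"J": 4},
}

def routes_from(state):
    """Yield every route from the given state to the destination J."""
    if state == "J":
        yield ["J"]
        return
    for nxt in c[state]:
        for rest in routes_from(nxt):
            yield [state] + rest

def route_cost(route):
    return sum(c[a][b] for a, b in zip(route, route[1:]))

routes = list(routes_from("A"))
print(len(routes))                          # 18 possible routes
print(min(route_cost(r) for r in routes))   # cheapest total policy cost: 11
```

This confirms the count of 18 routes (3 × 3 × 2 choices across the stages) and shows why hand calculation of all of them is unappealing even for this small example.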
Fortunately, dynamic programming provides a solution with much less effort than exhaustive enumeration. (The computational savings are enormous for larger versions of this problem.) Dynamic programming starts with a small portion of the original problem and finds the optimal solution for this smaller problem. It then gradually enlarges the problem, finding the current optimal solution from the preceding one, until the original problem is solved in its entirety.

2 This problem also can be formulated as a shortest-path problem (see Sec. 10.3), where costs here play the role of distances in the shortest-path problem. The algorithm presented in Sec. 10.3 actually uses the philosophy of dynamic programming. However, because the present problem has a fixed number of stages, the dynamic programming approach presented here is even better.

For the stagecoach problem, we start with the smaller problem where the fortune seeker has nearly completed his journey and has only one more stage (stagecoach run) to go. The obvious optimal solution for this smaller problem is to go from his current state (whatever it is) to his ultimate destination (state J). At each subsequent iteration, the problem is enlarged by increasing by 1 the number of stages left to go to complete the journey. For this enlarged problem, the optimal solution for where to go next from each possible state can be found relatively easily from the results obtained at the preceding iteration. The details involved in implementing this approach follow.

Formulation. Let the decision variables xn (n = 1, 2, 3, 4) be the immediate destination on stage n (the nth stagecoach run to be taken). Thus, the route selected is A → x1 → x2 → x3 → x4, where x4 = J.

Let fn(s, xn) be the total cost of the best overall policy for the remaining stages, given that the fortune seeker is in state s, ready to start stage n, and selects xn as the immediate destination.
Given s and n, let xn* denote any value of xn (not necessarily unique) that minimizes fn(s, xn), and let fn*(s) be the corresponding minimum value of fn(s, xn). Thus,

    fn*(s) = min over xn of fn(s, xn) = fn(s, xn*),

where

    fn(s, xn) = immediate cost (stage n) + minimum future cost (stages n + 1 onward)
              = csxn + f*n+1(xn).

The value of csxn is given by the preceding tables for cij by setting i = s (the current state) and j = xn (the immediate destination). Because the ultimate destination (state J) is reached at the end of stage 4, f5*(J) = 0.

The objective is to find f1*(A) and the corresponding route. Dynamic programming finds it by successively finding f4*(s), f3*(s), f2*(s), for each of the possible states s and then using f2*(s) to solve for f1*(A).3

Solution Procedure. When the fortune seeker has only one more stage to go (n = 4), his route thereafter is determined entirely by his current state s (either H or I) and his final destination x4 = J, so the route for this final stagecoach run is s → J. Therefore, since f4*(s) = f4(s, J) = cs,J, the immediate solution to the n = 4 problem is

n = 4:     s     f4*(s)    x4*
           H       3        J
           I       4        J

When the fortune seeker has two more stages to go (n = 3), the solution procedure requires a few calculations. For example, suppose that the fortune seeker is in state F. Then, as depicted below, he must next go to either state H or I at an immediate cost of cF,H = 6 or cF,I = 3, respectively. If he chooses state H, the minimum additional cost after he reaches there is given in the preceding table as f4*(H) = 3, as shown above the H node in the diagram. Therefore, the total cost for this decision is 6 + 3 = 9. If he chooses state I instead, the total cost is 3 + 4 = 7, which is smaller. Therefore, the optimal choice is this latter one, x3* = I, because it gives the minimum cost f3*(F) = 7.

3 Because this procedure involves moving backward stage by stage, some writers also count n backward to denote the number of remaining stages to the destination.
We use the more natural forward counting for greater simplicity.

[Diagram: from state F, arcs to H (cost 6, with f4*(H) = 3 shown by node H) and to I (cost 3, with f4*(I) = 4 shown by node I).]

Similar calculations need to be made when you start from the other two possible states s = E and s = G with two stages to go. Try it, proceeding both graphically (Fig. 11.1) and algebraically [combining cij and f4*(s) values], to verify the following complete results for the n = 3 problem.

n = 3:     f3(s, x3) = csx3 + f4*(x3)

    s     x3 = H    x3 = I    f3*(s)    x3*
    E       4         8         4        H
    F       9         7         7        I
    G       6         7         6        H

The solution for the second-stage problem (n = 2), where there are three stages to go, is obtained in a similar fashion. In this case, f2(s, x2) = csx2 + f3*(x2). For example, suppose that the fortune seeker is in state C, as depicted below:

[Diagram: from state C, arcs to E (cost 3, f3*(E) = 4), to F (cost 2, f3*(F) = 7), and to G (cost 4, f3*(G) = 6).]

He must next go to state E, F, or G at an immediate cost of cC,E = 3, cC,F = 2, or cC,G = 4, respectively. After getting there, the minimum additional cost for stage 3 to the end is given by the n = 3 table as f3*(E) = 4, f3*(F) = 7, or f3*(G) = 6, respectively, as shown next to the corresponding nodes in the preceding diagram. The resulting calculations for the three alternatives are summarized below:

    x2 = E:    f2(C, E) = cC,E + f3*(E) = 3 + 4 = 7.
    x2 = F:    f2(C, F) = cC,F + f3*(F) = 2 + 7 = 9.
    x2 = G:    f2(C, G) = cC,G + f3*(G) = 4 + 6 = 10.

The minimum of these three numbers is 7, so the minimum total cost from state C to the end is f2*(C) = 7, and the immediate destination should be x2* = E.

Making similar calculations when you start from state B or D (try it) yields the following results for the n = 2 problem:

n = 2:     f2(s, x2) = csx2 + f3*(x2)

    s     x2 = E    x2 = F    x2 = G    f2*(s)    x2*
    B      11        11        12        11      E or F
    C       7         9        10         7      E
    D       8         8        11         8      E or F

In the first and third rows of this table, note that E and F tie as the minimizing value of x2, so the immediate destination from either state B or D should be x2* = E or F.
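The stage tables just constructed (and the n = 1 table that follows) can also be generated mechanically. A minimal Python sketch of the backward recursion fn*(s) = min over xn of {csxn + f*n+1(xn)}, using the Fig. 11.1 cost data (the code and variable names are this sketch's own, not the text's):

```python
# Backward recursion for the stagecoach problem (costs from Fig. 11.1).
c = {
    "A": {"B": 2, "C": 4, "D": 3},
    "B": {"E": 7, "F": 4, "G": 6},
    "C": {"E": 3, "F": 2, "G": 4},
    "D": {"E": 4, "F": 1, "G": 5},
    "E": {"H": 1, "I": 4},
    "F": {"H": 6, "I": 3},
    "G": {"H": 3, "I": 3},
    "H": {"J": 3},
    "I": {"J": 4},
}

f_star = {"J": 0}   # f*(s): minimum cost from state s to the end; f*(J) = 0
x_star = {}         # x*(s): optimal immediate destination(s) from state s
for stage_states in (["H", "I"], ["E", "F", "G"], ["B", "C", "D"], ["A"]):
    for s in stage_states:   # work backward, one stage at a time
        best = min(c[s][x] + f_star[x] for x in c[s])
        f_star[s] = best
        x_star[s] = [x for x in c[s] if c[s][x] + f_star[x] == best]

print(f_star["A"], x_star["A"])   # 11 ['C', 'D']
```

Each pass over a stage reproduces the corresponding table: the `best` value is the fn*(s) column and the list comprehension collects the (possibly tied) xn* column.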
Moving to the first-stage problem (n = 1), with all four stages to go, we see that the calculations are similar to those just shown for the second-stage problem (n = 2), except now there is just one possible starting state s = A, as depicted below.

[Diagram: from state A, arcs to B (cost 2, f2*(B) = 11), to C (cost 4, f2*(C) = 7), and to D (cost 3, f2*(D) = 8).]

These calculations are summarized next for the three alternatives for the immediate destination:

    x1 = B:    f1(A, B) = cA,B + f2*(B) = 2 + 11 = 13.
    x1 = C:    f1(A, C) = cA,C + f2*(C) = 4 + 7 = 11.
    x1 = D:    f1(A, D) = cA,D + f2*(D) = 3 + 8 = 11.

Since 11 is the minimum, f1*(A) = 11 and x1* = C or D, as shown in the following table:

n = 1:     f1(s, x1) = csx1 + f2*(x1)

    s     x1 = B    x1 = C    x1 = D    f1*(s)    x1*
    A      13        11        11        11      C or D

An optimal solution for the entire problem can now be identified from the four tables. Results for the n = 1 problem indicate that the fortune seeker should go initially to either state C or state D. Suppose that he chooses x1* = C. For n = 2, the result for s = C is x2* = E. This result leads to the n = 3 problem, which gives x3* = H for s = E, and the n = 4 problem yields x4* = J for s = H. Hence, one optimal route is A → C → E → H → J. Choosing x1* = D leads to the other two optimal routes A → D → E → H → J and A → D → F → I → J. They all yield a total cost of f1*(A) = 11.

These results of the dynamic programming analysis also are summarized in Fig. 11.2. Note how the two arrows for stage 1 come from the first and last columns of the n = 1 table and the resulting cost comes from the next-to-last column.

[FIGURE 11.2 Graphical display of the dynamic programming solution of the stagecoach problem. Each arrow shows an optimal policy decision (the best immediate destination) from that state, where the number by the state is the resulting cost from there to the end. Following the boldface arrows from A to J gives the three optimal solutions (the three routes giving the minimum total cost of 11).]

Each of the other
arrows (and the resulting cost) comes from one row in one of the other tables in just the same way.

You will see in the next section that the special terms describing the particular context of this problem—stage, state, and policy—actually are part of the general terminology of dynamic programming with an analogous interpretation in other contexts.

■ 11.2 CHARACTERISTICS OF DYNAMIC PROGRAMMING PROBLEMS

The stagecoach problem is a literal prototype of dynamic programming problems. In fact, this example was purposely designed to provide a literal physical interpretation of the rather abstract structure of such problems. Therefore, one way to recognize a situation that can be formulated as a dynamic programming problem is to notice that its basic structure is analogous to the stagecoach problem. These basic features that characterize dynamic programming problems are presented and discussed here.

1. The problem can be divided into stages, with a policy decision required at each stage.
The stagecoach problem was literally divided into its four stages (stagecoaches) that correspond to the four legs of the journey. The policy decision at each stage was which life insurance policy to choose (i.e., which destination to select for the next stagecoach ride). Similarly, other dynamic programming problems require making a sequence of interrelated decisions, where each decision corresponds to one stage of the problem.

2. Each stage has a number of states associated with the beginning of that stage.
The states associated with each stage in the stagecoach problem were the states (or territories) in which the fortune seeker could be located when embarking on that particular leg of the journey. In general, the states are the various possible conditions in which the system might be at that stage of the problem.
The number of states may be either finite (as in the stagecoach problem) or infinite (as in some subsequent examples).

3. The effect of the policy decision at each stage is to transform the current state to a state associated with the beginning of the next stage (possibly according to a probability distribution).
The fortune seeker's decision as to his next destination led him from his current state to the next state on his journey. This procedure suggests that dynamic programming problems can be interpreted in terms of the networks described in Chap. 10. Each node would correspond to a state. The network would consist of columns of nodes, with each column corresponding to a stage, so that the flow from a node can go only to a node in the next column to the right. The links from a node to nodes in the next column correspond to the possible policy decisions on which state to go to next. The value assigned to each link usually can be interpreted as the immediate contribution to the objective function from making that policy decision. In most cases, the objective corresponds to finding either the shortest or the longest path through the network.

4. The solution procedure is designed to find an optimal policy for the overall problem, i.e., a prescription of the optimal policy decision at each stage for each of the possible states.
For the stagecoach problem, the solution procedure constructed a table for each stage (n) that prescribed the optimal decision (xn*) for each possible state (s). Thus, in addition to identifying three optimal solutions (optimal routes) for the overall problem, the results show the fortune seeker how he should proceed if he gets detoured to a state that is not on an optimal route.
For any problem, dynamic programming provides this kind of policy prescription of what to do under every possible circumstance (which is why the actual decision made upon reaching a particular state at a given stage is referred to as a policy decision). Providing this additional information beyond simply specifying an optimal solution (optimal sequence of decisions) can be helpful in a variety of ways, including sensitivity analysis.

5. Given the current state, an optimal policy for the remaining stages is independent of the policy decisions adopted in previous stages. Therefore, the optimal immediate decision depends on only the current state and not on how you got there. This is the principle of optimality for dynamic programming. Given the state in which the fortune seeker is currently located, the optimal life insurance policy (and its associated route) from this point onward is independent of how he got there. For dynamic programming problems in general, knowledge of the current state of the system conveys all the information about its previous behavior necessary for determining the optimal policy henceforth. (This property is the Markovian property, discussed in Sec. 29.2.) Any problem lacking this property cannot be formulated as a dynamic programming problem.

6. The solution procedure begins by finding the optimal policy for the last stage. The optimal policy for the last stage prescribes the optimal policy decision for each of the possible states at that stage. The solution of this one-stage problem is usually trivial, as it was for the stagecoach problem.

7. A recursive relationship that identifies the optimal policy for stage n, given the optimal policy for stage n + 1, is available. For the stagecoach problem, this recursive relationship was

fn*(s) = min_{xn} {c(s, xn) + fn+1*(xn)}.

Therefore, finding the optimal policy decision when you start in state s at stage n requires finding the minimizing value of xn.
For this particular problem, the corresponding minimum cost is achieved by using this value of xn and then following the optimal policy when you start in state xn at stage n + 1. The precise form of the recursive relationship differs somewhat among dynamic programming problems. However, notation analogous to that introduced in the preceding section will continue to be used here, as summarized below:

N = number of stages.
n = label for current stage (n = 1, 2, . . . , N).
sn = current state for stage n.
xn = decision variable for stage n.
xn* = optimal value of xn (given sn).
fn(sn, xn) = contribution of stages n, n + 1, . . . , N to objective function if system starts in state sn at stage n, immediate decision is xn, and optimal decisions are made thereafter.
fn*(sn) = fn(sn, xn*).

The recursive relationship will always be of the form

fn*(sn) = max_{xn} {fn(sn, xn)}     or     fn*(sn) = min_{xn} {fn(sn, xn)},

where fn(sn, xn) would be written in terms of sn, xn, fn+1*(sn+1), and probably some measure of the immediate contribution of xn to the objective function. It is the inclusion of fn+1*(sn+1) on the right-hand side, so that fn*(sn) is defined in terms of fn+1*(sn+1), that makes the expression for fn*(sn) a recursive relationship.

The recursive relationship keeps recurring as we move backward stage by stage. When the current stage number n is decreased by 1, the new fn*(sn) function is derived by using the fn+1*(sn+1) function that was just derived during the preceding iteration, and then this process keeps repeating. This property is emphasized in the next (and final) characteristic of dynamic programming.

8. When we use this recursive relationship, the solution procedure starts at the end and moves backward stage by stage—each time finding the optimal policy for that stage—until it finds the optimal policy starting at the initial stage.
This optimal policy immediately yields an optimal solution for the entire problem, namely, x1* for the initial state s1, then x2* for the resulting state s2, then x3* for the resulting state s3, and so forth to xN* for the resulting state sN. This backward movement was demonstrated by the stagecoach problem, where the optimal policy was found successively beginning in each state at stages 4, 3, 2, and 1, respectively.4

For all dynamic programming problems, a table such as the following would be obtained for each stage (n = N, N − 1, . . . , 1):

           xn
sn    fn(sn, xn)    fn*(sn)    xn*

When this table is finally obtained for the initial stage (n = 1), the problem of interest is solved. Because the initial state is known, the initial decision is specified by x1* in this table. The optimal value of the other decision variables is then specified by the other tables in turn according to the state of the system that results from the preceding decisions.

■ 11.3 DETERMINISTIC DYNAMIC PROGRAMMING

This section further elaborates upon the dynamic programming approach to deterministic problems, where the state at the next stage is completely determined by the state and policy decision at the current stage. The probabilistic case, where there is a probability distribution for what the next state will be, is discussed in the next section.

4 Actually, for this problem the solution procedure can move either backward or forward. However, for many problems (especially when the stages correspond to time periods), the solution procedure must move backward.

Deterministic dynamic programming can be described diagrammatically as shown in Fig. 11.3. Thus, at stage n the process will be in some state sn. Making policy decision xn then moves the process to some state sn+1 at stage n + 1. The contribution thereafter to the objective function under an optimal policy has been previously calculated to be fn+1*(sn+1).
The policy decision xn also makes some contribution to the objective function. Combining these two quantities in an appropriate way provides fn(sn, xn), the contribution of stages n onward to the objective function. Optimizing with respect to xn then gives fn*(sn) = fn(sn, xn*). After xn* and fn*(sn) are found for each possible value of sn, the solution procedure is ready to move back one stage.

One way of categorizing deterministic dynamic programming problems is by the form of the objective function. For example, the objective might be to minimize the sum of the contributions from the individual stages (as for the stagecoach problem), or to maximize such a sum, or to minimize a product of such terms, and so on. Another categorization is in terms of the nature of the set of states for the respective stages. In particular, states sn might be representable by a discrete state variable (as for the stagecoach problem) or by a continuous state variable, or perhaps a state vector (more than one variable) is required. Similarly, the decision variables (x1, x2, . . . , xN) also can be either discrete or continuous.

Several examples are presented to illustrate some of these possibilities. More importantly, they illustrate that these apparently major differences are actually quite inconsequential (except in terms of computational difficulty) because the underlying basic structure shown in Fig. 11.3 always remains the same.

The first new example arises in a much different context from the stagecoach problem, but it has the same mathematical formulation except that the objective is to maximize rather than minimize a sum.

EXAMPLE 2 Distributing Medical Teams to Countries

The WORLD HEALTH COUNCIL is devoted to improving health care in the underdeveloped countries of the world. It now has five medical teams available to allocate among three such countries to improve their medical care, health education, and training programs.
Therefore, the council needs to determine how many teams (if any) to allocate to each of these countries to maximize the total effectiveness of the five teams. The teams must be kept intact, so the number allocated to each country must be an integer. The measure of performance being used is additional person-years of life. (For a particular country, this measure equals the increased life expectancy in years times the country's population.) Table 11.1 gives the estimated additional person-years of life (in multiples of 1,000) for each country for each possible allocation of medical teams. Which allocation maximizes the measure of performance?

Formulation. This problem requires making three interrelated decisions, namely, how many medical teams to allocate to each of the three countries. Therefore, even though there is no fixed sequence, these three countries can be considered as the three stages in

■ FIGURE 11.3 The basic structure for deterministic dynamic programming. [Diagram: at stage n, state sn with value fn(sn, xn); decision xn makes its contribution and leads to state sn+1 at stage n + 1 with value fn+1*(sn+1).]

An Application Vignette

Six days after Saddam Hussein ordered his Iraqi military forces to invade Kuwait on August 2, 1990, the United States began the long process of deploying many of its own military units and cargo to the region. After developing a coalition force from 35 nations led by the United States, the military operation called Operation Desert Storm was launched on January 17, 1991, to expel the Iraqi troops from Kuwait. This led to a decisive victory for the coalition forces, which liberated Kuwait and penetrated Iraq. The logistical challenge involved in quickly transporting the needed troops and cargo to the war zone was a daunting one. A typical airlift mission carrying troops and cargo from the United States to the Persian Gulf required a three-day round-trip, visited seven or more different airfields, burned almost one million pounds of fuel, and cost $280,000.
During Operation Desert Storm, the Military Airlift Command (MAC) averaged more than 100 such missions daily as it managed the largest airlift in history. To meet this challenge, operations research was applied to develop the decision support systems needed to schedule and route each airlift mission. The OR technique used to drive this process was dynamic programming. The stages in the dynamic programming formulation correspond to the airfields in the network of flight legs relevant to the mission. For a given airfield, the states are characterized by the departure time from the airfield and the remaining available duty for the current crew. The objective function to be minimized is a weighted sum of several measures of performance: the lateness of deliveries, the flying time of the mission, the ground time, and the number of crew changes. The constraints include a lower bound on the load carried by the mission and upper bounds on the availability of crew and ground-support resources at airfields.

This application of dynamic programming had a dramatic impact on the ability to deliver the necessary cargo and personnel to the Persian Gulf quickly to support Operation Desert Storm. For example, when speaking to the developers of this approach, MAC's deputy chief of staff for operations and transportation is quoted as saying, "I guarantee you that we could not have done that (the deployment to the Persian Gulf) without your help and the contributions you made to (the decision support systems)—we absolutely could not have done that."

Source: M. C. Hilliard, R. S. Solanki, C. Liu, I. K. Busch, G. Harrison, and R. D. Kraemer: "Scheduling the Operation Desert Storm Airlift: An Advanced Automated Scheduling Support System," Interfaces, 22(1): 131–146, Jan.–Feb. 1992.
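The backward recursion of Sec. 11.2 (characteristic 7) is straightforward to sketch in code. The Python sketch below solves a small staged network by the relation fn*(s) = min_{xn} {c(s, xn) + fn+1*(xn)}; the network and its costs are hypothetical illustrative data, not the stagecoach network of this chapter.

```python
# Backward dynamic programming over a staged network, in the spirit of the
# stagecoach problem: f_n*(s) = min over x_n of { c(s, x_n) + f*_{n+1}(x_n) }.
# The states and link costs below are hypothetical, for illustration only.

def solve_staged_network(stages, costs, terminal):
    """stages: list of state lists, one per stage (terminal stage excluded).
    costs[(s, x)]: immediate cost of moving from state s to state x.
    terminal: the single final state."""
    f = {terminal: 0}        # f* values for the stage just solved
    policy = {}              # optimal decision x_n* for each state
    for column in reversed(stages):   # solve stages from last to first
        f_prev = {}
        for s in column:
            # every state x reachable from s whose f* value is already known
            options = [(c + f[x], x) for (u, x), c in costs.items()
                       if u == s and x in f]
            best_cost, x_star = min(options)
            f_prev[s] = best_cost
            policy[s] = x_star
        f = f_prev
    return f, policy

# Hypothetical 3-stage network: A -> {B, C} -> {D, E} -> Z
costs = {('A', 'B'): 2, ('A', 'C'): 4,
         ('B', 'D'): 7, ('B', 'E'): 4,
         ('C', 'D'): 3, ('C', 'E'): 2,
         ('D', 'Z'): 1, ('E', 'Z'): 5}
f, policy = solve_staged_network([['A'], ['B', 'C'], ['D', 'E']], costs, 'Z')
print(f['A'], policy['A'])   # prints: 8 C  (optimal route A -> C -> D -> Z)
```

As in the stagecoach tables, `policy` records the optimal decision for every state, not just the states on the optimal route, so a "detoured" traveler can still read off the best continuation.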
■ TABLE 11.1 Data for the World Health Council problem

                 Thousands of Additional Person-Years of Life
                                   Country
Medical Teams         1          2          3
      0               0          0          0
      1              45         20         50
      2              70         45         70
      3              90         75         80
      4             105        110        100
      5             120        150        130

a dynamic programming formulation. The decision variables xn (n = 1, 2, 3) are the number of teams to allocate to stage (country) n.

The identification of the states may not be readily apparent. To determine the states, we ask questions such as the following. What is it that changes from one stage to the next? Given that the decisions have been made at the previous stages, how can the status of the situation at the current stage be described? What information about the current state of affairs is necessary to determine the optimal policy hereafter? On these bases, an appropriate choice for the "state of the system" is

sn = number of medical teams still available for allocation to remaining countries (n, . . . , 3).

Thus, at stage 1 (country 1), where all three countries remain under consideration for allocations, s1 = 5. However, at stage 2 or 3 (country 2 or 3), sn is just 5 minus the number of teams allocated at preceding stages, so that the sequence of states is

s1 = 5,    s2 = 5 − x1,    s3 = s2 − x2.

With the dynamic programming procedure of solving backward stage by stage, when we are solving at stage 2 or 3, we shall not yet have solved for the allocations at the preceding stages. Therefore, we shall consider every possible state we could be in at stage 2 or 3, namely, sn = 0, 1, 2, 3, 4, or 5. Figure 11.4 shows the states to be considered at each stage. The links (line segments) show the possible transitions in states from one stage to the next from making a feasible allocation of medical teams to the country involved.
The numbers shown next to the links are the corresponding contributions to the measure of performance, where these numbers come from Table 11.1.

■ FIGURE 11.4 Graphical display of the World Health Council problem, showing the possible states at each stage, the possible transitions in states, and the corresponding contributions to the measure of performance. [Diagram: states 0 through 5 at each of the three stages, with links labeled by the pn(xn) values from Table 11.1.]

From the perspective of this figure, the overall problem is to find the path from the initial state 5 (beginning stage 1) to the final state 0 (after stage 3) that maximizes the sum of the numbers along the path.

To state the overall problem mathematically, let pi(xi) be the measure of performance from allocating xi medical teams to country i, as given in Table 11.1. Thus, the objective is to choose x1, x2, x3 so as to

Maximize Σ_{i=1}^{3} pi(xi),

subject to

Σ_{i=1}^{3} xi = 5,

and the xi are nonnegative integers.

Using the notation presented in Sec. 11.2, we see that fn(sn, xn) is

fn(sn, xn) = pn(xn) + max Σ_{i=n+1}^{3} pi(xi),

where the maximum is taken over xn+1, . . . , x3 such that

Σ_{i=n}^{3} xi = sn

and the xi are nonnegative integers, for n = 1, 2, 3. In addition,

fn*(sn) = max_{xn = 0, 1, . . . , sn} fn(sn, xn).

Therefore,

fn(sn, xn) = pn(xn) + fn+1*(sn − xn)

(with f4* defined to be zero). These basic relationships are summarized in Fig. 11.5.

Consequently, the recursive relationship relating the functions f1*, f2*, and f3* for this problem is

fn*(sn) = max_{xn = 0, 1, . . . , sn} {pn(xn) + fn+1*(sn − xn)},    for n = 1, 2.

For the last stage (n = 3),

f3*(s3) = max_{x3 = 0, 1, . . . , s3} p3(x3).

The resulting dynamic programming calculations are given next.

Solution Procedure. Beginning with the last stage (n = 3), we note that the values of p3(x3) are given in the last column of Table 11.1 and these values keep increasing as we move down the column.
Therefore, with s3 medical teams still available for allocation to country 3, the maximum of p3(x3) is automatically achieved by allocating all s3 teams; so x3* = s3 and f3*(s3) = p3(s3), as shown in the following table.

n = 3:
s3    f3*(s3)    x3*
0        0        0
1       50        1
2       70        2
3       80        3
4      100        4
5      130        5

We now move backward to start from the next-to-last stage (n = 2). Here, finding x2* requires calculating and comparing f2(s2, x2) for the alternative values of x2, namely, x2 = 0, 1, . . . , s2. To illustrate, we depict this situation when s2 = 2 graphically:

[Diagram: from state 2 at stage 2, links labeled p2(0) = 0, p2(1) = 20, and p2(2) = 45 lead to states 2, 1, and 0 at stage 3, whose nodes carry the values f3*(2) = 70, f3*(1) = 50, and f3*(0) = 0.]

This diagram corresponds to Fig. 11.5 except that all three possible states at stage 3 are shown. Thus, if x2 = 0, the resulting state at stage 3 will be s2 − x2 = 2 − 0 = 2, whereas x2 = 1 leads to state 1 and x2 = 2 leads to state 0. The corresponding values of p2(x2) from the country 2 column of Table 11.1 are shown along the links, and the values of f3*(s2 − x2) from the n = 3 table are given next to the stage 3 nodes. The required calculations for this case of s2 = 2 are summarized below:

Formula: f2(2, x2) = p2(x2) + f3*(2 − x2),
where p2(x2) is given in the country 2 column of Table 11.1 and f3*(2 − x2) is given in the n = 3 table above.

x2 = 0:    f2(2, 0) = p2(0) + f3*(2) = 0 + 70 = 70.
x2 = 1:    f2(2, 1) = p2(1) + f3*(1) = 20 + 50 = 70.
x2 = 2:    f2(2, 2) = p2(2) + f3*(0) = 45 + 0 = 45.

Because the objective is maximization, x2* = 0 or 1 with f2*(2) = 70.

■ FIGURE 11.5 The basic structure for the World Health Council problem.
[Diagram for Fig. 11.5: at stage n, state sn with value fn(sn, xn) = pn(xn) + fn+1*(sn − xn); decision xn contributes pn(xn) and leads to state sn − xn at stage n + 1 with value fn+1*(sn − xn).]

Proceeding in a similar way with the other possible values of s2 (try it) yields the following table:

n = 2:
              f2(s2, x2) = p2(x2) + f3*(s2 − x2)
                           x2
s2      0      1      2      3      4      5    f2*(s2)     x2*
0       0                                          0         0
1      50     20                                  50         0
2      70     70     45                           70      0 or 1
3      80     90     95     75                    95         2
4     100    100    115    125    110            125         3
5     130    120    125    145    160    150     160         4

We now are ready to move backward to solve the original problem where we are starting from stage 1 (n = 1). In this case, the only state to be considered is the starting state of s1 = 5, as depicted below:

[Diagram: from state 5 at stage 1, links labeled p1(0) = 0, p1(1) = 45, . . . , p1(5) = 120 lead to states 5, 4, . . . , 0 at stage 2, whose nodes carry the values f2*(5) = 160, f2*(4) = 125, . . . , f2*(0) = 0.]

Since allocating x1 medical teams to country 1 leads to a state of 5 − x1 at stage 2, a choice of x1 = 0 leads to the bottom node on the right, x1 = 1 leads to the next node up, and so forth up to the top node with x1 = 5. The corresponding p1(x1) values from Table 11.1 are shown next to the links. The numbers next to the nodes are obtained from the f2*(s2) column of the n = 2 table. As with n = 2, the calculation needed for each alternative value of the decision variable involves adding the corresponding link value and node value, as summarized below:

Formula: f1(5, x1) = p1(x1) + f2*(5 − x1),
where p1(x1) is given in the country 1 column of Table 11.1 and f2*(5 − x1) is given in the n = 2 table.

x1 = 0:    f1(5, 0) = p1(0) + f2*(5) = 0 + 160 = 160.
x1 = 1:    f1(5, 1) = p1(1) + f2*(4) = 45 + 125 = 170.
  ⋮
x1 = 5:    f1(5, 5) = p1(5) + f2*(0) = 120 + 0 = 120.
The similar calculations for x1 = 2, 3, 4 (try it) verify that x1* = 1 with f1*(5) = 170, as shown in the following table:

n = 1:
              f1(s1, x1) = p1(x1) + f2*(s1 − x1)
                           x1
s1      0      1      2      3      4      5    f1*(s1)    x1*
5      160    170    165    160    155    120     170       1

Thus, the optimal solution has x1* = 1, which makes s2 = 5 − 1 = 4, so x2* = 3, which makes s3 = 4 − 3 = 1, so x3* = 1. Since f1*(5) = 170, this (1, 3, 1) allocation of medical teams to the three countries will yield an estimated total of 170,000 additional person-years of life, which is at least 5,000 more than for any other allocation. These results of the dynamic programming analysis also are summarized in Fig. 11.6.

■ FIGURE 11.6 Graphical display of the dynamic programming solution of the World Health Council problem. An arrow from state sn to state sn+1 indicates that an optimal policy decision from state sn is to allocate (sn − sn+1) medical teams to country n. Allocating the medical teams in this way when following the boldfaced arrows from the initial state to the final state gives the optimal solution.

A Prevalent Problem Type—The Distribution of Effort Problem

The preceding example illustrates a particularly common type of dynamic programming problem called the distribution of effort problem. For this type of problem, there is just one kind of resource that is to be allocated to a number of activities. The objective is to determine how to distribute the effort (the resource) among the activities most effectively. For the World Health Council example, the resource involved is the medical teams, and the three activities are the health care work in the three countries.

Assumptions.
This interpretation of allocating resources to activities should ring a bell for you, because it is the typical interpretation for linear programming problems given at the beginning of Chap. 3. However, there also are some key differences between the distribution of effort problem and linear programming that help illuminate the general distinctions between dynamic programming and other areas of mathematical programming.

One key difference is that the distribution of effort problem involves only one resource (one functional constraint), whereas linear programming can deal with thousands of resources. (In principle, dynamic programming can handle slightly more than one resource, but it quickly becomes very inefficient when the number of resources is increased because a separate state variable is required for each of the resources. This is referred to as the "curse of dimensionality.")

On the other hand, the distribution of effort problem is far more general than linear programming in other ways. Consider the four assumptions of linear programming presented in Sec. 3.3: proportionality, additivity, divisibility, and certainty. Proportionality is routinely violated by nearly all dynamic programming problems, including distribution of effort problems (e.g., Table 11.1 violates proportionality). Divisibility also is often violated, as in Example 2, where the decision variables must be integers. In fact, dynamic programming calculations become more complex when divisibility does hold (as in Example 4). Although we shall consider the distribution of effort problem only under the assumption of certainty, this is not necessary, and many other dynamic programming problems violate this assumption as well (as described in Sec. 11.4).
Of the four assumptions of linear programming, the only one needed by the distribution of effort problem (or other dynamic programming problems) is additivity (or its analog for functions involving a product of terms). This assumption is needed to satisfy the principle of optimality for dynamic programming (characteristic 5 in Sec. 11.2).

Formulation. Because they always involve allocating one kind of resource to a number of activities, distribution of effort problems always have the following dynamic programming formulation (where the ordering of the activities is arbitrary):

Stage n = activity n (n = 1, 2, . . . , N).
xn = amount of resource allocated to activity n.
State sn = amount of resource still available for allocation to remaining activities (n, . . . , N).

The reason for defining state sn in this way is that the amount of the resource still available for allocation is precisely the information about the current state of affairs (entering stage n) that is needed for making the allocation decisions for the remaining activities. When the system starts at stage n in state sn, the choice of xn results in the next state at stage n + 1 being sn+1 = sn − xn, as depicted below:5

[Diagram: state sn at stage n; decision xn leads to state sn − xn at stage n + 1.]

Note how the structure of this diagram corresponds to the one shown in Fig. 11.5 for the World Health Council example of a distribution of effort problem. What will differ from one such example to the next is the rest of what is shown in Fig. 11.5, namely, the relationship between fn(sn, xn) and fn+1*(sn − xn), and then the resulting recursive relationship between the fn* and fn+1* functions. These relationships depend on the particular objective function for the overall problem.

5 This statement assumes that xn and sn are expressed in the same units. If it is more convenient to define xn as some other quantity such that the amount of the resource allocated to activity n is an xn, then sn+1 = sn − an xn.
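With this formulation in hand, the backward recursion for a distribution of effort problem is easy to sketch in code. The following Python sketch is an illustrative implementation (not part of the book); as a check, it applies the recursion fn*(sn) = max_{xn} {pn(xn) + fn+1*(sn − xn)} to the World Health Council data of Table 11.1 and reproduces f1*(5) = 170 with the (1, 3, 1) allocation.

```python
# Distribution of effort: f_n*(s_n) = max over x_n of
# { p_n(x_n) + f*_{n+1}(s_n - x_n) }, with f*_{N+1} defined to be zero.
# Data: Table 11.1 of the World Health Council problem (in thousands).

p = {1: [0, 45, 70, 90, 105, 120],   # country 1: p1(x) for x = 0..5 teams
     2: [0, 20, 45, 75, 110, 150],   # country 2
     3: [0, 50, 70, 80, 100, 130]}   # country 3

N, S = 3, 5                          # 3 activities (countries), 5 teams

f = [[None] * (S + 1) for _ in range(N + 2)]       # f[n][s] = f_n*(s)
x_star = [[None] * (S + 1) for _ in range(N + 1)]  # optimal policy tables
f[N + 1] = [0] * (S + 1)             # f_{N+1}* defined to be zero

for n in range(N, 0, -1):            # solve backward: n = 3, 2, 1
    for s in range(S + 1):
        best = max(range(s + 1), key=lambda x: p[n][x] + f[n + 1][s - x])
        x_star[n][s] = best
        f[n][s] = p[n][best] + f[n + 1][s - best]

# Recover one optimal allocation by reading the policy tables forward
s, alloc = S, []
for n in range(1, N + 1):
    alloc.append(x_star[n][s])
    s -= x_star[n][s]

print(f[1][S], alloc)   # prints: 170 [1, 3, 1]
```

Note that where the text reports ties (e.g., x2* = 0 or 1 when s2 = 2), this sketch simply keeps the first maximizer it finds, so it recovers one of the optimal routes.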
The structure of the next example is similar to the one for the World Health Council because it, too, is a distribution of effort problem. However, its recursive relationship differs in that its objective is to minimize a product of terms for the respective stages.

At first glance, this example may appear not to be a deterministic dynamic programming problem because probabilities are involved. However, it does indeed fit our definition because the state at the next stage is completely determined by the state and policy decision at the current stage.

EXAMPLE 3 Distributing Scientists to Research Teams

A government space project is conducting research on a certain engineering problem that must be solved before people can fly safely to Mars. Three research teams are currently trying three different approaches for solving this problem. The estimate has been made that, under present circumstances, the probability that the respective teams—call them 1, 2, and 3—will not succeed is 0.40, 0.60, and 0.80, respectively. Thus, the current probability that all three teams will fail is (0.40)(0.60)(0.80) = 0.192. Because the objective is to minimize the probability of failure, two more top scientists have been assigned to the project.

Table 11.2 gives the estimated probability that the respective teams will fail when 0, 1, or 2 additional scientists are added to that team. Only integer numbers of scientists are considered because each new scientist will need to devote full attention to one team. The problem is to determine how to allocate the two additional scientists to minimize the probability that all three teams will fail.

Formulation. Because both Examples 2 and 3 are distribution of effort problems, their underlying structure is actually very similar. In this case, scientists replace medical teams as the kind of resource involved, and research teams replace countries as the activities.
Therefore, instead of medical teams being allocated to countries, scientists are being allocated to research teams. The only basic difference between the two problems is in their objective functions.

With so few scientists and teams involved, this problem could be solved very easily by a process of exhaustive enumeration. However, the dynamic programming solution is presented for illustrative purposes.

In this case, stage n (n = 1, 2, 3) corresponds to research team n, and the state sn is the number of new scientists still available for allocation to the remaining teams. The decision variables xn (n = 1, 2, 3) are the number of additional scientists allocated to team n. Let pi(xi) denote the probability of failure for team i if it is assigned xi additional scientists, as given by Table 11.2. If we let Π denote multiplication, the government's objective is to choose x1, x2, x3 so as to

Minimize Π_{i=1}^{3} pi(xi) = p1(x1)p2(x2)p3(x3),

subject to

Σ_{i=1}^{3} xi = 2

and the xi are nonnegative integers.

■ TABLE 11.2 Data for the Government Space Project problem

                    Probability of Failure
                              Team
New Scientists         1        2        3
      0              0.40     0.60     0.80
      1              0.20     0.40     0.50
      2              0.15     0.20     0.30

Consequently, fn(sn, xn) for this problem is

fn(sn, xn) = pn(xn) · min Π_{i=n+1}^{3} pi(xi),

where the minimum is taken over xn+1, . . . , x3 such that

Σ_{i=n}^{3} xi = sn

and the xi are nonnegative integers, for n = 1, 2, 3. Thus,

fn*(sn) = min_{xn = 0, 1, . . . , sn} fn(sn, xn),

where

fn(sn, xn) = pn(xn) · fn+1*(sn − xn)

(with f4* defined to be 1). Figure 11.7 summarizes these basic relationships.

Thus, the recursive relationship relating the f1*, f2*, and f3* functions in this case is

fn*(sn) = min_{xn = 0, 1, . . . , sn} {pn(xn) · fn+1*(sn − xn)},    for n = 1, 2,

and, when n = 3,

f3*(s3) = min_{x3 = 0, 1, . . . , s3} p3(x3).

Solution Procedure. The resulting dynamic programming calculations are as follows:

■ FIGURE 11.7 The basic structure for the government space project problem.

n = 3:
s3    f3*(s3)    x3*
0      0.80       0
1      0.50       1
2      0.30       2

[Diagram for Fig. 11.7: at stage n, state sn with value fn(sn, xn) = pn(xn) · fn+1*(sn − xn); decision xn contributes the factor pn(xn) and leads to state sn − xn at stage n + 1 with value fn+1*(sn − xn).]

n = 2:
              f2(s2, x2) = p2(x2) · f3*(s2 − x2)
                     x2
s2       0        1        2      f2*(s2)    x2*
0      0.48                        0.48       0
1      0.30     0.32               0.30       0
2      0.18     0.20     0.16      0.16       2

n = 1:
              f1(s1, x1) = p1(x1) · f2*(s1 − x1)
                     x1
s1       0        1        2      f1*(s1)    x1*
2      0.064    0.060    0.072     0.060      1

Therefore, the optimal solution must have x1* = 1, which makes s2 = 2 − 1 = 1, so that x2* = 0, which makes s3 = 1 − 0 = 1, so that x3* = 1. Thus, teams 1 and 3 should each receive one additional scientist. The new probability that all three teams will fail would then be 0.060.

All the examples thus far have had a discrete state variable sn at each stage. Furthermore, they all have been reversible in the sense that the solution procedure actually could have moved either backward or forward stage by stage. (The latter alternative amounts to renumbering the stages in reverse order and then applying the procedure in the standard way.) This reversibility is a general characteristic of distribution of effort problems such as Examples 2 and 3, since the activities (stages) can be ordered in any desired manner.

The next example is different in both respects. Rather than being restricted to integer values, its state variable sn at stage n is a continuous variable that can take on any value over certain intervals. Since sn now has an infinite number of values, it is no longer possible to consider each of its feasible values individually. Rather, the solution for fn*(sn) and xn* must be expressed as functions of sn. Furthermore, this example is not reversible because its stages correspond to time periods, so the solution procedure must proceed backward.
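The product-form recursion of Example 3 can be sketched the same way as the sum-form one. The Python below is an illustrative check (not part of the book's procedure) that reproduces f1*(2) = 0.060 with the (1, 0, 1) allocation of the two scientists.

```python
# Product objective: f_n*(s_n) = min over x_n of
# { p_n(x_n) * f*_{n+1}(s_n - x_n) }, with f*_4 defined to be 1.
# Data: failure probabilities of Table 11.2.

p = {1: [0.40, 0.20, 0.15],   # team 1: p1(x) for x = 0, 1, 2 added scientists
     2: [0.60, 0.40, 0.20],   # team 2
     3: [0.80, 0.50, 0.30]}   # team 3

N, S = 3, 2                   # 3 research teams, 2 additional scientists
f = {s: 1.0 for s in range(S + 1)}   # f_4* = 1 for every state
x_star = {}
for n in range(N, 0, -1):     # solve backward: n = 3, 2, 1
    f_new = {}
    for s in range(S + 1):
        best = min(range(s + 1), key=lambda x: p[n][x] * f[s - x])
        x_star[n, s] = best
        f_new[s] = p[n][best] * f[s - best]
    f = f_new                 # f now holds the f_n* values

# Read the policy tables forward to recover the optimal allocation
s, alloc = S, []
for n in range(1, N + 1):
    alloc.append(x_star[n, s])
    s -= x_star[n, s]

print(round(f[S], 3), alloc)   # prints: 0.06 [1, 0, 1]
```

The only change from the Example 2 sketch is replacing + with * in the recursion (and starting from f* = 1 rather than 0), which mirrors how the additivity assumption carries over to "its analog for functions involving a product of terms."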
Before proceeding directly to the rather involved example presented next, you might find it helpful at this point to look at the two additional examples of deterministic dynamic programming presented in the Solved Examples section of the book's website. The first one involves production and inventory planning over a number of time periods. Like the examples thus far, both the state variable and the decision variable at each stage are discrete. However, this example is not reversible since the stages correspond to time periods. It also is not a distribution of effort problem. The second example is a nonlinear programming problem with two variables and a single constraint. Therefore, even though it is reversible, its state and decision variables are continuous. However, in contrast to the following example (which has four continuous variables and thus four stages), it has only two stages, so it can be solved relatively quickly with dynamic programming and a bit of calculus.

EXAMPLE 4 Scheduling Employment Levels

The workload for the LOCAL JOB SHOP is subject to considerable seasonal fluctuation. However, machine operators are difficult to hire and costly to train, so the manager is reluctant to lay off workers during the slack seasons. He is likewise reluctant to maintain his peak season payroll when it is not required. Furthermore, he is definitely opposed to overtime work on a regular basis. Since all work is done to custom orders, it is not possible to build up inventories during slack seasons. Therefore, the manager is in a dilemma as to what his policy should be regarding employment levels.

The following estimates are given for the minimum employment requirements during the four seasons of the year for the foreseeable future:

Season          Spring    Summer    Autumn    Winter    Spring
Requirements     255       220       240       200       255

Employment will not be permitted to fall below these levels.
Any employment above these levels is wasted at an approximate cost of $2,000 per person per season. It is estimated that the hiring and firing costs are such that the total cost of changing the level of employment from one season to the next is $200 times the square of the difference in employment levels. Fractional levels of employment are possible because of a few part-time employees, and the cost data also apply on a fractional basis.

Formulation. On the basis of the data available, it is not worthwhile to have the employment level go above the peak season requirements of 255. Therefore, spring employment should be at 255, and the problem is reduced to finding the employment level for the other three seasons.

For a dynamic programming formulation, the seasons should be the stages. There are actually an indefinite number of stages because the problem extends into the indefinite future. However, each year begins an identical cycle, and because spring employment is known, it is possible to consider only one cycle of four seasons ending with the spring season, as summarized below:

Stage 1 = summer,    Stage 2 = autumn,    Stage 3 = winter,    Stage 4 = spring.
xn = employment level for stage n (n = 1, 2, 3, 4).
(x4 = 255.)

It is necessary that the spring season be the last stage because the optimal value of the decision variable for each state at the last stage must be either known or obtainable without considering other stages. For every other season, the solution for the optimal employment level must consider the effect on costs in the following season.

Let

rn = minimum employment requirement for stage n,

where these requirements were given earlier as r1 = 220, r2 = 240, r3 = 200, and r4 = 255. Thus, the only feasible values for xn are

rn ≤ xn ≤ 255.

Referring to the cost data given in the problem statement, we have

Cost for stage n = 200(xn − xn−1)² + 2,000(xn − rn).
Note that the cost at the current stage depends upon only the current decision xn and the employment in the preceding season xn−1. Thus, the preceding employment level is all the information about the current state of affairs that we need to determine the optimal policy henceforth. Therefore, the state sn for stage n is

State sn = xn−1.

When n = 1, s1 = x0 = x4 = 255.

For your ease of reference while working through the problem, a summary of the data is given in Table 11.3 for each of the four stages.

The objective for the problem is to choose x1, x2, x3 (with x0 = x4 = 255) so as to

Minimize  Σ (i = 1 to 4) [200(xi − xi−1)² + 2,000(xi − ri)],

subject to

ri ≤ xi ≤ 255,  for i = 1, 2, 3, 4.

Thus, for stage n onward (n = 1, 2, 3, 4), since sn = xn−1,

fn(sn, xn) = 200(xn − sn)² + 2,000(xn − rn)
             + min (ri ≤ xi ≤ 255, i = n + 1, . . . , 4)  Σ (i = n + 1 to 4) [200(xi − xi−1)² + 2,000(xi − ri)],

where this summation equals zero when n = 4 (because it has no terms). Also,

fn*(sn) = min (rn ≤ xn ≤ 255) fn(sn, xn).

Hence,

fn(sn, xn) = 200(xn − sn)² + 2,000(xn − rn) + fn+1*(xn)

(with f5* defined to be zero because costs after stage 4 are irrelevant to the analysis). A summary of these basic relationships is given in Fig. 11.8.

Consequently, the recursive relationship relating the fn* functions is

fn*(sn) = min (rn ≤ xn ≤ 255) {200(xn − sn)² + 2,000(xn − rn) + fn+1*(xn)}.

The dynamic programming approach uses this relationship to identify successively these functions—f4*(s4), f3*(s3), f2*(s2), f1*(255)—and the corresponding minimizing xn.

■ TABLE 11.3  Data for the Local Job Shop problem

n    rn     Feasible xn         Possible sn = xn−1     Cost
1    220    220 ≤ x1 ≤ 255      s1 = 255               200(x1 − 255)² + 2,000(x1 − 220)
2    240    240 ≤ x2 ≤ 255      220 ≤ s2 ≤ 255         200(x2 − x1)² + 2,000(x2 − 240)
3    200    200 ≤ x3 ≤ 255      240 ≤ s3 ≤ 255         200(x3 − x2)² + 2,000(x3 − 200)
4    255    x4 = 255            200 ≤ s4 ≤ 255         200(255 − x3)²

■ FIGURE 11.8  The basic structure for the Local Job Shop problem. [From state sn at stage n, the decision xn incurs cost 200(xn − sn)² + 2,000(xn − rn) and leads to state xn at stage n + 1, where the remaining value is fn+1*(xn); fn(sn, xn) is the sum of these two terms.]

Solution Procedure.  Stage 4:  Beginning at the last stage (n = 4), we already know that x4* = 255, so the necessary results are

n = 4:    s4                f4*(s4)           x4*
          200 ≤ s4 ≤ 255    200(255 − s4)²    255

Stage 3:  For the problem consisting of just the last two stages (n = 3), the recursive relationship reduces to

f3*(s3) = min (200 ≤ x3 ≤ 255) {200(x3 − s3)² + 2,000(x3 − 200) + f4*(x3)}
        = min (200 ≤ x3 ≤ 255) {200(x3 − s3)² + 2,000(x3 − 200) + 200(255 − x3)²},

where the possible values of s3 are 240 ≤ s3 ≤ 255. One way to solve for the value of x3 that minimizes f3(s3, x3) for any particular value of s3 is the graphical approach illustrated in Fig. 11.9. However, a faster way is to use calculus. We want to solve for the minimizing x3 in terms of s3 by considering s3 to have some fixed (but unknown) value. Therefore, set the first (partial) derivative of f3(s3, x3) with respect to x3 equal to zero:

∂/∂x3 f3(s3, x3) = 400(x3 − s3) + 2,000 − 400(255 − x3)
                 = 400(2x3 − s3 − 250)
                 = 0,

which yields

x3* = (s3 + 250)/2.

Because the second derivative is positive, and because this solution lies in the feasible interval for x3 (200 ≤ x3 ≤ 255) for all possible s3 (240 ≤ s3 ≤ 255), it is indeed the desired minimum.

■ FIGURE 11.9  Graphical solution for f3*(s3) for the Local Job Shop problem. [The terms 200(x3 − s3)², 2,000(x3 − 200), and 200(255 − x3)² are plotted over 200 ≤ x3 ≤ 255; their sum f3(s3, x3) attains its minimum f3*(s3) at x3 = (s3 + 250)/2.]

Note a key difference between the nature of this solution and those obtained for the preceding examples, where there were only a few possible states to consider.
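The calculus result can be spot-checked numerically: for any fixed s3, minimizing f3(s3, x3) over a fine grid of the feasible interval should land on (s3 + 250)/2. A small sketch (the half-person grid step is an arbitrary choice):

```python
def f3(s3, x3):
    # f3(s3, x3) = 200(x3 - s3)^2 + 2000(x3 - 200) + 200(255 - x3)^2
    return 200 * (x3 - s3) ** 2 + 2000 * (x3 - 200) + 200 * (255 - x3) ** 2

grid = [200 + 0.5 * k for k in range(111)]  # 200, 200.5, ..., 255
for s3 in (240, 245, 250, 255):
    best = min(grid, key=lambda x3: f3(s3, x3))
    assert best == (s3 + 250) / 2  # grid minimizer matches the closed form
```

Because f3 is strictly convex in x3, the grid minimizer is unique, so the check is exact whenever (s3 + 250)/2 falls on the grid.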
We now have an infinite number of possible states (240 ≤ s3 ≤ 255), so it is no longer feasible to solve separately for x3* for each possible value of s3. Therefore, we instead have solved for x3* as a function of the unknown s3. Using

f3*(s3) = f3(s3, x3*)
        = 200((s3 + 250)/2 − s3)² + 2,000((s3 + 250)/2 − 200) + 200(255 − (s3 + 250)/2)²

and reducing this expression algebraically completes the required results for the third-stage problem, summarized as follows:

n = 3:    s3                f3*(s3)                                               x3*
          240 ≤ s3 ≤ 255    50(250 − s3)² + 50(260 − s3)² + 1,000(s3 − 150)       (s3 + 250)/2

Stage 2:  The second-stage (n = 2) and first-stage (n = 1) problems are solved in a similar fashion. Thus, for n = 2,

f2(s2, x2) = 200(x2 − s2)² + 2,000(x2 − r2) + f3*(x2)
           = 200(x2 − s2)² + 2,000(x2 − 240) + 50(250 − x2)² + 50(260 − x2)² + 1,000(x2 − 150).

The possible values of s2 are 220 ≤ s2 ≤ 255, and the feasible region for x2 is 240 ≤ x2 ≤ 255. The problem is to find the minimizing value of x2 in this region, so that

f2*(s2) = min (240 ≤ x2 ≤ 255) f2(s2, x2).

Setting to zero the partial derivative with respect to x2:

∂/∂x2 f2(s2, x2) = 400(x2 − s2) + 2,000 − 100(250 − x2) − 100(260 − x2) + 1,000
                 = 200(3x2 − 2s2 − 240)
                 = 0

yields

x2 = (2s2 + 240)/3.

Because

∂²/∂x2² f2(s2, x2) = 600 > 0,

this value of x2 is the desired minimizing value if it is feasible (240 ≤ x2 ≤ 255). Over the possible s2 values (220 ≤ s2 ≤ 255), this solution actually is feasible only if 240 ≤ s2 ≤ 255.

Therefore, we still need to solve for the feasible value of x2 that minimizes f2(s2, x2) when 220 ≤ s2 < 240. The key to analyzing the behavior of f2(s2, x2) over the feasible region for x2 again is the partial derivative of f2(s2, x2). When s2 < 240,

∂/∂x2 f2(s2, x2) > 0,  for 240 ≤ x2 ≤ 255,

so that x2 = 240 is the desired minimizing value. The next step is to plug these values of x2 into f2(s2, x2) to obtain f2*(s2) for s2 ≥ 240 and s2 < 240.
This yields

n = 2:    s2                f2*(s2)                                                                  x2*
          220 ≤ s2 < 240    200(240 − s2)² + 115,000                                                 240
          240 ≤ s2 ≤ 255    (200/9)[(240 − s2)² + (255 − s2)² + (270 − s2)²] + 2,000(s2 − 195)       (2s2 + 240)/3

Stage 1:  For the first-stage problem (n = 1),

f1(s1, x1) = 200(x1 − s1)² + 2,000(x1 − r1) + f2*(x1).

Because r1 = 220, the feasible region for x1 is 220 ≤ x1 ≤ 255. The expression for f2*(x1) will differ in the two portions 220 ≤ x1 ≤ 240 and 240 ≤ x1 ≤ 255 of this region. Therefore,

f1(s1, x1) = 200(x1 − s1)² + 2,000(x1 − 220) + 200(240 − x1)² + 115,000,
                 if 220 ≤ x1 ≤ 240;
f1(s1, x1) = 200(x1 − s1)² + 2,000(x1 − 220) + (200/9)[(240 − x1)² + (255 − x1)² + (270 − x1)²] + 2,000(x1 − 195),
                 if 240 ≤ x1 ≤ 255.

Considering first the case where 220 ≤ x1 ≤ 240, we have

∂/∂x1 f1(s1, x1) = 400(x1 − s1) + 2,000 − 400(240 − x1)
                 = 400(2x1 − s1 − 235).

It is known that s1 = 255 (spring employment), so that

∂/∂x1 f1(s1, x1) = 800(x1 − 245) < 0

for all x1 ≤ 240. Therefore, x1 = 240 is the minimizing value of f1(s1, x1) over the region 220 ≤ x1 ≤ 240.

When 240 ≤ x1 ≤ 255,

∂/∂x1 f1(s1, x1) = 400(x1 − s1) + 2,000 − (400/9)[(240 − x1) + (255 − x1) + (270 − x1)] + 2,000
                 = (400/3)(4x1 − 3s1 − 225).

Because

∂²/∂x1² f1(s1, x1) = 1,600/3 > 0

for all x1, set

∂/∂x1 f1(s1, x1) = 0,

which yields

x1 = (3s1 + 225)/4.

Because s1 = 255, it follows that x1 = 247.5 minimizes f1(s1, x1) over the region 240 ≤ x1 ≤ 255. Note that this region (240 ≤ x1 ≤ 255) includes x1 = 240, so that f1(s1, 240) ≥ f1(s1, 247.5). In the next-to-last paragraph, we found that x1 = 240 minimizes f1(s1, x1) over the region 220 ≤ x1 ≤ 240. Consequently, we now can conclude that x1 = 247.5 also minimizes f1(s1, x1) over the entire feasible region 220 ≤ x1 ≤ 255.

Our final calculation is to find f1*(s1) for s1 = 255 by plugging x1 = 247.5 into the expression for f1(255, x1) that holds for 240 ≤ x1 ≤ 255. Hence,

f1*(255) = 200(247.5 − 255)² + 2,000(247.5 − 220)
           + (200/9)[(240 − 247.5)² + (255 − 247.5)² + (270 − 247.5)²] + 2,000(247.5 − 195)
         = 185,000.
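Because the backward pass was carried out with calculus rather than tables, it is reassuring to confirm the answer by brute force. The following sketch (an exhaustive search, not part of the dynamic programming procedure) scans all half-person employment levels and recovers the same solution and the same $185,000 cost:

```python
from itertools import product

REQUIREMENTS = [220, 240, 200, 255]  # r_1, ..., r_4

def cycle_cost(x1, x2, x3):
    # One yearly cycle with x0 = x4 = 255.
    levels = [255, x1, x2, x3, 255]
    return sum(200 * (levels[i + 1] - levels[i]) ** 2
               + 2000 * (levels[i + 1] - REQUIREMENTS[i])
               for i in range(4))

def half_steps(lo, hi):
    # lo, lo + 0.5, ..., hi
    return [lo + 0.5 * k for k in range(2 * (hi - lo) + 1)]

best = min(product(half_steps(220, 255), half_steps(240, 255), half_steps(200, 255)),
           key=lambda x: cycle_cost(*x))
print(best, cycle_cost(*best))  # (247.5, 245.0, 247.5) 185000.0
```

The total cost is jointly convex in (x1, x2, x3), so the grid search has a unique minimizer, which agrees with the calculus solution.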
These results are summarized as follows:

n = 1:    s1     f1*(s1)    x1*
          255    185,000    247.5

Therefore, by tracing back through the tables for n = 2, n = 3, and n = 4, respectively, and setting sn = xn−1* each time, the resulting optimal solution is x1* = 247.5, x2* = 245, x3* = 247.5, x4* = 255, with a total estimated cost per cycle of $185,000.

You now have seen a variety of applications of dynamic programming, with more to come in the next section. However, these examples only scratch the surface. For example, Chapter 2 of Selected Reference 2 describes 47 types of problems to which dynamic programming can be applied. (This reference also presents a software tool that can be used to solve all these problem types.) The one common theme that runs through all these applications of dynamic programming is the need to make a series of interrelated decisions and the efficient way dynamic programming provides for finding an optimal combination of decisions.

■ 11.4  PROBABILISTIC DYNAMIC PROGRAMMING

Probabilistic dynamic programming differs from deterministic dynamic programming in that the state at the next stage is not completely determined by the state and policy decision at the current stage. Rather, there is a probability distribution for what the next state will be. However, this probability distribution still is completely determined by the state and policy decision at the current stage. The resulting basic structure for probabilistic dynamic programming is described diagrammatically in Fig. 11.10.

■ FIGURE 11.10  The basic structure for probabilistic dynamic programming. [From state sn at stage n, decision xn leads to state i at stage n + 1 with probability pi and contribution Ci from stage n, after which the remaining value is fn+1*(i), for i = 1, 2, . . . , S.]

For the purposes of this diagram, we let S denote the number of possible states at stage n + 1 and label these states on the right side as 1, 2, . . . , S. The system goes to state i with probability pi (i = 1, 2, . . .
, S) given state sn and decision xn at stage n. If the system goes to state i, Ci is the contribution of stage n to the objective function.

When Fig. 11.10 is expanded to include all the possible states and decisions at all the stages, it is sometimes referred to as a decision tree. If the decision tree is not too large, it provides a useful way of summarizing the various possibilities.

Because of the probabilistic structure, the relationship between fn(sn, xn) and the fn+1*(sn+1) necessarily is somewhat more complicated than that for deterministic dynamic programming. The precise form of this relationship will depend upon the form of the overall objective function.

To illustrate, suppose that the objective is to minimize the expected sum of the contributions from the individual stages. In this case, fn(sn, xn) represents the minimum expected sum from stage n onward, given that the state and policy decision at stage n are sn and xn, respectively. Consequently,

fn(sn, xn) = Σ (i = 1 to S) pi[Ci + fn+1*(i)],

with

fn+1*(i) = min (over xn+1) fn+1(i, xn+1),

where this minimization is taken over the feasible values of xn+1.

Example 5 has this same form. Example 6 will illustrate another form.

EXAMPLE 5  Determining Reject Allowances

The HIT-AND-MISS MANUFACTURING COMPANY has received an order to supply one item of a particular type. However, the customer has specified such stringent quality requirements that the manufacturer may have to produce more than one item to obtain an item that is acceptable. The number of extra items produced in a production run is called the reject allowance. Including a reject allowance is common practice when producing for a custom order, and it seems advisable in this case.

The manufacturer estimates that each item of this type that is produced will be acceptable with probability 1/2 and defective (without possibility for rework) with probability 1/2.
Thus, the number of acceptable items produced in a lot of size L will have a binomial distribution; i.e., the probability of producing no acceptable items in such a lot is (1/2)^L.

Marginal production costs for this product are estimated to be $100 per item (even if defective), and excess items are worthless. In addition, a setup cost of $300 must be incurred whenever the production process is set up for this product, and a completely new setup at this same cost is required for each subsequent production run if a lengthy inspection procedure reveals that a completed lot has not yielded an acceptable item. The manufacturer has time to make no more than three production runs. If an acceptable item has not been obtained by the end of the third production run, the cost to the manufacturer in lost sales income and penalty costs will be $1,600.

The objective is to determine the policy regarding the lot size (1 + reject allowance) for the required production run(s) that minimizes total expected cost for the manufacturer.

Formulation.  A dynamic programming formulation for this problem is

Stage n = production run n (n = 1, 2, 3),
xn = lot size for stage n,
State sn = number of acceptable items still needed (1 or 0) at beginning of stage n.

Thus, at stage 1, state s1 = 1. If at least one acceptable item is obtained subsequently, the state changes to sn = 0, after which no additional costs need to be incurred.

Because of the stated objective for the problem,

fn(sn, xn) = total expected cost for stages n, . . . , 3 if system starts in state sn at stage n, immediate decision is xn, and optimal decisions are made thereafter,

fn*(sn) = min (xn = 0, 1, . . .) fn(sn, xn),

where fn*(0) = 0. Using $100 as the unit of money, the contribution to cost from stage n is [K(xn) + xn] regardless of the next state, where K(xn) is a function of xn such that

K(xn) = 0  if xn = 0,
K(xn) = 3  if xn > 0.
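These definitions already determine the whole backward induction, so the calculations can be reproduced with a short script. A sketch (the cap of five on the lot size is an assumption for the search; larger lots only add cost here):

```python
def K(x):
    return 0 if x == 0 else 3  # setup cost, in units of $100

MAX_LOT = 5   # assumed search cap on the lot size
f_next = 16   # f4*(1): the $1,600 terminal penalty, in units of $100

policy = {}
for n in (3, 2, 1):
    # Expected cost of lot size x in state sn = 1: setup + production
    # + (prob. all x items defective) * (optimal cost from the next run on)
    cost = {x: K(x) + x + 0.5 ** x * f_next for x in range(MAX_LOT + 1)}
    f_next = min(cost.values())
    policy[n] = [x for x in cost if cost[x] == f_next]

print(f_next)   # 6.75, i.e., a total expected cost of $675
print(policy)   # {3: [3, 4], 2: [2, 3], 1: [2]}
```

The ties at stages 2 and 3 reproduce the "either two or three" and "either three or four" alternatives derived in the text.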
Therefore, for sn = 1,

fn(1, xn) = K(xn) + xn + (1/2)^xn fn+1*(1) + [1 − (1/2)^xn] fn+1*(0)
          = K(xn) + xn + (1/2)^xn fn+1*(1)

[where f4*(1) is defined to be 16, the terminal cost (in units of $100) if no acceptable items have been obtained]. A summary of these basic relationships is given in Fig. 11.11.

■ FIGURE 11.11  The basic structure for the Hit-and-Miss Manufacturing Co. problem. [From state sn = 1, decision xn incurs cost K(xn) + xn and leads to state 0 with probability 1 − (1/2)^xn, where fn+1*(0) = 0, or to state 1 with probability (1/2)^xn, where the remaining value is fn+1*(1).]

Consequently, the recursive relationship for the dynamic programming calculations is

fn*(1) = min (xn = 0, 1, . . .) {K(xn) + xn + (1/2)^xn fn+1*(1)},  for n = 1, 2, 3.

Solution Procedure.  The calculations using this recursive relationship are summarized as follows:

n = 3:  f3(1, x3) = K(x3) + x3 + 16(1/2)^x3

s3    x3 = 0    1     2    3    4    5       f3*(s3)    x3*
0     0         —     —    —    —    —       0          0
1     16        12    9    8    8    8 1/2   8          3 or 4

n = 2:  f2(1, x2) = K(x2) + x2 + (1/2)^x2 f3*(1)

s2    x2 = 0    1     2    3    4            f2*(s2)    x2*
0     0         —     —    —    —            0          0
1     8         8     7    7    7 1/2        7          2 or 3

n = 1:  f1(1, x1) = K(x1) + x1 + (1/2)^x1 f2*(1)

s1    x1 = 0    1       2       3       4         f1*(s1)    x1*
1     7         7 1/2   6 3/4   6 7/8   7 7/16    6 3/4      2

Thus, the optimal policy is to produce two items on the first production run; if none is acceptable, then produce either two or three items on the second production run; if none is acceptable, then produce either three or four items on the third production run. The total expected cost for this policy is $675.

EXAMPLE 6  Winning in Las Vegas

An enterprising young statistician believes that she has developed a system for winning a popular Las Vegas game. Her colleagues do not believe that her system works, so they have made a large bet with her that if she starts with three chips, she will not have at least five chips after three plays of the game.
Each play of the game involves betting any desired number of available chips and then either winning or losing this number of chips. The statistician believes that her system will give her a probability of 2/3 of winning a given play of the game.

Assuming the statistician is correct, we now use dynamic programming to determine her optimal policy regarding how many chips to bet (if any) at each of the three plays of the game. The decision at each play should take into account the results of earlier plays. The objective is to maximize the probability of winning her bet with her colleagues.

Formulation.  The dynamic programming formulation for this problem is

Stage n = nth play of game (n = 1, 2, 3),
xn = number of chips to bet at stage n,
State sn = number of chips in hand to begin stage n.

This definition of the state is chosen because it provides the needed information about the current situation for making an optimal decision on how many chips to bet next.

Because the objective is to maximize the probability that the statistician will win her bet, the objective function to be maximized at each stage must be the probability of finishing the three plays with at least five chips. (Note that the value of ending with more than five chips is just the same as ending with exactly five, since the bet is won either way.) Therefore,

fn(sn, xn) = probability of finishing three plays with at least five chips, given that the statistician starts stage n in state sn, makes immediate decision xn, and makes optimal decisions thereafter,

fn*(sn) = max (xn = 0, 1, . . . , sn) fn(sn, xn).

The expression for fn(sn, xn) must reflect the fact that it may still be possible to accumulate five chips eventually even if the statistician should lose the next play. If she loses, the state at the next stage will be sn − xn, and the probability of finishing with at least five chips will then be fn+1*(sn − xn).
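The transition structure just described is all the recursion needs: a bet of xn leads to sn − xn with probability 1/3 and to sn + xn with probability 2/3. A sketch of the backward induction using exact rational arithmetic (the cap on the chip count is an assumption; any state at or above five chips already wins the bet, so clamping there is harmless):

```python
from fractions import Fraction

WIN = Fraction(2, 3)   # assumed probability of winning one play
GOAL, PLAYS, CAP = 5, 3, 40  # CAP: generous bound on reachable chip counts

# f[s] = probability of ending with at least GOAL chips, holding s chips now;
# start from the terminal condition f4*(s) = 1 if s >= GOAL else 0.
f = [Fraction(int(s >= GOAL)) for s in range(CAP + 1)]

for n in range(PLAYS, 0, -1):
    # Bet x chips: lose (prob 1/3) -> s - x chips; win (prob 2/3) -> s + x chips.
    f = [max((1 - WIN) * f[s - x] + WIN * f[min(s + x, CAP)]
             for x in range(s + 1))
         for s in range(CAP + 1)]

print(f[3])  # 20/27
```

Starting from three chips, the computed probability of winning the bet is 20/27, matching the tables derived in the text.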
If she wins the next play instead, the state will become sn + xn, and the corresponding probability will be fn+1*(sn + xn). Because the assumed probability of winning a given play is 2/3, it now follows that

fn(sn, xn) = (1/3) fn+1*(sn − xn) + (2/3) fn+1*(sn + xn)

[where f4*(s4) is defined to be 0 for s4 < 5 and 1 for s4 ≥ 5]. Thus, there is no direct contribution to the objective function from stage n other than the effect of then being in the next state. These basic relationships are summarized in Fig. 11.12.

■ FIGURE 11.12  The basic structure for the Las Vegas problem. [From state sn at stage n, the bet xn leads to state sn − xn with probability 1/3 or to state sn + xn with probability 2/3, with no direct contribution from stage n, so fn(sn, xn) = (1/3) fn+1*(sn − xn) + (2/3) fn+1*(sn + xn).]

Therefore, the recursive relationship for this problem is

fn*(sn) = max (xn = 0, 1, . . . , sn) {(1/3) fn+1*(sn − xn) + (2/3) fn+1*(sn + xn)},

for n = 1, 2, 3, with f4*(s4) as just defined.

Solution Procedure.  This recursive relationship leads to the following computational results:

n = 3:
s3       f3*(s3)    x3*
0        0          —
1        0          —
2        0          —
3        2/3        2 (or more)
4        2/3        1 (or more)
≥5       1          0 (or ≤ s3 − 5)

n = 2:  f2(s2, x2) = (1/3) f3*(s2 − x2) + (2/3) f3*(s2 + x2)

s2      x2 = 0    1       2      3      4      f2*(s2)    x2*
0       0         —       —      —      —      0          —
1       0         0       —      —      —      0          —
2       0         4/9     4/9    —      —      4/9        1 or 2
3       2/3       4/9     2/3    2/3    —      2/3        0, 2, or 3
4       2/3       8/9     2/3    2/3    2/3    8/9        1
≥5      1         —       —      —      —      1          0 (or ≤ s2 − 5)

n = 1:  f1(s1, x1) = (1/3) f2*(s1 − x1) + (2/3) f2*(s1 + x1)

s1      x1 = 0    1        2      3      f1*(s1)    x1*
3       2/3       20/27    2/3    2/3    20/27      1

Therefore, the optimal policy is

x1* = 1;
  if win,  x2* = 1;
    if win,  x3* = 0  (the bet is won);
    if lose, x3* = 2 or 3;
  if lose,  x2* = 1 or 2;
    if win,  x3* = 2 or 3 (for x2* = 1) or x3* = 1, 2, 3, or 4 (for x2* = 2);
    if lose, the bet is lost.

This policy gives the statistician a probability of 20/27 of winning her bet with her colleagues.

■ 11.5  CONCLUSIONS

Dynamic programming is a very useful technique for making a sequence of interrelated decisions.
It requires formulating an appropriate recursive relationship for each individual problem. However, it provides a great computational savings over using exhaustive enumeration to find the best combination of decisions, especially for large problems. For example, if a problem has 10 stages with 10 states and 10 possible decisions at each stage, then exhaustive enumeration must consider up to 10 billion combinations, whereas dynamic programming need make no more than a thousand calculations (10 for each state at each stage).

This chapter has considered only dynamic programming with a finite number of stages. Chapter 19 is devoted to a general kind of model for probabilistic dynamic programming where the stages continue to recur indefinitely, namely, Markov decision processes.

■ SELECTED REFERENCES

1. Denardo, E. V.: Dynamic Programming: Models and Applications, Dover Publications, Mineola, NY, 2003.
2. Lew, A., and H. Mauch: Dynamic Programming: A Computational Tool, Springer, New York, 2007.
3. Sniedovich, M.: Dynamic Programming: Foundations and Principles, Taylor & Francis, New York, 2010.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples: Examples for Chapter 11
"Ch. 11—Dynamic Programming" LINGO File
Glossary for Chapter 11

See Appendix 1 for documentation of the software.

■ PROBLEMS

An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

11.2-1. Consider the following network, where each number along a link represents the actual distance between the pair of nodes connected by that link. The objective is to find the shortest path from the origin to the destination.

[Figure: a network with origin O; intermediate nodes A, B, C, D, and E; and destination T, with a distance marked along each link. The values f2*(A) = 11, f2*(C) = 13, f3*(D) = 6, and f3*(E) = 7 already are given next to the corresponding nodes.]

(a) What are the stages and states for the dynamic programming formulation of this problem?
(b) Use dynamic programming to solve this problem. However, instead of using the usual tables, show your work graphically (similar to Fig. 11.2). In particular, start with the given network, where the answers already are given for fn*(sn) for four of the nodes; then solve for and fill in f2*(B) and f1*(O). Draw an arrowhead that shows the optimal link to traverse out of each of the latter two nodes. Finally, identify the optimal path by following the arrows from node O onward to node T.
(c) Use dynamic programming to solve this problem by manually constructing the usual tables for n = 3, n = 2, and n = 1.
(d) Use the shortest-path algorithm presented in Sec. 9.3 to solve this problem. Compare and contrast this approach with the one in parts (b) and (c).

11.2-2. The sales manager for a publisher of college textbooks has six traveling salespeople to assign to three different regions of the country. She has decided that each region should be assigned at least one salesperson and that each individual salesperson should be restricted to one of the regions, but now she wants to determine how many salespeople should be assigned to the respective regions in order to maximize sales. The next table gives the estimated increase in sales (in appropriate units) in each region if it were allocated various numbers of salespeople:

                     Region
Salespersons    1     2     3
1              35    21    28
2              48    42    41
3              70    56    63
4              89    70    75

(a) Use dynamic programming to solve this problem. Instead of using the usual tables, show your work graphically by constructing and filling in a network such as the one shown for Prob. 11.2-1. Proceed as in Prob. 11.2-1b by solving for fn*(sn) for each node (except the terminal node) and writing its value by the node. Draw an arrowhead to show the optimal link (or links in case of a tie) to take out of each node. Finally, identify the resulting optimal path (or paths) through the network and the corresponding optimal solution (or solutions).
(b) Use dynamic programming to solve this problem by constructing the usual tables for n = 3, n = 2, and n = 1.

11.2-3. Consider the following project network (as described in Sec. 10.8), where the number over each node is the time required for the corresponding activity. Consider the problem of finding the longest path (the largest total time) through this network from start to finish, since the longest path is the critical path.

[Figure: a project network from START (time 0) to FINISH (time 0) through activity nodes A (5), B (3), C (4), D (2), E (3), F (1), G (4), H (6), I (2), J (5), K (4), and L (7).]

(a) What are the stages and states for the dynamic programming formulation of this problem?
(b) Use dynamic programming to solve this problem. However, instead of using the usual tables, show your work graphically. In particular, fill in the values of the various fn*(sn) under the corresponding nodes, and show the resulting optimal arc to traverse out of each node by drawing an arrowhead near the beginning of the arc. Then identify the optimal path (the longest path) by following these arrowheads from the Start node to the Finish node. If there is more than one optimal path, identify them all.
(c) Use dynamic programming to solve this problem by constructing the usual tables for n = 4, n = 3, n = 2, and n = 1.

11.2-4. Consider the following statements about solving dynamic programming problems. Label each statement as true or false, and then justify your answer by referring to specific statements in the chapter.
(a) The solution procedure uses a recursive relationship that enables solving for the optimal policy for stage (n + 1) given the optimal policy for stage n.
(b) After completing the solution procedure, if a nonoptimal decision is made by mistake at some stage, the solution procedure will need to be reapplied to determine the new optimal decisions (given this nonoptimal decision) at the subsequent stages.
(c) Once an optimal policy has been found for the overall problem, the information needed to specify the optimal decision at a particular stage is the state at that stage and the decisions made at preceding stages. 11.3-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 11.3. Briefly describe how dynamic programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study. 11.3-2.* The owner of a chain of three grocery stores has purchased five crates of fresh strawberries. The estimated probability distribution of potential sales of the strawberries before spoilage differs among the three stores. Therefore, the owner wants to know how to allocate five crates to the three stores to maximize expected profit. For administrative reasons, the owner does not wish to split crates between stores. However, he is willing to distribute no crates to any of his stores. The following table gives the estimated expected profit at each store when it is allocated various numbers of crates: that the alternative allocations for each course would yield the number of grade points shown in the following table: Estimated Grade Points Course Study Days 1 2 3 4 1 2 3 4 3 5 6 7 5 5 6 9 2 4 7 8 6 7 9 9 Solve this problem by dynamic programming. 11.3-4. A political campaign is entering its final stage, and polls indicate a very close election. One of the candidates has enough funds left to purchase TV time for a total of five prime-time commercials on TV stations located in four different areas. Based on polling information, an estimate has been made of the number of additional votes that can be won in the different broadcasting areas depending upon the number of commercials run. 
These estimates are given in the following table in thousands of votes: Area Store Crates 1 2 3 0 1 2 3 4 5 0 5 9 14 17 21 0 6 11 15 19 22 0 4 9 13 18 20 Commercials 1 2 3 4 0 1 2 3 4 5 0 4 7 9 12 15 0 6 8 10 11 12 0 5 9 11 10 9 0 3 7 12 14 16 Use dynamic programming to determine how many of the five crates should be assigned to each of the three stores to maximize the total expected profit. Use dynamic programming to determine how the five commercials should be distributed among the four areas in order to maximize the estimated number of votes won. 11.3-3. A college student has 7 days remaining before final examinations begin in her four courses, and she wants to allocate this study time as effectively as possible. She needs at least 1 day on each course, and she likes to concentrate on just one course each day, so she wants to allocate 1, 2, 3, or 4 days to each course. Having recently taken an OR course, she decides to use dynamic programming to make these allocations to maximize the total grade points to be obtained from the four courses. She estimates 11.3-5. A county chairwoman of a certain political party is making plans for an upcoming presidential election. She has received the services of six volunteer workers for precinct work, and she wants to assign them to four precincts in such a way as to maximize their effectiveness. She feels that it would be inefficient to assign a worker to more than one precinct, but she is willing to assign no workers to any one of the precincts if they can accomplish more in other precincts. 
471 PROBLEMS The following table gives the estimated increase in the number of votes for the party’s candidate in each precinct if it were allocated various numbers of workers: Effect on Market Share Millions of Dollars Expended m f2 f3 0 1 2 3 4 — 20 30 40 50 0.2 0.4 0.5 0.6 — 0.3 0.5 0.6 0.7 — Precinct Workers 1 2 3 4 0 1 2 3 4 5 6 0 4 9 15 18 22 24 0 7 11 16 18 20 21 0 5 10 15 18 21 22 0 6 11 14 16 17 18 This problem has several optimal solutions for how many of the six workers should be assigned to each of the four precincts to maximize the total estimated increase in the plurality of the party’s candidate. Use dynamic programming to find all of them so the chairwoman can make the final selection based on other factors. 11.3-6. Use dynamic programming to solve the Northern Airplane Co. production scheduling problem presented in Sec. 9.1 (see Table 9.7). Assume that production quantities must be integer multiples of 5. 11.3-7.* A company will soon be introducing a new product into a very competitive market and is currently planning its marketing strategy. The decision has been made to introduce the product in three phases. Phase 1 will feature making a special introductory offer of the product to the public at a greatly reduced price to attract first-time buyers. Phase 2 will involve an intensive advertising campaign to persuade these first-time buyers to continue purchasing the product at a regular price. It is known that another company will be introducing a new competitive product at about the time that phase 2 will end. Therefore, phase 3 will involve a follow-up advertising and promotion campaign to try to keep the regular purchasers from switching to the competitive product. A total of $4 million has been budgeted for this marketing campaign. The problem now is to determine how to allocate this money most effectively to the three phases. 
Let m denote the initial share of the market (expressed as a percentage) attained in phase 1, f2 the fraction of this market share that is retained in phase 2, and f3 the fraction of the remaining market share that is retained in phase 3. Use dynamic programming to determine how to allocate the $4 million to maximize the final share of the market for the new product, i.e., to maximize mf2 f3. (a) Assume that the money must be spent in integer multiples of $1 million in each phase, where the minimum permissible multiple is 1 for phase 1 and 0 for phases 2 and 3. The following table gives the estimated effect of expenditures in each phase: (b) Now assume that any amount within the total budget can be spent in each phase, where the estimated effect of spending an amount xi (in units of millions of dollars) in phase i (i  1, 2, 3) is m  10x1 ⫺ x12 f2  0.40  0.10x2 f3  0.60  0.07x3. [Hint: After solving for the f 2*(s) and f 3*(s) functions analytically, solve for x1* graphically.] 11.3-8. Consider an electronic system consisting of four components, each of which must work for the system to function. The reliability of the system can be improved by installing several parallel units in one or more of the components. The following table gives the probability that the respective components (labeled as Comp. 1, 2, 3, and 4) will function if they consist of one, two, or three parallel units: Probability of Functioning Parallel Units 1 2 3 Comp. 1 0.5 0.6 0.8 Comp. 2 Comp. 3 Comp. 4 0.6 0.7 0.8 0.7 0.8 0.9 0.5 0.7 0.9 The probability that the system will function is the product of the probabilities that the respective components will function. The cost (in hundreds of dollars) of installing one, two, or three parallel units in the respective components (labeled as Comp. 1, 2, 3, and 4) is given by the following table: Cost Parallel Units 1 2 3 Comp. 1 1 2 3 Comp. 2 Comp. 3 Comp. 
4 2 4 5 1 3 4 2 3 4 472 CHAPTER 11 DYNAMIC PROGRAMMING Because of budget limitations, a maximum of $1,000 can be expended. Use dynamic programming to determine how many parallel units should be installed in each of the four components to maximize the probability that the system will function. 11.3-9. Consider the following integer nonlinear programming problem. Z  3x21 ⫺ x31  5x22  x32, Maximize x1 0, x2 0, x3 0. Use dynamic programming to solve this problem. 11.3-14. Consider the following nonlinear programming problem. Z  x 41  2x 22 Minimize subject to x 21  x 22 subject to 2. (There are no nonnegativity constraints.) Use dynamic programming to solve this problem. x1  2x2  4 and 11.3-15. Consider the following nonlinear programming problem. x1 0, x2 0 x1, x2 are integers. Use dynamic programming to solve this problem. 11.3-10. Consider the following integer nonlinear programming problem. Z  18x1  x21  20x2  10x3, Maximize subject to 2x1  4x2  3x3  11 and x1, x2, x3 are nonnegative integers. Use dynamic programming to solve this problem. 11.3-11.* Consider the following nonlinear programming problem. Z  36x1  9x21  6x31  36x2  3x32, Maximize subject to Z  x 31  4x 22  16x3, Maximize subject to x1x2x3  4 and x1 1, x2 1, x3 1. (a) Solve by dynamic programming when, in addition to the given constraints, all three variables also are required to be integer. (b) Use dynamic programming to solve the problem as given (continuous variables). 11.3-16. Consider the following nonlinear programming problem. Maximize Z  x1(1  x2)x3, subject to x1  x2  x3  1 and x1 x1  x2  3 0, x2 0, x3 0. Use dynamic programming to solve this problem. and x1 and 0, x2 0. Use dynamic programming to solve this problem. 11.3-12. Re-solve the Local Job Shop employment scheduling problem (Example 4) when the total cost of changing the level of employment from one season to the next is changed to $100 times the square of the difference in employment levels. 11.3-13. 
Consider the following nonlinear programming problem.

Maximize Z = 2x1² + 2x2 + 4x3 − x3²,

subject to

2x1 + x2 + x3 ≤ 4

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

Use dynamic programming to solve this problem.

11.4-1. A backgammon player will be playing three consecutive matches with friends tonight. For each match, he will have the opportunity to place an even bet that he will win; the amount bet can be any quantity of his choice between zero and the amount of money he still has left after the bets on the preceding matches. For each match, the probability is 1/2 that he will win the match and thus win the amount bet, whereas the probability is 1/2 that he will lose the match and thus lose the amount bet. He will begin with $75, and his goal is to have $100 at the end. (Because these are friendly matches, he does not want to end up with more than $100.) Therefore, he wants to find the optimal betting policy (including all ties) that maximizes the probability that he will have exactly $100 after the three matches. Use dynamic programming to solve this problem.

11.4-2. Imagine that you have $5,000 to invest and that you will have an opportunity to invest that amount in either of two investments (A or B) at the beginning of each of the next 3 years. Both investments have uncertain returns. For investment A you will either lose your money entirely or (with higher probability) get back $10,000 (a profit of $5,000) at the end of the year. For investment B you will get back either just your $5,000 or (with low probability) $10,000 at the end of the year. The probabilities for these events are as follows:

Investment | Amount Returned ($) | Probability
    A      |          0          |    0.3
    A      |       10,000        |    0.7
    B      |        5,000        |    0.9
    B      |       10,000        |    0.1

You are allowed to make only (at most) one investment each year, and you can invest only $5,000 each time. (Any additional money accumulated is left idle.)
(a) Use dynamic programming to find the investment policy that maximizes the expected amount of money you will have after 3 years.
(b) Use dynamic programming to find the investment policy that maximizes the probability that you will have at least $10,000 after 3 years.

11.4-3.* Suppose that the situation for the Hit-and-Miss Manufacturing Co. problem (Example 5) has changed somewhat. After a more careful analysis, you now estimate that each item produced will be acceptable with probability 2/3, rather than 1/2, so that the probability of producing zero acceptable items in a lot of size L is (1/3)^L. Furthermore, there now is only enough time available to make two production runs. Use dynamic programming to determine the new optimal policy for this problem.

11.4-4. Reconsider Example 6. Suppose that the bet is changed as follows: “Starting with two chips, she will not have at least five chips after five plays of the game.” By referring to the previous computational results, make additional calculations to determine the new optimal policy for the enterprising young statistician.

11.4-5. The Profit & Gambit Co. has a major product that has been losing money recently because of declining sales. In fact, during the current quarter of the year, sales will be 4 million units below the break-even point. Because the marginal revenue for each unit sold exceeds the marginal cost by $5, this amounts to a loss of $20 million for the quarter. Therefore, management must take action quickly to rectify this situation. Two alternative courses of action are being considered. One is to abandon the product immediately, incurring a cost of $20 million for shutting down. The other alternative is to undertake an intensive advertising campaign to increase sales and then abandon the product (at the cost of $20 million) only if the campaign is not sufficiently successful. Tentative plans for this advertising campaign have been developed and analyzed. It would extend over the next three quarters (subject to early cancellation), and the cost would be $30 million in each of the three quarters.
It is estimated that the increase in sales would be approximately 3 million units in the first quarter, another 2 million units in the second quarter, and another 1 million units in the third quarter. However, because of a number of unpredictable market variables, there is considerable uncertainty as to what impact the advertising actually would have; and careful analysis indicates that the estimates for each quarter could turn out to be off by as much as 2 million units in either direction. (To quantify this uncertainty, assume that the additional increases in sales in the three quarters are independent random variables having a uniform distribution with a range from 1 to 5 million, from 0 to 4 million, and from −1 to 3 million, respectively.) If the actual increases are too small, the advertising campaign can be discontinued and the product abandoned at the end of either of the next two quarters. If the intensive advertising campaign were initiated and continued to its completion, it is estimated that the sales for some time thereafter would continue to be at about the same level as in the third (last) quarter of the campaign. Therefore, if the sales in that quarter still were below the break-even point, the product would be abandoned. Otherwise, it is estimated that the expected discounted profit thereafter would be $40 for each unit sold over the break-even point in the third quarter. Use dynamic programming to determine the optimal policy maximizing the expected profit.

CHAPTER 12

Integer Programming

In Chap. 3 you saw several examples of the numerous and diverse applications of linear programming. However, one key limitation that prevents many more applications is the assumption of divisibility (see Sec. 3.3), which requires that noninteger values be permissible for decision variables. In many practical problems, the decision variables actually make sense only if they have integer values.
For example, it is often necessary to assign people, machines, and vehicles to activities in integer quantities. If requiring integer values is the only way in which a problem deviates from a linear programming formulation, then it is an integer programming (IP) problem. (The more complete name is integer linear programming, but the adjective linear normally is dropped except when this problem is contrasted with the more esoteric integer nonlinear programming problem, which is beyond the scope of this book.) The mathematical model for integer programming is the linear programming model (see Sec. 3.2) with the one additional restriction that the variables must have integer values. If only some of the variables are required to have integer values (so the divisibility assumption holds for the rest), this model is referred to as mixed integer programming (MIP). When distinguishing the all-integer problem from this mixed case, we call the former pure integer programming. For example, the Wyndor Glass Co. problem presented in Sec. 3.1 actually would have been an IP problem if the two decision variables x1 and x2 had represented the total number of units to be produced of products 1 and 2, respectively, instead of the production rates. Because both products (glass doors and wood-framed windows) necessarily come in whole units, x1 and x2 would have to be restricted to integer values. There have been numerous applications of integer programming that involve a direct extension of linear programming where the divisibility assumption must be dropped. However, another area of application may be of even greater importance, namely, problems involving a number of interrelated “yes-or-no decisions.” In such decisions, the only two possible choices are yes and no. For example, should we undertake a particular fixed project? Should we make a particular fixed investment? Should we locate a facility in a particular site? 
With just two choices, we can represent such decisions by decision variables that are restricted to just two values, say 0 and 1. Thus, the jth yes-or-no decision would be represented by, say, xj such that

xj = 1 if decision j is yes, 0 if decision j is no.

Such variables are called binary variables (or 0–1 variables). Consequently, IP problems that contain only binary variables sometimes are called binary integer programming (BIP) problems (or 0–1 integer programming problems).

Section 12.1 presents a miniature version of a typical BIP problem and Sec. 12.2 surveys a variety of other BIP applications. Additional formulation possibilities with binary variables are discussed in Sec. 12.3, and Sec. 12.4 presents a series of formulation examples. Sections 12.5–12.8 then deal with ways to solve IP problems, including both BIP and MIP problems. The chapter concludes in Sec. 12.9 by introducing an exciting more recent development (constraint programming) that promises to greatly expand our ability to formulate and solve integer programming models.

■ 12.1 PROTOTYPE EXAMPLE

The CALIFORNIA MANUFACTURING COMPANY is considering expansion by building a new factory in either Los Angeles or San Francisco, or perhaps even in both cities. It also is considering building at most one new warehouse, but the choice of location is restricted to a city where a new factory is being built. The net present value (total profitability considering the time value of money) of each of these alternatives is shown in the fourth column of Table 12.1. The rightmost column gives the capital required (already included in the net present value) for the respective investments, where the total capital available is $10 million. The objective is to find the feasible combination of alternatives that maximizes the total net present value.
The BIP Model

Although this problem is small enough that it can be solved very quickly by inspection (build factories in both cities but no warehouse), let us formulate the IP model for illustrative purposes. All the decision variables have the binary form

xj = 1 if decision j is yes, 0 if decision j is no  (j = 1, 2, 3, 4).

Let Z = total net present value of these decisions. If the investment is made to build a particular facility (so that the corresponding decision variable has a value of 1), the estimated net present value from that investment is given in the fourth column of Table 12.1. If the investment is not made (so the decision variable equals 0), the net present value is 0. Therefore, using units of millions of dollars,

Z = 9x1 + 5x2 + 6x3 + 4x4.

■ TABLE 12.1 Data for the California Manufacturing Co. example

Decision Number | Yes-or-No Question                  | Decision Variable | Net Present Value | Capital Required
       1        | Build factory in Los Angeles?       |        x1         |     $9 million    |    $6 million
       2        | Build factory in San Francisco?     |        x2         |     $5 million    |    $3 million
       3        | Build warehouse in Los Angeles?     |        x3         |     $6 million    |    $5 million
       4        | Build warehouse in San Francisco?   |        x4         |     $4 million    |    $2 million
Capital available: $10 million

The rightmost column of Table 12.1 indicates that the amount of capital expended on the four facilities cannot exceed $10 million. Consequently, continuing to use units of millions of dollars, one constraint in the model is

6x1 + 3x2 + 5x3 + 2x4 ≤ 10.

Because the last two decisions represent mutually exclusive alternatives (the company wants at most one new warehouse), we also need the constraint

x3 + x4 ≤ 1.

Furthermore, decisions 3 and 4 are contingent decisions, because they are contingent on decisions 1 and 2, respectively (the company would consider building a warehouse in a city only if a new factory also were going there). Thus, in the case of decision 3, we require that x3 = 0 if x1 = 0. This restriction on x3 (when x1 = 0) is imposed by adding the constraint

x3 ≤ x1.
Similarly, the requirement that x4 = 0 if x2 = 0 is imposed by adding the constraint

x4 ≤ x2.

Therefore, after we rewrite these two constraints to bring all variables to the left-hand side, the complete BIP model is

Maximize Z = 9x1 + 5x2 + 6x3 + 4x4,

subject to

6x1 + 3x2 + 5x3 + 2x4 ≤ 10
x3 + x4 ≤ 1
−x1 + x3 ≤ 0
−x2 + x4 ≤ 0
xj ≤ 1
xj ≥ 0
and xj is integer,  for j = 1, 2, 3, 4.

Equivalently, the last three lines of this model can be replaced by the single restriction

xj is binary,  for j = 1, 2, 3, 4.

Except for its small size, this example is typical of many real applications of integer programming where the basic decisions to be made are of the yes-or-no type. Like the second pair of decisions for this example, groups of yes-or-no decisions often constitute groups of mutually exclusive alternatives such that only one decision in the group can be yes. Each group requires a constraint that the sum of the corresponding binary variables must be equal to 1 (if exactly one decision in the group must be yes) or less than or equal to 1 (if at most one decision in the group can be yes). Occasionally, decisions of the yes-or-no type are contingent decisions, i.e., decisions that depend upon previous decisions. For example, one decision is said to be contingent on another decision if it is allowed to be yes only if the other is yes. This situation occurs when the contingent decision involves a follow-up action that would become irrelevant, or even impossible, if the other decision were no. The form that the resulting constraint takes always is that illustrated by the third and fourth constraints in the example.

Software Options for Solving Such Models

All the software packages featured in your OR Courseware (Excel, LINGO/LINDO, and MPL/Solvers) include an algorithm for solving (pure or mixed) BIP models, as well as an algorithm for solving general (pure or mixed) IP models where variables need to be integer but not binary.
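Before turning to the mechanics of these packages, note that a model as small as the prototype example has only 2⁴ = 16 candidate solutions, so it can also be checked by brute-force enumeration. The following Python sketch (an illustration only, not part of the OR Courseware) evaluates every binary assignment against the four functional constraints:

```python
from itertools import product

# Data for the California Manufacturing Co. example (units: millions of dollars)
npv = [9, 5, 6, 4]        # net present values of the four decisions
capital = [6, 3, 5, 2]    # capital required by the four decisions
budget = 10               # total capital available

best_z, best_x = None, None
for x in product([0, 1], repeat=4):       # all 16 binary assignments
    x1, x2, x3, x4 = x
    cap_used = sum(c * xi for c, xi in zip(capital, x))
    feasible = (
        cap_used <= budget    # capital constraint
        and x3 + x4 <= 1      # at most one warehouse (mutually exclusive)
        and x3 <= x1          # LA warehouse contingent on LA factory
        and x4 <= x2          # SF warehouse contingent on SF factory
    )
    if feasible:
        z = sum(v * xi for v, xi in zip(npv, x))
        if best_z is None or z > best_z:
            best_z, best_x = z, x

print(best_x, best_z)  # (1, 1, 0, 0) 14
```

As expected, the enumeration confirms the solution found by inspection: build factories in both cities but no warehouse, with Z = 14 ($14 million). Of course, enumeration is hopeless beyond a handful of variables, which is why the algorithms discussed below are needed.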
However, since binary variables are considerably easier to deal with than general integer variables, the former algorithm generally can solve substantially larger problems than the latter algorithm.

When using Solver (or ASPE’s Solver), the procedure is basically the same as for linear programming. The one difference arises when you click on the “Add” button on the Solver dialog box to add the constraints. In addition to the constraints that fit linear programming, you also need to add the integer constraints. In the case of integer variables that are not binary, this is accomplished in the Add Constraint dialog box by choosing the range of integer-restricted variables on the left-hand side and then choosing “int” from the pop-up menu. In the case of binary variables, choose “bin” from the pop-up menu instead. One of the Excel files for this chapter shows the complete spreadsheet formulation and solution for the California Manufacturing Co. example. The Solved Examples section of the book’s website also includes a small minimization example with two integer-restricted variables. This example illustrates the formulation of the IP model and its graphical solution, along with a spreadsheet formulation and solution.

A LINGO model uses the function @BIN() to specify that the variable named inside the parentheses is a binary variable. For a general integer variable (one restricted to integer values but not just binary values), the function @GIN() is used in the same way. In either case, the function can be embedded inside an @FOR statement to impose this binary or integer constraint on an entire set of variables.

In a LINDO syntax model, the binary or integer constraints are inserted after the END statement. A variable X is specified to be a general integer variable by entering GIN X. Alternatively, for any positive integer value of n, the statement GIN n specifies that the first n variables are general integer variables.
Binary variables are handled in the same way except for substituting the word INTEGER for GIN.

For an MPL model, the keyword INTEGER is used to designate general integer variables, whereas BINARY is used for binary variables. In the variables section of an MPL model, all you need to do is add the appropriate adjective (INTEGER or BINARY) in front of the label VARIABLES to specify that the set of variables listed below the label is of that type. Alternatively, you can ignore this specification in the variables section and instead place the integer or binary constraints in the model section anywhere after the other constraints. In this case, the label over the set of variables becomes just INTEGER or BINARY. The student version of MPL includes four elite solvers for linear programming (CPLEX, GUROBI, CoinMP, and SULUM), and all four also include state-of-the-art algorithms for solving pure or mixed IP or BIP models. When using CPLEX, for example, by selecting the MIP Strategy tab from the CPLEX Parameters dialog box in the Options menu, an experienced practitioner can even choose from a wide variety of options for exactly how to execute the algorithm to best fit the particular problem.

These instructions for how to use the various software packages become clearer when you see them applied to examples. The Excel, LINGO/LINDO, and MPL/Solvers files for this chapter in your OR Courseware show how each of these software options would be applied to the prototype example introduced in this section, as well as to the subsequent IP examples. The latter part of the chapter will focus on IP algorithms that are similar to those used in these software packages. Section 12.6 will use the prototype example to illustrate the application of the pure BIP algorithm presented there.

■ 12.2 SOME BIP APPLICATIONS

Just as in the California Manufacturing Co. example, managers frequently must face yes-or-no decisions.
Therefore, binary integer programming (BIP) is widely used to aid in these decisions. We now will introduce various types of yes-or-no decisions. This section includes two application vignettes to help illustrate two of these types. For the other types, we also will mention some other examples of actual applications where BIP was used to address these decisions. All of these examples are included in the selected references of award-winning applications cited at the end of the chapter, so a link to these articles is provided on the book’s website.

Investment Analysis

Linear programming sometimes is used to make capital budgeting decisions about how much to invest in various projects. However, as the California Manufacturing Co. example demonstrates, some capital budgeting decisions do not involve how much to invest, but rather, whether to invest a fixed amount. Specifically, the four decisions in the example were whether to invest the fixed amount of capital required to build a certain kind of facility (factory or warehouse) in a certain location (Los Angeles or San Francisco). Management often must face decisions about whether to make fixed investments (those where the amount of capital required has been fixed in advance). Should we acquire a certain subsidiary being spun off by another company? Should we purchase a certain source of raw materials? Should we add a new production line to produce a certain input item ourselves rather than continuing to obtain it from a supplier?

In general, capital budgeting decisions about fixed investments are yes-or-no decisions of the following type.

Each yes-or-no decision: Should we make a certain fixed investment?
Its decision variable = 1 if yes, 0 if no.

An example that falls somewhat into this category is described in Selected Reference A6. A major OR study was conducted for the South African National Defense Force to upgrade its capabilities with a smaller budget.
The “investments” under consideration in this case were acquisition costs and ongoing expenses that would be required to provide specific types of military capabilities. A mixed BIP model was formulated to choose those specific capabilities that would maximize the overall effectiveness of the Defense Force while satisfying a budget constraint. The model had over 16,000 variables (including 256 binary variables) and over 5,000 functional constraints. The resulting optimization of the size and shape of the defense force provided savings of over $1.1 billion per year as well as vital nonmonetary benefits.

Selected Reference A2 presents another award-winning application of a mixed BIP model to investment analysis. This particular model has been used by the investment firm Grantham, Mayo, Van Otterloo and Company to construct many quantitatively managed portfolios representing over $8 billion in assets. In each case, a portfolio has been constructed that is close (in terms of sector and security exposure) to a target portfolio but with a far smaller and more manageable number of distinct stocks. A binary variable is used to represent each yes-or-no decision as to whether a particular stock should be included in the portfolio and then a separate continuous variable represents the amount of the stock to include. Given a current portfolio that needs to be rebalanced, it is desirable to reduce transaction costs by minimizing the number of transactions needed to obtain the final portfolio, so binary variables also are included to represent the yes-or-no decisions as to whether to make the transactions to change the amounts of individual stocks being held. The inclusion of this consideration in the model has reduced the annual cost of trading the portfolios being managed by at least $4 million.

An Application Vignette

The Midwest Independent Transmission System Operator, Inc. (MISO) is a nonprofit organization formed in 1998 to administer the generation and transmission of electricity throughout the midwestern United States. It serves over 40 million customers (both individuals and businesses) through its control of nearly 60,000 miles of high-voltage transmission lines and more than 1,000 power plants capable of generating 146,000 megawatts of electricity. This infrastructure spans 13 midwestern U.S. states plus the Canadian province of Manitoba. The key mission of any regional transmission organization is to reliably and efficiently provide the electricity needed by its customers. MISO transformed the way this was done by using mixed binary integer programming to minimize the total cost of providing the needed electricity. Each main binary variable in the model represents a yes-or-no decision about whether a particular power plant should be on during a particular time period. After solving this model, the results are then fed into a linear programming model to set electricity output levels and establish prices for electricity trades. The mixed BIP model is a massive one with about 3,300,000 continuous variables, 450,000 binary variables, and 3,900,000 functional constraints. A special technique (Lagrangian relaxation) is used to solve such a huge model. This innovative application of operations research yielded savings of approximately $2.5 billion over the four years from 2007 to 2010, with an additional savings of about $7 billion expected through 2020. These dramatic results led to MISO winning the prestigious First Prize in the 2011 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: B. Carlson and 12 co-authors, “MISO Unlocks Billions in Savings Through the Application of Operations Research for Energy and Ancillary Services Markets,” Interfaces, 42(1): 58–73, Jan.–Feb. 2012. (A link to this article is provided on our website, www.mhhe.com/hillier.)
Site Selection

In this global economy, many corporations are opening up new plants in various parts of the world to take advantage of lower labor costs, etc. Before selecting a site for a new plant, many potential sites may need to be analyzed and compared. (The California Manufacturing Co. example had just two potential sites for each of two kinds of facilities.) Each of the potential sites involves a yes-or-no decision of the following type.

Each yes-or-no decision: Should a certain site be selected for the location of a certain new facility?
Its decision variable = 1 if yes, 0 if no.

In many cases, the objective is to select the sites so as to minimize the total cost of the new facilities that will provide the required output. As described in Selected Reference A10, AT&T used a BIP model to help dozens of their customers select the sites for their telemarketing centers. The model minimizes labor, communications, and real estate costs while providing the desired level of coverage by the centers. In one year alone, this approach enabled 46 AT&T customers to make their yes-or-no decisions on site locations swiftly and confidently, while committing to $375 million in annual network services and $31 million in equipment sales from AT&T.

Selected Reference A5 describes how global papermaker Norske Skog used a similar model, but this time for selecting sites to close facilities rather than opening new ones. The company had been experiencing declining demand for its products as electronic media replaced newsprint publications. Therefore, a large BIP model (312 binary variables, 47,000 continuous variables, and 2,600 functional constraints) was used to select two paper mills and a paper machine to close, saving the company $100 million annually. We next describe an important type of problem for many corporations where site selection plays a key role.
Designing a Production and Distribution Network

Manufacturers today face great competitive pressure to get their products to market more quickly as well as to reduce their production and distribution costs. Therefore, any corporation that distributes its products over a wide geographical area (or even worldwide) must pay continuing attention to the design of its production and distribution network. This design involves addressing the following kinds of yes-or-no decisions:

Should a certain plant remain open?
Should a certain site be selected for a new plant?
Should a certain distribution center remain open?
Should a certain site be selected for a new distribution center?

If each market area is to be served by a single distribution center, then we also have another kind of yes-or-no decision for each combination of a market area and a distribution center.

Should a certain distribution center be assigned to serve a certain market area?

For each of the yes-or-no decisions of any of these kinds:

Its decision variable = 1 if yes, 0 if no.

The first application vignette in this section describes how the Midwest Independent Transmission System Operator used a huge BIP model of this type to save literally billions of dollars. The product being produced and distributed through a network in this case is electricity.

Dispatching Shipments

Once a production and distribution network has been designed and put into operation, daily operating decisions need to be made about how to send the shipments. Some of these decisions again are yes-or-no decisions. For example, suppose that trucks are being used to transport the shipments and each truck typically makes deliveries to several customers during each trip. It then becomes necessary to select a route (sequence of customers) for each truck, so each candidate for a route leads to the following yes-or-no decision:

Should a certain route be selected for one of the trucks?
Its decision variable = 1 if yes, 0 if no.
The objective would be to select the routes that would minimize the total cost of making all the deliveries. Various complications also can be considered. For example, if different truck sizes are available, each candidate for selection would include both a certain route and a certain truck size. Similarly, if timing is an issue, a time period for the departure also can be specified as part of the yes-or-no decision. With both factors, each yes-or-no decision would have the form shown next.

Should all the following be selected simultaneously for a delivery run:
1. A certain route,
2. A certain size of truck, and
3. A certain time period for the departure?
Its decision variable = 1 if yes, 0 if no.

For example, one BIP application of this type was developed by Petrobras, the largest corporation in Brazil and one of the world’s oil giants. As described in Selected Reference A7, Petrobras transports approximately 1,900 employees daily between about 80 offshore oil platforms and four mainland bases, using more than 40 helicopters. A BIP model requires less than an hour to generate optimized helicopter routes and schedules each day, resulting in annual savings of more than $20 million. Thus, the “shipments” being dispatched in this case are groups of employees.

Scheduling Interrelated Activities

We all schedule interrelated activities in our everyday lives, even if it is just scheduling when to begin our various homework assignments. So too, managers must schedule various kinds of interrelated activities. When should we begin production for various new orders? When should we begin marketing various new products? When should we make various capital investments to expand our production capacity? For any such activity, the decision about when to begin can be expressed in terms of a series of yes-or-no decisions, with one of these decisions for each of the possible time periods in which to begin, as shown below.
Should a certain activity begin in a certain time period?
Its decision variable = 1 if yes, 0 if no.

Since a particular activity can begin in only one time period, the choice of the various time periods provides a group of mutually exclusive alternatives, so the decision variable for only one time period can have a value of 1. Selected Reference A4 describes how Swedish municipalities use large BIP models of this type to plan staff scheduling and routing of 4,000 home care workers to attend to the needs of the elderly. Replacing manual planning by BIP has resulted in annual savings in the range of $30 million to $45 million while also improving the quality of the home care.

Airline Applications

The airline industry is an especially heavy user of OR throughout its operations. Many hundreds of OR professionals now work in this area. Major airline companies typically have a large in-house department that works on OR applications. In addition, there are some prominent consulting firms that focus solely on the problems of companies involved with transportation, including especially airlines. We will mention here just two of the applications which specifically use BIP.

One is the fleet assignment problem. Given several different types of airplanes available, the problem is to assign a specific type to each flight leg in the schedule so as to maximize the total profit from meeting the schedule. The basic trade-off is that if the airline uses an airplane that is too small on a particular flight leg, it will leave potential customers behind, while if it uses an airplane that is too large, it will suffer the greater expense of the larger airplane to fly empty seats.

An Application Vignette

Netherlands Railways (Nederlandse Spoorwegen Reizigers) is the main Dutch railway operator of passenger trains. In this densely populated country, about 5,500 passenger trains currently transport approximately 1.1 million passengers on an average workday.
The company’s operating revenues are approximately 1.5 billion Euros (approximately $2 billion) per year. The amount of passenger transport on the Dutch railway network has steadily increased over the years, so a national study in 2002 concluded that three major infrastructure extensions should be undertaken. As a result, a new national timetable for the Dutch railway system, specifying the planned departure and arrival times of every train at every station, would need to be developed. Therefore, the management of Netherlands Railways directed that an extensive operations research study should be conducted over the next few years to develop an optimal overall plan for both the new timetable and the usage of the available resources (rolling-stock units and train crews) for meeting this timetable. A task force consisting of several members of the company’s Department of Logistics and several prominent OR scholars from European universities or a software company was formed to conduct this study. The new timetable was launched in December 2006, along with a new system for scheduling the allocation of rolling-stock units (various kinds of passenger cars and other train units) to the trains meeting this timetable. A new system also was implemented for scheduling the assignment of crews (with a driver and a number of conductors in each crew) to the trains. Binary integer programming and related techniques were used to do all of this. For example, the BIP model used for crew scheduling closely resembles (except for its vastly larger size) the one shown in this section for the Southwestern Airlines problem. This application of operations research immediately resulted in an additional annual profit of approximately $60 million for the company and this additional profit is expected to increase to $105 million annually in the coming years. 
These dramatic results led to Netherlands Railways winning the prestigious First Prize in the 2008 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: L. Kroon, D. Huisman, E. Abbink, P.-J. Fioole, M. Fischetti, G. Maróti, A. Schrijver, A. Steenbeck, and R. Ybema, "The New Dutch Timetable: The OR Revolution," Interfaces, 39(1): 6–17, Jan.–Feb. 2009. (A link to this article is provided on our website, www.mhhe.com/hillier.)

For each combination of an airplane type and a flight leg, we have the following yes-or-no decision:

Should a certain type of airplane be assigned to a certain flight leg?

Its decision variable equals 1 if yes and 0 if no. Prior to its merger with Northwest Airlines, completed in 2010, Delta Air Lines flew over 2,500 domestic flight legs every day, using about 450 airplanes of 10 different types. As described in Selected Reference A11, it has used a huge integer programming model (about 40,000 functional constraints, 20,000 binary variables, and 40,000 general integer variables) to solve its fleet assignment problem each time a change is needed. This application has saved Delta approximately $100 million per year.

A fairly similar application is the crew scheduling problem. Here, rather than assigning airplane types to flight legs, we are instead assigning sequences of flight legs to crews of pilots and flight attendants. Thus, for each feasible sequence of flight legs that leaves from a crew base and returns to the same base, the following yes-or-no decision must be made:

Should a certain sequence of flight legs be assigned to a crew?

Its decision variable equals 1 if yes and 0 if no. The objective is to minimize the total cost of providing crews that cover each flight leg in the schedule. A full-fledged formulation example of this type will be presented at the end of Sec. 12.4.
A related problem for airline companies is that their crew schedules occasionally need to be revised quickly when flight delays or cancellations occur because of inclement weather, aircraft mechanical problems, or crew unavailability. As described in an application vignette in Sec. 2.2 (as well as in Selected Reference A12), Continental Airlines (now merged with United Airlines) achieved savings of $40 million in the first year of using an elaborate decision support system based on BIP for optimizing the reassignment of crews to flights when such emergencies occur.

Many of the problems that face airline companies also arise in other segments of the transportation industry. Therefore, some of the airline applications of OR are being extended to these other segments, including extensive use now by the railroad industry. For example, the second application vignette in this section describes how Netherlands Railways won a prestigious award for its applications of operations research, including integer programming and constraint programming (the subject of Sec. 12.9), throughout its operations.

■ 12.3 INNOVATIVE USES OF BINARY VARIABLES IN MODEL FORMULATION

You have just seen a number of examples where the basic decisions of the problem are of the yes-or-no type, so that binary variables are introduced to represent these decisions. We now will look at some other ways in which binary variables can be very useful. In particular, we will see that these variables sometimes enable us to take a problem whose natural formulation is intractable and reformulate it as a pure or mixed IP problem. This kind of situation arises when the original formulation of the problem fits either an IP or a linear programming format except for minor disparities involving combinatorial relationships in the model.
By expressing these combinatorial relationships in terms of questions that must be answered yes or no, auxiliary binary variables can be introduced into the model to represent these yes-or-no decisions. (Rather than being a decision variable for the original problem under consideration, an auxiliary binary variable is a binary variable that is introduced into the model of the problem simply to help formulate the model as a pure or mixed BIP model.) Introducing these variables reduces the problem to an MIP problem (or a pure IP problem if all the original variables also are required to have integer values).

Some cases that can be handled by this approach are discussed next, where the xj denote the original variables of the problem (they may be either continuous or integer variables) and the yi denote the auxiliary binary variables that are introduced for the reformulation.

Either-Or Constraints

Consider the important case where a choice can be made between two constraints, so that only one (either one) must hold (whereas the other one can hold but is not required to do so). For example, there may be a choice as to which of two resources to use for a certain purpose, so that it is necessary for only one of the two resource availability constraints to hold mathematically. To illustrate the approach to such situations, suppose that one of the requirements in the overall problem is that

Either   3x1 + 2x2 ≤ 18
or       x1 + 4x2 ≤ 16,

i.e., at least one of these two inequalities must hold but not necessarily both. This requirement must be reformulated to fit the linear programming format, where all specified constraints must hold. Let M symbolize a very large positive number. Then this requirement can be rewritten as

Either   3x1 + 2x2 ≤ 18            or   3x1 + 2x2 ≤ 18 + M
         x1 + 4x2 ≤ 16 + M              x1 + 4x2 ≤ 16.
The key is that adding M to the right-hand side of such constraints has the effect of eliminating them, because they would be satisfied automatically by any solutions that satisfy the other constraints of the problem. (This formulation assumes that the set of feasible solutions for the overall problem is a bounded set and that M is large enough that it will not eliminate any feasible solutions.) This formulation is equivalent to the set of constraints

3x1 + 2x2 ≤ 18 + My
x1 + 4x2 ≤ 16 + M(1 − y).

Because the auxiliary variable y must be either 0 or 1, this formulation guarantees that one of the original constraints must hold while the other is, in effect, eliminated. This new set of constraints would then be appended to the other constraints in the overall model to give a pure or mixed IP problem (depending upon whether the xj are integer or continuous variables).

This approach is related directly to our earlier discussion about expressing combinatorial relationships in terms of questions that must be answered yes or no. The combinatorial relationship involved concerns the combination of the other constraints of the model with the first of the two alternative constraints and then with the second. Which of these two combinations of constraints is better (in terms of the value of the objective function that then can be achieved)? To rephrase this question in yes-or-no terms, we ask two complementary questions:

1. Should x1 + 4x2 ≤ 16 be selected as the constraint that must hold?
2. Should 3x1 + 2x2 ≤ 18 be selected as the constraint that must hold?

Because exactly one of these questions is to be answered affirmatively, we let the binary terms y and 1 − y, respectively, represent these yes-or-no decisions. Thus, y = 1 if the answer is yes to the first question (and no to the second), whereas 1 − y = 1 (that is, y = 0) if the answer is yes to the second question (and no to the first).
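The logic of this auxiliary-variable device is easy to verify computationally. The following sketch (not from the text; the integer grid, its bounds, and the value M = 100 are assumptions chosen for illustration) enumerates both formulations and confirms that they admit exactly the same points:

```python
# Brute-force check that the Big-M formulation with one binary y reproduces
# the either-or requirement: 3x1 + 2x2 <= 18  OR  x1 + 4x2 <= 16.
M = 100  # assumed large enough for this bounded grid

def feasible_either_or(x1, x2):
    return 3*x1 + 2*x2 <= 18 or x1 + 4*x2 <= 16

def feasible_big_m(x1, x2):
    # Feasible if SOME choice of the binary y satisfies both relaxed constraints.
    return any(3*x1 + 2*x2 <= 18 + M*y and x1 + 4*x2 <= 16 + M*(1 - y)
               for y in (0, 1))

grid = [(x1, x2) for x1 in range(21) for x2 in range(21)]
set_a = {p for p in grid if feasible_either_or(*p)}
set_b = {p for p in grid if feasible_big_m(*p)}
assert set_a == set_b  # the two feasible sets coincide on the grid
```

Here y = 0 leaves 3x1 + 2x2 ≤ 18 binding while y = 1 leaves x1 + 4x2 ≤ 16 binding; the check depends on M exceeding any value the relaxed left-hand side can attain over the bounded region, which is why the grid is kept small.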
Since y + (1 − y) = 1 (one yes) automatically, there is no need to add another constraint to force these two decisions to be mutually exclusive. (If separate binary variables y1 and y2 had been used instead to represent these yes-or-no decisions, then an additional constraint y1 + y2 = 1 would have been needed to make them mutually exclusive.) A formal presentation of this approach is given next for a more general case.

K out of N Constraints Must Hold

Consider the case where the overall model includes a set of N possible constraints such that only some K of these constraints must hold. (Assume that K < N.) Part of the optimization process is to choose the combination of K constraints that permits the objective function to reach its best possible value. The N − K constraints not chosen are, in effect, eliminated from the problem, although feasible solutions might coincidentally still satisfy some of them. This case is a direct generalization of the preceding case, which had K = 1 and N = 2.

Denote the N possible constraints by

f1(x1, x2, . . . , xn) ≤ d1
f2(x1, x2, . . . , xn) ≤ d2
   ⋮
fN(x1, x2, . . . , xn) ≤ dN.

Then, applying the same logic as for the preceding case, we find that an equivalent formulation of the requirement that some K of these constraints must hold is

f1(x1, x2, . . . , xn) ≤ d1 + My1
f2(x1, x2, . . . , xn) ≤ d2 + My2
   ⋮
fN(x1, x2, . . . , xn) ≤ dN + MyN

y1 + y2 + . . . + yN = N − K,

and

yi is binary,   for i = 1, 2, . . . , N,

where M is an extremely large positive number. For each binary variable yi (i = 1, 2, . . . , N), note that yi = 0 makes Myi = 0, which reduces the new constraint i to the original constraint i. On the other hand, yi = 1 makes (di + Myi) so large that (again assuming a bounded feasible region) the new constraint i is automatically satisfied by any solution that satisfies the other new constraints, which has the effect of eliminating the original constraint i.
Therefore, because the constraints on the yi guarantee that K of these variables will equal 0 and the remaining N − K will equal 1, K of the original constraints will be unchanged and the other N − K original constraints will, in effect, be eliminated. The choice of which K constraints should be retained is made by applying the appropriate algorithm to the overall problem so that it finds an optimal solution for all the variables simultaneously.

Functions with N Possible Values

Consider the situation where a given function is required to take on any one of N given values. Denote this requirement by

f(x1, x2, . . . , xn) = d1 or d2, . . . , or dN.

One special case is where this function is

f(x1, x2, . . . , xn) = a1x1 + a2x2 + . . . + anxn,

as on the left-hand side of a linear programming constraint. Another special case is where f(x1, x2, . . . , xn) = xj for a given value of j, so the requirement becomes that xj must take on any one of N given values. The equivalent IP formulation of this requirement is the following:

f(x1, x2, . . . , xn) = d1y1 + d2y2 + . . . + dNyN
y1 + y2 + . . . + yN = 1

and

yi is binary,   for i = 1, 2, . . . , N,

so this new set of constraints would replace this requirement in the statement of the overall problem. This set of constraints provides an equivalent formulation because exactly one yi must equal 1 and the others must equal 0, so exactly one di is being chosen as the value of the function. In this case, there are N yes-or-no questions being asked, namely, should di be the value chosen (i = 1, 2, . . . , N)? Because the yi respectively represent these yes-or-no decisions, the second constraint makes them mutually exclusive alternatives.

To illustrate how this case can arise, reconsider the Wyndor Glass Co. problem presented in Sec. 3.1. Eighteen hours of production time per week in Plant 3 currently is unused and available for the two new products or for certain future products that will be ready for production soon.
In order to leave any remaining capacity in usable blocks for these future products, management now wants to impose the restriction that the production time used by the two current new products be 6 or 12 or 18 hours per week. Thus, the third constraint of the original model (3x1 + 2x2 ≤ 18) now becomes

3x1 + 2x2 = 6 or 12 or 18.

In the preceding notation, N = 3 with d1 = 6, d2 = 12, and d3 = 18. Consequently, management's new requirement should be formulated as follows:

3x1 + 2x2 = 6y1 + 12y2 + 18y3
y1 + y2 + y3 = 1

and

y1, y2, y3 are binary.

The overall model for this new version of the problem then consists of the original model (see Sec. 3.1) plus this new set of constraints that replaces the original third constraint. This replacement yields a very tractable MIP formulation.

The Fixed-Charge Problem

It is quite common to incur a fixed charge or setup cost when undertaking an activity. For example, such a charge occurs when a production run to produce a batch of a particular product is undertaken and the required production facilities must be set up to initiate the run. In such cases, the total cost of the activity is the sum of a variable cost related to the level of the activity and the setup cost required to initiate the activity. Frequently the variable cost will be at least roughly proportional to the level of the activity. If this is the case, the total cost of activity j can be represented by a function of the form

fj(xj) = kj + cjxj   if xj > 0
       = 0           if xj = 0,

where xj denotes the level of activity j (xj ≥ 0), kj denotes the setup cost, and cj denotes the cost for each incremental unit. Were it not for the setup cost kj, this cost structure would suggest the possibility of a linear programming formulation to determine the optimal levels of the competing activities. Fortunately, even with the kj, MIP can still be used.
To formulate the overall model, suppose that there are n activities, each with the preceding cost structure (with kj ≥ 0 in every case and kj > 0 for some j = 1, 2, . . . , n), and that the problem is to

Minimize   Z = f1(x1) + f2(x2) + . . . + fn(xn),

subject to given linear programming constraints. To convert this problem to an MIP format, we begin by posing n questions that must be answered yes or no; namely, for each j = 1, 2, . . . , n, should activity j be undertaken (xj > 0)? Each of these yes-or-no decisions is then represented by an auxiliary binary variable yj, so that

Z = (c1x1 + k1y1) + (c2x2 + k2y2) + . . . + (cnxn + knyn),

where

yj = 1   if xj > 0
   = 0   if xj = 0.

Therefore, the yj can be viewed as contingent decisions similar to (but not identical to) the type considered in Sec. 12.1. Let M be an extremely large positive number that exceeds the maximum feasible value of any xj (j = 1, 2, . . . , n). Then the constraints

xj ≤ Myj   for j = 1, 2, . . . , n

will ensure that yj = 1 rather than 0 whenever xj > 0. The one difficulty remaining is that these constraints leave yj free to be either 0 or 1 when xj = 0. Fortunately, this difficulty is automatically resolved because of the nature of the objective function. The case where kj = 0 can be ignored because yj can then be deleted from the formulation. So we consider the only other case, namely, where kj > 0. Then, when xj = 0, so that the constraints permit a choice between yj = 0 and yj = 1, yj = 0 must yield a smaller value of Z than yj = 1. Therefore, because the objective is to minimize Z, an algorithm yielding an optimal solution would always choose yj = 0 when xj = 0.

To summarize, the MIP formulation of the fixed-charge problem is

Minimize   Z = (c1x1 + k1y1) + (c2x2 + k2y2) + . . . + (cnxn + knyn),

subject to

the original constraints, plus
xj − Myj ≤ 0   and   yj is binary,   for j = 1, 2, . . . , n.

If the xj also had been restricted to be integer, then this would be a pure IP problem.
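The self-policing behavior of this formulation can be seen concretely in a small sketch (the two-activity data below are hypothetical, not from the text): enumerating every combination of levels and binary values shows that minimization alone drives yj to 0 whenever xj = 0, so the optimum pays the setup cost kj exactly when activity j is used.

```python
from itertools import product

# Hypothetical fixed-charge data: two activities must jointly supply 5 units.
c = [2, 3]    # unit (variable) costs
k = [10, 1]   # setup costs
M = 5         # an upper bound on either activity's level

best = None
for x1, x2, y1, y2 in product(range(6), range(6), (0, 1), (0, 1)):
    if x1 + x2 < 5:              # demand constraint
        continue
    if x1 > M*y1 or x2 > M*y2:   # linking constraints x_j <= M*y_j
        continue
    z = c[0]*x1 + k[0]*y1 + c[1]*x2 + k[1]*y2
    if best is None or z < best[0]:
        best = (z, x1, x2, y1, y2)

z, x1, x2, y1, y2 = best
# The optimum avoids the expensive setup k1 = 10 entirely: use activity 2 only.
assert (x1, x2, y1, y2) == (0, 5, 0, 1) and z == 16
```

Note that no constraint ever forces y1 = 0 when x1 = 0; the minimization does so on its own, exactly as argued above.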
To illustrate this approach, look again at the Nori & Leets Co. air pollution problem described in Sec. 3.4. The first of the abatement methods considered (increasing the height of the smokestacks) actually would involve a substantial fixed charge to get ready for any increase, in addition to a variable cost that would be roughly proportional to the amount of increase. After conversion to the equivalent annual costs used in the formulation, this fixed charge would be $2 million each for the blast furnaces and the open-hearth furnaces, whereas the variable costs are those identified in Table 3.14. Thus, in the preceding notation, k1 = 2, k2 = 2, c1 = 8, and c2 = 10, where the objective function is expressed in units of millions of dollars. Because the other abatement methods do not involve any fixed charges, kj = 0 for j = 3, 4, 5, 6. Consequently, the new MIP formulation of this problem is

Minimize   Z = 8x1 + 10x2 + 7x3 + 6x4 + 11x5 + 9x6 + 2y1 + 2y2,

subject to

the constraints given in Sec. 3.4, plus
x1 − My1 ≤ 0,
x2 − My2 ≤ 0,

and

y1, y2 are binary.

Binary Representation of General Integer Variables

Suppose that you have a pure IP problem where most of the variables are binary variables, but the presence of a few general integer variables prevents you from solving the problem by one of the very efficient BIP algorithms now available. A nice way to circumvent this difficulty is to use the binary representation for each of these general integer variables. Specifically, if the bounds on an integer variable x are

0 ≤ x ≤ u

and if N is defined as the integer such that

2^N ≤ u < 2^(N+1),

then the binary representation of x is

x = y0 + 2y1 + 4y2 + . . . + 2^N yN,

where the yi variables are (auxiliary) binary variables. Substituting this binary representation for each of the general integer variables (with a different set of auxiliary binary variables for each) thereby reduces the entire problem to a BIP model.
For example, suppose that an IP problem has just two general integer variables x1 and x2 along with many binary variables. Also suppose that the problem has nonnegativity constraints for both x1 and x2 and that the functional constraints include

x1 ≤ 5
2x1 + 3x2 ≤ 30.

These constraints imply that u = 5 for x1 and u = 10 for x2, so the above definition of N gives N = 2 for x1 (since 2^2 ≤ 5 < 2^3) and N = 3 for x2 (since 2^3 ≤ 10 < 2^4). Therefore, the binary representations of these variables are

x1 = y0 + 2y1 + 4y2
x2 = y3 + 2y4 + 4y5 + 8y6.

After we substitute these expressions for the respective variables throughout all the functional constraints and the objective function, the two functional constraints noted above become

y0 + 2y1 + 4y2 ≤ 5
2y0 + 4y1 + 8y2 + 3y3 + 6y4 + 12y5 + 24y6 ≤ 30.

Observe that each feasible value of x1 corresponds to one of the feasible values of the vector (y0, y1, y2), and similarly for x2 and (y3, y4, y5, y6). For example, x1 = 3 corresponds to (y0, y1, y2) = (1, 1, 0), and x2 = 5 corresponds to (y3, y4, y5, y6) = (1, 0, 1, 0).

For an IP problem where all the variables are (bounded) general integer variables, it is possible to use this same technique to reduce the problem to a BIP model. However, this is not advisable for most cases because of the explosion in the number of variables involved. Applying a good IP algorithm to the original IP model generally should be more efficient than applying a good BIP algorithm to the much larger BIP model.¹

In general terms, for all the formulation possibilities with auxiliary binary variables discussed in this section, we need to strike the same note of caution. This approach sometimes requires adding a relatively large number of such variables, which can make the model computationally infeasible. (Section 12.5 will provide some perspective on the sizes of IP problems that can be solved.)
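The bookkeeping in this substitution is mechanical, as the following sketch of the example above illustrates (standard Python only; the helper names are mine, not from the text):

```python
def num_bits(u):
    """Count N + 1 of binary variables needed, where 2**N <= u < 2**(N+1)."""
    n = 0
    while 2**(n + 1) <= u:
        n += 1
    return n + 1

def to_binary_vars(x, u):
    """Express x (0 <= x <= u) as the binary-variable vector (y0, y1, ..., yN)."""
    return tuple((x >> i) & 1 for i in range(num_bits(u)))

def from_binary_vars(ys):
    """Recover x = y0 + 2*y1 + 4*y2 + ... from the binary variables."""
    return sum(y * 2**i for i, y in enumerate(ys))

# The text's example: u = 5 gives N = 2 (three variables), u = 10 gives N = 3.
assert num_bits(5) == 3 and num_bits(10) == 4
assert to_binary_vars(3, 5) == (1, 1, 0)        # x1 = 3 -> (y0, y1, y2)
assert to_binary_vars(5, 10) == (1, 0, 1, 0)    # x2 = 5 -> (y3, y4, y5, y6)
assert all(from_binary_vars(to_binary_vars(x, 10)) == x for x in range(11))
```

Note that the original bound x ≤ u must still be kept as a constraint (as in y0 + 2y1 + 4y2 ≤ 5 above), since N + 1 binary variables can encode values up to 2^(N+1) − 1, which may exceed u.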
■ 12.4 SOME FORMULATION EXAMPLES

We now present a series of examples that illustrate a variety of formulation techniques with binary variables, including those discussed in the preceding sections. For the sake of clarity, these examples have been kept very small. (A somewhat larger formulation example, with dozens of binary variables and constraints, is included in the Solved Examples section of the book's website.) In actual applications, these formulations typically would be just a small part of a vastly larger model.

EXAMPLE 1   Making Choices When the Decision Variables Are Continuous

The Research and Development Division of the GOOD PRODUCTS COMPANY has developed three possible new products. However, to avoid undue diversification of the company's product line, management has imposed the following restriction:

Restriction 1: From the three possible new products, at most two should be chosen to be produced.

Each of these products can be produced in either of two plants. For administrative reasons, management has imposed a second restriction in this regard.

Restriction 2: Just one of the two plants should be chosen to be the sole producer of the new products.

The production cost per unit of each product would be essentially the same in the two plants. However, because of differences in their production facilities, the number of hours of production time needed per unit of each product might differ between the two plants. These data are given in Table 12.2, along with other relevant information, including marketing estimates of the number of units of each product that could be sold per week if it is produced. The objective is to choose the products, the plant, and the production rates of the chosen products so as to maximize total profit.

■ TABLE 12.2 Data for Example 1 (the Good Products Co. problem)

                                      Production Time Used for Each
                                         Unit Produced (hours)            Production Time
                                      Product 1   Product 2   Product 3   Available per Week
Plant 1                                   3           4           2           30 hours
Plant 2                                   4           6           2           40 hours

Unit profit (thousands of dollars)        5           7           3
Sales potential (units per week)          7           5           9

¹For evidence supporting this conclusion, see J. H. Owen and S. Mehrotra, "On the Value of Binary Expansions for General Mixed Integer Linear Programs," Operations Research, 50: 810–819, 2002.

In some ways, this problem resembles a standard product mix problem such as the Wyndor Glass Co. example described in Sec. 3.1. In fact, if we changed the problem by dropping the two restrictions and by requiring each unit of a product to use the production hours given in Table 12.2 in both plants (so the two plants now perform different operations needed by the products), it would become just such a problem. In particular, if we let x1, x2, x3 be the production rates of the respective products, the model then becomes

Maximize   Z = 5x1 + 7x2 + 3x3,

subject to

3x1 + 4x2 + 2x3 ≤ 30
4x1 + 6x2 + 2x3 ≤ 40
x1 ≤ 7
x2 ≤ 5
x3 ≤ 9

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0.

For the real problem, however, restriction 1 necessitates adding to the model the constraint

The number of strictly positive decision variables (x1, x2, x3) must be ≤ 2.

This constraint does not fit into a linear or an integer programming format, so the key question is how to convert it to such a format so that a corresponding algorithm can be used to solve the overall model. If the decision variables were binary variables, then the constraint would be expressed in this format as x1 + x2 + x3 ≤ 2. However, with continuous decision variables, a more complicated approach involving the introduction of auxiliary binary variables is needed.
Restriction 2 necessitates replacing the first two functional constraints (3x1 + 4x2 + 2x3 ≤ 30 and 4x1 + 6x2 + 2x3 ≤ 40) by the restriction that

Either   3x1 + 4x2 + 2x3 ≤ 30
or       4x1 + 6x2 + 2x3 ≤ 40

must hold, where the choice of which constraint must hold corresponds to the choice of which plant will be used to produce the new products. We discussed in the preceding section how such an either-or constraint can be converted to a linear or an integer programming format, again with the help of an auxiliary binary variable.

Formulation with Auxiliary Binary Variables.   To deal with restriction 1, we introduce three auxiliary binary variables (y1, y2, y3) with the interpretation

yj = 1   if xj > 0 can hold (can produce product j)
   = 0   if xj = 0 must hold (cannot produce product j),

for j = 1, 2, 3. To enforce this interpretation in the model with the help of M (an extremely large positive number), we add the constraints

x1 ≤ My1
x2 ≤ My2
x3 ≤ My3
y1 + y2 + y3 ≤ 2
yj is binary,   for j = 1, 2, 3.

The either-or constraint and nonnegativity constraints give a bounded feasible region for the decision variables (so each xj ≤ M throughout this region). Therefore, in each xj ≤ Myj constraint, yj = 1 allows any value of xj in the feasible region, whereas yj = 0 forces xj = 0. (Conversely, xj > 0 forces yj = 1, whereas xj = 0 allows either value of yj.) Consequently, when the fourth constraint forces choosing at most two of the yj to equal 1, this amounts to choosing at most two of the new products as the ones that can be produced.

To deal with restriction 2, we introduce another auxiliary binary variable y4 with the interpretation

y4 = 1   if 4x1 + 6x2 + 2x3 ≤ 40 must hold (choose Plant 2)
   = 0   if 3x1 + 4x2 + 2x3 ≤ 30 must hold (choose Plant 1).

As discussed in Sec. 12.3, this interpretation is enforced by adding the constraints

3x1 + 4x2 + 2x3 ≤ 30 + My4
4x1 + 6x2 + 2x3 ≤ 40 + M(1 − y4)
y4 is binary.
Consequently, after we move all variables to the left-hand side of the constraints, the complete model is

Maximize   Z = 5x1 + 7x2 + 3x3,

subject to

x1 ≤ 7
x2 ≤ 5
x3 ≤ 9
x1 − My1 ≤ 0
x2 − My2 ≤ 0
x3 − My3 ≤ 0
y1 + y2 + y3 ≤ 2
3x1 + 4x2 + 2x3 − My4 ≤ 30
4x1 + 6x2 + 2x3 + My4 ≤ 40 + M

and

x1 ≥ 0,   x2 ≥ 0,   x3 ≥ 0
yj is binary,   for j = 1, 2, 3, 4.

This now is an MIP model, with three variables (the xj) not required to be integer and four binary variables, so an MIP algorithm can be used to solve the model. When this is done (after substituting a large numerical value for M),² the optimal solution is y1 = 1, y2 = 0, y3 = 1, y4 = 1, x1 = 5.5, x2 = 0, and x3 = 9; that is, choose products 1 and 3 to produce, choose Plant 2 for the production, and choose the production rates of 5.5 units per week for product 1 and 9 units per week for product 3. The resulting total profit is $54,500 per week.

²In practice, some care is taken to choose a value for M that definitely is large enough to avoid eliminating any feasible solutions, but as small as possible otherwise, in order to avoid unduly enlarging the feasible region for the LP relaxation (described in the next section) and to avoid numerical instability. For this example, a careful examination of the constraints reveals that the minimum feasible value of M is M = 9.

EXAMPLE 2   Violating Proportionality

The SUPERSUDS CORPORATION is developing its marketing plans for next year's new products. For three of these products, the decision has been made to purchase a total of five TV spots for commercials on national television networks. The problem we will focus on is how to allocate the five spots to these three products, with a maximum of three spots (and a minimum of zero) for each product. Table 12.3 shows the estimated impact of allocating zero, one, two, or three spots to each product.
This impact is measured in terms of the profit (in units of millions of dollars) from the additional sales that would result from the spots, considering also the cost of producing the commercial and purchasing the spots. The objective is to allocate five spots to the products so as to maximize the total profit.

■ TABLE 12.3 Data for Example 2 (the Supersuds Corp. problem)

                        Profit (millions of dollars)
Number of TV Spots   Product 1   Product 2   Product 3
        0                0           0           0
        1                1           0           1
        2                3           2           2
        3                3           3           4

This small problem can be solved easily by dynamic programming (Chap. 11) or even by inspection. (The optimal solution is to allocate two spots to product 1, no spots to product 2, and three spots to product 3.) However, we will show two different BIP formulations for illustrative purposes. Such a formulation would become necessary if this small problem needed to be incorporated into a larger IP model involving the allocation of resources to marketing activities for all the corporation's new products.

One Formulation with Auxiliary Binary Variables.   A natural formulation would be to let x1, x2, x3 be the number of TV spots allocated to the respective products. The contribution of each xj to the objective function then would be given by the corresponding column in Table 12.3. However, each of these columns violates the assumption of proportionality described in Sec. 3.3. Therefore, we cannot write a linear objective function in terms of these integer decision variables.

Now see what happens when we introduce an auxiliary binary variable yij for each positive integer value of xi = j (j = 1, 2, 3), where yij has the interpretation

yij = 1   if xi = j
    = 0   otherwise.

(For example, y21 = 0, y22 = 0, and y23 = 1 mean that x2 = 3.) The resulting linear BIP model is

Maximize   Z = y11 + 3y12 + 3y13 + 2y22 + 3y23 + y31 + 2y32 + 4y33,

subject to

y11 + y12 + y13 ≤ 1
y21 + y22 + y23 ≤ 1
y31 + y32 + y33 ≤ 1
y11 + 2y12 + 3y13 + y21 + 2y22 + 3y23 + y31 + 2y32 + 3y33 = 5

and

each yij is binary.
Note that the first three functional constraints ensure that each xi will be assigned just one of its possible values. (Here yi1 + yi2 + yi3 = 0 corresponds to xi = 0, which contributes nothing to the objective function.) The last functional constraint ensures that x1 + x2 + x3 = 5. The linear objective function then gives the total profit according to Table 12.3.

Solving this BIP model gives an optimal solution of

y11 = 0,   y12 = 1,   y13 = 0,   so x1 = 2
y21 = 0,   y22 = 0,   y23 = 0,   so x2 = 0
y31 = 0,   y32 = 0,   y33 = 1,   so x3 = 3.

Another Formulation with Auxiliary Binary Variables.   We now redefine the above auxiliary binary variables yij as follows:

yij = 1   if xi ≥ j
    = 0   otherwise.

Thus, the difference is that yij = 1 now if xi ≥ j instead of xi = j. Therefore,

xi = 0  ⇒  yi1 = 0,  yi2 = 0,  yi3 = 0,
xi = 1  ⇒  yi1 = 1,  yi2 = 0,  yi3 = 0,
xi = 2  ⇒  yi1 = 1,  yi2 = 1,  yi3 = 0,
xi = 3  ⇒  yi1 = 1,  yi2 = 1,  yi3 = 1,

so

xi = yi1 + yi2 + yi3,

for i = 1, 2, 3. Because allowing yi2 = 1 is contingent upon yi1 = 1, and allowing yi3 = 1 is contingent upon yi2 = 1, these definitions are enforced by adding the constraints

yi2 ≤ yi1   and   yi3 ≤ yi2,   for i = 1, 2, 3.

The new definition of the yij also changes the objective function, as illustrated in Fig. 12.1 for the product 1 portion of the objective function. Since y11, y12, y13 provide the successive increments (if any) in the value of x1 (starting from a value of 0), the coefficients of y11, y12, y13 are given by the respective increments in the product 1 column of Table 12.3 (1 − 0 = 1, 3 − 1 = 2, 3 − 3 = 0). These increments are the slopes in Fig. 12.1, yielding 1y11 + 2y12 + 0y13 for the product 1 portion of the objective function. Note that applying this approach to all three products still must lead to a linear objective function.
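Both approaches can be checked by enumeration before writing out the full model. The sketch below (the code is mine; the data come from Table 12.3) scores every allocation of the five spots directly from the table and, separately, through the incremental coefficients just described, confirming that the two scores always agree and that (2, 0, 3) is optimal:

```python
from itertools import product

# Profit table: profit[i][j] = profit from j spots on product i (Table 12.3).
profit = [[0, 1, 3, 3],
          [0, 0, 2, 3],
          [0, 1, 2, 4]]

# Incremental coefficients for the second formulation: slopes of each column.
slopes = [[col[j + 1] - col[j] for j in range(3)] for col in profit]

best = max((x for x in product(range(4), repeat=3) if sum(x) == 5),
           key=lambda x: sum(profit[i][x[i]] for i in range(3)))
assert best == (2, 0, 3)

for x in product(range(4), repeat=3):
    # Setting y_ij = 1 iff x_i >= j automatically satisfies the
    # contingency constraints y_i2 <= y_i1 and y_i3 <= y_i2.
    direct = sum(profit[i][x[i]] for i in range(3))
    incremental = sum(slopes[i][j] for i in range(3) for j in range(x[i]))
    assert direct == incremental
```

The agreement of `direct` and `incremental` for every allocation is exactly why the second formulation's linear objective reproduces the nonproportional profits in the table.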
After we bring all variables to the left-hand side of the constraints, the resulting complete BIP model is

Maximize   Z = y11 + 2y12 + 2y22 + y23 + y31 + y32 + 2y33,

■ FIGURE 12.1 The profit from the additional sales of product 1 that would result from x1 TV spots, where the slopes of the successive segments (1, 2, and 0) give the corresponding coefficients of y11, y12, and y13 in the objective function for the second BIP formulation for Example 2 (the Supersuds Corp. problem).

subject to

y12 − y11 ≤ 0
y13 − y12 ≤ 0
y22 − y21 ≤ 0
y23 − y22 ≤ 0
y32 − y31 ≤ 0
y33 − y32 ≤ 0
y11 + y12 + y13 + y21 + y22 + y23 + y31 + y32 + y33 = 5

and

each yij is binary.

Solving this BIP model gives an optimal solution of

y11 = 1,   y12 = 1,   y13 = 0,   so x1 = 2
y21 = 0,   y22 = 0,   y23 = 0,   so x2 = 0
y31 = 1,   y32 = 1,   y33 = 1,   so x3 = 3.

There is little to choose between this BIP model and the preceding one other than personal taste. They have the same number of binary variables (the prime consideration in determining computational effort for BIP problems). They also both have some special structure (constraints for mutually exclusive alternatives in the first model and constraints for contingent decisions in the second) that can lead to speedup. The second model does have more functional constraints than the first.

EXAMPLE 3   Covering All Characteristics

SOUTHWESTERN AIRWAYS needs to assign its crews to cover all its upcoming flights. We will focus on the problem of assigning three crews based in San Francisco to the flights listed in the first column of Table 12.4. The other 12 columns show the 12 feasible sequences of flights for a crew. (The numbers in each column indicate the order of the flights.) Exactly three of the sequences need to be chosen (one per crew) in such a way that every flight is covered.
(It is permissible to have more than one crew on a flight, where the extra crews would fly as passengers, but union contracts require that the extra crews would still need to be paid for their time as if they were working.) The cost of assigning a crew to a particular sequence of flights is given (in thousands of dollars) in the bottom row of the table. The objective is to minimize the total cost of the three crew assignments that cover all the flights.

Formulation with Binary Variables. With 12 feasible sequences of flights, we have 12 yes-or-no decisions:

Should sequence j be assigned to a crew? (j = 1, 2, . . . , 12)

Therefore, we use 12 binary variables to represent these respective decisions:

xj = 1 if sequence j is assigned to a crew, and xj = 0 otherwise.

The most interesting part of this formulation is the nature of each constraint that ensures that a corresponding flight is covered. For example, consider the last flight in Table 12.4 [Seattle to Los Angeles (LA)]. Five sequences (namely, sequences 6, 9, 10, 11, and 12) include this flight. Therefore, at least one of these five sequences must be chosen. The resulting constraint is

x6 + x9 + x10 + x11 + x12 ≥ 1.

Using similar constraints for the other 10 flights, the complete BIP model is

Minimize Z = 2x1 + 3x2 + 4x3 + 6x4 + 7x5 + 5x6 + 7x7 + 8x8 + 9x9 + 9x10 + 8x11 + 9x12,

■ TABLE 12.4 Data for Example 3 (the Southwestern Airways problem)

                                       Feasible Sequence of Flights
Flight                             1   2   3   4   5   6   7   8   9  10  11  12
 1. San Francisco to Los Angeles   1           1           1           1
 2. San Francisco to Denver            1           1           1           1
 3. San Francisco to Seattle               1           1           1           1
 4. Los Angeles to Chicago                     2           2       3   2       3
 5. Los Angeles to San Francisco   2                   3               5   5
 6. Chicago to Denver                          3   3               4
 7. Chicago to Seattle                                     3   3       3   3   4
 8. Denver to San Francisco            2       4   4               5
 9. Denver to Chicago                              2           2           2
10. Seattle to San Francisco               2           4   4                   5
11. Seattle to Los Angeles                             2           2   4   4   2
Cost, $1,000's                     2   3   4   6   7   5   7   8   9   9   8   9

subject to

x1 + x4 + x7 + x10 ≥ 1          (SF to LA)
x2 + x5 + x8 + x11 ≥ 1          (SF to Denver)
x3 + x6 + x9 + x12 ≥ 1          (SF to Seattle)
x4 + x7 + x9 + x10 + x12 ≥ 1    (LA to Chicago)
x1 + x6 + x10 + x11 ≥ 1         (LA to SF)
x4 + x5 + x9 ≥ 1                (Chicago to Denver)
x7 + x8 + x10 + x11 + x12 ≥ 1   (Chicago to Seattle)
x2 + x4 + x5 + x9 ≥ 1           (Denver to SF)
x5 + x8 + x11 ≥ 1               (Denver to Chicago)
x3 + x7 + x8 + x12 ≥ 1          (Seattle to SF)
x6 + x9 + x10 + x11 + x12 ≥ 1   (Seattle to LA)

x1 + x2 + . . . + x12 = 3       (assign three crews)

and

xj is binary, for j = 1, 2, . . . , 12.

One optimal solution for this BIP model is

x3 = 1    (assign sequence 3 to a crew)
x4 = 1    (assign sequence 4 to a crew)
x11 = 1   (assign sequence 11 to a crew)

and all other xj = 0, for a total cost of $18,000. (Another optimal solution is x1 = 1, x5 = 1, x12 = 1, and all other xj = 0.)

This example illustrates a broader class of problems called set covering problems.3 Any set covering problem can be described in general terms as involving a number of potential activities (such as flight sequences) and characteristics (such as flights). Each activity possesses some but not all of the characteristics. The objective is to determine the least costly combination of activities that collectively possess (cover) each characteristic at least once. Thus, let Si be the set of all activities that possess characteristic i. At least one member of the set Si must be included among the chosen activities, so a constraint,

Σ_{j ∈ Si} xj ≥ 1,

is included for each characteristic i. A related class of problems, called set partitioning problems, changes each such constraint to

Σ_{j ∈ Si} xj = 1,

so now exactly one member of each set Si must be included among the chosen activities.
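The crew scheduling model above is small enough to verify by exhaustive enumeration: with exactly three of the 12 sequences required, there are only C(12, 3) = 220 candidate assignments to check. The sketch below is illustrative code (the coverage sets are transcribed from the flight-covering constraints); it confirms the optimal total cost of $18,000.

```python
from itertools import combinations

# cover[j] = set of flights included in sequence j+1 (from the constraints above)
cover = [
    {1, 5},              # sequence 1
    {2, 8},              # sequence 2
    {3, 10},             # sequence 3
    {1, 4, 6, 8},        # sequence 4
    {2, 6, 8, 9},        # sequence 5
    {3, 5, 11},          # sequence 6
    {1, 4, 7, 10},       # sequence 7
    {2, 7, 9, 10},       # sequence 8
    {3, 4, 6, 8, 11},    # sequence 9
    {1, 4, 5, 7, 11},    # sequence 10
    {2, 5, 7, 9, 11},    # sequence 11
    {3, 4, 7, 10, 11},   # sequence 12
]
cost = [2, 3, 4, 6, 7, 5, 7, 8, 9, 9, 8, 9]   # in $1,000s
flights = set(range(1, 12))                   # flights 1 through 11

# Cheapest choice of exactly three sequences that covers every flight.
best = min(
    (sum(cost[j] for j in combo), combo)
    for combo in combinations(range(12), 3)
    if set().union(*(cover[j] for j in combo)) == flights
)
print(best)   # minimum total cost (in $1,000s) and the chosen sequences (0-indexed)
```
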
For the crew scheduling example, this means that each flight must be included exactly once among the chosen flight sequences, which rules out having extra crews (as passengers) on any flight.

3 Strictly speaking, a set covering problem does not include any other functional constraints such as the last functional constraint in the above crew scheduling example. It also is sometimes assumed that every coefficient in the objective function being minimized equals one, and then the name weighted set covering problem is used when this assumption does not hold.

■ 12.5 SOME PERSPECTIVES ON SOLVING INTEGER PROGRAMMING PROBLEMS

It may seem that IP problems should be relatively easy to solve. After all, linear programming problems can be solved extremely efficiently, and the only difference is that IP problems have far fewer solutions to be considered. In fact, pure IP problems with a bounded feasible region are guaranteed to have just a finite number of feasible solutions. Unfortunately, there are two fallacies in this line of reasoning. One is that having a finite number of feasible solutions ensures that the problem is readily solvable. Finite numbers can be astronomically large. For example, consider the simple case of BIP problems. With n variables, there are 2^n solutions to be considered (where some of these solutions can subsequently be discarded because they violate the functional constraints). Thus, each time n is increased by 1, the number of solutions is doubled. This pattern is referred to as the exponential growth of the difficulty of the problem. With n = 10, there are more than 1,000 solutions (1,024); with n = 20, there are more than 1,000,000; with n = 30, there are more than 1 billion; and so forth.
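To put these counts in perspective, here is a quick back-of-the-envelope sketch; the assumed rate of one billion solution checks per second is illustrative, not a figure from the text.

```python
# Number of candidate solutions for a BIP with n binary variables is 2**n.
RATE = 10**9  # assumed: one billion solution checks per second (illustrative)

for n in (10, 20, 30, 50, 100):
    count = 2**n
    seconds = count / RATE
    print(f"n={n:3d}: {count:.3e} solutions, ~{seconds:.3e} s to enumerate")
```

At n = 100 the estimate is on the order of 10^21 seconds, which is why exhaustive enumeration is hopeless beyond a few dozen variables.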
Therefore, even the fastest computers are incapable of performing exhaustive enumeration (checking each solution for feasibility and, if it is feasible, calculating the value of the objective function) for BIP problems with more than a few dozen variables, let alone for general IP problems with the same number of integer variables. Fortunately, by starting with the ideas described in subsequent sections, today’s best IP algorithms are vastly superior to exhaustive enumeration. The improvement over just the past two or three decades has been dramatic. BIP problems that would have required years of computing time to solve 25 years ago now can be solved in seconds with today’s best commercial software. This huge speedup is due to great progress in three areas—dramatic improvements in BIP algorithms (as well as other IP algorithms), striking improvements in linear programming algorithms that are heavily used within the integer programming algorithms, and the great speedup in computers (including desktop computers). As a result, vastly larger BIP problems now are sometimes being solved than would have been possible in past decades. The best algorithms today are capable of solving some pure BIP problems with over a hundred thousand variables. Nevertheless, because of exponential growth, even the best algorithms cannot be guaranteed to solve every relatively small problem (fewer than a few hundred binary variables). Depending on their characteristics, certain relatively small problems can be much more difficult to solve than some much larger ones.4 When dealing with general integer variables instead of binary variables, the size of the problems that can be solved tends to be substantially smaller. However, there are exceptions. The second fallacy is that removing some feasible solutions (the noninteger ones) from a linear programming problem will make it easier to solve.
To the contrary, it is only because all these feasible solutions are there that the guarantee usually can be given (see Sec. 5.1) that there will be a corner-point feasible (CPF) solution [and so a corresponding basic feasible (BF) solution] that is optimal for the overall problem. This guarantee is the key to the remarkable efficiency of the simplex method. As a result, linear programming problems generally are considerably easier to solve than IP problems. Consequently, most successful algorithms for integer programming incorporate a linear programming algorithm, such as the simplex method (or dual simplex method), as much as they can by relating portions of the IP problem under consideration to the corresponding linear programming problem (i.e., the same problem except that the integer restriction is deleted). For any given IP problem, this corresponding linear programming

4 For information about predicting the time required to solve a particular integer programming problem, see Ozaltin, O. Y., B. Hunsaker, and A. J. Schaefer: “Predicting the Solution Time of Branch-and-Bound Algorithms for Mixed-Integer Programs,” INFORMS Journal on Computing, 23(3): 392–403, Summer 2011.

An Application Vignette

As of 2013, Taco Bell Corporation has approximately 5,600 quick-service restaurants in the United States and about 250 more in 20 other countries. It serves over 2 billion meals per year. At each Taco Bell restaurant, the amount of business is highly variable throughout the day (and from day to day), with a heavy concentration during the normal meal times. Therefore, determining how many employees should be scheduled to perform what functions in the restaurant at any given time is a complex and vexing problem. To attack this problem, Taco Bell management instructed an OR team (including several consultants) to develop a new labor-management system.
The team concluded that the system needed three major components: (1) a forecasting model for predicting customer transactions at any time, (2) a simulation model (such as those described in Chap. 20) to translate customer transactions to labor requirements, and (3) an integer programming model to schedule employees to satisfy labor requirements and minimize payroll. The integer decision variables for this integer programming model for any restaurant are the numbers of employees assigned to each of the shifts that begin at various specified times. The lengths of these shifts also are decision variables (constrained to be between minimum and maximum permissible shift lengths), but they are continuous decision variables in this case, so the model is a mixed IP model. The main constraints specify that the number of employees working during each 15-minute time interval must be greater than or equal to the minimum number required during that interval (according to the forecasting model). This MIP model is similar to the linear programming model for a personnel scheduling example that is presented in Sec. 3.4. However, the key difference is that the number of employees working shifts at Taco Bell restaurants is much smaller than for this example in Sec. 3.4 that involves over 100 employees, so it is necessary to restrict these decision variables to integer values for the Taco Bell model (whereas noninteger values in a solution for the example with over 100 employees can readily be rounded to integer values with little loss of accuracy).

The implementation of this MIP model along with the other components of the labor-management system has provided Taco Bell with documented savings of $13 million per year in labor costs.

Source: J. Hueter and W. Swart: “An Integrated Labor-Management System for Taco Bell,” Interfaces, 28(1): 75–91, Jan.–Feb. 1998. (A link to this article is provided on our website, www.mhhe.com/hillier.)

problem commonly is referred to as its LP relaxation.
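To make the term concrete, the sketch below solves the LP relaxation of a tiny transportation-type IP (illustrative data, not from the text) with SciPy's linprog. Because this problem type has the integer solutions property discussed shortly, the relaxation's optimal basic solution already happens to be integer, so no further work is needed.

```python
from scipy.optimize import linprog

# Tiny transportation problem (illustrative data): 2 sources, 2 destinations.
# Variables x = (x11, x12, x21, x22); minimize total shipping cost.
c = [1, 2, 3, 1]                      # unit shipping costs
A_eq = [
    [1, 1, 0, 0],   # supply at source 1 is 3
    [0, 0, 1, 1],   # supply at source 2 is 2
    [1, 0, 1, 0],   # demand at destination 1 is 2
    [0, 1, 0, 1],   # demand at destination 2 is 3
]
b_eq = [3, 2, 2, 3]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
assert res.status == 0
print(list(res.x), res.fun)   # the LP optimum is integer: x = (2, 1, 0, 2), cost 6
```

This particular problem has a unique optimum, so any LP solver returns the same (integer) answer; the general guarantee for minimum cost flow problems is discussed in the text.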
The algorithms presented in the next two sections illustrate how a sequence of LP relaxations for portions of an IP problem can be used to solve the overall IP problem effectively.

There is one special situation where solving an IP problem is no more difficult than solving its LP relaxation once by the simplex method, namely, when the optimal solution to the latter problem turns out to satisfy the integer restriction of the IP problem. When this situation occurs, this solution must be optimal for the IP problem as well, because it is the best solution among all the feasible solutions for the LP relaxation, which includes all the feasible solutions for the IP problem. Therefore, it is common for an IP algorithm to begin by applying the simplex method to the LP relaxation to check whether this fortuitous outcome has occurred.

Although it generally is quite fortuitous indeed for the optimal solution to the LP relaxation to be integer as well, there actually exist several special types of IP problems for which this outcome is guaranteed. You already have seen the most prominent of these special types in Chaps. 9 and 10, namely, the minimum cost flow problem (with integer parameters) and its special cases (including the transportation problem, the assignment problem, the shortest-path problem, and the maximum flow problem). This guarantee can be given for these types of problems because they possess a certain special structure (e.g., see Table 9.6) that ensures that every BF solution is integer, as stated in the integer solutions property given in Secs. 9.1 and 10.6. Consequently, these special types of IP problems can be treated as linear programming problems, because they can be solved completely by a streamlined version of the simplex method. Although this much simplification is somewhat unusual, in practice IP problems frequently have some special structure that can be exploited to simplify the problem.
(Examples 2 and 3 in the preceding section fit into this category, because of their mutually exclusive alternatives constraints or contingent decisions constraints or set-covering constraints.) Sometimes, very large versions of these problems can be solved successfully. Special-purpose algorithms designed specifically to exploit certain kinds of special structures can be very useful in integer programming. Thus, the three primary determinants of computational difficulty for an IP problem are (1) the number of integer variables, (2) whether these integer variables are binary variables or general integer variables, and (3) any special structure in the problem. This situation is in contrast to linear programming, where the number of (functional) constraints is much more important than the number of variables. In integer programming, the number of constraints is of some importance (especially if LP relaxations are being solved), but it is strictly secondary to the other three factors. In fact, there occasionally are cases where increasing the number of constraints decreases the computation time because the number of feasible solutions has been reduced. For MIP problems, it is the number of integer variables rather than the total number of variables that is important, because the continuous variables have almost no effect on the computational effort. Because IP problems are, in general, much more difficult to solve than linear programming problems, sometimes it is tempting to use the approximate procedure of simply applying the simplex method to the LP relaxation and then rounding the noninteger values to integers in the resulting solution. This approach may be adequate for some applications, especially if the values of the variables are quite large so that rounding creates relatively little error. However, you should beware of two pitfalls involved in this approach. 
One pitfall is that an optimal linear programming solution is not necessarily feasible after it is rounded. Often it is difficult to see in which way the rounding should be done to retain feasibility. It may even be necessary to change the value of some variables by one or more units after rounding. To illustrate, consider the following problem:

Maximize Z = x2,

subject to

−x1 + x2 ≤ 1/2
x1 + x2 ≤ 3 1/2

and

x1 ≥ 0, x2 ≥ 0
x1, x2 are integers.

As Fig. 12.2 shows, the optimal solution for the LP relaxation is x1 = 1 1/2, x2 = 2, but it is impossible to round the noninteger variable x1 to 1 or 2 (or any other integer) and retain feasibility. Feasibility can be retained only by also changing the integer value of x2. It is easy to imagine how such difficulties can be compounded when there are hundreds or thousands of constraints and variables.

■ FIGURE 12.2 An example of an IP problem where the optimal solution for the LP relaxation cannot be rounded in any way that retains feasibility.

Even if an optimal solution for the LP relaxation is rounded successfully, there remains another pitfall. There is no guarantee that this rounded solution will be the optimal integer solution. In fact, it may even be far from optimal in terms of the value of the objective function. This fact is illustrated by the following problem:

Maximize Z = x1 + 5x2,

subject to

x1 + 10x2 ≤ 20
x1 ≤ 2

and

x1 ≥ 0, x2 ≥ 0
x1, x2 are integers.

Because there are only two decision variables, this problem can be depicted graphically as shown in Fig. 12.3. Either the graph or the simplex method may be used to find that the optimal solution for the LP relaxation is x1 = 2, x2 = 9/5, with Z = 11.
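A quick numeric check of the first pitfall, plus a verification of the LP relaxation optimum just quoted for the second problem. This is an illustrative sketch using SciPy's linprog; the feasibility helper is a hypothetical function, not from the text.

```python
from scipy.optimize import linprog

# First problem: maximize x2 s.t. -x1 + x2 <= 1/2, x1 + x2 <= 3 1/2, x >= 0.
def feasible(x1, x2):
    return -x1 + x2 <= 0.5 and x1 + x2 <= 3.5 and x1 >= 0 and x2 >= 0

assert feasible(1.5, 2)        # the LP relaxation optimum
assert not feasible(1, 2)      # rounding x1 down loses feasibility
assert not feasible(2, 2)      # rounding x1 up loses feasibility too
assert feasible(1, 1)          # feasibility requires changing x2 as well

# Second problem: maximize x1 + 5 x2 s.t. x1 + 10 x2 <= 20, x1 <= 2, x >= 0.
# linprog minimizes, so negate the objective.
res = linprog([-1, -5], A_ub=[[1, 10], [1, 0]], b_ub=[20, 2],
              bounds=[(0, None)] * 2)
print(list(res.x), -res.fun)   # LP optimum (2, 1.8) with Z = 11, as in the text
```
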
If a graphical solution were not available (which would be the case with more decision variables), then the variable with the noninteger value x2 = 9/5 would normally be rounded in the feasible direction to x2 = 1. The resulting integer solution is x1 = 2, x2 = 1, which yields Z = 7. Notice that this solution is far from the optimal solution (x1, x2) = (0, 2), where Z = 10.

■ FIGURE 12.3 An example where rounding the optimal solution for the LP relaxation is far from optimal for the IP problem.

Because of these two pitfalls, a better approach for dealing with IP problems that are too large to be solved exactly is to use one of the available heuristic algorithms. These algorithms are extremely efficient for large problems, but they are not guaranteed to find an optimal solution. However, they do tend to be considerably more effective than the rounding approach just discussed in finding very good feasible solutions.5

5 For recent research on heuristic algorithms, see Bertsimas, D., D. A. Iancu, and D. Katz: “A New Local Search Algorithm for Binary Optimization,” INFORMS Journal on Computing, 25(2): 208–221, Spring 2013.

One of the particularly exciting developments in OR in recent years has been the rapid progress in developing very effective heuristic algorithms (commonly called metaheuristics) for various combinatorial problems such as IP problems. Three prominent types of metaheuristics (tabu search, simulated annealing, and genetic algorithms) will be described in Chap. 14. These sophisticated metaheuristics can even be applied to integer nonlinear programming problems that have locally optimal solutions that may be far removed from a globally optimal solution.
They also can be applied to various combinatorial optimization problems, which frequently can be represented in a model that has integer variables but also has some constraints that are more complicated than for an IP model. (We’ll discuss such applications further in Chap. 14.) Returning to integer linear programming, for IP problems that are small enough to be solved to optimality, a considerable number of algorithms now are available. However, no IP algorithm possesses computational efficiency that is nearly comparable to the simplex method (except on special types of problems). Therefore, developing IP algorithms has continued to be an active area of research. Fortunately, some exciting algorithmic advances have been made and additional progress can be anticipated during the coming years. These advances are discussed further in Secs. 12.8 and 12.9. The most popular traditional mode for IP algorithms is to use the branch-and-bound technique and related ideas to implicitly enumerate the feasible integer solutions, and we shall focus on this approach. The next section presents the branch-and-bound technique in a general context, and illustrates it with a basic branch-and-bound algorithm for BIP problems. Section 12.7 presents another algorithm of the same type for general MIP problems. ■ 12.6 THE BRANCH-AND-BOUND TECHNIQUE AND ITS APPLICATION TO BINARY INTEGER PROGRAMMING Because any bounded pure IP problem has only a finite number of feasible solutions, it is natural to consider using some kind of enumeration procedure for finding an optimal solution. Unfortunately, as we discussed in the preceding section, this finite number can be, and usually is, very large. Therefore, it is imperative that any enumeration procedure be cleverly structured so that only a tiny fraction of the feasible solutions actually need be examined. For example, dynamic programming (see Chap. 
11) provides one such kind of procedure for many problems having a finite number of feasible solutions (although it is not particularly efficient for most IP problems). Another such approach is provided by the branch-and-bound technique. This technique and variations of it have been applied with some success to a variety of OR problems, but it is especially well known for its application to IP problems.

The basic concept underlying the branch-and-bound technique is to divide and conquer. Since the original “large” problem is too difficult to be solved directly, it is divided into smaller and smaller subproblems until these subproblems can be conquered. The dividing (branching) is done by partitioning the entire set of feasible solutions into smaller and smaller subsets. The conquering (fathoming) is done partially by bounding how good the best solution in the subset can be and then discarding the subset if its bound indicates that it cannot possibly contain an optimal solution for the original problem.

We shall now describe in turn these three basic steps—branching, bounding, and fathoming—and illustrate them by applying a branch-and-bound algorithm to the prototype example (the California Manufacturing Co. problem) presented in Sec. 12.1 and repeated here (with the constraints numbered for later reference).

Maximize Z = 9x1 + 5x2 + 6x3 + 4x4,

subject to

(1) 6x1 + 3x2 + 5x3 + 2x4 ≤ 10
(2) x3 + x4 ≤ 1
(3) −x1 + x3 ≤ 0
(4) −x2 + x4 ≤ 0

and

(5) xj is binary, for j = 1, 2, 3, 4.

Branching

When you are dealing with binary variables, the most straightforward way to partition the set of feasible solutions into subsets is to fix the value of one of the variables (say, x1) at x1 = 0 for one subset and at x1 = 1 for the other subset. Doing this for the prototype example divides the whole problem into the two smaller subproblems shown next.
Subproblem 1: Fix x1 = 0 so the resulting subproblem reduces to

Maximize Z = 5x2 + 6x3 + 4x4,

subject to

(1) 3x2 + 5x3 + 2x4 ≤ 10
(2) x3 + x4 ≤ 1
(3) x3 ≤ 0
(4) −x2 + x4 ≤ 0
(5) xj is binary, for j = 2, 3, 4.

Subproblem 2: Fix x1 = 1 so the resulting subproblem reduces to

Maximize Z = 9 + 5x2 + 6x3 + 4x4,

subject to

(1) 3x2 + 5x3 + 2x4 ≤ 4
(2) x3 + x4 ≤ 1
(3) x3 ≤ 1
(4) −x2 + x4 ≤ 0
(5) xj is binary, for j = 2, 3, 4.

■ FIGURE 12.4 The branching tree created by the branching for the first iteration of the BIP branch-and-bound algorithm for the example in Sec. 12.1.

Figure 12.4 portrays this dividing (branching) into subproblems by a tree (defined in Sec. 10.2) with branches (arcs) from the All node (corresponding to the whole problem having all feasible solutions) to the two nodes corresponding to the two subproblems. This tree, which will continue “growing branches” iteration by iteration, is referred to as the branching tree (or solution tree or enumeration tree) for the algorithm. The variable used to do this branching at any iteration by assigning values to the variable (as with x1 above) is called the branching variable. (Sophisticated methods for selecting branching variables are an important part of most branch-and-bound algorithms but, for simplicity, we always select them in their natural order—x1, x2, . . . , xn—throughout this section.)

Later in the section you will see that one of these subproblems can be conquered (fathomed) immediately, whereas the other subproblem will need to be divided further into smaller subproblems by setting x2 = 0 or x2 = 1.

For other IP problems where the integer variables have more than two possible values, the branching can still be done by setting the branching variable at its respective individual values, thereby creating more than two new subproblems.
However, a good alternate approach is to specify a range of values (for example, xj ≤ 2 or xj ≥ 3) for the branching variable for each new subproblem. This is the approach used for the algorithm presented in Sec. 12.7.

Bounding

For each of these subproblems, we now need to obtain a bound on how good its best feasible solution can be. The standard way of doing this is to quickly solve a simpler relaxation of the subproblem. In most cases, a relaxation of a problem is obtained simply by deleting (“relaxing”) one set of constraints that had made the problem difficult to solve. For IP problems, the most troublesome constraints are those requiring the respective variables to be integer. Therefore, the most widely used relaxation is the LP relaxation that deletes this set of constraints.

To illustrate for the example, consider first the whole problem given in Sec. 12.1 (and repeated at the beginning of this section). Its LP relaxation is obtained by replacing the last line of the model (xj is binary, for j = 1, 2, 3, 4) by the following new (relaxed) version of this constraint (5):

(5) 0 ≤ xj ≤ 1, for j = 1, 2, 3, 4.

Using the simplex method to quickly solve this LP relaxation yields its optimal solution

(x1, x2, x3, x4) = (5/6, 1, 0, 1), with Z = 16 1/2.

Therefore, Z ≤ 16 1/2 for all feasible solutions for the original BIP problem (since these solutions are a subset of the feasible solutions for the LP relaxation). In fact, as indicated later in the summary of the algorithm, this bound of 16 1/2 can be rounded down to 16, because all coefficients in the objective function are integer, so all integer solutions must have an integer value for Z.

Bound for whole problem: Z ≤ 16.

Now let us obtain the bounds for the two subproblems (shown in the preceding subsection) in the same way. In both cases, the LP relaxation is obtained by replacing the last constraint (xj is binary for j = 2, 3, 4) by

(5) 0 ≤ xj ≤ 1, for j = 2, 3, 4.
Applying the simplex method then yields the optimal solutions shown next for these LP relaxations.

LP relaxation of subproblem 1: x1 = 0 and (5) 0 ≤ xj ≤ 1 for j = 2, 3, 4.
Optimal solution: (x1, x2, x3, x4) = (0, 1, 0, 1) with Z = 9.

LP relaxation of subproblem 2: x1 = 1 and (5) 0 ≤ xj ≤ 1 for j = 2, 3, 4.
Optimal solution: (x1, x2, x3, x4) = (1, 4/5, 0, 4/5) with Z = 16 1/5.

The resulting bounds for the subproblems then are

Bound for subproblem 1: Z ≤ 9,
Bound for subproblem 2: Z ≤ 16.

■ FIGURE 12.5 The results of bounding for the first iteration of the BIP branch-and-bound algorithm for the example in Sec. 12.1.

Figure 12.5 summarizes these results, where the numbers given just below the nodes are the bounds and below each bound is the optimal solution obtained for the LP relaxation.

Fathoming

A subproblem can be conquered (fathomed), and thereby dismissed from further consideration, in the three ways described below.

One way is illustrated by the results for subproblem 1 given by the x1 = 0 node in Fig. 12.5. Note that the (unique) optimal solution for its LP relaxation, (x1, x2, x3, x4) = (0, 1, 0, 1), is an integer solution. Therefore, this solution must also be the optimal solution for subproblem 1 itself. This solution should be stored as the first incumbent (the best feasible solution found so far) for the whole problem, along with its value of Z. This value is denoted by

Z* = value of Z for current incumbent,

so Z* = 9 at this point. Since this solution has been stored, there is no reason to consider subproblem 1 any further by branching from the x1 = 0 node, etc. Doing so could only lead to other feasible solutions that are inferior to the incumbent, and we have no interest in such solutions. Because it has been solved, we fathom (dismiss) subproblem 1 now.

The above results suggest a second key fathoming test.
Since Z* = 9, there is no reason to consider further any subproblem whose bound (after rounding down) is ≤ 9, since such a subproblem cannot have a feasible solution better than the incumbent. Stated more generally, a subproblem is fathomed whenever its

Bound ≤ Z*.

This outcome does not occur in the current iteration of the example because subproblem 2 has a bound of 16 that is larger than 9. However, it might occur later for descendants of this subproblem (new smaller subproblems created by branching on this subproblem, and then perhaps branching further through subsequent “generations”). Furthermore, as new incumbents with larger values of Z* are found, it will become easier to fathom in this way.

The third way of fathoming is quite straightforward. If the simplex method finds that a subproblem’s LP relaxation has no feasible solutions, then the subproblem itself must have no feasible solutions, so it can be dismissed (fathomed).

In all three cases, we are conducting our search for an optimal solution by retaining for further investigation only those subproblems that could possibly have a feasible solution better than the current incumbent.

Summary of Fathoming Tests. A subproblem is fathomed (dismissed from further consideration) if

Test 1: Its bound ≤ Z*, or
Test 2: Its LP relaxation has no feasible solutions, or
Test 3: The optimal solution for its LP relaxation is integer. (If this solution is better than the incumbent, it becomes the new incumbent, and test 1 is reapplied to all unfathomed subproblems with the new larger Z*.)

■ FIGURE 12.6 The branching tree after the first iteration of the BIP branch-and-bound algorithm for the example in Sec. 12.1.

Figure 12.6 summarizes the results of applying these three tests to subproblems 1 and 2 by showing the current branching tree.
Only subproblem 1 has been fathomed, by test 3, as indicated by F(3) next to the x1 = 0 node. The resulting incumbent also is identified below this node. The subsequent iterations will illustrate successful applications of all three tests. However, before continuing the example, we summarize the algorithm being applied to this BIP problem. (This algorithm assumes that the objective function is to be maximized, that all coefficients in the objective function are integer and, for simplicity, that the ordering of the variables for branching is x1, x2, . . . , xn. As noted previously, most branch-and-bound algorithms use sophisticated methods for selecting branching variables instead.)

Summary of the BIP Branch-and-Bound Algorithm

Initialization: Set Z* = −∞. Apply the bounding step, fathoming step, and optimality test described below to the whole problem. If not fathomed, classify this problem as the one remaining “subproblem” for performing the first full iteration below.

Steps for each iteration:
1. Branching: Among the remaining (unfathomed) subproblems, select the one that was created most recently. (Break ties according to which has the larger bound.) Branch from the node for this subproblem to create two new subproblems by fixing the next variable (the branching variable) at either 0 or 1.
2. Bounding: For each new subproblem, solve its LP relaxation to obtain an optimal solution, including the value of Z, for this LP relaxation. If this value of Z is not an integer, round it down to an integer. (If it was already an integer, no change is needed.) This integer value of Z is the bound for the subproblem.
3. Fathoming: For each new subproblem, apply the three fathoming tests summarized above, and discard those subproblems that are fathomed by any of the tests.

Optimality test: Stop when there are no remaining subproblems that have not been fathomed; the current incumbent is optimal.6 Otherwise, return to perform another iteration.
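The summary above can be turned into a short program. The sketch below is an illustrative implementation (not the book's code): it uses SciPy's linprog for the LP relaxations, fixes variables in the natural order x1, x2, . . . , xn, and takes the most recently created subproblem next via a stack. On the California Manufacturing Co. example it finds the optimal solution (1, 1, 0, 0) with Z* = 14, and the root LP relaxation gives the bound of 16 computed above.

```python
import math
from scipy.optimize import linprog

# California Manufacturing Co. example (maximize 9x1 + 5x2 + 6x3 + 4x4).
c = [9, 5, 6, 4]
A_ub = [[6, 3, 5, 2],    # (1)
        [0, 0, 1, 1],    # (2)
        [-1, 0, 1, 0],   # (3)
        [0, -1, 0, 1]]   # (4)
b_ub = [10, 1, 0, 0]
n = len(c)

def lp_bound(fixed):
    """Solve the LP relaxation with the first len(fixed) variables fixed."""
    bounds = [(v, v) for v in fixed] + [(0, 1)] * (n - len(fixed))
    res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return (None, None) if res.status != 0 else (-res.fun, list(res.x))

incumbent, z_star = None, -math.inf
stack = [()]                 # subproblems, identified by fixed 0-1 prefixes
while stack:
    fixed = stack.pop()      # most recently created subproblem first (LIFO)
    z_lp, x = lp_bound(fixed)
    if z_lp is None:
        continue             # fathomed: LP relaxation infeasible (test 2)
    bound = math.floor(z_lp + 1e-6)
    if bound <= z_star:
        continue             # fathomed: cannot beat the incumbent (test 1)
    if all(abs(v - round(v)) < 1e-6 for v in x):
        incumbent = [round(v) for v in x]  # fathomed: integer solution (test 3)
        z_star = bound
        continue
    # Branch on the next variable; the 1-branch is pushed last, so it is
    # examined next (a simple stand-in for the book's tie-breaking rule).
    stack.append(fixed + (0,))
    stack.append(fixed + (1,))

print(z_star, incumbent)
```
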
6 If there is no incumbent, the conclusion is that the problem has no feasible solutions.

The branching step for this algorithm warrants a comment as to why the subproblem to branch from is selected in this way. One option not used here (but sometimes adopted in other branch-and-bound algorithms) would have been always to select the remaining subproblem with the best bound, because this subproblem would be the most promising one to contain an optimal solution for the whole problem. The reason for instead selecting the most recently created subproblem is that LP relaxations are being solved in the bounding step. Rather than start the simplex method from scratch each time, each LP relaxation generally is solved by reoptimization in large-scale implementations of this algorithm.7 This reoptimization involves revising the final simplex tableau from the preceding LP relaxation as needed because of the few differences in the model (just as for sensitivity analysis) and then applying a few iterations of the appropriate algorithm (perhaps the dual simplex method). When dealing with very large problems, this reoptimization tends to be much faster than starting from scratch, provided the preceding and current models are closely related. The models will tend to be closely related under the branching rule used, but not when you are skipping around in the branching tree by selecting the subproblem with the best bound.

Completing the Example

The pattern for the remaining iterations will be quite similar to that for the first iteration described above except for the ways in which fathoming occurs. Therefore, we shall summarize the branching and bounding steps fairly briefly and then focus on the fathoming step.

Iteration 2. The only remaining subproblem corresponds to the x1 = 1 node in Fig. 12.6, so we shall branch from this node to create the two new subproblems given below.
Subproblem 3: Fix x1 = 1, x2 = 0 so the resulting subproblem reduces to

Maximize Z = 9 + 6x3 + 4x4,

subject to
(1) 5x3 + 2x4 ≤ 4
(2) x3 + x4 ≤ 1
(3) x3 ≤ 1
(4) x4 ≤ 0
(5) xj is binary, for j = 3, 4.

Subproblem 4: Fix x1 = 1, x2 = 1 so the resulting subproblem reduces to

Maximize Z = 14 + 6x3 + 4x4,

subject to
(1) 5x3 + 2x4 ≤ 1
(2) x3 + x4 ≤ 1
(3) x3 ≤ 1
(4) x4 ≤ 1
(5) xj is binary, for j = 3, 4.

The LP relaxations of these subproblems are obtained by using the relaxed version of constraint (5). Their optimal solutions also are shown below.

7 The reoptimization technique was first introduced in Sec. 4.7 and then applied to sensitivity analysis in Sec. 7.2. To apply it here, all of the original variables would be retained in each LP relaxation and then the constraint xj ≤ 0 would be added to fix xj = 0 and the constraint xj ≥ 1 would be added to fix xj = 1. These constraints indeed have the effect of fixing the variables in this way because the LP relaxation also includes the constraints that 0 ≤ xj ≤ 1.

LP relaxation of subproblem 3: x1 = 1, x2 = 0, and (5) 0 ≤ xj ≤ 1 for j = 3, 4.
Optimal solution: (x1, x2, x3, x4) = (1, 0, 4/5, 0), with Z = 13 4/5.

LP relaxation of subproblem 4: x1 = 1, x2 = 1, and (5) 0 ≤ xj ≤ 1 for j = 3, 4.
Optimal solution: (x1, x2, x3, x4) = (1, 1, 0, 1/2), with Z = 16.

The resulting bounds for the subproblems are

Bound for subproblem 3: Z ≤ 13,
Bound for subproblem 4: Z ≤ 16.

Note that both of these bounds are larger than Z* = 9, so fathoming test 1 fails in both cases. Test 2 also fails, since both LP relaxations have feasible solutions (as indicated by the existence of an optimal solution). Alas, test 3 fails as well, because both optimal solutions include variables with noninteger values.

Figure 12.7 shows the resulting branching tree at this point. The lack of an F to the right of either new node indicates that both remain unfathomed.

Iteration 3.
[Figure 12.7: The branching tree after iteration 2 of the BIP branch-and-bound algorithm for the example in Sec. 12.1. The x1 = 0 node remains marked F(3) with bound 9 = Z* and incumbent (0, 1, 0, 1). Branching from the x1 = 1 node (bound 16) gives the x2 = 0 node with bound 13 and LP solution (1, 0, 4/5, 0) and the x2 = 1 node with bound 16 and LP solution (1, 1, 0, 1/2).]

So far, the algorithm has created four subproblems. Subproblem 1 has been fathomed, and subproblem 2 has been replaced by (separated into) subproblems 3 and 4, but these last two remain under consideration. Because they were created simultaneously, but subproblem 4 (x1 = 1, x2 = 1) has the larger bound (16 > 13), the next branching is done from the (x1, x2) = (1, 1) node in the branching tree, which creates the following new subproblems (where constraint 3 disappears because it does not contain x4).

Subproblem 5: Fix x1 = 1, x2 = 1, x3 = 0 so the resulting subproblem reduces to

Maximize Z = 14 + 4x4,

subject to
(1) 2x4 ≤ 1
(2), (4) x4 ≤ 1 (twice)
(5) x4 is binary.

Subproblem 6: Fix x1 = 1, x2 = 1, x3 = 1 so the resulting subproblem reduces to

Maximize Z = 20 + 4x4,

subject to
(1) 2x4 ≤ −4
(2) x4 ≤ 0
(4) x4 ≤ 1
(5) x4 is binary.

The corresponding LP relaxations have the relaxed version of constraint (5), the optimal solution, and the bound (when it exists) shown below.

LP relaxation of subproblem 5: x1 = 1, x2 = 1, x3 = 0, and (5) 0 ≤ xj ≤ 1 for j = 4.
Optimal solution: (x1, x2, x3, x4) = (1, 1, 0, 1/2), with Z = 16.
Bound: Z ≤ 16.

LP relaxation of subproblem 6: x1 = 1, x2 = 1, x3 = 1, and (5) 0 ≤ xj ≤ 1 for j = 4.
Optimal solution: None, since there are no feasible solutions.
Bound: None.

For both of these subproblems, reducing these LP relaxations to one-variable problems (plus the fixed values of x1, x2, and x3) makes it easy to see that the optimal solution for the LP relaxation of subproblem 5 is indeed the one given above. Similarly, note how the combination of constraint 1 and 0 ≤ x4 ≤ 1 in the LP relaxation of subproblem 6 prevents any feasible solutions.
Therefore, this subproblem is fathomed by test 2. However, subproblem 5 fails this test, as well as test 1 (16 > 9) and test 3 (x4 = 1/2 is not integer), so it remains under consideration. We now have the branching tree shown in Fig. 12.8.

[Figure 12.8: The branching tree after iteration 3 of the BIP branch-and-bound algorithm for the example in Sec. 12.1. Branching from the x2 = 1 node (bound 16) gives the x3 = 0 node with bound 16 and LP solution (1, 1, 0, 1/2), which remains unfathomed, and the x3 = 1 node, which is fathomed.]

Iteration 4. The subproblems corresponding to nodes (1, 0) and (1, 1, 0) in Fig. 12.8 remain under consideration, but the latter node was created more recently, so it is selected for branching from next. Since the resulting branching variable x4 is the last variable, fixing its value at either 0 or 1 actually creates a single solution rather than subproblems requiring fuller investigation. These single solutions are

x4 = 0: (x1, x2, x3, x4) = (1, 1, 0, 0) is feasible, with Z = 14,
x4 = 1: (x1, x2, x3, x4) = (1, 1, 0, 1) is infeasible.

Formally applying the fathoming tests, we see that the first solution passes test 3 and the second passes test 2. Furthermore, this feasible first solution is better than the incumbent (14 > 9), so it becomes the new incumbent, with Z* = 14.

Because a new incumbent has been found, we now reapply fathoming test 1 with the new larger value of Z* to the only remaining subproblem, the one at node (1, 0).

Subproblem 3: Bound = 13 ≤ Z* = 14.

Therefore, this subproblem now is fathomed. We now have the branching tree shown in Fig. 12.9.

[Figure 12.9: The branching tree after the final (fourth) iteration of the BIP branch-and-bound algorithm for the example in Sec. 12.1. Every node now carries an F mark, and the final incumbent, (1, 1, 0, 0) with Z* = 14, is the optimal solution.]
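The bookkeeping behind these repeated checks can be collected into one schematic routine. This is only a sketch: the LP-relaxation results are passed in as plain values rather than computed, and the function names are illustrative, not from the book.

```python
def fathom(lp_feasible, bound, z_star, lp_solution_is_integer):
    """Return which fathoming test (1, 2, or 3) dismisses the subproblem,
    or None if the subproblem must be kept for further branching."""
    if not lp_feasible:
        return 2                  # test 2: LP relaxation has no feasible solutions
    if bound <= z_star:
        return 1                  # test 1: cannot beat the incumbent
    if lp_solution_is_integer:
        return 3                  # test 3: LP optimum also solves the subproblem
    return None

# Iteration 4 of the example: subproblem 3 has bound 13 against Z* = 14.
print(fathom(True, 13, 14, False))   # -> 1
```

With a new incumbent, test 1 is simply re-run on every unfathomed subproblem with the larger Z*, exactly as done for subproblem 3 above.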
Note that there are no remaining (unfathomed) subproblems. Consequently, the optimality test indicates that the current incumbent (x1, x2, x3, x4) = (1, 1, 0, 0) is optimal, so we are done.

Your OR Tutor includes another example of applying this algorithm. Also included in the IOR Tutorial is an interactive procedure for executing this algorithm. As usual, the Excel, LINGO/LINDO, and MPL/Solvers files for this chapter in your OR Courseware show how the student versions of these software packages are applied to the various examples in the chapter. The algorithms they use for BIP problems all are similar to the one described above.8

Other Options with the Branch-and-Bound Technique

This section has illustrated the branch-and-bound technique by describing a basic branch-and-bound algorithm for solving BIP problems. However, the general framework of the branch-and-bound technique provides a great deal of flexibility in how to design a specific algorithm for any given type of problem such as BIP. There are many options available, and constructing an efficient algorithm requires tailoring the specific design to fit the specific structure of the problem type.

Every branch-and-bound algorithm has the same three basic steps of branching, bounding, and fathoming. The flexibility lies in how these steps are performed.

Branching always involves selecting one remaining subproblem and dividing it into smaller subproblems. The flexibility here is found in the rules for selecting and dividing. Our BIP algorithm selected the most recently created subproblem, because this is very efficient for reoptimizing each LP relaxation from the preceding one. Selecting the subproblem with the best bound is the other most popular rule, because it tends to lead more quickly to better incumbents and so more fathoming. Combinations of the two rules also can be used.
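The two selection rules differ only in the data structure that holds the unfathomed subproblems: a stack gives newest-first, a heap keyed on the bound gives best-bound-first. A small illustration, where the subproblem records are just hypothetical (bound, name) pairs:

```python
import heapq

created = [(16, "A"), (13, "B"), (14, "C")]   # in order of creation

# Newest-first (our BIP rule): a stack; pop returns the last one created.
stack = list(created)
assert stack.pop() == (14, "C")

# Best-bound-first: a max-heap. heapq implements a min-heap, so the
# bounds are negated; popping returns the subproblem with the largest bound.
heap = [(-bound, name) for bound, name in created]
heapq.heapify(heap)
assert heapq.heappop(heap) == (-16, "A")
```

Newest-first keeps consecutive LP relaxations nearly identical (good for reoptimization), while the heap jumps to whichever subproblem looks most promising.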
The dividing typically (but not always) is done by choosing a branching variable and assigning it either individual values (e.g., our BIP algorithm) or ranges of values (e.g., the algorithm in the next section). More sophisticated algorithms generally use a rule for strategically choosing a branching variable that should tend to lead to early fathoming. This usually is considerably more efficient than the rule used by our BIP algorithm of simply selecting the branching variables in their natural order (x1, x2, . . . , xn). For example, a major drawback of this simple rule for selecting the branching variable is that if this variable has an integer value in the optimal solution for the LP relaxation of the subproblem being branched on, the next subproblem that fixes this variable at this same integer value also will have the same optimal solution for its LP relaxation, so no progress will have been made toward fathoming. Therefore, more strategic options for selecting the branching variable might do something like selecting the variable whose value in the optimal solution for the LP relaxation of the current subproblem is furthest from being an integer.

Bounding usually is done by solving a relaxation. However, there are a variety of ways to form relaxations. For example, consider the Lagrangian relaxation, where the entire set of functional constraints Ax ≤ b (in matrix notation) is deleted (except possibly for any "convenient" constraints) and then the objective function

Maximize Z = cx

is replaced by

Maximize ZR = cx − λ(Ax − b),

where the fixed vector λ ≥ 0. If x* is an optimal solution for the original problem, its Z ≤ ZR, so solving the Lagrangian relaxation for the optimal value of ZR provides a valid bound. If λ is chosen well, this bound tends to be a reasonably tight one (at least comparable to the bound from the LP relaxation). Without any functional constraints, this relaxation also can be solved extremely quickly.
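For a BIP, once every functional constraint has been moved into the objective this way, ZR decomposes variable by variable: each binary xj is simply set to 1 when its reduced coefficient cj − λ·(column j of A) is positive. The sketch below illustrates this on the Sec. 12.1 data; the particular λ shown is just an illustrative choice, not an optimized multiplier vector.

```python
def lagrangian_bound(c, A, b, lam):
    """Upper bound for a binary max problem from the Lagrangian relaxation
    ZR = (c - lam.A) x + lam.b, solved by inspection over x in {0,1}^n."""
    m, n = len(A), len(c)
    # Reduced objective coefficients: c_j - sum_i lam_i * a_ij.
    reduced = [c[j] - sum(lam[i] * A[i][j] for i in range(m)) for j in range(n)]
    # With no functional constraints left, each x_j is set independently.
    x = [1 if rj > 0 else 0 for rj in reduced]
    zr = sum(rj for rj in reduced if rj > 0) + sum(li * bi for li, bi in zip(lam, b))
    return zr, x

# Sec. 12.1 data. With lam = (1, 0, 0, 0) the bound is 18, already much
# tighter than the constraint-free bound of 24 (lam = 0); the true optimum is 14.
c = [9, 5, 6, 4]
A = [[6, 3, 5, 2], [0, 0, 1, 1], [-1, 0, 1, 0], [0, -1, 0, 1]]
b = [10, 1, 0, 0]
print(lagrangian_bound(c, A, b, [1, 0, 0, 0])[0])   # -> 18
```

Tightening the bound further is a matter of searching over λ (e.g., by subgradient methods), which is outside the scope of this sketch.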
The drawbacks are that fathoming tests 2 and 3 (revised) are not as powerful as for the LP relaxation.

8 In the professional version of LINGO, LINDO, and various MPL solvers, the BIP algorithm also uses a variety of sophisticated techniques along the lines described in Sec. 12.8.

In general terms, two features are sought in choosing a relaxation: it can be solved relatively quickly, and it provides a relatively tight bound. Neither alone is adequate. The LP relaxation is popular because it provides an excellent trade-off between these two factors. One option occasionally employed is to use a quickly solved relaxation and then, if fathoming is not achieved, to tighten the relaxation in some way to obtain a somewhat tighter bound.

Fathoming generally is done pretty much as described for the BIP algorithm. The three fathoming criteria can be stated in more general terms as follows.

Summary of Fathoming Criteria. A subproblem is fathomed if an analysis of its relaxation reveals that

Criterion 1: Feasible solutions of the subproblem must have Z ≤ Z*, or
Criterion 2: The subproblem has no feasible solutions, or
Criterion 3: An optimal solution of the subproblem has been found.

Just as for the BIP algorithm, the first two criteria usually are applied by solving the relaxation to obtain a bound for the subproblem and then checking whether this bound is ≤ Z* (test 1) or whether the relaxation has no feasible solutions (test 2). If the relaxation differs from the subproblem only by the deletion (or loosening) of some constraints, then the third criterion usually is applied by checking whether the optimal solution for the relaxation is feasible for the subproblem, in which case it must be optimal for the subproblem. For other relaxations (such as the Lagrangian relaxation), additional analysis is required to determine whether the optimal solution for the relaxation is also optimal for the subproblem.
If the original problem involves minimization rather than maximization, two options are available. One is to convert to maximization in the usual way (see Sec. 4.6). The other is to convert the branch-and-bound algorithm directly to minimization form, which requires changing the direction of the inequality for fathoming test 1 from

Is the subproblem's bound ≤ Z*?

to

Is the subproblem's bound ≥ Z*?

When using this latter inequality, if the value of Z for the optimal solution for the LP relaxation of the subproblem is not an integer, it now would be rounded up to an integer to obtain the subproblem's bound.

So far, we have described how to use the branch-and-bound technique to find only one optimal solution. However, in the case of ties for the optimal solution, it is sometimes desirable to identify all these optimal solutions so that the final choice among them can be made on the basis of intangible factors not incorporated into the mathematical model. To find them all, you need to make only a few slight alterations in the procedure. First, change the weak inequality for fathoming test 1 (Is the subproblem's bound ≤ Z*?) to a strict inequality (Is the subproblem's bound < Z*?), so that fathoming will not occur if the subproblem can have a feasible solution equal to the incumbent. Second, if fathoming test 3 passes and the optimal solution for the subproblem has Z = Z*, then store this solution as another (tied) incumbent. Third, if test 3 provides a new incumbent (tied or otherwise), then check whether the optimal solution obtained for the relaxation is unique. If it is not, then identify the other optimal solutions for the relaxation and check whether they are optimal for the subproblem as well, in which case they also become incumbents. Finally, when the optimality test finds that there are no remaining (unfathomed) subproblems, all the current incumbents will be the optimal solutions.
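The first two alterations amount to a small change in how the incumbent is stored and updated: a list of incumbents replaces the single incumbent. A sketch (the function name is hypothetical):

```python
def update_incumbents(incumbents, z_star, solution, z):
    """Keep every tied optimum: a strictly better solution resets the list,
    a tie is appended, and anything worse is ignored."""
    if z > z_star:
        return [solution], z
    if z == z_star:
        return incumbents + [solution], z_star
    return incumbents, z_star

# Fathoming test 1 must then use a STRICT inequality, bound < z_star, so a
# subproblem that could merely tie the incumbent is not dismissed.
```

The third alteration (checking whether the relaxation's optimum is unique) has no one-line analogue, since it depends on the relaxation being solved.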
512 CHAPTER 12 INTEGER PROGRAMMING Finally, note that rather than find an optimal solution, the branch-and-bound technique can be used to find a nearly optimal solution, generally with much less computational effort. For some applications, a solution is “good enough” if its Z is “close enough” to the value of Z for an optimal solution (call it Z**). Close enough can be defined in either of two ways as either Z**  K ⱕ Z or (1 ⫺ ␣)Z** ⱕ Z for a specified (positive) constant K or ␣. For example, if the second definition is chosen and ␣ ⫽ 0.05, then the solution is required to be within 5 percent of optimal. Consequently, if it were known that the value of Z for the current incumbent (Z*) satisfies either Z** ⫺ K ⱕ Z* or (1 ⫺ ␣)Z** ⱕ Z* then the procedure could be terminated immediately by choosing the incumbent as the desired nearly optimal solution. Although the procedure does not actually identify an optimal solution and the corresponding Z**, if this (unknown) solution is feasible (and so optimal) for the subproblem currently under investigation, then fathoming test 1 finds an upper bound such that Z** ⱕ bound so that either Bound ⫺ K ⱕ Z* or (1 ⫺ ␣)bound ⱕ Z* would imply that the corresponding inequality in the preceding sentence is satisfied. Even if this solution is not feasible for the current subproblem, a valid upper bound is still obtained for the value of Z for the subproblem’s optimal solution. Thus, satisfying either of these last two inequalities is sufficient to fathom this subproblem because the incumbent must be “close enough” to the subproblem’s optimal solution. Therefore, to find a solution that is close enough to being optimal, only one change is needed in the usual branch-and-bound procedure. This change is to replace the usual fathoming test 1 for a subproblem Bound ⱕ Z*? by either Bound ⫺ K ⱕ Z*? or (1 ⫺ ␣)(bound) ⱕ Z*? and then perform this test after test 3 (so that a feasible solution found with Z ⬎ Z* is still kept as the new incumbent). 
The reason this weaker test 1 suffices is that regardless of how close Z for the subproblem's (unknown) optimal solution is to the subproblem's bound, the incumbent is still close enough to this solution (if the new inequality holds) that the subproblem does not need to be considered further. When there are no remaining subproblems, the current incumbent will be the desired nearly optimal solution. However, it is much easier to fathom with this new fathoming test (in either form), so the algorithm should run much faster. For an extremely large problem, this acceleration may make the difference between finishing with a solution guaranteed to be close to optimal and never terminating. For many extremely large problems arising in practice, since the model provides only an idealized representation of the real problem anyway, finding a nearly optimal solution for the model in this way may be sufficient for all practical purposes. Therefore, this shortcut is used fairly frequently in practice.

■ 12.7 A BRANCH-AND-BOUND ALGORITHM FOR MIXED INTEGER PROGRAMMING

We shall now consider the general MIP problem, where some of the variables (say, I of them) are restricted to integer values (but not necessarily just 0 and 1) but the rest are ordinary continuous variables. For notational convenience, we shall order the variables so that the first I variables are the integer-restricted variables. Therefore, the general form of the problem being considered is

Maximize Z = Σ (j = 1 to n) cj xj,

subject to

Σ (j = 1 to n) aij xj ≤ bi,  for i = 1, 2, . . . , m,

and

xj ≥ 0,  for j = 1, 2, . . . , n,
xj is integer,  for j = 1, 2, . . . , I;  I ≤ n.

(When I = n, this problem becomes the pure IP problem.)

We shall describe a basic branch-and-bound algorithm for solving this problem that, with a variety of refinements, has provided a standard approach to MIP. The structure of this algorithm was first developed by R. J.
Dakin,9 based on a pioneering branch-and-bound algorithm by A. H. Land and A. G. Doig.10

9 R. J. Dakin, "A Tree Search Algorithm for Mixed Integer Programming Problems," Computer Journal, 8(3): 250–255, 1965.
10 A. H. Land and A. G. Doig, "An Automatic Method of Solving Discrete Programming Problems," Econometrica, 28: 497–520, 1960.

This algorithm is quite similar in structure to the BIP algorithm presented in the preceding section. Solving LP relaxations again provides the basis for both the bounding and fathoming steps. In fact, only four changes are needed in the BIP algorithm to deal with the generalizations from binary to general integer variables and from pure IP to mixed IP.

One change involves the choice of the branching variable. Before, the next variable in the natural ordering (x1, x2, . . . , xn) was chosen automatically. Now, the only variables considered are the integer-restricted variables that have a noninteger value in the optimal solution for the LP relaxation of the current subproblem. Our rule for choosing among these variables is to select the first one in the natural ordering. (Production codes generally use a more sophisticated rule.)

The second change involves the values assigned to the branching variable for creating the new smaller subproblems. Before, the binary variable was fixed at 0 and 1, respectively, for the two new subproblems. Now, the general integer-restricted variable could have a very large number of possible integer values, and it would be inefficient to create and analyze many subproblems by fixing the variable at its individual integer values. Therefore, what is done instead is to create just two new subproblems (as before) by specifying two ranges of values for the variable.

To spell out how this is done, let xj be the current branching variable, and let xj* be its (noninteger) value in the optimal solution for the LP relaxation of the current subproblem. Using square brackets to denote

[xj*] = greatest integer ≤ xj*,
[Figure 12.10: Illustration of the phenomenon of a recurring branching variable, where here x1 becomes a branching variable three times because it has a noninteger value in the optimal solution for the LP relaxation at three nodes.]

we have for the range of values for the two new subproblems

xj ≤ [xj*]    and    xj ≥ [xj*] + 1,

respectively. Each inequality becomes an additional constraint for that new subproblem. For example, if xj* = 3 1/2, then xj ≤ 3 and xj ≥ 4 are the respective additional constraints for the new subproblems.

When the two changes to the BIP algorithm described above are combined, an interesting phenomenon of a recurring branching variable can occur. To illustrate, as shown in Fig. 12.10, let j = 1 in the above example where xj* = 3 1/2, and consider the new subproblem where x1 ≤ 3. When the LP relaxation of a descendant of this subproblem is solved, suppose that x1* = 1 1/4. Then x1 recurs as the branching variable, and the two new subproblems created have the additional constraints x1 ≤ 1 and x1 ≥ 2, respectively (as well as the previous additional constraint x1 ≤ 3). Later, when the LP relaxation for a descendant of, say, the x1 ≤ 1 subproblem is solved, suppose that x1* = 3/4. Then x1 recurs again as the branching variable, and the two new subproblems created have x1 = 0 (because of the new x1 ≤ 0 constraint and the nonnegativity constraint on x1) and x1 = 1 (because of the new x1 ≥ 1 constraint and the previous x1 ≤ 1 constraint).

The third change involves the bounding step. Before, with a pure IP problem and integer coefficients in the objective function, the value of Z for the optimal solution for the subproblem's LP relaxation was rounded down to obtain the bound, because any feasible solution for the subproblem must have an integer Z.
Now, with some of the variables not integer-restricted, the bound is the value of Z without rounding down.

The fourth (and final) change to the BIP algorithm to obtain our MIP algorithm involves fathoming test 3. Before, with a pure IP problem, the test was that the optimal solution for the subproblem's LP relaxation is integer, since this ensures that the solution is feasible, and therefore optimal, for the subproblem. Now, with a mixed IP problem, the test requires only that the integer-restricted variables be integer in the optimal solution for the subproblem's LP relaxation, because this suffices to ensure that the solution is feasible, and therefore optimal, for the subproblem.

Incorporating these four changes into the summary presented in the preceding section for the BIP algorithm yields the following summary for the new algorithm for MIP.

An Application Vignette

With headquarters in Houston, Texas, Waste Management, Inc. (a Fortune 100 company) is the leading provider of comprehensive waste-management services and integrated environmental solutions in North America. With 21,000 collection and transfer vehicles, its 45,000 employees provide services to over 20 million customers throughout the United States and Canada.

The company's collection-and-transfer vehicles need to follow nearly 20,000 daily routes. With an annual operating cost of nearly $120,000 per vehicle, management wanted to have a comprehensive route-management system that would make every route as profitable and efficient as possible. Therefore, an OR team that included a number of consultants was formed to attack this problem. The heart of the route-management system developed by this team is a huge mixed BIP model that optimizes the routes assigned to the respective collection-and-transfer vehicles. Although the objective function takes several factors into account, the primary goal is the minimization of total travel time.
The main decision variables are binary variables that equal 1 if the route assigned to a particular vehicle includes a particular possible leg and equal 0 otherwise. A geographical information system (GIS) provides the data about the distance and time required to go between any two points. All of this is embedded within a Web-based Java application that is integrated with the company's other systems.

Soon after the implementation of this comprehensive route-management system, it was estimated that the system will increase the company's cash flow by $648 million over a 5-year period, largely because of savings of $498 million in operational expenses over this same period. It also is providing better customer service.

Source: S. Sahoo, S. Kim, B.-I. Kim, B. Krass, and A. Popov, Jr.: "Routing Optimization for Waste Management," Interfaces, 35(1): 24–36, Jan.–Feb. 2005. (A link to this article is provided on our website, www.mhhe.com/hillier.)

(As before, this summary assumes that the objective function is to be maximized, but the only change needed for minimization is to change the direction of the inequality for fathoming test 1.)

Summary of the MIP Branch-and-Bound Algorithm

Initialization: Set Z* = −∞. Apply the bounding step, fathoming step, and optimality test described below to the whole problem. If not fathomed, classify this problem as the one remaining subproblem for performing the first full iteration below.

Steps for each iteration:
1. Branching: Among the remaining (unfathomed) subproblems, select the one that was created most recently. (Break ties according to which has the larger bound.) Among the integer-restricted variables that have a noninteger value in the optimal solution for the LP relaxation of the subproblem, choose the first one in the natural ordering of the variables to be the branching variable. Let xj be this variable and xj* its value in this solution.
Branch from the node for the subproblem to create two new subproblems by adding the respective constraints xj ≤ [xj*] and xj ≥ [xj*] + 1.
2. Bounding: For each new subproblem, obtain its bound by applying the simplex method (or the dual simplex method when reoptimizing) to its LP relaxation and using the value of Z for the resulting optimal solution.
3. Fathoming: For each new subproblem, apply the three fathoming tests given below, and discard those subproblems that are fathomed by any of the tests.

Test 1: Its bound ≤ Z*, where Z* is the value of Z for the current incumbent.
Test 2: Its LP relaxation has no feasible solutions.
Test 3: The optimal solution for its LP relaxation has integer values for the integer-restricted variables. (If this solution is better than the incumbent, it becomes the new incumbent and test 1 is reapplied to all unfathomed subproblems with the new larger Z*.)

Optimality test: Stop when there are no remaining subproblems that are not fathomed; the current incumbent is optimal.11 Otherwise, perform another iteration.

An MIP Example. We will now illustrate this algorithm by applying it to the following MIP problem:

Maximize Z = 4x1 − 2x2 + 7x3 − x4,

subject to

x1 + 5x3 ≤ 10
x1 + x2 − x3 ≤ 1
6x1 − 5x2 ≤ 0
−x1 + 2x3 − 2x4 ≤ 3

and

xj ≥ 0, for j = 1, 2, 3, 4
xj is an integer, for j = 1, 2, 3.

Note that the number of integer-restricted variables is I = 3, so x4 is the only continuous variable.

Initialization. After setting Z* = −∞, we form the LP relaxation of this problem by deleting the set of constraints that xj is an integer for j = 1, 2, 3. Applying the simplex method to this LP relaxation yields its optimal solution below.

LP relaxation of whole problem: (x1, x2, x3, x4) = (5/4, 3/2, 7/4, 0), with Z = 14 1/4.
Because it has feasible solutions and this optimal solution has noninteger values for its integer-restricted variables, the whole problem is not fathomed, so the algorithm continues with the first full iteration below.

Iteration 1. In this optimal solution for the LP relaxation, the first integer-restricted variable that has a noninteger value is x1 = 5/4, so x1 becomes the branching variable. Branching from the All node (all feasible solutions) with this branching variable then creates the following two subproblems:

Subproblem 1: Original problem plus additional constraint x1 ≤ 1.
Subproblem 2: Original problem plus additional constraint x1 ≥ 2.

Deleting the set of integer constraints again and solving the resulting LP relaxations of these two subproblems yield the following results.

11 If there is no incumbent, the conclusion is that the problem has no feasible solutions.

Subproblem 1:
Optimal solution for LP relaxation: (x1, x2, x3, x4) = (1, 6/5, 9/5, 0), with Z = 14 1/5.
Bound: Z ≤ 14 1/5.
Subproblem 2:
LP relaxation: No feasible solutions.

[Figure 12.11: The branching tree after the first iteration of the MIP branch-and-bound algorithm for the MIP example. The All node has bound 14 1/4 and LP solution (5/4, 3/2, 7/4, 0); the x1 ≤ 1 node has bound 14 1/5 and LP solution (1, 6/5, 9/5, 0); the x1 ≥ 2 node is marked F(2).]

This outcome for subproblem 2 means that it is fathomed by test 2. However, just as for the whole problem, subproblem 1 fails all fathoming tests. These results are summarized in the branching tree shown in Fig. 12.11.

Iteration 2. With only one remaining subproblem, corresponding to the x1 ≤ 1 node in Fig. 12.11, the next branching is from this node. Examining its LP relaxation's optimal solution given above reveals that the branching variable is x2, because x2 = 6/5 is the first integer-restricted variable that has a noninteger value. Adding one of the constraints x2 ≤ 1 or x2 ≥ 2 then creates the following two new subproblems.
Subproblem 3: Original problem plus additional constraints x1 ≤ 1, x2 ≤ 1.
Subproblem 4: Original problem plus additional constraints x1 ≤ 1, x2 ≥ 2.

Solving their LP relaxations gives the following results.

Subproblem 3:
Optimal solution for LP relaxation: (x1, x2, x3, x4) = (5/6, 1, 11/6, 0), with Z = 14 1/6.
Bound: Z ≤ 14 1/6.
Subproblem 4:
Optimal solution for LP relaxation: (x1, x2, x3, x4) = (5/6, 2, 11/6, 0), with Z = 12 1/6.
Bound: Z ≤ 12 1/6.

Because both solutions exist (feasible solutions) and have noninteger values for integer-restricted variables, neither subproblem is fathomed. (Test 1 still is not operational, since Z* = −∞ until the first incumbent is found.) The branching tree at this point is given in Fig. 12.12.

[Figure 12.12: The branching tree after the second iteration of the MIP branch-and-bound algorithm for the MIP example. Branching from the x1 ≤ 1 node gives the x2 ≤ 1 node with bound 14 1/6 and LP solution (5/6, 1, 11/6, 0) and the x2 ≥ 2 node with bound 12 1/6 and LP solution (5/6, 2, 11/6, 0).]

Iteration 3. With two remaining subproblems (3 and 4) that were created simultaneously, the one with the larger bound (subproblem 3, with 14 1/6 > 12 1/6) is selected for the next branching. Because x1 = 5/6 has a noninteger value in the optimal solution for this subproblem's LP relaxation, x1 becomes the branching variable. (Note that x1 now is a recurring branching variable, since it also was chosen at iteration 1.) This leads to the following new subproblems.

Subproblem 5: Original problem plus additional constraints x1 ≤ 1, x2 ≤ 1, x1 ≤ 0 (so x1 = 0).
Subproblem 6: Original problem plus additional constraints x1 ≤ 1, x2 ≤ 1, x1 ≥ 1 (so x1 = 1).

The results from solving their LP relaxations are given below.

Subproblem 5:
Optimal solution for LP relaxation: (x1, x2, x3, x4) = (0, 0, 2, 1/2), with Z = 13 1/2.
Bound: Z ≤ 13 1/2.
Subproblem 6:
LP relaxation: No feasible solutions.

Subproblem 6 is immediately fathomed by test 2.
However, note that subproblem 5 also can be fathomed. Test 3 passes because the optimal solution for its LP relaxation has integer values (x1 = 0, x2 = 0, x3 = 2) for all three integer-restricted variables. (It does not matter that x4 = 1/2, since x4 is not integer-restricted.) This feasible solution for the original problem becomes our first incumbent:

Incumbent = (0, 0, 2, 1/2)    with Z* = 13 1/2.

■ FIGURE 12.13 The branching tree after the final (third) iteration of the MIP branch-and-bound algorithm for the MIP example.

Using this Z* to reapply fathoming test 1 to the only other subproblem (subproblem 4) is successful, because its bound satisfies 12 1/6 ≤ Z*. This iteration has succeeded in fathoming subproblems in all three possible ways. Furthermore, there now are no remaining subproblems, so the current incumbent is optimal.

Optimal solution = (0, 0, 2, 1/2)    with Z = 13 1/2.

These results are summarized by the final branching tree given in Fig. 12.13.

Another example of applying the MIP algorithm is presented in your OR Tutor. In addition, a small example (only two variables, both integer-restricted) that includes graphical displays is provided in the Solved Examples section of the book's website. The IOR Tutorial also includes an interactive procedure for executing the MIP algorithm.

■ 12.8 THE BRANCH-AND-CUT APPROACH TO SOLVING BIP PROBLEMS

Integer programming has been an especially exciting area of OR since the mid-1980s because of the dramatic progress being made in its solution methodology.

Background

To place this progress into perspective, consider the historical background. One big breakthrough had come in the 1960s and early 1970s with the development and refinement of the branch-and-bound approach.
But then the state of the art seemed to hit a plateau. Relatively small problems (well under 100 variables) could be solved very efficiently, but even a modest increase in problem size might cause an explosion in computation time beyond feasible limits. Little progress was being made in overcoming this exponential growth in computation time as the problem size was increased. Many important problems arising in practice could not be solved.

Then came the next breakthrough in the mid-1980s, with the introduction of the branch-and-cut approach to solving BIP problems. There were early reports of very large problems with as many as a couple thousand variables being solved using this approach. This created great excitement and led to intensive research and development activities to refine the approach that have continued ever since. At first, the approach was limited to pure BIP, but it soon was extended to mixed BIP, and then to MIP problems with some general integer variables as well. We will limit our description of the approach to the pure BIP case.

It is fairly common now for the branch-and-cut approach to solve some problems with many thousands of variables, and occasionally even hundreds of thousands of variables. As mentioned in Sec. 12.4, this tremendous speedup is due to huge progress in three areas: dramatic improvements in BIP algorithms by incorporating and further developing the branch-and-cut approach, striking improvements in the linear programming algorithms that are heavily used within the BIP algorithms, and the great speedup in computers (including desktop computers).

We do need to add one note of caution. This algorithmic approach cannot consistently solve all pure BIP problems with a few thousand variables, or even a few hundred variables.
The very large pure BIP problems that have been solved have sparse A matrices; i.e., the percentage of coefficients in the functional constraints that are nonzero is quite small (perhaps less than 5 percent, or even less than 1 percent). In fact, the approach depends heavily upon this sparsity. (Fortunately, this kind of sparsity is typical in large practical problems.) Furthermore, there are other important factors besides sparsity and size that affect just how difficult a given IP problem will be to solve. IP formulations of fairly substantial size should still be approached with considerable caution.

Although it would be beyond the scope and level of this book to fully describe the algorithmic approach discussed above, we will now give a brief overview. Since this overview is limited to pure BIP, all variables introduced later in this section are binary variables. The approach mainly uses a combination of three kinds of techniques: automatic problem preprocessing, the generation of cutting planes, and clever branch-and-bound techniques. You already are familiar with branch-and-bound techniques, and we will not elaborate further on the more advanced versions incorporated here. An introduction to the other two kinds of techniques is given below.

Automatic Problem Preprocessing for Pure BIP

Automatic problem preprocessing involves a "computer inspection" of the user-supplied formulation of the IP problem in order to spot reformulations that make the problem quicker to solve without eliminating any feasible solutions. These reformulations fall into three categories:

1. Fixing variables: Identify variables that can be fixed at one of their possible values (either 0 or 1) because the other value cannot possibly be part of a solution that is both feasible and optimal.
2. Eliminating redundant constraints: Identify and eliminate redundant constraints (constraints that automatically are satisfied by solutions that satisfy all the other constraints).
3.
Tightening constraints: Tighten some constraints in a way that reduces the feasible region for the LP relaxation without eliminating any feasible solutions for the BIP problem.

These categories are described in turn.

Fixing Variables. One general principle for fixing variables is the following.

If one value of a variable cannot satisfy a certain constraint, even when the other variables equal their best values for trying to satisfy the constraint, then that variable should be fixed at its other value.

(Footnote 12: As discussed briefly in Sec. 12.4, still another technique that has played a significant role in the recent progress has been the use of heuristics for quickly finding good feasible solutions.)

For example, each of the following ≤ constraints would enable us to fix x1 at x1 = 0, since x1 = 1 with the best values of the other variables (0 for a variable with a nonnegative coefficient and 1 for a variable with a negative coefficient) would violate the constraint.

3x1 ≤ 2             ⇒  x1 = 0, since 3(1) > 2.
3x1 + x2 ≤ 2        ⇒  x1 = 0, since 3(1) + 1(0) > 2.
5x1 + x2 − 2x3 ≤ 2  ⇒  x1 = 0, since 5(1) + 1(0) − 2(1) > 2.

The general procedure for checking any ≤ constraint is to identify the variable with the largest positive coefficient; if the sum of that coefficient and any negative coefficients exceeds the right-hand side, then that variable should be fixed at 0. (Once the variable has been fixed, the procedure can be repeated for the variable with the next largest positive coefficient, etc.)

An analogous procedure with ≥ constraints can enable us to fix a variable at 1 instead, as illustrated three times below:

3x1 ≥ 2             ⇒  x1 = 1, since 3(0) < 2.
3x1 + x2 ≥ 2        ⇒  x1 = 1, since 3(0) + 1(1) < 2.
3x1 + x2 − 2x3 ≥ 2  ⇒  x1 = 1, since 3(0) + 1(1) − 2(0) < 2.

A ≥ constraint also can enable us to fix a variable at 0, as illustrated next:

x1 + x2 − 2x3 ≥ 1  ⇒  x3 = 0, since 1(1) + 1(1) − 2(1) < 1.
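The fixing rules above can be sketched as a short Python routine. This is a minimal illustration rather than a production presolver, and the function name `fix_binary_vars` is our own; a ≥ constraint is handled by first multiplying it through by −1 to put it in ≤ form.

```python
def fix_binary_vars(a, b):
    """Try to fix binary variables from one constraint a[0]*x0 + ... + a[n-1]*x(n-1) <= b.

    Returns a dict {index: fixed_value}.  A >= constraint should first be
    converted to <= form by negating every coefficient and the right-hand side.
    """
    fixed = {}
    for j, aj in enumerate(a):
        # Best case for the other variables: 1 if the coefficient is negative, else 0.
        best_others = sum(min(ak, 0) for k, ak in enumerate(a) if k != j)
        if aj > 0 and aj + best_others > b:
            fixed[j] = 0      # even the best case fails with xj = 1, so xj must be 0
        elif aj < 0 and best_others > b:
            fixed[j] = 1      # even the best case fails with xj = 0, so xj must be 1
    return fixed

# Examples from the text (the >= constraints have been negated into <= form):
print(fix_binary_vars([3, 1], 2))        # 3x1 + x2 <= 2 fixes x1 = 0
print(fix_binary_vars([-3, -1, 2], -2))  # 3x1 + x2 - 2x3 >= 2 fixes x1 = 1
print(fix_binary_vars([3, -2], -1))      # 3x1 - 2x2 <= -1 fixes x1 = 0 and x2 = 1
```

The same routine reproduces the remaining single-constraint examples on this page and the next, including the one where both a 1 and a 0 are fixed from one ≥ constraint.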
The next example shows a ≥ constraint fixing one variable at 1 and another at 0.

3x1 + x2 − 3x3 ≥ 2  ⇒  x1 = 1, since 3(0) + 1(1) − 3(0) < 2,
and                    x3 = 0, since 3(1) + 1(1) − 3(1) < 2.

Similarly, a ≤ constraint with a negative right-hand side can result in either 0 or 1 becoming the fixed value of a variable. For example, both happen with the following constraint:

3x1 − 2x2 ≤ −1  ⇒  x1 = 0, since 3(1) − 2(1) > −1,
and                x2 = 1, since 3(0) − 2(0) > −1.

Fixing a variable from one constraint can sometimes generate a chain reaction of then being able to fix other variables from other constraints. For example, look at what happens with the following three constraints:

3x1 + x2 − 2x3 ≥ 2   ⇒  x1 = 1 (as above).
Then x1 + x4 + x5 ≤ 1  ⇒  x4 = 0, x5 = 0.
Then −x5 + x6 ≤ 0      ⇒  x6 = 0.

In some cases, it is possible to combine one or more mutually exclusive alternatives constraints with another constraint to fix a variable, as illustrated below:

8x1 − 4x2 − 5x3 + 3x4 ≤ 2 and x2 + x3 ≤ 1  ⇒  x1 = 0, since 8(1) − max{4, 5}(1) + 3(0) > 2.

There are additional techniques for fixing variables, including some involving optimality considerations, but we will not delve further into this topic.

Fixing variables can have a dramatic impact on reducing the size of a problem. It is not unusual to eliminate over half of the problem's variables from further consideration.

Eliminating Redundant Constraints. Here is one easy way to detect a redundant constraint:

If a functional constraint is satisfied by even the most challenging binary solution, then it has been made redundant by the binary constraints and can be eliminated from further consideration.

For a ≤ constraint, the most challenging binary solution has variables equal to 1 when they have nonnegative coefficients and other variables equal to 0. (Reverse these values for a ≥ constraint.) Some examples are given below:

3x1 + 2x2 ≤ 6  is redundant, since 3(1) + 2(1) ≤ 6.
3x1 − 2x2 ≤ 3   is redundant, since 3(1) − 2(0) ≤ 3.
3x1 − 2x2 ≥ −3  is redundant, since 3(0) − 2(1) ≥ −3.

In most cases where a constraint has been identified as redundant, it was not redundant in the original model but became so after some variables were fixed. Of the 11 examples of fixing variables given above, all but the last one left a constraint that then was redundant.

Tightening Constraints. (Footnote 13: Also commonly called coefficient reduction.) Consider the following problem.

Maximize Z = 3x1 + 2x2,

subject to

2x1 + 3x2 ≤ 4

and

x1, x2 binary.

This BIP problem has just three feasible solutions, (0, 0), (1, 0), and (0, 1), where the optimal solution is (1, 0) with Z = 3. The feasible region for the LP relaxation of this problem is shown in Fig. 12.14. The optimal solution for this LP relaxation is (1, 2/3) with Z = 4 1/3, which is not very close to the optimal solution for the BIP problem. A branch-and-bound algorithm would have some work to do to identify the optimal BIP solution.

■ FIGURE 12.14 The LP relaxation (including its feasible region and optimal solution) for the BIP example used to illustrate tightening a constraint.

Now look what happens when the functional constraint 2x1 + 3x2 ≤ 4 is replaced by x1 + x2 ≤ 1. The feasible solutions for the BIP problem remain exactly the same, (0, 0), (1, 0), and (0, 1), so the optimal solution still is (1, 0). However, the feasible region for the LP relaxation has been greatly reduced, as shown in Fig. 12.15.

■ FIGURE 12.15 The LP relaxation after tightening the constraint 2x1 + 3x2 ≤ 4 to x1 + x2 ≤ 1 for the example of Fig. 12.14.
In fact, this feasible region has been reduced so much that the optimal solution for the LP relaxation now is (1, 0), so the optimal solution for the BIP problem has been found without any additional work being needed.

This is an example of tightening a constraint in a way that reduces the feasible region for the LP relaxation without eliminating any feasible solutions for the BIP problem. It was easy to do for this tiny two-variable problem that could be displayed graphically. However, by applying the same principles, the following algebraic procedure can be used to tighten any ≤ constraint with any number of variables without eliminating any feasible BIP solutions.

Procedure for Tightening a ≤ Constraint

Denote the constraint by a1x1 + a2x2 + . . . + anxn ≤ b.

1. Calculate S = sum of the positive aj.
2. Identify any aj ≠ 0 such that S < b + |aj|.
   (a) If none, stop; the constraint cannot be tightened further.
   (b) If aj > 0, go to step 3.
   (c) If aj < 0, go to step 4.
3. (aj > 0) Calculate āj = S − b and b̄ = S − aj. Reset aj = āj and b = b̄. Return to step 1.
4. (aj < 0) Increase aj to aj = b − S. Return to step 1.

Applying this procedure to the functional constraint in the above example flows as follows. The constraint is 2x1 + 3x2 ≤ 4 (a1 = 2, a2 = 3, b = 4).

1. S = 2 + 3 = 5.
2. a1 satisfies S < b + a1, since 5 < 4 + 2. Also a2 satisfies S < b + a2, since 5 < 4 + 3. Choose a1 arbitrarily.
3. ā1 = 5 − 4 = 1 and b̄ = 5 − 2 = 3, so reset a1 = 1 and b = 3. The new tighter constraint is
   x1 + 3x2 ≤ 3 (a1 = 1, a2 = 3, b = 3).
1. S = 1 + 3 = 4.
2. a2 satisfies S < b + a2, since 4 < 3 + 3.
3. ā2 = 4 − 3 = 1 and b̄ = 4 − 3 = 1, so reset a2 = 1 and b = 1. The new tighter constraint is
   x1 + x2 ≤ 1 (a1 = 1, a2 = 1, b = 1).
1. S = 1 + 1 = 2.
2. No aj ≠ 0 satisfies S < b + |aj|, so stop; x1 + x2 ≤ 1 is the desired tightened constraint.
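The steps of this procedure translate directly into a small Python function. This is a sketch under the assumption that ties in step 2 are broken by taking the first eligible coefficient, just as the worked example chose a1 first:

```python
def tighten_le(a, b):
    """Repeatedly tighten the constraint a[0]*x0 + ... + a[n-1]*x(n-1) <= b
    over binary variables, following steps 1-4 of the procedure."""
    a = list(a)
    while True:
        S = sum(aj for aj in a if aj > 0)           # step 1
        for j, aj in enumerate(a):
            if aj != 0 and S < b + abs(aj):          # step 2: eligible coefficient
                if aj > 0:                           # step 3: reset aj and b
                    a[j], b = S - b, S - aj
                else:                                # step 4: increase negative aj
                    a[j] = b - S
                break
        else:
            return a, b                              # no eligible aj: stop

# The worked example: 2x1 + 3x2 <= 4 tightens to x1 + x2 <= 1.
print(tighten_le([2, 3], 4))         # ([1, 1], 1)
# The four-variable example on the next page: 4x1 - 3x2 + x3 + 2x4 <= 5.
print(tighten_le([4, -3, 1, 2], 5))  # ([2, -2, 1, 2], 3)
```

Note that in step 3 both updates use the old values of aj and b, which the simultaneous tuple assignment preserves.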
If the first execution of step 2 in the above example had chosen a2 instead, then the first tighter constraint would have been 2x1 + x2 ≤ 2. The next series of steps again would have led to x1 + x2 ≤ 1. In the next example, the procedure tightens the constraint on the left to become the middle one and then tightens further to obtain the one on the right:

4x1 − 3x2 + x3 + 2x4 ≤ 5  ⇒  2x1 − 3x2 + x3 + 2x4 ≤ 3  ⇒  2x1 − 2x2 + x3 + 2x4 ≤ 3.

(Problem 12.8-5 asks you to apply the procedure to confirm these results.) A constraint in ≥ form can be converted to ≤ form (by multiplying through both sides by −1) in order to apply this procedure directly.

Generating Cutting Planes for Pure BIP

A cutting plane (or cut) for any IP problem is a new functional constraint that reduces the feasible region for the LP relaxation without eliminating any feasible solutions for the IP problem. In fact, you have just seen one way of generating cutting planes for pure BIP problems, namely, applying the above procedure for tightening constraints. Thus, x1 + x2 ≤ 1 is a cutting plane for the BIP problem considered in Fig. 12.14, which leads to the reduced feasible region for the LP relaxation shown in Fig. 12.15.

In addition to this procedure, a number of other techniques have been developed for generating cutting planes that tend to accelerate how quickly a branch-and-bound algorithm can find an optimal solution for a pure BIP problem. We will focus on just one of these techniques.

To illustrate this technique, consider the California Manufacturing Co. pure BIP problem presented in Sec. 12.1 and used to illustrate the BIP branch-and-bound algorithm in Sec. 12.6. The optimal solution for its LP relaxation is given in Fig. 12.5 as (x1, x2, x3, x4) = (5/6, 1, 0, 1). One of the functional constraints is

6x1 + 3x2 + 5x3 + 2x4 ≤ 10.

Now note that the binary constraints and this constraint together imply that

x1 + x2 + x4 ≤ 2.

This new constraint is a cutting plane.
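Because only 16 binary vectors are involved, the claim that the binary constraints and 6x1 + 3x2 + 5x3 + 2x4 ≤ 10 together imply x1 + x2 + x4 ≤ 2 can be checked by brute force, and the minimal covers of a ≤ constraint (defined formally on the next page) can be enumerated the same way. A small sketch, with helper names of our own choosing:

```python
from itertools import combinations, product

a = [6, 3, 5, 2]
b = 10

# Every binary solution of the original constraint also satisfies the cut.
for x in product([0, 1], repeat=4):
    if sum(ai * xi for ai, xi in zip(a, x)) <= b:
        assert x[0] + x[1] + x[3] <= 2   # the cutting plane x1 + x2 + x4 <= 2

def minimal_covers(a, b):
    """Groups of variables that violate the constraint when all set to 1
    (others 0), but satisfy it again if any single member is reset to 0."""
    covers = []
    for r in range(1, len(a) + 1):
        for group in combinations(range(len(a)), r):
            if sum(a[j] for j in group) > b and \
               all(sum(a[j] for j in group if j != k) <= b for k in group):
                covers.append(set(group))
    return covers

print(minimal_covers(a, b))   # [{0, 2}, {0, 1, 3}], i.e., {x1, x3} and {x1, x2, x4}
```

The two covers found correspond exactly to the two cutting planes derived from this constraint in the text.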
It eliminates part of the feasible region for the LP relaxation, including what had been the optimal solution, (5/6, 1, 0, 1), but it does not eliminate any feasible integer solutions. Adding just this one cutting plane to the original model would improve the performance of the BIP branch-and-bound algorithm of Sec. 12.6 (see Fig. 12.9) in two ways. First, the optimal solution for the new (tighter) LP relaxation would be (1, 1, 1/5, 0), with Z = 15 1/5, so the bounds for the All node, the x1 = 1 node, and the (x1, x2) = (1, 1) node now would be 15 instead of 16. Second, one less iteration would be needed because the optimal solution for the LP relaxation at the (x1, x2, x3) = (1, 1, 0) node now would be (1, 1, 0, 0), which provides a new incumbent with Z* = 14. Therefore, on the third iteration (see Fig. 12.8), this node would be fathomed by test 3, and the (x1, x2) = (1, 0) node would be fathomed by test 1, thereby revealing that this incumbent is the optimal solution for the original BIP problem.

Here is the general procedure used to generate this cutting plane.

A Procedure for Generating Cutting Planes

1. Consider any functional constraint in ≤ form with only nonnegative coefficients.
2. Find a group of variables (called a minimum cover of the constraint) such that
   (a) The constraint is violated if every variable in the group equals 1 and all other variables equal 0.
   (b) But the constraint becomes satisfied if the value of any one of these variables is changed from 1 to 0.
3. Letting N denote the number of variables in the group, the resulting cutting plane has the form

   Sum of variables in group ≤ N − 1.

Applying this procedure to the constraint 6x1 + 3x2 + 5x3 + 2x4 ≤ 10, we see that the group of variables {x1, x2, x4} is a minimal cover because
(a) (1, 1, 0, 1) violates the constraint.
(b) But the constraint becomes satisfied if the value of any one of these three variables is changed from 1 to 0.
Since N = 3 in this case, the resulting cutting plane is x1 + x2 + x4 ≤ 2. This same constraint also has a second minimal cover {x1, x3}, since (1, 0, 1, 0) violates the constraint but both (0, 0, 1, 0) and (1, 0, 0, 0) satisfy it. Therefore, x1 + x3 ≤ 1 is another valid cutting plane.

The branch-and-cut approach involves generating many cutting planes in a similar manner before then applying clever branch-and-bound techniques. The results of including the cutting planes can be quite dramatic in tightening the LP relaxations. In some cases, the gap between Z for the optimal solution for the LP relaxation of the whole BIP problem and Z for this problem's optimal solution is reduced by as much as 98 percent.

Ironically, the very first algorithms developed for integer programming, including Ralph Gomory's celebrated algorithm announced in 1958, relied solely on cutting planes (generated in a different way), but this approach proved to be unsatisfactory in practice (except for special classes of problems). We now know that judiciously combining cutting planes and branch-and-bound techniques (along with automatic problem preprocessing) provides a powerful algorithmic approach for solving large-scale BIP problems. This is one reason that the name branch-and-cut algorithm has been given to this approach.

■ 12.9 THE INCORPORATION OF CONSTRAINT PROGRAMMING

No presentation of the basic ideas of integer programming is complete these days without introducing an exciting, relatively recent development, the incorporation of the techniques of constraint programming, which is promising to greatly expand our ability to formulate and solve integer programming models. (These same techniques also are beginning to be used in related areas of mathematical programming, especially combinatorial optimization, but we will limit our discussion to their central use in integer programming.)
The Nature of Constraint Programming

In the mid-1980s, researchers in the computer science community began to develop constraint programming by combining ideas in artificial intelligence with the development of computer programming languages. The goal was to have a flexible computer programming system that would include both variables and constraints on their values, while also allowing the description of search procedures that would generate feasible values of the variables. Each variable has a domain of possible values, e.g., {2, 4, 6, 8, 10}. Rather than being limited to the types of mathematical constraints used in mathematical programming, there is great flexibility in how to state the constraints. In particular, the constraints can be any of the following types:

1. Mathematical constraints, e.g., x + y < z.
2. Disjunctive constraints, e.g., the times of certain tasks in the problem being modeled cannot overlap.
3. Relational constraints, e.g., at least three tasks should be assigned to a certain machine.
4. Explicit constraints, e.g., although both x and y have domains {1, 2, 3, 4, 5}, (x, y) must be (1, 1), (2, 3), or (4, 5).
5. Unary constraints, e.g., z is an integer between 5 and 10.
6. Logical constraints, e.g., if x is 5, then y is between 6 and 8.

When expressing these kinds of constraints, constraint programming allows the use of various standard logic functions, such as IF, AND, OR, NOT, and so on. Excel includes many of the same logic functions. LINGO now supports all the standard logic functions and can use its global optimizer to find a globally optimal solution.

To illustrate the algorithms that constraint programming uses to generate feasible solutions, suppose that a problem has four variables, x1, x2, x3, x4, and that their domains are

x1 ∈ {1, 2},  x2 ∈ {1, 2},  x3 ∈ {1, 2, 3},  x4 ∈ {1, 2, 3, 4, 5},

where the symbol ∈ signifies that the variable on the left belongs to the set on the right.
Suppose also that the constraints are

(1) All these variables must have different values;
(2) x1 + x3 = 4.

By straightforward logic, since the values of 1 and 2 must be reserved for x1 and x2, the first constraint immediately implies that x3 ∈ {3}, which then implies that x4 ∈ {4, 5}. (This process of eliminating possible values for variables is referred to as domain reduction.) Next, since the domain of x3 has been changed, the process of constraint propagation applies the second constraint to imply that x1 ∈ {1}. This again triggers the first constraint, so that x2 ∈ {2}. Hence,

x1 ∈ {1},  x2 ∈ {2},  x3 ∈ {3},  x4 ∈ {4, 5}

lists the only feasible solutions for the problem. This kind of feasibility reasoning, based on alternating between the application of domain reduction and constraint propagation algorithms, is a key part of constraint programming.

After the application of the constraint propagation and domain reduction algorithms to a problem, a search procedure is used to find complete feasible solutions. In the example above, since the domains of all the variables except x4 have been reduced to a single value, the search procedure would simply try the values x4 = 4 and x4 = 5 to determine the complete feasible solutions for the problem. However, for a problem with many constraints and variables, the constraint propagation and domain reduction algorithms typically do not reduce the domain of each variable to a single value. It is therefore necessary to write a search procedure that will try different assignments of values to the variables. As these assignments are tried, the constraint propagation algorithm is triggered and further domain reduction occurs. The process creates a search tree, which is similar to the branching tree created when the branch-and-bound technique is applied in integer programming.
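The domain-reduction reasoning in this four-variable example can be mimicked by a tiny fixed-point loop. This is only a sketch: the all-different pruning here uses a brute-force Hall-set check (if some k variables share a union of exactly k values, those values can go to no other variable), whereas real constraint programming systems use far more efficient propagation algorithms.

```python
from itertools import combinations

def propagate(doms):
    """Alternate all-different pruning with propagation of x1 + x3 = 4
    until no domain changes (a fixed point)."""
    doms = {k: set(v) for k, v in doms.items()}
    changed = True
    while changed:
        changed = False
        # All-different: Hall-set pruning over every subset of variables.
        names = list(doms)
        for r in range(1, len(names)):
            for group in combinations(names, r):
                union = set().union(*(doms[v] for v in group))
                if len(union) == r:
                    for v in names:
                        if v not in group and doms[v] & union:
                            doms[v] -= union
                            changed = True
        # Constraint propagation for x1 + x3 = 4.
        new1 = {v for v in doms[1] if any(v + w == 4 for w in doms[3])}
        new3 = {w for w in doms[3] if any(v + w == 4 for v in doms[1])}
        if new1 != doms[1] or new3 != doms[3]:
            doms[1], doms[3] = new1, new3
            changed = True
    return doms

print(propagate({1: {1, 2}, 2: {1, 2}, 3: {1, 2, 3}, 4: {1, 2, 3, 4, 5}}))
# {1: {1}, 2: {2}, 3: {3}, 4: {4, 5}}
```

The loop reproduces the chain of deductions in the text: the pair {x1, x2} reserves the values {1, 2}, forcing x3 = 3, then x1 + x3 = 4 forces x1 = 1, which in turn forces x2 = 2, leaving only x4 ∈ {4, 5}.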
The overall process of applying constraint programming to complicated IP problems (or related problems) involves the following three steps:

1. Formulate a compact model for the problem by using a variety of constraint types (most of which do not fit the format of integer programming).
2. Efficiently find feasible solutions that satisfy all these constraints.
3. Search among these feasible solutions for an optimal solution.

The power of constraint programming lies in its great ability to perform the first two steps rather than the third, whereas the main strength of integer programming and its algorithms lies in performing the third step. Thus, constraint programming is ideally suited for a highly constrained problem that has no objective function, so that the only goal is to find a feasible solution. However, it also can be extended to the third step. One method of doing so is to enumerate the feasible solutions and calculate the value of the objective function for each one. However, this would be extremely inefficient for problems with numerous feasible solutions. To circumvent this drawback, the common approach is to add a constraint that tightly bounds the objective function to values that are very near to what is anticipated for an optimal solution. For example, if the objective is to maximize the objective function and its value Z is anticipated to be approximately Z = 10 for an optimal solution, one might add the constraint that Z ≥ 9, so that the only remaining feasible solutions to be enumerated are those that are very close to being optimal. Each time a new best solution is found during the search, the bound on Z can be further tightened to consider only feasible solutions that are at least as good as the current best solution.
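This bound-tightening idea can be made concrete with a minimal sketch that reuses the four-variable example from the preceding discussion; the objective Z = x1 + x2 + x3 + x4 and the initial anticipated bound of 0 are our own additions, chosen purely for illustration.

```python
from itertools import product

DOMAINS = [(1, 2), (1, 2), (1, 2, 3), (1, 2, 3, 4, 5)]

def first_feasible(z_lower):
    """Return the first solution satisfying all-different, x1 + x3 = 4,
    and the added objective-bound constraint Z >= z_lower (or None)."""
    for x in product(*DOMAINS):
        if len(set(x)) == 4 and x[0] + x[2] == 4 and sum(x) >= z_lower:
            return x
    return None

incumbent, z_lower = None, 0
while True:
    x = first_feasible(z_lower)
    if x is None:
        break                    # no better solution exists: incumbent is optimal
    incumbent = x
    z_lower = sum(x) + 1         # tighten the bound past the new best solution

print(incumbent, sum(incumbent))   # (1, 2, 3, 5) 11
```

Each pass adds a tighter bound constraint, so the search only ever has to find solutions at least as good as the current best, exactly as described above.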
Although this is a reasonable approach to the third step, a more attractive approach would be to integrate constraint programming and integer programming so that each is mainly used where it is strongest: steps 1 and 2 with constraint programming and step 3 with integer programming. This is part of the potential of constraint programming described next.

The Potential of Constraint Programming

In the 1990s, constraint programming features, including powerful constraint-solving algorithms, were successfully incorporated into a number of general-purpose programming languages, as well as several special-purpose programming languages. This brought computer science closer and closer to the Holy Grail of computer programming, namely, allowing the user to simply state the problem and then have the computer solve it.

As word of this exciting development began to spread beyond the computer science community, researchers in operations research began to realize the great potential of integrating constraint programming with the traditional techniques of integer programming (and other areas of mathematical programming as well). The much greater flexibility in expressing the constraints of the problem should greatly increase the ability to formulate valid models for complex problems. It also should lead to much more compact and straightforward formulations. In addition, by reducing the size of the feasible region that needs to be considered while efficiently finding solutions within this region, the constraint-solving algorithms of constraint programming might help accelerate the progress of integer programming algorithms in finding an optimal solution.

Because of their substantial differences, integrating constraint programming with integer programming is a very difficult task.
Since integer programming does not recognize most of the constraints of constraint programming, this integration requires developing computer-implemented procedures for translating from the language of constraint programming to the language of integer programming, and vice versa. Good progress is being made, but this undoubtedly will continue to be one of the most active areas of OR research for some years to come.

To illustrate the way in which constraint programming can greatly simplify the formulation of integer programming models, we now will introduce two of the most important "global constraints" of constraint programming. A global constraint is a constraint that succinctly expresses a global pattern in the allowable relationship between multiple variables. Therefore, a single global constraint often can replace what used to require a large number of traditional integer programming constraints, while also making the model considerably more readable. To clarify the presentation, we will use very simple examples that don't require the use of constraint programming to illustrate global constraints, but these same types of constraints also can readily be used for some much more complicated problems.

The All-Different Constraint

The all-different global constraint simply specifies that all the variables in a given set must have different values. If x1, x2, . . . , xn are the variables involved, the constraint can be written succinctly as

all-different (x1, x2, . . . , xn),

while also specifying the domains of the individual variables in the model. (These domains collectively need to include at least n different values in order to enforce the all-different constraint.)

To illustrate this constraint, consider the classical assignment problem presented in Sec. 9.3. Recall that this problem involves assigning n assignees to n tasks on a one-to-one basis so as to minimize the total cost of these assignments.
Although the assignment problem is a particularly easy one to solve (as described in Sec. 9.4), it nicely illustrates how the all-different constraint can greatly simplify the formulation of the model. With the traditional formulation presented in Sec. 9.3, the decision variables are the binary variables

xij = 1 if assignee i performs task j, and xij = 0 if not, for i, j = 1, 2, . . . , n.

Ignoring the objective function for now, the functional constraints are the following. Each assignee i is to be assigned to exactly one task:

xi1 + xi2 + . . . + xin = 1,  for i = 1, 2, . . . , n.

Each task j is to be performed by exactly one assignee:

x1j + x2j + . . . + xnj = 1,  for j = 1, 2, . . . , n.

Thus, there are n² variables and 2n functional constraints. Now let us look at the much smaller model that constraint programming can provide. In this case, the variables are

yi = task to which assignee i is assigned,  for i = 1, 2, . . . , n.

There are n tasks, numbered 1, 2, . . . , n, so each of the yi variables has the domain {1, 2, . . . , n}. Since all the assignees must be assigned different tasks, this restriction on the variables is precisely described by the single global constraint

all-different (y1, y2, . . . , yn).

Therefore, rather than n² variables and 2n functional constraints, this complete constraint programming model (excluding the objective function) has only n variables and a single constraint (plus one domain for all the variables). Now let us see how the next global constraint enables incorporating the objective function into this tiny model as well.

The Element Constraint

The element global constraint is most commonly used to look up a cost or profit associated with an integer variable. In particular, suppose that a variable y has domain {1, 2, . . . , n} and that the cost associated with each of these values is c1, c2, . . . , cn, respectively. Then the constraint element (y, [c1, c2, . . .
, cn], z) constrains the variable z to equal the yth constant in the list [c1, c2, . . . , cn]; in other words, z = cy. This variable z can now be included in the objective function to provide the cost associated with y.

To illustrate the use of the element constraint, consider the assignment problem again and let

cij = cost of assigning assignee i to task j,  for i, j = 1, 2, . . . , n.

The complete constraint programming model (including the objective function) for this problem is

Minimize Z = z1 + z2 + . . . + zn,

subject to

element (yi, [ci1, ci2, . . . , cin], zi),  for i = 1, 2, . . . , n,
all-different (y1, y2, . . . , yn),
yi ∈ {1, 2, . . . , n},  for i = 1, 2, . . . , n.

This complete model has 2n variables and (n + 1) constraints (plus the one domain for all the variables), which still is far smaller than the traditional integer programming formulation presented in Sec. 9.3. For example, when n = 100, this model has 200 variables and 101 constraints, whereas the traditional integer programming model has 10,000 variables and 200 functional constraints.

As an additional example, reconsider Example 2 (Violating Proportionality) presented in Sec. 12.4. In this case, the original decision variables are

xj = number of TV spots allocated to product j,  for j = 1, 2, 3,

where a total of five TV spots are to be allocated to the three products. However, because the profits given in Table 12.3 for different values of each xj are not proportional to xj, Sec. 12.4 formulates two alternative integer programming models with auxiliary binary variables for this problem. Both models are fairly complicated. A constraint programming model that uses the element constraint is much more straightforward. For example, the profit for Product 1 given in Table 12.3 is 0, 1, 3, and 3 for x1 = 0, 1, 2, and 3, respectively. Therefore, this profit is simply z1 when the value of z1 is given by the constraint element (x1 + 1, [0, 1, 3, 3], z1).
(The first component is x_1 + 1 instead of x_1 because x_1 + 1 = 1, 2, 3, or 4, and it is the value of this component that indicates the choice of position 1, 2, 3, or 4 in the list [0, 1, 3, 3].) Proceeding in the same way for the other two products, the complete model is

Maximize Z = z_1 + z_2 + z_3,

subject to

element(x_1 + 1, [0, 1, 3, 3], z_1),
element(x_2 + 1, [0, 0, 2, 3], z_2),
element(x_3 + 1, [0, 1, 2, 4], z_3),
x_1 + x_2 + x_3 = 5,
x_j ∈ {0, 1, 2, 3}, for j = 1, 2, 3.

Now compare this model to the two integer programming models for the same problem in Sec. 12.4. Note how the use of element constraints provides a considerably more compact and transparent model.
The all-different and element constraints are but two of the various available global constraints (Selected Reference 5 describes nearly 40), but they nicely illustrate the power of constraint programming to provide a compact and readable model of a complex problem.

Current Research

Current research in integrating constraint programming and integer programming is moving along several parallel paths. For example, the most straightforward approach is to simultaneously use both a constraint programming model and an integer programming model to represent complementary parts of a problem. Thus, each relevant constraint is included in whichever model it fits or, when feasible, in both models. As a constraint programming algorithm and an integer programming algorithm are applied to the respective models, information is passed back and forth to focus the search on the feasible solutions (those that satisfy the constraints of both models). This kind of double modeling scheme can be implemented with the Optimization Programming Language (OPL) that is incorporated into the OPL-CPLEX Development System.
After employing the OPL modeling language, the OPL-CPLEX Development System can invoke both a constraint programming algorithm (CP Optimizer) and a mathematical programming solver (CPLEX) and then pass some information from one to the other.
Although double modeling is a good first step, the goal is to fully integrate constraint programming and integer programming so that a single hybrid model and a single algorithm can be used. It is this kind of seamless integration that will be able to fully provide the complementary strengths of both techniques. Although fully achieving this goal remains a formidable research challenge, good progress continues to be made in this direction. Selected Reference 5 describes the current state of the art in this area.
Even at this early stage, there already have been numerous successful applications of the merger of mathematical programming and constraint programming. The areas of application include network design, vehicle routing, crew rostering, the classical transportation problem with piecewise linear costs, inventory management, computer graphics, software engineering, databases, finance, engineering, and combinatorial optimization, among others. In addition, Selected Reference 3 describes how scheduling is proving to be a particularly fruitful area for the application of constraint programming. For example, because of the many complicated scheduling constraints involved, constraint programming has been used to determine the regular-season schedule for the National Football League in the United States. These applications only begin to tap the potential of integrating constraint programming and integer programming. Further progress in completing this integration promises to open up many exciting new opportunities for important applications.

■ 12.10 CONCLUSIONS

IP problems arise frequently because some or all of the decision variables must be restricted to integer values.
There also are many applications involving yes-or-no decisions (including combinatorial relationships expressible in terms of such decisions) that can be represented by binary (0–1) variables. These factors have made integer programming one of the most widely used OR techniques.
IP problems are more difficult than they would be without the integer restriction, so the algorithms available for integer programming are generally considerably less efficient than the simplex method. However, there has been tremendous progress over the past two or three decades in the ability to solve some (but not all) huge IP problems with tens or even hundreds of thousands of integer variables. This progress is due to a combination of three factors: dramatic improvements in IP algorithms, striking improvements in the linear programming algorithms used within IP algorithms, and the great speedup in computers. However, IP algorithms also will occasionally still fail to solve rather small problems (with even as few as a hundred integer variables). Various characteristics of an IP problem, in addition to its size, have a great influence on how readily it can be solved.
Nevertheless, size is one key factor in determining the time required to solve an IP problem, if it can be solved at all. The most important determinants of computation time for an IP algorithm are the number of integer variables and whether the problem has some special structure that can be exploited. For a fixed number of integer variables, BIP problems generally are much easier to solve than problems with general integer variables, but adding continuous variables (MIP) may not increase computation time substantially. For special types of BIP problems containing a special structure that can be exploited by a special-purpose algorithm, it may be possible to solve very large problems (thousands of binary variables) routinely. Computer codes for IP algorithms now are commonly available in mathematical programming software packages.
Traditionally, these algorithms usually have been based on the branch-and-bound technique and variations thereof. More modern IP algorithms now use the branch-and-cut approach, which combines automatic problem preprocessing, the generation of cutting planes, and clever branch-and-bound techniques. Research in this area is continuing, along with the development of sophisticated new software packages that incorporate these techniques.
The latest development in IP methodology is to begin incorporating constraint programming. It appears that this approach will greatly expand our ability to formulate and solve IP models.
In recent years, there has been considerable investigation into the development of algorithms (including heuristic algorithms) for integer nonlinear programming, and this continues to be an active area of research. (Selected Reference 7 describes some of the progress in this area.)

■ SELECTED REFERENCES

1. Achterberg, T.: “SCIP: Solving Constraint Integer Programs,” Mathematical Programming Computation, 1(1): 1–41, July 2009.
2. Appa, G., L. Pitsoulis, and H. P. Williams (eds.): Handbook on Modelling for Discrete Optimization, Springer, New York, 2006.
3. Baptiste, P., C. LePape, and W. Nuijten: Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems, Kluwer Academic Publishers (now Springer), Boston, 2001.
4. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 7.
5. Hooker, J. N.: Integrated Methods for Optimization, 2nd ed., Springer, New York, 2012.
6. Karlof, J. K.: Integer Programming: Theory and Practice, CRC Press, Boca Raton, FL, 2006.
7. Li, D., and X. Sun: Nonlinear Integer Programming, Springer, New York, 2006. (A 2nd edition currently is being prepared with publication scheduled in 2015.)
8. Lustig, I., and J.-F.
Puget: “Program Does Not Equal Program: Constraint Programming and Its Relationship to Mathematical Programming,” Interfaces, 31(6): 29–53, November–December 2001.
9. Nemhauser, G. L., and L. A. Wolsey: Integer and Combinatorial Optimization, Wiley, Hoboken, NJ, 1988, reprinted in 1999.
10. Schrijver, A.: Theory of Linear and Integer Programming, Wiley, Hoboken, NJ, 1986, reprinted in paperback in 1998.
11. Williams, H. P.: Logic and Integer Programming, Springer, New York, 2009.
12. Williams, H. P.: Model Building in Mathematical Programming, 5th ed., Wiley, Hoboken, NJ, 2013.

Some Award-Winning Applications of Integer Programming:
(A link to all these articles is provided on our website, www.mhhe.com/hillier.)

A1. Armacost, A. P., C. Barnhart, K. A. Ware, and A. M. Wilson: “UPS Optimizes Its Air Network,” Interfaces, 34(1): 15–25, January–February 2004.
A2. Bertsimas, D., C. Darnell, and R. Soucy: “Portfolio Construction Through Mixed-Integer Programming at Grantham, Mayo, Van Otterloo and Company,” Interfaces, 29(1): 49–66, January–February 1999.
A3. Denton, B. T., J. Forrest, and R. J. Milne: “IBM Solves a Mixed-Integer Program to Optimize Its Semiconductor Supply Chain,” Interfaces, 36(5): 386–399, September–October 2006.
A4. Eveborn, P., M. Ronnqvist, H. Einarsdottir, M. Eklund, K. Liden, and M. Almroth: “Operations Research Improves Quality and Efficiency in Home Care,” Interfaces, 39(1): 18–34, January–February 2009.
A5. Everett, G., A. Philpott, K. Vatn, and R. Gjessing: “Norske Skog Improves Global Profitability Using Operations Research,” Interfaces, 40(1): 58–70, January–February 2010.
A6. Gryffenberg, I., et al.: “Guns or Butter: Decision Support for Determining the Size and Shape of the South African National Defense Force,” Interfaces, 27(1): 7–28, January–February 1997.
A7.
Menezes, F., et al.: “Optimizing Helicopter Transport of Oil Rig Crews at Petrobras,” Interfaces, 40(5): 408–416, September–October 2010.
A8. Metty, T., et al.: “Reinventing the Supplier Negotiation Process at Motorola,” Interfaces, 35(1): 7–23, January–February 2005.
A9. Smith, B. C., R. Darrow, J. Elieson, D. Guenther, B. V. Rao, and F. Zouaoui: “Travelocity Becomes a Travel Retailer,” Interfaces, 37(1): 68–81, January–February 2007.
A10. Spencer III, T., A. J. Brigandi, D. R. Dargon, and M. J. Sheehan: “AT&T’s Telemarketing Site Selection System Offers Customer Support,” Interfaces, 20(1): 83–96, January–February 1990.
A11. Subramanian, R., R. P. Scheff, Jr., J. D. Quillinan, D. S. Wiper, and R. E. Marsten: “Coldstart: Fleet Assignment at Delta Air Lines,” Interfaces, 24(1): 104–120, January–February 1994.
A12. Yu, G., M. Argüello, G. Song, S. M. McCowan, and A. White: “A New Era for Crew Recovery at Continental Airlines,” Interfaces, 33(1): 5–22, January–February 2003.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 12

Demonstration Examples in OR Tutor:
Binary Integer Programming Branch-and-Bound Algorithm
Mixed Integer Programming Branch-and-Bound Algorithm

Interactive Procedures in IOR Tutorial:
Enter or Revise an Integer Programming Model
Solve Binary Integer Program Interactively
Solve Mixed Integer Program Interactively

An Excel Add-in:
Analytic Solver Platform for Education (ASPE)

“Ch. 12—Integer Programming” Files for Solving the Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 12

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
D: The corresponding demonstration example just listed in Learning Aids may be helpful.
I: We suggest that you use the corresponding interactive procedure just listed (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

12.1-1. Reconsider the California Manufacturing Co. example presented in Sec. 12.1. The mayor of San Diego now has contacted the company’s president to try to persuade him to build a factory and perhaps a warehouse in that city. With the tax incentives being offered to the company, the president’s staff estimates that the net present value of building a factory in San Diego would be $7 million and the amount of capital required to do this would be $4 million. The net present value of building a warehouse there would be $5 million and the capital required would be $3 million. (This option would be considered only if a factory also is being built there.) The company president now wants the previous OR study revised to incorporate these new alternatives into the overall problem. The objective still is to find the feasible combination of investments that maximizes the total net present value, given that the amount of capital available for these investments is $10 million.
(a) Formulate a BIP model for this problem.
(b) Display this model on an Excel spreadsheet.
C (c) Use the computer to solve this model.

12.1-2.* A young couple, Eve and Steven, want to divide their main household chores (marketing, cooking, dishwashing, and laundering) between them so that each has two tasks but the total time they spend on household duties is kept to a minimum. Their efficiencies on these tasks differ, where the time each would need to perform the task is given by the following table:

Time Needed per Week
Task           Eve         Steven
Marketing      4.5 hours   4.9 hours
Cooking        7.8 hours   7.2 hours
Dishwashing    3.6 hours   4.3 hours
Laundry        2.9 hours   3.1 hours

(a) Formulate a BIP model for this problem.
(b) Display this model on an Excel spreadsheet.
C (c) Use the computer to solve this model.

12.1-3. A real estate development firm, Peterson and Johnson, is considering five possible development projects. The following table shows the estimated long-run profit (net present value) that each project would generate, as well as the amount of investment required to undertake the project, in units of millions of dollars.

                      Development Project
                    1      2      3      4      5
Estimated profit    1      1.8    1.6    0.8    1.4
Capital required    6      12     10     4      8

The owners of the firm, Dave Peterson and Ron Johnson, have raised $20 million of investment capital for these projects. Dave and Ron now want to select the combination of projects that will maximize their total estimated long-run profit (net present value) without investing more than $20 million.
(a) Formulate a BIP model for this problem.
(b) Display this model on an Excel spreadsheet.
C (c) Use the computer to solve this model.

12.1-4. The board of directors of General Wheels Co. is considering six large capital investments. Each investment can be made only once. These investments differ in the estimated long-run profit (net present value) that they will generate as well as in the amount of capital required, as shown by the following table (in units of millions of dollars):

                     Investment Opportunity
                    1     2     3     4     5     6
Estimated profit    15    12    16    18    9     11
Capital required    38    33    39    45    23    27

The total amount of capital available for these investments is $100 million. Investment opportunities 1 and 2 are mutually exclusive, and so are 3 and 4. Furthermore, neither 3 nor 4 can be undertaken unless one of the first two opportunities is undertaken. There are no such restrictions on investment opportunities 5 and 6. The objective is to select the combination of capital investments that will maximize the total estimated long-run profit (net present value).
(a) Formulate a BIP model for this problem.
C (b) Use the computer to solve this model.

12.1-5. Reconsider Prob.
9.3-4, where a swim team coach needs to assign swimmers to the different legs of a 200-yard medley relay team. Formulate a BIP model for this problem. Identify the groups of mutually exclusive alternatives in this formulation.

12.1-6. Vincent Cardoza is the owner and manager of a machine shop that does custom order work. This Wednesday afternoon, he has received calls from two customers who would like to place rush orders. One is a trailer hitch company which would like some custom-made heavy-duty tow bars. The other is a mini-car-carrier company which needs some customized stabilizer bars. Both customers would like as many as possible by the end of the week (two working days). Since both products would require the use of the same two machines, Vincent needs to decide and inform the customers this afternoon about how many of each product he will agree to make over the next two days.
Each tow bar requires 3.2 hours on machine 1 and 2 hours on machine 2. Each stabilizer bar requires 2.4 hours on machine 1 and 3 hours on machine 2. Machine 1 will be available for 16 hours over the next two days and machine 2 will be available for 15 hours. The profit for each tow bar produced would be $130 and the profit for each stabilizer bar produced would be $150. Vincent now wants to determine the mix of these production quantities that will maximize the total profit.
(a) Formulate an IP model for this problem.
(b) Use a graphical approach to solve this model.
C (c) Use the computer to solve the model.

12.1-7. Reconsider Prob. 9.2-21 involving a contractor (Susan Meyer) who needs to arrange for hauling gravel from two pits to three building sites. Susan now needs to hire the trucks (and their drivers) to do the hauling. Each truck can only be used to haul gravel from a single pit to a single site. In addition to the hauling and gravel costs specified in Prob. 9.2-21, there now is a fixed cost of $50 associated with hiring each truck.
A truck can haul up to 5 tons, but it is not required to carry a full load. For each combination of pit and site, there are now two decisions to be made: the number of trucks to be used and the amount of gravel to be hauled.
(a) Formulate an MIP model for this problem.
C (b) Use the computer to solve this model.

12.2-1. Read the referenced article that fully describes the OR study summarized in the first application vignette presented in Sec. 12.2. Briefly describe how integer programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

12.2-2. Select one of the actual applications of BIP by a company or governmental agency mentioned in Sec. 12.2. Read the article describing the application in the referenced issue of Interfaces. Write a two-page summary of the application and its benefits.

12.2-3. Select three of the actual applications of BIP by a company or governmental agency mentioned in Sec. 12.2. Read the articles describing the applications in the referenced issues of Interfaces. For each one, write a one-page summary of the application and its benefits.

12.2-4. Follow the instructions of Prob. 12.2-1 for the second application vignette presented in Sec. 12.2.

12.3-1.* The Research and Development Division of the Progressive Company has been developing four possible new product lines. Management must now make a decision as to which of these four products actually will be produced and at what levels. Therefore, an operations research study has been requested to find the most profitable product mix. A substantial cost is associated with beginning the production of any product, as given in the first row of the following table. Management’s objective is to find the product mix that maximizes the total profit (total net revenue minus start-up costs).

                              Product
                    1          2          3          4
Start-up cost       $50,000    $40,000    $70,000    $60,000
Marginal revenue    $70        $60        $90        $80

Let the continuous decision variables x_1, x_2, x_3, and x_4 be the production levels of products 1, 2, 3, and 4, respectively. Management has imposed the following policy constraints on these variables:
1. No more than two of the products can be produced.
2. Either product 3 or 4 can be produced only if either product 1 or 2 is produced.
3. Either 5x_1 + 3x_2 + 6x_3 + 4x_4 ≤ 6,000 or 4x_1 + 6x_2 + 3x_3 + 5x_4 ≤ 6,000.
(a) Introduce auxiliary binary variables to formulate a mixed BIP model for this problem.
C (b) Use the computer to solve this model.

12.3-2. Suppose that a mathematical model fits linear programming except for the restriction that x_1 − x_2 = 0, or 3, or 6. Show how to reformulate this restriction to fit an MIP model.

12.3-3. Suppose that a mathematical model fits linear programming except for the restrictions that
1. At least one of the following two inequalities holds:
3x_1 − x_2 − x_3 + x_4 ≤ 12
x_1 + x_2 + x_3 + x_4 ≤ 15.
2. At least two of the following three inequalities hold:
2x_1 + 5x_2 − x_3 + x_4 ≤ 30
−x_1 + 3x_2 + 5x_3 + x_4 ≤ 40
3x_1 − x_2 + 3x_3 − x_4 ≤ 60.
Show how to reformulate these restrictions to fit an MIP model.

12.3-4. The Toys-R-4-U Company has developed two new toys for possible inclusion in its product line for the upcoming Christmas season. Setting up the production facilities to begin production would cost $50,000 for toy 1 and $80,000 for toy 2. Once these costs are covered, the toys would generate a unit profit of $10 for toy 1 and $15 for toy 2. The company has two factories that are capable of producing these toys. However, to avoid doubling the start-up costs, just one factory would be used, where the choice would be based on maximizing profit. For administrative reasons, the same factory would be used for both new toys if both are produced. Toy 1 can be produced at the rate of 50 per hour in factory 1 and 40 per hour in factory 2. Toy 2 can be produced at the rate of 40 per hour in factory 1 and 25 per hour in factory 2.
Factories 1 and 2, respectively, have 500 hours and 700 hours of production time available before Christmas that could be used to produce these toys. It is not known whether these two toys would be continued after Christmas. Therefore, the problem is to determine how many units (if any) of each new toy should be produced before Christmas to maximize the total profit.
(a) Formulate an MIP model for this problem.
C (b) Use the computer to solve this model.

12.3-5.* Northeastern Airlines is considering the purchase of new long-, medium-, and short-range jet passenger airplanes. The purchase price would be $67 million for each long-range plane, $50 million for each medium-range plane, and $35 million for each short-range plane. The board of directors has authorized a maximum commitment of $1.5 billion for these purchases. Regardless of which airplanes are purchased, air travel of all distances is expected to be sufficiently large that these planes would be utilized at essentially maximum capacity. It is estimated that the net annual profit (after capital recovery costs are subtracted) would be $4.2 million per long-range plane, $3 million per medium-range plane, and $2.3 million per short-range plane.
It is predicted that enough trained pilots will be available to the company to crew 30 new airplanes. If only short-range planes were purchased, the maintenance facilities would be able to handle 40 new planes. However, each medium-range plane is equivalent to 1 1/3 short-range planes, and each long-range plane is equivalent to 1 2/3 short-range planes in terms of their use of the maintenance facilities.
The information given here was obtained by a preliminary analysis of the problem. A more detailed analysis will be conducted subsequently. However, using the preceding data as a first approximation, management wishes to know how many planes of each type should be purchased to maximize profit.
(a) Formulate an IP model for this problem.
C (b) Use the computer to solve this problem.
(c) Use a binary representation of the variables to reformulate the IP model in part (a) as a BIP problem.
C (d) Use the computer to solve the BIP model formulated in part (c). Then use this optimal solution to identify an optimal solution for the IP model formulated in part (a).

12.3-6. Consider the two-variable IP example discussed in Sec. 12.5 and illustrated in Fig. 12.3.
(a) Use a binary representation of the variables to reformulate this model as a BIP problem.
C (b) Use the computer to solve this BIP problem. Then use this optimal solution to identify an optimal solution for the original IP model.

12.3-7. The Fly-Right Airplane Company builds small jet airplanes to sell to corporations for the use of their executives. To meet the needs of these executives, the company’s customers sometimes order a custom design of the airplanes being purchased. When this occurs, a substantial start-up cost is incurred to initiate the production of these airplanes.
Fly-Right has recently received purchase requests from three customers with short deadlines. However, because the company’s production facilities already are almost completely tied up filling previous orders, it will not be able to accept all three orders. Therefore, a decision now needs to be made on the number of airplanes the company will agree to produce (if any) for each of the three customers. The relevant data are given in the next table. The first row gives the start-up cost required to initiate the production of the airplanes for each customer. Once production is under way, the marginal net revenue (which is the purchase price minus the marginal production cost) from each airplane produced is shown in the second row. The third row gives the percentage of the available production capacity that would be used for each airplane produced. The last row indicates the maximum number of airplanes requested by each customer (but fewer will be accepted).
                               Customer
                          1            2            3
Start-up cost             $3 million   $2 million   0
Marginal net revenue      $2 million   $3 million   $0.8 million
Capacity used per plane   20%          40%          20%
Maximum order             3 planes     2 planes     5 planes

Fly-Right now wants to determine how many airplanes to produce for each customer (if any) to maximize the company’s total profit (total net revenue minus start-up costs).
(a) Formulate a model with both integer variables and binary variables for this problem.
C (b) Use the computer to solve this model.

12.4-1. Reconsider the Fly-Right Airplane Co. problem introduced in Prob. 12.3-7. A more detailed analysis of the various cost and revenue factors now has revealed that the potential profit from producing airplanes for each customer cannot be expressed simply in terms of a start-up cost and a fixed marginal net revenue per airplane produced. Instead, the profits are given by the following table.

Profit from Customer
                         Airplanes Produced
Customer    0    1            2            3            4            5
1           0    $1 million   $2 million   $4 million
2           0    $1 million   $5 million
3           0    $1 million   $3 million   $5 million   $6 million   $7 million

(a) Formulate a BIP model for this problem that includes constraints for mutually exclusive alternatives.
C (b) Use the computer to solve the model formulated in part (a). Then use this optimal solution to identify the optimal number of airplanes to produce for each customer.
(c) Formulate another BIP model for this problem that includes constraints for contingent decisions.
C (d) Repeat part (b) for the model formulated in part (c).

12.4-2. Reconsider the Wyndor Glass Co. problem presented in Sec. 3.1. Management now has decided that only one of the two new products should be produced, and the choice is to be made on the basis of maximizing profit. Introduce auxiliary binary variables to formulate an MIP model for this new version of the problem.

12.4-3.* Reconsider Prob.
3.1-11, where the management of the Omega Manufacturing Company is considering devoting excess production capacity to one or more of three products. (See the Partial Answers to Selected Problems in the back of the book for additional information about this problem.) Management now has decided to add the restriction that no more than two of the three prospective products should be produced.
(a) Introduce auxiliary binary variables to formulate an MIP model for this new version of the problem.
C (b) Use the computer to solve this model.

12.4-4. Consider the following integer nonlinear programming problem:

Maximize Z = 4x_1² − x_1³ + 10x_2² − x_2⁴,

subject to

x_1 + x_2 ≤ 3

and

x_1 ≥ 0, x_2 ≥ 0
x_1 and x_2 are integers.

This problem can be reformulated in two different ways as an equivalent pure BIP problem (with a linear objective function) with six binary variables (y_1j and y_2j for j = 1, 2, 3), depending on the interpretation given the binary variables.
(a) Formulate a BIP model for this problem where the binary variables have the interpretation

y_ij = 1 if x_i = j, y_ij = 0 otherwise.

(b) Use the computer to solve the model formulated in part (a), and thereby identify an optimal solution for (x_1, x_2) for the original problem.
(c) Formulate a BIP model for this problem where the binary variables have the interpretation

y_ij = 1 if x_i ≥ j, y_ij = 0 otherwise.

(d) Use the computer to solve the model formulated in part (c), and thereby identify an optimal solution for (x_1, x_2) for the original problem.

12.4-5.* Consider the following special type of shortest-path problem (see Sec. 10.3) where the nodes are in columns and the only paths considered always move forward one column at a time.

[A network figure shows the origin O in the first column, intermediate nodes (including A, B, C, and D) in the middle columns, and the destination T in the final column, with the distance labeled on each link.]

The numbers along the links represent distances, and the objective is to find the shortest path from the origin to the destination. This problem also can be formulated as a BIP model involving both mutually exclusive alternatives and contingent decisions.
(a) Formulate this model. Identify the constraints that are for mutually exclusive alternatives and that are for contingent decisions.
C (b) Use the computer to solve this problem.

12.4-6. Speedy Delivery provides two-day delivery service of large parcels across the United States. Each morning at each collection center, the parcels that have arrived overnight are loaded onto several trucks for delivery throughout the area. Since the competitive battlefield in this business is speed of delivery, the parcels are divided among the trucks according to their geographical destinations to minimize the average time needed to make the deliveries.
On this particular morning, the dispatcher for the Blue River Valley Collection Center, Sharon Lofton, is hard at work. Her three drivers will be arriving in less than an hour to make the day’s deliveries. There are nine parcels to be delivered, all at locations many miles apart. As usual, Sharon has loaded these locations into her computer. She is using her company’s special software package, a decision support system called Dispatcher. The first thing Dispatcher does is use these locations to generate a considerable number of attractive possible routes for the individual delivery trucks. These routes are shown in the following table (where the numbers in each column indicate the order of the deliveries), along with the estimated time required to traverse the route.

[A table lists ten attractive possible routes. For each route, the numbers in its column give the order in which the delivery locations A through I would be visited, and the bottom row gives the estimated time (in hours) required to traverse the route.]

Dispatcher is an interactive system that shows these routes to Sharon for her approval or modification. (For example, the computer may not know that flooding has made a particular route infeasible.)
After Sharon approves these routes as attractive possibilities with reasonable time estimates, Dispatcher next formulates and solves a BIP model for selecting three routes that minimize their total time while including each delivery location on exactly one route. This morning, Sharon does approve all the routes.
(a) Formulate this BIP model.
C (b) Use the computer to solve this model.

12.4-7. An increasing number of Americans are moving to a warmer climate when they retire. To take advantage of this trend, Sunny Skies Unlimited is undertaking a major real estate development project. The project is to develop a completely new retirement community (to be called Pilgrim Haven) that will cover several square miles. One of the decisions to be made is where to locate the two fire stations that have been allocated to the community. For planning purposes, Pilgrim Haven has been divided into five tracts, with no more than one fire station to be located in any given tract. Each station is to respond to all the fires that occur in the tract in which it is located as well as in the other tracts that are assigned to this station. Thus, the decisions to be made consist of (1) the tracts to receive a fire station and (2) the assignment of each of the other tracts to one of the fire stations. The objective is to minimize the overall average of the response times to fires. The following table gives the average response time to a fire in each tract (the columns) if that tract is served by a station in a given tract (the rows). The bottom row gives the forecasted average number of fires that will occur in each of the tracts per day.

Response Times (in minutes)

                                      Fire in Tract
Assigned Station
Located in Tract       1        2        3        4        5
       1               5       20       15       25       10
       2              12        4       20       15       25
       3              30       15        6       25       15
       4              20       10       15        4       12
       5              15       25       12       10        5
Average frequency
of fires           2 per day 1 per day 3 per day 1 per day 3 per day

Formulate a BIP model for this problem. Identify any constraints that correspond to mutually exclusive alternatives or contingent decisions.

12.4-8. Reconsider Prob. 12.4-7. The management of Sunny Skies Unlimited now has decided that the decision on the locations of the fire stations should be based mainly on costs. The cost of locating a fire station in a tract is $200,000 for tract 1, $250,000 for tract 2, $400,000 for tract 3, $300,000 for tract 4, and $500,000 for tract 5. Management's objective now is the following:

Determine which tracts should receive a station to minimize the total cost of stations while ensuring that each tract has at least one station close enough to respond to a fire in no more than 15 minutes (on the average).

In contrast to the original problem, note that the total number of fire stations is no longer fixed. Furthermore, if a tract without a station has more than one station within 15 minutes, it is no longer necessary to assign this tract to just one of these stations.
(a) Formulate a complete pure BIP model with 5 binary variables for this problem.
(b) Is this a set covering problem? Explain, and identify the relevant sets.
C (c) Use the computer to solve the model formulated in part (a).

12.4-9. Suppose that a state sends R persons to the U.S. House of Representatives. There are D counties in the state (D > R), and the state legislature wants to group these counties into R distinct electoral districts, each of which sends a delegate to Congress. The total population of the state is P, and the legislature wants to form districts whose population approximates p = P/R. Suppose that the appropriate legislative committee studying the electoral districting problem generates a long list of N candidates to be districts (N > R). Each of these candidates contains contiguous counties and a total population pj (j = 1, 2, . . . , N) that is acceptably close to p. Define cj = |pj − p|. Each county i (i = 1, 2, . . . , D) is included in at least one candidate and typically will be included in a considerable number of candidates (in order to provide many feasible ways of selecting a set of R candidates that includes each county exactly once). Define

aij = { 1   if county i is included in candidate j
      { 0   if not.

Given the values of the cj and the aij, the objective is to select R of these N possible districts such that each county is contained in a single district and such that the largest of the associated cj is as small as possible. Formulate a BIP model for this problem.

12.5-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 12.5. Briefly describe how integer programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

12.5-2.* Consider the following IP problem:

Maximize Z = 5x1 + x2,
subject to
x1 + 2x2 ≤ 4
x1 − x2 ≤ 1
4x1 + x2 ≤ 12
and
x1 ≥ 0, x2 ≥ 0
x1, x2 are integers.

(a) Solve this problem graphically.
(b) Solve the LP relaxation graphically. Round this solution to the nearest integer solution and check whether it is feasible. Then enumerate all the rounded solutions by rounding this solution for the LP relaxation in all possible ways (i.e., by rounding each noninteger value both up and down). For each rounded solution, check for feasibility and, if feasible, calculate Z. Are any of these feasible rounded solutions optimal for the IP problem?

12.5-3. Follow the instructions of Prob. 12.5-2 for the following IP problem:

Maximize Z = 220x1 + 80x2,
subject to
5x1 + 2x2 ≤ 16
2x1 − x2 ≤ 4
−x1 + 2x2 ≤ 4
and
x1 ≥ 0, x2 ≥ 0
x1, x2 are integers.

12.5-4. Follow the instructions of Prob. 12.5-2 for the following BIP problem:

Maximize Z = 2x1 + 5x2,
subject to
10x1 + 30x2 ≤ 30
95x1 − 30x2 ≤ 75
and
x1, x2 are binary.

12.5-5. Follow the instructions of Prob. 12.5-2 for the following BIP problem:

Maximize Z = −5x1 + 25x2,
subject to
−3x1 + 30x2 ≤ 27
3x1 + x2 ≤ 4
and
x1, x2 are binary.

12.5-6. Label each of the following statements as True or False, and then justify your answer by referring to specific statements in the chapter:
(a) Linear programming problems are generally considerably easier to solve than IP problems.
(b) For IP problems, the number of integer variables is generally more important in determining the computational difficulty than is the number of functional constraints.
(c) To solve an IP problem with an approximate procedure, one may apply the simplex method to the LP relaxation problem and then round each noninteger value to the nearest integer. The result will be a feasible but not necessarily optimal solution for the IP problem.

D,I 12.6-1.* Use the BIP branch-and-bound algorithm presented in Sec. 12.6 to solve the following problem interactively:

Maximize Z = 2x1 − x2 + 5x3 − 3x4 + 4x5,
subject to
3x1 − 2x2 + 7x3 − 5x4 + 4x5 ≤ 6
x1 − x2 + 2x3 − 4x4 + 2x5 ≤ 0
and
xj is binary, for j = 1, 2, . . . , 5.

D,I 12.6-2. Use the BIP branch-and-bound algorithm presented in Sec. 12.6 to solve the following problem interactively:

Minimize Z = 5x1 + 6x2 + 7x3 + 8x4 + 9x5,
subject to
3x1 − x2 + x3 + x4 − 2x5 ≥ 2
x1 + 3x2 − x3 − 2x4 + x5 ≥ 0
−x1 − x2 + 3x3 + x4 + x5 ≥ 1
and
xj is binary, for j = 1, 2, . . . , 5.

D,I 12.6-3. Use the BIP branch-and-bound algorithm presented in Sec. 12.6 to solve the following problem interactively:

Maximize Z = 5x1 + 5x2 + 8x3 − 2x4 − 4x5,
subject to
−3x1 + 6x2 − 7x3 + 9x4 + 9x5 ≥ 10
x1 + 2x2 − 7x3 − x4 − 3x5 ≤ 0
and
xj is binary, for j = 1, 2, . . . , 5.

D,I 12.6-4. Reconsider Prob. 12.3-6(a). Use the BIP branch-and-bound algorithm presented in Sec. 12.6 to solve this BIP model interactively.

D,I 12.6-5. Reconsider Prob. 12.4-8(a). Use the BIP algorithm presented in Sec. 12.6 to solve this problem interactively.

12.6-6.
Consider the following statements about any pure IP problem (in maximization form) and its LP relaxation. Label each of the statements as True or False, and then justify your answer:
(a) The feasible region for the LP relaxation is a subset of the feasible region for the IP problem.
(b) If an optimal solution for the LP relaxation is an integer solution, then the optimal value of the objective function is the same for both problems.
(c) If a noninteger solution is feasible for the LP relaxation, then the nearest integer solution (rounding each variable to the nearest integer) is a feasible solution for the IP problem.

12.6-7.* Consider the assignment problem with the following cost table:

                        Task
               1     2     3     4     5
Assignee  1   39    64    49    48    59
          2   65    84    50    45    34
          3   69    24    61    55    30
          4   66    92    31    23    34
          5   57    22    45    50    18

(a) Design a branch-and-bound algorithm for solving such assignment problems by specifying how the branching, bounding, and fathoming steps would be performed. (Hint: For the assignees not yet assigned for the current subproblem, form the relaxation by deleting the constraints that each of these assignees must perform exactly one task.)
(b) Use this algorithm to solve this problem.

12.6-8. Five jobs need to be done on a certain machine. However, the setup time for each job depends upon which job immediately preceded it, as shown by the following table:

                 Setup Time
          Immediately Preceding Job
Job     None     1     2     3     4     5
 1        4      —     6    10     7    12
 2        5      7     —    11     8     9
 3        8     12    10     —    15     8
 4        9     10    14    12     —    16
 5        4      9    11    10     7     —

The objective is to schedule the sequence of jobs that minimizes the sum of the resulting setup times.
(a) Design a branch-and-bound algorithm for sequencing problems of this type by specifying how the branching, bounding, and fathoming steps would be performed.
(b) Use this algorithm to solve this problem.

12.6-9.* Consider the following nonlinear BIP problem:

Maximize Z = 80x1 + 60x2 + 40x3 + 20x4 − (7x1 + 5x2 + 3x3 + 2x4)²,
subject to
xj is binary, for j = 1, 2, 3, 4.

Given the value of the first k variables x1, . . . , xk, where k = 0, 1, 2, or 3, an upper bound on the value of Z that can be achieved by the corresponding feasible solutions is

Σ_{j=1}^{k} cj xj − (Σ_{j=1}^{k} dj xj)² + Σ_{j=k+1}^{4} max{0, cj − [(Σ_{i=1}^{k} di xi + dj)² − (Σ_{i=1}^{k} di xi)²]},

where c1 = 80, c2 = 60, c3 = 40, c4 = 20, d1 = 7, d2 = 5, d3 = 3, d4 = 2. Use this bound to solve the problem by the branch-and-bound technique.

12.6-10. Consider the Lagrangian relaxation described near the end of Sec. 12.6.
(a) If x is a feasible solution for an MIP problem, show that x also must be a feasible solution for the corresponding Lagrangian relaxation.
(b) If x* is an optimal solution for an MIP problem, with an objective function value of Z*, show that Z* ≤ Z_R*, where Z_R* is the optimal objective function value for the corresponding Lagrangian relaxation.

12.7-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 12.7. Briefly describe how integer programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

12.7-2.* Consider the following IP problem:

Maximize Z = −3x1 + 5x2,
subject to
5x1 − 7x2 ≥ 3
and
xj ≤ 3
xj ≥ 0
xj is integer, for j = 1, 2.

(a) Solve this problem graphically.
(b) Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve this problem by hand. For each subproblem, solve its LP relaxation graphically.
(c) Use the binary representation for integer variables to reformulate this problem as a BIP problem.
D,I (d) Use the BIP branch-and-bound algorithm presented in Sec. 12.6 to solve the problem as formulated in part (c) interactively.

12.7-3. Follow the instructions of Prob. 12.7-2 for the following IP model:

Minimize Z = 2x1 + 3x2,
subject to
x1 + x2 ≥ 3
x1 + 3x2 ≥ 6
and
x1 ≥ 0, x2 ≥ 0
x1, x2 are integers.

12.7-4. Reconsider the IP model of Prob. 12.5-2.
(a) Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve this problem by hand. For each subproblem, solve its LP relaxation graphically.
D,I (b) Now use the interactive procedure for this algorithm in your IOR Tutorial to solve this problem.
C (c) Check your answer by using an automatic procedure to solve the problem.

D,I 12.7-5. Consider the IP example discussed in Sec. 12.5 and illustrated in Fig. 12.3. Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve this problem interactively.

D,I 12.7-6. Reconsider Prob. 12.3-5a. Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve this IP problem interactively.

12.7-7. A machine shop makes two products. Each unit of the first product requires 3 hours on machine 1 and 2 hours on machine 2. Each unit of the second product requires 2 hours on machine 1 and 3 hours on machine 2. Machine 1 is available only 8 hours per day and machine 2 only 7 hours per day. The profit per unit sold is $16 for the first product and $10 for the second. The amount of each product produced per day must be an integral multiple of 0.25. The objective is to determine the mix of production quantities that will maximize profit.
(a) Formulate an IP model for this problem.
(b) Solve this model graphically.
(c) Use graphical analysis to apply the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve this model.
D,I (d) Now use the interactive procedure for this algorithm in your IOR Tutorial to solve this model.
C (e) Check your answers in parts (b), (c), and (d) by using an automatic procedure to solve the model.

D,I 12.7-8. Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve the following MIP problem interactively:

Maximize Z = 5x1 + 4x2 + 4x3 + 2x4,
subject to
x1 + 3x2 + 2x3 + x4 ≤ 10
5x1 + x2 + 3x3 + 2x4 ≤ 15
x1 + x2 + x3 + x4 ≤ 6
and
xj ≥ 0, for j = 1, 2, 3, 4
xj is integer, for j = 1, 2, 3.

D,I 12.7-9. Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve the following MIP problem interactively:

Maximize Z = 3x1 + 4x2 + 2x3 + x4 + 2x5,
subject to
2x1 − x2 + x3 + x4 + x5 ≤ 3
−x1 + 3x2 + x3 − x4 − 2x5 ≤ 2
2x1 + x2 − x3 + x4 + 3x5 ≤ 1
and
xj ≥ 0, for j = 1, 2, 3, 4, 5
xj is binary, for j = 1, 2, 3.

D,I 12.7-10. Use the MIP branch-and-bound algorithm presented in Sec. 12.7 to solve the following MIP problem interactively:

Minimize Z = 5x1 + x2 + x3 + 2x4 + 3x5,
subject to
x2 − 5x3 + x4 + 2x5 ≥ −2
5x1 − x2 + x4 + x5 ≥ −7
x1 + x2 + 6x3 + x4 ≥ −4
and
xj ≥ 0, for j = 1, 2, 3, 4, 5
xj is integer, for j = 1, 2, 3.

12.8-1.* For each of the following constraints of pure BIP problems, use the constraint to fix as many variables as possible:
(a) 4x1 + x2 + 3x3 + 2x4 ≤ 2
(b) 4x1 − x2 + 3x3 + 2x4 ≤ 2
(c) 4x1 − x2 + 3x3 + 2x4 ≥ 7

12.8-2. For each of the following constraints of pure BIP problems, use the constraint to fix as many variables as possible:
(a) 20x1 − 7x2 + 5x3 ≤ 10
(b) 10x1 − 7x2 + 5x3 ≥ 10
(c) 10x1 − 7x2 + 5x3 ≤ −1

12.8-3. Use the following set of constraints for the same pure BIP problem to fix as many variables as possible. Also identify the constraints which become redundant because of the fixed variables.

3x3 − x5 + x7 ≤ 1
x2 + x4 + x6 ≤ 1
x1 − 2x5 + 2x6 ≥ 2
x1 + x2 − x4 ≤ 0

12.8-4. For each of the following constraints of pure BIP problems, identify which ones are made redundant by the binary constraints. Explain why each one is, or is not, redundant.
(a) 2x1 + x2 + 2x3 ≤ 5
(b) 3x1 − 4x2 + 5x3 ≤ 5
(c) x1 + x2 + x3 ≥ 2
(d) 3x1 − x2 − 2x3 ≥ −4

12.8-5. In Sec. 12.8, at the end of the subsection on tightening constraints, we indicated that the constraint 4x1 − 3x2 + x3 + 2x4 ≤ 5 can be tightened to 2x1 − 3x2 + x3 + 2x4 ≤ 3 and then to 2x1 − 2x2 + x3 + 2x4 ≤ 3. Apply the procedure for tightening constraints to confirm these results.

12.8-6. Apply the procedure for tightening constraints to the following constraint for a pure BIP problem:

3x1 − 2x2 + x3 ≤ 3.
12.8-7. Apply the procedure for tightening constraints to the following constraint for a pure BIP problem:

x1 − x2 + 3x3 + 4x4 ≥ 1.

12.8-8. Apply the procedure for tightening constraints to each of the following constraints for a pure BIP problem:
(a) x1 + 3x2 − 4x3 ≤ 2.
(b) 3x1 − x2 + 4x3 ≥ 1.

12.8-9. In Sec. 12.8, a pure BIP example with the constraint 2x1 + 3x2 ≤ 4 was used to illustrate the procedure for tightening constraints. Show that applying the procedure for generating cutting planes to this constraint yields the same new constraint, x1 + x2 ≤ 1.

12.8-10. One of the constraints of a certain pure BIP problem is

x1 + 3x2 + 2x3 + 4x4 ≤ 5.

Identify all the minimal covers for this constraint, and then give the corresponding cutting planes.

12.8-11. One of the constraints of a certain pure BIP problem is

3x1 + 4x2 + 2x3 + 5x4 ≤ 7.

Identify all the minimal covers for this constraint, and then give the corresponding cutting planes.

12.8-12. Generate as many cutting planes as possible from the following constraint for a pure BIP problem:

3x1 + 5x2 + 4x3 + 8x4 ≤ 10.

12.8-13. Generate as many cutting planes as possible from the following constraint for a pure BIP problem:

5x1 + 3x2 + 7x3 + 4x4 + 6x5 ≤ 9.

12.8-14. Consider the following BIP problem:

Maximize Z = 2x1 + 3x2 + x3 + 4x4 + 3x5 + 2x6 + 2x7 + x8 + 3x9,
subject to
3x2 + x4 + x5 ≥ 3
x1 + x2 ≤ 1
x2 + x4 − x5 − x6 ≤ −1
x2 + 2x6 + 3x7 + x8 + 2x9 ≥ 4
−x3 + 2x5 + x6 + 2x7 − 2x8 + x9 ≤ 5
and
all xj binary.

Develop the tightest possible formulation of this problem by using the techniques of automatic problem reprocessing (fixing variables, deleting redundant constraints, and tightening constraints). Then use this tightened formulation to determine an optimal solution by inspection.

12.9-1. Consider the following problem:

Maximize Z = 3x1 + 2x2 + 4x3 + x4,
subject to
x1 ∈ {1, 3}, x2 ∈ {1, 2}, x3 ∈ {2, 3}, x4 ∈ {1, 2, 3, 4},
all these variables must have different values,
x1 + x2 + x3 + x4 ≤ 10.

Use the techniques of constraint programming (domain reduction, constraint propagation, a search procedure, and enumeration) to identify all the feasible solutions and then to find an optimal solution. Show your work.

12.9-2. Consider the following problem:

Maximize Z = 5x1 − x1² + 8x2 − x2² + 10x3 − x3² + 15x4 − x4² + 20x5 − x5²,
subject to
x1 ∈ {3, 6, 12}, x2 ∈ {3, 6}, x3 ∈ {3, 6, 9, 12}, x4 ∈ {6, 12}, x5 ∈ {9, 12, 15, 18},
all these variables must have different values,
x1 + x3 + x4 ≤ 25.

Use the techniques of constraint programming (domain reduction, constraint propagation, a search procedure, and enumeration) to identify all the feasible solutions and then to find an optimal solution. Show your work.

12.9-3. Consider the following problem:

Maximize Z = 100x1 − 3x1² + 400x2 − 5x2² + 200x3 − 4x3² + 100x4 − 2x4²,
subject to
x1 ∈ {25, 30}, x2 ∈ {20, 25, 30, 35, 40, 50}, x3 ∈ {20, 25, 30}, x4 ∈ {20, 25},
all these variables must have different values,
x2 + x3 ≤ 60,
x1 + x3 ≤ 50.

Use the techniques of constraint programming (domain reduction, constraint propagation, a search procedure, and enumeration) to identify all the feasible solutions and then to find an optimal solution. Show your work.

12.9-4. Consider the Job Shop Co. example introduced in Sec. 9.3. Table 9.25 shows its formulation as an assignment problem. Use global constraints to formulate a compact constraint programming model for this assignment problem.

12.9-5. Consider the problem of assigning swimmers to the different legs of a medley relay team that is presented in Prob. 9.3-4. The answer in the back of the book shows the formulation of this problem as an assignment problem. Use global constraints to formulate a compact constraint programming model for this assignment problem.

12.9-6. Consider the problem of determining the best plan for how many days to study for each of four final examinations that is presented in Prob. 11.3-3. Formulate a compact constraint programming model for this problem.
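For small problems of the kind in Prob. 12.9-1, the enumeration step can be checked with a short script that filters the Cartesian product of the domains through the alldifferent and sum constraints. This brute-force filter is only a device for verifying an answer, not the constraint-propagation procedure of Sec. 12.9:

```python
from itertools import product

# Data from Prob. 12.9-1
domains = [(1, 3), (1, 2), (2, 3), (1, 2, 3, 4)]

def objective(x):
    return 3*x[0] + 2*x[1] + 4*x[2] + x[3]

# Enumerate the Cartesian product of the domains, keeping only the
# assignments that satisfy alldifferent and the sum constraint.
feasible = [x for x in product(*domains)
            if len(set(x)) == 4 and sum(x) <= 10]

best = max(feasible, key=objective)
print(feasible)
print(best, objective(best))
```

A constraint programming solver would instead shrink the domains by propagation before searching; here the domains are so small that full enumeration (32 candidate assignments) is instant.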
12.9-7. Problem 11.3-2 describes how the owner of a chain of three grocery stores needs to determine how many crates of fresh strawberries should be allocated to each of the stores. Formulate a compact constraint programming model for this problem.

12.9-8. One powerful feature of constraint programming is that variables can be used as subscripts for the terms in the objective function. For example, consider the following traveling salesman problem. The salesman needs to visit each of n cities (city 1, 2, . . . , n) exactly once, starting in city 1 (his home city) and returning to city 1 after completing the tour. Let cij be the distance from city i to city j for i, j = 1, 2, . . . , n (i ≠ j). The objective is to determine which route to follow so as to minimize the total distance of the tour. (As discussed further in Chap. 14, this traveling salesman problem is a famous classic OR problem with many applications that have nothing to do with salesmen.) Letting the decision variable xj (j = 1, 2, . . . , n, n + 1) denote the jth city visited by the salesman, where x1 = 1 and xn+1 = 1, constraint programming allows writing the objective as

Minimize Z = Σ_{j=1}^{n} c_{xj, xj+1}.

Using this objective function, formulate a complete constraint programming model for this problem.

12.10-1. From the bottom part of the selected references given at the end of the chapter, select one of these award-winning applications of integer programming. Read this article and then write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.

12.10-2. From the bottom part of the selected references given at the end of the chapter, select three of these award-winning applications of integer programming. For each one, read the article and then write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.
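The variable-subscript objective of Prob. 12.9-8 can be demonstrated by brute force on a tiny instance. The 4-city distance matrix below is made up purely for illustration, and the full enumeration of tours stands in for the much smarter search a constraint programming solver would perform:

```python
from itertools import permutations

# Hypothetical symmetric distances c[i][j] between 4 cities
# (cities are numbered 0..3 here, with city 0 as the home city).
INF = float("inf")
c = [[INF, 10, 15, 20],
     [10, INF, 35, 25],
     [15, 35, INF, 30],
     [20, 25, 30, INF]]

n = 4
best_tour, best_z = None, INF
# x[0] = home and x[n] = home; permute the remaining cities.
for middle in permutations(range(1, n)):
    x = (0,) + middle + (0,)
    # The objective uses the decision variables themselves as
    # subscripts: Z = sum over j of c[x_j][x_{j+1}].
    z = sum(c[x[j]][x[j + 1]] for j in range(n))
    if z < best_z:
        best_tour, best_z = x, z

print(best_tour, best_z)
```

The point of the exercise is that the index expression c[x[j]][x[j+1]] is exactly what a constraint programming language lets you write directly in the model, whereas an integer programming formulation would need auxiliary binary variables.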
■ CASES

CASE 12.1 Capacity Concerns

Bentley Hamilton throws the business section of The New York Times onto the conference room table and watches as his associates jolt upright in their overstuffed chairs.

Mr. Hamilton wants to make a point.

He throws the front page of The Wall Street Journal on top of The New York Times and watches as his associates widen their eyes, once heavy with boredom.

Mr. Hamilton wants to make a big point.

He then throws the front page of The Financial Times on top of the newspaper pile and watches as his associates dab the fine beads of sweat off their brows.

Mr. Hamilton wants his point indelibly etched into his associates' minds.

"I have just presented you with three leading financial newspapers carrying today's top business story," Mr. Hamilton declares in a tight, angry voice. "My dear associates, our company is going to hell in a handbasket! Shall I read you the headlines? From The New York Times, 'CommuniCorp stock drops to lowest in 52 weeks.' From The Wall Street Journal, 'CommuniCorp loses 25 percent of the pager market in only one year.' Oh, and my favorite, from The Financial Times, 'CommuniCorp cannot CommuniCate: CommuniCorp stock drops because of internal communications disarray.' How did our company fall into such dire straits?"

Mr. Hamilton throws a transparency showing a line sloping slightly upward onto the overhead projector. "This is a graph of our productivity over the last 12 months. As you can see from the graph, productivity in our pager production facility has increased steadily over the last year. Clearly, productivity is not the cause of our problem."

Mr. Hamilton throws a second transparency showing a line sloping steeply upward onto the overhead projector. "This is a graph of our missed or late orders over the last 12 months." Mr. Hamilton hears an audible gasp from his associates. "As you can see from the graph, our missed or late orders have increased steadily and significantly over the past 12 months.
I think this trend explains why we have been losing market share, causing our stock to drop to its lowest level in 52 weeks. We have angered and lost the business of retailers, our customers who depend upon on-time deliveries to meet the demand of consumers."

"Why have we missed our delivery dates when our productivity level should have allowed us to fill all orders?" Mr. Hamilton asks. "I called several departments to ask this question."

"It turns out that we have been producing pagers for the hell of it!" Mr. Hamilton says in disbelief. "The marketing and sales departments do not communicate with the manufacturing department, so manufacturing executives do not know what pagers to produce to fill orders. The manufacturing executives want to keep the plant running, so they produce pagers regardless of whether the pagers have been ordered. Finished pagers are sent to the warehouse, but marketing and sales executives do not know the number and styles of pagers in the warehouse. They try to communicate with warehouse executives to determine if the pagers in inventory can fill the orders, but they rarely receive answers to their questions."

Mr. Hamilton pauses and looks directly at his associates. "Ladies and gentlemen, it seems to me that we have a serious internal communications problem. I intend to correct this problem immediately. I want to begin by installing a companywide computer network to ensure that all departments have access to critical documents and are able to easily communicate with each other through e-mail. Because this intranet will represent a large change from the current communications infrastructure, I expect some bugs in the system and some resistance from employees. I therefore want to phase in the installation of the intranet."

Mr. Hamilton passes the following timeline and requirements chart to his associates (IN = Intranet).
Month 1: IN Education
Month 2: Install IN in Sales
Month 3: Install IN in Manufacturing
Month 4: Install IN in Warehouse
Month 5: Install IN in Marketing

Department        Number of Employees
Sales                     60
Manufacturing            200
Warehouse                 30
Marketing                 75

Mr. Hamilton proceeds to explain the timeline and requirements chart. "In the first month, I do not want to bring any department onto the intranet; I simply want to disseminate information about it and get buy-in from employees. In the second month, I want to bring the sales department onto the intranet since the sales department receives all critical information from customers. In the third month, I want to bring the manufacturing department onto the intranet. In the fourth month, I want to install the intranet at the warehouse, and in the fifth and final month, I want to bring the marketing department onto the intranet. The requirements chart under the timeline lists the number of employees requiring access to the intranet in each department."

Mr. Hamilton turns to Emily Jones, the head of Corporate Information Management. "I need your help in planning for the installation of the intranet. Specifically, the company needs to purchase servers for the internal network. Employees will connect to company servers and download information to their own desktop computers."

Type of Server                Number of Employees Server Supports    Cost of Server
Standard Intel Pentium PC     Up to 30 employees                     $ 2,500
Enhanced Intel Pentium PC     Up to 80 employees                     $ 5,000
SGI Workstation               Up to 200 employees                    $10,000
Sun Workstation               Up to 2,000 employees                  $25,000

Mr. Hamilton passes Emily the above chart detailing the types of servers available, the number of employees each server supports, and the cost of each server. "Emily, I need you to decide what servers to purchase and when to purchase them to minimize cost and to ensure that the company possesses enough server capacity to follow the intranet implementation timeline," Mr. Hamilton says. "For example, you may decide to buy one large server during the first month to support all employees, or buy several small servers during the first month to support all employees, or buy one small server each month to support each new group of employees gaining access to the intranet."

"There are several factors that complicate your decision," Mr. Hamilton continues. "Two server manufacturers are willing to offer discounts to CommuniCorp. SGI is willing to give you a discount of 10 percent off each server purchased, but only if you purchase servers in the first or second month. Sun is willing to give you a 25 percent discount off all servers purchased in the first two months. You are also limited in the amount of money you can spend during the first month. CommuniCorp has already allocated much of the budget for the next two months, so you only have a total of $9,500 available to purchase servers in months 1 and 2. Finally, the Manufacturing Department requires at least one of the three more powerful servers. Have your decision on my desk at the end of the week."

(a) Emily first decides to evaluate the number and type of servers to purchase on a month-to-month basis. For each month, formulate an IP model to determine which servers Emily should purchase in that month to minimize costs in that month and support the new users. How many and which types of servers should she purchase in each month? How much is the total cost of the plan?
(b) Emily realizes that she could perhaps achieve savings if she bought a larger server in the initial months to support users in the final months. She therefore decides to evaluate the number and type of servers to purchase over the entire planning period. Formulate an IP model to determine which servers Emily should purchase in which months to minimize total cost and support all new users.
How many and which types of servers should she purchase in each month? How much is the total cost of the plan?
(c) Why is the answer using the first method different from that using the second method?
(d) Are there other costs that Emily is not accounting for in her problem formulation? If so, what are they?
(e) What further concerns might the various departments of CommuniCorp have regarding the intranet?

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 12.2 Assigning Art

Plans are being made for an exhibit of up-and-coming modern artists at the San Francisco Museum of Modern Art. A long list of possible artists, their available pieces, and the display prices for these pieces has been compiled. There also are various constraints regarding the mix of pieces that can be chosen. BIP now needs to be applied to make the selection of the pieces for the exhibit under three different scenarios.

CASE 12.3 Stocking Sets

Poor inventory management at the local warehouse for Furniture City has led to overstocking of many items and frequent shortages of some others. To begin to rectify this situation, the 20 most popular kitchen sets in Furniture City's kitchen department have just been identified. These kitchen sets are composed of up to eight features in a variety of styles, so each of these styles should be well stocked in the warehouse. However, the limited amount of warehouse space allocated to the kitchen department means that some difficult stocking decisions need to be made. After gathering the relevant data for the 20 kitchen sets, BIP now needs to be applied to determine how many of each feature and style Furniture City should stock in the local warehouse under three different scenarios.
CASE 12.4 Assigning Students to Schools, Revisited Again

As introduced in Case 4.3 and revisited in Case 7.3, the Springfield School Board needs to assign the middle school students in the city's six residential areas to the three remaining middle schools. The new complication is that the school board has just made the decision to prohibit the splitting of residential areas among multiple schools. Therefore, since each of the six areas must be assigned to a single school, BIP now must be applied to make these assignments under the various scenarios considered in Case 4.3.

CHAPTER 13

Nonlinear Programming

The fundamental role of linear programming in OR is accurately reflected by the fact that it is the focus of a third of this book. A key assumption of linear programming is that all its functions (objective function and constraint functions) are linear. Although this assumption essentially holds for many practical problems, it frequently does not hold. Therefore, it often is necessary to deal directly with nonlinear programming problems, so we turn our attention to this important area.

In one general form,1 the nonlinear programming problem is to find x = (x1, x2, . . . , xn) so as to

Maximize f(x),
subject to
gi(x) ≤ bi,   for i = 1, 2, . . . , m,
and
x ≥ 0,

where f(x) and the gi(x) are given functions of the n decision variables.2

There are many different types of nonlinear programming problems, depending on the characteristics of the f(x) and gi(x) functions. Different algorithms are used for the different types. For certain types where the functions have simple forms, problems can be solved relatively efficiently. For some other types, solving even small problems is a real challenge. Because of the many types and the many algorithms, nonlinear programming is a particularly large subject. We do not have the space to survey it completely.
However, we do present a few sample applications and then introduce some of the basic ideas for solving certain important types of nonlinear programming problems. Both Appendixes 2 and 3 provide useful background for this chapter, and we recommend that you review these appendixes as you study the next few sections.

1 The other legitimate forms correspond to those for linear programming listed in Sec. 3.2. Section 4.6 describes how to convert these other forms to the form given here.
2 For simplicity, we assume throughout the chapter that all these functions either are differentiable everywhere or are piecewise linear functions (discussed in Secs. 13.1 and 13.8).

■ 13.1 SAMPLE APPLICATIONS

The following examples illustrate a few of the many important types of problems to which nonlinear programming has been applied.

The Product-Mix Problem with Price Elasticity

In product-mix problems, such as the Wyndor Glass Co. problem introduced in Sec. 3.1, the goal is to determine the optimal mix of production levels for a firm's products, given limitations on the resources needed to produce those products, in order to maximize the firm's total profit. In some cases, there is a fixed unit profit associated with each product, so the resulting objective function will be linear. However, in many product-mix problems, certain factors introduce nonlinearities into the objective function. For example, a large manufacturer may encounter price elasticity, whereby the amount of a product that can be sold has an inverse relationship to the price charged. Thus, the price-demand curve for a typical product might look like the one shown in Fig. 13.1, where p(x) is the price required in order to be able to sell x units. The firm's profit from producing and selling x units of the product then would be the sales revenue, xp(x), minus the production and distribution costs.
Therefore, if the unit cost for producing and distributing the product is fixed at c (see the dashed line in Fig. 13.1), the firm's profit from producing and selling x units is given by the nonlinear function

P(x) = xp(x) − cx,

as plotted in Fig. 13.2. If each of the firm's n products has a similar profit function, say, Pj(xj) for producing and selling xj units of product j (j = 1, 2, . . . , n), then the overall objective function is

f(x) = Σ_{j=1}^{n} Pj(xj),

a sum of nonlinear functions.

■ FIGURE 13.1 Price-demand curve, where p(x) is the price required to sell x units and the dashed line at c is the unit cost.

■ FIGURE 13.2 Profit function P(x) = x[p(x) − c].

Another reason that nonlinearities can arise in the objective function is the fact that the marginal cost of producing another unit of a given product varies with the production level. For example, the marginal cost may decrease when the production level is increased because of a learning-curve effect (more efficient production with more experience). On the other hand, it may increase instead, because special measures such as overtime or more expensive production facilities may be needed to increase production further.

Nonlinearities also may arise in the gi(x) constraint functions in a similar fashion. For example, if there is a budget constraint on total production cost, the cost function will be nonlinear if the marginal cost of production varies as just described. For constraints on the other kinds of resources, gi(x) will be nonlinear whenever the use of the corresponding resource is not strictly proportional to the production levels of the respective products.

The Transportation Problem with Volume Discounts on Shipping Costs

As illustrated by the P & T Company example in Sec. 9.1, a typical application of the transportation problem is to determine an optimal plan for shipping goods from various sources to various destinations, given supply and demand constraints, in order to minimize total shipping cost.
It was assumed in Chap. 9 that the cost per unit shipped from a given source to a given destination is fixed, regardless of the amount shipped. In actuality, this cost may not be fixed. Volume discounts sometimes are available for large shipments, so that the marginal cost of shipping one more unit might follow a pattern like the one shown in Fig. 13.3. The resulting cost of shipping x units then is given by a nonlinear function C(x), which is a piecewise linear function with slope equal to the marginal cost, like the one shown in Fig. 13.4. [The function in Fig. 13.4 consists of a line segment with slope 6.5 from (0, 0) to (0.6, 3.9), a second line segment with slope 5 from (0.6, 3.9) to (1.5, 8.4), a third line segment with slope 4 from (1.5, 8.4) to (2.7, 13.2), and a fourth line segment with slope 3 from (2.7, 13.2) to (4.5, 18.6).]

Consequently, if each combination of source and destination has a similar shipping cost function, so that the cost of shipping xij units from source i (i = 1, 2, . . . , m) to destination j (j = 1, 2, . . . , n) is given by a nonlinear function Cij(xij), then the overall objective function to be minimized is

f(x) = Σ_{i=1}^{m} Σ_{j=1}^{n} Cij(xij).

Even with this nonlinear objective function, the constraints normally are still the special linear constraints that fit the transportation problem model in Sec. 9.1.

■ FIGURE 13.3 Marginal shipping cost, decreasing in steps (6.5, 5, 4, 3) at shipment amounts 0.6, 1.5, 2.7, and 4.5.

■ FIGURE 13.4 Shipping cost function, piecewise linear through (0.6, 3.9), (1.5, 8.4), (2.7, 13.2), and (4.5, 18.6).

Portfolio Selection with Risky Securities

It now is common practice for professional managers of large stock portfolios to use computer models based partially on nonlinear programming to guide them.
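As a brief aside before turning to the next application, the piecewise linear cost function of Fig. 13.4 can be computed directly from the slopes and breakpoints listed above; a minimal sketch:

```python
# The shipping cost function C(x) of Fig. 13.4, assembled from the marginal
# costs (slopes) and breakpoints given in the text.

SEGMENTS = [  # (lower end, upper end, marginal cost on this segment)
    (0.0, 0.6, 6.5),
    (0.6, 1.5, 5.0),
    (1.5, 2.7, 4.0),
    (2.7, 4.5, 3.0),
]

def shipping_cost(x):
    """Total cost C(x) of shipping x units, for 0 <= x <= 4.5."""
    if not 0.0 <= x <= 4.5:
        raise ValueError("x is outside the range covered by Fig. 13.4")
    cost = 0.0
    for lo, hi, slope in SEGMENTS:
        if x > lo:
            cost += slope * (min(x, hi) - lo)  # portion shipped in this segment
    return cost
```

The breakpoint values reproduce the figure to floating-point accuracy: C(0.6) = 3.9, C(1.5) = 8.4, C(2.7) = 13.2, and C(4.5) = 18.6.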
Because investors are concerned about both the expected return (gain) and the risk associated with their investments, nonlinear programming is used to determine a portfolio that, under certain assumptions, provides an optimal trade-off between these two factors. This approach is based largely on path-breaking research done by Harry Markowitz and William Sharpe that helped them win the 1990 Nobel Prize in Economics.

An Application Vignette

The Bank Hapoalim Group is Israel's largest banking group, providing services throughout the country. As of the beginning of 2012, it had approximately 300 branches and eight regional business centers in Israel. It also operates worldwide through many branches, offices, and subsidiaries in major financial centers in North and South America and Europe.

A major part of Bank Hapoalim's business involves providing investment advisors for its customers. To stay ahead of its competitors, management embarked on a restructuring program to provide these investment advisors with state-of-the-art methodology and technology. An OR team was formed to do this. The team concluded that it needed to develop a flexible decision-support system for the investment advisors that could be tailored to meet the diverse needs of every customer. Each customer would be asked to provide extensive information about his or her needs, including choosing among various alternatives regarding his or her investment objectives, investment horizon, choice of an index to strive to exceed, and preference with regard to liquidity and currency. A series of questions also would be asked to ascertain the customer's risk-taking classification.

The natural choice of the model to drive the resulting decision-support system (called the Opti-Money System) was the classical nonlinear programming model for portfolio selection described in this section of the book, with modifications to incorporate all the information about the needs of the individual customer. This model generates an optimal weighting of 60 possible asset classes of equities and bonds in the portfolio, and the investment advisor then works with the customer to choose the specific equities and bonds within these classes.

During the first year of full implementation, the bank's investment advisors held some 133,000 consultation sessions with 63,000 customers while using this decision-support system. The annual earnings over benchmarks to customers who follow the investment advice provided by the system total approximately US$244 million, while adding more than US$31 million to the bank's annual income.

Source: M. Avriel, H. Pri-Zan, R. Meiri, and A. Peretz: "Opti-Money at Bank Hapoalim: A Model-Based Investment Decision-Support System for Individual Customers," Interfaces, 34(1): 39–50, Jan.–Feb. 2004. (A link to this article is provided on our website, www.mhhe.com/hillier.)

A nonlinear programming model can be formulated for this problem as follows. Suppose that n stocks (securities) are being considered for inclusion in the portfolio, and let the decision variables xj (j = 1, 2, . . . , n) be the number of shares of stock j to be included. Let μj and σjj be the (estimated) mean and variance, respectively, of the return on each share of stock j, where σjj measures the risk of this stock. For i = 1, 2, . . . , n (i ≠ j), let σij be the covariance of the return on one share each of stock i and stock j. (Because it would be difficult to estimate all the σij values, the usual approach is to make certain assumptions about market behavior that enable us to calculate σij directly from σii and σjj.) Then the expected value R(x) and the variance V(x) of the total return from the entire portfolio are

R(x) = Σ_{j=1}^{n} μj xj

and

V(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} σij xi xj,

where V(x) measures the risk associated with the portfolio.
One way to consider the trade-off between these two factors is to use V(x) as the objective function to be minimized and then impose the constraint that R(x) must be no smaller than the minimum acceptable expected return. The complete nonlinear programming model then would be

Minimize V(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} σij xi xj,

subject to

Σ_{j=1}^{n} μj xj ≥ L,

Σ_{j=1}^{n} Pj xj ≤ B,

and

xj ≥ 0,  for j = 1, 2, . . . , n,

where L is the minimum acceptable expected return, Pj is the price for each share of stock j, and B is the amount of money budgeted for the portfolio.

One drawback of this formulation is that it is relatively difficult to choose an appropriate value for L for obtaining the best trade-off between R(x) and V(x). Therefore, rather than stopping with one choice of L, it is common to use a parametric (nonlinear) programming approach to generate the optimal solution as a function of L over a wide range of values of L. The next step is to examine the values of R(x) and V(x) for these solutions that are optimal for some value of L and then to choose the solution that seems to give the best trade-off between these two quantities. This procedure often is referred to as generating the solutions on the efficient frontier of the two-dimensional graph of (R(x), V(x)) points for feasible x. The reason is that the (R(x), V(x)) point for an optimal x (for some L) lies on the frontier (boundary) of the feasible points. Furthermore, each optimal x is efficient in the sense that no other feasible solution is at least as good on one measure (R or V) and strictly better on the other (larger R or smaller V).

This application of nonlinear programming is a particularly important one. The use of nonlinear programming for portfolio optimization now lies at the center of modern financial analysis.
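The model above can be made concrete on a tiny two-stock instance. The numbers below (means, covariances, B, and L) are hypothetical, and a brute-force grid search stands in for a real quadratic programming algorithm; a sketch only:

```python
# A two-stock instance of the portfolio model, with hypothetical data.
# Share prices are taken as 1, so the budget constraint is x1 + x2 <= B
# (spent fully below).  A grid search stands in for a proper QP solver.

mu    = [0.10, 0.20]             # estimated mean return per share
sigma = [[0.04, -0.01],          # covariance matrix, sigma[i][j]
         [-0.01, 0.16]]
B, L  = 100.0, 15.0              # budget and minimum acceptable expected return

def R(x):
    """Expected return R(x) of portfolio x."""
    return sum(mu[j] * x[j] for j in range(2))

def V(x):
    """Variance V(x) of portfolio x, the objective to be minimized."""
    return sum(sigma[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

best = None
for k in range(1001):            # x1 on a grid, x2 takes the rest of B
    x = [B * k / 1000.0, B * (1000 - k) / 1000.0]
    if R(x) >= L and (best is None or V(x) < V(best)):
        best = x
```

With these numbers the return constraint is active at the optimum: best = [50.0, 50.0], where R = 15 and V = 450. Sweeping L over a range of values traces out the efficient frontier described above.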
(More broadly, the relatively new field of financial engineering has arisen to focus on the application of OR techniques such as nonlinear programming to various finance problems, including portfolio optimization.) As illustrated by the application vignette in this section, this kind of application of nonlinear programming is having a tremendous impact in practice. Much research also continues to be done on the properties and application of both the above model and related nonlinear programming models to sophisticated kinds of portfolio analysis.3

■ 13.2 GRAPHICAL ILLUSTRATION OF NONLINEAR PROGRAMMING PROBLEMS

When a nonlinear programming problem has just one or two variables, it can be represented graphically much like the Wyndor Glass Co. example for linear programming in Sec. 3.1. Because such a graphical representation gives considerable insight into the properties of optimal solutions for linear and nonlinear programming, let us look at a few examples. To highlight the difference between linear and nonlinear programming, we shall use some nonlinear variations of the Wyndor Glass Co. problem.

Figure 13.5 shows what happens to this problem if the only changes in the model shown in Sec. 3.1 are that both the second and the third functional constraints are replaced by the single nonlinear constraint 9x1² + 5x2² ≤ 216. Compare Fig. 13.5 with Fig. 3.3. The optimal solution still happens to be (x1, x2) = (2, 6). Furthermore, it still lies on the boundary of the feasible region. However, it is not a corner-point feasible (CPF) solution. The optimal solution could have been a CPF solution with a different objective function (check Z = 3x1 + x2), but the fact that it need not be one means that we no longer have the tremendous simplification used in linear programming of limiting the search for an optimal solution to just the CPF solutions.

3 Important research includes the following papers. B. I. Jacobs, K. N. Levy, and H. M.
Markowitz: "Portfolio Optimization with Factors, Scenarios, and Realistic Short Positions," Operations Research, 53(4): 586–599, July–Aug. 2005; A. F. Siegel and A. Woodgate: "Performance of Portfolios Optimized with Estimation Error," Management Science, 53(6): 1005–1015, June 2007; H. Konno and T. Koshizuka: "Mean-Absolute Deviation Model," IIE Transactions, 37(10): 893–900, Oct. 2005; T. P. Filomena and M. A. Lejeune: "Stochastic Portfolio Optimization with Proportional Transaction Costs: Convex Reformulations and Computational Experiments," Operations Research Letters, 40(3): 212–217, May 2012.

■ FIGURE 13.5 The Wyndor Glass Co. example with the nonlinear constraint 9x1² + 5x2² ≤ 216 replacing the original second and third functional constraints. The model is to maximize Z = 3x1 + 5x2, subject to x1 ≤ 4, 9x1² + 5x2² ≤ 216, and x1 ≥ 0, x2 ≥ 0. The optimal solution is (2, 6), where the objective function line Z = 36 = 3x1 + 5x2 touches the boundary of the feasible region.

Now suppose that the linear constraints of Sec. 3.1 are kept unchanged, but the objective function is made nonlinear. For example, if

Z = 126x1 − 9x1² + 182x2 − 13x2²,

then the graphical representation in Fig. 13.6 indicates that the optimal solution is x1 = 8/3, x2 = 5, which again lies on the boundary of the feasible region. (The value of Z for this optimal solution is Z = 857, so Fig. 13.6 depicts the fact that the locus of all points with Z = 857 intersects the feasible region at just this one point, whereas the locus of points with any larger Z does not intersect the feasible region at all.) On the other hand, if

Z = 54x1 − 9x1² + 78x2 − 13x2²,

then Fig. 13.7 illustrates that the optimal solution turns out to be (x1, x2) = (3, 3), which lies inside the boundary of the feasible region. (You can check that this solution is optimal by using calculus to derive it as the unconstrained global maximum; because it also satisfies the constraints, it must be optimal for the constrained problem.)
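That calculus check is easy to carry out numerically; the sketch below verifies that the stationary point of the Fig. 13.7 objective function also satisfies the original Wyndor constraints:

```python
# Verifying the Fig. 13.7 claim: the unconstrained maximizer of
# Z = 54*x1 - 9*x1**2 + 78*x2 - 13*x2**2 solves dZ/dx1 = dZ/dx2 = 0,
# and that point also satisfies the original Wyndor constraints.

def Z(x1, x2):
    return 54*x1 - 9*x1**2 + 78*x2 - 13*x2**2

# Stationary point: 54 - 18*x1 = 0 and 78 - 26*x2 = 0.
x1_star, x2_star = 54 / 18, 78 / 26

def feasible(x1, x2):
    """The original Wyndor constraints from Sec. 3.1."""
    return (x1 <= 4 and 2*x2 <= 12 and 3*x1 + 2*x2 <= 18
            and x1 >= 0 and x2 >= 0)
```

Here (x1*, x2*) = (3, 3) with Z = 198, matching the innermost objective function curve in Fig. 13.7; because the negative squared terms make Z concave, this interior stationary point is the global maximum.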
Therefore, a general algorithm for solving similar problems needs to consider all solutions in the feasible region, not just those on the boundary.

Another complication that arises in nonlinear programming is that a local maximum need not be a global maximum (the overall optimal solution). For example, consider the function of a single variable plotted in Fig. 13.8. Over the interval 0 ≤ x ≤ 5, this function has three local maxima (x = 0, x = 2, and x = 4), but only one of these, x = 4, is a global maximum. (Similarly, there are local minima at x = 1, 3, and 5, but only x = 5 is a global minimum.)

Nonlinear programming algorithms generally are unable to distinguish between a local maximum and a global maximum (except by finding another better local maximum). Therefore, it becomes crucial to know the conditions under which any local maximum is guaranteed to be a global maximum over the feasible region.

■ FIGURE 13.6 The Wyndor Glass Co. example with the original feasible region but with the nonlinear objective function Z = 126x1 − 9x1² + 182x2 − 13x2² replacing the original objective function. The constraints are x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18, and x1 ≥ 0, x2 ≥ 0; objective function curves are shown for Z = 807, 857, and 907.

■ FIGURE 13.7 The Wyndor Glass Co. example with the original feasible region but with another nonlinear objective function, Z = 54x1 − 9x1² + 78x2 − 13x2², replacing the original objective function. The optimal solution is (3, 3); objective function curves are shown for Z = 117, 162, 189, and 198.

■ FIGURE 13.8 A function with several local maxima (x = 0, 2, 4), but only x = 4 is a global maximum.

■ FIGURE 13.9 Examples of (a) a concave function and (b) a convex function.

You may recall from calculus that
when we maximize an ordinary (doubly differentiable) function of a single variable f(x) without any constraints, this guarantee can be given when

d²f/dx² ≤ 0  for all x.

Such a function that is always "curving downward" (or not curving at all) is called a concave function.4 Similarly, if ≤ is replaced by ≥, so that the function is always "curving upward" (or not curving at all), it is called a convex function.5 (Thus, a linear function is both concave and convex.) See Fig. 13.9 for examples. Then note that Fig. 13.8 illustrates a function that is neither concave nor convex because it alternates between curving upward and curving downward.

Functions of multiple variables also can be characterized as concave or convex if they always curve downward or curve upward. These intuitive definitions are restated in precise terms, along with further elaboration on these concepts, in Appendix 2. (Concave and convex functions play a fundamental role in nonlinear programming, so if you are not very familiar with such functions, we suggest that you read further in Appendix 2.) Appendix 2 also provides a convenient test for checking whether a function of two variables is concave, convex, or neither.

Here is a convenient way of checking this for a function of more than two variables when the function consists of a sum of smaller functions of just one or two variables each. If each smaller function is concave, then the overall function is concave. Similarly, the overall function is convex if each smaller function is convex.

To illustrate, consider the function

f(x1, x2, x3) = 4x1 − x1² − (x2 − x3)² = [4x1 − x1²] + [−(x2 − x3)²],

which is the sum of the two smaller functions given in square brackets.

4 Concave functions sometimes are referred to as concave downward.
5 Convex functions sometimes are referred to as concave upward.
The first smaller function 4x1 − x1² is a function of the single variable x1, so it can be found to be concave by noting that its second derivative is negative. The second smaller function −(x2 − x3)² is a function of just x2 and x3, so the test for functions of two variables given in Appendix 2 is applicable. In fact, Appendix 2 uses this particular function to illustrate the test and finds that the function is concave. Because both smaller functions are concave, the overall function f(x1, x2, x3) must be concave.

If a nonlinear programming problem has no constraints, the objective function being concave guarantees that a local maximum is a global maximum. (Similarly, the objective function being convex ensures that a local minimum is a global minimum.) If there are constraints, then one more condition will provide this guarantee, namely, that the feasible region is a convex set. For this reason, convex sets play a key role in nonlinear programming.

As discussed in Appendix 2, a convex set is simply a set of points such that, for each pair of points in the collection, the entire line segment joining these two points is also in the collection. Thus, the feasible region for the original Wyndor Glass Co. problem (see Fig. 13.6 or 13.7) is a convex set. In fact, the feasible region for any linear programming problem is a convex set. Similarly, the feasible region in Fig. 13.5 is a convex set.

In general, the feasible region for a nonlinear programming problem is a convex set whenever all the gi(x) [for the constraints gi(x) ≤ bi] are convex functions. For the example of Fig. 13.5, both of its gi(x) are convex functions, since g1(x) = x1 (a linear function is automatically both concave and convex) and g2(x) = 9x1² + 5x2² (both 9x1² and 5x2² are convex functions, so their sum is a convex function). These two convex gi(x) lead to the feasible region of Fig. 13.5 being a convex set.

Now let's see what happens when just one of these gi(x) is a concave function instead.
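As an aside first, the concavity claim for this example can be spot-checked numerically with a midpoint test, f((a + b)/2) ≥ [f(a) + f(b)]/2, on sampled point pairs; this is a sampling check under our own tolerance, not a proof:

```python
# Midpoint-concavity spot check for
# f(x1,x2,x3) = [4*x1 - x1**2] + [-(x2 - x3)**2].
# A concave function satisfies f((a+b)/2) >= (f(a) + f(b))/2 for all a, b;
# sampling point pairs checks (but cannot prove) the claim.
import random

def f(x):
    x1, x2, x3 = x
    return (4*x1 - x1**2) - (x2 - x3)**2

def midpoint_concave(func, dim=3, trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.uniform(-10, 10) for _ in range(dim)]
        b = [rng.uniform(-10, 10) for _ in range(dim)]
        mid = [(ai + bi) / 2 for ai, bi in zip(a, b)]
        if func(mid) < (func(a) + func(b)) / 2 - 1e-9:
            return False   # found a pair violating midpoint concavity
    return True
```

The test passes for f, while a convex term such as x1² fails it (for a convex function the inequality runs the other way).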
In particular, suppose that the only changes in the original Wyndor Glass Co. example are that the second and third functional constraints are replaced by 2x2 ≤ 14 and 8x1 − x1² + 14x2 − x2² ≤ 49. Therefore, the new g3(x) = 8x1 − x1² + 14x2 − x2² is a concave function since both 8x1 − x1² and 14x2 − x2² are concave functions. The new feasible region shown in Fig. 13.10 is not a convex set. Why? Because this feasible region contains pairs of points, for example, (0, 7) and (4, 3), such that part of the line segment joining these two points is not in the feasible region. Consequently, we cannot guarantee that a local maximum is a global maximum. In fact, this example has two local maxima, (0, 7) and (4, 3), but only (0, 7) is a global maximum.

Therefore, to guarantee that a local maximum is a global maximum for a nonlinear programming problem with constraints gi(x) ≤ bi (i = 1, 2, . . . , m) and x ≥ 0, the objective function f(x) must be a concave function and each gi(x) must be a convex function. Such a problem is called a convex programming problem, which is one of the key types of nonlinear programming problems discussed in Sec. 13.3.

■ FIGURE 13.10 The Wyndor Glass Co. example with 2x2 ≤ 14 and a nonlinear constraint, 8x1 − x1² + 14x2 − x2² ≤ 49, replacing the original second and third functional constraints. The model is to maximize Z = 3x1 + 5x2, subject to x1 ≤ 4, 2x2 ≤ 14, 8x1 − x1² + 14x2 − x2² ≤ 49, and x1 ≥ 0, x2 ≥ 0. The feasible region is not a convex set; (0, 7) is the optimal solution, on the line Z = 35 = 3x1 + 5x2, while (4, 3) is only a local maximum, on the line Z = 27 = 3x1 + 5x2.

■ 13.3 TYPES OF NONLINEAR PROGRAMMING PROBLEMS

Nonlinear programming problems come in many different shapes and forms. Unlike the simplex method for linear programming, no single algorithm can solve all these different types of problems. Instead, algorithms have been developed for various individual
The most important classes are introduced briefly in this section. The subsequent sections then describe how some problems of these types can be solved. To simplify the discussion, we will assume throughout that the problems have been formulated (or reformulated) in the general form presented at the beginning of the chapter. Unconstrained Optimization Unconstrained optimization problems have no constraints, so the objective is simply to Maximize f(x) over all values of x  (x1, x2, . . . , xn). As reviewed in Appendix 3, the necessary condition that a particular solution x  x* be optimal when f(x) is a differentiable function is f ᎏ 0 xj at x  x*, for j  1, 2, . . . , n. When f(x) is a concave function, this condition also is sufficient, so then solving for x* reduces to solving the system of n equations obtained by setting the n partial derivatives equal to zero. Unfortunately, for nonlinear functions f(x), these equations often are going to be nonlinear as well, in which case you are unlikely to be able to solve analytically for their simultaneous solution. What then? Sections 13.4 and 13.5 describe algorithmic search procedures for finding x*, first for n  1 and then for n 1. These procedures also play an important role in solving many of the problem types described next, where there are constraints. The reason is that many algorithms for constrained problems are designed so that they can focus on an unconstrained version of the problem during a portion of each iteration. 558 CHAPTER 13 NONLINEAR PROGRAMMING When a variable xj does have a nonnegativity constraint xj  0, the preceding necessary and (perhaps) sufficient condition changes slightly to 冦 f 0 ᎏ xj  0 at x  x*, at x  x*, if if xj*  0 xj* 0 for each such j. This condition is illustrated in Fig. 13.11, where the optimal solution for a problem with a single variable is at x  0 even though the derivative there is negative rather than zero. 
Because this example has a concave function to be maximized subject to a nonnegativity constraint, having the derivative less than or equal to 0 at x  0 is both a necessary and sufficient condition for x  0 to be optimal. A problem that has some nonnegativity constraints but no functional constraints is one special case (m  0) of the next class of problems. Linearly Constrained Optimization Linearly constrained optimization problems are characterized by constraints that completely fit linear programming, so that all the gi(x) constraint functions are linear, but the objective function f(x) is nonlinear. The problem is considerably simplified by having just one nonlinear function to take into account, along with a linear programming feasible region. A number of special algorithms based upon extending the simplex method to consider the nonlinear objective function have been developed. One important special case, which we consider next, is quadratic programming. Quadratic Programming Quadratic programming problems again have linear constraints, but now the objective function f(x) being maximised must be both quadratic and concave. Thus in addition to the concave assumption, the only difference between such a problem and a linear programming problem is that some of the terms in the objective function involve the square of a variable or the product of two variables. ■ FIGURE 13.11 An example that illustrates how an optimal solution can lie at a point where a derivative is negative instead of zero, because that point lies at the boundary of a nonnegativity constraint. f (x) Maximize subject to 28 24 f (x)  24  2x  x2, x ⱖ 0. Global maximum because f (x) is concave and df dx  2 ⱕ 0 at x  0. So x  0 is optimal. 20 16 12 8 4 0 x 1 2 3 4 5 13.3 TYPES OF NONLINEAR PROGRAMMING PROBLEMS 559 Several algorithms have been developed specifically to solve quadratic programming problems very efficiently. 
Section 13.7 presents one such algorithm that involves a direct extension of the simplex method. Quadratic programming is very important, partially because such formulations arise naturally in many applications. For example, the problem of portfolio selection with risky securities described in Sec. 13.1 fits into this format. However, another major reason for its importance is that a common approach to solving general linearly constrained optimization problems is to solve a sequence of quadratic programming approximations.

Convex Programming

Convex programming covers a broad class of problems that actually encompasses as special cases all the preceding types when f(x) is a concave function to be maximized. Continuing to assume the general problem form (including maximization) presented at the beginning of the chapter, the assumptions are that

1. f(x) is a concave function.
2. Each gi(x) is a convex function.

As discussed at the end of Sec. 13.2, these assumptions are enough to ensure that a local maximum is a global maximum. (If the objective were to minimize f(x) instead, subject to either gi(x) ≤ bi or gi(x) ≥ bi for i = 1, 2, . . . , m, the first assumption would change to requiring that f(x) must be a convex function, since this is what is needed to ensure that a local minimum is a global minimum.) You will see in Sec. 13.6 that the necessary and sufficient conditions for such an optimal solution are a natural generalization of the conditions just given for unconstrained optimization and its extension to include nonnegativity constraints. Section 13.9 then describes algorithmic approaches to solving convex programming problems.

Separable Programming

Separable programming is a special case of convex programming, where the one additional assumption is that

3. All the f(x) and gi(x) functions are separable functions.
A separable function is a function where each term involves just a single variable, so that the function is separable into a sum of functions of individual variables. For example, if f(x) is a separable function, it can be expressed as

f(x) = Σ_{j=1}^{n} fj(xj),

where each fj(xj) function includes only the terms involving just xj. In the terminology of linear programming (see Sec. 3.3), separable programming problems satisfy the assumption of additivity but violate the assumption of proportionality when any of the fj(xj) functions are nonlinear functions.

To illustrate, the objective function considered in Fig. 13.6,

f(x1, x2) = 126x1 − 9x1² + 182x2 − 13x2²,

is a separable function because it can be expressed as f(x1, x2) = f1(x1) + f2(x2), where f1(x1) = 126x1 − 9x1² and f2(x2) = 182x2 − 13x2² are each a function of a single variable (x1 and x2, respectively). By the same reasoning, you can verify that the objective function considered in Fig. 13.7 also is a separable function.

It is important to distinguish separable programming problems from other convex programming problems, because any separable programming problem can be closely approximated by a linear programming problem so that the extremely efficient simplex method can be used. This approach is described in Sec. 13.8. (For simplicity, we focus there on the linearly constrained case where the special approach is needed only on the objective function.)

Nonconvex Programming

Nonconvex programming encompasses all nonlinear programming problems that do not satisfy the assumptions of convex programming. Now, even if you are successful in finding a local maximum, there is no assurance that it also will be a global maximum. Therefore, there is no algorithm that will find an optimal solution for all such problems. However, there do exist some algorithms that are relatively well suited for exploring various parts of the feasible region and perhaps finding a global maximum in the process.
We describe this approach in Sec. 13.10. Section 13.10 also will introduce two global optimizers (available with LINGO and MPL) for finding an optimal solution for nonconvex programming problems of moderate size, as well as a search procedure (available with both the standard Excel Solver and the ASPE Solver) that generally will find a near-optimal solution for rather large problems.

Certain specific types of nonconvex programming problems can be solved without great difficulty by special methods. Two especially important such types are discussed briefly next.

Geometric Programming

When we apply nonlinear programming to engineering design problems, as well as certain economics and statistics problems, the objective function and the constraint functions frequently take the form

g(x) = Σ_{i=1}^{N} ci Pi(x),

where

Pi(x) = x1^{ai1} x2^{ai2} ··· xn^{ain},  for i = 1, 2, . . . , N.

In such cases, the ci and aij typically represent physical constants, and the xj are design variables. These functions generally are neither convex nor concave, so the techniques of convex programming cannot be applied directly to these geometric programming problems. However, there is one important case where the problem can be transformed to an equivalent convex programming problem. This case is where all the ci coefficients in each function are strictly positive, so that the functions are generalized positive polynomials (now called posynomials) and the objective function is to be minimized. The equivalent convex programming problem with decision variables y1, y2, . . . , yn is then obtained by setting

xj = e^{yj},  for j = 1, 2, . . . , n,

throughout the original model, so now a convex programming algorithm can be applied. Alternative solution procedures also have been developed for solving these posynomial programming problems, as well as for geometric programming problems of other types.
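As a one-variable illustration (our own example, not one from the text), consider minimizing the posynomial g(x) = x + 4/x for x > 0. Substituting x = e^y gives h(y) = e^y + 4e^(−y), which is convex, so a simple search on h recovers the global minimizer:

```python
# Geometric-programming substitution x = e**y applied to the posynomial
# g(x) = x + 4/x (an illustrative example, not from the text).
# h(y) = e**y + 4*e**(-y) is convex, so bisection on h'(y) finds the
# global minimum.
import math

def h(y):
    return math.exp(y) + 4 * math.exp(-y)

def dh(y):                       # h'(y) = e**y - 4*e**(-y)
    return math.exp(y) - 4 * math.exp(-y)

lo, hi = -10.0, 10.0             # h' < 0 at lo and h' > 0 at hi
for _ in range(60):
    mid = (lo + hi) / 2
    if dh(mid) < 0:
        lo = mid
    else:
        hi = mid
y_star = (lo + hi) / 2
x_star = math.exp(y_star)        # back-transform to the original variable
```

Analytically h'(y) = 0 gives e^(2y) = 4, so y* = ln 2 and x* = 2 with g(x*) = 4, which the bisection recovers.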
Fractional Programming

Suppose that the objective function is in the form of a fraction, i.e., the ratio of two functions,

Maximize f(x) = f1(x)/f2(x).

Such fractional programming problems arise, e.g., when one is maximizing the ratio of output to person-hours expended (productivity), or profit to capital expended (rate of return), or expected value to standard deviation of some measure of performance for an investment portfolio (return/risk). Some special solution procedures have been developed for certain forms of f1(x) and f2(x).

When it can be done, the most straightforward approach to solving a fractional programming problem is to transform it to an equivalent problem of a standard type for which effective solution procedures already are available. To illustrate, suppose that f(x) is of the linear fractional programming form

f(x) = (cx + c0)/(dx + d0),

where c and d are row vectors, x is a column vector, and c0 and d0 are scalars. Also assume that the constraint functions gi(x) are linear, so that the constraints in matrix form are Ax ≤ b and x ≥ 0. Under mild additional assumptions, we can transform the problem to an equivalent linear programming problem by letting

y = x/(dx + d0)  and  t = 1/(dx + d0),

so that x = y/t. This result yields

Maximize Z = cy + c0t,

subject to

Ay − bt ≤ 0,
dy + d0t = 1,

and

y ≥ 0,  t ≥ 0,

which can be solved by the simplex method. More generally, the same kind of transformation can be used to convert a fractional programming problem with concave f1(x), convex f2(x), and convex gi(x) to an equivalent convex programming problem.

The Complementarity Problem

When we deal with quadratic programming in Sec. 13.7, you will see one example of how solving certain nonlinear programming problems can be reduced to solving the complementarity problem. Given variables w1, w2, . . . , wp and z1, z2, . . .
, zp, the complementarity problem is to find a feasible solution for the set of constraints

w = F(z),   w ≥ 0,   z ≥ 0

that also satisfies the complementarity constraint

wᵀz = 0.

Here, w and z are column vectors, F is a given vector-valued function, and the superscript T denotes the transpose (see Appendix 4). The problem has no objective function, so technically it is not a full-fledged nonlinear programming problem. It is called the complementarity problem because of the complementary relationships that either wi = 0 or zi = 0 (or both) for each i = 1, 2, . . . , p. An important special case is the linear complementarity problem, where

F(z) = q + Mz,

where q is a given column vector and M is a given p × p matrix. Efficient algorithms have been developed for solving this problem under suitable assumptions6 about the properties of the matrix M. One type involves pivoting from one basic feasible (BF) solution to the next, much like the simplex method for linear programming. In addition to having applications in nonlinear programming, complementarity problems have applications in game theory, economic equilibrium problems, and engineering equilibrium problems.

■ 13.4 ONE-VARIABLE UNCONSTRAINED OPTIMIZATION

We now begin discussing how to solve some of the types of problems just described by considering the simplest case—unconstrained optimization with just a single variable x (n = 1), where the differentiable function f(x) to be maximized is concave.7 Thus, the necessary and sufficient condition for a particular solution x = x* to be optimal (a global maximum) is

df/dx = 0   at x = x*,

as depicted in Fig. 13.12. If this equation can be solved directly for x*, you are done. However, if f(x) is not a particularly simple function, so the derivative is not just a linear or quadratic function, you may not be able to solve the equation analytically. If not, a number of search procedures are available for solving the problem numerically.
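For small p, a linear complementarity problem can be solved by brute force: for each of the 2^p complementary choices of which member of the pair (wi, zi) is zero, solve the resulting linear system and keep any nonnegative solution. The instance below is hypothetical, and real codes use pivoting schemes (such as the simplex-like method mentioned above) instead of enumeration; this is only a sketch of the definition:

```python
from itertools import product
import numpy as np

def solve_lcp_enumeration(q, M):
    """Find w = q + Mz, w >= 0, z >= 0, w.z = 0 by trying all
    complementary patterns (practical only for small p)."""
    p = len(q)
    for pattern in product([0, 1], repeat=p):
        # pattern[i] == 1 means z_i may be nonzero (forcing w_i = 0).
        z = np.zeros(p)
        basic = [i for i in range(p) if pattern[i] == 1]
        if basic:
            # w_i = 0 for basic i gives M[basic, basic] z[basic] = -q[basic].
            Mb = M[np.ix_(basic, basic)]
            try:
                z[basic] = np.linalg.solve(Mb, -q[basic])
            except np.linalg.LinAlgError:
                continue
        w = q + M @ z
        if np.all(z >= -1e-9) and np.all(w >= -1e-9) and abs(w @ z) < 1e-9:
            return w, z
    return None

# Hypothetical instance with p = 2.
q = np.array([-5.0, -6.0])
M = np.array([[2.0, 1.0], [1.0, 2.0]])
w, z = solve_lcp_enumeration(q, M)
```

For this instance the pattern with both z1 and z2 basic succeeds, giving w = (0, 0) and z = (4/3, 7/3).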
The approach with any of these search procedures is to find a sequence of trial solutions that leads toward an optimal solution. At each iteration, you begin at the current trial solution to conduct a systematic search that culminates by identifying a new improved trial solution. The procedure is continued until the trial solutions have converged to an optimal solution, assuming that one exists.

■ FIGURE 13.12 The one-variable unconstrained optimization problem when the function is concave: f(x) reaches its maximum at x*, where df(x)/dx = 0.

6 See R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, 1992, and republished by SIAM Bookmart, Philadelphia, PA, 2009.
7 See the beginning of Appendix 3 for a review of the corresponding case when f(x) is not concave.

We now will describe two common search procedures. The first one (the bisection method) was chosen because it is such an intuitive and straightforward procedure. The second one (Newton's method) is included because it plays a fundamental role in nonlinear programming in general.

The Bisection Method

This search procedure always can be applied when f(x) is concave (so that the second derivative is negative or zero for all x) as depicted in Fig. 13.12. It also can be used for certain other functions as well. In particular, if x* denotes the optimal solution, all that is needed8 is that

df(x)/dx > 0   if x < x*,
df(x)/dx = 0   if x = x*,
df(x)/dx < 0   if x > x*.

These conditions automatically hold when f(x) is concave, but they also can hold when the second derivative is positive for some (but not all) values of x. The idea behind the bisection method is a very intuitive one, namely, that whether the slope (derivative) is positive or negative at a trial solution definitely indicates whether improvement lies immediately to the right or left, respectively.
Thus, if the derivative evaluated at a particular value of x is positive, then x* must be larger than this x (see Fig. 13.12), so this x becomes a lower bound on the trial solutions that need to be considered thereafter. Conversely, if the derivative is negative, then x* must be smaller than this x, so x would become an upper bound. Therefore, after both types of bounds have been identified, each new trial solution selected between the current bounds provides a new tighter bound of one type, thereby narrowing the search further. As long as a reasonable rule is used to select each trial solution in this way, the resulting sequence of trial solutions must converge to x*. In practice, this means continuing the sequence until the distance between the bounds is sufficiently small that the next trial solution must be within a prespecified error tolerance of x*. This entire process is summarized next, given the notation

x' = current trial solution,
x̲ = current lower bound on x*,
x̄ = current upper bound on x*,
ε = error tolerance for x*.

Although there are several reasonable rules for selecting each new trial solution, the one used in the bisection method is the midpoint rule (traditionally called the Bolzano search plan), which says simply to select the midpoint between the two current bounds.

8 Another possibility is that the graph of f(x) is flat at the top so that x is optimal over some interval [a, b]. In this case, the procedure still will converge to one of these optimal solutions as long as the derivative is positive for x < a and negative for x > b.

Summary of the Bisection Method

Initialization: Select ε. Find an initial x̲ and x̄ by inspection (or by respectively finding any value of x at which the derivative is positive and then negative). Select an initial trial solution

x' = (x̲ + x̄)/2.

Iteration:
1. Evaluate df(x)/dx at x = x'.
2. If df(x)/dx ≥ 0, reset x̲ = x'.
3. If df(x)/dx ≤ 0, reset x̄ = x'.
4. Select a new x' = (x̲ + x̄)/2.
Stopping rule: If x̄ − x̲ ≤ 2ε, so that the new x' must be within ε of x*, stop. Otherwise, perform another iteration.

We shall now illustrate the bisection method by applying it to the following example.

Example. Suppose that the function to be maximized is

f(x) = 12x − 3x⁴ − 2x⁶,

as plotted in Fig. 13.13. Its first two derivatives are

df(x)/dx = 12(1 − x³ − x⁵),
d²f(x)/dx² = −12(3x² + 5x⁴).

■ FIGURE 13.13 Example for the bisection method: plot of f(x) = 12x − 3x⁴ − 2x⁶.

Because the second derivative is nonpositive everywhere, f(x) is a concave function, so the bisection method can be safely applied to find its global maximum (assuming a global maximum exists). A quick inspection of this function (without even constructing its graph as shown in Fig. 13.13) indicates that f(x) is positive for small positive values of x, but it is negative for x < 0 or x > 2. Therefore, x̲ = 0 and x̄ = 2 can be used as the initial bounds, with their midpoint, x' = 1, as the initial trial solution. Let ε = 0.01 be the error tolerance for x* in the stopping rule, so the final (x̄ − x̲) ≤ 0.02 with the final x' at the midpoint. Applying the bisection method then yields the sequence of results shown in Table 13.1.

■ TABLE 13.1 Application of the bisection method to the example

Iteration   df(x')/dx    x̲          x̄          New x'      f(x')
0                        0           2           1           7.0000
1           −12          0           1           0.5         5.7812
2           10.12        0.5         1           0.75        7.6948
3           4.09         0.75        1           0.875       7.8439
4           −2.19        0.75        0.875       0.8125      7.8672
5           1.31         0.8125      0.875       0.84375     7.8829
6           −0.34        0.8125      0.84375     0.828125    7.8815
7           0.51         0.828125    0.84375     0.8359375   7.8839
Stop

[This table includes both the function and derivative values for your information, where the derivative is evaluated at the trial solution generated at the preceding iteration. However, note that the algorithm actually doesn't need to calculate f(x') at all and that it only needs to calculate the derivative far enough to determine its sign.]
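The summary above translates directly into code; this sketch (the function names are mine, not the book's) reproduces the run in Table 13.1:

```python
def bisection(df, lower, upper, eps):
    """Maximize a concave f by bisection on its derivative df.
    Keeps halving [lower, upper] until its width is at most 2*eps,
    so the final midpoint is within eps of the maximizer x*."""
    trial = (lower + upper) / 2
    while upper - lower > 2 * eps:
        if df(trial) >= 0:      # improvement lies to the right
            lower = trial
        else:                   # improvement lies to the left
            upper = trial
        trial = (lower + upper) / 2
    return trial

# Example from the text: f(x) = 12x - 3x**4 - 2x**6.
df = lambda x: 12 * (1 - x**3 - x**5)
x_opt = bisection(df, 0.0, 2.0, 0.01)   # returns 0.8359375, as in Table 13.1
```

Note that, just as the bracketed remark says, only the sign of df is ever used; f itself is never evaluated.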
The conclusion is that

x* ≈ 0.836,   0.828125 < x* < 0.84375.

Your IOR Tutorial includes an interactive procedure for executing the bisection method.

Newton's Method

Although the bisection method is an intuitive and straightforward procedure, it has the disadvantage of converging relatively slowly toward an optimal solution. Each iteration only decreases the difference between the bounds by one-half. Therefore, even with the fairly simple function being considered in Table 13.1, seven iterations were required to reduce the error tolerance for x* to less than 0.01. Another seven iterations would be needed to reduce this error tolerance to less than 0.0001. The basic reason for this slow convergence is that the only information about f(x) being used is the value of the first derivative f'(x) at the respective trial values of x. Additional helpful information can be obtained by considering the second derivative f''(x) as well. This is what Newton's method9 does.

9 This method is due to the great 17th-century mathematician and physicist, Sir Isaac Newton. While a young student at the University of Cambridge (England), Newton took advantage of the university being closed for two years (due to the bubonic plague that devastated Europe in 1664–65) to discover the law of universal gravitation and invent calculus (among other achievements). His development of calculus led to this method.

The basic idea behind Newton's method is to approximate f(x) within the neighborhood of the current trial solution by a quadratic function and then to maximize (or minimize) the approximate function exactly to obtain the new trial solution to start the next iteration. (This idea of working with a quadratic approximation of the objective function has since been made a key feature of many algorithms for more general kinds of nonlinear programming problems.)
This approximating quadratic function is obtained by truncating the Taylor series after the second derivative term. In particular, by letting x_{i+1} be the trial solution generated at iteration i to start iteration i + 1 (so x_1 is the initial trial solution provided by the user to begin iteration 1), the truncated Taylor series for x_{i+1} is

f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} − x_i) + (f''(x_i)/2)(x_{i+1} − x_i)².

Having fixed x_i at the beginning of iteration i, note that f(x_i), f'(x_i), and f''(x_i) also are fixed constants in this approximating function on the right. Thus, this approximating function is just a quadratic function of x_{i+1}. Furthermore, this quadratic function is such a good approximation of f(x_{i+1}) in the neighborhood of x_i that their values and their first and second derivatives are exactly the same when x_{i+1} = x_i. This quadratic function now can be maximized in the usual way by setting its first derivative to zero and solving for x_{i+1}. (Remember that we are assuming that f(x) is concave, which implies that this quadratic function is concave, so the solution when setting the first derivative to zero will be a global maximum.) This first derivative is

f'(x_{i+1}) ≈ f'(x_i) + f''(x_i)(x_{i+1} − x_i),

since x_i, f(x_i), f'(x_i), and f''(x_i) are constants. Setting the first derivative on the right to zero yields

f'(x_i) + f''(x_i)(x_{i+1} − x_i) = 0,

which directly leads algebraically to the solution,

x_{i+1} = x_i − f'(x_i)/f''(x_i).

This is the key formula that is used at each iteration i to calculate the next trial solution x_{i+1} after obtaining the trial solution x_i to begin iteration i and then calculating the first and second derivatives at x_i. (The same formula is used when minimizing a convex function.) Iterations generating new trial solutions in this way would continue until these solutions have essentially converged. One criterion for convergence is that |x_{i+1} − x_i| has become sufficiently small. Another is that f'(x) is sufficiently close to zero. Still another is that |f(x_{i+1}) − f(x_i)| is sufficiently small.
Choosing the first criterion, define ε as the value such that the algorithm is stopped when |x_{i+1} − x_i| ≤ ε. Here is a complete description of the algorithm.

Summary of Newton's Method

Initialization: Select ε. Find an initial trial solution x_1 by inspection. Set i = 1.
Iteration i:
1. Calculate f'(x_i) and f''(x_i). [Calculating f(x_i) is optional.]
2. Set x_{i+1} = x_i − f'(x_i)/f''(x_i).
Stopping rule: If |x_{i+1} − x_i| ≤ ε, stop; x_{i+1} is essentially the optimal solution. Otherwise, reset i = i + 1 and perform another iteration.

Example. We now will apply Newton's method to the same example used for the bisection method. As depicted in Fig. 13.13, the function to be maximized is

f(x) = 12x − 3x⁴ − 2x⁶.

Thus, the formula for calculating the new trial solution (x_{i+1}) from the current one (x_i) is

x_{i+1} = x_i − f'(x_i)/f''(x_i) = x_i + 12(1 − x_i³ − x_i⁵)/[12(3x_i² + 5x_i⁴)] = x_i + (1 − x_i³ − x_i⁵)/(3x_i² + 5x_i⁴).

After selecting ε = 0.00001 and choosing x_1 = 1 as the initial trial solution, Table 13.2 shows the results from applying Newton's method to this example.

■ TABLE 13.2 Application of Newton's method to the example

Iteration i    x_i        f(x_i)    f'(x_i)    f''(x_i)    x_{i+1}
1              1          7         −12        −96         0.875
2              0.875      7.8439    −2.1940    −62.733     0.84003
3              0.84003    7.8838    −0.1325    −55.279     0.83763
4              0.83763    7.8839    −0.0006    −54.790     0.83762

After just four iterations, this method has converged to x = 0.83762 as the optimal solution with a very high degree of precision. A comparison of this table with Table 13.1 illustrates how much more rapidly Newton's method converges than the bisection method. Nearly 20 iterations would be required for the bisection method to converge with the same degree of precision that Newton's method achieved after only four iterations. Although this rapid convergence is fairly typical of Newton's method, its performance does vary from problem to problem. Since the method is based on using a quadratic approximation of f(x), its performance is affected by the degree of accuracy of the approximation.
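The summary above is a few lines of code; this sketch (names are mine) reruns the example of Table 13.2:

```python
def newtons_method(df, d2f, x, eps=1e-5, max_iter=100):
    """Maximize a concave f: iterate x_{i+1} = x_i - f'(x_i)/f''(x_i)
    until successive trial solutions differ by at most eps."""
    for _ in range(max_iter):
        x_next = x - df(x) / d2f(x)
        if abs(x_next - x) <= eps:
            return x_next
        x = x_next
    return x

# Example from the text: f(x) = 12x - 3x**4 - 2x**6, starting from x1 = 1.
df = lambda x: 12 * (1 - x**3 - x**5)
d2f = lambda x: -12 * (3 * x**2 + 5 * x**4)
x_opt = newtons_method(df, d2f, 1.0, eps=1e-5)
```

The run visits the same trial solutions as Table 13.2 (0.875, 0.84003, 0.83763, . . .) and settles near 0.83762.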
■ 13.5 MULTIVARIABLE UNCONSTRAINED OPTIMIZATION

Now consider the problem of maximizing a concave function f(x) of multiple variables x = (x1, x2, . . . , xn) when there are no constraints on the feasible values. Suppose again that the necessary and sufficient condition for optimality, given by the system of equations obtained by setting the respective partial derivatives equal to zero (see Sec. 13.3), cannot be solved analytically, so that a numerical search procedure must be used. As for the one-variable case, a number of search procedures are available for solving such a problem numerically. One of these (the gradient search procedure) is an especially important one because it identifies and uses the direction of movement from the current trial solution that maximizes the rate at which f(x) is increased. This is one of the key ideas of nonlinear programming. Adaptations of this same idea to take constraints into account are a central feature of many algorithms for constrained optimization as well. After discussing this procedure in some detail, we will briefly describe how Newton's method is extended to the multivariable case.

The Gradient Search Procedure

In Sec. 13.4, the value of the ordinary derivative was used by the bisection method to select one of just two possible directions (increase x or decrease x) in which to move from the current trial solution to the next one. The goal was to reach a point eventually where this derivative is (essentially) 0. Now, there are innumerable possible directions in which to move; they correspond to the possible proportional rates at which the respective variables can be changed. The goal is to reach a point eventually where all the partial derivatives are (essentially) 0. Therefore, a natural approach is to use the values of the partial derivatives to select the specific direction in which to move. This selection involves using the gradient of the objective function, as described next.
Because the objective function f(x) is assumed to be differentiable, it possesses a gradient, denoted by ∇f(x), at each point x. In particular, the gradient at a specific point x = x' is the vector whose elements are the respective partial derivatives evaluated at x = x', so that

∇f(x') = (∂f/∂x1, ∂f/∂x2, . . . , ∂f/∂xn)   at x = x'.

The significance of the gradient is that the (infinitesimal) change in x that maximizes the rate at which f(x) increases is the change that is proportional to ∇f(x). To express this idea geometrically, the "direction" of the gradient ∇f(x') is interpreted as the direction of the directed line segment (arrow) from the origin (0, 0, . . . , 0) to the point (∂f/∂x1, ∂f/∂x2, . . . , ∂f/∂xn), where each ∂f/∂xj is evaluated at x = x'. Therefore, it may be said that the rate at which f(x) increases is maximized if (infinitesimal) changes in x are in the direction of the gradient ∇f(x). Because the objective is to find the feasible solution maximizing f(x), it would seem expedient to attempt to move in the direction of the gradient as much as possible. Because the current problem has no constraints, this interpretation of the gradient suggests that an efficient search procedure should keep moving in the direction of the gradient until it (essentially) reaches an optimal solution x*, where ∇f(x*) = 0. However, normally it would not be practical to change x continuously in the direction of ∇f(x), because this series of changes would require continuously reevaluating the ∂f/∂xj and changing the direction of the path. Therefore, a better approach is to keep moving in a fixed direction from the current trial solution, not stopping until f(x) stops increasing. This stopping point would be the next trial solution, so the gradient then would be recalculated to determine the new direction in which to move.
With this approach, each iteration involves changing the current trial solution x' as follows:

Reset x' = x' + t*∇f(x'),

where t* is the positive value of t that maximizes f(x' + t∇f(x')); that is,

f(x' + t*∇f(x')) = max_{t≥0} f(x' + t∇f(x')).

[Note that f(x' + t∇f(x')) is simply f(x) where

xj = x'j + t(∂f/∂xj evaluated at x = x'),   for j = 1, 2, . . . , n,

and that these expressions for the xj involve only constants and t, so f(x) becomes a function of just the single variable t.] The iterations of this gradient search procedure continue until ∇f(x) = 0 within a small tolerance ε, that is, until

|∂f/∂xj| ≤ ε   for j = 1, 2, . . . , n.10

10 This stopping rule generally will provide a solution x that is close to an optimal solution x*, with a value of f(x) that is very close to f(x*). However, this cannot be guaranteed, since it is possible that the function maintains a very small positive slope (< ε) over a great distance from x to x*.

An analogy may help to clarify this procedure. Suppose that you need to climb to the top of a hill. You are nearsighted, so you cannot see the top of the hill in order to walk directly in that direction. However, when you stand still, you can see the ground around your feet well enough to determine the direction in which the hill is sloping upward most sharply. You are able to walk in a straight line. While walking, you also are able to tell when you stop climbing (zero slope in your direction). Assuming that the hill is concave, you now can use the gradient search procedure for climbing to the top efficiently. This problem is a two-variable problem, where (x1, x2) represents the coordinates (ignoring height) of your current location. The function f(x1, x2) gives the height of the hill at (x1, x2). You start each iteration at your current location (current trial solution) by determining the direction [in the (x1, x2) coordinate system] in which the hill is sloping upward most sharply (the direction of the gradient) at this point.
You then begin walking in this fixed direction and continue as long as you still are climbing. You eventually stop at a new trial location (solution) when the hill becomes level in your direction, at which point you prepare to do another iteration in another direction. You continue these iterations, following a zigzag path up the hill, until you reach a trial location where the slope is essentially zero in all directions. Under the assumption that the hill [f(x1, x2)] is concave, you must then be essentially at the top of the hill.

The most difficult part of the gradient search procedure usually is to find t*, the value of t that maximizes f in the direction of the gradient, at each iteration. Because x' and ∇f(x') have fixed values for the maximization, and because f(x) is concave, this problem should be viewed as maximizing a concave function of a single variable t. Therefore, it can be solved by the kind of search procedures for one-variable unconstrained optimization that are described in Sec. 13.4 (while considering only nonnegative values of t because of the t ≥ 0 constraint). Alternatively, if f is a simple function, it may be possible to obtain an analytical solution by setting the derivative with respect to t equal to zero and solving.

Summary of the Gradient Search Procedure

Initialization: Select ε and any initial trial solution x'. Go first to the stopping rule.
Iteration:
1. Express f(x' + t∇f(x')) as a function of t by setting

xj = x'j + t(∂f/∂xj evaluated at x = x'),   for j = 1, 2, . . . , n,

and then substituting these expressions into f(x).
2. Use a search procedure for one-variable unconstrained optimization (or calculus) to find t = t* that maximizes f(x' + t∇f(x')) over t ≥ 0.
3. Reset x' = x' + t*∇f(x'). Then go to the stopping rule.
Stopping rule: Evaluate ∇f(x') at x = x'. Check if

|∂f/∂xj| ≤ ε   for all j = 1, 2, . . . , n.

If so, stop with the current x' as the desired approximation of an optimal solution x*. Otherwise, perform another iteration.

Now let us illustrate this procedure.
Example. Consider the following two-variable problem:

Maximize f(x) = 2x1x2 + 2x2 − x1² − 2x2².

Thus,

∂f/∂x1 = 2x2 − 2x1,
∂f/∂x2 = 2x1 + 2 − 4x2.

We also can verify (see Appendix 2) that f(x) is concave. To begin the gradient search procedure, after choosing a suitably small value of ε (normally well under 0.1) suppose that x' = (0, 0) is selected as the initial trial solution. Because the respective partial derivatives are 0 and 2 at this point, the gradient is

∇f(0, 0) = (0, 2).

With ε < 2, the stopping rule then says to perform an iteration.

Iteration 1: With values of 0 and 2 for the respective partial derivatives, the first iteration begins by setting

x1 = 0 + t(0) = 0,
x2 = 0 + t(2) = 2t,

and then substituting these expressions into f(x) to obtain

f(x' + t∇f(x')) = f(0, 2t) = 2(0)(2t) + 2(2t) − 0² − 2(2t)² = 4t − 8t².

Because

f(0, 2t*) = max_{t≥0} f(0, 2t) = max_{t≥0} {4t − 8t²}

and

d/dt (4t − 8t²) = 4 − 16t = 0,

it follows that

t* = 1/4,

so

Reset x' = (0, 0) + (1/4)(0, 2) = (0, 1/2).

This completes the first iteration. For this new trial solution, the gradient is

∇f(0, 1/2) = (1, 0).

With ε < 1, the stopping rule now says to perform another iteration.

Iteration 2: To begin the second iteration, use the values of 1 and 0 for the respective partial derivatives to set

x' + t∇f(x') = (0, 1/2) + t(1, 0) = (t, 1/2),

so

f(x' + t∇f(x')) = f(t, 1/2) = 2(t)(1/2) + 2(1/2) − t² − 2(1/2)² = t − t² + 1/2.

Because

f(t*, 1/2) = max_{t≥0} f(t, 1/2) = max_{t≥0} {t − t² + 1/2}

and

d/dt (t − t² + 1/2) = 1 − 2t = 0,

then

t* = 1/2,

so

Reset x' = (0, 1/2) + (1/2)(1, 0) = (1/2, 1/2).

This completes the second iteration. With a typically small value of ε, the procedure now would continue on to several more iterations in a similar fashion. (We will forgo the details.) A nice way of organizing this work is to write out a table such as Table 13.3 which summarizes the preceding two iterations.
At each iteration, the second column shows the current trial solution, and the rightmost column shows the eventual new trial solution, which then is carried down into the second column for the next iteration. The fourth column gives the expressions for the xj in terms of t that need to be substituted into f(x) to give the fifth column.

■ TABLE 13.3 Application of the gradient search procedure to the example

Iteration   x'         ∇f(x')    x' + t∇f(x')   f(x' + t∇f(x'))   t*     x' + t*∇f(x')
1           (0, 0)     (0, 2)    (0, 2t)        4t − 8t²          1/4    (0, 1/2)
2           (0, 1/2)   (1, 0)    (t, 1/2)       t − t² + 1/2      1/2    (1/2, 1/2)

By continuing in this fashion, the subsequent trial solutions would be (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), (7/8, 7/8), . . . , as shown in Fig. 13.14. Because these points are converging to x* = (1, 1), this solution is the optimal solution, as verified by the fact that ∇f(1, 1) = (0, 0). However, because this converging sequence of trial solutions never reaches its limit, the procedure actually will stop somewhere (depending on ε) slightly below (1, 1) as its final approximation of x*.

■ FIGURE 13.14 Illustration of the gradient search procedure when f(x1, x2) = 2x1x2 + 2x2 − x1² − 2x2²: the trial solutions (0, 0), (0, 1/2), (1/2, 1/2), (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), (7/8, 7/8), . . . zigzag toward x* = (1, 1).

As Fig. 13.14 suggests, the gradient search procedure zigzags to the optimal solution rather than moving in a straight line. Some modifications of the procedure have been developed that accelerate movement toward the optimal solution by taking this zigzag behavior into account. If f(x) were not a concave function, the gradient search procedure still would converge to a local maximum. The only change in the description of the procedure for this case is that t* now would correspond to the first local maximum of f(x' + t∇f(x')) as t is increased from 0. If the objective were to minimize f(x) instead, one change in the procedure would be to move in the opposite direction of the gradient at each iteration.
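A compact sketch of the whole procedure applied to the example above (the function names, the bound t_max on the step, and the crude one-dimensional search are my own choices; the text leaves the maximization over t to any Sec. 13.4 method):

```python
import numpy as np

def gradient_search(grad, x, eps=1e-4, t_max=1.0, max_iter=1000):
    """Maximize a concave f: repeatedly move from x in the direction
    of the gradient, choosing the step t in [0, t_max] that maximizes
    f(x + t*grad(x)) along the line.  The line search here bisects on
    the directional derivative, which is zero at t* (assumes t* <= t_max)."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.max(np.abs(g)) <= eps:          # stopping rule
            return x
        lo, hi = 0.0, t_max
        for _ in range(60):                   # one-dimensional search for t*
            t = (lo + hi) / 2
            # slope of phi(t) = f(x + t g) is grad(x + t g) . g
            if grad(x + t * g) @ g >= 0:
                lo = t
            else:
                hi = t
        x = x + (lo + hi) / 2 * g
    return x

# Example from the text: f(x) = 2*x1*x2 + 2*x2 - x1**2 - 2*x2**2.
grad = lambda x: np.array([2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]])
x_opt = gradient_search(grad, [0.0, 0.0])
```

Starting from (0, 0), the iterates reproduce the zigzag path (0, 1/2), (1/2, 1/2), (1/2, 3/4), . . . of Fig. 13.14 and stop slightly below (1, 1), exactly as the text describes.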
In other words, the rule for obtaining the next point would be

Reset x' = x' − t*∇f(x').

The only other change is that t* now would be the nonnegative value of t that minimizes f(x' − t∇f(x')); that is,

f(x' − t*∇f(x')) = min_{t≥0} f(x' − t∇f(x')).

Additional examples of the application of the gradient search procedure are included in both the Solved Examples section of the book's website and your OR Tutor. The IOR Tutorial includes both an interactive procedure and an automatic procedure for applying this algorithm.

Newton's Method

Section 13.4 describes how Newton's method would be used to solve one-variable unconstrained optimization problems. The general version of Newton's method actually is designed to solve multivariable unconstrained optimization problems. The basic idea is the same as described in Sec. 13.4, namely, work with a quadratic approximation of the objective function f(x) being maximized, where x = (x1, x2, . . . , xn) in this case. This approximating quadratic function is obtained by truncating the Taylor series around the current trial solution after the second derivative term. This approximate function then is maximized exactly to obtain the new trial solution to start the next iteration.

When the objective function is concave and both the current trial solution x and its gradient ∇f(x) are written as column vectors, the solution x' that maximizes the approximating quadratic function has the form

x' = x − [∇²f(x)]⁻¹ ∇f(x),

where ∇²f(x) is the n × n matrix (called the Hessian matrix) of the second partial derivatives of f(x) evaluated at the current trial solution x and [∇²f(x)]⁻¹ is the inverse of this Hessian matrix. Nonlinear programming algorithms that employ Newton's method (including those that adapt it to help deal with constrained optimization problems) commonly approximate the inverse of the Hessian matrix in various ways.
These approximations of Newton's method are referred to as quasi-Newton methods (or variable metric methods). We will comment further on the important role of these methods in nonlinear programming in Sec. 13.9. Further description of these methods is beyond the scope of this book, but further details can be found in books devoted to nonlinear programming.

■ 13.6 THE KARUSH-KUHN-TUCKER (KKT) CONDITIONS FOR CONSTRAINED OPTIMIZATION

We now focus on the question of how to recognize an optimal solution for a nonlinear programming problem (with differentiable functions) when the problem is in the form shown at the beginning of the chapter. What are the necessary and (perhaps) sufficient conditions that such a solution must satisfy? In the preceding sections we already noted these conditions for unconstrained optimization, as summarized in the first two rows of Table 13.4. Early in Sec. 13.3 we also gave these conditions for the slight extension of unconstrained optimization where the only constraints are nonnegativity constraints. These conditions are shown in the third row of Table 13.4. As indicated in the last row of the table, the conditions for the general case are called the Karush-Kuhn-Tucker conditions (or KKT conditions), because they were derived independently by Karush11 and by Kuhn and Tucker.12 Their basic result is embodied in the following theorem.

■ TABLE 13.4 Necessary and sufficient conditions for optimality

Problem                                       Necessary Conditions for Optimality                    Also Sufficient If:
One-variable unconstrained                    df/dx = 0                                              f(x) concave
Multivariable unconstrained                   ∂f/∂xj = 0 (j = 1, 2, . . . , n)                       f(x) concave
Constrained, nonnegativity constraints only   ∂f/∂xj = 0 (j = 1, 2, . . . , n) (or ≤ 0 if xj = 0)    f(x) concave
General constrained problem                   Karush-Kuhn-Tucker conditions                          f(x) concave and gi(x) convex (i = 1, 2, . . . , m)

11 W. Karush, "Minima of Functions of Several Variables with Inequalities as Side Conditions," M.S.
thesis, Department of Mathematics, University of Chicago, 1939.
12 H. W. Kuhn and A. W. Tucker, "Nonlinear Programming," in Jerzy Neyman (ed.), Proceedings of the Second Berkeley Symposium, University of California Press, Berkeley, 1951, pp. 481–492.

Theorem. Assume that f(x), g1(x), g2(x), . . . , gm(x) are differentiable functions satisfying certain regularity conditions.13 Then x* = (x1*, x2*, . . . , xn*) can be an optimal solution for the nonlinear programming problem only if there exist m numbers u1, u2, . . . , um such that all the following KKT conditions are satisfied:

1. ∂f/∂xj − Σ_{i=1}^{m} ui ∂gi/∂xj ≤ 0        at x = x*, for j = 1, 2, . . . , n.
2. xj*(∂f/∂xj − Σ_{i=1}^{m} ui ∂gi/∂xj) = 0    at x = x*, for j = 1, 2, . . . , n.
3. gi(x*) − bi ≤ 0,                             for i = 1, 2, . . . , m.
4. ui[gi(x*) − bi] = 0,                         for i = 1, 2, . . . , m.
5. xj* ≥ 0,                                     for j = 1, 2, . . . , n.
6. ui ≥ 0,                                      for i = 1, 2, . . . , m.

Note that both conditions 2 and 4 require that the product of two quantities be zero. Therefore, each of these conditions really is saying that at least one of the two quantities must be zero. Consequently, condition 4 can be combined with condition 3 to express them in another equivalent form as

(3, 4) gi(x*) − bi = 0 (or ≤ 0 if ui = 0),   for i = 1, 2, . . . , m.

Similarly, condition 2 can be combined with condition 1 as

(1, 2) ∂f/∂xj − Σ_{i=1}^{m} ui ∂gi/∂xj = 0 (or ≤ 0 if xj* = 0),   for j = 1, 2, . . . , n.

When m = 0 (no functional constraints), this summation drops out and the combined condition (1, 2) reduces to the condition given in the third row of Table 13.4. Thus, for m > 0, each term in the summation modifies the m = 0 condition to incorporate the effect of the corresponding functional constraint. In conditions 1, 2, 4, and 6, the ui correspond to the dual variables of linear programming (we expand on this correspondence at the end of the section), and they have a comparable economic interpretation. However, the ui actually arose in the mathematical derivation as Lagrange multipliers (discussed in Appendix 3).
Conditions 3 and 5 do nothing more than ensure the feasibility of the solution. The other conditions eliminate most of the feasible solutions as possible candidates for an optimal solution. However, note that satisfying these conditions does not guarantee that the solution is optimal. As summarized in the rightmost column of Table 13.4, certain additional convexity assumptions are needed to obtain this guarantee. These assumptions are spelled out in the following extension of the theorem.

Corollary. Assume that f(x) is a concave function and that g1(x), g2(x), . . . , gm(x) are convex functions (i.e., this problem is a convex programming problem), where all these functions satisfy the regularity conditions. Then x* = (x1*, x2*, . . . , xn*) is an optimal solution if and only if all the conditions of the theorem are satisfied.

13 Ibid., p. 483.

Example. To illustrate the formulation and application of the KKT conditions, we consider the following two-variable nonlinear programming problem:

Maximize f(x) = ln(x1 + 1) + x2,

subject to

2x1 + x2 ≤ 3

and

x1 ≥ 0, x2 ≥ 0,

where ln denotes the natural logarithm. Thus, m = 1 (one functional constraint) and g1(x) = 2x1 + x2, so g1(x) is convex. Furthermore, it can be easily verified (see Appendix 2) that f(x) is concave. Hence, the corollary applies, so any solution that satisfies the KKT conditions will definitely be an optimal solution. Applying the formulas given in the theorem yields the following KKT conditions for this example:

1(j = 1). 1/(x1 + 1) − 2u1 ≤ 0.
2(j = 1). x1[1/(x1 + 1) − 2u1] = 0.
1(j = 2). 1 − u1 ≤ 0.
2(j = 2). x2(1 − u1) = 0.
3. 2x1 + x2 − 3 ≤ 0.
4. u1(2x1 + x2 − 3) = 0.
5. x1 ≥ 0, x2 ≥ 0.
6. u1 ≥ 0.

The steps in solving the KKT conditions for this particular example are outlined below:

1. u1 ≥ 1, from condition 1(j = 2). x1 ≥ 0, from condition 5.
2. Therefore, 1/(x1 + 1) − 2u1 < 0.
3. Therefore, x1 = 0, from condition 2(j = 1).
4. u1 > 0 implies that 2x1 + x2 − 3 = 0, from condition 4.
5.
5. Steps 3 and 4 imply that x2 = 3.
6. x2 ≠ 0 implies that u1 = 1, from condition 2(j = 2).
7. No conditions are violated by x1 = 0, x2 = 3, u1 = 1.

Therefore, there exists a number u1 = 1 such that x1 = 0, x2 = 3, and u1 = 1 satisfy all the conditions. Consequently, x* = (0, 3) is an optimal solution for this problem.

This particular problem was relatively easy to solve because the first two steps above quickly led to the remaining conclusions. It often is more difficult to see how to get started. The particular progression of steps needed to solve the KKT conditions will differ from one problem to the next. When the logic is not apparent, it is sometimes helpful to consider separately the different cases where each xj and ui is specified to be either equal to or greater than 0 and then to try each case until one leads to a solution.

To illustrate, suppose this approach of considering the different cases separately had been applied to the above example instead of using the logic involved in the above seven steps. For this example, eight cases need to be considered. These cases correspond to the eight combinations of x1 = 0 versus x1 > 0, x2 = 0 versus x2 > 0, and u1 = 0 versus u1 > 0. Each case leads to a simpler statement and analysis of the conditions. To illustrate, consider first the case shown next, where x1 = 0, x2 = 0, and u1 = 0.

KKT Conditions for the Case x1 = 0, x2 = 0, u1 = 0

1(j = 1).  1/(0 + 1) ≤ 0.  Contradiction.
1(j = 2).  1 − 0 ≤ 0.  Contradiction.
3.  0 + 0 ≤ 3.

(All the other conditions are redundant.)

As listed below, the other three cases where u1 = 0 also give immediate contradictions in a similar way, so no solution is available.

Case x1 = 0, x2 > 0, u1 = 0 contradicts conditions 1(j = 1), 1(j = 2), and 2(j = 2).
Case x1 > 0, x2 = 0, u1 = 0 contradicts conditions 1(j = 1), 2(j = 1), and 1(j = 2).
Case x1 > 0, x2 > 0, u1 = 0 contradicts conditions 1(j = 1), 2(j = 1), 1(j = 2), and 2(j = 2).
The case x1 > 0, x2 > 0, u1 > 0 enables one to delete these nonzero variables from conditions 2(j = 1), 2(j = 2), and 4, which then enables deletion of conditions 1(j = 1), 1(j = 2), and 3 as redundant, as summarized next.

KKT Conditions for the Case x1 > 0, x2 > 0, u1 > 0

1(j = 1).  1/(x1 + 1) − 2u1 = 0.
2(j = 2).  1 − u1 = 0.
4.  2x1 + x2 − 3 = 0.

(All the other conditions are redundant.)

Therefore, u1 = 1, so x1 = −1/2, which contradicts x1 > 0. Now suppose that the case x1 = 0, x2 > 0, u1 > 0 is tried next.

KKT Conditions for the Case x1 = 0, x2 > 0, u1 > 0

1(j = 1).  1/(0 + 1) − 2u1 ≤ 0.
2(j = 2).  1 − u1 = 0.
4.  0 + x2 − 3 = 0.

(All the other conditions are redundant.)

Therefore, x1 = 0, x2 = 3, u1 = 1. Having found a solution, we know that no additional cases need be considered.

If you would like to see another example of using the KKT conditions to solve for an optimal solution, one is provided in the Solved Examples section of the book’s website.

For problems more complicated than the above example, it may be difficult, if not essentially impossible, to derive an optimal solution directly from the KKT conditions. Nevertheless, these conditions still provide valuable clues as to the identity of an optimal solution, and they also permit us to check whether a proposed solution may be optimal.

There also are many valuable indirect applications of the KKT conditions. One of these applications arises in the duality theory that has been developed for nonlinear programming to parallel the duality theory for linear programming presented in Chap. 6. In particular, for any given constrained maximization problem (call it the primal problem), the KKT conditions can be used to define a closely associated dual problem that is a constrained minimization problem. The variables in the dual problem consist of both the Lagrange multipliers ui (i = 1, 2, . . . , m) and the primal variables xj (j = 1, 2, . . . , n).
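Before moving on, note that the case-by-case checking above is mechanical enough to sketch in code. The following is a minimal illustrative Python check (ours, not from the book) of whether a candidate (x1, x2, u1) satisfies the KKT conditions derived for this example; of the candidates below, only the solution found above passes.

```python
def kkt_satisfied(x1, x2, u1, tol=1e-9):
    """Check the KKT conditions for: maximize ln(x1 + 1) + x2
    subject to 2*x1 + x2 <= 3, x1 >= 0, x2 >= 0."""
    d1 = 1.0 / (x1 + 1.0) - 2.0 * u1   # condition 1 (j = 1)
    d2 = 1.0 - u1                       # condition 1 (j = 2)
    g = 2.0 * x1 + x2 - 3.0             # g1(x) - b1 for condition 3
    return (d1 <= tol and abs(x1 * d1) <= tol       # conditions 1, 2 (j = 1)
            and d2 <= tol and abs(x2 * d2) <= tol   # conditions 1, 2 (j = 2)
            and g <= tol and abs(u1 * g) <= tol     # conditions 3, 4
            and x1 >= -tol and x2 >= -tol and u1 >= -tol)  # conditions 5, 6

print(kkt_satisfied(0.0, 3.0, 1.0))  # True: the solution found above
print(kkt_satisfied(0.0, 0.0, 0.0))  # False: condition 1 (j = 2) fails
```

Looping such a check over candidate solutions from each of the eight cases reproduces the case analysis above.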
In the special case where the primal problem is a linear programming problem, the xj variables drop out of the dual problem and it becomes the familiar dual problem of linear programming (where the ui variables here correspond to the yi variables in Chap. 6). When the primal problem is a convex programming problem, it is possible to establish relationships between the primal problem and the dual problem that are similar to those for linear programming. For example, the strong duality property of Sec. 6.1, which states that the optimal objective function values of the two problems are equal, also holds here. Furthermore, the values of the ui variables in an optimal solution for the dual problem can again be interpreted as shadow prices (see Secs. 4.7 and 6.2); i.e., they give the rate at which the optimal objective function value for the primal problem could be increased by (slightly) increasing the right-hand side of the corresponding constraint. Because duality theory for nonlinear programming is a relatively advanced topic, the interested reader is referred elsewhere for further information.14

You will see another indirect application of the KKT conditions in the next section.

■ 13.7 QUADRATIC PROGRAMMING

As indicated in Sec. 13.3, the quadratic programming problem differs from the linear programming problem only in that the objective function also includes xj² and xi xj (i ≠ j) terms. Thus, if we use matrix notation like that introduced at the beginning of Sec. 5.2, the problem is to find x so as to

Maximize  f(x) = cx − (1/2) xᵀQx,

subject to

Ax ≤ b  and  x ≥ 0,

where the objective function is concave, c is a row vector, x and b are column vectors, Q and A are matrices, and the superscript T denotes the transpose (see Appendix 4). The qij (elements of Q) are given constants such that qij = qji (which is the reason for the factor of 1/2 in the objective function).
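In code, this objective is straightforward to evaluate directly from c and Q. The sketch below (an illustration of ours, not from the book) compares the matrix form cx − ½xᵀQx against an expanded quadratic, using the c and Q of the example analyzed next; the two forms agree at every point.

```python
def f_matrix(x1, x2):
    """Evaluate f(x) = c x - (1/2) x^T Q x for the example below,
    with c = [15, 30] and Q = [[4, -4], [-4, 8]]."""
    c = [15.0, 30.0]
    Q = [[4.0, -4.0], [-4.0, 8.0]]
    cx = c[0] * x1 + c[1] * x2
    # x^T Q x written out for the 2x2 case
    xQx = (Q[0][0] * x1 * x1 + (Q[0][1] + Q[1][0]) * x1 * x2
           + Q[1][1] * x2 * x2)
    return cx - 0.5 * xQx

def f_expanded(x1, x2):
    """The same objective written out termwise."""
    return 15 * x1 + 30 * x2 + 4 * x1 * x2 - 2 * x1 ** 2 - 4 * x2 ** 2

# the two forms agree, e.g. both give 100 at the point (2, 3)
print(f_matrix(2, 3), f_expanded(2, 3))
```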
By performing the indicated vector and matrix multiplications, the objective function then is expressed in terms of these qij, the cj (elements of c), and the variables as follows:

f(x) = cx − (1/2) xᵀQx = Σ(j=1 to n) cj xj − (1/2) Σ(i=1 to n) Σ(j=1 to n) qij xi xj.

For each term where i = j in this double summation, xi xj = xj², so −(1/2)qjj is the coefficient of xj². When i ≠ j, then −(1/2)(qij xi xj + qji xj xi) = −qij xi xj, so −qij is the total coefficient for the product of xi and xj.

To illustrate, consider the following example:

Maximize  f(x1, x2) = 15x1 + 30x2 + 4x1x2 − 2x1² − 4x2²,

subject to

x1 + 2x2 ≤ 30

and

x1 ≥ 0,  x2 ≥ 0.

14 For a unified survey of various approaches to duality in nonlinear programming, see A. M. Geoffrion, “Duality in Nonlinear Programming: A Simplified Applications-Oriented Development,” SIAM Review, 13: 1–37, 1971.

As can be verified from the results in Appendix 2 (see Prob. 13.7-1a), the objective function is strictly concave, so this is indeed a quadratic programming problem. In this case,

c = [15  30],    A = [1  2],    x = [x1, x2]ᵀ,    b = [30],

Q = [ 4  −4 ]
    [−4   8 ].

Note that

xᵀQx = [x1  x2] [ 4  −4 ] [x1]
                [−4   8 ] [x2]

     = [(4x1 − 4x2)  (−4x1 + 8x2)] [x1, x2]ᵀ

     = 4x1² − 4x2x1 − 4x1x2 + 8x2²
     = q11x1² + q21x2x1 + q12x1x2 + q22x2².

Multiplying through by −(1/2) gives

−(1/2) xᵀQx = −2x1² + 4x1x2 − 4x2²,

which is the nonlinear portion of the objective function for this example. Since q11 = 4 and q22 = 8, the example illustrates that −(1/2)qjj is the coefficient of xj² in the objective function. The fact that q12 = q21 = −4 illustrates that both −qij and −qji give the total coefficient of the product of xi and xj.

Several algorithms have been developed for the quadratic programming problem that exploit its assumption that the objective function is a concave function. (The results in Appendix 2 make it easy to check whether this assumption holds when the objective function has only two variables.
With more than two variables, another way to verify that the objective function is concave is to verify the equivalent condition that

xᵀQx ≥ 0  for all x,

that is, Q is a positive semidefinite matrix.)

We shall describe one15 of these algorithms, the modified simplex method, which has been quite popular because it requires using only the simplex method with a slight modification. The key to this approach is to construct the KKT conditions from the preceding section and then to reexpress these conditions in a convenient form that closely resembles linear programming. Therefore, before describing the algorithm, we shall develop this convenient form.

The KKT Conditions for Quadratic Programming

For concreteness, let us first consider the above example. Starting with the form given in the preceding section, its KKT conditions are the following:

1(j = 1).  15 + 4x2 − 4x1 − u1 ≤ 0.
2(j = 1).  x1(15 + 4x2 − 4x1 − u1) = 0.
1(j = 2).  30 + 4x1 − 8x2 − 2u1 ≤ 0.
2(j = 2).  x2(30 + 4x1 − 8x2 − 2u1) = 0.
3.  x1 + 2x2 − 30 ≤ 0.
4.  u1(x1 + 2x2 − 30) = 0.
5.  x1 ≥ 0,  x2 ≥ 0.
6.  u1 ≥ 0.

15 P. Wolfe, “The Simplex Method for Quadratic Programming,” Econometrica, 27: 382–398, 1959. This paper develops both a short form and a long form of the algorithm. We present a version of the short form, which assumes further that either c ≤ 0 or the objective function is strictly concave.

To begin reexpressing these conditions in a more convenient form, we move the constants in conditions 1(j = 1), 1(j = 2), and 3 to the right-hand side and then introduce nonnegative slack variables (denoted by y1, y2, and v1, respectively) to convert these inequalities to equations:

1(j = 1).  4x2 − 4x1 − u1 + y1 = −15
1(j = 2).  4x1 − 8x2 − 2u1 + y2 = −30
3.  x1 + 2x2 + v1 = 30

Note that condition 2(j = 1) can now be reexpressed as simply requiring that either x1 = 0 or y1 = 0; that is,

2(j = 1).  x1y1 = 0.

In just the same way, conditions 2(j = 2) and 4 can be replaced by

2(j = 2).  x2y2 = 0,
4.  u1v1 = 0.
For each of these three pairs—(x1, y1), (x2, y2), (u1, v1)—the two variables are called complementary variables, because only one of the two variables can be nonzero. These new forms of conditions 2(j = 1), 2(j = 2), and 4 can be combined into one constraint,

x1y1 + x2y2 + u1v1 = 0,

called the complementarity constraint. After multiplying through the equations for conditions 1(j = 1) and 1(j = 2) by −1 to obtain nonnegative right-hand sides, we now have the desired convenient form for the entire set of conditions shown here:

4x1 − 4x2 + u1 − y1 = 15
−4x1 + 8x2 + 2u1 − y2 = 30
x1 + 2x2 + v1 = 30
x1 ≥ 0,  x2 ≥ 0,  u1 ≥ 0,  y1 ≥ 0,  y2 ≥ 0,  v1 ≥ 0
x1y1 + x2y2 + u1v1 = 0.

This form is particularly convenient because, except for the complementarity constraint, these conditions are linear programming constraints.

For any quadratic programming problem, its KKT conditions can be reduced to this same convenient form containing just linear programming constraints plus one complementarity constraint. In matrix notation again, this general form is

Qx + Aᵀu − y = cᵀ,
Ax + v = b,
x ≥ 0,  u ≥ 0,  y ≥ 0,  v ≥ 0,
xᵀy + uᵀv = 0,

where the elements of the column vector u are the ui of the preceding section and the elements of the column vectors y and v are slack variables. Because the objective function of the original problem is assumed to be concave and because the constraint functions are linear and therefore convex, the corollary to the theorem of Sec. 13.6 applies. Thus, x is optimal if and only if there exist values of y, u, and v such that all four vectors together satisfy all these conditions. The original problem is thereby reduced to the equivalent problem of finding a feasible solution to these constraints.

It is of interest to note that this equivalent problem is one example of the linear complementarity problem introduced in Sec. 13.3 (see Prob. 13.3-6), and that a key constraint for the linear complementarity problem is its complementarity constraint.
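To make the general form concrete, here is an illustrative Python sketch (ours, not from the book) that, given Q, A, c, and b, evaluates the residuals of the equations Qx + Aᵀu − y = cᵀ and Ax + v = b along with the complementarity value at a candidate point.

```python
def kkt_residuals(Q, A, c, b, x, u, y, v):
    """Residuals of Qx + A^T u - y - c^T and Ax + v - b (all zero when
    the conditions hold), plus the complementarity value x^T y + u^T v."""
    n, m = len(x), len(u)
    r1 = [sum(Q[j][k] * x[k] for k in range(n))
          + sum(A[i][j] * u[i] for i in range(m)) - y[j] - c[j]
          for j in range(n)]
    r2 = [sum(A[i][j] * x[j] for j in range(n)) + v[i] - b[i]
          for i in range(m)]
    comp = (sum(x[j] * y[j] for j in range(n))
            + sum(u[i] * v[i] for i in range(m)))
    return r1, r2, comp

Q = [[4, -4], [-4, 8]]
A = [[1, 2]]
c = [15, 30]
b = [30]
# the solution obtained later in this section: x = (12, 9), u1 = 3, slacks 0;
# all residuals and the complementarity value come out 0
print(kkt_residuals(Q, A, c, b, [12, 9], [3], [0, 0], [0]))
```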
The Modified Simplex Method

The modified simplex method exploits the key fact that, with the exception of the complementarity constraint, the KKT conditions in the convenient form obtained above are nothing more than linear programming constraints. Furthermore, the complementarity constraint simply implies that it is not permissible for both complementary variables of any pair to be (nondegenerate) basic variables (the only variables > 0) when (nondegenerate) BF solutions are considered. Therefore, the problem reduces to finding an initial BF solution to any linear programming problem that has these constraints, subject to this additional restriction on the identity of the basic variables. (This initial BF solution may be the only feasible solution in this case.)

As we discussed in Sec. 4.6, finding such an initial BF solution is relatively straightforward. In the simple case where cᵀ ≤ 0 (unlikely) and b ≥ 0, the initial basic variables are the elements of y and v (multiply through the first set of equations by −1), so that the desired solution is x = 0, u = 0, y = −cᵀ, v = b. Otherwise, you need to revise the problem by introducing an artificial variable into each of the equations where cj > 0 (add the variable on the left) or bi < 0 (subtract the variable on the left and then multiply through by −1) in order to use these artificial variables (call them z1, z2, and so on) as initial basic variables for the revised problem. (Note that this choice of initial basic variables satisfies the complementarity constraint, because as nonbasic variables x = 0 and u = 0 automatically.)

Next, use phase 1 of the two-phase method (see Sec. 4.6) to find a BF solution for the real problem; i.e., apply the simplex method (with one modification) to the following linear programming problem:

Minimize  Z = Σj zj,

subject to the linear programming constraints obtained from the KKT conditions, but with these artificial variables included.
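The bookkeeping for deciding which equations need artificial variables can be sketched in a few lines of Python (an illustration of ours, not from the book; the function name is hypothetical):

```python
def artificials_needed(c, b):
    """Which KKT equations need an artificial variable: the j-th
    'Qx + A^T u - y = c^T' equation when c[j] > 0, and the i-th
    'Ax + v = b' equation when b[i] < 0."""
    return ([j for j, cj in enumerate(c) if cj > 0],
            [i for i, bi in enumerate(b) if bi < 0])

# For the example, c = [15, 30] and b = [30]: both c-equations get
# artificial variables (these become z1 and z2), and no b-equation does.
print(artificials_needed([15, 30], [30]))  # ([0, 1], [])
```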
The one modification in the simplex method is the following change in the procedure for selecting an entering basic variable.

Restricted-Entry Rule: When you are choosing an entering basic variable, exclude from consideration any nonbasic variable whose complementary variable already is a basic variable; the choice should be made from the other nonbasic variables according to the usual criterion for the simplex method.

This rule keeps the complementarity constraint satisfied throughout the course of the algorithm. When an optimal solution x*, u*, y*, v*, z1 = 0, . . . , zn = 0 is obtained for the phase 1 problem, x* is the desired optimal solution for the original quadratic programming problem. Phase 2 of the two-phase method is not needed.

Example. We shall now illustrate this approach on the example given at the beginning of the section. As can be verified from the results in Appendix 2 (see Prob. 13.7-1a), f(x1, x2) is strictly concave; i.e.,

Q = [ 4  −4 ]
    [−4   8 ]

is positive definite, so the algorithm can be applied.

The starting point for solving this example is its KKT conditions in the convenient form obtained earlier in the section. After the needed artificial variables are introduced, the linear programming problem to be addressed explicitly by the modified simplex method then is

Minimize  Z = z1 + z2,

subject to

4x1 − 4x2 + u1 − y1 + z1 = 15
−4x1 + 8x2 + 2u1 − y2 + z2 = 30
x1 + 2x2 + v1 = 30

and

x1 ≥ 0,  x2 ≥ 0,  u1 ≥ 0,  y1 ≥ 0,  y2 ≥ 0,  v1 ≥ 0,  z1 ≥ 0,  z2 ≥ 0.

The additional complementarity constraint

x1y1 + x2y2 + u1v1 = 0

is not included explicitly, because the algorithm automatically enforces this constraint because of the restricted-entry rule. In particular, for each of the three pairs of complementary variables—(x1, y1), (x2, y2), (u1, v1)—whenever one of the two variables already is a basic variable, the other variable is excluded as a candidate for the entering basic variable. Remember that the only nonzero variables are basic variables.
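The restricted-entry rule itself is simple to state in code. A minimal illustrative sketch (ours, not from the book; the names are hypothetical) that filters the candidates for the entering basic variable:

```python
# Complementary pairs for the example: (x1, y1), (x2, y2), (u1, v1).
COMPLEMENT = {"x1": "y1", "y1": "x1",
              "x2": "y2", "y2": "x2",
              "u1": "v1", "v1": "u1"}

def eligible_entering(nonbasic, basic):
    """Restricted-entry rule: a nonbasic variable may enter only if its
    complementary variable is not currently basic. Variables without a
    complement (artificials, for instance) are always eligible."""
    return [var for var in nonbasic
            if COMPLEMENT.get(var) not in basic]

# The initial basis for the phase 1 problem is {z1, z2, v1}, so u1 is
# excluded (its complement v1 is basic), as in the first tableau below.
basic = {"z1", "z2", "v1"}
nonbasic = ["x1", "x2", "u1", "y1", "y2"]
print(eligible_entering(nonbasic, basic))  # ['x1', 'x2', 'y1', 'y2']
```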
Because the initial set of basic variables for the linear programming problem—z1, z2, v1—gives an initial BF solution that satisfies the complementarity constraint, there is no way that this constraint can be violated by any subsequent BF solution. Table 13.5 shows the results of applying the modified simplex method to this problem. The first simplex tableau exhibits the initial system of equations after converting from minimizing Z to maximizing −Z and algebraically eliminating the initial basic variables from Eq. (0), just as was done for the radiation therapy example in Sec. 4.6. The three iterations proceed just as for the regular simplex method, except for eliminating certain candidates for the entering basic variable because of the restricted-entry rule. In the first tableau, u1 is eliminated as a candidate because its complementary variable (v1) already is a basic variable (but x2 would have been chosen anyway because −4 < −3). In the second tableau, both u1 and y2 are eliminated as candidates (because v1 and x2 are basic variables), so x1 automatically is chosen as the only candidate with a negative coefficient in row 0 (whereas the regular simplex method would have permitted choosing either x1 or u1 because they are tied for having the largest negative coefficient). In the third tableau, both y1 and y2 are eliminated (because x1 and x2 are basic variables). However, u1 is not eliminated because v1 no longer is a basic variable, so u1 is chosen as the entering basic variable in the usual way.

The resulting optimal solution for this phase 1 problem is x1 = 12, x2 = 9, u1 = 3, with the rest of the variables zero. (Problem 13.7-1c asks you to verify that this solution is optimal by showing that x1 = 12, x2 = 9, u1 = 3 satisfy the KKT conditions for the original problem when they are written in the form given in Sec. 13.6.) Therefore, the optimal solution for the quadratic programming problem (which includes only the x1 and x2 variables) is (x1, x2) = (12, 9).
■ TABLE 13.5 Application of the modified simplex method to the quadratic programming example

[The four simplex tableaux (iterations 0 through 3) are not reproduced here. The initial basic variables are z1, z2, and v1; the entering basic variables are x2, x1, and u1 in turn; and the final tableau gives the optimal phase 1 solution x1 = 12, x2 = 9, u1 = 3.]

The Solved Examples section of the book’s website includes another example that illustrates the application of the modified simplex method to a quadratic programming problem. The KKT conditions also are applied to this example.

Some Software Options

Your IOR Tutorial includes an interactive procedure for the modified simplex method to help you learn this algorithm efficiently. In addition, Excel, MPL/Solvers, LINGO, and LINDO all can solve quadratic programming problems. The procedure for using Excel is almost the same as with linear programming. The one crucial difference is that the equation entered for the cell that contains the value of the objective function now needs to be a quadratic equation. To illustrate, consider again the example introduced at the beginning of the section, which has the objective function

f(x1, x2) = 15x1 + 30x2 + 4x1x2 − 2x1² − 4x2².

Suppose that the values of x1 and x2 are in cells B4 and C4 of the Excel spreadsheet, and that the value of the objective function is in cell F4. Then the equation for cell F4 needs to be

F4 = 15*B4 + 30*C4 + 4*B4*C4 − 2*(B4^2) − 4*(C4^2),

where the symbol ^2 indicates an exponent of 2.
The standard Excel Solver does not have a solving method that is specifically for quadratic programming. However, it does include a solving method called GRG Nonlinear for solving convex programming problems. As pointed out in Sec. 13.3, quadratic programming is a special case of convex programming. Therefore, GRG Nonlinear should be chosen as the solving method in the Solver Parameters dialog box (along with the option of Make Variables Nonnegative) instead of the LP Simplex solving method that always was chosen for solving linear programming problems. The ASPE Solver includes this same solving method, but it also has another one called Quadratic that should be chosen instead because it has been designed specifically to solve quadratic programming problems very efficiently.

When using MPL/Solvers, you should set the model type to Quadratic by adding the following statement at the beginning of the model file:

OPTIONS
ModelType = Quadratic

(Alternatively, you can select the Quadratic Models option from the MPL Language option dialog box, but then you will need to remember to change the setting when dealing with linear programming problems again.) Otherwise, the procedure is the same as with linear programming except that the expression for the objective function now is a quadratic function. Thus, for the example, the objective function would be expressed as

15x1 + 30x2 + 4x1*x2 − 2(x1^2) − 4(x2^2).

Two of the elite solvers included in the student version of MPL—CPLEX and GUROBI—include a special algorithm for solving quadratic programming problems. This objective function would be expressed in this same way for a LINGO model. LINGO/LINDO then will automatically call its nonlinear solver to solve the model. In fact, the Excel, MPL/Solvers, and LINGO/LINDO files for this chapter in your OR Courseware all demonstrate their procedures by showing the details for how these software packages set up and solve this example.
■ 13.8 SEPARABLE PROGRAMMING

The preceding section showed how one class of nonlinear programming problems can be solved by an extension of the simplex method. We now consider another class, called separable programming, that actually can be solved by the simplex method itself, because any such problem can be approximated as closely as desired by a linear programming problem with a larger number of variables.

As indicated in Sec. 13.3, in separable programming it is assumed that the objective function f(x) is concave, that each of the constraint functions gi(x) is convex, and that all these functions are separable functions (functions where each term involves just a single variable). However, to simplify the discussion, we focus here on the special case where the convex and separable gi(x) are, in fact, linear functions, just as for linear programming. (We will turn to the general case briefly at the end of this section.) Thus, only the objective function requires special treatment for this special case.

Under the preceding assumptions, the objective function can be expressed as a sum of concave functions of individual variables,

f(x) = Σ(j=1 to n) fj(xj),

so that each fj(xj) has a shape16 such as the one shown in Fig. 13.15 (either case) over the feasible range of values of xj. Because f(x) represents the measure of performance (say, profit) for all the activities together, fj(xj) represents the contribution to profit from activity j when it is conducted at level xj. The condition of f(x) being separable simply implies additivity (see Sec. 3.3); i.e., there are no interactions between the activities (no cross-product terms) that affect total profit beyond their independent contributions. The assumption that each fj(xj) is concave says that the marginal profitability (slope of the profit curve) either stays the same or decreases (never increases) as xj is increased. Concave profit curves occur quite frequently.
For example, it may be possible to sell a limited amount of some product at a certain price, then a further amount at a lower price, and perhaps finally a further amount at a still lower price. Similarly, it may be necessary to purchase raw materials from increasingly expensive sources. In another common situation, a more expensive production process must be used (e.g., overtime rather than regular-time work) to increase the production rate beyond a certain point.

These kinds of situations can lead to either type of profit curve shown in Fig. 13.15. In case 1, the slope decreases only at certain breakpoints, so that fj(xj) is a piecewise linear function (a sequence of connected line segments). For case 2, the slope may decrease continuously as xj increases, so that fj(xj) is a general concave function. Any such function can be approximated as closely as desired by a piecewise linear function, and this kind of approximation is used as needed for separable programming problems. (Figure 13.15 shows an approximating function that consists of just three line segments, but the approximation can be made even better just by introducing additional breakpoints.) This approximation is very convenient because a piecewise linear function of a single variable can be rewritten as a linear function of several variables, with one special restriction on the values of these variables, as described next.

Reformulation as a Linear Programming Problem

The key to rewriting a piecewise linear function as a linear function is to use a separate variable for each line segment. To illustrate, consider the piecewise linear function fj(xj) shown in Fig. 13.15, case 1 (or the approximating piecewise linear function for case 2), which has three line segments over the feasible range of values of xj. Introduce the three new variables xj1, xj2, and xj3 and set

xj = xj1 + xj2 + xj3,

where

0 ≤ xj1 ≤ uj1,  0 ≤ xj2 ≤ uj2,  0 ≤ xj3 ≤ uj3.
Then use the slopes sj1, sj2, and sj3 to rewrite fj(xj) as

fj(xj) = sj1xj1 + sj2xj2 + sj3xj3,

with the special restriction that

xj2 = 0  whenever  xj1 < uj1,
xj3 = 0  whenever  xj2 < uj2.

16 f(x) is concave if and only if every fj(xj) is concave.

■ FIGURE 13.15 Shape of profit curves for separable programming. Case 1: fj(xj) is concave and piecewise linear, with slopes sj1 > sj2 > sj3 over segments of lengths uj1, uj2, and uj3. Case 2: fj(xj) is just concave; it is approximated by a piecewise linear function with the same breakpoints.

To see why this special restriction is required, suppose that xj = 1, where ujk ≥ 1 (k = 1, 2, 3), so that fj(1) = sj1. Note that

xj1 + xj2 + xj3 = 1

permits

xj1 = 1, xj2 = 0, xj3 = 0  ⇒  fj(1) = sj1,
xj1 = 0, xj2 = 1, xj3 = 0  ⇒  fj(1) = sj2,
xj1 = 0, xj2 = 0, xj3 = 1  ⇒  fj(1) = sj3,

and so on, where

sj1 > sj2 > sj3.

However, the special restriction permits only the first possibility, which is the only one giving the correct value for fj(1).

Unfortunately, the special restriction does not fit into the required format for linear programming constraints, so some piecewise linear functions cannot be rewritten in a linear programming format. However, our fj(xj) are assumed to be concave, so sj1 > sj2 > sj3, so that an algorithm for maximizing f(x) automatically gives the highest priority to using xj1 when (in effect) increasing xj from zero, the next highest priority to using xj2, and so on, without even including the special restriction explicitly in the model. This observation leads to the following key property.

Key Property of Separable Programming.
When f(x) and the gi(x) satisfy the assumptions of separable programming, and when the resulting piecewise linear functions are rewritten as linear functions, deleting the special restriction gives a linear programming model whose optimal solution automatically satisfies the special restriction.

We shall elaborate further on the logic behind this key property later in this section in the context of a specific example. (Also see Prob. 13.8-6a.) To write down the complete linear programming model in the above notation, let nj be the number of line segments in fj(xj) (or the piecewise linear function approximating it), so that

xj = Σ(k=1 to nj) xjk

would be substituted throughout the original model and

fj(xj) = Σ(k=1 to nj) sjk xjk

would be substituted17 into the objective function for j = 1, 2, . . . , n. The resulting model is

Maximize  Z = Σ(j=1 to n) [Σ(k=1 to nj) sjk xjk],

subject to

Σ(j=1 to n) aij [Σ(k=1 to nj) xjk] ≤ bi,        for i = 1, 2, . . . , m
xjk ≤ ujk,        for k = 1, 2, . . . , nj; j = 1, 2, . . . , n

and

xjk ≥ 0,        for k = 1, 2, . . . , nj; j = 1, 2, . . . , n.

(The xj ≥ 0 constraints are deleted because they are ensured by the xjk ≥ 0 constraints.) If some original variable xj has no upper bound, then ujnj = ∞, so the constraint involving this quantity will be deleted.

17 If one or more of the fj(xj) already are linear functions fj(xj) = cj xj, then nj = 1, so neither of these substitutions will be made for j.

An efficient way of solving this model18 is to use the streamlined version of the simplex method for dealing with upper bound constraints (described in Sec. 8.3). After obtaining an optimal solution for this model, you then would calculate

xj = Σ(k=1 to nj) xjk,        for j = 1, 2, . . . , n,

in order to identify an optimal solution for the original separable programming problem (or its piecewise linear approximation).

Example. The Wyndor Glass Co. (see Sec.
3.1) has received a special order for handcrafted goods to be made in Plants 1 and 2 throughout the next four months. Filling this order will require borrowing certain employees from the work crews for the regular products, so the remaining workers will need to work overtime to utilize the full production capacity of the plant’s machinery and equipment for these regular products. In particular, for the two new regular products discussed in Sec. 3.1, overtime will be required to utilize the last 25 percent of the production capacity available in Plant 1 for product 1 and for the last 50 percent of the capacity available in Plant 2 for product 2. The additional cost of using overtime work will reduce the profit for each unit involved from $3 to $2 for product 1 and from $5 to $1 for product 2, giving the profit curves of Fig. 13.16, both of which fit the form for case 1 of Fig. 13.15.

Management has decided to go ahead and use overtime work rather than hire additional workers during this temporary situation. However, it does insist that the work crew for each product be fully utilized on regular time before any overtime is used. Furthermore, it feels that the current production rates (x1 = 2 for product 1 and x2 = 6 for product 2) should be changed temporarily if this would improve overall profitability. Therefore, it has instructed the OR team to review products 1 and 2 again to determine the most profitable product mix during the next four months.

Formulation. To refresh your memory, the linear programming model for the original Wyndor Glass Co. problem in Sec. 3.1 is

Maximize  Z = 3x1 + 5x2,

subject to

x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18

and

x1 ≥ 0,  x2 ≥ 0.

18 For a specialized algorithm for solving this model very efficiently, see R. Fourer, “A Specialized Algorithm for Piecewise-Linear Programming III: Computational Analysis and Applications,” Mathematical Programming, 53: 213–235, 1992. Also see A. M.
Geoffrion, “Objective Function Approximations in Mathematical Programming,” Mathematical Programming, 13: 23–37, 1977, as well as Selected Reference 8.

■ FIGURE 13.16 Profit data during the next 4 months for the Wyndor Glass Co. [Profit curves for products 1 and 2: the rate of profit for product 1 rises with slope 3 up to a production rate of 3 (profit 9) and with slope 2 from rate 3 to rate 4 (profit 11); for product 2 it rises with slope 5 up to rate 3 (profit 15) and with slope 1 from rate 3 to rate 6 (profit 18).]

We now need to modify this model to fit the new situation described above. For this purpose, let the production rate for product 1 be x1 = x1R + x1O, where x1R is the production rate achieved on regular time and x1O is the incremental production rate from using overtime. Define x2 = x2R + x2O in the same way for product 2. Thus, in the notation of the general linear programming model for separable programming given just before this example, n = 2, n1 = 2, and n2 = 2. Plugging the data given in Fig. 13.16 (including maximum rates of production on regular time and on overtime) into this general model gives the specific model for this application. In particular, the new linear programming problem is to determine the values of x1R, x1O, x2R, and x2O so as to

Maximize  Z = 3x1R + 2x1O + 5x2R + x2O,

subject to

x1R + x1O ≤ 4
2(x2R + x2O) ≤ 12
3(x1R + x1O) + 2(x2R + x2O) ≤ 18
x1R ≤ 3,  x1O ≤ 1,  x2R ≤ 3,  x2O ≤ 3

and

x1R ≥ 0,  x1O ≥ 0,  x2R ≥ 0,  x2O ≥ 0.

(Note that the upper bound constraints in the next-to-last row of the model make the first two functional constraints redundant, so these two functional constraints can be deleted.)

However, there is one important factor that is not taken into account explicitly in this formulation. Specifically, there is nothing in the model that requires all available regular time for a product to be fully utilized before any overtime is used for that product. In other words, it may be feasible to have x1O > 0 even when x1R < 3 and to have x2O > 0 even when x2R < 3. Such solutions would not, however, be acceptable to management.
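The model above is exactly what the general substitution produces mechanically. A small illustrative Python sketch (ours, not from the book; the function name is hypothetical) that performs the expansion, shown here for the Plant 3 constraint 3x1 + 2x2 ≤ 18:

```python
def expand_separable(slopes, caps, a):
    """Expand a separable problem into LP data over the x_{jk} variables.
    slopes[j][k] and caps[j][k] describe the segments of f_j; a[i][j] are
    the original (linear) constraint coefficients.  Returns the objective
    coefficients, the upper bounds, and the expanded constraint rows,
    with the x_{jk} variables ordered j-major."""
    obj = [s for seg in slopes for s in seg]
    ub = [u for seg in caps for u in seg]
    # each original coefficient a[i][j] is repeated once per segment of x_j
    rows = [[row[j] for j, seg in enumerate(caps) for _ in seg]
            for row in a]
    return obj, ub, rows

# Product 1: slopes 3, 2 with segment lengths 3, 1; product 2: slopes 5, 1
# with segment lengths 3, 3 (the data of Fig. 13.16).
obj, ub, rows = expand_separable([[3, 2], [5, 1]], [[3, 1], [3, 3]],
                                 [[3, 2]])
print(obj)   # [3, 2, 5, 1]   -> Z = 3*x1R + 2*x1O + 5*x2R + x2O
print(ub)    # [3, 1, 3, 3]
print(rows)  # [[3, 3, 2, 2]] -> 3*(x1R + x1O) + 2*(x2R + x2O) <= 18
```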
(Prohibiting such solutions is the special restriction discussed earlier in this section.)

Now we come to the key property of separable programming. Even though the model does not take this factor into account explicitly, the model does take it into account implicitly! Despite the model’s having excess “feasible” solutions that actually are unacceptable, any optimal solution for the model is guaranteed to be a legitimate one that does not replace any available regular-time work with overtime work. (The reasoning here is analogous to that for the Big M method discussed in Sec. 4.6, where excess feasible but nonoptimal solutions also were allowed in the model as a matter of convenience.) Therefore, the simplex method can be safely applied to this model to find the most profitable acceptable product mix. The reasons are twofold. First, the two decision variables for each product always appear together as a sum, x1R + x1O or x2R + x2O, in each functional constraint other than the upper bound constraints on individual variables. Therefore, it always is possible to convert an unacceptable feasible solution to an acceptable one having the same total production rates, x1 = x1R + x1O and x2 = x2R + x2O, merely by replacing overtime production by regular-time production as much as possible. Second, overtime production is less profitable than regular-time production (i.e., the slope of each profit curve in Fig. 13.16 is a monotonic decreasing function of the rate of production), so converting an unacceptable feasible solution to an acceptable one in this way must increase the total rate of profit Z. Consequently, any feasible solution that uses overtime production for a product when regular-time production is still available cannot be optimal with respect to the model.

For example, consider the unacceptable feasible solution x1R = 1, x1O = 1, x2R = 1, x2O = 3, which yields a total rate of profit Z = 13.
The acceptable way of achieving the same total production rates x1 = 2 and x2 = 4 is x1R = 2, x1O = 0, x2R = 3, x2O = 1. This latter solution is still feasible, but it also increases Z by (3 − 2)(1) + (5 − 1)(2) = 9 to a total rate of profit Z = 22. Similarly, the optimal solution for this model turns out to be x1R = 3, x1O = 1, x2R = 3, x2O = 0, which is an acceptable feasible solution.

Another example that illustrates the application of separable programming is included in the Solved Examples section of the book’s website.

Extensions

Thus far we have focused on the special case of separable programming where the only nonlinear function is the objective function f(x). Now consider briefly the general case where the constraint functions gi(x) need not be linear but are convex and separable, so that each gi(x) can be expressed as a sum of functions of individual variables,

gi(x) = ∑_{j=1}^{n} gij(xj),

where each gij(xj) is a convex function. Once again, each of these new functions may be approximated as closely as desired by a piecewise linear function (if it is not already in that form). The one new restriction is that for each variable xj (j = 1, 2, . . . , n), all the piecewise linear approximations of the functions of this variable [fj(xj), g1j(xj), . . . , gmj(xj)] must have the same breakpoints so that the same new variables (xj1, xj2, . . . , xjnj) can be used for all these piecewise linear functions. This formulation leads to a linear programming model just like the one given for the special case except that for each i and j, the xjk variables now have different coefficients in constraint i [where these coefficients are the corresponding slopes of the piecewise linear function approximating gij(xj)]. Because the gij(xj) are required to be convex, essentially the same logic as before implies that the key property of separable programming still must hold. (See Prob. 13.8-6b.)
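To make the extension concrete, here is a small sketch (an illustration of mine, not an example from the book). It maximizes the linear objective 3x1 + 5x2 subject to the convex separable constraint 9x1² + 5x2² ≤ 216, approximating each convex term piecewise linearly with breakpoints at the integers and solving the resulting LP; the coefficient of each segment variable in the constraint is the slope of the approximating function over that segment:

```python
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2  subject to  9*x1**2 + 5*x2**2 <= 216, x >= 0.
# Each convex term is approximated piecewise linearly with breakpoints at
# 0, 1, 2, ...; over segment k the slope of a*x**2 is a*((k+1)**2 - k**2).
n1, n2 = 4, 7                      # number of unit segments for x1 and x2
slopes1 = [9 * ((k + 1) ** 2 - k ** 2) for k in range(n1)]
slopes2 = [5 * ((k + 1) ** 2 - k ** 2) for k in range(n2)]

c = [-3] * n1 + [-5] * n2          # every segment of xj earns the same cj
A_ub = [slopes1 + slopes2]         # the single approximated constraint
b_ub = [216]
bounds = [(0, 1)] * (n1 + n2)      # each segment variable is at most 1 long

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x1 = sum(res.x[:n1])               # recover x1 and x2 by summing segments
x2 = sum(res.x[n1:])
print(x1, x2, -res.fun)
```

Because the slopes of each convex term increase with k, an optimal LP solution fills the segments of each variable in order (the key property). Here the approximation happens to recover the exact optimum (x1, x2) = (2, 6) with objective value 36, since that point falls on breakpoints.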
One drawback of approximating functions by piecewise linear functions as described in this section is that achieving a close approximation requires a large number of line segments (variables), whereas such a fine grid for the breakpoints is needed only in the immediate neighborhood of an optimal solution. Therefore, more sophisticated approaches that use a succession of two-segment piecewise linear functions have been developed¹⁹ to obtain successively closer approximations within this immediate neighborhood. This kind of approach tends to be both faster and more accurate in closely approximating an optimal solution.

The key property of separable programming depends critically on the assumptions that the objective function f(x) is concave and the constraint functions gi(x) are convex. However, even when either or both of these assumptions are violated, methods have been developed for still doing piecewise-linear optimization by introducing auxiliary binary variables into the model.²⁰ This requires considerably more computational effort, but it provides a reasonable option for attempting to solve the problem.

■ 13.9 CONVEX PROGRAMMING

We already have discussed some special cases of convex programming in Secs. 13.4 and 13.5 (unconstrained problems), 13.7 (quadratic objective function with linear constraints), and 13.8 (separable functions). You also have seen some theory for the general case (necessary and sufficient conditions for optimality) in Sec. 13.6. In this section, we briefly discuss some types of approaches used to solve the general convex programming problem [where the objective function f(x) to be maximized is concave and the gi(x) constraint functions are convex], and then we present one example of an algorithm for convex programming.

There is no single standard algorithm that always is used to solve convex programming problems.
Many different algorithms have been developed, each with its own advantages and disadvantages, and research continues to be active in this area. Roughly speaking, most of these algorithms fall into one of the following three categories. The first category is gradient algorithms, where the gradient search procedure of Sec. 13.5 is modified in some way to keep the search path from penetrating any constraint boundary. For example, one popular gradient method is the generalized reduced gradient (GRG) method. Solver uses the GRG method for solving convex programming problems. (As discussed in the next section, both Solver and ASPE now include an Evolutionary Solver option that is well suited for dealing with nonconvex programming problems.) The second category—sequential unconstrained algorithms—includes penalty function and barrier function methods. These algorithms convert the original constrained optimization problem to a sequence of unconstrained optimization problems whose optimal solutions converge to the optimal solution for the original problem. Each of these unconstrained optimization problems can be solved by the kinds of procedures described in Sec. 13.5. This conversion is accomplished by incorporating the constraints into a penalty function (or barrier function) that is subtracted from the objective function in order to impose large penalties for violating constraints (or even being near constraint boundaries). In the latter part of this section, we will describe an algorithm from the 1960s, called the sequential unconstrained minimization technique (or SUMT for short), that pioneered this category of algorithms. (SUMT also helped to motivate some of the interior-point methods for linear programming.) The third category—sequential-approximation algorithms—includes linear approximation and quadratic approximation methods. These algorithms replace the nonlinear objective function by a succession of linear or quadratic approximations. 
For linearly constrained optimization problems, these approximations allow repeated application of linear or quadratic programming algorithms. This work is accompanied by other analysis that yields a sequence of solutions that converges to an optimal solution for the original problem. Although these algorithms are particularly suitable for linearly constrained optimization problems, some also can be extended to problems with nonlinear constraint functions by the use of appropriate linear approximations.

¹⁹ R. R. Meyer, “Two-Segment Separable Programming,” Management Science, 25: 385–395, 1979.
²⁰ For example, see J. P. Vielma, S. Ahmed, and G. Nemhauser, “Mixed-Integer Models for Nonseparable Piecewise-Linear Optimization: Unifying Framework and Extensions,” Operations Research, 58(2): 303–315, March–April 2010.

As one example of a sequential-approximation algorithm, we present here the Frank-Wolfe algorithm²¹ for the case of linearly constrained convex programming (so the constraints are Ax ≤ b and x ≥ 0 in matrix form). This procedure is particularly straightforward; it combines linear approximations of the objective function (enabling us to use the simplex method) with a procedure for one-variable unconstrained optimization (such as described in Sec. 13.4).

A Sequential Linear Approximation Algorithm (Frank-Wolfe)

Given a feasible trial solution x′, the linear approximation used for the objective function f(x) is the first-order Taylor series expansion of f(x) around x = x′, namely,

f(x) ≈ f(x′) + ∑_{j=1}^{n} [∂f(x′)/∂xj](xj − xj′) = f(x′) + ∇f(x′)(x − x′),

where these partial derivatives are evaluated at x = x′. Because f(x′) and ∇f(x′)x′ have fixed values, they can be dropped to give an equivalent linear objective function

g(x) = ∇f(x′)x = ∑_{j=1}^{n} cj xj,  where cj = ∂f(x)/∂xj evaluated at x = x′.
The simplex method (or the graphical procedure if n = 2) then is applied to the resulting linear programming problem [maximize g(x) subject to the original constraints, Ax ≤ b and x ≥ 0] to find its optimal solution xLP. Note that the linear objective function necessarily increases steadily as one moves along the line segment from x′ to xLP (which is on the boundary of the feasible region). However, the linear approximation may not be a particularly close one for x far from x′, so the nonlinear objective function may not continue to increase all the way from x′ to xLP. Therefore, rather than just accepting xLP as the next trial solution, we choose the point that maximizes the nonlinear objective function along this line segment. This point may be found by conducting a procedure for one-variable unconstrained optimization of the kind presented in Sec. 13.4, where the one variable for purposes of this search is the fraction t of the total distance from x′ to xLP. This point then becomes the new trial solution for initiating the next iteration of the algorithm, as just described. The sequence of trial solutions generated by repeated iterations converges to an optimal solution for the original problem, so the algorithm stops as soon as the successive trial solutions are close enough together to have essentially reached this optimal solution.

Summary of the Frank-Wolfe Algorithm

Initialization: Find a feasible initial trial solution x(0), for example, by applying linear programming procedures to find an initial BF solution. Set k = 1.

Iteration k:
1. For j = 1, 2, . . . , n, evaluate ∂f(x)/∂xj at x = x(k−1) and set cj equal to this value.

²¹ M. Frank and P. Wolfe, “An Algorithm for Quadratic Programming,” Naval Research Logistics Quarterly, 3: 95–110, 1956. Although originally designed for quadratic programming, this algorithm is easily adapted to the case of a general concave objective function considered here.

2.
Find an optimal solution xLP(k) for the following linear programming problem.

Maximize g(x) = ∑_{j=1}^{n} cj xj,

subject to

Ax ≤ b and x ≥ 0.

3. For the variable t (0 ≤ t ≤ 1), set

h(t) = f(x)  for x = x(k−1) + t(xLP(k) − x(k−1)),

so that h(t) gives the value of f(x) on the line segment between x(k−1) (where t = 0) and xLP(k) (where t = 1). Use some procedure for one-variable unconstrained optimization (see Sec. 13.4) to maximize h(t) over 0 ≤ t ≤ 1, and set x(k) equal to the corresponding x. Go to the stopping rule.

Stopping rule: If x(k−1) and x(k) are sufficiently close, stop and use x(k) (or some extrapolation of x(0), x(1), . . . , x(k−1), x(k)) as your estimate of an optimal solution. Otherwise, reset k = k + 1 and perform another iteration.

Now let us illustrate this procedure.

Example. Consider the following quadratic programming problem (a special type of linearly constrained convex programming problem):

Maximize f(x) = 5x1 − x1² + 8x2 − 2x2²,

subject to

3x1 + 2x2 ≤ 6

and

x1 ≥ 0, x2 ≥ 0.

Note that

∂f/∂x1 = 5 − 2x1,  ∂f/∂x2 = 8 − 4x2,

so that the unconstrained maximum x = (5/2, 2) violates the functional constraint. Thus, more work is needed to find the constrained maximum.

Iteration 1: Because x = (0, 0) is clearly feasible (and corresponds to the initial BF solution for the linear programming constraints), let us choose it as the initial trial solution x(0) for the Frank-Wolfe algorithm. Plugging x1 = 0 and x2 = 0 into the expressions for the partial derivatives gives c1 = 5 and c2 = 8, so that g(x) = 5x1 + 8x2 is the initial linear approximation of the objective function. Graphically, solving this linear programming problem (see Fig. 13.17a) yields xLP(1) = (0, 3). For step 3 of the first iteration, the points on the line segment between (0, 0) and (0, 3) shown in Fig. 13.17a are expressed by

(x1, x2) = (0, 0) + t[(0, 3) − (0, 0)] = (0, 3t)

for 0 ≤ t ≤ 1, as shown in the sixth column of Table 13.6.
This expression then gives

h(t) = f(0, 3t) = 8(3t) − 2(3t)² = 24t − 18t²,

■ FIGURE 13.17 Illustration of the Frank-Wolfe algorithm. [Figure not reproduced. Part (a) shows the feasible region with the objective line 24 = 5x1 + 8x2 and the trial solutions x(0) = (0, 0), xLP(1) = (0, 3), x(1), xLP(2) = (2, 0), and x(2); part (b) shows the trial solutions x(1) through x(5) converging toward the optimal solution.]

■ TABLE 13.6 Application of the Frank-Wolfe algorithm to the example

k | x(k−1) | c1 | c2 | xLP(k) | x for h(t)   | h(t)           | t*   | x(k)
1 | (0, 0) | 5  | 8  | (0, 3) | (0, 3t)      | 24t − 18t²     | 2/3  | (0, 2)
2 | (0, 2) | 5  | 0  | (2, 0) | (2t, 2 − 2t) | 8 + 10t − 12t² | 5/12 | (5/6, 7/6)

so that the value t = t* that maximizes h(t) over 0 ≤ t ≤ 1 may be obtained in this case by setting

dh(t)/dt = 24 − 36t = 0,

so that t* = 2/3. This result yields the next trial solution

x(1) = (0, 0) + (2/3)[(0, 3) − (0, 0)] = (0, 2),

which completes the first iteration.

Iteration 2: To sketch the calculations that lead to the results in the second row of Table 13.6, note that x(1) = (0, 2) gives

c1 = 5 − 2(0) = 5,
c2 = 8 − 4(2) = 0.

For the objective function g(x) = 5x1, graphically solving the problem over the feasible region in Fig. 13.17a gives xLP(2) = (2, 0). Therefore, the expression for the line segment between x(1) and xLP(2) (see Fig. 13.17a) is

x = (0, 2) + t[(2, 0) − (0, 2)] = (2t, 2 − 2t),

so that

h(t) = f(2t, 2 − 2t) = 5(2t) − (2t)² + 8(2 − 2t) − 2(2 − 2t)² = 8 + 10t − 12t².

Setting

dh(t)/dt = 10 − 24t = 0

yields t* = 5/12. Hence,

x(2) = (0, 2) + (5/12)[(2, 0) − (0, 2)] = (5/6, 7/6),

which completes the second iteration.

Figure 13.17b shows the trial solutions that are obtained from iterations 3, 4, and 5 as well. You can see how these trial solutions keep alternating between two trajectories that appear to intersect at approximately the point x = (1, 3/2). This point is, in fact, the optimal solution, as can be verified by applying the KKT conditions from Sec. 13.6.

This example illustrates a common feature of the Frank-Wolfe algorithm, namely, that the trial solutions alternate between two (or more) trajectories.
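The iterations above are mechanical enough to script. A minimal sketch of the Frank-Wolfe algorithm for this example (my illustration, not the book's; `linprog` plays the role of the simplex method in step 2, and a bounded scalar search plays the role of the one-variable optimization in step 3):

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def f(x):                       # concave objective of the example
    return 5 * x[0] - x[0] ** 2 + 8 * x[1] - 2 * x[1] ** 2

def grad(x):                    # partial derivatives: the c_j of step 1
    return np.array([5 - 2 * x[0], 8 - 4 * x[1]])

A_ub, b_ub = [[3, 2]], [6]      # feasible region: 3*x1 + 2*x2 <= 6, x >= 0

x = np.array([0.0, 0.0])        # x(0), the initial BF solution
for k in range(1000):
    c = grad(x)
    # Step 2: maximize the linear approximation g(x) = c @ x over the region.
    lp = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                 method="highs")
    x_lp = lp.x
    # Step 3: one-variable search for the t in [0, 1] that maximizes h(t).
    h = lambda t: -f(x + t * (x_lp - x))
    t_star = minimize_scalar(h, bounds=(0, 1), method="bounded").x
    x_new = x + t_star * (x_lp - x)
    if np.linalg.norm(x_new - x) < 1e-7:   # stopping rule
        x = x_new
        break
    x = x_new

print(x, f(x))   # trial solutions approach the optimal solution (1, 3/2)
```

Running this reproduces the zigzag the text describes: the iterates alternate between the two trajectories of Fig. 13.17b while creeping toward (1, 3/2), which is why the convergence is slow and extrapolation is attractive.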
When they alternate in this way, we can extrapolate the trajectories to their approximate point of intersection to estimate an optimal solution. This estimate tends to be better than using the last trial solution generated. The reason is that the trial solutions tend to converge rather slowly toward an optimal solution, so the last trial solution may still be quite far from optimal.

If you would like to see another example of the application of the Frank-Wolfe algorithm, one is included in the Solved Examples section of the book’s website. Your OR Tutor provides an additional example as well. IOR Tutorial also includes an interactive procedure for this algorithm.

Some Other Algorithms

We should emphasize that the Frank-Wolfe algorithm is just one example of sequential-approximation algorithms. Many of these algorithms use quadratic instead of linear approximations at each iteration because quadratic approximations provide a considerably closer fit to the original problem and thus enable the sequence of solutions to converge considerably more rapidly toward an optimal solution than was the case in Fig. 13.17b.

For this reason, even though sequential linear approximation methods such as the Frank-Wolfe algorithm are relatively straightforward to use, sequential quadratic approximation methods now are generally preferred in actual applications. Popular among these are the quasi-Newton (or variable metric) methods. As already mentioned in Sec. 13.5, these methods use a fast approximation of Newton’s method and then further adapt this method to take the constraints of the problem into account. To speed up the algorithm, quasi-Newton methods compute a quadratic approximation to the curvature of a nonlinear function without explicitly calculating second (partial) derivatives. (For linearly constrained optimization problems, this nonlinear function is just the objective function; whereas with nonlinear constraints, it is the Lagrangian function described in Appendix 3.)
Some quasi-Newton algorithms do not even explicitly form and solve an approximating quadratic programming problem at each iteration, but instead incorporate some of the basic ingredients of gradient algorithms. (See Selected Reference 5 for further details about sequential-approximation algorithms.)

We turn now from sequential-approximation algorithms to sequential unconstrained algorithms. As mentioned at the beginning of the section, algorithms of the latter type solve the original constrained optimization problem by instead solving a sequence of unconstrained optimization problems.

A particularly prominent sequential unconstrained algorithm that has been widely used since its development in the 1960s is the sequential unconstrained minimization technique (or SUMT for short).²² There actually are two main versions of SUMT, one of which is an exterior-point algorithm that deals with infeasible solutions while using a penalty function to force convergence to the feasible region. We shall describe the other version, which is an interior-point algorithm that deals directly with feasible solutions while using a barrier function to force staying inside the feasible region. Although SUMT was originally presented as a minimization technique, we shall convert it to a maximization technique in order to be consistent with the rest of the chapter. Therefore, we continue to assume that the problem is in the form given at the beginning of the chapter and that all the functions are differentiable.

Sequential Unconstrained Minimization Technique (SUMT)

As the name implies, SUMT replaces the original problem by a sequence of unconstrained optimization problems whose solutions converge to a solution (local maximum) of the original problem. This approach is very attractive because unconstrained optimization problems are much easier to solve (see Sec. 13.5) than those with constraints.
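The scheme can be sketched in a few lines. The following illustration (mine, not the book's) applies it to the example solved later in this section, maximize x1x2 subject to x1² + x2 ≤ 3 and x1, x2 ≥ 0, using the reciprocal barrier terms introduced below and an off-the-shelf unconstrained optimizer (Nelder-Mead) in place of the gradient search procedure:

```python
import numpy as np
from scipy.optimize import minimize

def P(x, r):
    """Barrier objective P(x; r) = f(x) - r*B(x) for the example problem
    maximize f(x) = x1*x2  subject to  x1**2 + x2 <= 3, x1 >= 0, x2 >= 0."""
    x1, x2 = x
    slack = 3 - x1 ** 2 - x2
    if x1 <= 0 or x2 <= 0 or slack <= 0:
        return -np.inf          # outside the feasible region: barrier blocks it
    return x1 * x2 - r * (1 / slack + 1 / x1 + 1 / x2)

x = np.array([1.0, 1.0])        # feasible interior starting point x(0)
r, theta = 1.0, 0.01            # typical SUMT parameter choices
for k in range(5):
    # Maximize P(x; r) by minimizing -P, warm-starting from the previous x.
    res = minimize(lambda z: -P(z, r), x, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
    x = res.x
    r *= theta                  # shrink r and solve the next problem

print(x)    # the sequence of maximizers converges toward (1, 2)
```

Each pass solves one unconstrained problem; as r shrinks, the barrier's influence fades and the maximizers drift toward the constrained optimum, exactly the behavior tabulated for this example later in the section.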
Each of the unconstrained problems in this sequence involves choosing a (successively smaller) strictly positive value of a scalar r and then solving for x so as to

Maximize P(x; r) = f(x) − rB(x).

Here B(x) is a barrier function that has the following properties (for x that are feasible for the original problem):

1. B(x) is small when x is far from the boundary of the feasible region.
2. B(x) is large when x is close to the boundary of the feasible region.
3. B(x) → ∞ as the distance from the (nearest) boundary of the feasible region → 0.

Thus, by starting the search procedure with a feasible initial trial solution and then attempting to increase P(x; r), B(x) provides a barrier that prevents the search from ever crossing (or even reaching) the boundary of the feasible region for the original problem.

The most common choice of B(x) is

B(x) = ∑_{i=1}^{m} 1/[bi − gi(x)] + ∑_{j=1}^{n} 1/xj.

For feasible values of x, note that the denominator of each term is proportional to the distance of x from the constraint boundary for the corresponding functional or nonnegativity constraint. Consequently, each term is a boundary repulsion term that has all the preceding three properties with respect to this particular constraint boundary. Another attractive feature of this B(x) is that when all the assumptions of convex programming are satisfied, P(x; r) is a concave function.

Because B(x) keeps the search away from the boundary of the feasible region, you probably are asking the very legitimate question: What happens if the desired solution lies there? This concern is the reason that SUMT involves solving a sequence of these unconstrained optimization problems for successively smaller values of r approaching zero (where the final trial solution from each one becomes the initial trial solution for the next). For example, each new r might be obtained from the preceding one by multiplying by a

²² See Selected Reference 4.
constant θ (0 < θ < 1), where a typical value is θ = 0.01. As r approaches 0, P(x; r) approaches f(x), so the corresponding local maximum of P(x; r) converges to a local maximum of the original problem. Therefore, it is necessary to solve only enough unconstrained optimization problems to permit extrapolating their solutions to this limiting solution.

How many are enough to permit this extrapolation? When the original problem satisfies the assumptions of convex programming, useful information is available to guide us in this decision. In particular, if x̄ is a global maximizer of P(x; r), then

f(x̄) ≤ f(x*) ≤ f(x̄) + rB(x̄),

where x* is the (unknown) optimal solution for the original problem. Thus, rB(x̄) is the maximum error (in the value of the objective function) that can result by using x̄ to approximate x*, and extrapolating beyond x̄ to increase f(x) further decreases this error. If an error tolerance is established in advance, then you can stop as soon as rB(x̄) is less than this quantity.

Summary of SUMT

Initialization: Identify a feasible initial trial solution x(0) that is not on the boundary of the feasible region. Set k = 1 and choose appropriate strictly positive values for the initial r and for θ < 1 (say, r = 1 and θ = 0.01).²³

Iteration k: Starting from x(k−1), apply a multivariable unconstrained optimization procedure (e.g., the gradient search procedure) such as described in Sec. 13.5 to find a local maximum x(k) of

P(x; r) = f(x) − r[∑_{i=1}^{m} 1/(bi − gi(x)) + ∑_{j=1}^{n} 1/xj].

Stopping rule: If the change from x(k−1) to x(k) is negligible, stop and use x(k) (or an extrapolation of x(0), x(1), . . . , x(k−1), x(k)) as your estimate of a local maximum of the original problem. Otherwise, reset k = k + 1 and r = θr and perform another iteration.

Finally, we should note that SUMT also can be extended to accommodate equality constraints gi(x) = bi. One standard way is as follows.
For each equality constraint,

[bi − gi(x)]²/√r  replaces  r/[bi − gi(x)]

in the expression for P(x; r) given under “Summary of SUMT,” and then the same procedure is used. The numerator [bi − gi(x)]² imposes a large penalty for deviating substantially from satisfying the equality constraint, and then the denominator tremendously increases this penalty as r is decreased to a tiny amount, thereby forcing the sequence of trial solutions to converge toward a point that satisfies the constraint.

SUMT has been widely used because of its simplicity and versatility. However, numerical analysts have found that it is relatively prone to numerical instability, so considerable caution is advised. For further information on this issue as well as similar analyses for alternative algorithms, see Selected Reference 6.

Example. To illustrate SUMT, consider the following two-variable problem:

Maximize f(x) = x1x2,

subject to

x1² + x2 ≤ 3

and

x1 ≥ 0, x2 ≥ 0.

²³ A reasonable criterion for choosing the initial r is one that makes rB(x) about the same order of magnitude as f(x) for feasible solutions x that are not particularly close to the boundary.

■ TABLE 13.7 Illustration of SUMT

k | r    | x1(k) | x2(k)
0 | —    | 1     | 1
1 | 1    | 0.90  | 1.36
2 | 10⁻² | 0.987 | 1.925
3 | 10⁻⁴ | 0.998 | 1.993
  |      | ↓ 1   | ↓ 2

Even though g1(x) = x1² + x2 is convex (because each term is convex), this problem is a nonconvex programming problem because f(x) = x1x2 is not concave (see Appendix 2). However, the problem is close enough to being a convex programming problem that SUMT necessarily will still converge to an optimal solution in this case. (We will discuss nonconvex programming further, including the role of SUMT in dealing with such problems, in the next section.)

For the initialization, (x1, x2) = (1, 1) is one obvious feasible solution that is not on the boundary of the feasible region, so we can set x(0) = (1, 1). Reasonable choices for r and θ are r = 1 and θ = 0.01. For each iteration,

P(x; r) = x1x2 − r[1/(3 − x1² − x2) + 1/x1 + 1/x2].

With r = 1, applying the gradient search procedure starting from (1, 1) to maximize this expression eventually leads to x(1) = (0.90, 1.36). Resetting r = 0.01 and restarting the gradient search procedure from (0.90, 1.36) then lead to x(2) = (0.983, 1.933). One more iteration with r = 0.01(0.01) = 0.0001 leads from x(2) to x(3) = (0.998, 1.994). This sequence of points, summarized in Table 13.7, quite clearly is converging to (1, 2). Applying the KKT conditions to this solution verifies that it does indeed satisfy the necessary condition for optimality. Graphical analysis demonstrates that (x1, x2) = (1, 2) is, in fact, a global maximum (see Prob. 13.9-13b).

For this problem, there are no local maxima other than (x1, x2) = (1, 2), so reapplying SUMT from various feasible initial trial solutions always leads to this same solution.²⁴

The Solved Examples section of the book’s website provides another example that illustrates the application of SUMT to a convex programming problem in minimization form. You also can go to your OR Tutor to see an additional example. An automatic procedure for executing SUMT is included in IOR Tutorial.

²⁴ The technical reason is that f(x) is a (strictly) quasiconcave function that shares the property of concave functions that a local maximum always is a global maximum. For further information, see M. Avriel, W. E. Diewert, S. Schaible, and I. Zang, Generalized Concavity, Plenum, New York, 1985, and republished by SIAM Bookmart, Philadelphia, PA, 2010.

Some Software Options for Convex Programming

As mentioned in Sec. 13.7, the standard Excel Solver includes a solving method called GRG Nonlinear for solving convex programming problems. The ASPE Solver also includes this solving method. The Excel file for this chapter shows the application of this
LINGO can solve convex programming problems, but the student version of LINDO cannot except for the special case of quadratic programming (which includes the first example in this section). Details for this example are given in the LINGO/LINDO file for this chapter in your OR Courseware. The professional version of MPL supports a large number of solvers, including some that can handle convex programming. One of these, called CONOPT, is included with the student version of MPL that is on the book’s website. CONOPT (a product of AKRI Consulting) is designed specifically to solve convex programming problems very efficiently. It can be used by adding the following statement at the beginning of the MPL model file. OPTIONS ModelType  Nonlinear The convex programming examples that are formulated in this chapter’s MPL/Solvers file have been solved with this solver. ■ 13.10 NONCONVEX PROGRAMMING (WITH SPREADSHEETS) The assumptions of convex programming (the function f(x) to be maximized is concave and all the gi(x) constraint functions are convex) are very convenient ones, because they ensure that any local maximum also is a global maximum. (If the objective is to minimize f(x) instead, then convex programming assumes that f(x) is convex, and so on, which ensures that a local minimum also is a global minimum.) Unfortunately, the nonlinear programming problems that arise in practice frequently fail to satisfy these assumptions. What kind of approach can be used to deal with such nonconvex programming problems? The Challenge of Solving Nonconvex Programming Problems There is no single answer to the above question because there are so many different types of nonconvex programming problems. Some are much more difficult to solve than others. For example, a maximization problem where the objective function is nearly convex generally is much more difficult than one where the objective function is nearly concave. (The SUMT example in Sec. 
13.9 illustrated a case where the objective function was so close to being concave that the problem could be treated as if it were a convex programming problem.) Similarly, having a feasible region that is not a convex set (because some of the gi(x) functions are not convex) generally is a major complication. Dealing with functions that are not differentiable, or perhaps not even continuous, also tends to be a major complication.

The goal of much ongoing research is to develop efficient global optimization procedures for finding a globally optimal solution for various types of nonconvex programming problems, and some progress has been made. As one example, LINDO Systems (which produces LINDO, LINGO, and What’sBest!) has incorporated a global optimizer into its advanced solver that is shared by some of its software products. In particular, LINGO and What’sBest! have a multistart option to automatically generate a number of starting points for their nonlinear programming solver in order to quickly find a good solution. If the global option is checked, they next employ the global optimizer. The global optimizer converts a nonconvex programming problem (including even those whose formulation includes logic functions such as IF, AND, OR, and NOT) into several subproblems that are convex programming relaxations of portions of the original problem. The branch-and-bound technique then is used to exhaustively search over the subproblems. Once the procedure runs to completion, the solution found is guaranteed to be a globally optimal solution. (The other possible conclusion is that the problem has no feasible solutions.) The student version of this global optimizer is included in the version of LINGO that is provided on the book’s website. However, it is limited to relatively small problems (a maximum of five nonlinear variables out of 500 variables total). The professional version of the global optimizer has successfully solved some much larger problems.

Similarly, MPL now supports a global optimizer called LGO. The student version of LGO is available to you as one of the MPL solvers provided on the book’s website. LGO also can be used to solve convex programming problems.

A variety of approaches to global optimization (such as the one incorporated into LINGO described above) are being tried. We will not attempt to survey this advanced topic in any depth. We instead will begin with a simple case and then introduce a more general approach at the end of the section. We will illustrate our methodology with spreadsheets and Excel software, but other software packages also can be used.

An Application Vignette

Deutsche Post DHL is the largest logistics service provider worldwide. It employs over half a million people in more than 220 countries while delivering three million items and over 70 million letters each day with over 150,000 vehicles. The dramatic story of how DHL quickly achieved this lofty status is one that combines enlightened managerial leadership, an innovative marketing campaign, and the application of nonlinear programming to optimize the use of marketing resources.

Starting as just a German postal service, the company’s senior management developed a visionary plan to begin the 21st century by transforming the company into a truly global logistics business. The first step was to acquire and integrate a number of similar companies that already had a strong presence in various other parts of the world. Because customers who operate on a global scale expect to deal with just one provider, the next step was to develop an aggressive marketing program based on extensive marketing research to rebrand DHL as a superior truly global company that could fully meet the needs of these customers. These marketing activities were pursued vigorously in more than 20 of the largest countries on four continents.

This kind of marketing program is very expensive, so it is important to use the limited marketing resources as effectively as possible. Therefore, OR analysts developed a brand choice model with an objective function that measures this effectiveness. Nonconvex programming then was implemented in a spreadsheet environment to maximize this objective function without exceeding the total marketing budget. This innovative use of marketing theory and nonlinear programming led to a substantial increase in the global brand value of DHL that enabled it to catapult into a market-leading position. This increase from 2003 to 2008 was estimated to be $1.32 billion (a 32 percent increase). The corresponding return on investment was 38 percent.

Source: M. Fischer, W. Giehl, and T. Freundt, “Managing Global Brand Investments at DHL,” Interfaces, 41(1): 35–50, Jan.–Feb. 2011. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Using Solver to Find Local Optima

We now will focus on straightforward approaches to relatively simple types of nonconvex programming problems. In particular, we will consider (maximization) problems where the objective function is nearly concave either over the entire feasible region or within major portions of the feasible region. We also will ignore the added complexity of having nonconvex constraint functions gi(x) by simply using linear constraints. We will begin by illustrating what can be accomplished by simply applying some algorithm for convex programming to such problems. Although any such algorithm (such as those described in Sec. 13.9) could be selected, we will use the convex programming algorithm that is employed by Solver for nonlinear programming problems.

For example, consider the following one-variable nonconvex programming problem:

Maximize Z = 0.5x⁵ − 6x⁴ + 24.5x³ − 39x² + 20x,

subject to

x ≤ 5

and

x ≥ 0,

■ FIGURE 13.18 The profit graph for a nonconvex programming example.
where Z represents the profit in dollars. Figure 13.18 shows a plot of the profit over the feasible region that demonstrates how highly nonconvex this function is. However, if this graph were not available, it might not be immediately clear that this is not a convex programming problem, since a little analysis is required to verify that the objective function is not concave over the feasible region. Therefore, suppose that Solver's GRG Nonlinear solving method, which is designed for solving convex programming problems, is applied to this example. (ASPE also has this same solving method, so it would be applied in the same way.) Figure 13.19 demonstrates what a difficult time Solver has in attempting to cope with this problem. The model is straightforward to formulate in a spreadsheet, with x (C5) as the changing cell and Profit (C8) as the objective cell. (Note that GRG Nonlinear is chosen as the solving method.) When x = 0 is entered as the initial value in the changing cell, the left spreadsheet in Fig. 13.19 shows that Solver then indicates that x = 0.371 is the optimal solution with Profit = $3.19. However, if x = 3 is entered as the initial value instead, as in the middle spreadsheet in Fig. 13.19, Solver obtains x = 3.126 as the optimal solution with Profit = $6.13. Trying still another initial value of x = 4.7 in the right spreadsheet, Solver now indicates an optimal solution of x = 5 with Profit = $0. What is going on here? Figure 13.18 helps to explain Solver's difficulties with this problem. Starting at x = 0, the profit graph does indeed climb to a peak at x = 0.371, as reported in the left spreadsheet of Fig. 13.19. Starting at x = 3 instead, the graph climbs to a peak at x = 3.126, which is the solution found in the middle spreadsheet. Using the right spreadsheet's starting solution of x = 4.7, the graph climbs until it reaches the boundary imposed by the x ≤ 5 constraint, so x = 5 is the peak in that direction.
These three peaks are the local maxima (or local optima) because each one is a maximum of the graph within a local neighborhood of that point. However, only the largest of these local maxima is the global maximum, that is, the highest point on the entire graph. Thus, the middle spreadsheet in Fig. 13.19 did succeed in finding the globally optimal solution at x = 3.126 with Profit = $6.13.

13.10 NONCONVEX PROGRAMMING (WITH SPREADSHEETS) 601

■ FIGURE 13.19 An example of a nonconvex programming problem (depicted in Fig. 13.18) where Solver obtains three different solutions when it starts with three different initial solutions.

Solver Parameters
Set Objective Cell: Profit
To: Max
By Changing Variable Cells: x
Subject to the Constraints: x <= Maximum
Solver Options:
Make Variables Nonnegative
Solving Method: GRG Nonlinear

Solver uses the generalized reduced gradient method, which adapts the gradient search method described in Sec. 13.5 to solve convex programming problems. Therefore, this algorithm can be thought of as a hill-climbing procedure. It starts at the initial solution entered into the changing cells and then begins climbing that hill until it reaches the peak (or is blocked from climbing further by reaching the boundary imposed by the constraints). The procedure terminates when it reaches this peak (or boundary) and reports this solution. It has no way of detecting whether there is a taller hill somewhere else on the profit graph. The same thing would happen with any other hill-climbing procedure, such as SUMT (described in Sec. 13.9), that stops when it finds a local maximum. Thus, if SUMT were to be applied to this example with each of the three initial trial solutions used in Fig. 13.19, it would find the same three local maxima found by Solver.
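This hill-climbing behavior is easy to reproduce outside a spreadsheet. The sketch below is an illustration only, with SciPy's bounded quasi-Newton routine standing in for Solver's GRG Nonlinear method (it is not the same algorithm); it climbs the profit graph of Fig. 13.18 from the same three starting points used in Fig. 13.19.

```python
# Local hill climbing on the nonconvex profit function of Fig. 13.18.
# Illustrative sketch: SciPy's L-BFGS-B plays the role of a generic
# hill-climbing procedure such as Solver's GRG Nonlinear method.
from scipy.optimize import minimize

def profit(x):
    # Z = 0.5x^5 - 6x^4 + 24.5x^3 - 39x^2 + 20x
    return 0.5 * x**5 - 6 * x**4 + 24.5 * x**3 - 39 * x**2 + 20 * x

def climb(x0):
    """Climb the profit graph from x0, staying within 0 <= x <= 5."""
    res = minimize(lambda x: -profit(x[0]), x0=[x0],
                   method="L-BFGS-B", bounds=[(0.0, 5.0)])
    return res.x[0], profit(res.x[0])

for x0 in (0.0, 3.0, 4.7):  # the three starts used in Fig. 13.19
    x_peak, z_peak = climb(x0)
    print(f"start {x0:3.1f} -> local maximum x = {x_peak:.3f}, Z = {z_peak:.2f}")
```

Each run reports only the peak of the hill it started on (x ≈ 0.371, x ≈ 3.126, and x = 5, respectively), matching the three local maxima that Solver finds in Fig. 13.19.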
A More Systematic Approach to Finding Local Optima

A common approach to "easy" nonconvex programming problems is to apply some algorithmic hill-climbing procedure that will stop when it finds a local maximum and then to restart it a number of times from a variety of initial trial solutions (either chosen randomly or as a systematic cross-section) in order to find as many distinct local maxima as possible. The best of these local maxima is then chosen for implementation. Normally, the hill-climbing procedure is one that has been designed to find a global maximum when all the assumptions of convex programming hold, but it also can operate to find a local maximum when they do not.

Solver includes an automated way of trying multiple starting points. In Excel's Solver, clicking on the Options button and then choosing the GRG Nonlinear tab brings up the Options dialog box shown in Fig. 13.20. Selecting the Use Multistart option causes Solver to randomly select 100 different starting points. (The number of starting points can be varied by changing the Population Size option.) In ASPE's Solver, these options are available on the Engine tab in the Model pane. When Multistart is enabled, Solver then provides the best solution found after solving with each of the different starting points.

■ FIGURE 13.20 The GRG Nonlinear Options dialog box provides several parameters for solving nonlinear models. The Multistart option causes Solver to try many random starting points. (The number of starting points can be adjusted by changing the Population Size.)

Unfortunately, there generally is no guarantee of finding a globally optimal solution, no matter how many different starting points are tried. Also, if the profit graphs are not smooth (e.g., if they have discontinuities or kinks), then Solver may not even be able to find local optima when using GRG Nonlinear as the solving method.
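The multistart idea itself takes only a few lines to sketch. The code below is a minimal illustration of the restart strategy, not of Solver's actual Multistart implementation: it draws random starting points in the feasible interval of the Fig. 13.18 example, climbs from each one, and keeps the best local maximum found.

```python
# Multistart local search for the Fig. 13.18 example: draw random
# starting points, climb from each, keep the best peak found.
# Illustrative stand-in for Solver's Multistart option.
import random
from scipy.optimize import minimize

def profit(x):
    return 0.5 * x**5 - 6 * x**4 + 24.5 * x**3 - 39 * x**2 + 20 * x

def multistart(n_starts=20, seed=0):
    rng = random.Random(seed)
    best_x, best_z = None, float("-inf")
    for _ in range(n_starts):
        x0 = rng.uniform(0.0, 5.0)  # random feasible starting point
        res = minimize(lambda x: -profit(x[0]), x0=[x0],
                       method="L-BFGS-B", bounds=[(0.0, 5.0)])
        x_loc = res.x[0]
        if profit(x_loc) > best_z:  # keep the tallest peak seen so far
            best_x, best_z = x_loc, profit(x_loc)
    return best_x, best_z

x_best, z_best = multistart()
print(f"best local maximum found: x = {x_best:.3f}, Z = {z_best:.2f}")
```

With even a modest number of random starts, at least one is very likely to land on the slopes of the tallest hill, so the procedure returns the global maximum x ≈ 3.126 for this example, although, as noted above, no finite number of starts can guarantee this in general.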
Fortunately, both ASPE and recent versions of Excel's Solver provide another search procedure, called Evolutionary Solver, to attempt to solve these somewhat more difficult nonconvex programming problems.

Evolutionary Solver

Both ASPE's Solver and the standard Excel Solver (for Excel 2010 and newer) include a search procedure called Evolutionary Solver in the set of tools available to search for an optimal solution for a model. The philosophy of Evolutionary Solver is based on genetics, evolution, and the survival of the fittest. Hence, this type of algorithm is sometimes called a genetic algorithm. We will devote Sec. 14.4 to describing how genetic algorithms operate.

Evolutionary Solver has three crucial advantages over the standard Solver (or any other convex programming algorithm) for solving nonconvex programming problems. First, the complexity of the objective function does not impact Evolutionary Solver. As long as the function can be evaluated for a given trial solution, it does not matter whether the function has kinks or discontinuities or many local optima. Second, the complexity of the given constraints (including even nonconvex constraints) also does not substantially impact Evolutionary Solver (although the number of constraints does). Third, because it evaluates whole populations of trial solutions that aren't necessarily in the same neighborhood as the current best trial solution, Evolutionary Solver avoids becoming trapped at a local optimum. In fact, Evolutionary Solver is guaranteed to eventually find a globally optimal solution for any nonlinear programming problem (including nonconvex programming problems) if it is run forever (which is impractical, of course). Therefore, Evolutionary Solver is well suited for dealing with many relatively small nonconvex programming problems.

On the other hand, it must be pointed out that Evolutionary Solver is not a panacea. First, it can take much longer than the standard Solver to find a final solution.
Second, Evolutionary Solver does not perform well on models that have many constraints. Third, Evolutionary Solver is a random process, so running it again on the same model usually will yield a different final solution. Finally, the best solution found typically is not quite optimal (although it may be very close). Evolutionary Solver does not continuously move toward better solutions. Rather, it is more like an intelligent search engine, trying out different random solutions. Thus, while it is quite likely to end up with a solution that is very close to optimal, it almost never returns the exact globally optimal solution for most types of nonlinear programming problems. Consequently, it often can be beneficial to run Solver with the GRG Nonlinear option after Evolutionary Solver, starting with the final solution obtained by Evolutionary Solver, to see if this solution can be improved by searching around its neighborhood.

■ 13.11 CONCLUSIONS

Practical optimization problems frequently involve nonlinear behavior that must be taken into account. It is sometimes possible to reformulate these nonlinearities to fit into a linear programming format, as can be done for separable programming problems. However, it is frequently necessary to use a nonlinear programming formulation.

In contrast to the case of the simplex method for linear programming, there is no efficient all-purpose algorithm that can be used to solve all nonlinear programming problems. In fact, some of these problems cannot be solved in a very satisfactory manner by any method. However, considerable progress has been made for some important classes of problems, including quadratic programming, convex programming, and certain special types of nonconvex programming. A variety of algorithms that frequently perform well are available for these cases.
Some of these algorithms incorporate highly efficient procedures for unconstrained optimization for a portion of each iteration, and some use a succession of linear or quadratic approximations to the original problem.

There has been a strong emphasis in recent years on developing high-quality, reliable software packages for general use in applying the best of these algorithms. For example, several powerful software packages have been developed in the Systems Optimization Laboratory at Stanford University. This chapter also has pointed out the impressive capabilities of Solver, ASPE, MPL/Solvers, and LINGO/LINDO. These packages are widely used for solving many of the types of problems discussed in this chapter (as well as linear and integer programming problems). The steady improvements being made in both algorithmic techniques and software now are bringing some rather large problems into the range of computational feasibility. Research in nonlinear programming remains very active.

■ SELECTED REFERENCES

1. Bazaraa, M. S., H. D. Sherali, and C. M. Shetty: Nonlinear Programming: Theory and Algorithms, 3rd ed., Wiley, Hoboken, NJ, 2006.
2. Best, M. J.: Portfolio Optimization, Chapman & Hall/CRC Press, Boca Raton, FL, 2010.
3. Boyd, S., and L. Vandenberghe: Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
4. Fiacco, A. V., and G. P. McCormick: Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Classics in Applied Mathematics 4, Society for Industrial and Applied Mathematics, Philadelphia, 1990. (Reprint of a classic book published in 1968.)
5. Fletcher, R.: Practical Methods of Optimization, 2nd ed., Wiley, Hoboken, NJ, 2000.
6. Gill, P. E., W. Murray, and M. H. Wright: Practical Optimization, Academic Press, London, 1981.
7. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 8.
8. Li, H.-L., H.-C. Lu, C.-H. Huang, and N.-Z. Hu: "A Superior Representation Method for Piecewise Linear Functions," INFORMS Journal on Computing, 21(2): 314–321, Spring 2009.
9. Luenberger, D., and Y. Ye: Linear and Nonlinear Programming, 3rd ed., Springer, New York, 2008.
10. Murty, K. G.: Optimization for Decision Making: Linear and Quadratic Models, Springer, New York, 2010.
11. Vielma, J. P., S. Ahmed, and G. Nemhauser: "Mixed-Integer Models for Nonseparable Piecewise-Linear Optimization: Unifying Framework and Extensions," Operations Research, 58(2): 303–315, March–April 2010.
12. Yunes, T., I. D. Aron, and J. N. Hooker: "An Integrated Solver for Optimization Problems," Operations Research, 58(2): 342–356, March–April 2010.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 13

Demonstration Examples in OR Tutor:
Gradient Search Procedure
Frank-Wolfe Algorithm
Sequential Unconstrained Minimization Technique—SUMT

Interactive Procedures in IOR Tutorial:
Interactive One-Dimensional Search Procedure
Interactive Gradient Search Procedure
Interactive Modified Simplex Method
Interactive Frank-Wolfe Algorithm

Automatic Procedures in IOR Tutorial:
Automatic Gradient Search Procedure
Sequential Unconstrained Minimization Technique—SUMT

Excel Add-in:
Analytic Solver Platform for Education (ASPE)

"Ch. 13—Nonlinear Programming" Files for Solving the Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File

Glossary for Chapter 13

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:

D: The corresponding demonstration example just listed in Learning Aids may be helpful.
I: We suggest that you use the corresponding interactive routine just listed (the printout records your work).
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem.

An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

13.1-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 13.1. Briefly describe how nonlinear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

13.1-2. Consider the product mix problem described in Prob. 3.1-11. Suppose that this manufacturing firm actually encounters price elasticity in selling the three products, so that the profits would be different from those stated in Chap. 3. In particular, suppose that the unit costs for producing products 1, 2, and 3 are $25, $10, and $15, respectively, and that the prices required (in dollars) in order to be able to sell x1, x2, and x3 units are (35 − 100x1^(1/3)), (15 − 40x2^(1/4)), and (20 − 50x3^(1/2)), respectively. Formulate a nonlinear programming model for the problem of determining how many units of each product the firm should produce to maximize profit.

13.1-3. For the P & T Co. problem described in Sec. 9.1, suppose that there is a 10 percent discount in the shipping cost for all truckloads beyond the first 40 for each combination of cannery and warehouse. Draw figures like Figs. 13.3 and 13.4, showing the marginal cost and total cost for shipments of truckloads of peas from cannery 1 to warehouse 1. Then describe the overall nonlinear programming model for this problem.

13.1-4. A stockbroker, Richard Smith, has just received a call from his most important client, Ann Hardy. Ann has $50,000 to invest and wants to use it to purchase two stocks. Stock 1 is a solid blue-chip security with a respectable growth potential and little risk involved. Stock 2 is much more speculative.
It is being touted in two investment newsletters as having outstanding growth potential but also is considered very risky. Ann would like a large return on her investment but also has considerable aversion to risk. Therefore, she has instructed Richard to analyze what mix of investments in the two stocks would be appropriate for her. Ann is used to talking in units of thousands of dollars and 1,000-share blocks of stocks. Using these units, the price per block is 20 for stock 1 and 30 for stock 2. After doing some research, Richard has made the following estimates. The expected return per block is 5 for stock 1 and 10 for stock 2. The variance of the return on each block is 4 for stock 1 and 100 for stock 2. The covariance of the return on one block each of the two stocks is 5. Without yet assigning a specific numerical value to the minimum acceptable expected return, formulate a nonlinear programming model for this problem. (To be continued in Prob. 13.7-6.)

13.2-1. Reconsider Prob. 13.1-2. Verify that this problem is a convex programming problem.

13.2-2. Reconsider Prob. 13.1-4. Show that the model formulated is a convex programming problem by using the test in Appendix 2 to show that the objective function being minimized is convex.

13.2-3. Consider the variation of the Wyndor Glass Co. example represented in Fig. 13.5, where the second and third functional constraints of the original problem (see Sec. 3.1) have been replaced by 9x1² + 5x2² ≤ 216. Demonstrate that (x1, x2) = (2, 6) with Z = 36 is indeed optimal by showing that the objective function line 36 = 3x1 + 5x2 is tangent to this constraint boundary at (2, 6). (Hint: Express x2 in terms of x1 on this boundary, and then differentiate this expression with respect to x1 to find the slope of the boundary.)

13.2-4. Consider the variation of the Wyndor Glass Co. problem represented in Fig. 13.6, where the original objective function (see Sec. 3.1) has been replaced by Z = 126x1 − 9x1² + 182x2 − 13x2².
Demonstrate that (x1, x2) = (8/3, 5) with Z = 857 is indeed optimal by showing that the ellipse 857 = 126x1 − 9x1² + 182x2 − 13x2² is tangent to the constraint boundary 3x1 + 2x2 = 18 at (8/3, 5). (Hint: Solve for x2 in terms of x1 for the ellipse, and then differentiate this expression with respect to x1 to find the slope of the ellipse.)

13.2-5. Consider the following function:

f(x) = 48x − 60x² + x³.

(a) Use the first and second derivatives to find the local maxima and local minima of f(x).
(b) Use the first and second derivatives to show that f(x) has neither a global maximum nor a global minimum because it is unbounded in both directions.

13.2-6. For each of the following functions, show whether it is convex, concave, or neither.
(a) f(x) = 10x − x²
(b) f(x) = x⁴ + 6x² + 12x
(c) f(x) = 2x³ − 3x²
(d) f(x) = x⁴ + x²
(e) f(x) = x³ + x⁴

13.2-7.* For each of the following functions, use the test given in Appendix 2 to determine whether it is convex, concave, or neither.
(a) f(x) = x1x2 − x1² − x2²
(b) f(x) = 3x1 − 2x1² + 4x2 − x2² + 2x1x2
(c) f(x) = x1² + 3x1x2 + 2x2²
(d) f(x) = 20x1 + 10x2
(e) f(x) = x1x2

13.2-8. Consider the following function:

f(x) = 5x1 + 2x2² + x3² − 3x3x4 + 4x4² + 2x5⁴ + x5² − 3x5x6 + 6x6² − 3x6x7 + x7².

Show that f(x) is convex by expressing it as a sum of functions of one or two variables and then showing (see Appendix 2) that all these functions are convex.

13.2-9. Consider the following nonlinear programming problem:

Maximize f(x) = x1 + x2,

subject to

x1² + x2² ≤ 1

and

x1 ≥ 0, x2 ≥ 0.

(a) Verify that this is a convex programming problem.
(b) Solve this problem graphically.

13.2-10. Consider the following nonlinear programming problem:

Minimize Z = x1⁴ + 2x2²,

subject to

x1² + x2² ≥ 2.

(No nonnegativity constraints.)
(a) Use geometric analysis to determine whether the feasible region is a convex set.
(b) Now use algebra and calculus to determine whether the feasible region is a convex set.

13.3-1. Reconsider Prob. 13.1-3. Show that this problem is a nonconvex programming problem.

13.3-2. Consider the following constrained optimization problem:

Maximize f(x) = 6x + 3x² − 2x³,

subject to

x ≥ 0.

Use just the first and second derivatives of f(x) to derive an optimal solution.

13.3-3. Consider the following nonlinear programming problem:

Minimize Z = x1⁴ + 2x1² + 2x1x2 + 4x2²,

subject to

2x1 + x2 ≥ 10
x1 + 2x2 ≥ 10

and

x1 ≥ 0, x2 ≥ 0.

(a) Of the special types of nonlinear programming problems described in Sec. 13.3, to which type or types can this particular problem be fitted? Justify your answer.
(b) Now suppose that the problem is changed slightly by replacing the nonnegativity constraints by x1 ≥ 1 and x2 ≥ 1. Convert this new problem to an equivalent problem that has just two functional constraints, two variables, and two nonnegativity constraints.

13.3-4. Consider the following geometric programming problem:

Minimize f(x) = 2x1⁻²x2⁻¹ + x2⁻²,

subject to

4x1x2 + x1²x2² ≤ 12

and

x1 > 0, x2 > 0.

(a) Transform this problem to an equivalent convex programming problem.
(b) Use the test given in Appendix 2 to verify that the model formulated in part (a) is indeed a convex programming problem.

13.3-5. Consider the following linear fractional programming problem:

Maximize f(x) = (10x1 + 20x2 + 10) / (3x1 + 4x2 + 20),

subject to

x1 + 3x2 ≤ 50
3x1 + 2x2 ≤ 80

and

x1 ≥ 0, x2 ≥ 0.

(a) Transform this problem to an equivalent linear programming problem.
C (b) Use the computer to solve the model formulated in part (a). What is the resulting optimal solution for the original problem?

13.3-6. Consider the expressions in matrix notation given in Sec. 13.7 for the general form of the KKT conditions for the quadratic programming problem. Show that the problem of finding a feasible solution for these conditions is a linear complementarity problem, as introduced in Sec. 13.3, by identifying w, z, q, and M in terms of the vectors and matrices in Sec.
13.7.

13.4-1.* Consider the following problem:

Maximize f(x) = x³ + 2x − 2x² − 0.25x⁴.

(a) Apply the bisection method to (approximately) solve this problem. Use an error tolerance ε = 0.04 and initial bounds x̲ = 0, x̄ = 2.4.
(b) Apply Newton's method, with ε = 0.001 and x1 = 1.2, to this problem.

I 13.4-2. Use the bisection method with an error tolerance ε = 0.04 and with the following initial bounds to interactively solve (approximately) each of the following problems.
(a) Maximize f(x) = 6x − x², with x̲ = 0, x̄ = 4.8.
(b) Minimize f(x) = 6x + 7x² + 4x³ + x⁴, with x̲ = −4, x̄ = 1.

I 13.4-3. Consider the following problem:

Maximize f(x) = 48x⁵ + 42x³ + 3.5x − 16x⁶ − 61x⁴ − 16.5x².

(a) Apply the bisection method to (approximately) solve this problem. Use an error tolerance ε = 0.08 and initial bounds x̲ = −1, x̄ = 4.
(b) Apply Newton's method, with ε = 0.001 and x1 = 1, to this problem.

I 13.4-4. Consider the following problem:

Maximize f(x) = x³ + 30x − x⁶ − 2x⁴ − 3x².

(a) Apply the bisection method to (approximately) solve this problem. Use an error tolerance ε = 0.07 and find appropriate initial bounds by inspection.
(b) Apply Newton's method, with ε = 0.001 and x1 = 1, to this problem.

I 13.4-5. Consider the following convex programming problem:

Minimize Z = x⁴ + x² − 4x,

subject to

x ≥ 0.

(a) Use one simple calculation just to check whether the optimal solution lies in the interval 0 ≤ x ≤ 1 or the interval 1 ≤ x ≤ 2. (Do not actually solve for the optimal solution in order to determine in which interval it must lie.) Explain your logic.
I (b) Use the bisection method with initial bounds x̲ = 0, x̄ = 2 and with an error tolerance ε = 0.02 to interactively solve (approximately) this problem.
(c) Apply Newton's method, with ε = 0.0001 and x1 = 1, to this problem.

13.4-6. Consider the problem of maximizing a differentiable function f(x) of a single unconstrained variable x. Let x̲0 and x̄0, respectively, be a valid lower bound and upper bound on the location of a global maximum (if one exists). Prove the following general properties of the bisection method (as presented in Sec. 13.4) for attempting to solve such a problem.
(a) Given x̲0, x̄0, and ε = 0, the sequence of trial solutions selected by the midpoint rule must converge to a limiting solution. [Hint: First show that lim n→∞ (x̄n − x̲n) = 0, where x̄n and x̲n are the upper and lower bounds identified at iteration n.]
(b) If f(x) is concave [so that df(x)/dx is a monotone decreasing function of x], then the limiting solution in part (a) must be a global maximum.
(c) If f(x) is not concave everywhere, but would be concave if its domain were restricted to the interval between x̲0 and x̄0, then the limiting solution in part (a) must be a global maximum.
(d) If f(x) is not concave even over the interval between x̲0 and x̄0, then the limiting solution in part (a) need not be a global maximum. (Prove this by graphically constructing a counterexample.)
(e) If df(x)/dx < 0 for all x, then no x̲0 exists. If df(x)/dx > 0 for all x, then no x̄0 exists. In either case, f(x) does not possess a global maximum.
(f) If f(x) is concave and lim x→−∞ df(x)/dx ≤ 0, then no x̲0 exists. If f(x) is concave and lim x→∞ df(x)/dx ≥ 0, then no x̄0 exists. In either case, f(x) does not possess a global maximum.

I 13.4-7. Consider the following linearly constrained convex programming problem:

Maximize f(x) = 32x1 + 50x2 − 10x2² + x2³ − x1⁴ − x2⁴,

subject to

3x1 + x2 ≤ 11
2x1 + 5x2 ≤ 16

and

x1 ≥ 0, x2 ≥ 0.

Ignore the constraints and solve the resulting two one-variable unconstrained optimization problems. Use calculus to solve the problem involving x1 and use the bisection method with ε = 0.001 and initial bounds 0 and 4 to solve the problem involving x2. Show that the resulting solution for (x1, x2) satisfies all of the constraints, so it is actually optimal for the original problem.

13.5-1. Consider the following unconstrained optimization problem:

Maximize f(x) = 2x1x2 + x2 − x1² − 2x2².
C (a) Starting from the initial trial solution (x1, x2) = (1, 1), interactively apply the gradient search procedure with ε = 0.25 to obtain an approximate solution.
(b) Solve the system of linear equations obtained by setting ∇f(x) = 0 to obtain the exact solution.
(c) Referring to Fig. 13.14 as a sample for a similar problem, draw the path of trial solutions you obtained in part (a). Then show the apparent continuation of this path with your best guess for the next three trial solutions [based on the pattern in part (a) and in Fig. 13.14]. Also show the exact solution from part (b) toward which this sequence of trial solutions is converging.
C (d) Apply the automatic routine for the gradient search procedure (with ε = 0.01) in your IOR Tutorial to this problem.

D,I 13.5-2. Starting from the initial trial solution (x1, x2) = (1, 1), interactively apply two iterations of the gradient search procedure to begin solving the following problem, and then apply the automatic routine for this procedure (with ε = 0.01).

Maximize f(x) = 4x1x2 − 2x1² − 3x2².

Then solve ∇f(x) = 0 directly to obtain the exact solution.

D,I,C 13.5-3.* Starting from the initial trial solution (x1, x2) = (0, 0), interactively apply the gradient search procedure with ε = 0.3 to obtain an approximate solution for the following problem, and then apply the automatic routine for this procedure (with ε = 0.01).

Maximize f(x) = 8x1 − x1² + 12x2 − 2x2² + 2x1x2.

Then solve ∇f(x) = 0 directly to obtain the exact solution.

D,I,C 13.5-4. Starting from the initial trial solution (x1, x2) = (0, 0), interactively apply two iterations of the gradient search procedure to begin solving the following problem, and then apply the automatic routine for this procedure (with ε = 0.01).

Maximize f(x) = 6x1 + 2x1x2 − 2x2 − 2x1² − x2².

Then solve ∇f(x) = 0 directly to obtain the exact solution.

13.5-5. Starting from the initial trial solution (x1, x2) = (0, 0), apply one iteration of the gradient search procedure to the following problem by hand:

Maximize f(x) = 4x1 + 2x2 + x1² − x1⁴ + 2x1x2 − x2².

To complete this iteration, approximately solve for t* by manually applying two iterations of the bisection method with initial bounds t̲ = 0, t̄ = 1.

13.5-6. Consider the following unconstrained optimization problem:

Maximize f(x) = 3x1x2 + 3x2x3 − x1² − 6x2² − x3².

(a) Describe how solving this problem can be reduced to solving a two-variable unconstrained optimization problem.
D,I (b) Starting from the initial trial solution (x1, x2, x3) = (1, 1, 1), interactively apply the gradient search procedure with ε = 0.05 to solve (approximately) the two-variable problem identified in part (a).
(c) Repeat part (b) with the automatic routine for this procedure (with ε = 0.005).

D,I,C 13.5-7.* Starting from the initial trial solution (x1, x2) = (0, 0), interactively apply the gradient search procedure with ε = 1 to solve (approximately) the following problem, and then apply the automatic routine for this procedure (with ε = 0.01).

Maximize f(x) = x1x2 + 3x2 − x1² − x2².

13.6-1. Reconsider the one-variable convex programming model given in Prob. 13.4-5. Use the KKT conditions to derive an optimal solution for this model.

13.6-2. Reconsider Prob. 13.2-9. Use the KKT conditions to check whether (x1, x2) = (1/√2, 1/√2) is optimal.

13.6-3.* Reconsider the model given in Prob. 13.3-3. What are the KKT conditions for this model? Use these conditions to determine whether (x1, x2) = (0, 10) can be optimal.

13.6-4. Consider the following convex programming problem:

Maximize f(x) = 24x1 − x1² + 10x2 − x2²,

subject to

x1 ≤ 10
x2 ≤ 15

and

x1 ≥ 0, x2 ≥ 0.

(a) Use the KKT conditions for this problem to derive an optimal solution.
(b) Decompose this problem into two separate constrained optimization problems involving just x1 and just x2, respectively.
For each of these two problems, plot the objective function over the feasible region in order to demonstrate that the value of x1 or x2 derived in part (a) is indeed optimal. Then prove that this value is optimal by using just the first and second derivatives of the objective function and the constraints for the respective problems.

13.6-5. Consider the following linearly constrained optimization problem:

Maximize f(x) = ln(x1 + 1) − x2²,

subject to

x1 + 2x2 ≤ 3

and

x1 ≥ 0, x2 ≥ 0,

where ln denotes the natural logarithm.
(a) Verify that this problem is a convex programming problem.
(b) Use the KKT conditions to derive an optimal solution.
(c) Use intuitive reasoning to demonstrate that the solution obtained in part (b) is indeed optimal.

13.6-6.* Consider the nonlinear programming problem given in Prob. 11.3-11. Determine whether (x1, x2) = (1, 2) can be optimal by applying the KKT conditions. (Hint: Convert this form to our standard form assumed in this chapter by using the techniques presented in Sec. 4.6 and then applying the KKT conditions as given in Sec. 13.6.)

13.6-7. Consider the following nonlinear programming problem:

Maximize f(x) = x1/(x2 + 1),

subject to

x1 − x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

(a) Use the KKT conditions to demonstrate that (x1, x2) = (4, 2) is not optimal.
(b) Derive a solution that does satisfy the KKT conditions.
(c) Show that this problem is not a convex programming problem.
(d) Despite the conclusion in part (c), use intuitive reasoning to show that the solution obtained in part (b) is, in fact, optimal. [The theoretical reason is that f(x) is pseudo-concave.]
(e) Use the fact that this problem is a linear fractional programming problem to transform it into an equivalent linear programming problem. Solve the latter problem and thereby identify the optimal solution for the original problem. (Hint: Use the equality constraint in the linear programming problem to substitute one of the variables out of the model, and then solve the model graphically.)

13.6-8.* Use the KKT conditions to derive an optimal solution for each of the following problems.
(a) Maximize f(x) = x1 + 2x2 − x2³,

subject to

x1 + x2 ≤ 1

and

x1 ≥ 0, x2 ≥ 0.

(b) Maximize f(x) = 20x1 + 10x2,

subject to

x1² + x2² ≤ 1
x1 + 2x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

13.6-9. What are the KKT conditions for nonlinear programming problems of the following form?

Minimize f(x),

subject to

gi(x) ≥ bi, for i = 1, 2, . . . , m

and

x ≥ 0.

13.6-10. Consider the following nonlinear programming problem:

Minimize Z = 2x1² + x2²,

subject to

x1 + x2 = 10

and

x1 ≥ 0, x2 ≥ 0.

(a) Of the special types of nonlinear programming problems described in Sec. 13.3, to which type or types can this particular problem be fitted? Justify your answer. (Hint: First convert this problem to an equivalent nonlinear programming problem that fits the form given in the second paragraph of the chapter, with m = 2 and n = 2.)
(b) Obtain the KKT conditions for this problem.
(c) Use the KKT conditions to derive an optimal solution.

13.6-11. Consider the following linearly constrained programming problem:

Minimize f(x) = x1³ + 4x2² + 16x3,

subject to

x1 + x2 + x3 = 5

and

x1 ≥ 1, x2 ≥ 1, x3 ≥ 1.

(a) Convert this problem to an equivalent nonlinear programming problem that fits the form given at the beginning of the chapter (second paragraph), with m = 2 and n = 3.
(b) Use the form obtained in part (a) to construct the KKT conditions for this problem.
(c) Use the KKT conditions to check whether (x1, x2, x3) = (2, 1, 2) is optimal.

13.6-12. Consider the following linearly constrained convex programming problem:

Minimize Z = x1² − 6x1 + x2³ − 3x2,

subject to

x1 + x2 ≤ 1

and

x1 ≥ 0, x2 ≥ 0.

(a) Obtain the KKT conditions for this problem.
(b) Use the KKT conditions to check whether (x1, x2) = (1/2, 1/2) is an optimal solution.
(c) Use the KKT conditions to derive an optimal solution.

13.6-13.
Consider the following linearly constrained convex programming problem:

Maximize f(x) = 8x1 − x1² + 2x2 + x3,

subject to

x1 + 3x2 + 2x3 ≤ 12

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

(a) Use the KKT conditions to demonstrate that (x1, x2, x3) = (2, 2, 2) is not an optimal solution.
(b) Use the KKT conditions to derive an optimal solution. (Hint: Do some preliminary intuitive analysis to determine the most promising case regarding which variables are nonzero and which are zero.)

13.6-14. Use the KKT conditions to determine whether (x1, x2, x3) = (1, 1, 1) can be optimal for the following problem:

Minimize Z = 2x1 + x2³ + x3²,

subject to

x1² + 2x2² + x3² ≥ 4

and

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0.

13.6-15. Reconsider the model given in Prob. 13.2-10. What are the KKT conditions for this problem? Use these conditions to determine whether (x1, x2) = (1, 1) can be optimal.

13.6-16. Reconsider the linearly constrained convex programming model given in Prob. 13.4-7. Use the KKT conditions to determine whether (x1, x2) = (2, 2) can be optimal.

13.7-1. Consider the quadratic programming example presented in Sec. 13.7.
(a) Use the test given in Appendix 2 to show that the objective function is strictly concave.
(b) Verify that the objective function is strictly concave by demonstrating that Q is a positive definite matrix; that is, xᵀQx > 0 for all x ≠ 0. (Hint: Reduce xᵀQx to a sum of squares.)
(c) Show that x1 = 12, x2 = 9, and u1 = 3 satisfy the KKT conditions when they are written in the form given in Sec. 13.6.

13.7-2.* Consider the following quadratic programming problem:

Maximize f(x) = 8x1 − x1² + 4x2 − x2²,

subject to

x1 + x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

(a) Use the KKT conditions to derive an optimal solution.
(b) Now suppose that this problem is to be solved by the modified simplex method. Formulate the linear programming problem that is to be addressed explicitly, and then identify the additional complementarity constraint that is enforced automatically by the algorithm.
I (c) Apply the modified simplex method to the problem as formulated in part (b).
C (d) Use the computer to solve the quadratic programming problem directly.

13.7-3. Consider the following quadratic programming problem:

Maximize f(x) = 2x1 + 3x2 − x1² − x2²,

subject to

x1 + x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

Suppose that this problem is to be solved by the modified simplex method.
(a) Formulate the linear programming problem that is to be addressed explicitly, and then identify the additional complementarity constraint that is enforced automatically by the algorithm.
I (b) Apply the modified simplex method to the problem as formulated in part (a).

13.7-4. Consider the following quadratic programming problem:

Maximize f(x) = 20x1 − 20x1² + 50x2 − 50x2² + 18x1x2,

subject to

x1 + x2 ≤ 6
x1 + 4x2 ≤ 18

and

x1 ≥ 0, x2 ≥ 0.

(a) Use the KKT conditions to derive an optimal solution directly.
(b) Now suppose that this problem is to be solved by the modified simplex method. Formulate the linear programming problem that is to be addressed explicitly, and then identify the additional complementarity constraint that is enforced automatically by the algorithm.
(c) Without applying the modified simplex method, show that the solution derived in part (a) is indeed optimal (Z = 0) for the equivalent problem formulated in part (b).
I (d) Apply the modified simplex method to the problem as formulated in part (b).
C (e) Use the computer to solve the quadratic programming problem directly.

13.7-5. Reconsider the first quadratic programming variation of the Wyndor Glass Co. problem presented in Sec. 13.2 (see Fig. 13.6). Analyze this problem by following the instructions of parts (a), (b), and (c) of Prob. 13.7-4.

13.7-6. Reconsider Prob. 13.1-4 and its quadratic programming model.
(a) Display this model [including the values of R(x) and V(x)] on an Excel spreadsheet.
(b) Use Solver (or ASPE) and its GRG Nonlinear solving method to solve this model for four cases: minimum acceptable expected return = 13, 14, 15, 16.
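For the small quadratic programs above, the KKT conditions can be checked mechanically. A sketch for Prob. 13.7-2, assuming its data read: maximize f(x) = 8x1 − x1² + 4x2 − x2² subject to x1 + x2 ≤ 2, x1 ≥ 0, x2 ≥ 0:

```python
# A small KKT-checking sketch (problem data assumed from Prob. 13.7-2):
# maximize f(x) = 8*x1 - x1**2 + 4*x2 - x2**2
# subject to x1 + x2 <= 2, x1 >= 0, x2 >= 0, with multiplier u.

def kkt_holds(x1, x2, u, tol=1e-9):
    # Stationarity: df/dxj - u <= 0, with equality whenever xj > 0.
    s1 = (8 - 2*x1) - u
    s2 = (4 - 2*x2) - u
    cond = s1 <= tol and s2 <= tol
    cond &= abs(x1 * s1) <= tol and abs(x2 * s2) <= tol  # complementary slackness
    # Primal feasibility, dual feasibility, and slackness on the constraint:
    slack = 2 - x1 - x2
    cond &= slack >= -tol and u >= -tol and abs(u * slack) <= tol
    return cond and x1 >= -tol and x2 >= -tol

# The unconstrained maximizer (4, 2) is infeasible, so try the binding
# constraint x1 + x2 = 2: maximizing f(x1, 2 - x1) gives x1 = 2, x2 = 0,
# with multiplier u = 8 - 2*x1 = 4.
print(kkt_holds(2, 0, 4))   # candidate satisfies all KKT conditions
print(kkt_holds(1, 1, 6))   # an arbitrary feasible point fails
```

Since f is concave and the constraints are linear, any KKT point is optimal, so under the assumed data (x1, x2) = (2, 0) with u = 4 solves the problem.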
(c) Repeat part (b) while using ASPE and its Quadratic solving method.
(d) For typical probability distributions (with mean μ and variance σ²) of the total return from the entire portfolio, the probability is fairly high (about 0.8 or 0.9) that the return will exceed μ − σ, and the probability is extremely high (often close to 0.999) that the return will exceed μ − 3σ. Calculate μ − σ and μ − 3σ for the four portfolios obtained in part (b). Which portfolio will give the highest μ among those that also give μ − σ ≥ 0?

13.7-7. The management of the Albert Hanson Company is trying to determine the best product mix for two new products. Because these products would share the same production facilities, the total number of units produced of the two products combined cannot exceed two per hour. Because of uncertainty about how well these products will sell, the profit from producing each product provides decreasing marginal returns as the production rate is increased. In particular, with a production rate of R1 units per hour, it is estimated that Product 1 would provide a profit per hour of $200R1 − $100R1². If the production rate of Product 2 is R2 units per hour, its estimated profit per hour would be $300R2 − $100R2².
(a) Formulate a quadratic programming model in algebraic form for determining the product mix that maximizes the total profit per hour.
(b) Formulate this model on a spreadsheet.
(c) Use Solver (or ASPE) and its GRG Nonlinear solving method to solve this model.
(d) Use ASPE and its Quadratic solving method to solve this model.

13.8-1. The MFG Corporation is planning to produce and market three different products. Let x1, x2, and x3 denote the number of units of the three respective products to be produced. The preliminary estimates of their potential profitability are as follows. For the first 15 units produced of Product 1, the unit profit would be approximately $360.
The unit profit would be only $30 for any additional units of Product 1. For the first 20 units produced of Product 2, the unit profit is estimated at $240. The unit profit would be $120 for each of the next 20 units and $90 for any additional units. For the first 20 units of Product 3, the unit profit would be $450. The unit profit would be $300 for each of the next 10 units and $180 for any additional units. Certain limitations on the use of needed resources impose the following constraints on the production of the three products:

x1 + x2 + x3 ≤ 60
3x1 + 2x2 ≤ 200
x1 + x2 + 2x3 ≤ 70.

Management wants to know what values of x1, x2, and x3 should be chosen to maximize the total profit.
(a) Plot the profit graph for each of the three products.
(b) Use separable programming to formulate a linear programming model for this problem.
C (c) Solve the model. What is the resulting recommendation to management about the values of x1, x2, and x3 to use?
(d) Now suppose that there is an additional constraint that the profit from Products 1 and 2 must total at least $12,000. Use the technique presented in the “Extensions” subsection of Sec. 13.8 to add this constraint to the model formulated in part (b).
C (e) Repeat part (c) for the model formulated in part (d).

13.8-2.* The Dorwyn Company has two new products that will compete with the two new products for the Wyndor Glass Co. (described in Sec. 3.1). Using units of hundreds of dollars for the objective function, the linear programming model shown below has been formulated to determine the most profitable product mix.

Maximize Z = 4x1 + 6x2,

subject to

x1 + 3x2 ≤ 8
5x1 + 2x2 ≤ 14

and

x1 ≥ 0, x2 ≥ 0.

However, because of the strong competition from Wyndor, Dorwyn management now realizes that the company will need to make a strong marketing effort to generate substantial sales of these products.
In particular, it is estimated that achieving a production and sales rate of x1 units of Product 1 per week will require weekly marketing costs of x1³ hundred dollars. The corresponding marketing costs for Product 2 are estimated to be 2x2² hundred dollars. Thus, the objective function in the model should be

Z = 4x1 + 6x2 − x1³ − 2x2².

Dorwyn management now would like to use the revised model to determine the most profitable product mix.
(a) Verify that (x1, x2) = (2/√3, 3/2) is an optimal solution by applying the KKT conditions.
(b) Construct tables to show the profit data for each product when the production rate is 0, 1, 2, 3.
(c) Draw a figure like Fig. 13.15b that plots the weekly profit points for each product when the production rate is 0, 1, 2, 3. Connect the pairs of consecutive points with (dashed) line segments.
(d) Use separable programming based on this figure to formulate an approximate linear programming model for this problem.
C (e) Solve the model. What does this say to Dorwyn management about which product mix to use?

13.8-3. The B. J. Jensen Company specializes in the production of power saws and power drills for home use. Sales are relatively stable throughout the year except for a jump upward during the Christmas season. Since the production work requires considerable work and experience, the company maintains a stable employment level and then uses overtime to increase production in November. The workers also welcome this opportunity to earn extra money for the holidays. B. J. Jensen, Jr., the current president of the company, is overseeing the production plans being made for the upcoming November. He has obtained the following data:

                Maximum Monthly Production*      Profit per Unit Produced
                Regular Time    Overtime         Regular Time    Overtime
Power saws         3,000          2,000              $150           $50
Power drills       5,000          3,000              $100           $75

*Assuming adequate supplies of materials from the company’s vendors.

However, Mr.
Jensen now has learned that, in addition to the limited number of labor hours available, two other factors will limit the production levels that can be achieved this November. One is that the company’s vendor for power supply units will only be able to provide 10,000 of these units for November (2,000 more than his usual monthly shipment). Each power saw and each power drill requires one of these units. Second, the vendor who supplies a key part for the gear assemblies will only be able to provide 15,000 for November (4,000 more than for other months). Each power saw requires two of these parts and each power drill requires one. Mr. Jensen now wants to determine how many power saws and how many power drills to produce in November to maximize the company’s total profit.
(a) Draw the profit graph for each of these two products.
(b) Use separable programming to formulate a linear programming model for this problem.
C (c) Solve the model. What does this say about how many power saws and how many power drills to produce in November?

13.8-4. Reconsider the linearly constrained convex programming model given in Prob. 13.4-7.
(a) Use the separable programming technique presented in Sec. 13.8 to formulate an approximate linear programming model for this problem. Use x1 = 0, 1, 2, 3 and x2 = 0, 1, 2, 3 as the breakpoints of the piecewise linear functions.
C (b) Use the simplex method to solve the model formulated in part (a). Then reexpress this solution in terms of the original variables of the problem.

13.8-5. Suppose that the separable programming technique has been applied to a certain problem (the “original problem”) to convert it to the following equivalent linear programming problem:

Maximize Z = 5x11 + 4x12 + 2x13 + 4x21 + x22,

subject to

3x11 + 3x12 + 3x13 + 2x21 + 2x22 ≤ 25
2x11 + 2x12 + 2x13 + x21 + x22 ≤ 10

and

0 ≤ x11 ≤ 2, 0 ≤ x12 ≤ 3, 0 ≤ x13, 0 ≤ x21 ≤ 3, 0 ≤ x22 ≤ 1.

What was the mathematical model for the original problem?
(You may define the objective function either algebraically or graphically, but express the constraints algebraically.)

13.8-6. For each of the following cases, prove that the key property of separable programming given in Sec. 13.8 must hold. (Hint: Assume that there exists an optimal solution that violates this property, and then contradict this assumption by showing that there exists a better feasible solution.)
(a) The special case of separable programming where all the gi(x) are linear functions.
(b) The general case of separable programming where all the functions are nonlinear functions of the designated form. [Hint: Think of the functional constraints as constraints on resources, where gij(xj) represents the amount of resource i used by running activity j at level xj, and then use what the convexity assumption implies about the slopes of the approximating piecewise linear function.]

13.8-7. The MFG Company produces a certain subassembly in each of two separate plants. These subassemblies are then brought to a third nearby plant where they are used in the production of a certain product. The peak season of demand for this product is approaching, so to maintain the production rate within a desired range, it is necessary to use some overtime temporarily in making the subassemblies. The cost per subassembly on regular time (RT) and on overtime (OT) is shown in the following table for both plants, along with the maximum number of subassemblies that can be produced on RT and on OT each day.

                      Plant 1    Plant 2
Unit cost, RT           $15        $16
Unit cost, OT           $25        $24
Capacity, RT           2,000      1,000
Capacity, OT           1,000        500

Let x1 and x2 denote the total number of subassemblies produced per day at Plants 1 and 2, respectively. The objective is to maximize Z = x1 + x2, subject to the constraint that the total daily cost not exceed $60,000.
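The RT/OT structure in Prob. 13.8-7 illustrates why the separable programming logic works: the marginal cost of each plant's output rises when RT capacity runs out, so an optimal solution fills cheap segments first. Because the objective Z = x1 + x2 weights both plants equally, that ordering can even be computed greedily in this special case (the general case still needs the LP of part (a)). A sketch, using the unit costs and capacities as read from the problem's table (an assumption, since the table arrives garbled):

```python
# Greedy sketch of the separable-programming idea in Prob. 13.8-7.
# Each plant's daily cost is piecewise linear in output: RT units first,
# then costlier OT units.  With the equal-weighted objective Z = x1 + x2,
# filling segments in order of increasing unit cost is optimal -- which is
# also why OT is never used before that plant's RT capacity is exhausted.

segments = [          # (unit cost, capacity) for both plants' RT and OT
    (15, 2000),       # plant 1, regular time
    (16, 1000),       # plant 2, regular time
    (24, 500),        # plant 2, overtime
    (25, 1000),       # plant 1, overtime
]

budget = 60_000
total_units = 0.0
for cost, cap in sorted(segments):          # cheapest marginal cost first
    use = min(cap, budget / cost)
    total_units += use
    budget -= use * cost

print(total_units)
```

Under these assumed data, the greedy fill uses all RT capacity, all of Plant 2's OT, and a fraction of Plant 1's OT.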
Note that the mathematical programming formulation of this problem (with x1 and x2 as decision variables) has the same form as the main case of the separable programming model described in Sec. 13.8, except that the separable functions appear in a constraint function rather than the objective function. However, the same approach can be used to reformulate the problem as a linear programming model where it is feasible to use OT even when the RT capacity at that plant is not fully used.
(a) Formulate this linear programming model.
(b) Explain why the logic of separable programming also applies here to guarantee that an optimal solution for the model formulated in part (a) never uses OT unless the RT capacity at that plant has been fully used.

13.8-8. Consider the following nonlinear programming problem:

Maximize Z = 5x1 + x2,

subject to

2x1² + x2 ≤ 13
x1² + x2 ≤ 9

and

x1 ≥ 0, x2 ≥ 0.

(a) Show that this problem is a convex programming problem.
(b) Use the separable programming technique discussed at the end of Sec. 13.8 to formulate an approximate linear programming model for this problem. Use the integers as the breakpoints of the piecewise linear functions.
C (c) Use the computer to solve the model formulated in part (b). Then reexpress this solution in terms of the original variables of the problem.

D,I 13.9-4. Consider the quadratic programming example presented in Sec. 13.7. Starting from the initial trial solution (x1, x2) = (5, 5), apply eight iterations of the Frank-Wolfe algorithm.

13.8-9. Consider the following convex programming problem:

Maximize Z = 32x1 − x1⁴ + 4x2 − x2²,

subject to

x1² + x2² ≤ 9

and

x1 ≥ 0, x2 ≥ 0.

(a) Apply the separable programming technique discussed at the end of Sec. 13.8, with x1 = 0, 1, 2, 3 and x2 = 0, 1, 2, 3 as the breakpoints of the piecewise linear functions, to formulate an approximate linear programming model for this problem.
C (b) Use the computer to solve the model formulated in part (a).
Then reexpress this solution in terms of the original variables of the problem.
(c) Use the KKT conditions to determine whether the solution for the original variables obtained in part (b) actually is optimal for the original problem (not the approximate model).

13.8-10. Reconsider the integer nonlinear programming model given in Prob. 11.3-9.
(a) Show that the objective function is not concave.
(b) Formulate an equivalent pure binary integer linear programming model for this problem as follows. Apply the separable programming technique with the feasible integers as the breakpoints of the piecewise linear functions, so that the auxiliary variables are binary variables. Then add some linear programming constraints on these binary variables to enforce the special restriction of separable programming. (Note that the key property of separable programming does not hold for this problem because the objective function is not concave.)
C (c) Use the computer to solve this problem as formulated in part (b). Then reexpress this solution in terms of the original variables of the problem.
C (d) Use the computer with the software option of your choice to solve this problem.

13.9-1. Reconsider the linearly constrained convex programming model given in Prob. 13.6-5. Starting from the initial trial solution (x1, x2) = (0, 0), use one iteration of the Frank-Wolfe algorithm to obtain exactly the same solution you found in part (b) of Prob. 13.6-5, and then use a second iteration to verify that it is an optimal solution (because it is replicated exactly). D,I

13.9-2. Reconsider the linearly constrained convex programming model given in Prob. 13.6-12. Starting from the initial trial solution (x1, x2) = (0, 0), use one iteration of the Frank-Wolfe algorithm to obtain exactly the same solution you found in part (c) of Prob. 13.6-12, and then use a second iteration to verify that it is an optimal solution (because it is replicated exactly).
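The Frank-Wolfe algorithm applied in these problems alternates between an LP subproblem (maximize the linearized objective over the feasible polytope) and a one-dimensional line search toward the LP solution. A sketch (not the IOR Tutorial implementation), assuming the data of Prob. 13.9-8 read: maximize f(x) = 3x1 + 4x2 − x1³ − x2² subject to x1 + x2 ≤ 1, x1 ≥ 0, x2 ≥ 0:

```python
# Frank-Wolfe sketch on the linearly constrained concave problem assumed
# from Prob. 13.9-8:
#   maximize f(x) = 3*x1 + 4*x2 - x1**3 - x2**2,  s.t. x1 + x2 <= 1, x >= 0.
# The LP subproblem over this triangle is solved by checking its vertices.

def grad(x1, x2):
    return (3 - 3*x1**2, 4 - 2*x2)

VERTICES = [(0, 0), (1, 0), (0, 1)]   # feasible region is this triangle

def frank_wolfe(x, iters=20):
    for _ in range(iters):
        g = grad(*x)
        # LP subproblem: maximize g . y over the polytope -> some vertex
        y = max(VERTICES, key=lambda v: g[0]*v[0] + g[1]*v[1])
        d = (y[0] - x[0], y[1] - x[1])
        # Bisection for t* in [0, 1] on phi'(t) = grad f(x + t d) . d
        lo, hi = 0.0, 1.0
        for _ in range(40):
            mid = (lo + hi) / 2
            gm = grad(x[0] + mid*d[0], x[1] + mid*d[1])
            if gm[0]*d[0] + gm[1]*d[1] > 0:
                lo = mid
            else:
                hi = mid
        t = (lo + hi) / 2
        x = (x[0] + t*d[0], x[1] + t*d[1])
    return x

x = frank_wolfe((0.25, 0.25))
```

Each iterate is a convex combination of vertices; under the assumed data the line search lands on the optimal edge point (1/3, 2/3) within a couple of iterations, which the KKT conditions of part (b) confirm.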
Explain why exactly the same results would be obtained on these two iterations with any other trial solution. D,I

13.9-3. Reconsider the linearly constrained convex programming model given in Prob. 13.6-13. Starting from the initial trial solution (x1, x2, x3) = (0, 0, 0), apply two iterations of the Frank-Wolfe algorithm. D,I

13.9-5. Reconsider the quadratic programming model given in Prob. 13.7-4.
D,I (a) Starting from the initial trial solution (x1, x2) = (0, 0), use the Frank-Wolfe algorithm (six iterations) to solve the problem (approximately).
(b) Show graphically how the sequence of trial solutions obtained in part (a) can be extrapolated to obtain a closer approximation of an optimal solution. What is your resulting estimate of this solution?

13.9-6. Reconsider the linearly constrained convex programming model given in Prob. 13.4-7. Starting from the initial trial solution (x1, x2) = (0, 0), use the Frank-Wolfe algorithm (four iterations) to solve this model (approximately). D,I

13.9-7. Consider the following linearly constrained convex programming problem:

Maximize f(x) = 3x1x2 + 40x1 + 30x2 − 4x1² − x1⁴ − 3x2² − x2⁴,

subject to

4x1 + 3x2 ≤ 12
x1 + 2x2 ≤ 4

and

x1 ≥ 0, x2 ≥ 0.

D,I Starting from the initial trial solution (x1, x2) = (0, 0), apply two iterations of the Frank-Wolfe algorithm.

13.9-8.* Consider the following linearly constrained convex programming problem:

Maximize f(x) = 3x1 + 4x2 − x1³ − x2²,

subject to

x1 + x2 ≤ 1

and

x1 ≥ 0, x2 ≥ 0.

D,I (a) Starting from the initial trial solution (x1, x2) = (1/4, 1/4), apply three iterations of the Frank-Wolfe algorithm.
(b) Use the KKT conditions to check whether the solution obtained in part (a) is, in fact, optimal.
C (c) Use the computer with the software option of your choice to solve this problem.

13.9-9. Consider the following linearly constrained convex programming problem:

Maximize f(x) = 4x1 − x1⁴ + 2x2 − x2²,

subject to

4x1 + 2x2 ≤ 5

and

x1 ≥ 0, x2 ≥ 0.
(a) Starting from the initial trial solution (x1, x2) = (1/2, 1/2), apply four iterations of the Frank-Wolfe algorithm.
(b) Show graphically how the sequence of trial solutions obtained in part (a) can be extrapolated to obtain a closer approximation of an optimal solution. What is your resulting estimate of this solution?
(c) Use the KKT conditions to check whether the solution you obtained in part (b) is, in fact, optimal. If not, use these conditions to derive the exact optimal solution.
C (d) Use the computer with the software option of your choice to solve this problem.

13.9-10. Reconsider the linearly constrained convex programming model given in Prob. 13.9-8.
(a) If SUMT were to be applied to this problem, what would be the unconstrained function P(x; r) to be maximized at each iteration?
(b) Setting r = 1 and using (x1, x2) = (1/4, 1/4) as the initial trial solution, manually apply one iteration of the gradient search procedure (except stop before solving for t*) to begin maximizing the function P(x; r) you obtained in part (a).
D,C (c) Beginning with the same initial trial solution as in part (b), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 1, 10⁻², 10⁻⁴.
(d) Compare the final solution obtained in part (c) to the true optimal solution for Prob. 13.9-8 given in the back of the book. What is the percentage error in x1, in x2, and in f(x)?

13.9-11. Reconsider the linearly constrained convex programming model given in Prob. 13.9-9. Follow the instructions of parts (a), (b), and (c) of Prob. 13.9-10 for this model, except use (x1, x2) = (1/2, 1/2) as the initial trial solution and use r = 1, 10⁻², 10⁻⁴, 10⁻⁶.

13.9-12. Reconsider the model given in Prob. 13.3-3.
(a) If SUMT were to be applied directly to this problem, what would be the unconstrained function P(x; r) to be minimized at each iteration?
(b) Setting r = 100 and using (x1, x2) = (5, 5) as the initial trial solution, manually apply one iteration of the gradient search procedure (except stop before solving for t*) to begin minimizing the function P(x; r) you obtained in part (a).
D,C (c) Beginning with the same initial trial solution as in part (b), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 100, 1, 10⁻², 10⁻⁴. (Hint: The computer routine assumes that the problem has been converted to maximization form with the functional constraints in ≤ form.)

13.9-13. Consider the example for applying SUMT given in Sec. 13.9.
(a) Show that (x1, x2) = (1, 2) satisfies the KKT conditions.
(b) Display the feasible region graphically, and then plot the locus of points x1x2 = 2 to demonstrate that (x1, x2) = (1, 2) with f(1, 2) = 2 is, in fact, a global maximum.

13.9-14.* Consider the following convex programming problem:

Maximize f(x) = −2x1 − (x2 − 3)²,

subject to

x1 ≥ 3 and x2 ≥ 3.

(a) If SUMT were applied to this problem, what would be the unconstrained function P(x; r) to be maximized at each iteration?
(b) Derive the maximizing solution of P(x; r) analytically, and then give this solution for r = 1, 10⁻², 10⁻⁴, 10⁻⁶.
D,C (c) Beginning with the initial trial solution (x1, x2) = (4, 4), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 1, 10⁻², 10⁻⁴, 10⁻⁶.

13.9-15. Consider the following convex programming problem:

Maximize f(x) = x1x2 − x1 − x1² − x2 − x2²,

subject to

x2 ≥ 0.

D,C Beginning with the initial trial solution (x1, x2) = (1, 1), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 1, 10⁻², 10⁻⁴.

13.9-16. Reconsider the quadratic programming model given in Prob. 13.7-4. Beginning with the initial trial solution (x1, x2) = (1/2, 1/2), use the automatic procedure in your IOR Tutorial to apply SUMT to this model with r = 1, 10⁻², 10⁻⁴, 10⁻⁶. D,C

13.9-17. Reconsider the first quadratic programming variation of the Wyndor Glass Co. problem presented in Sec. 13.2 (see Fig. 13.6).
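SUMT, used in Probs. 13.9-10 through 13.9-18, maximizes the barrier function P(x; r) = f(x) − r Σ 1/(bi − gi(x)) for a decreasing sequence of r values. A one-variable toy sketch (a hypothetical problem, not one from the text): maximize f(x) = −(x − 2)² subject to x ≤ 1, where each barrier subproblem is solved by bisection on dP/dx:

```python
# One-variable SUMT sketch on a hypothetical toy problem (not from the
# text): maximize f(x) = -(x-2)**2 subject to x <= 1.
# SUMT maximizes P(x; r) = f(x) - r/(1 - x) for decreasing r; the
# unconstrained maximizers approach the constrained optimum x* = 1.

def dP(x, r):
    # d/dx [ -(x-2)**2 - r/(1-x) ] = -2*(x-2) - r/(1-x)**2
    return -2*(x - 2) - r / (1 - x)**2

def maximize_P(r):
    lo, hi = -10.0, 1.0 - 1e-12     # stay strictly inside the boundary
    for _ in range(80):             # bisection: dP is decreasing in x
        mid = (lo + hi) / 2
        if dP(mid, r) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

trajectory = [maximize_P(r) for r in (1, 1e-2, 1e-4, 1e-6)]
```

The maximizers form a trajectory that approaches x* = 1 from inside the feasible region as r → 0, which is the pattern the IOR Tutorial runs in these problems display.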
Beginning with the initial trial solution (x1, x2) = (2, 3), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 10², 1, 10⁻², 10⁻⁴. D,C

13.9-18. Reconsider the convex programming model with an equality constraint given in Prob. 13.6-11.
(a) If SUMT were to be applied to this model, what would be the unconstrained function P(x; r) to be minimized at each iteration?
D,C (b) Starting from the initial trial solution (x1, x2, x3) = (3/2, 3/2, 2), use the automatic procedure in your IOR Tutorial to apply SUMT to this model with r = 10⁻², 10⁻⁴, 10⁻⁶, 10⁻⁸.
C (c) Use Solver to solve this problem.
C (d) Use Evolutionary Solver to solve this problem.
C (e) Use LINGO to solve this problem.

13.10-1. Consider the following nonconvex programming problem:

Maximize f(x) = 1,000x − 400x² + 40x³ − x⁴,

subject to

x² + x ≤ 500

and

x ≥ 0.

(a) Identify the feasible values for x. Obtain general expressions for the first three derivatives of f(x). Use this information to help you draw a rough sketch of f(x) over the feasible region for x. Without calculating their values, mark the points on your graph that correspond to local maxima and minima.
I (b) Use the bisection method with ε = 0.05 to find each of the local maxima. Use your sketch from part (a) to identify appropriate initial bounds for each of these searches. Which of the local maxima is a global maximum?
(c) Starting with x = 3 and x = 15 as the initial trial solutions, use Newton’s method with ε = 0.001 to find each of the local maxima.
D,C (d) Use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 10³, 10², 10, 1 to find each of the local maxima. Use x = 3 and x = 15 as the initial trial solutions for these searches. Which of the local maxima is a global maximum?
C (e) Formulate this problem in a spreadsheet and then use the GRG Nonlinear solving method with the Multistart option to solve this problem.
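Part (b) of Prob. 13.10-1 isolates each local maximum by applying the bisection method to f′(x) over brackets read off the sketch from part (a). A sketch of that search, assuming the garbled objective reads f(x) = 1,000x − 400x² + 40x³ − x⁴ (signs reconstructed):

```python
# Bisection on f'(x) to locate each local maximum of the nonconvex
# objective assumed for Prob. 13.10-1:
#   f(x) = 1000*x - 400*x**2 + 40*x**3 - x**4   (signs reconstructed).

def f(x):
    return 1000*x - 400*x**2 + 40*x**3 - x**4

def fprime(x):
    return 1000 - 800*x + 120*x**2 - 4*x**3

def bisect_max(lo, hi, eps=1e-6):
    # requires f'(lo) > 0 > f'(hi), so a local maximum lies inside
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if fprime(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

local_max_1 = bisect_max(0, 5)      # first hump of the sketch
local_max_2 = bisect_max(15, 21.9)  # 21.9 ~ feasibility bound of x**2 + x <= 500

# Comparing f at the local maxima identifies the global maximum.
best = max((local_max_1, local_max_2), key=f)
```

Under the assumed signs, one stationary maximum lies near x ≈ 1.6 and the other near the upper end of the feasible interval; evaluating f at both shows which is global.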
C (f) Use Evolutionary Solver to solve this problem.
C (g) Use the global optimizer feature of LINGO to solve this problem.
C (h) Use MPL and its global optimizer LGO to solve this problem.

13.10-2. Consider the following nonconvex programming problem:

Maximize f(x) = 3x1x2 − 2x1² − x2²,

subject to

x1² + 2x2² ≤ 4
2x1 − x2 ≤ 3
x1x2² − x1²x2 ≤ 2

and

x1 ≥ 0, x2 ≥ 0.

(a) If SUMT were to be applied to this problem, what would be the unconstrained function P(x; r) to be maximized at each iteration?
D,C (b) Starting from the initial trial solution (x1, x2) = (1, 1), use the automatic procedure in your IOR Tutorial to apply SUMT to this problem with r = 1, 10⁻², 10⁻⁴.
C (c) Use Evolutionary Solver to solve this problem.
C (d) Use the global optimizer feature of LINGO to solve this problem.
C (e) Use MPL and its global optimizer LGO to solve this problem.

13.10-3. Consider the following nonconvex programming problem:

Minimize f(x) = sin 3x1 + cos 3x2 + sin(x1 + x2),

subject to

x1² − 10x2 ≤ 1
10x1 + x2² ≤ 100

and

x1 ≥ 0, x2 ≥ 0.

(a) If SUMT were applied to this problem, what would be the unconstrained function P(x; r) to be minimized at each iteration?
(b) Describe how SUMT should be applied to attempt to obtain a global minimum. (Do not actually solve.)
C (c) Use the global optimizer feature of LINGO to solve this problem.
C (d) Use MPL and its global optimizer LGO to solve this problem.

C 13.10-4. Consider the following nonconvex programming problem:

Maximize Profit = x⁵ − 13x⁴ + 59x³ − 107x² + 61x,

subject to

0 ≤ x ≤ 5.

(a) Formulate this problem in a spreadsheet, and then use the GRG Nonlinear solving method with the Multistart option to solve this problem.
(b) Use Evolutionary Solver to solve this problem.

C 13.10-5. Consider the following nonconvex programming problem:

Maximize Profit = 100x⁶ − 1,359x⁵ + 6,836x⁴ − 15,670x³ + 15,870x² − 5,095x,

subject to

0 ≤ x ≤ 5.

(a) Formulate this problem in a spreadsheet, and then use the GRG Nonlinear solving method with the Multistart option to solve this problem.
(b) Use Evolutionary Solver to solve this problem.

13.10-6. Because of population growth, the state of Washington has been given an additional seat in the House of Representatives, making a total of 10. The state legislature, which is currently controlled by the Republicans, needs to develop a plan for redistricting the state. There are 18 major cities in the state of Washington that need to be assigned to one of the 10 congressional districts. The table below gives the numbers of registered Democrats and registered Republicans in each city. Each district must contain between 150,000 and 350,000 of these registered voters. Use Evolutionary Solver to assign each city to one of the 10 congressional districts in order to maximize the number of districts that have more registered Republicans than registered Democrats. (Hint: Use the SUMIF function.) C

City    Democrats (Thousands)    Republicans (Thousands)
 1            152                       62
 2             81                       59
 3             75                       83
 4             34                       52
 5             62                       87
 6             38                       87
 7             48                       69
 8             74                       49
 9             98                       62
10             66                       72
11             83                       75
12             86                       82
13             72                       83
14             28                       53
15            112                       98
16             45                       82
17             93                       68
18             72                       98

13.10-7. Reconsider the Wyndor Glass Co. problem introduced in Sec. 3.1.
C (a) Solve this problem using Solver.
C (b) Starting with an initial solution of producing 0 batches of doors and 0 batches of windows, solve this problem using Evolutionary Solver.
(c) Comment on the performance of the two approaches.

13.10-8. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 13.10. Briefly describe how nonlinear programming was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

13.11-1. Consider the following problem:

Maximize Z = 4x1 − x1² + 10x2 − x2²,

subject to

x1² + 4x2² ≤ 16

and

x1 ≥ 0, x2 ≥ 0.

(a) Is this a convex programming problem? Answer yes or no, and then justify your answer.
(b) Can the modified simplex method be used to solve this problem?
Answer yes or no, and then justify your answer (but do not actually solve).
(c) Can the Frank-Wolfe algorithm be used to solve this problem? Answer yes or no, and then justify your answer (but do not actually solve).
(d) What are the KKT conditions for this problem? Use these conditions to determine whether (x1, x2) = (1, 1) can be optimal.
(e) Use the separable programming technique to formulate an approximate linear programming model for this problem. Use the feasible integers as the breakpoints for each piecewise linear function.
C (f) Use the simplex method to solve the problem as formulated in part (e).
(g) Give the function P(x; r) to be maximized at each iteration when applying SUMT to this problem. (Do not actually solve.)
D,C (h) Use SUMT (the automatic procedure in your IOR Tutorial) to solve the problem as formulated in part (g). Begin with the initial trial solution (x1, x2) = (2, 1) and use r = 1, 10⁻², 10⁻⁴, 10⁻⁶.
C (i) Formulate this problem in a spreadsheet, and then use Solver to solve this problem.
C (j) Use Evolutionary Solver to solve this problem.
C (k) Use LINGO to solve this problem.

■ CASES

Case 13.1 Savvy Stock Selection

Ever since the day she took her first economics class in high school, Lydia wondered about the financial practices of her parents. They worked very hard to earn enough money to live a comfortable middle-class life, but they never made their money work for them. They simply deposited their hard-earned paychecks in savings accounts earning a nominal amount of interest. (Fortunately, there always was enough money when it came time to pay her college bills.) She promised herself that when she became an adult, she would not follow the same financially conservative practices as her parents. And Lydia kept this promise. Every morning while getting ready for work, she watches the CNN financial reports. She plays investment games on the World Wide Web, finding portfolios that maximize her return while minimizing her risk.
She reads The Wall Street Journal and Financial Times with a thirst she cannot quench. Lydia also reads the investment advice columns of the financial magazines, and she has noticed that on average, the advice of the investment advisers turns out to be very good. Therefore, she decides to follow the advice given in the latest issue of one of the magazines. In his monthly column the editor Jonathan Taylor recommends three stocks that he believes will rise far above market average. In addition, the well-known mutual fund guru Donna Carter advocates the purchase of three more stocks that she thinks will outperform the market over the next year. BIGBELL (ticker symbol on the stock exchange: BB), one of the nation’s largest telecommunications companies, trades at a price-earnings ratio well below market average. Huge investments over the last eight months have depressed earnings considerably. However, with their new cutting-edge technology, the company is expected to significantly raise their profit margins. Taylor predicts that the stock will rise from its current price of $60 per share to $72 per share within the next year. LOTSOFPLACE (LOP) is one of the leading hard drive manufacturers in the world. The industry recently underwent major consolidation, as fierce price wars over the last few years were followed by many competitors going bankrupt or being bought by LOTSOFPLACE and its competitors. Due to reduced competition in the hard drive market, revenues and earnings are expected to rise considerably over the next year. Taylor predicts a one-year increase of 42 percent in the stock of LOTSOFPLACE from the current price of $127 per share. INTERNETLIFE (ILI) has survived the many ups and downs of Internet companies. With the next Internet frenzy just around the corner, Taylor expects a doubling of this company’s stock price from $4 to $8 within a year. 
HEALTHTOMORROW (HEAL) is a leading biotechnology company that is about to get approval for several new drugs from the Food and Drug Administration, which will help earnings to grow 20 percent over the next few years. In particular a new drug to significantly reduce the risk of heart attacks is supposed to reap huge profits. Also, due to several new great-tasting medications for children, the company has been able to build an excellent image in the media. This public relations coup will surely have positive effects for the sale of its over-the-counter medications. Carter is convinced that the stock will rise from $50 to $75 per share within a year.

QUICKY (QUI) is a fast-food chain which has been vastly expanding its network of restaurants all over the United States. Carter has followed this company closely since it went public some 15 years ago when it had only a few dozen restaurants on the west coast of the United States. Since then the company has expanded, and it now has restaurants in every state. Due to its emphasis on healthy foods, it is capturing a growing market share. Carter believes that the stock will continue to perform well above market average for an increase of 46 percent in one year from its current stock price of $150.

AUTOMOBILE ALLIANCE (AUA) is a leading car manufacturer from the Detroit area that just recently introduced two new models. These models show very strong initial sales, and therefore the company’s stock is predicted to rise from $20 to $26 over the next year.

On the World Wide Web Lydia found data about the risk involved in the stocks of these companies.
The historical variances of return of the six stocks and their covariances are shown below:

Company   Variance      Covariances
                        LOP      ILI      HEAL     QUI      AUA
BB        0.032         0.005    0.03     0.031    0.027    0.01
LOP       0.1                    0.085    0.07     0.05     0.02
ILI       0.333                           0.11     0.02     0.042
HEAL      0.125                                    0.05     0.06
QUI       0.065                                             0.02
AUA       0.08

(a) At first, Lydia wants to ignore the risk of all the investments. Given this strategy, what is her optimal investment portfolio; that is, what fraction of her money should she invest in each of the six different stocks? What is the total risk of her portfolio?
(b) Lydia decides that she doesn’t want to invest more than 40 percent in any individual stock. While still ignoring risk, what is her new optimal investment portfolio? What is the total risk of her new portfolio?
(c) Now Lydia wants to take into account the risk of her investment opportunities. For use in the following parts, formulate a quadratic programming model that will minimize her risk (measured by the variance of the return from her portfolio), while ensuring that her expected return is at least as large as her choice of a minimum acceptable value.
(d) Lydia wants to ensure that she receives an expected return of at least 35 percent. She wants to reach this goal at minimum risk. What investment portfolio allows her to do that?
(e) What is the minimum risk Lydia can achieve if she wants an expected return of at least 25 percent? Of at least 40 percent?
(f) Do you see any problems or disadvantages with Lydia’s approach to her investment strategy?

(Note: A data file for this case is provided on the book’s website for your convenience.)

■ PREVIEWS OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 13.2 International Investments

A financial analyst is holding some German bonds that offer increasing interest rates if they are kept until their full maturity in three more years. They also can be redeemed at any time to obtain the original principal plus the accrued interest.
The German federal government has just introduced a capital gains tax on interest income above a certain level, so holding the bonds to maturity now is less attractive. Therefore, the analyst needs to determine his optimal investment strategy regarding how many bonds to sell during each of the next three years under a few different scenarios.

CASE 13.3 Promoting a Breakfast Cereal, Revisited

This case continues Case 3.4 involving an advertising campaign for Super Grain Corporation’s new breakfast cereal. The analysis requested for Case 3.4 leads to the application of linear programming. However, certain assumptions of linear programming are quite questionable in this situation. In particular, the assumption that the total profit from the introduction of the breakfast cereal is proportional to the total number of exposures from the advertising campaign clearly is only a rough approximation. To refine the analysis, both a general nonlinear programming model and a separable programming model need to be formulated, applied, and compared.

CHAPTER 14 Metaheuristics

Several of the preceding chapters have described algorithms that can be used to obtain an optimal solution for various kinds of OR models, including certain types of linear programming, integer programming, and nonlinear programming models. These algorithms have proven to be invaluable for addressing a wide variety of practical problems. However, this approach doesn’t always work. Some problems (and the corresponding OR models) are so complicated that it may not be possible to solve for an optimal solution. In such situations, it still is important to find a good feasible solution that is at least reasonably close to being optimal. Heuristic methods commonly are used to search for such a solution. A heuristic method is a procedure that is likely to discover a very good feasible solution, but not necessarily an optimal solution, for the specific problem being considered.
No guarantee can be given about the quality of the solution obtained, but a well-designed heuristic method usually can provide a solution that is at least nearly optimal (or conclude that no such solutions exist). The procedure also should be sufficiently efficient to deal with very large problems. The procedure often is a full-fledged iterative algorithm, where each iteration involves conducting a search for a new solution that might be better than the best solution found previously. When the algorithm is terminated after a reasonable time, the solution it provides is the best one that was found during any iteration. Heuristic methods often are based on relatively simple common-sense ideas for how to search for a good solution. These ideas need to be carefully tailored to fit the specific problem of interest. Thus, heuristic methods tend to be ad hoc in nature. That is, each method usually is designed to fit a specific problem type rather than a variety of applications. For many years, this meant that an OR team would need to start from scratch to develop a heuristic method to fit the problem at hand, whenever an algorithm for finding an optimal solution was not available. This all has changed in relatively recent years with the development of powerful metaheuristics. A metaheuristic is a general solution method that provides both a general structure and strategy guidelines for developing a specific heuristic method to fit a particular kind of problem. Metaheuristics have become one of the most important techniques in the toolkit of OR practitioners. This chapter provides an elementary introduction to metaheuristics. After describing the general nature of metaheuristics in the first section, the following three sections will introduce and illustrate three commonly used metaheuristics. 
■ 14.1 THE NATURE OF METAHEURISTICS

To illustrate the nature of metaheuristics, let us begin with an example of a small but modestly difficult nonlinear programming problem:

An Example: A Nonlinear Programming Problem with Multiple Local Optima

Consider the following problem:

Maximize f(x) = 12x⁵ − 975x⁴ + 28,000x³ − 345,000x² + 1,800,000x,

subject to

0 ≤ x ≤ 31.

Figure 14.1 graphs the objective function f(x) over the feasible values of the single variable x. This plot reveals that the problem has three local optima, one at x = 5, another at x = 20, and the third at x = 31, where the global optimum is at x = 20.

The objective function f(x) is sufficiently complicated that it would be difficult to determine where the global optimum lies without the benefit of viewing the plot in Fig. 14.1. Calculus could be used, but this would require solving a polynomial equation of the fourth degree (after setting the first derivative equal to zero) to determine where the critical points lie. It would even be difficult to ascertain that f(x) has multiple local optima rather than just a global optimum. This problem is an example of a nonconvex programming problem, a special type of nonlinear programming problem that typically has multiple local optima. Section 13.10 discusses nonconvex programming and even introduces a software package (Evolutionary Solver) that uses the kind of metaheuristic described in Sec. 14.4.

■ FIGURE 14.1 A plot of the value of the objective function over the feasible range, 0 ≤ x ≤ 31, for the nonlinear programming example. The local optima are at x = 5, x = 20, and x = 31, but only x = 20 is a global optimum.

For nonlinear programming problems that appear to be somewhat difficult, like this one, a simple heuristic method is to conduct a local improvement procedure.
Such a procedure starts with an initial trial solution and then, at each iteration, searches in the neighborhood of the current trial solution to find a better trial solution. This process continues until no improved solution can be found in the neighborhood of the current trial solution. Thus, this kind of procedure can be viewed as a hill-climbing procedure that keeps climbing higher on the plot of the objective function (assuming the objective is maximization) until it essentially reaches the top of the hill. A well-designed local improvement procedure usually will be successful in converging to a local optimum (the top of a hill), but it then will stop even if this local optimum is not a global optimum (the top of the tallest hill).

For example, the gradient search procedure described in Sec. 13.5 is a local improvement procedure. If it were to start with, say, x = 0 as the initial trial solution in Fig. 14.1, it would climb up the hill by trying successively larger values of x until it essentially reaches the top of the hill at x = 5, at which point it would stop. Figure 14.2 shows a typical sequence of values of f(x) that would be obtained by such a local improvement procedure when starting from far down the hill.

Since the nonlinear programming example depicted in Fig. 14.1 involves only a single variable, the bisection method described in Sec. 13.4 also could be applied to this particular problem. This procedure is another example of a local improvement procedure, since each iteration starts from the current trial solution to search in its neighborhood (defined by a current lower bound and upper bound on the value of the variable) for a better solution. For example, if the search were to begin with a lower bound of x = 0 and an upper bound of x = 6 in Fig. 14.1, the sequence of trial solutions obtained by the bisection method would be x = 3, x = 4.5, x = 5.25, x = 4.875, and so forth as it converges to x = 5.
The corresponding values of the objective function for these four trial solutions are 2.975 million, 3.286 million, 3.300 million, and 3.302 million, respectively. Thus, the second iteration provides a relatively large improvement over the first one (311,000), the third iteration gives a considerably smaller improvement (14,000), and the fourth iteration yields only a very small improvement (2000). As depicted in Fig. 14.2, this pattern is rather typical of local improvement procedures (although with some variation in the rate of convergence to the local maximum).

■ FIGURE 14.2 A typical sequence of objective function values for the solutions obtained by a local improvement procedure as it converges to a local optimum when it is applied to a maximization problem. (The plot of f(x) against the iteration number shows a large improvement at iteration 2, a smaller improvement at iteration 3, and a very small improvement at iteration 4.)

Just as with the gradient search procedure, this search with the bisection method would get trapped at the local optimum at x = 5, so it never would find the global optimum at x = 20. Like other local improvement procedures, both the gradient search procedure and the bisection method are designed only to keep improving on the current trial solutions within the local neighborhood of those solutions. Once they climb to the top of a hill, they must stop because they cannot climb any higher within the local neighborhood of the trial solution at the top of the hill. This illustrates the drawback of any local improvement procedure.

The drawback of a local improvement procedure: When a well-designed local improvement procedure is applied to an optimization problem with multiple local optima, the procedure will converge to one local optimum and then stop. Which local optimum it finds depends on where the procedure begins the search. Thus, the procedure will find the global optimum only if it happens to begin the search in the neighborhood of this global optimum.
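This trapping behavior is easy to reproduce numerically. The sketch below (an illustration added here, not part of the book's software) applies a crude hill-climbing search to the example's objective from Fig. 14.1; the step-halving rule and starting point are arbitrary choices. Started at x = 0, it converges to the local optimum x = 5, where f(5) = 3,303,125, and never reaches the global optimum f(20) = 4,400,000.

```python
def f(x):
    # Objective of the Sec. 14.1 example; local optima at x = 5, 20, 31.
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def hill_climb(x, step=1.0, tol=1e-6, lo=0.0, hi=31.0):
    """Crude local improvement: move to the better neighbor at distance
    `step`; when neither neighbor improves, halve the step and retry,
    stopping once the step is tiny."""
    while step > tol:
        neighbors = [max(lo, x - step), min(hi, x + step)]
        best = max(neighbors, key=f)
        if f(best) > f(x):
            x = best
        else:
            step /= 2          # stuck on this grid: refine locally
    return x

x_local = hill_climb(0.0)      # climbs the first hill it meets
print(round(x_local, 3), round(f(x_local)))   # 5.0 3303125
print(f(20))                                  # 4400000
```

Because the search only ever looks at nearby points, restarting it at, say, x = 15 would instead climb to the global optimum at x = 20, which is exactly the motivation for the multistart idea discussed next.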
To try to overcome this drawback, one can restart the local improvement procedure a number of times from randomly selected initial trial solutions. Restarting from a new part of the feasible region often will lead to a new local optimum. Repeating this a number of times increases the chance that the best of the local optima obtained actually will be the global optimum. (As described in Sec. 13.10, this is what is done with either Solver or ASPE when using the GRG Nonlinear solving method and then selecting the Use Multistart option.) This approach works well on small problems, like the one-variable nonlinear programming example depicted in Fig. 14.1. However, it is much less successful on large problems with many variables and a complicated feasible region. When the feasible region has numerous “nooks and crannies” and restarting a local improvement procedure from only one of them will lead to the global optimum, restarting from randomly selected initial trial solutions becomes a haphazard way to reach the global optimum. What is needed instead is a more structured approach that uses the information being gathered to guide the search toward the global optimum. This is the role that a metaheuristic plays. The nature of metaheuristics: A metaheuristic is a general kind of solution method that orchestrates the interaction between local improvement procedures and higher level strategies to create a process that is capable of escaping from local optima and performing a robust search of a feasible region. Thus, one key feature of a metaheuristic is its ability to escape from a local optimum. After reaching (or nearly reaching) a local optimum, different metaheuristics execute this escape in different ways. However, a common characteristic is that the trial solutions that immediately follow a local optimum are allowed to be inferior to this local optimum. Consequently, when a metaheuristic is applied to a maximization problem (such as the example depicted in Fig. 
14.1), the objective function values for the sequence of trial solutions obtained typically would follow a pattern similar to that shown in Fig. 14.3. As with Fig. 14.2, the process begins by using a local improvement procedure to climb to the top of the current hill (iteration 4). However, rather than stopping there, the metaheuristic might guide the search a little way down the other side of this hill until it can start climbing to the top of the tallest hill (iteration 8). To verify that this appears to be the global optimum, a metaheuristic continues exploring further before stopping (iteration 12).

■ FIGURE 14.3 A typical sequence of objective function values for the solutions obtained by a metaheuristic as it first converges to a local optimum (iteration 4) and then escapes to converge to (hopefully) the global optimum (iteration 8) of a maximization problem before concluding its search (iteration 12).

Figure 14.3 illustrates both an advantage and a disadvantage of a well-designed metaheuristic. The advantage is that it tends to move relatively quickly toward very good solutions, so it provides a very efficient way of dealing with large complicated problems. The disadvantage is that there is no guarantee that the best solution found will be an optimal solution or even a nearly optimal solution. Therefore, whenever a problem can be solved by an algorithm that can guarantee optimality, that should be done instead. The role of metaheuristics is to deal with problems that are too large and complicated to be solved by exact algorithms. All the examples in this chapter are too small to require the use of metaheuristics, since they are intended only to illustrate in a straightforward way how metaheuristics can approach far more complicated problems. Section 14.3 will illustrate the application of a particular metaheuristic to the nonlinear programming example depicted in Fig. 14.1.
Section 14.4 then will apply another metaheuristic to the integer programming version of this same example. Although metaheuristics sometimes are applied to difficult nonlinear programming and integer programming problems, a more common area of application is to combinatorial optimization problems. Our next example is of this type.

An Example: A Traveling Salesman Problem

Perhaps the most famous classic combinatorial optimization problem is called the traveling salesman problem. It has been given this picturesque name because it can be described in terms of a salesman (or saleswoman) who must travel to a number of cities during one tour. Starting from his (or her) home city, the salesman wishes to determine which route to follow to visit each city exactly once before returning to his home city so as to minimize the total length of the tour. Figure 14.4 shows an example of a small traveling salesman problem with seven cities. City 1 is the salesman’s home city. Therefore, starting from this city, the salesman must choose a route to visit each of the other cities exactly once before returning to city 1. The number next to each link between each pair of cities represents the distance (or cost or time) between these cities. We assume that the distance is the same in either direction. (This is referred to as a symmetric traveling salesman problem.) Although there commonly is a direct link between every pair of cities, we are simplifying this example by assuming that the only direct links are those shown in the figure. The objective is to determine which route will minimize the total distance that the salesman must travel. There have been a number of applications of traveling salesman problems that have nothing to do with salesmen.
For example, when a truck leaves a distribution center to deliver goods to a number of locations, the problem of determining the shortest route for doing this is a traveling salesman problem. Another example involves the manufacture of printed circuit boards for wiring chips and other components. When many holes need to be drilled into a printed circuit board, the problem of finding the most efficient drilling sequence is a traveling salesman problem.

■ FIGURE 14.4 The example of a traveling salesman problem that will be used for illustrative purposes throughout this chapter.

The difficulty of traveling salesman problems increases rapidly as the number of cities increases. For a problem with n cities and a link between every pair of cities, the number of feasible routes to be considered is (n − 1)!/2, since there are (n − 1) possibilities for the first city after the home city, (n − 2) possibilities for the next city, and so forth. The denominator of 2 arises because every route has an equivalent reverse route with exactly the same distance. Thus, while a 10-city traveling salesman problem has less than 200,000 feasible solutions to be considered, a 20-city problem has roughly 10¹⁶ feasible solutions, while a 50-city problem has about 10⁶². Surprisingly, powerful algorithms based on the branch-and-cut approach introduced in Sec. 12.8 have succeeded in solving to optimality certain huge traveling salesman problems with many hundreds (or even thousands) of cities. However, because of the enormous difficulty of solving large traveling salesman problems, heuristic methods guided by metaheuristics continue to be a popular way of addressing such problems. These heuristic methods commonly involve generating a sequence of feasible trial solutions, where each new trial solution is obtained by making a certain type of small adjustment in the current trial solution.
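Incidentally, the route counts cited above follow directly from the (n − 1)!/2 formula and are easy to verify (a small check added here for illustration):

```python
from math import factorial

def num_tours(n):
    # Distinct routes in a symmetric n-city traveling salesman problem
    # with a link between every pair of cities: (n - 1)! / 2.
    return factorial(n - 1) // 2

print(num_tours(10))            # 181440  (just under 200,000)
print(f"{num_tours(20):.1e}")   # about 6.1e+16
print(f"{num_tours(50):.1e}")   # about 3.0e+62
```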
Several methods have been suggested for how to adjust the current trial solution. Because of its ease of implementation, one popular method uses the following type of adjustment.

A sub-tour reversal adjusts the sequence of cities visited in the current trial solution by selecting a subsequence of the cities and simply reversing the order in which that subsequence of cities is visited. (The subsequence being reversed can consist of as few as two cities, but also can have more.)

To illustrate a sub-tour reversal, suppose that the initial trial solution for our example in Fig. 14.4 is to visit the cities in numerical order:

1-2-3-4-5-6-7-1    Distance = 69

If we select, say, the subsequence 3-4 and reverse it, we obtain the following new trial solution:

1-2-4-3-5-6-7-1    Distance = 65

Thus, this particular sub-tour reversal has succeeded in reducing the distance for the complete tour from 69 to 65. Figure 14.5 depicts this sub-tour reversal, which leads from the initial trial solution on the left to the new trial solution on the right. The dashed lines indicate the links that are deleted from the tour (on the left) or added to the tour (on the right) by sub-tour reversal. Note that the new trial solution deletes exactly two links from the previous tour and replaces them by exactly two new links to form the new tour. This is a characteristic of any sub-tour reversal (including those where the subsequence of cities being reversed consists of more than two cities). Thus, a particular sub-tour reversal is possible only if the corresponding two new links actually exist. This success in obtaining an improved tour by simply performing a sub-tour reversal suggests the following heuristic method for seeking a good feasible solution for any traveling salesman problem.

The Sub-Tour Reversal Algorithm

Initialization. Start with any feasible tour as the initial trial solution.

Iteration.
For the current trial solution, consider all possible ways of performing a sub-tour reversal (except exclude the reversal of the entire tour) that would provide an improved solution. Select the one that provides the largest decrease in the distance traveled to be the new trial solution. (Ties may be broken arbitrarily.)

Stopping rule. Stop when no sub-tour reversal will improve the current trial solution. Accept this solution as the final solution.

■ FIGURE 14.5 A sub-tour reversal that replaces the tour on the left (the initial trial solution, Distance = 69) by the tour on the right (the new trial solution, Distance = 65) by reversing the order in which cities 3 and 4 are visited. This sub-tour reversal results in replacing the dashed lines on the left by the dashed lines on the right as the links that are traversed in the new tour.

Now let us apply this algorithm to the example, starting with 1-2-3-4-5-6-7-1 as the initial trial solution. There are four possible sub-tour reversals that would improve upon this solution, as listed in the second, third, fourth, and fifth rows below:

                1-2-3-4-5-6-7-1    Distance = 69
Reverse 2-3:    1-3-2-4-5-6-7-1    Distance = 68
Reverse 3-4:    1-2-4-3-5-6-7-1    Distance = 65
Reverse 4-5:    1-2-3-5-4-6-7-1    Distance = 65
Reverse 5-6:    1-2-3-4-6-5-7-1    Distance = 66

The two solutions with Distance = 65 tie for providing the largest decrease in the distance traveled, so suppose that the first of these, 1-2-4-3-5-6-7-1 (as shown on the right side of Fig. 14.5), is chosen arbitrarily to be the next trial solution. This completes the first iteration.

The second iteration begins with the tour on the right side of Fig. 14.5 as the current trial solution.
For this solution, there is only one sub-tour reversal that will provide an improvement, as listed in the second row below:

                  1-2-4-3-5-6-7-1    Distance = 65
Reverse 3-5-6:    1-2-4-6-5-3-7-1    Distance = 64

Figure 14.6 shows this sub-tour reversal, where the entire subsequence of cities 3-5-6 on the left now is visited in reverse order (6-5-3) on the right. Thus, the tour on the right now traverses the link 4-6 instead of 4-3, as well as the link 3-7 instead of 6-7, in order to use the reverse order 6-5-3 between cities 4 and 7. This completes the second iteration.

We next try to find a sub-tour reversal that will improve upon this new trial solution. However, there is none, so the sub-tour reversal algorithm stops with this trial solution as the final solution.

■ FIGURE 14.6 The sub-tour reversal of 3-5-6 that leads from the trial solution on the left (Distance = 65) to an improved trial solution on the right (Distance = 64).

Is 1-2-4-6-5-3-7-1 the optimal solution? Unfortunately, no. The optimal solution turns out to be

1-2-4-6-7-5-3-1    Distance = 63

(or 1-3-5-7-6-4-2-1 by reversing the direction of this entire tour). However, this solution cannot be reached by performing a sub-tour reversal that improves 1-2-4-6-5-3-7-1. The sub-tour reversal algorithm is another example of a local improvement procedure. It improves upon the current trial solution at each iteration. When it can no longer find a better solution, it stops because the current trial solution is a local optimum. In this case, 1-2-4-6-5-3-7-1 is indeed a local optimum because there is no better solution within its local neighborhood that can be reached by performing a sub-tour reversal. What is needed to provide a better chance of reaching a global optimum is to use a metaheuristic that will enable the process to escape from a local optimum.
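The sub-tour reversal algorithm is short enough to sketch in code. The distance table below is not printed in this excerpt; it was reconstructed to be consistent with every tour length quoted above (69, 68, 65, 66, 64, and 63), so treat it as an assumption rather than the book's figure. Starting from 1-2-3-4-5-6-7-1, the procedure reproduces the two iterations above and stops at the local optimum 1-2-4-6-5-3-7-1 with distance 64, while brute-force enumeration confirms the global optimum of 63.

```python
from itertools import permutations

# Distances for the 13 links of the Fig. 14.4 network (symmetric). These
# values are reconstructed from the tour lengths quoted in the text and
# are an assumption; city pairs with no direct link are simply absent.
DIST = {(1, 2): 12, (1, 3): 10, (1, 7): 12, (2, 3): 8, (2, 4): 12,
        (3, 4): 11, (3, 5): 3, (3, 7): 9, (4, 5): 11, (4, 6): 10,
        (5, 6): 6, (5, 7): 7, (6, 7): 9}

def d(a, b):
    return DIST.get((min(a, b), max(a, b)), float("inf"))

def tour_length(tour):
    # tour is a list of cities starting at the home city 1
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def sub_tour_reversal(tour):
    """Greedy sub-tour reversal: at each iteration adopt the reversal that
    most decreases the tour length (first found wins ties); stop at a
    local optimum."""
    n = len(tour)
    while True:
        best, best_len = None, tour_length(tour)
        for i in range(1, n - 1):            # home city stays first
            for j in range(i + 1, n):
                if i == 1 and j == n - 1:
                    continue                 # skip reversing the whole tour
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand) < best_len:
                    best, best_len = cand, tour_length(cand)
        if best is None:                     # local optimum reached
            return tour
        tour = best

local = sub_tour_reversal([1, 2, 3, 4, 5, 6, 7])
print(local, tour_length(local))             # [1, 2, 4, 6, 5, 3, 7] 64

# Brute force over all orderings confirms the global optimum of 63.
best_len = min(tour_length([1] + list(p))
               for p in permutations([2, 3, 4, 5, 6, 7]))
print(best_len)                              # 63
```

Note that candidate tours requiring a nonexistent link get infinite length, which implements the rule that a reversal is possible only if its two new links actually exist.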
You will see how three different metaheuristics do this with this same example in the next three sections.

■ 14.2 TABU SEARCH

Tabu search is a widely used metaheuristic that uses some common-sense ideas to enable the search process to escape from a local optimum. After introducing its basic concepts, we will go through a simple example and then return to the traveling salesman example.

Basic Concepts

Any application of tabu search includes as a subroutine a local search procedure that seems appropriate for the problem being addressed. (A local search procedure operates just like a local improvement procedure except that it may not require that each new trial solution must be better than the preceding trial solution.) The process begins by using this procedure as a local improvement procedure in the usual way (i.e., only accepting an improved solution at each iteration) to find a local optimum. A key strategy of tabu search is that it then continues the search by allowing non-improving moves to the best solutions in the neighborhood of the local optimum. Once a point is reached where better solutions can be found in the neighborhood of the current trial solution, the local improvement procedure is reapplied to find a new local optimum. Using the analogy of hill climbing, this process is sometimes referred to as the steepest ascent/mildest descent approach because each iteration selects the available move that goes furthest up the hill, or, when an upward move is not available, selects a move that drops least down the hill. If all goes well, the process will follow a pattern like that shown in Fig. 14.3, where a local optimum is left behind in order to climb to the global optimum. The danger with this approach is that after moving away from a local optimum, the process will cycle right back to the same local optimum. To avoid this, a tabu search temporarily forbids moves that would return to (or perhaps toward) a solution recently visited.
A tabu list records these forbidden moves, which are referred to as tabu moves. (The only exception to forbidding such a move is if it is found that a tabu move actually is better than the best feasible solution found so far.) This use of memory to guide the search by using tabu lists to record some of the recent history of the search is a distinctive feature of tabu search. This feature has roots in the field of artificial intelligence. Tabu search also can incorporate some more advanced concepts. One is intensification, which involves exploring a portion of the feasible region more thoroughly than usual after it has been identified as a particularly promising portion for containing very good solutions. Another concept is diversification, which involves forcing the search into previously unexplored areas of the feasible region. (Long-term memory is used to help implement both concepts.) However, we will focus on the basic form of tabu search summarized next without delving into these additional concepts.

An Application Vignette

Founded in 1886, Sears, Roebuck and Company (now commonly referred to as just Sears) grew to become the largest multiline retailer in the United States by the mid-20th century. It continues today to rank among the largest retailers in the world selling merchandise and services. By 2013, it had more than 2,500 full-line and specialty retail stores in the United States and Canada. It also provides the largest home-delivery service of furniture and appliances in these countries with approximately 4 million deliveries a year. Sears manages a fleet of over 1,000 delivery vehicles that includes contract carriers and Sears-owned vehicles. It also operates a U.S. fleet of about 12,500 service vehicles and the associated technicians, who make approximately 14 million on-site service calls annually to repair and install appliances and provide home improvement.
The cost of operating this huge home-delivery and home-service business runs in the billions of dollars per year. With many thousands of vehicles being used to make many tens of thousands of calls on customers daily, the efficiency of this operation has a major impact on the company's profitability. With so many calls on customers to be made with so many vehicles, a huge number of decisions must be made each day. Which stops should be assigned to each vehicle's route? What should the order of the stops be for each vehicle (which considerably impacts the total distance and time for the route)? How can all these decisions be made so as to minimize total operational costs while providing satisfactory service to the customers? It became clear that operations research was needed to address this problem.

The natural formulation is as a vehicle-routing problem with time windows (VRPTW), for which both exact and heuristic algorithms have been developed. Unfortunately, the Sears problem is so huge that it is a very difficult combinatorial optimization problem that is beyond the reach of standard algorithms for VRPTW. Therefore, a new algorithm was developed that was based on using tabu search to make both the decisions on which vehicle's route serves which stops and what the sequence of stops is within a route. The resulting new vehicle-routing-and-scheduling system, based largely on tabu search, led to over $9 million in one-time savings and over $42 million in annual savings for Sears. It also provided a number of intangible benefits, including (most importantly) improved service to customers.

Source: D. Weigel and B. Cao: "Applying GIS and OR Techniques to Solve Sears Technician-Dispatching and Home-Delivery Problems," Interfaces, 29(1): 112–130, Jan.–Feb. 1999. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Outline of a Basic Tabu Search Algorithm

Initialization. Start with a feasible initial trial solution.

Iteration.
Use an appropriate local search procedure to define the feasible moves into the local neighborhood of the current trial solution. Eliminate from consideration any move on the current tabu list unless that move would result in a better solution than the best trial solution found so far. Determine which of the remaining moves provides the best solution. Adopt this solution as the next trial solution, regardless of whether it is better or worse than the current trial solution. Update the tabu list to forbid cycling back to what had been the current trial solution. If the tabu list already had been full, delete the oldest member of the tabu list to provide more flexibility for future moves.

Stopping rule. Use some stopping criterion, such as a fixed number of iterations, a fixed amount of CPU time, or a fixed number of consecutive iterations without an improvement in the best objective function value. (The latter criterion is a particularly popular one.) Also stop at any iteration where there are no feasible moves into the local neighborhood of the current trial solution. Accept the best trial solution found on any iteration as the final solution.

This outline leaves a number of questions unanswered:

1. Which local search procedure should be used?
2. How should that procedure define the neighborhood structure that specifies which solutions are immediate neighbors (reachable in a single iteration) of any current trial solution?
3. What is the form in which tabu moves should be represented on the tabu list?
4. Which tabu move should be added to the tabu list in each iteration?
5. How long should a tabu move be retained on the tabu list?
6. Which stopping rule should be used?

These all are important details that need to be worked out to fit the specific type of problem being addressed, as illustrated by the following examples.
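The outline above can be sketched directly in code. The following is a minimal illustrative skeleton, not the book's implementation; the helper names (neighbors, cost) and the toy problem used to exercise it are our own assumptions.

```python
from collections import deque

def tabu_search(initial, neighbors, cost, tabu_size, max_stall):
    # neighbors(sol) returns a list of (move, new_sol) pairs, where a move
    # is any hashable record of the step taken, so it can go on the tabu list.
    current = best = initial
    tabu = deque(maxlen=tabu_size)  # oldest tabu move drops off automatically
    stall = 0                       # consecutive iterations without improvement
    while stall < max_stall:
        # Eliminate tabu moves unless they beat the best solution found so far.
        candidates = [(m, s) for m, s in neighbors(current)
                      if m not in tabu or cost(s) < cost(best)]
        if not candidates:
            break                   # no admissible moves remain: stop
        # Adopt the best remaining move, better or worse than the current one.
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu.append(move)           # forbid cycling straight back
        if cost(current) < cost(best):
            best, stall = current, 0
        else:
            stall += 1
    return best
```

As a toy check, minimizing (x − 3)² over the integers with moves to x ± 1, a tabu list of size two, and a stall limit of three returns the minimizer x = 3 even though the search keeps moving after reaching it.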
Tabu search only provides a general structure and strategy guidelines for developing a specific heuristic method to fit a specific situation. The selection of its parameters is a key part of developing a successful heuristic method. The following examples illustrate the use of tabu search.

A Minimum Spanning Tree Problem with Constraints

Section 10.4 describes the minimum spanning tree problem. In brief, starting with a network that has its nodes but no links between the nodes yet, the problem is to determine which links should be inserted into the network. The objective is to minimize the total cost (or length) of the inserted links that will provide a path between every pair of nodes. For a network with n nodes, (n − 1) links (with no cycles) are needed to provide a path between every pair of nodes. Such a network is referred to as a spanning tree.

The left-hand side of Fig. 14.7 shows a network with five nodes, where the dashed lines represent the potential links that could be inserted into the network and the number next to each dashed line represents the cost associated with inserting that particular link. Thus, the problem is to determine which four of these links (with no cycles) should be inserted into the network to minimize the total cost of these links. The right-hand side of the figure shows the desired minimum spanning tree, where the dark lines represent the links that have been inserted into the network with a total cost of 50. This optimal solution is obtained easily by applying the "greedy" algorithm presented in Sec. 10.4.

■ FIGURE 14.7 (a) The data for a minimum spanning tree problem before choosing the links to be included in the network and (b) the optimal solution for this problem, where the dark lines represent the chosen links.
To illustrate the use of tabu search, let us now add a couple of complications to this example by supposing that the following constraints also must be observed when choosing the links to include in the network.

Constraint 1: Link AD can be included only if link DE also is included.
Constraint 2: At most one of the three links—AD, CD, and AB—can be included.

Note that the previously optimal solution on the right-hand side of Fig. 14.7 violates both of these constraints because (1) link AD is included even though DE is not and (2) both AD and AB are included. With such constraints imposed, the greedy algorithm presented in Sec. 10.4 can no longer be used to find the new optimal solution. For such a small problem, this solution probably could be found rather quickly by inspection. However, let us see how tabu search could be used on either this problem or much larger problems to search for an optimal solution.

The easiest way to take the constraints into account is to charge a huge penalty, such as the following, for violating them:

1. Charge a penalty of 100 if constraint 1 is violated.
2. Charge a penalty of 100 if two of the three links specified in constraint 2 are included. Increase this penalty to 200 if all three of the links are included.

A penalty of 100 is large enough to ensure that the constraints will not be violated for a spanning tree that minimizes the total cost, including the penalty, provided only that some feasible solutions exist. Doubling this penalty if constraint 2 is badly violated provides an incentive for at least reducing how many of the three links are included during an iteration of the tabu search.

There are a variety of ways to answer the six questions that are needed to specify how the tabu search will be conducted. (See the list of questions that follows the outline of a basic tabu search algorithm.) Here is one straightforward way of answering the questions.

1.
Local search procedure: At each iteration, choose the best immediate neighbor of the current trial solution that is not ruled out by its tabu status.
2. Neighborhood structure: An immediate neighbor of the current trial solution is one that is reached by adding a single link and then deleting one of the other links in the cycle that is formed by the addition of this link. (The deleted link must come from this cycle in order to still have a spanning tree.)
3. Form of tabu moves: List the links that should not be deleted.
4. Addition of a tabu move: At each iteration, after choosing the link to be added to the network, also add this link to the tabu list.
5. Maximum size of tabu list: Two. Whenever a tabu move is added to a full list, delete the older of the two tabu moves that already were on the list. (Since a spanning tree for the problem being considered only includes four links, the tabu list must be kept very small to provide some flexibility in choosing the link to be deleted at each iteration.)
6. Stopping rule: Stop after three consecutive iterations without an improvement in the best objective function value. (Also stop at any iteration where the current trial solution has no immediate neighbors that are not ruled out by their tabu status.)

Having specified these details, we now can proceed to apply the tabu search algorithm to the example. To get started, a reasonable choice for the initial trial solution is the optimal solution for the unconstrained version of the problem that is shown in Fig. 14.7(b). Because this solution violates both of the constraints (but with the inclusion of only two of the three links specified in constraint 2), penalties of 100 need to be imposed twice. Therefore, the total cost of this solution is

Cost = 20 + 10 + 5 + 15 + 200 (constraint penalties) = 250.

Iteration 1. The three options for adding a link to the network in Fig. 14.7(b) are BE, CD, and DE.
If BE were to be chosen, the cycle formed would be BE-CE-AC-AB, so the three options for deleting a link would be CE, AC, and AB. (At this point, no links have yet been added to the tabu list.) If CE were to be deleted, the change in the cost would be 30 − 5 = 25 with no change in the constraint penalties, so the total cost would increase from 250 to 275. Similarly, if AC were to be deleted instead, the total cost would increase from 250 to 250 + (30 − 10) = 270. However, if link AB were to be the one deleted, the link costs would change by 30 − 20 = 10 and the constraint penalties would decrease from 200 to 100 because constraint 2 would no longer be violated, so the total cost would become 50 + 10 + 100 = 160. These results are summarized in the first three rows of Table 14.1.

The next two rows summarize the calculations if CD were to be the link added to the network. In this case, the cycle created is CD-AD-AC, so AD and AC are the only options for deleting a link. AC would be a particularly bad choice because constraint 1 would still be violated (a penalty of 100), and a penalty of 200 now would need to be charged for violating constraint 2 since all three of the links specified in that constraint would be included in the network. Deleting AD instead would have the virtue of satisfying constraint 1 and not increasing the extent to which constraint 2 is violated.

The last three rows of the table show the options if DE were the added link. The cycle created by adding this link would be DE-CE-AC-AD, so CE, AC, and AD would be the options for deletion. All three would satisfy constraint 1, but deleting AD would satisfy constraint 2 as well. By completely eliminating constraint penalties, the total cost for this option would become only 50 + (40 − 15) = 75. Since this is the smallest cost among all eight available options for moving to an immediate neighbor of the current trial solution, we choose this particular move by adding DE and deleting AD.
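The penalty bookkeeping used in these calculations is easy to mechanize. Here is a minimal sketch (our own illustration, not from the text) that scores a candidate spanning tree for this example using the link costs read from Fig. 14.7; the dictionary layout and function name are assumptions.

```python
# Link costs read from Fig. 14.7, keyed by the link's endpoints.
COST = {"AB": 20, "AC": 10, "AD": 15, "BE": 30, "CD": 25, "CE": 5, "DE": 40}

def total_cost(tree):
    # tree is a set of link names forming a spanning tree, e.g. {"AB", ...}.
    cost = sum(COST[link] for link in tree)
    # Constraint 1: AD can be included only if DE also is included.
    if "AD" in tree and "DE" not in tree:
        cost += 100
    # Constraint 2: at most one of AD, CD, and AB can be included.
    included = sum(link in tree for link in ("AD", "CD", "AB"))
    if included == 2:
        cost += 100
    elif included == 3:
        cost += 200
    return cost

# The initial trial solution of Fig. 14.7(b) incurs both penalties:
print(total_cost({"AB", "AC", "CE", "AD"}))   # 250
# The best immediate neighbor found in iteration 1 (add DE, delete AD):
print(total_cost({"AB", "AC", "CE", "DE"}))   # 75
```

The same function reproduces every entry of Tables 14.1 through 14.3, including the final cost of 70 for the optimal tree.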
This choice is indicated in the iteration 1 portion of Fig. 14.8, and the resulting spanning tree for beginning iteration 2 is shown to the right. To complete the iteration, since DE was added to the network, it becomes the first link placed on the tabu list. This will prevent deleting DE next and cycling back to the trial solution that began this iteration.

■ TABLE 14.1 The options for adding a link and deleting another link in iteration 1

Add    Delete    Cost
BE     CE        75 + 200 = 275
BE     AC        70 + 200 = 270
BE     AB        60 + 100 = 160
CD     AD        60 + 100 = 160
CD     AC        65 + 300 = 365
DE     CE        85 + 100 = 185
DE     AC        80 + 100 = 180
DE     AD        75 +   0 =  75   ← Minimum

■ FIGURE 14.8 Application of a tabu search algorithm to the minimum spanning tree problem shown in Fig. 14.7 after also adding two constraints. (Iteration 1: cost = 50 + 200 in constraint penalties; new cost = 75, a local optimum. Iteration 2: new cost = 85, escaping the local optimum. Iteration 3: new cost = 70, overriding tabu status to reach the optimal solution. Additional iterations only find inferior solutions.)

To summarize, the following decisions have been made during this first iteration:

Add link DE to the network.
Delete link AD from the network.
Add link DE to the tabu list.

Iteration 2. The upper right-hand portion of Fig. 14.8 indicates that the corresponding decisions made during iteration 2 are the following:

Add link BE to the network. Automatically place this added link on the tabu list.
Delete link AB from the network.

Table 14.2 summarizes the calculations that led to these decisions by finding that the move in the sixth row provides the smallest cost. The moves listed in the first and seventh rows of the table involve deleting DE, which is on the tabu list.
Therefore, these moves would have been considered only if they would result in a better solution than the best trial solution found so far, which has a cost of 75. The calculation in the seventh row shows that this move would not provide a better solution. A calculation is not even needed for the first row because this move would cycle back to the preceding trial solution.

Note that the move in the sixth row is made even though it results in a new trial solution that has a larger cost (85) than for the preceding trial solution (75) that initiated iteration 2. What this means is that the preceding trial solution was a local optimum because all of its immediate neighbors (those that can be reached by making one of the moves listed in Table 14.2) have a larger cost. However, moving to the best of the immediate neighbors allows us to escape the local optimum and continue the search for the global optimum.

Before moving to iteration 3, we should interject an observation about what more advanced forms of tabu search might do here when selecting the best immediate neighbor. More general tabu search methods can change the meaning of a "best neighbor," depending on history, by using additional forms of memory to support intensification and diversification processes. As mentioned earlier, intensification focuses the search in a particularly promising region of solutions identified previously, and diversification drives the search into promising new regions.

Iteration 3. The lower left-hand portion of Fig. 14.8 summarizes the decisions made during iteration 3.

Add link CD to the network. Automatically place this added link on the tabu list.
Delete link DE from the network.

Table 14.3 shows that this move leads to the best immediate neighbor of the trial solution that initiated this iteration.
■ TABLE 14.2 The options for adding a link and deleting another link in iteration 2

Add    Delete    Cost
AD     DE*       (Tabu move)
AD     CE        85 + 100 = 185
AD     AC        80 + 100 = 180
BE     CE        100 +  0 = 100
BE     AC        95 +   0 =  95
BE     AB        85 +   0 =  85   ← Minimum
CD     DE*       60 + 100 = 160
CD     CE        95 + 100 = 195

*A tabu move. Will be considered only if it would result in a better solution than the best trial solution found previously.

■ TABLE 14.3 The options for adding a link and deleting another link in iteration 3

Add    Delete    Cost
AB     BE*       (Tabu move)
AB     CE        100 +  0 = 100
AB     AC        95 +   0 =  95
AD     DE*       60 + 100 = 160
AD     CE        95 +   0 =  95
AD     AC        90 +   0 =  90
CD     DE*       70 +   0 =  70   ← Minimum
CD     CE        105 +  0 = 105

*A tabu move. Will be considered only if it would result in a better solution than the best trial solution found previously.

An interesting feature of this move is that it is made even though it is a tabu move. The reason it is made is that, in addition to being the best immediate neighbor, it also results in a solution that is better (a cost of 70) than the best trial solution found previously (a cost of 75). This enables the tabu status of the move to be overridden. (Tabu search also can incorporate a variety of more advanced criteria for overriding tabu status.)

One more adjustment needs to be made in the tabu list before beginning the next iteration:

Delete link DE from the tabu list.

This is done for two reasons. First, the tabu list consists of links that normally should not be deleted from the network during the current iteration (with the exception noted above), but DE is no longer in the network. Second, since the size of the tabu list has been set at two and two other links (BE and CD) have been added to the list more recently, DE automatically would have been deleted from the list at this point anyway.

Continuation. The current trial solution shown in the lower right-hand portion of Fig. 14.8 is, in fact, the optimal solution (the global optimum) for the problem.
However, the tabu search algorithm has no way of knowing this, so it would continue on for a while. Iteration 4 would begin with this trial solution and with links BE and CD on the tabu list. After completing this iteration and two more, the algorithm would terminate because three consecutive iterations did not improve on the best previous objective function value (a cost of 70).

With a well-designed tabu search algorithm, the best trial solution found after the algorithm has run a modest number of iterations is likely to be a good feasible solution. It might even be an optimal solution, but no such guarantee can be given. Selecting a stopping rule that provides a relatively long run of the algorithm increases the chance of reaching the global optimum.

Having gotten our feet wet by designing and applying a tabu search algorithm to this small example, let us now apply a similar tabu search algorithm to the example of a traveling salesman problem presented in Sec. 14.1.

The Traveling Salesman Problem Example

There are some close parallels between a minimum spanning tree problem and a traveling salesman problem. In both cases, the problem is to choose which links to include in the solution. (Recall that a solution for a traveling salesman problem can be described as the sequence of links that the salesman traverses in the tour of the cities.) In both cases, the objective is to minimize the total cost or distance associated with the fixed number of links that are included in the solution. And in both cases, there is an intuitive local search procedure available that involves adding and deleting links in the current trial solution to obtain the new trial solution.

For minimum spanning tree problems, the local search procedure described in the preceding subsection involves adding and deleting only a single link at each iteration. The corresponding procedure described in Sec.
14.1 for traveling salesman problems involves using sub-tour reversals to add and delete a pair of links at each iteration.

Because of the close parallels between these two types of problems, the design of a tabu search algorithm for traveling salesman problems can be quite similar to the one just described for the minimum spanning tree example. In particular, using the outline of a basic tabu search algorithm presented earlier, the six questions following the outline can be answered in a similar way, as shown below.

1. Local search algorithm: At each iteration, choose the best immediate neighbor of the current trial solution that is not ruled out by its tabu status.
2. Neighborhood structure: An immediate neighbor of the current trial solution is one that is reached by making a sub-tour reversal, as described in Sec. 14.1 and illustrated in Fig. 14.5. Such a reversal requires adding two links and deleting two other links from the current trial solution. (We rule out a sub-tour reversal that simply reverses the direction of the tour provided by the current trial solution.)
3. Form of tabu moves: List the links such that a particular sub-tour reversal would be tabu if both links to be deleted in this reversal are on the list. (This will prevent quickly cycling back to a previous trial solution.)
4. Addition of a tabu move: At each iteration, after choosing the two links to be added to the current trial solution, also add these two links to the tabu list.
5. Maximum size of tabu list: Four (two from each of the two most recent iterations). Whenever a pair of links is added to a full list, delete the two links that already have been on the list the longest.
6. Stopping rule: Stop after three consecutive iterations without an improvement in the best objective function value. (Also stop at any iteration where the current trial solution has no immediate neighbors that are not ruled out by their tabu status.)

To apply this tabu search algorithm to our example (see Fig.
14.4), let us begin with the same initial trial solution, 1-2-3-4-5-6-7-1, as in Sec. 14.1. Recall how starting the sub-tour reversal algorithm (a local improvement algorithm) with this initial trial solution led in two iterations (see Figs. 14.5 and 14.6) to a local optimum at 1-2-4-6-5-3-7-1, at which point that algorithm stopped. Except for adding a tabu list, the tabu search algorithm starts off in exactly the same way, as summarized below:

Initial trial solution: 1-2-3-4-5-6-7-1    Distance = 69
Tabu list: Blank at this point.

Iteration 1: Choose to reverse 3-4 (see Fig. 14.5).
Deleted links: 2-3 and 4-5
Added links: 2-4 and 3-5
Tabu list: Links 2-4 and 3-5
New trial solution: 1-2-4-3-5-6-7-1    Distance = 65

Iteration 2: Choose to reverse 3-5-6 (see Fig. 14.6).
Deleted links: 4-3 and 6-7 (OK since not on tabu list)
Added links: 4-6 and 3-7
Tabu list: Links 2-4, 3-5, 4-6, and 3-7
New trial solution: 1-2-4-6-5-3-7-1    Distance = 64

However, rather than terminating, the tabu search algorithm now escapes from this local optimum (shown on the right side of Fig. 14.6 and the left side of Fig. 14.9) by moving next to the best immediate neighbor of the current trial solution even though its distance is longer. Considering the limited availability of links between pairs of nodes (cities) in Fig. 14.4, the current trial solution has only the two immediate neighbors listed below:

Reverse 6-5-3: 1-2-4-3-5-6-7-1    Distance = 65
Reverse 3-7: 1-2-4-6-5-7-3-1    Distance = 66

(We are ruling out reversing 2-4-6-5-3-7 to obtain 1-7-3-5-6-4-2-1 because this is simply the same tour in the opposite direction.) However, we must rule out the first of these immediate neighbors because it would require deleting links 4-6 and 3-7, which is tabu since both of these links are on the tabu list. (This move could still be allowed if it would improve upon the best trial solution found so far, but it does not.)
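A sub-tour reversal itself is a simple list operation. The sketch below is our own illustration (the function name and 0-based indexing are assumptions); it reproduces the reversals chosen in iterations 1 and 2 above.

```python
def reverse_subtour(tour, i, j):
    # tour lists the cities once, e.g. [1, 2, 3, 4, 5, 6, 7] for the
    # tour 1-2-3-4-5-6-7-1; reverse the segment from index i through j.
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

t0 = [1, 2, 3, 4, 5, 6, 7]
t1 = reverse_subtour(t0, 2, 3)   # iteration 1 reverses 3-4
print(t1)                        # [1, 2, 4, 3, 5, 6, 7]
t2 = reverse_subtour(t1, 3, 5)   # iteration 2 reverses 3-5-6
print(t2)                        # [1, 2, 4, 6, 5, 3, 7]
```

The two deleted and two added links of each reversal are simply the links at the boundaries of the reversed segment, which is what the tabu list records in this algorithm.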
Ruling out this immediate neighbor prevents us from simply cycling back to the preceding trial solution. Therefore, by default, the second of these immediate neighbors is chosen to be the next trial solution, as summarized below:

Iteration 3: Choose to reverse 3-7 (see Fig. 14.9).
Deleted links: 5-3 and 7-1
Added links: 5-7 and 3-1
Tabu list: 4-6, 3-7, 5-7, and 3-1 (2-4 and 3-5 are now deleted from the list.)
New trial solution: 1-2-4-6-5-7-3-1    Distance = 66

The sub-tour reversal for this iteration can be seen in Fig. 14.9, where the dashed lines show the links being deleted (on the left) and added (on the right) to obtain the new trial solution. Note that one of the deleted links is 5-3 even though it was on the tabu list at the end of iteration 2. This is OK since a sub-tour reversal is tabu only if both of the deleted links are on the tabu list. Also note that the updated tabu list at the end of iteration 3 has deleted the two links that had been on the list the longest (the ones added during iteration 1) since the maximum size of the tabu list has been set at four.

■ FIGURE 14.9 The sub-tour reversal of 3-7 in iteration 3 that leads from the trial solution on the left (distance = 64) to the new trial solution on the right (distance = 66).

The new trial solution has the four immediate neighbors listed below:

Reverse 2-4-6-5-7: 1-7-5-6-4-2-3-1    Distance = 65
Reverse 6-5: 1-2-4-5-6-7-3-1    Distance = 69
Reverse 5-7: 1-2-4-6-7-5-3-1    Distance = 63
Reverse 7-3: 1-2-4-6-5-3-7-1    Distance = 64

However, the second of these immediate neighbors is tabu because both of the deleted links (4-6 and 5-7) are on the tabu list. The fourth immediate neighbor (which is the preceding trial solution) also is tabu for the same reason. Thus, the only viable options are the first and third immediate neighbors.
Since the latter neighbor has the shorter distance, it becomes the next trial solution, as summarized below:

Iteration 4: Choose to reverse 5-7 (see Fig. 14.10).
Deleted links: 6-5 and 7-3
Added links: 6-7 and 5-3
Tabu list: 5-7, 3-1, 6-7, and 5-3 (4-6 and 3-7 are now deleted from the list.)
New trial solution: 1-2-4-6-7-5-3-1    Distance = 63

Figure 14.10 shows this sub-tour reversal. The tour for the new trial solution on the right has a distance of only 63, which is less than for any of the preceding trial solutions. In fact, this new solution happens to be the optimal solution. Not knowing this, the tabu search algorithm would attempt to execute more iterations. However, the only immediate neighbor of the current trial solution is the trial solution that was obtained at the preceding iteration. This would require deleting links 6-7 and 5-3, both of which are on the tabu list, so we are prevented from cycling back to the preceding trial solution. Since no other immediate neighbors are available, the stopping rule terminates the algorithm at this point with 1-2-4-6-7-5-3-1 (the best of the trial solutions) as the final solution. Although there is no guarantee that the algorithm's final solution is an optimal solution, we are fortunate that it turned out to be optimal in this case.

The metaheuristics area in your IOR Tutorial includes a procedure for applying this particular tabu search algorithm to other small traveling salesman problems.

This particular algorithm is just one example of a possible tabu search algorithm for traveling salesman problems. Various details of the algorithm could be modified in a number of reasonable ways. For example, the method typically doesn't stop when all available moves are forbidden by their tabu status, but instead just selects a "least tabu" move.
Also, an important feature of general tabu search methods is the use of multiple neighborhoods, relying on basic neighborhoods as long as they bring progress and then including more advanced neighborhoods when the rate of finding improved solutions diminishes. The most significant additional element of tabu search is its use of intensification and diversification strategies, as mentioned earlier. But the general outline of a basic "short-term memory" tabu search approach would remain roughly the same as we have illustrated.

Both examples considered in this section fall into the category of combinatorial optimization problems involving networks. This is a particularly common area of application for tabu search algorithms. The general outline of these algorithms incorporates the principles presented in this section, but the details are worked out to fit the structure of the specific problems being considered.

■ FIGURE 14.10 The sub-tour reversal of 5-7 in iteration 4 that leads from the trial solution on the left (distance = 66) to the new trial solution on the right (distance = 63, which happens to be the optimal solution).

■ 14.3 SIMULATED ANNEALING

Simulated annealing is another widely used metaheuristic that enables the search process to escape from a local optimum. To better compare and contrast it with tabu search, we will apply it to the same traveling salesman problem example before returning to the nonlinear programming example introduced in Sec. 14.1. But first, let us examine the basic concepts of simulated annealing.

Basic Concepts

Figure 14.1 in Sec. 14.1 introduced the concept that finding the global optimum of a complicated maximization problem is analogous to determining which of a number of hills is the tallest hill and then climbing to the top of that particular hill.
Unfortunately, a mathematical search process does not have the benefit of keen eyesight that would enable spotting a tall hill in the distance. Instead, it is like hiking in a dense fog where the only clue for the direction to take next is how much the next step in any direction would take you up or down. One approach, adopted into tabu search, is to climb the current hill in the steepest direction until reaching its top and then start climbing slowly downward while searching for another hill to climb. The drawback is that a lot of time (iterations) is spent climbing each hill encountered rather than searching for the tallest hill.

Instead, the approach used in simulated annealing is to focus mainly on searching for the tallest hill. Since the tallest hill can be anywhere in the feasible region, the early emphasis is on taking steps in random directions (except for rejecting some, but not all, steps that would go downward rather than upward) in order to explore as much of the feasible region as possible. Because most of the accepted steps are upward, the search will gradually gravitate toward those parts of the feasible region containing the tallest hills. Therefore, the search process gradually increases the emphasis on climbing upward by rejecting an increasing proportion of steps that go downward. Given enough time, the process often will reach and climb to the top of the tallest hill.

To be more specific, each iteration of the simulated annealing search process moves from the current trial solution to an immediate neighbor in the local neighborhood of this solution, just as for tabu search. However, the difference from tabu search lies in how an immediate neighbor is selected to be the next trial solution.
Let

Zc = objective function value for the current trial solution,
Zn = objective function value for the current candidate to be the next trial solution,
T = a parameter that measures the tendency to accept the current candidate to be the next trial solution if this candidate is not an improvement on the current trial solution.

The rule for selecting which immediate neighbor will be the next trial solution follows:

Move selection rule: Among all the immediate neighbors of the current trial solution, select one randomly to become the current candidate to be the next trial solution. Assuming the objective is maximization of the objective function, accept or reject this candidate to be the next trial solution as follows:

If Zn ≥ Zc, always accept this candidate.
If Zn < Zc, accept the candidate with the following probability:

Prob{acceptance} = e^x, where x = (Zn − Zc)/T.

(If the objective is minimization instead, reverse Zn and Zc in the above formulas.) If this candidate is rejected, repeat this process with a new randomly selected immediate neighbor of the current trial solution. (If no immediate neighbors remain, terminate the algorithm.)

Thus, if the current candidate under consideration is better than the current trial solution, it always is accepted to be the next trial solution. If it is worse, the probability of acceptance depends on how much worse it is (and on the size of T). Table 14.4 shows a sampling of these probability values, ranging from a very high probability when the current candidate is only slightly worse (relative to T) than the current trial solution to an extremely small probability when it is much worse. In other words, the move selection rule usually will accept a step that is only slightly downhill, but seldom will accept a steep downward step. Starting with a relatively large value of T (as simulated annealing does) makes the probability of acceptance relatively large, which enables the search to proceed in almost random directions.
Gradually decreasing the value of T as the search continues (as simulated annealing does) gradually decreases the probability of acceptance, which increases the emphasis on mostly climbing upward. Thus, the choice of the values of T over time controls the degree of randomness in the process for allowing downward steps.

■ TABLE 14.4 Some sample probabilities that the move selection rule will accept a downward step when the objective is maximization

x = (Zn − Zc)/T     Prob{acceptance} = e^x
−0.01               0.990
−0.1                0.905
−0.25               0.779
−0.5                0.607
−1                  0.368
−2                  0.135
−3                  0.050
−4                  0.018
−5                  0.007

This random component, not present in basic tabu search, provides more flexibility for moving toward another part of the feasible region in the hope of finding a taller hill. The usual method of implementing the move selection rule to determine whether a particular downward step will be accepted is to compare a random number between 0 and 1 to the probability of acceptance. Such a random number can be thought of as a random observation from a uniform distribution between 0 and 1. (All references to random numbers throughout the chapter will be to such random numbers.) There are a number of methods of generating these random numbers (as will be described in Sec. 20.3). For example, the Excel function RAND() generates such random numbers upon request. (The beginning of the Problems section also describes how you can use the random digits given in Table 20.3 to obtain the random numbers you will need for some of your homework problems.) After generating a random number, it is used as follows to determine whether to accept a downward step:

If random number < Prob{acceptance}, accept a downward step. Otherwise, reject the step.

Why does simulated annealing use the particular formula for Prob{acceptance} specified by the move selection rule? The reason is that simulated annealing is based on the analogy to a physical annealing process.
This process initially involves melting a metal or glass at a high temperature and then slowly cooling the substance until it reaches a low-energy stable state with desirable physical properties. At any given temperature T during this process, the energy level of the atoms in the substance is fluctuating but tending to decrease. A mathematical model of how the energy level fluctuates assumes that changes occur randomly except that only some of the increases are accepted. In particular, the probability of accepting an increase when the temperature is T has the same form as for Prob{acceptance} in the move selection rule for simulated annealing. The analogy for an optimization problem in minimization form is that the energy level of the substance at the current state of the system corresponds to the objective function value at the current feasible solution of the problem. The objective of having the substance reach a stable state with an energy level that is as small as possible corresponds to having the problem reach a feasible solution with an objective function value that is as small as possible. Just as for a physical annealing process, a key question when designing a simulated annealing algorithm for an optimization problem is to select an appropriate temperature schedule to use. (Because of the analogy to physical annealing, we now are referring to T in a simulated annealing algorithm as the temperature.) This schedule needs to specify the initial, relatively large value of T, as well as the subsequent progressively smaller values. It also needs to specify how many moves (iterations) should be made at each value of T. The selection of these parameters to fit the problem under consideration is a key factor in the effectiveness of the algorithm. Some preliminary experimentation can be used to guide this selection of the parameters of the algorithm. 
We later will specify one specific temperature schedule that seems reasonable for the two examples considered in this section, but many others could be considered as well. With this background, we now can provide an outline of a basic simulated annealing algorithm.

Outline of a Basic Simulated Annealing Algorithm

Initialization. Start with a feasible initial trial solution.
Iteration. Use the move selection rule to select the next trial solution. (If none of the immediate neighbors of the current trial solution are accepted, the algorithm is terminated.)
Check the temperature schedule. When the desired number of iterations have been performed at the current value of T, decrease T to the next value in the temperature schedule and resume performing iterations at this next value.
Stopping rule. When the desired number of iterations have been performed at the smallest value of T in the temperature schedule (or when none of the immediate neighbors of the current trial solution are accepted), stop. Accept the best trial solution found at any iteration (including for larger values of T) as the final solution.

Before applying this algorithm to any particular problem, a number of details need to be worked out to fit the structure of the problem.
1. How should the initial trial solution be selected?
2. What is the neighborhood structure that specifies which solutions are immediate neighbors (reachable in a single iteration) of any current trial solution?
3. What device should be used in the move selection rule to randomly select one of the immediate neighbors of the current trial solution to become the current candidate to be the next trial solution?
4. What is an appropriate temperature schedule?
We will illustrate some reasonable ways of addressing these questions in the context of applying the simulated annealing algorithm to the following two examples.
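The outline above can be sketched in Python. This is a minimal sketch under assumptions: the `neighbors` and `objective` functions are problem-specific and all names here are our own choices, not the book's.

```python
import math
import random

def simulated_annealing(initial, neighbors, objective, temperatures,
                        iters_per_temp=5, maximize=True):
    # Basic skeleton of the outline above.  `neighbors(sol)` must return
    # the immediate neighbors of a solution.
    def better(a, b):
        return a > b if maximize else a < b

    current, best = initial, initial
    for t in temperatures:
        for _ in range(iters_per_temp):
            candidates = list(neighbors(current))
            random.shuffle(candidates)      # consider neighbors in random order
            for cand in candidates:
                diff = objective(cand) - objective(current)
                if not maximize:
                    diff = -diff            # a decrease improves a minimization
                if diff >= 0 or random.random() < math.exp(diff / t):
                    current = cand          # move selection rule accepts
                    break
            else:
                return best                 # no neighbor accepted: terminate
            if better(objective(current), objective(best)):
                best = current
    return best
```

A temperature schedule like the one prescribed later in the text (T1 = 0.2Zc, halved four times, five iterations each) would be passed as `temperatures`.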
The Traveling Salesman Problem Example

We now return to the particular traveling salesman problem that was introduced in Sec. 14.1 and displayed in Fig. 14.4. The metaheuristics area in your IOR Tutorial includes a procedure for applying the basic simulated annealing algorithm to small traveling salesman problems like this example. This procedure answers the four questions in the following way:
1. Initial trial solution: You may enter any feasible solution (sequence of cities on the tour), perhaps by randomly generating the sequence, but it is helpful to enter one that appears to be a good feasible solution. For the example, the feasible solution 1-2-3-4-5-6-7-1 is a reasonable choice.
2. Neighborhood structure: An immediate neighbor of the current trial solution is one that is reached by making a sub-tour reversal, as described in Sec. 14.1 and illustrated in Fig. 14.5. (However, the sub-tour reversal that simply reverses the direction of the tour provided by the current trial solution is ruled out.)
3. Random selection of an immediate neighbor: Selecting a sub-tour to be reversed requires selecting the slot in the current sequence of cities where the sub-tour currently begins and then the slot where the sub-tour currently ends. The beginning slot can be anywhere except the first and last slots (reserved for the home city) and the next-to-last slot. The ending slot must be somewhere after the beginning slot, excluding the last slot. (Both beginning in the second slot and ending in the next-to-last slot also is ruled out since this would simply reverse the direction of the tour.) As will be illustrated shortly, random numbers are used to give equal probabilities to selecting any of the eligible beginning slots and then any of the eligible ending slots.
If this selection of the beginning and ending slots turns out to be infeasible (because the links needed to complete the sub-tour reversal are not available), this process is repeated until a feasible selection is made.
4. Temperature schedule: Five iterations are performed at each of five values of T (T1, T2, T3, T4, T5) in turn, where
T1 = 0.2Zc, where Zc is the objective function value for the initial trial solution,
T2 = 0.5T1, T3 = 0.5T2, T4 = 0.5T3, T5 = 0.5T4.

This particular temperature schedule is only illustrative of what could be used. T1 = 0.2Zc is a reasonable choice because T1 should tend to be fairly large compared to typical values of Zn − Zc, which will encourage an almost random search through the feasible region to find where the search should be focused. However, by the time the value of T is reduced to T5, almost no nonimproving moves will be accepted, so the emphasis will be on improving the value of the objective function. When dealing with larger problems, more than five iterations probably would be performed at each value of T. Furthermore, the values of T would probably be reduced more slowly than with the temperature schedule prescribed above.

Now let us elaborate on how the random selection of an immediate neighbor is made. Suppose we are dealing with the initial trial solution of 1-2-3-4-5-6-7-1 in our example.

Initial trial solution: 1-2-3-4-5-6-7-1 with Zc = 69, so T1 = 0.2Zc = 13.8.

The sub-tour that will be reversed can begin anywhere between the second slot (currently designating city 2) and the sixth slot (currently designating city 6). These five slots can be given equal probabilities by having the following values of a random number between 0 and 1 correspond to choosing the slot indicated below.
0.0000–0.1999: Sub-tour begins in slot 2.
0.2000–0.3999: Sub-tour begins in slot 3.
0.4000–0.5999: Sub-tour begins in slot 4.
0.6000–0.7999: Sub-tour begins in slot 5.
0.8000–0.9999: Sub-tour begins in slot 6.
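The correspondence between random-number ranges and slots amounts to splitting [0, 1) into equal intervals; a small helper (the name is ours, not the book's) makes this concrete:

```python
def choose_slot(r, first_slot, last_slot):
    # Map a random number r in [0, 1) to one of the equally likely
    # slots first_slot..last_slot.
    n = last_slot - first_slot + 1
    return first_slot + int(n * r)

choose_slot(0.2779, 2, 6)   # 3: 0.2779 falls in the range 0.2000-0.3999
```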
Suppose that the random number generated happens to be 0.2779.
0.2779: Choose a sub-tour that begins in slot 3.
By beginning in slot 3, the sub-tour that will be reversed needs to end somewhere between slots 4 and 7. These four slots are given equal probabilities by using the following correspondence with a random number.
0.0000–0.2499: Sub-tour ends in slot 4.
0.2500–0.4999: Sub-tour ends in slot 5.
0.5000–0.7499: Sub-tour ends in slot 6.
0.7500–0.9999: Sub-tour ends in slot 7.
Suppose that the random number generated for this purpose happens to be 0.0461.
0.0461: Choose to end the sub-tour in slot 4.
Since slots 3 and 4 currently designate that cities 3 and 4 are the third and fourth cities visited in the tour, the sub-tour of cities 3-4 will be reversed.
Reverse 3-4 (see Fig. 14.5): 1-2-4-3-5-6-7-1 with Zn = 65.
This immediate neighbor of the current (initial) trial solution becomes the current candidate to be the next trial solution. Since Zn = 65 < Zc = 69, this candidate is better than the current trial solution (remember that the objective here is to minimize the total distance of the tour), so this candidate is automatically accepted to be the next trial solution.
This choice of a sub-tour reversal was a fortunate one because it led to a feasible solution. This does not always happen in traveling salesman problems like our example, where certain pairs of cities are not directly connected by a link. For example, if the random numbers had called for reversing 2-3-4-5 to obtain the tour 1-5-4-3-2-6-7-1, Fig. 14.4 shows that this is an infeasible solution because there is no link between cities 1 and 5 as well as no link between cities 2 and 6. When this happens, new pairs of random numbers would need to be generated until a feasible solution is obtained. (A more sophisticated procedure also can be constructed to generate random numbers only for relevant links.)
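The sub-tour reversal itself is just a slice reversal on the list of cities. A sketch (the helper name is ours; note that feasibility still must be checked against the links in Fig. 14.4, which are not encoded here):

```python
def reverse_subtour(tour, begin_slot, end_slot):
    # Reverse the cities occupying slots begin_slot..end_slot (1-based,
    # as in the text).
    i, j = begin_slot - 1, end_slot        # convert to 0-based slice bounds
    return tour[:i] + tour[i:j][::-1] + tour[j:]

tour = [1, 2, 3, 4, 5, 6, 7, 1]
reverse_subtour(tour, 3, 4)   # [1, 2, 4, 3, 5, 6, 7, 1]: the tour 1-2-4-3-5-6-7-1
reverse_subtour(tour, 2, 5)   # [1, 5, 4, 3, 2, 6, 7, 1]: infeasible per Fig. 14.4
```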
To illustrate a case where the current candidate to be the next trial solution is worse than the current trial solution, suppose that the second iteration results in reversing 3-5-6 (as in Fig. 14.6) to obtain 1-2-4-6-5-3-7-1, which has a total distance of 64. Then suppose that the third iteration begins by reversing 3-7 (as in Fig. 14.9) to obtain 1-2-4-6-5-7-3-1 (which has a total distance of 66) as the current candidate to be the next trial solution. Since 1-2-4-6-5-3-7-1 (with a total distance of 64) is the current trial solution for iteration 3, we now have Zc = 64, Zn = 66, and T1 = 13.8. Therefore, since the objective here is minimization, the probability of accepting 1-2-4-6-5-7-3-1 as the next trial solution is

Prob{acceptance} = e^((Zc − Zn)/T1) = e^(−2/13.8) = 0.865.

If the next random number generated is less than 0.865, this candidate solution will be accepted as the next trial solution. Otherwise, it will be rejected. Table 14.5 shows the results of using IOR Tutorial to apply the complete simulated annealing algorithm to this problem.
■ TABLE 14.5 One application of the simulated annealing algorithm in IOR Tutorial to the traveling salesman problem example

Iteration    T        Trial Solution Obtained    Distance
 0           —        1-2-3-4-5-6-7-1            69
 1           13.8     1-3-2-4-5-6-7-1            68
 2           13.8     1-2-3-4-5-6-7-1            69
 3           13.8     1-3-2-4-5-6-7-1            68
 4           13.8     1-3-2-4-6-5-7-1            65
 5           13.8     1-2-3-4-6-5-7-1            66
 6           6.9      1-2-3-4-5-6-7-1            69
 7           6.9      1-3-2-4-5-6-7-1            68
 8           6.9      1-2-3-4-5-6-7-1            69
 9           6.9      1-2-3-5-4-6-7-1            65
10           6.9      1-2-3-4-5-6-7-1            69
11           3.45     1-2-3-4-6-5-7-1            66
12           3.45     1-3-2-4-6-5-7-1            65
13           3.45     1-3-7-5-6-4-2-1            66
14           3.45     1-3-5-7-6-4-2-1            63 ← Minimum
15           3.45     1-3-7-5-6-4-2-1            66
16           1.725    1-3-5-7-6-4-2-1            63 ← Minimum
17           1.725    1-3-7-5-6-4-2-1            66
18           1.725    1-3-2-4-6-5-7-1            65
19           1.725    1-2-3-4-6-5-7-1            66
20           1.725    1-3-2-4-6-5-7-1            65
21           0.8625   1-3-7-5-6-4-2-1            66
22           0.8625   1-3-2-4-6-5-7-1            65
23           0.8625   1-2-3-4-6-5-7-1            66
24           0.8625   1-3-2-4-6-5-7-1            65
25           0.8625   1-3-7-5-6-4-2-1            66

Note that iterations 14 and 16 tie for finding the best trial solution, 1-3-5-7-6-4-2-1 (which happens to be the optimal solution along with the equivalent tour in the reverse direction, 1-2-4-6-7-5-3-1), so this solution is accepted as the final solution. You might find it interesting to apply this software to the same problem yourself. Due to the randomness built into the algorithm, the sequence of trial solutions obtained will be different each time. Because of this feature, practitioners sometimes will reapply a simulated annealing algorithm to the same problem several times to increase the chance of finding an optimal solution. (Problem 14.3-2 asks you to do this for this same example.) The initial trial solution also may be changed each time to help facilitate a more thorough exploration of the entire feasible region. If you would like to see another example of how random numbers are used to perform an iteration of the basic simulated annealing algorithm for a traveling salesman problem, one is provided in the Solved Examples section of the book's website.
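The acceptance probability computed in the third-iteration illustration above is easy to verify directly:

```python
import math

# Zc = 64 (current tour), Zn = 66 (candidate), T1 = 13.8; for this
# minimization the exponent is (Zc - Zn)/T1.
prob = math.exp((64 - 66) / 13.8)
assert round(prob, 3) == 0.865
```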
Before going on to the next example, we should pause at this point to mention a couple of ways in which advanced features of tabu search can be combined fruitfully with simulated annealing. One way is by applying the strategic oscillation feature of tabu search to the temperature schedule of simulated annealing. Strategic oscillation adjusts the temperature schedule by decreasing the temperatures more rapidly than usual but then strategically moving the temperatures back and forth across levels where the best solutions were found. Another way involves applying the candidate-list strategies of tabu search to the move selection rule of simulated annealing. The idea here is to scan multiple neighbors to see if an improving move is found before applying the randomized rule for accepting or rejecting the current candidate to be the next trial solution. These changes have sometimes produced significant improvements.

As these ideas for applying features of tabu search to simulated annealing suggest, a hybrid algorithm that combines the ideas of different metaheuristics can sometimes perform better than an algorithm that is based solely on a single metaheuristic. Although we are presenting three commonly used metaheuristics separately in this chapter, experienced practitioners occasionally will pick and choose among the ideas of these and other metaheuristics in designing their heuristic methods.

The Nonlinear Programming Example

Now reconsider the example of a small nonlinear programming problem (only a single variable) that was introduced in Sec. 14.1. The problem is to

Maximize f(x) = 12x^5 − 975x^4 + 28,000x^3 − 345,000x^2 + 1,800,000x,

subject to

0 ≤ x ≤ 31.

The graph of f(x) in Fig. 14.1 reveals that there are local optima at x = 5, x = 20, and x = 31, but only x = 20 is a global optimum. The metaheuristics area in IOR Tutorial includes a procedure for applying the simulated annealing algorithm to small nonlinear programming problems of the form

Maximize f(x1, . . .
, xn)

subject to

Lj ≤ xj ≤ Uj, for j = 1, . . . , n,

where n = 1 or 2, and where Lj and Uj are constants (0 ≤ Lj < Uj ≤ 63) representing the bounds on xj. (Having relatively tight bounds on the individual variables is highly desirable for the efficiency of a simulated annealing algorithm, as well as for the genetic algorithms discussed in the next section.) One or two linear functional constraints on the variables x = (x1, . . . , xn) also can be included when n = 2. For the example, we have n = 1, L1 = 0, U1 = 31, with no linear functional constraints. This procedure in IOR Tutorial designs the details of the simulated annealing algorithm for such nonlinear programming problems as follows.
1. Initial trial solution: You may enter any feasible solution, but it is helpful to enter one that appears to be a good feasible solution. In the absence of any clues about where the good feasible solutions might lie, it is reasonable to set each variable xj midway between its lower bound Lj and upper bound Uj in order to start the search in the middle of the feasible region. (For this reason, x = 15.5 is a reasonable choice for the initial trial solution for the example.)
2. Neighborhood structure: Any feasible solution is considered to be an immediate neighbor of the current trial solution. However, the method described below for selecting an immediate neighbor to become the current candidate to be the next trial solution gives a preference to feasible solutions that are relatively close to the current trial solution, while still allowing for the possibility of moving to a different part of the feasible region to continue the search.
3. Random selection of an immediate neighbor: Set

σj = (Uj − Lj)/6, for j = 1, . . . , n.

Then, given the current trial solution (x1, . . . , xn), reset

xj = xj + N(0, σj), for j = 1, . . . , n,

where N(0, σj) is a random observation from a normal distribution with mean zero and standard deviation σj.
If this does not result in a feasible solution, then repeat this process (starting again from the current trial solution) as many times as needed to obtain a feasible solution.
4. Temperature schedule: As for traveling salesman problems, five iterations are performed at each of five values of T (T1, T2, T3, T4, T5) in turn, where
T1 = 0.2Zc, where Zc is the objective function value for the initial trial solution,
T2 = 0.5T1, T3 = 0.5T2, T4 = 0.5T3, T5 = 0.5T4.

The reason for setting σj = (Uj − Lj)/6 when selecting an immediate neighbor is that when the variable xj is midway between Lj and Uj, any new feasible value of the variable is within three standard deviations of the current value. This gives a significant probability that the new value will move most of the way to one of its bounds even though there is a much higher probability that the new value will be relatively close to the current value. There are a number of methods for generating a random observation N(0, σj) from a normal distribution (as will be discussed briefly in Sec. 20.4). For example, the Excel function NORMINV(RAND(),0,σj) generates such a random observation. For your homework, here is a straightforward way of generating the random observations you need. Obtain a random number r and then use the normal table in Appendix 5 to find the value of N(0, σj) such that P{X ≤ N(0, σj)} = r when X is a normal random variable with mean 0 and standard deviation σj.

To illustrate how the algorithm designed in this way would be applied to the example, let us start with x = 15.5 as the initial trial solution. Thus,

Zc = f(15.5) = 3,741,121 and T1 = 0.2Zc = 748,224.

Since

σ = (U − L)/6 = (31 − 0)/6 = 5.167,

the next step is to generate a random observation N(0, 5.167) from a normal distribution with mean zero and this standard deviation. To do this, we first obtain a random number, which happens to be 0.0735.
Going to the normal table in Appendix 5, P{standard normal ≤ −1.45} = 0.0735, so

N(0, 5.167) = −1.45(5.167) = −7.5.

The current candidate to be the next trial solution then is obtained by resetting x as

x = 15.5 + N(0, 5.167) = 15.5 − 7.5 = 8,

so that Zn = f(x) = 3,055,616. Because

(Zn − Zc)/T = (3,055,616 − 3,741,121)/748,224 = −0.916,

the probability of accepting x = 8 as the next trial solution is

Prob{acceptance} = e^(−0.916) = 0.400.

Therefore, x = 8 will be accepted only if the corresponding random number between 0 and 1 happens to be less than 0.400. Thus, x = 8 is fairly likely to be rejected. (In somewhat later iterations when T is much smaller, x = 8 would almost certainly be rejected.) This is fortunate since Fig. 14.1 reveals that the search should focus on the portion of the feasible region between x = 10 and x = 30 in order to start climbing the tallest hill.

Table 14.6 provides the results that were obtained by using IOR Tutorial to apply the complete simulated annealing algorithm to this nonlinear programming problem. Note how the trial solutions obtained vary fairly widely over the feasible region during the early iterations, but then start approaching the top of the tallest hill more consistently during the later iterations when T has been reduced to much smaller values. Of the 25 iterations, the best trial solution, x = 20.031 (as compared to the optimal solution of x = 20), was not obtained until iteration 21. Once again, you might find it interesting to apply this software to the same problem yourself to see what is yielded by new sequences of random numbers and random observations from normal distributions. (Problem 14.3-6 asks you to do this several times.)
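This worked iteration can be reproduced with Python's standard library, using NormalDist.inv_cdf in place of the normal table (the variable names are our own choices):

```python
import math
from statistics import NormalDist

def f(x):
    # Objective function of the example from Sec. 14.1
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

sigma = (31 - 0) / 6                    # sigma = (U - L)/6 = 5.167
z_c = f(15.5)                           # 3,741,120.7, i.e., 3,741,121 rounded
t1 = 0.2 * round(z_c)                   # T1 = 748,224 (rounded as in the text)

# The table lookup for r = 0.0735 is equivalent to the inverse normal CDF:
step = NormalDist(0, sigma).inv_cdf(0.0735)   # about -7.5
x_new = round(15.5 + step)                    # 15.5 - 7.5 = 8
z_n = f(x_new)                                # 3,055,616
prob = math.exp((z_n - z_c) / t1)             # about 0.400
```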
■ TABLE 14.6 One application of the simulated annealing algorithm in IOR Tutorial to the nonlinear programming example

Iteration    T         Trial Solution Obtained    f(x)
 0           —         x = 15.5      3,741,121.0
 1           748,224   x = 17.557    4,167,533.956
 2           748,224   x = 14.832    3,590,466.203
 3           748,224   x = 17.681    4,188,641.364
 4           748,224   x = 16.662    3,995,966.078
 5           748,224   x = 18.444    4,299,788.258
 6           374,112   x = 19.445    4,386,985.033
 7           374,112   x = 21.437    4,302,136.329
 8           374,112   x = 18.642    4,322,687.873
 9           374,112   x = 22.432    4,113,901.493
10           374,112   x = 21.081    4,345,233.403
11           187,056   x = 20.383    4,393,306.255
12           187,056   x = 21.216    4,330,358.125
13           187,056   x = 21.354    4,313,392.276
14           187,056   x = 20.795    4,370,624.01
15           187,056   x = 18.895    4,348,060.727
16           93,528    x = 21.714    4,259,787.734
17           93,528    x = 19.463    4,387,360.1
18           93,528    x = 20.389    4,393,076.988
19           93,528    x = 19.83     4,398,710.575
20           93,528    x = 20.68     4,378,591.085
21           46,764    x = 20.031    4,399,955.913 ← Maximum
22           46,764    x = 20.184    4,398,462.299
23           46,764    x = 19.9      4,399,551.462
24           46,764    x = 19.677    4,395,385.618
25           46,764    x = 19.377    4,383,048.039

■ 14.4 GENETIC ALGORITHMS

Genetic algorithms provide a third type of metaheuristic that is quite different from the first two. This type tends to be particularly effective at exploring various parts of the feasible region and gradually evolving toward the best feasible solutions. After introducing the basic concepts for this type of metaheuristic, we will apply a basic genetic algorithm to the same nonlinear programming example just considered above with the additional constraint that the variable is restricted to integer values. We then will apply this approach to the same traveling salesman problem example considered in each of the preceding sections.

Basic Concepts

Just as simulated annealing is based on an analogy to a natural phenomenon (the physical annealing process), genetic algorithms are greatly influenced by another form of a natural phenomenon.
In this case, the analogy is to the biological theory of evolution formulated by Charles Darwin in the mid-19th century. Each species of plants and animals has great individual variation. Darwin observed that those individuals with variations that impart a survival advantage through improved adaptation to the environment are most likely to survive to the next generation. This phenomenon has since been referred to as survival of the fittest. The modern field of genetics provides a further explanation of this process of evolution and the natural selection involved in the survival of the fittest.

An Application Vignette

Intel Corporation is the world's largest semiconductor chip maker. With well over 80,000 employees and annual revenues over $53 billion, it has over 5000 products serving a wide variety of markets. With so many products, one key to the continuing success of the company is an effective system for continually updating the design and scheduling of its product line. It can maximize its revenues only by introducing products into markets with the right features, at the right price, and at the right time. Therefore, a major operations research study was undertaken to optimize how this is done. The resulting model incorporated market requirements and financials, design-engineering capabilities, manufacturing costs, and multiple-time dynamics. This model then was embedded in a decision support system that soon was used by hundreds of Intel employees representing most major Intel groups and many distinct job functions. The algorithmic heart of this decision support system is a genetic algorithm that handles resource constraints, scheduling, and financial optimization. This algorithm uses a fitness function to evaluate candidate solutions and then performs the usual genetic operators of mutation and crossover. It also calls on a combination of heuristic methods and mathematical optimization techniques to optimize product composition. This algorithm and its associated database enabled a new business process that is shifting Intel divisions to a unified focus on global profit maximization. This dramatic application of operations research revolving around a genetic algorithm led to OR professionals from Intel winning the prestigious 2011 Daniel H. Wagner Prize for Excellence in Operations Research Practice.

Source: Rash, E., and K. Kempf, "Product Line Design and Scheduling at Intel," Interfaces, 42(5): 425–436, September–October 2012. (A link to this article is provided on our website, www.mhhe.com/hillier.)

In any species that reproduces by sexual reproduction, each offspring inherits some of the chromosomes from each of the two parents, where the genes within the chromosomes determine the individual features of the child. A child who happens to inherit the better features of the parents is slightly more likely to survive into adulthood and then become a parent who passes on some of these features to the next generation. The population tends to improve slowly over time by this process. A second factor that contributes to this process is a random, low-level mutation rate in the DNA of the chromosomes. Thus, a mutation occasionally occurs that changes the features of a chromosome that a child inherits from a parent. Although most mutations have no effect or are disadvantageous, some mutations provide desirable improvements. Children with desirable mutations are slightly more likely to survive and contribute to the future gene pool of the species.

These ideas transfer over to dealing with optimization problems in a rather natural way. Feasible solutions for a particular problem correspond to members of a particular species, where the fitness of each member now is measured by the value of the objective function.
Rather than processing a single trial solution at a time (as with basic forms of tabu search and simulated annealing), we now work with an entire population of trial solutions.1 For each iteration (generation) of a genetic algorithm, the current population consists of the set of trial solutions currently under consideration. These trial solutions are thought of as the currently living members of the species. Some of the youngest members of the population (including especially the fittest members) survive into adulthood and become parents (paired at random) who then have children (new trial solutions) who share some of the features (genes) of both parents. Since the fittest members of the population are more likely to become parents than others, a genetic algorithm tends to generate improving populations of trial solutions as it proceeds. Mutations occasionally occur so that certain children also can acquire features (sometimes desirable features) that are not possessed by either parent. This helps a genetic algorithm to explore a new, perhaps better part of the feasible region than previously considered. Eventually, survival of the fittest should tend to lead a genetic algorithm to a trial solution (the best of any considered) that is at least nearly optimal.

Although the analogy of the process of biological evolution defines the core of any genetic algorithm, it is not necessary to adhere rigidly to this analogy in every detail. For example, some genetic algorithms (including the one outlined below) allow the same trial solution to be a parent repeatedly over multiple generations (iterations).

1 One of the intensification strategies of tabu search also maintains a population of best solutions. The population is used to create linking paths between its members and to relaunch the search along these paths.
Thus, the analogy needs to be only a starting point for defining the details of the algorithm to best fit the problem under consideration. Here is a rather typical outline of a genetic algorithm that we will employ for the two examples.

Outline of a Basic Genetic Algorithm

Initialization. Start with an initial population of feasible trial solutions, perhaps by generating them randomly. Evaluate the fitness (the value of the objective function) for each member of this current population.
Iteration. Use a random process that is biased toward the more fit members of the current population to select some of the members (an even number) to become parents. Pair up the parents randomly and then have each pair of parents give birth to two children (new feasible trial solutions) whose features (genes) are a random mixture of the features of the parents, except for occasional mutations. (Whenever the random mixture of features and any mutations result in an infeasible solution, this is a miscarriage, so the process of attempting to give birth then is repeated until a child is born that corresponds to a feasible solution.) Retain the children and enough of the best members of the current population to form the new population of the same size for the next iteration. (Discard the other members of the current population.) Evaluate the fitness for each new member (the children) in the new population.
Stopping rule. Use some stopping rule, such as a fixed number of iterations, a fixed amount of CPU time, or a fixed number of consecutive iterations without any improvement in the best trial solution found so far. Use the best trial solution found on any iteration as the final solution.

Before this algorithm can be implemented, the following questions need to be answered:
1. What should the population size be?
2. How should the members of the current population be selected to become parents?
3. How should the features of the children be derived from the features of the parents?
4. How should mutations be injected into the features of the children?
5. Which stopping rule should be used?

The answers to these questions depend greatly on the structure of the specific problem being addressed. The metaheuristics area in the IOR Tutorial does include two versions of the algorithm. One is for very small integer nonlinear programming problems like the example considered next. The other is for small traveling salesman problems. Both versions answer some of the questions in the same way, as described below:
1. Population size: Ten. (This size is reasonable for the small problems for which this software is designed, but much larger populations commonly are used for large problems.)
2. Selection of parents: From among the five most fit members of the population (according to the value of the objective function), select four randomly to become parents. From among the five least fit members, select two randomly to become parents. Pair up the six parents randomly to form three couples.
3. Passage of features (genes) from parents to children: This process is highly problem dependent and so differs for the two versions of the algorithm in the software, as described later for the two examples.
4. Mutation rate: The probability that an inherited feature of a child mutates into an opposite feature is set at 0.1 in the software. (Much smaller mutation rates commonly are used for large problems.)
5. Stopping rule: Stop after five consecutive iterations without any improvement in the best trial solution found so far.

Now we are ready to apply the algorithm to the two examples.

The Integer Version of the Nonlinear Programming Example

We return again to the small nonlinear programming problem that was introduced in Sec. 14.1 (see Fig. 14.1) and then addressed using a simulated annealing algorithm at the end of the preceding section.
However, we now add the additional constraint that the problem’s single variable x must have an integer value. Because the problem already has the constraint that 0 ≤ x ≤ 31, this means that the problem has 32 feasible solutions, x = 0, 1, 2, . . . , 31. (Having such bounds is very important for a genetic algorithm, since it reduces the search space to the relevant region.) Thus, we now are dealing with an integer nonlinear programming problem.

When applying a genetic algorithm, strings of binary digits often are used to represent the solutions of the problem. Such an encoding of the solutions is a particularly convenient one for the various steps of a genetic algorithm, including the process of parents giving birth to children. This encoding is easy to do for our particular problem because we simply can write each value of x in base 2. Since 31 is the maximum feasible value of x, only five binary digits are required to write any feasible value. We always will include all five binary digits even when the leading digit or digits are zeroes. Thus, for example,

x = 3 is 00011 in base 2,
x = 10 is 01010 in base 2,
x = 25 is 11001 in base 2.

Each of the five binary digits is referred to as one of the genes of the solution, where the two possible values of the binary digit describe which of two possible features is being carried in that gene to help form the overall genetic makeup. When both parents have the same feature, it will be passed down to each child (except when a mutation occurs). However, when the two parents carry opposite features on the same gene, which feature a child will inherit becomes random. For example, suppose that the two parents are

P1: 00011 and
P2: 01010.

Since the first, third, and fourth digits agree, the children then automatically become (barring mutations)

C1: 0x01x and
C2: 0x01x,

where x indicates that this particular digit is not known yet.
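The base-2 encoding just described can be sketched in a few lines. This is our own minimal illustration (the function names are not from IOR Tutorial):

```python
def encode(x, n_bits=5):
    """Write an integer x in base 2, padded with leading zeroes to n_bits genes."""
    return format(x, "0{}b".format(n_bits))

def decode(bits):
    """Recover the base-10 value of a binary string (a chromosome)."""
    return int(bits, 2)

# The three examples from the text:
for x in (3, 10, 25):
    print(x, "is", encode(x), "in base 2")
```

Encoding and decoding are exact inverses here, so every 5-digit chromosome corresponds to exactly one feasible value of x.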
Random numbers are used to identify these unknown digits, where a natural correspondence is

0.0000–0.4999 corresponds to the digit being 0,
0.5000–0.9999 corresponds to the digit being 1.

For example, suppose that the next four random numbers generated are 0.7265, 0.5190, 0.0402, and 0.3639, so that the two unknown digits for the first child are both 1s and the two unknown digits for the second child are both 0s. The children then become (barring mutations)

C1: 01011 and
C2: 00010.

This particular method of generating the children from the parents is known as uniform crossover. It is perhaps the most intuitive of the various alternative methods that have been proposed.

We now need to consider the possibility of mutations that would affect the genetic makeup of the children. Since the probability of a mutation in any gene (flipping the binary digit to the opposite value) has been set at 0.1 for our algorithm, we can let the random numbers

0.0000–0.0999 correspond to a mutation,
0.1000–0.9999 correspond to no mutation.

For example, suppose that in the next 10 random numbers generated, only the eighth one is less than 0.1000. This indicates that no mutation occurs in the first child, but the third gene (digit) in the second child flips its value. Therefore, the final conclusion is that the two children are

C1: 01011 and
C2: 00110.

Returning to base 10, the two parents correspond to the solutions x = 3 and x = 10, whereas their children would have been (barring mutations) x = 11 and x = 2. However, because of the mutation, the children become x = 11 and x = 6.

For this particular example, any integer value of x such that 0 ≤ x ≤ 31 (in base 10) is a feasible solution, so every 5-digit number in base 2 also is a feasible solution. Therefore, the above process of creating children never results in a miscarriage (an infeasible solution). However, if the upper bound on x were, say, x ≤ 25 instead, then miscarriages would occur occasionally.
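The crossover-and-mutation walkthrough above can be reproduced deterministically by feeding in the random numbers quoted in the text. This is a sketch in our own notation, not the IOR Tutorial implementation:

```python
def uniform_crossover(p1, p2, rand):
    """Generate two children from two binary-string parents by uniform crossover.
    rand yields random numbers in [0, 1); a number < 0.5 gives digit 0 and
    >= 0.5 gives digit 1, consulted only where the parents disagree."""
    children = []
    for _ in range(2):
        child = []
        for g1, g2 in zip(p1, p2):
            if g1 == g2:                      # both parents carry the same feature
                child.append(g1)
            else:                             # random choice between the two features
                child.append("1" if next(rand) >= 0.5 else "0")
        children.append("".join(child))
    return children

def mutate(child, rand, rate=0.1):
    """Flip each gene independently with the given mutation probability."""
    return "".join(("1" if g == "0" else "0") if next(rand) < rate else g
                   for g in child)

# Reproduce the worked example: parents 00011 and 01010, with the four random
# numbers quoted in the text deciding the unknown digits ...
r = iter([0.7265, 0.5190, 0.0402, 0.3639])
c1, c2 = uniform_crossover("00011", "01010", r)   # 01011 and 00010
# ... and ten mutation checks, of which only the eighth signals a mutation
# (so the third gene of the second child flips).
m = iter([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.05, 0.5, 0.5])
c1 = mutate(c1, m)
c2 = mutate(c2, m)
```

With those inputs the final children come out as 01011 (x = 11) and 00110 (x = 6), matching the text.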
Whenever a miscarriage occurs, the solution is discarded and the entire process of creating a child is repeated until a feasible solution is obtained.

This example includes only a single variable. For a nonlinear programming problem with multiple variables, each member of the population again would use base 2 to show the value of each variable. The above process of generating children from parents then would be done in the same way one variable at a time.

Table 14.7 shows the application of the complete algorithm to this example through both the initialization step (part a of the table) and iteration 1 (part b of the table). In the initialization step, each member of the initial population was generated by generating five random numbers and using the correspondence between a random number and a binary digit given earlier to obtain the five binary digits in turn. The corresponding value of x in base 10 then is plugged into the objective function given at the beginning of Sec. 14.1 to evaluate the fitness of that member of the population.

The five members of the initial population that have the highest degree of fitness (in order) are members 10, 8, 4, 1, and 7. To randomly select four of these members to become parents, a random number is used to select one member to be rejected, where 0.0000–0.1999 corresponds to rejecting the first member listed (member 10), 0.2000–0.3999 corresponds to rejecting the second member, and so forth. In this case, the random number was 0.9665, so the fifth member listed (member 7) does not become a parent.

From among the five less fit members of the initial population (members 2, 3, 6, 5, and 9), random numbers now are used to select which two of these members will become parents. In this case, the random numbers were 0.5634 and 0.1270.
For the first random number, 0.0000–0.1999 corresponds to selecting the first member listed (member 2), 0.2000–0.3999 corresponds to selecting the second member, and so forth, so the third member listed (member 6) is the one selected in this case. Since only four members (2, 3, 5, and 9) now remain for selecting the last parent, the corresponding intervals for the second random number are 0.0000–0.2499, 0.2500–0.4999, 0.5000–0.7499, and 0.7500–0.9999. Because 0.1270 falls in the first of these intervals, the first remaining member listed (member 2) is selected to be a parent.

■ TABLE 14.7 Application of the genetic algorithm to the integer nonlinear programming example through (a) the initialization step and (b) iteration 1

(a)
Member   Initial Population   Value of x   Fitness
 1       01111                15           3,628,125
 2       00100                 4           3,234,688
 3       01000                 8           3,055,616
 4       10111                23           3,962,091
 5       01010                10           2,950,000
 6       01001                 9           2,978,613
 7       00101                 5           3,303,125
 8       10010                18           4,239,216
 9       11110                30           1,350,000
10       10101                21           4,353,187

(b)
Member   Parents   Children   Value of x   Fitness
10       10101     00101       5           3,303,125
 2       00100     10001      17           4,064,259
 8       10010     10011      19           4,357,164
 4       10111     10100      20           4,400,000
 1       01111     01011      11           2,980,637
 6       01001     01111      15           3,628,125

The next step is to pair up the six parents—members 10, 8, 4, 1, 6, and 2. Let us begin by using a random number to determine the mate of the first member listed (member 10). The random number 0.8204 indicated that it should be paired up with the fifth of the other five parents listed (member 2). To pair up the next member listed (member 8), the next random number was 0.0198, which is in the interval 0.0000–0.3333, so the first of the three remaining parents listed (member 4) is chosen to be the mate of member 8. This then leaves the two remaining parents (members 1 and 6) to become the last couple.
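The interval mechanics for selecting and pairing parents reduce to mapping a random number u in [0, 1) to the index ⌊u · n⌋ of an n-member list. The sketch below reproduces the selections in the text; the ordering of the five least fit members (2, 3, 6, 5, 9) is taken from the Table 14.7 fitness values:

```python
def pick(members, u):
    """Map a random number u in [0, 1) to one member via equal-width intervals."""
    return members[int(u * len(members))]

# Most fit five members in fitness order; 0.9665 rejects the fifth one (member 7).
top5 = [10, 8, 4, 1, 7]
rejected = pick(top5, 0.9665)
fit_parents = [m for m in top5 if m != rejected]

# Least fit five members in fitness order; two are drawn without replacement.
bottom5 = [2, 3, 6, 5, 9]
p5 = pick(bottom5, 0.5634)                           # member 6
remaining = [m for m in bottom5 if m != p5]
p6 = pick(remaining, 0.1270)                         # member 2
parents = fit_parents + [p5, p6]                     # members 10, 8, 4, 1, 6, 2

# Pairing: member 10 is matched first, then member 8, leaving the last couple.
others = [8, 4, 1, 6, 2]
mate10 = pick(others, 0.8204)                        # member 2
rest = [m for m in others if m not in (8, mate10)]   # members 4, 1, 6
mate8 = pick(rest, 0.0198)                           # member 4
```

This yields the couples (10, 2), (8, 4), and (1, 6), as in the text.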
Part (b) of Table 14.7 shows the children that were reproduced by these parents by using the process illustrated earlier in this subsection. Note that mutations occurred in the third gene of the second child and the fourth gene of the fourth child. By and large, the six children have a relatively high degree of fitness. In fact, for each pair of parents, both of the children turned out to be more fit than one of the parents. This does not always occur but is fairly common. In the case of the second pair of parents, both of the children happen to be more fit than both parents. Fortuitously, both of these children (x = 19 and x = 20) actually are superior to any of the members of the preceding population given in part (a) of the table.

To form the new population for the next iteration, all six children are retained along with the four most fit members of the preceding population (members 10, 8, 4, and 1). Subsequent iterations would proceed in a similar fashion. Since we know from the discussion in Sec. 14.1 (see Fig. 14.1) that x = 20 (the best trial solution generated in iteration 1) actually is the optimal solution for this example, subsequent iterations would not provide any further improvement. Therefore, the stopping rule would terminate the algorithm after five more iterations and provide x = 20 as the final solution.

Your IOR Tutorial includes a procedure for applying this same genetic algorithm to other very small integer nonlinear programming problems. (The form and size restrictions are the same as specified in Sec. 14.3 for nonlinear programming problems.) You might find it interesting to apply this procedure in IOR Tutorial to this same example. Because of the randomness inherent in the algorithm, different intermediate results are obtained each time that it is applied. (Problem 14.4-3 asks you to apply the algorithm to this example several times.)
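The complete algorithm for the integer example can be sketched as below. The objective is the Sec. 14.1 polynomial f(x) = 12x⁵ − 975x⁴ + 28,000x³ − 345,000x² + 1,800,000x (it reproduces, e.g., the Table 14.7 fitness values for x = 15, 18, and 20). For brevity the selection and pairing steps use simple random sampling rather than the exact interval mechanics described above, so treat this as an illustrative sketch rather than the IOR Tutorial implementation:

```python
import random

def f(x):
    """Objective from Sec. 14.1, maximized over the integers 0 <= x <= 31."""
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def genetic_algorithm(rng, pop_size=10, mut_rate=0.1, patience=5):
    pop = [rng.randint(0, 31) for _ in range(pop_size)]
    best = max(pop, key=f)
    stall = 0
    while stall < patience:      # stop after 5 iterations with no improvement
        ranked = sorted(pop, key=f, reverse=True)
        # Four parents from the five most fit members, two from the five least fit.
        parents = rng.sample(ranked[:5], 4) + rng.sample(ranked[5:], 2)
        rng.shuffle(parents)     # pair the parents up randomly
        children = []
        for p1, p2 in zip(parents[0::2], parents[1::2]):
            for _ in range(2):   # each couple gives birth to two children
                # Uniform crossover, one binary gene at a time ...
                bits = [((p1 >> i) & 1) if rng.random() < 0.5 else ((p2 >> i) & 1)
                        for i in range(5)]
                # ... with each gene mutating (flipping) with probability 0.1.
                bits = [(b ^ 1) if rng.random() < mut_rate else b for b in bits]
                children.append(sum(b << i for i, b in enumerate(bits)))
        # Keep all six children plus the four most fit members of the old population.
        pop = children + ranked[:pop_size - len(children)]
        new_best = max(pop, key=f)
        if f(new_best) > f(best):
            best, stall = new_best, 0
        else:
            stall += 1
    return best

best = genetic_algorithm(random.Random(14))
```

No miscarriage handling is needed here, since every 5-bit string is a feasible solution, as noted in the text.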
Although this was a discrete example, genetic algorithms also can be applied to continuous problems, such as a nonlinear programming problem without an integer constraint. In this case, the value of a continuous variable would be represented (or closely approximated) by a decimal number in base 2. For example, x = 23.625 is 10111.10100 in base 2, and x = 23.66 is closely approximated by 10111.10101 in base 2. All the binary digits on both sides of the decimal point can be treated just as before to have parents reproduce children, and so forth.

The Traveling Salesman Problem Example

Sections 14.2 and 14.3 illustrated how a tabu search algorithm and a simulated annealing algorithm would be applied to the particular traveling salesman problem introduced in Sec. 14.1 (see Fig. 14.4). Now let us see how our genetic algorithm can be applied using this same example.

Rather than using binary digits in this case, we will continue to represent each solution (tour) in the natural way as a sequence of cities visited. For example, the first solution considered in Sec. 14.1 is the tour of the cities in the following order: 1-2-3-4-5-6-7-1, where city 1 is the home base where the tour must begin and end. We should point out, however, that genetic algorithms for traveling salesman problems frequently use other methods for encoding solutions. In general, clever methods of representing solutions (often by using strings of binary digits) can make it easier to generate children, create mutations, maintain feasibility, and so forth, in a natural way. The development of an appropriate encoding scheme is a key part of developing an effective genetic algorithm for any application.

A complication with this particular example is that, in a sense, it is too easy. Because of the rather limited number of links between pairs of cities in Fig. 14.4, this problem barely has 10 distinct feasible solutions if we rule out a tour that is simply a previously considered tour in the reverse direction.
Therefore, it is not possible to have an initial population with 10 distinct trial solutions such that the resulting six parents then reproduce distinct children that also are distinct from the members of the initial population (including the parents). Fortunately, a genetic algorithm can still operate reasonably well when there is a modest amount of duplication in the trial solutions in a population or in two consecutive populations. For example, even when both parents in a couple are identical, it still is possible for their children to differ from the parents because of mutations. The genetic algorithm for traveling salesman problems in your IOR Tutorial does not do anything to avoid duplication in the trial solutions considered.

Each of the 10 trial solutions in the initial population is generated in turn as follows. Starting from the home base city, random numbers are used to select the next city from among those that have a link to the home base city (cities 2, 3, and 7 in Fig. 14.4). Random numbers then are used to select the third city from among the remaining cities that have a link to the second city. This process is continued until either every city is included once in the tour (plus a return to the home base city from the last city) or a dead end is reached because there is no link from the current city to any of the remaining cities that still need to be visited. In the latter case, the entire process for generating a trial solution is restarted from the beginning with new random numbers.

Random numbers also are used to reproduce children from a pair of parents. To illustrate this process, consider the following pair of parents:

P1: 1-2-3-4-5-6-7-1
P2: 1-2-4-6-5-7-3-1

As we describe the process of generating a child from these parents, we also summarize the results in Table 14.8 to help you follow the progression.
■ TABLE 14.8 Illustration of the process of generating a child for the traveling salesman problem example

Parent P1: 1-2-3-4-5-6-7-1
Parent P2: 1-2-4-6-5-7-3-1

Link   Options                Random Selection   Tour
1      1-2, 1-7, 1-2, 1-3     1-2                1-2
2      2-3, 2-4               2-4                1-2-4
3      4-3, 4-5, 4-6          4-3                1-2-4-3
4      3-5*, 3-7              3-5*               1-2-4-3-5
5      5-6, 5-6, 5-7          5-6                1-2-4-3-5-6
6      6-7                    6-7                1-2-4-3-5-6-7
7      7-1                    7-1                1-2-4-3-5-6-7-1

*A link that completes a sub-tour reversal

Ignoring the possibility of mutations for the time being, here is the main idea for how to generate a child.

Inheriting Links: Genes correspond to the links in a tour. Therefore, each of the links (genes) inherited by a child should come from one parent or the other (or both). (One other possibility described later is that a parent also can pass down a sub-tour reversal.) These links being inherited are randomly selected one at a time until a complete tour (the child) has been generated.

To start this process with the above parents, since a tour must begin in city 1, a child’s initial link must come from one of the parents’ links that connect city 1 to another city. For parent P1, these are links 1-2 and 1-7. (Link 1-7 qualifies since a tour can be traveled in either direction.) For parent P2, the corresponding links are 1-2 (again) and 1-3. The fact that both parents have link 1-2 doubles the probability that it will be inherited by a child. Therefore, when using a random number to determine which link the child will inherit, the interval 0.0000–0.4999 (or any interval of this size) corresponds to inheriting link 1-2, whereas the intervals 0.5000–0.7499 and 0.7500–0.9999 then would correspond to the choice of link 1-7 and link 1-3, respectively. Suppose 1-2 is selected, as shown in the first row of Table 14.8.

After 1-2, one parent next uses link 2-3 whereas the other uses 2-4. Therefore, in generating the child, a random choice should be made between these two options. Suppose 2-4 is selected.
(See the second row of Table 14.8.) There now are three options for the link to follow 1-2-4, because the first parent uses two links (4-3 and 4-5) to connect city 4 in its tour and the second parent uses link 4-6 (link 4-2 is ignored because city 2 already is in the child’s tour). When randomly selecting one of these options, suppose 4-3 is chosen to form 1-2-4-3 as the beginning of the child’s tour thus far, as shown in the third row of Table 14.8.

We now come to an additional feature of this process for generating a child’s tour, namely, using a sub-tour reversal from a parent.

Inheriting a Sub-Tour Reversal: One other possibility for a link inherited by a child is a link that is needed to complete a sub-tour reversal that the child’s tour is making in a portion of a parent’s tour.

To illustrate how this possibility can arise, note that the next city beyond 1-2-4-3 needs to be one of the cities not yet visited (city 5, 6, or 7), but the first parent does not have a link from city 3 to any of these other cities. The reason is that the child is using a sub-tour reversal (reversing 3-4) of this parent’s tour, 1-2-3-4-5-6-7-1. Completing this sub-tour reversal requires adding the link 3-5, so this becomes one of the options for the next link in the child’s tour. The other option is link 3-7 provided by the second parent (link 3-1 is not an option because city 1 must come at the very end of the tour). One of these two options is selected randomly. Suppose the choice is link 3-5, which provides 1-2-4-3-5 as the child’s tour thus far, as shown in the fourth row of Table 14.8. To continue this tour, the options for the next link are 5-6 (provided by both parents) and 5-7 (provided by the second parent). Suppose that the random choice among 5-6, 5-6, and 5-7 is 5-6, so that the tour thus far is 1-2-4-3-5-6. (See the fifth row of Table 14.8.)
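A sub-tour reversal itself is a simple list operation, which may help make the inheritance mechanism above concrete. This helper is our own sketch (the name is not from the book or IOR Tutorial):

```python
def reverse_subtour(tour, i, j):
    """Reverse the order of the cities in positions i..j of a tour, where a tour
    is a list of cities beginning and ending at the home base city."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

parent = [1, 2, 3, 4, 5, 6, 7, 1]
child = reverse_subtour(parent, 2, 3)   # reverse the 3-4 portion of the tour
```

Reversing the 3-4 portion of the first parent’s tour yields 1-2-4-3-5-6-7-1, exactly the partial structure the child is building in Table 14.8.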
Since the only city not yet visited is city 7, link 6-7 is automatically added next, followed by link 7-1 to return to home base. Thus, as shown in the last row of Table 14.8, the complete tour for the child is

C1: 1-2-4-3-5-6-7-1

Figure 14.5 in Sec. 14.1 displays how closely this child resembles the first parent, since the only difference is the sub-tour reversal obtained by reversing 3-4 in the parent.

If link 5-7 had been chosen instead to follow 1-2-4-3-5, the tour would have been completed automatically as 1-2-4-3-5-7-6-1. However, there is no link 6-1 (see Fig. 14.4), so a dead end is reached at city 6. When this happens, a miscarriage occurs and the entire process needs to be restarted from the beginning with new random numbers until a child with a complete tour is obtained. Then this process is repeated to obtain the second child.

We now need to add one more feature—the possibility of mutations—to complete the description of the process of generating children.

Mutations of Inherited Links: Whenever a particular link normally would be inherited from a parent of a child, there is a small possibility that a mutation will occur that will reject that link and instead randomly select one of the other links from the current city to another city not already on the tour, regardless of whether that link is used by either parent.

Our genetic algorithm for traveling salesman problems implemented in your IOR Tutorial uses a probability of 0.1 that a mutation will occur each time the next link in the child’s tour needs to be selected. Thus, whenever the corresponding random number is less than 0.1000, the choice of the link made in the normal manner described above is rejected (if any other possible choice exists). Instead, all the other links from the current city to a city not already in the tour (including links not provided by either parent) are identified, and one of these links is randomly selected to be the next link in the tour.
For example, suppose that a mutation occurs when generating the very first link for the child. Even though 1-2 had been the random choice as the first link, this link now would be rejected because of the mutation. Since city 1 also has links to cities 3 and 7 (see Fig. 14.4), either link 1-3 or link 1-7 would be randomly selected to be the first link. (Since the parents end their tours by using one or the other of these links, this can be viewed in this case as starting the child’s tour by reversing the direction of one of the parents’ tours.)

We now can outline the general procedure for generating a child from a pair of parents.

Procedure for Generating a Child

1. Initialization: To start, designate the home base city as the current city.
2. Options for the next link: Identify all the links from the current city to another city not already in the child’s tour that are used by either parent in either direction. Also, add any link that is needed to complete a sub-tour reversal that the child’s tour is making in a portion of a parent’s tour.
3. Selection of the next link: Use a random number to randomly select one of the options identified in step 2.
4. Check for a mutation: If the next random number is less than 0.1000, a mutation occurs and the link selected in step 3 is rejected (unless there is no other link from the current city to another city not already in the tour). If the link is rejected, identify all the other links from the current city to another city not already in the tour (including links not used by either parent). Use a random number to randomly select one of these other links.
5. Continuation: Add the link selected in step 3 (if no mutation occurs) or in step 4 (if a mutation occurs) to the end of the child’s current incomplete tour and redesignate the city at the end of this link as the current city.
If there still remains more than one city not included on the tour (plus the return to the home base city), return to steps 2–4 to select the next link. Otherwise, go to step 6. 6. Completion: With only one city remaining that has not yet been added to the child’s tour, add the link from the current city to this remaining city. Then add the link from this last city back to the home base city to complete the tour for the child. However, if the needed link does not exist, a miscarriage occurs and the procedure must restart again from step 1. This procedure is applied for each pair of parents to obtain each of their two children. The genetic algorithm for traveling salesman problems in your IOR Tutorial incorporates this procedure for generating children as part of the overall algorithm outlined near the beginning of this section. Table 14.9 shows the results from applying this algorithm to the example through the initialization step and the first iteration of the overall algorithm. Because of the randomness built into the algorithm, its intermediate results (and perhaps the final best solution as well) will vary each time the algorithm is run to its completion. (To explore this further, Prob. 14.4-7 asks you to use your IOR Tutorial to apply the complete algorithm to this example several times.) The fact that the example has only a relatively small number of distinct feasible solutions is reflected in the results shown in Table 14.9. Members 1, 4, 6, and 10 are identical, as are members 2, 7, and 9 (except that member 2 takes its tour in the reverse direction). Therefore, the random generation of the 10 members of the initial population resulted in only five distinct feasible solutions. Similarly, four of the six children generated (members 12, 14, 15, and 16) are identical to one of its parents (except that member 14 takes its tour in the opposite direction of its first parent). 
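The feasibility test behind step 6 of the procedure (and behind a miscarriage) can be sketched as follows. The link set below is an assumption about Fig. 14.4, inferred from the tours that appear in this section and from the statement that link 6-1 does not exist; the figure itself is not reproduced here:

```python
# Undirected links of the example network (an assumption inferred from the text).
LINKS = {frozenset(e) for e in [(1, 2), (1, 3), (1, 7), (2, 3), (2, 4), (3, 4),
                                (3, 5), (3, 7), (4, 5), (4, 6), (5, 6), (5, 7),
                                (6, 7)]}

def is_feasible_tour(tour, n_cities=7):
    """A tour is feasible if it starts and ends at city 1, visits every other
    city exactly once, and every consecutive pair of cities is joined by a link."""
    if tour[0] != 1 or tour[-1] != 1:
        return False
    if sorted(tour[1:-1]) != list(range(2, n_cities + 1)):
        return False
    return all(frozenset(pair) in LINKS for pair in zip(tour, tour[1:]))

# The child generated in Table 14.8 is feasible, but following 1-2-4-3-5 with
# link 5-7 dead-ends at city 6, since there is no link 6-1 (a miscarriage).
is_feasible_tour([1, 2, 4, 3, 5, 6, 7, 1])   # True
is_feasible_tour([1, 2, 4, 3, 5, 7, 6, 1])   # False
```

A check of this kind is what makes the restart in step 6 necessary: the permutation can be completed while the final link back to home base is missing.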
Two of the children (members 12 and 15) have a better fitness (shorter distance) than one of their parents, but neither improved upon both of their parents.

■ TABLE 14.9 One application of the genetic algorithm in IOR Tutorial to the traveling salesman problem example through (a) the initialization step and (b) iteration 1

(a)
Member   Initial Population   Distance
 1       1-2-4-6-5-3-7-1      64
 2       1-2-3-5-4-6-7-1      65
 3       1-7-5-6-4-2-3-1      65
 4       1-2-4-6-5-3-7-1      64
 5       1-3-7-5-6-4-2-1      66
 6       1-2-4-6-5-3-7-1      64
 7       1-7-6-4-5-3-2-1      65
 8       1-3-7-6-5-4-2-1      69
 9       1-7-6-4-5-3-2-1      65
10       1-2-4-6-5-3-7-1      64

(b)
Member   Parents           Children          Member   Distance
 1       1-2-4-6-5-3-7-1   1-2-4-5-6-7-3-1   11       69
 7       1-7-6-4-5-3-2-1   1-2-4-6-5-3-7-1   12       64
 2       1-2-3-5-4-6-7-1   1-2-4-5-6-7-3-1   13       69
 6       1-2-4-6-5-3-7-1   1-7-6-4-5-3-2-1   14       65
 4       1-2-4-6-5-3-7-1   1-2-4-6-5-3-7-1   15       64
 5       1-3-7-5-6-4-2-1   1-3-7-5-6-4-2-1   16       66

None of these children provide an optimal solution (which has a distance of 63). This illustrates the fact that a genetic algorithm may require many generations (iterations) on some problems before the survival-of-the-fittest phenomenon results in clearly superior populations.

The Solved Examples section of the book’s website provides another example of applying this genetic algorithm to a traveling salesman problem. This problem has a somewhat larger number of distinct feasible solutions than the above example, so there is a greater diversity in its initial population, the resulting parents, and their children.

Genetic algorithms are well suited for dealing with the traveling salesman problem, and good progress has been made on developing considerably more sophisticated versions than the one described above. In fact, at the time of this writing, a particularly powerful version that has successfully obtained high-quality solutions for problems with up to 200,000 cities (!)
has just been announced.²

■ 14.5 CONCLUSIONS

Some optimization problems (including various combinatorial optimization problems) are sufficiently complex that it may not be possible to solve for an optimal solution with the kinds of exact algorithms presented in previous chapters. In such cases, heuristic methods are commonly used to search for a good (but not necessarily optimal) feasible solution. Several metaheuristics are available that provide a general structure and strategy guidelines for designing a specific heuristic method to fit a particular problem. A key feature of these metaheuristic procedures is their ability to escape from local optima and perform a robust search of a feasible region.

This chapter has introduced three prominent types of metaheuristics. Tabu search moves from the current trial solution to the best neighboring trial solution at each iteration, much like a local improvement procedure, except that it allows a nonimproving move when an improving move is not available. It then incorporates short-term memory of the past search to encourage moving toward new parts of the feasible region rather than cycling back to previously considered solutions. In addition, it may employ intensification and diversification strategies based on long-term memory to focus the search on promising continuations.

Simulated annealing also moves from the current trial solution to a neighboring trial solution at each iteration while occasionally allowing nonimproving moves. However, it selects the neighboring trial solution randomly and then uses the analogy to a physical annealing process to determine if this neighbor should be rejected as the next trial solution if it is not as good as the current trial solution.

The third type of metaheuristic, genetic algorithms, works with an entire population of trial solutions at each iteration.
It then uses the analogy to the biological theory of evolution, including the concept of survival of the fittest, to discard some of the trial solutions (especially the poorer ones) and replace them by some new ones. This replacement process has pairs of surviving members of the population pass on some of their features to pairs of new members just as if they were parents reproducing children.

For the sake of concreteness, we have described one basic algorithm for each metaheuristic and then adapted this algorithm to two specific types of problems (including the traveling salesman problem), using simple examples. However, many variations of each algorithm also have been developed by researchers and used by practitioners to better fit the characteristics of the complex problems being addressed. For example, literally dozens of variations of the basic genetic algorithm for traveling salesman problems presented in Sec. 14.4 (including different procedures for generating children) have been proposed, and research is continuing to determine what is most effective. (Some of the best methods for traveling salesman problems use special “k-opt” and “ejection chain” strategies that are carefully tailored to take advantage of the problem structure.) Therefore, the important lessons from this chapter are the basic concepts and intuition incorporated into each metaheuristic rather than the details of the particular algorithms presented here.

There are several other important types of metaheuristics in addition to the three that are featured in this chapter. These include, for example, ant colony optimization, scatter search, and artificial neural networks. (These suggestive names give a hint of the key idea that drives each of these metaheuristics.)

²Nagata, Y., and S. Kobayashi: “A Powerful Genetic Algorithm Using Edge Assembly Crossover for the Traveling Salesman Problem,” INFORMS Journal on Computing, 25(2): 346–369, Spring 2013.
Selected Reference 3 provides thorough coverage of both these other metaheuristics and the three presented here. Some heuristic algorithms actually are a hybrid of different types of metaheuristics in order to combine their better features. For example, short-term tabu search (without a diversification component) is very good at finding local optima but not as good at thoroughly exploring the various parts of a feasible region to find the part containing the global optimum, whereas a genetic algorithm has the opposite characteristics. Therefore, an improved algorithm sometimes can be obtained by beginning with a genetic algorithm to try to find the tallest hills (when the objective is maximization) and then switching to a basic tabu search at the very end to climb quickly to the top of these hills. The key to designing an effective heuristic algorithm is to incorporate whatever ideas work best for the problem at hand rather than adhering rigidly to the philosophy of a particular metaheuristic.

■ SELECTED REFERENCES
1. Coello, C., D. A. Van Veldhuizen, and G. B. Lamont: Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers (now Springer), Boston, MA, 2002.
2. Gen, M., and R. Cheng: Genetic Algorithms and Engineering Optimization, Wiley, New York, 2000.
3. Gendreau, M., and J.-Y. Potvin (eds.): Handbook of Metaheuristics, 2nd ed., Springer, New York, 2010.
4. Glover, F.: "Tabu Search: A Tutorial," Interfaces, 20(4): 74–94, July–August 1990.
5. Glover, F., and M. Laguna: Tabu Search, Kluwer Academic Publishers (now Springer), Boston, MA, 1997.
6. Gutin, G., and A. Punnen (eds.): The Traveling Salesman Problem and Its Variations, Kluwer Academic Publishers (now Springer), Boston, MA, 2002.
7. Haupt, R. L., and S. E. Haupt: Practical Genetic Algorithms, Wiley, Hoboken, NJ, 1998.
8. Michalewicz, Z., and D. B. Fogel: How To Solve It: Modern Heuristics, Springer, Berlin, 2002.
9. Mitchell, M.: An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1998.
10. Reeves, C. R.: "Genetic Algorithms for the Operations Researcher," INFORMS Journal on Computing, 9: 231–250, 1997. (Also see pp. 251–265 for commentaries on this feature article.)
11. Sarker, R., M. Mohammadian, and X. Yao (eds.): Evolutionary Optimization, Kluwer Academic Publishers (now Springer), Boston, MA, 2002.
12. Talbi, E.: Metaheuristics: From Design to Implementation, Wiley, Hoboken, NJ, 2009.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)
Solved Examples: Examples for Chapter 14
Automatic Procedures in IOR Tutorial:
Tabu Search Algorithm for Traveling Salesman Problems
Simulated Annealing Algorithm for Traveling Salesman Problems
Simulated Annealing Algorithm for Nonlinear Programming Problems
Genetic Algorithm for Integer Nonlinear Programming Problems
Genetic Algorithm for Traveling Salesman Problems
Glossary for Chapter 14
See Appendix 1 for documentation of the software.

■ PROBLEMS
The symbol A to the left of some of the problems (or their parts) has the following meaning:
A: You should use the corresponding automatic procedure in IOR Tutorial. The printout will record the results obtained at each iteration.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

Instructions for Obtaining Random Numbers: For each problem or its part where random numbers are needed, obtain them from the consecutive random digits in Table 20.3 in Sec. 20.3 as follows. Start from the front of the top row of the table and form five-digit random numbers by placing a decimal point in front of each group of five random digits (0.09656, 0.96657, etc.) in the order that you need random numbers. Always restart from the front of the top row for each new problem or its part.

14.1-1. Consider the traveling salesman problem shown below, where city 1 is the home city.
[Network figure: five cities with the distance labeled on each link.]
(a) List all the possible tours, except exclude those that are simply the reverse of previously listed tours. Calculate the distance of each of these tours and thereby identify the optimal tour.
(b) Starting with 1-2-3-4-5-1 as the initial trial solution, apply the sub-tour reversal algorithm to this problem.
(c) Apply the sub-tour reversal algorithm to this problem when starting with 1-2-4-3-5-1 as the initial trial solution.
(d) Apply the sub-tour reversal algorithm to this problem when starting with 1-4-2-3-5-1 as the initial trial solution.

14.1-2. Reconsider the example of a traveling salesman problem shown in Fig. 14.4.
(a) When the sub-tour reversal algorithm was applied to this problem in Sec. 14.1, the first iteration resulted in a tie for which of two sub-tour reversals (reversing 3-4 or 4-5) provided the largest decrease in the distance of the tour, so the tie was broken arbitrarily in favor of the first reversal. Determine what would have happened if the second of these reversals (reversing 4-5) had been chosen instead.
(b) Apply the sub-tour reversal algorithm to this problem when starting with 1-2-4-5-6-7-3-1 as the initial trial solution.

14.1-3. Consider the traveling salesman problem shown below, where city 1 is the home city.
[Network figure: six cities with the distance labeled on each link.]
(a) List all the possible tours, except exclude those that are simply the reverse of previously listed tours. Calculate the distance of each of these tours and thereby identify the optimal solution.
(b) Starting with 1-2-3-4-5-6-1 as the initial trial solution, apply the sub-tour reversal algorithm to this problem.
(c) Apply the sub-tour reversal algorithm to this problem when starting with 1-2-5-4-3-6-1 as the initial trial solution.

14.2-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 14.2.
Briefly describe how tabu search was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

14.2-2.* Consider the minimum spanning tree problem depicted below, where the dashed lines represent the potential links that could be inserted into the network and the number next to each dashed line represents the cost associated with inserting that particular link.
[Network figure: nodes A, B, C, D, and E, with dashed lines showing the potential links and the insertion cost next to each one.]
This problem also has the following two constraints:
Constraint 1: No more than one of the three links—AB, BC, and AE—can be included.
Constraint 2: Link AB can be included only if link BD also is included.
Starting with the initial trial solution where the inserted links are AB, AC, AE, and CD, apply the basic tabu search algorithm presented in Sec. 14.2 to this problem.

14.2-3. Reconsider the example of a constrained minimum spanning tree problem presented in Sec. 14.2 (see Fig. 14.7(a) for the data before introducing the constraints). Starting with a different initial trial solution, namely, the one with links AB, AD, BE, and CD, apply the basic tabu search algorithm again to this problem.

14.2-4. Reconsider the example of an unconstrained minimum spanning tree problem given in Sec. 10.4. Suppose that the following constraints are added to the problem:
Constraint 1: Either link AD or link ET must be included.
Constraint 2: At most one of the three links—AO, BC, and DE—can be included.
Starting with the optimal solution for the unconstrained problem given at the end of Sec. 10.4 as the initial trial solution, apply the basic tabu search algorithm to this problem.

14.2-5. Reconsider the traveling salesman problem shown in Prob. 14.1-1. Starting with 1-2-4-3-5-1 as the initial trial solution, apply the basic tabu search algorithm by hand to this problem.

A 14.2-6. Consider the 8-city traveling salesman problem whose links have the associated distances shown in the following table (where a dash indicates the absence of a link).

City |  1    2    3    4    5    6    7
  2  | 14
  3  | 15   13
  4  |  —   14   11
  5  |  —   20   21   11
  6  |  —    —   17   10   15
  7  |  —    —    9    8   18    9
  8  | 17   21    9   20    —    —   13

City 1 is the home city. Starting with each of the initial trial solutions listed below, apply the basic tabu search algorithm in your IOR Tutorial to this problem. In each case, count the number of times that the algorithm makes a nonimproving move. Also point out any tabu moves that are made anyway because they result in the best trial solution found so far.
(a) Use 1-2-3-4-5-6-7-8-1 as the initial trial solution.
(b) Use 1-2-5-6-7-4-8-3-1 as the initial trial solution.
(c) Use 1-3-2-5-6-4-7-8-1 as the initial trial solution.

A 14.2-7. Consider the 10-city traveling salesman problem whose links have the associated distances shown in the following table.

City |  1    2    3    4    5    6    7    8    9
  2  | 13
  3  | 25   26
  4  | 15   21   11
  5  | 21   29   18   10
  6  |  9   21   23   13   12
  7  | 19   31   28   19   11   10
  8  | 18   23   44   34   37   25   32
  9  |  8   16   34   24   27   14   23   10
 10  | 15   10   35   29   36   25   35   16   14

City 1 is the home city. Starting with each of the initial trial solutions listed below, apply the basic tabu search algorithm in your IOR Tutorial to this problem. In each case, count the number of times that the algorithm makes a nonimproving move. Also point out any tabu moves that are made anyway because they result in the best trial solution found so far.
(a) Use 1-2-3-4-5-6-7-8-9-10-1 as the initial trial solution.
(b) Use 1-3-4-5-7-6-9-8-10-2-1 as the initial trial solution.
(c) Use 1-9-8-10-2-4-3-6-7-5-1 as the initial trial solution.

14.3-1. While applying a simulated annealing algorithm to a certain problem, you have come to an iteration where the current value of T is T = 2 and the value of the objective function for the current trial solution is 30.
This trial solution has four immediate neighbors and their objective function values are 29, 34, 31, and 24. For each of these four immediate neighbors in turn, you wish to determine the probability that the move selection rule would accept this immediate neighbor if it is randomly selected to become the current candidate to be the next trial solution.
(a) Determine this probability for each of the immediate neighbors when the objective is maximization of the objective function.
(b) Determine this probability for each of the immediate neighbors when the objective is minimization of the objective function.

A 14.3-2. Because of its use of random numbers, a simulated annealing algorithm will provide slightly different results each time it is run. Table 14.5 shows one application of the basic simulated annealing algorithm in IOR Tutorial to the example of a traveling salesman problem depicted in Fig. 14.4. Starting with the same initial trial solution (1-2-3-4-5-6-7-1), use your IOR Tutorial to apply this same algorithm to this same example five more times. How many times does it again find the optimal solution (1-3-5-7-6-4-2-1 or, equivalently, 1-2-4-6-7-5-3-1)?

14.3-3. Reconsider the traveling salesman problem shown in Prob. 14.1-1. Using 1-2-3-4-5-1 as the initial trial solution, you are to follow the instructions below for applying the basic simulated annealing algorithm presented in Sec. 14.3 to this problem.
(a) Perform the first iteration by hand. Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers. Show your work, including the use of the random numbers.
A (b) Use your IOR Tutorial to apply this algorithm. Observe the progress of the algorithm and record for each iteration how many (if any) candidates to be the next trial solution are rejected before one is accepted. Also count the number of iterations where a nonimproving move is accepted.

A 14.3-4. Follow the instructions of Prob. 14.3-3 for the traveling salesman problem described in Prob. 14.2-6, using 1-2-3-4-5-6-7-8-1 as the initial trial solution.

A 14.3-5. Follow the instructions of Prob. 14.3-3 for the traveling salesman problem described in Prob. 14.2-7, using 1-9-8-10-2-4-3-6-7-5-1 as the initial trial solution.

A 14.3-6. Because of its use of random numbers, a simulated annealing algorithm will provide slightly different results each time it is run. Table 14.6 shows one application of the basic simulated annealing algorithm in IOR Tutorial to the nonlinear programming example introduced in Sec. 14.1. Starting with the same initial trial solution (x = 15.5), use your IOR Tutorial to apply this same algorithm to this same example five more times. What is the best solution found in these five applications? Is it closer to the optimal solution (x = 20 with f(x) = 4,400,000) than the best solution shown in Table 14.6?

14.3-7. Consider the following nonconvex programming problem:

Maximize f(x) = x^3 − 60x^2 + 900x + 100,

subject to

0 ≤ x ≤ 31.

(a) Use the first and second derivatives of f(x) to determine the critical points (along with the end points of the feasible region) where x is either a local maximum or a local minimum.
(b) Roughly plot the graph of f(x) by hand over the feasible region.
(c) Using x = 15.5 as the initial trial solution, perform the first iteration of the basic simulated annealing algorithm presented in Sec. 14.3 by hand. Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers. Show your work, including the use of the random numbers.
A (d) Use your IOR Tutorial to apply this algorithm, starting with x = 15.5 as the initial trial solution. Observe the progress of the algorithm and record for each iteration how many (if any) candidates to be the next trial solution are rejected before one is accepted. Also count the number of iterations where a nonimproving move is accepted.

14.3-8.
Consider the example of a nonconvex programming problem presented in Sec. 13.10 and depicted in Fig. 13.18.
(a) Using x = 2.5 as the initial trial solution, perform the first iteration of the basic simulated annealing algorithm presented in Sec. 14.3 by hand. Follow the instructions given at the beginning of the Problems section to obtain the random numbers. Show your work, including the use of the random numbers.
A (b) Use your IOR Tutorial to apply this algorithm, starting with x = 2.5 as the initial trial solution. Observe the progress of the algorithm and record for each iteration how many (if any) candidates to be the next trial solution are rejected before one is accepted. Also count the number of iterations where a nonimproving move is accepted.

A 14.3-9. Follow the instructions of Prob. 14.3-8 for the following nonconvex programming problem when starting with x = 25 as the initial trial solution:

Maximize f(x) = x^6 − 136x^5 + 6800x^4 − 155,000x^3 + 1,570,000x^2 − 5,000,000x,

subject to

0 ≤ x ≤ 50.

A 14.3-10. Follow the instructions of Prob. 14.3-8 for the following nonconvex programming problem when starting with (x1, x2) = (18, 25) as the initial trial solution:

Maximize f(x1, x2) = x1^5 − 81x1^4 + 2330x1^3 − 28,750x1^2 + 150,000x1 − 0.5x2^5 + 65x2^4 − 2950x2^3 + 53,500x2^2 − 305,000x2,

subject to

x1 + 2x2 ≤ 110
3x1 + x2 ≤ 120

and

0 ≤ x1 ≤ 36, 0 ≤ x2 ≤ 50.

14.4-1. For each of the following pairs of parents, generate their two children when applying the basic genetic algorithm presented in Sec. 14.4 to an integer nonlinear programming problem involving only a single variable x, which is restricted to integer values over the interval 0 ≤ x ≤ 63. (Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers, and then show your use of these random numbers.)
(a) The parents are 010011 and 100101.
(b) The parents are 000010 and 001101.
(c) The parents are 100000 and 101000.
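The basic genetic algorithm of Sec. 14.4 generates children by combining genes from the two parents and occasionally mutating a gene, with every choice driven by the random numbers obtained as instructed above. The sketch below is only a generic illustration of these two operators (single-point crossover and bitwise mutation) for the 6-bit encodings used in Prob. 14.4-1; the crossover point and the 0.1 mutation rate are illustrative assumptions, not the book's exact rules.

```python
import random

def crossover(p1: str, p2: str, point: int) -> tuple:
    """Single-point crossover: the two children swap tails after `point`."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits: str, rate: float, rng: random.Random) -> str:
    """Flip each bit independently with probability `rate`."""
    return "".join("10"[int(b)] if rng.random() < rate else b for b in bits)

# Parents from part (a), crossed over after the third gene (an arbitrary choice):
c1, c2 = crossover("010011", "100101", point=3)
print(c1, "decodes to x =", int(c1, 2))   # 010101 decodes to x = 21
print(c2, "decodes to x =", int(c2, 2))   # 100011 decodes to x = 35

# A mutation pass over child 1 (at rate 0.1 it usually changes nothing):
rng = random.Random(0)
print(mutate(c1, rate=0.1, rng=rng))
```

Note that this bit-string style of crossover does not carry over to the traveling salesman problems (Prob. 14.4-2), where a child must still visit every city exactly once, so children there are built tour by tour from the links of the parents instead.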
14.4-2.* Consider an 8-city traveling salesman problem (cities 1, 2, . . . , 8) where city 1 is the home city and links exist between all pairs of cities. For each of the following pairs of parents, generate their two children when applying the basic genetic algorithm presented in Sec. 14.4. (Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers, and then show your use of these random numbers.)
(a) The parents are 1-2-3-4-7-6-5-8-1 and 1-5-3-6-7-8-2-4-1.
(b) The parents are 1-6-4-7-3-8-2-5-1 and 1-2-5-3-6-8-4-7-1.
(c) The parents are 1-5-7-4-6-2-3-8-1 and 1-3-7-2-5-6-8-4-1.

A 14.4-3. Table 14.7 shows the application of the basic genetic algorithm described in Sec. 14.4 to an integer nonlinear programming example through the initialization step and the first iteration.
(a) Use your IOR Tutorial to apply this same algorithm to this same example, starting from another randomly selected initial population and proceeding to the end of the algorithm. Does this application again obtain the optimal solution (x = 20), just as was found during the first iteration in Table 14.7?
(b) Because of its use of random numbers, a genetic algorithm will provide slightly different results each time it is run. Use your IOR Tutorial to apply the basic genetic algorithm described in Sec. 14.4 to this same example five more times. How many times does it again find the optimal solution (x = 20)?

14.4-4. Reconsider the nonconvex programming problem shown in Prob. 14.3-7. Suppose now that the variable x is restricted to be an integer.
(a) Perform the initialization step and the first iteration of the basic genetic algorithm presented in Sec. 14.4 by hand. Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers. Show your work, including the use of the random numbers.
A (b) Use your IOR Tutorial to apply this algorithm.
Observe the progress of the algorithm and record the number of times that a pair of parents gives birth to a child whose fitness is better than for both parents. Also count the number of iterations where the best solution found is better than any previously found.

A 14.4-5. Follow the instructions of Prob. 14.4-4 for the nonconvex programming problem shown in Prob. 14.3-9 when the variable x is restricted to be an integer.

A 14.4-6. Follow the instructions of Prob. 14.4-4 for the nonconvex programming problem shown in Prob. 14.3-10 when both of the variables x1 and x2 are restricted to be integer.

14.4-7. Table 14.9 shows the application of the basic genetic algorithm described in Sec. 14.4 to the example of a traveling salesman problem depicted in Fig. 14.4 through the initialization step and first iteration of the algorithm.
(a) Use your IOR Tutorial to apply this same algorithm to this same example, starting from another randomly selected initial population and proceeding to the end of the algorithm. Does this application find the optimal solution (1-3-5-7-6-4-2-1 or, equivalently, 1-2-4-6-7-5-3-1)?
(b) Because of its use of random numbers, a genetic algorithm will provide slightly different results each time it is run. Use your IOR Tutorial to apply the basic genetic algorithm described in Sec. 14.4 to this same example five more times. How many times does it find the optimal solution?

A 14.4-8. Reconsider the traveling salesman problem shown in Prob. 14.1-1.
(a) Perform the initialization step and the first iteration of the basic genetic algorithm presented in Sec. 14.4 by hand. Follow the instructions given at the beginning of the Problems section to obtain the needed random numbers. Show your work, including the use of the random numbers.
A (b) Use your IOR Tutorial to apply this algorithm. Observe the progress of the algorithm and record the number of times that a pair of parents gives birth to a child whose tour has a shorter distance than for both parents.
Also count the number of iterations where the best solution found has a shorter distance than any previously found.

A 14.4-9. Follow the instructions of Prob. 14.4-8 for the traveling salesman problem described in Prob. 14.2-6.

A 14.4-10. Follow the instructions of Prob. 14.4-8 for the traveling salesman problem described in Prob. 14.2-7.

14.4-11. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 14.4. Briefly describe how a genetic algorithm was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

A 14.5-1. Use your IOR Tutorial to apply the basic algorithm for all three metaheuristics presented in this chapter to the traveling salesman problem described in Prob. 14.2-6. (Use 1-2-3-4-5-6-7-8-1 as the initial trial solution for the tabu search and simulated annealing algorithms.) Which metaheuristic happened to provide the best solution on this particular problem?

A 14.5-2. Use your IOR Tutorial to apply the basic algorithm for all three metaheuristics presented in this chapter to the traveling salesman problem described in Prob. 14.2-7. (Use 1-2-3-4-5-6-7-8-9-10-1 as the initial trial solution for the tabu search and simulated annealing algorithms.) Which metaheuristic happened to provide the best solution on this particular problem?

CHAPTER 15
Game Theory

Life is full of conflict and competition. Numerous examples involving adversaries in conflict include parlor games, military battles, political campaigns, advertising and marketing campaigns by competing business firms, and so forth. A basic feature in many of these situations is that the final outcome depends primarily upon the combination of strategies selected by the adversaries. Game theory is a mathematical theory that deals with the general features of competitive situations like these in a formal, abstract way.
It places particular emphasis on the decision-making processes of the adversaries. Because competitive situations are so ubiquitous, game theory has applications in a variety of areas, including business and economics. For example, Selected Reference 2 presents various business applications of game theory.
The 1994 Nobel Prize for Economic Sciences was won by John F. Nash, Jr. (whose story is told in the movie A Beautiful Mind), John C. Harsanyi, and Reinhard Selten for their analysis of equilibria in the theory of noncooperative games. Then Robert J. Aumann and Thomas C. Schelling won the 2005 Nobel Prize for Economic Sciences for enhancing our understanding of conflict and cooperation through game-theory analysis.
As briefly surveyed in Sec. 15.6, research on game theory continues to delve into rather complicated types of competitive situations. However, the focus in this chapter is on the simplest case, called two-person, zero-sum games. As the name implies, these games involve only two adversaries or players (who may be armies, teams, firms, and so on). They are called zero-sum games because one player wins whatever the other one loses, so that the sum of their net winnings is zero.
Section 15.1 introduces the basic model for two-person, zero-sum games, and the next four sections describe and illustrate different approaches to solving such games. The chapter concludes by mentioning some other kinds of competitive situations that are dealt with by other branches of game theory.

■ 15.1 THE FORMULATION OF TWO-PERSON, ZERO-SUM GAMES

To illustrate the basic characteristics of two-person, zero-sum games, consider the game called odds and evens. This game consists simply of each player simultaneously showing either one finger or two fingers.
If the number of fingers matches, so that the total number for both players is even, then the player taking evens (say, player 1) wins the bet (say, $1) from the player taking odds (player 2). If the number does not match, player 1 pays $1 to player 2. Thus, each player has two strategies: to show either one finger or two fingers. The resulting payoff to player 1 in dollars is shown in the payoff table given in Table 15.1.

■ TABLE 15.1 Payoff table for the odds and evens game

                        Player 2
               Strategy:   1    2
Player 1   1               1   −1
           2              −1    1

In general, a two-person game is characterized by
1. The strategies of player 1.
2. The strategies of player 2.
3. The payoff table.
Before the game begins, each player knows the strategies she or he has available, the ones the opponent has available, and the payoff table. The actual play of the game consists of each player simultaneously choosing a strategy without knowing the opponent's choice.
A strategy may involve only a simple action, such as showing a certain number of fingers in the odds and evens game. On the other hand, in more complicated games involving a series of moves, a strategy is a predetermined rule that specifies completely how one intends to respond to each possible circumstance at each stage of the game. For example, a strategy for one side in chess would indicate how to make the next move for every possible position on the board, so the total number of possible strategies would be astronomical. Applications of game theory normally involve far less complicated competitive situations than chess does, but the strategies involved can be fairly complex.
The payoff table shows the gain (positive or negative) for player 1 that would result from each combination of strategies for the two players. It is given only for player 1 because the table for player 2 is just the negative of this one, due to the zero-sum nature of the game.
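This zero-sum bookkeeping is easy to mimic in code. The minimal sketch below (purely an illustration, not part of the text) stores Table 15.1 as a matrix of payoffs to player 1 and obtains player 2's table by negation:

```python
# Payoff table for player 1 in odds and evens (Table 15.1).
# Row = player 1's strategy (show 1 or 2 fingers); column = player 2's.
payoff_p1 = [[ 1, -1],
             [-1,  1]]

# Zero-sum game: player 2's payoff table is just the negative of player 1's,
# so it never needs to be stored separately.
payoff_p2 = [[-v for v in row] for row in payoff_p1]

# Every combination of strategies sums to zero across the two players.
zero_sum = all(a + b == 0 for ra, rb in zip(payoff_p1, payoff_p2)
               for a, b in zip(ra, rb))
print(payoff_p2, zero_sum)   # [[-1, 1], [1, -1]] True
```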
The entries in the payoff table may be in any units desired, such as dollars, provided that they accurately represent the utility to player 1 of the corresponding outcome. However, utility is not necessarily proportional to the amount of money (or any other commodity) when large quantities are involved. For example, $2 million (after taxes) is probably worth much less than twice as much as $1 million to a poor person. In other words, given the choice between (1) a 50 percent chance of receiving $2 million rather than nothing and (2) being sure of getting $1 million, a poor person probably would much prefer the latter. On the other hand, the outcome corresponding to an entry of 2 in a payoff table should be "worth twice as much" to player 1 as the outcome corresponding to an entry of 1. Thus, given the choice, he or she should be indifferent between a 50 percent chance of receiving the former outcome (rather than nothing) and definitely receiving the latter outcome instead.¹
A primary objective of game theory is the development of rational criteria for selecting a strategy. Two key assumptions are made:
1. Both players are rational.
2. Both players choose their strategies solely to promote their own welfare (no compassion for the opponent).
Game theory contrasts with decision analysis (see Chap. 16), where the assumption is that the decision maker is playing a game with a passive opponent—nature—which chooses its strategies in some random fashion.
We shall develop the standard game theory criteria for choosing strategies by means of illustrative examples. In particular, the end of the next section describes how game theory says the odds and evens game should be played. (Problems 15.3-1, 15.4-1, and 15.5-1 also invite you to apply the techniques developed in this chapter to solve for the optimal way to play this game.)

¹See Sec. 16.6 for a further discussion of the concept of utility.
In addition, the next section presents a prototype example that illustrates the formulation of a two-person, zero-sum game and its solution in some simple situations. A more complicated variation of this game is then carried into Sec. 15.3 to develop a more general criterion. Sections 15.4 and 15.5 describe a graphical procedure and a linear programming formulation for solving such games.

■ 15.2 SOLVING SIMPLE GAMES—A PROTOTYPE EXAMPLE

Two politicians are running against each other for the U.S. Senate. Campaign plans must now be made for the final two days, which are expected to be crucial because of the closeness of the race. Therefore, both politicians want to spend these days campaigning in two key cities, Bigtown and Megalopolis. To avoid wasting campaign time, they plan to travel at night and spend either one full day in each city or two full days in just one of the cities. However, since the necessary arrangements must be made in advance, neither politician will learn his (or her)² opponent's campaign schedule until after he has finalized his own. Therefore, each politician has asked his campaign manager in each of these cities to assess what the impact would be (in terms of votes won or lost) from the various possible combinations of days spent there by himself and by his opponent. He then wishes to use this information to choose his best strategy on how to use these two days.

Formulation as a Two-Person, Zero-Sum Game

To formulate this problem as a two-person, zero-sum game, we must identify the two players (obviously the two politicians), the strategies for each player, and the payoff table. As the problem has been stated, each player has the following three strategies:
Strategy 1 = spend one day in each city.
Strategy 2 = spend both days in Bigtown.
Strategy 3 = spend both days in Megalopolis.
By contrast, the strategies would be more complicated in a different situation where each politician learns where his opponent will spend the first day before he finalizes his own plans for his second day. In that case, a typical strategy would be: Spend the first day in Bigtown; if the opponent also spends the first day in Bigtown, then spend the second day in Bigtown; however, if the opponent spends the first day in Megalopolis, then spend the second day in Megalopolis. There would be eight such strategies, one for each combination of the two first-day choices, the opponent's two first-day choices, and the two second-day choices.
Each entry in the payoff table for player 1 represents the utility to player 1 (or the negative utility to player 2) of the outcome resulting from the corresponding strategies used by the two players. From the politician's viewpoint, the objective is to win votes, and each additional vote (before he learns the outcome of the election) is of equal value to him. Therefore, the appropriate entries for the payoff table for politician 1 are the total net votes won from the opponent (i.e., the sum of the net vote changes in the two cities) resulting from these two days of campaigning. Using units of 1,000 votes, this formulation is summarized in Table 15.2.

■ TABLE 15.2 Form of the payoff table for politician 1 for the political campaign problem

      Total Net Votes Won by Politician 1 (in Units of 1,000 Votes)

                           Politician 2
                  Strategy:   1    2    3
Politician 1   1
               2
               3

Game theory assumes that both players are using the same formulation (including the same payoffs for player 1) for choosing their strategies. However, we should also point out that this payoff table would not be appropriate if additional information were available to the politicians.

²We use only his or only her in some examples and problems for ease of reading; we do not mean to imply that only men or only women are engaged in the various activities.
In particular, assume that they know exactly how the populace is planning to vote two days before the election, so that each politician knows exactly how many net votes (positive or negative) he needs to switch in his favor during the last two days of campaigning to win the election. Consequently, the only significance of the data prescribed by Table 15.2 would be to indicate which politician would win the election with each combination of strategies. Because the ultimate goal is to win the election and because the size of the plurality is relatively inconsequential, the utility entries in the table then should be some positive constant (say, 1) when politician 1 wins and −1 when he loses. Even if only a probability of winning can be determined for each combination of strategies, appropriate entries would be the probability of winning minus the probability of losing because they then would represent expected utilities. However, sufficiently accurate data to make such determinations usually are not available, so this example uses the thousands of total net votes won by politician 1 as the entries in the payoff table.
Using the form given in Table 15.2, we give three alternative sets of data for the payoff table to illustrate how to solve three different kinds of games.

Variation 1 of the Example

Given that Table 15.3 is the payoff table for player 1 (politician 1), which strategy should each player select?

■ TABLE 15.3 Payoff table for player 1 for variation 1 of the political campaign problem

                        Player 2
               Strategy:   1    2    3
Player 1   1               1    2    4
           2               1    0    5
           3               0    1   −1

This situation is a rather special one, where the answer can be obtained just by applying the concept of dominated strategies to rule out a succession of inferior strategies until only one choice remains. A strategy is dominated by a second strategy if the second strategy is always at least as good (and sometimes better) regardless of what the opponent does.
A dominated strategy can be eliminated immediately from further consideration.
At the outset, Table 15.3 includes no dominated strategies for player 2. However, for player 1, strategy 3 is dominated by strategy 1 because the latter has larger payoffs (1 > 0, 2 > 1, 4 > −1) regardless of what player 2 does. Eliminating strategy 3 from further consideration yields the following reduced payoff table:

          1    2    3
   1      1    2    4
   2      1    0    5

Because both players are assumed to be rational, player 2 also can deduce that player 1 has only these two strategies remaining under consideration. Therefore, player 2 now does have a dominated strategy—strategy 3, which is dominated by both strategies 1 and 2 because they always have smaller losses for player 2 (payoffs to player 1) in this reduced payoff table (for strategy 1: 1 < 4, 1 < 5; for strategy 2: 2 < 4, 0 < 5). Eliminating this strategy yields

          1    2
   1      1    2
   2      1    0

At this point, strategy 2 for player 1 becomes dominated by strategy 1 because the latter is better in column 2 (2 > 0) and equally good in column 1 (1 = 1). Eliminating the dominated strategy leads to

          1    2
   1      1    2

Strategy 2 for player 2 now is dominated by strategy 1 (1 < 2), so strategy 2 should be eliminated. Consequently, both players should select their strategy 1. Player 1 then will receive a payoff of 1 from player 2 (that is, politician 1 will gain 1,000 votes from politician 2).
If you would like to see another example of solving a game by using the concept of dominated strategies, one is provided in the Solved Examples section of the book's website.
In general, the payoff to player 1 when both players play optimally is referred to as the value of the game. A game that has a value of 0 is said to be a fair game. Since this particular game has a value of 1, it is not a fair game.
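The elimination sequence just described is mechanical enough to automate. The following sketch (an illustration, not part of the text) repeatedly deletes weakly dominated rows (player 1, who maximizes the entries) and columns (player 2, who minimizes them) and reproduces the reduction of Table 15.3; keep in mind that with weak dominance the order of elimination can matter in general.

```python
def iterated_elimination(payoff):
    """Iteratively delete (weakly) dominated strategies from a payoff table
    whose entries are payoffs to the row player (who maximizes); the column
    player minimizes.  Returns the surviving 0-based strategy indices."""
    rows = list(range(len(payoff)))
    cols = list(range(len(payoff[0])))
    changed = True
    while changed:
        changed = False
        for r in rows:
            # Row r is dominated if some other row is at least as good in
            # every remaining column and strictly better in at least one.
            if any(all(payoff[s][c] >= payoff[r][c] for c in cols) and
                   any(payoff[s][c] > payoff[r][c] for c in cols)
                   for s in rows if s != r):
                rows.remove(r)
                changed = True
                break
        if changed:
            continue
        for c in cols:
            # Column c is dominated if some other column gives the column
            # player losses that are never larger and sometimes smaller.
            if any(all(payoff[r][d] <= payoff[r][c] for r in rows) and
                   any(payoff[r][d] < payoff[r][c] for r in rows)
                   for d in cols if d != c):
                cols.remove(c)
                changed = True
                break
    return rows, cols

table_15_3 = [[1, 2, 4],
              [1, 0, 5],
              [0, 1, -1]]
rows, cols = iterated_elimination(table_15_3)
print(rows, cols)                    # [0] [0]: strategy 1 for each player
print(table_15_3[rows[0]][cols[0]])  # 1: the value of the game
```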
The concept of a dominated strategy is a very useful one for reducing the size of the payoff table that needs to be considered and, in unusual cases like this one, actually identifying the optimal solution for the game. However, most games require another approach to at least finish solving, as illustrated by the next two variations of the example.

Variation 2 of the Example

Now suppose that the current data give Table 15.4 as the payoff table for player 1 (politician 1). This game does not have dominated strategies, so it is not obvious what the players should do. What line of reasoning does game theory say they should use?
Consider player 1. By selecting strategy 1, he could win 6 or could lose as much as 3. However, because player 2 is rational and thus will seek a strategy that will protect himself from large payoffs to player 1, it seems likely that player 1 would incur a loss by playing strategy 1. Similarly, by selecting strategy 3, player 1 could win 5, but more probably his rational opponent would avoid this loss and instead administer a loss to player 1 which could be as large as 4. On the other hand, if player 1 selects strategy 2, he is guaranteed not to lose anything and he could even win something. Therefore, because it provides the best guarantee (a payoff of 0), strategy 2 seems to be a "rational" choice for player 1 against his rational opponent. (This line of reasoning assumes that both players are averse to risking larger losses than necessary, in contrast to those individuals who enjoy gambling for a large payoff against long odds.)
Now consider player 2. He could lose as much as 5 or 6 by using strategy 1 or 3, but is guaranteed at least breaking even with strategy 2. Therefore, by the same reasoning of seeking the best guarantee against a rational opponent, his apparent choice is strategy 2. If both players choose their strategy 2, the result is that both break even.
Thus, in this case, neither player improves upon his best guarantee, but both also are forcing the opponent into the same position. Even when the opponent deduces a player’s strategy, the opponent cannot exploit this information to improve his position. Stalemate.

The end product of this line of reasoning is that each player should play in such a way as to minimize his maximum losses whenever the resulting choice of strategy cannot be exploited by the opponent to then improve his position. This so-called minimax criterion is a standard criterion proposed by game theory for selecting a strategy. In effect, this criterion says to select a strategy that would be best even if the selection were being announced to the opponent before the opponent chooses a strategy. In terms of the payoff table, it implies that player 1 should select the strategy whose minimum payoff is largest, whereas player 2 should choose the one whose maximum payoff to player 1 is the smallest. This criterion is illustrated in Table 15.4, where strategy 2 is identified as the maximin strategy for player 1 and strategy 2 is the minimax strategy for player 2. The resulting payoff of 0 is the value of the game, so this is a fair game.

Notice the interesting fact that the same entry in this payoff table yields both the maximin and minimax values. The reason is that this entry is both the minimum in its row and the maximum of its column. The position of any such entry is called a saddle point.

■ TABLE 15.4 Payoff table for player 1 for variation 2 of the political campaign problem

                              Player 2
            Strategy      1     2     3    Minimum
                  1      −3    −2     6      −3
  Player 1        2       2     0     2       0  ← Maximin value
                  3       5    −2    −4      −4
            Maximum:      5     0     6
                                ↑
                          Minimax value

15.2 SOLVING SIMPLE GAMES—A PROTOTYPE EXAMPLE

The fact that this game possesses a saddle point was actually crucial in determining how it should be played. Because of the saddle point, neither player can take advantage of the opponent’s strategy to improve his own position.
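The maximin and minimax computation in Table 15.4 is simple enough to check numerically. A small sketch (pure Python; the table entries are those of variation 2):

```python
# Payoff table to player 1 for variation 2 (Table 15.4):
A = [[-3, -2,  6],
     [ 2,  0,  2],
     [ 5, -2, -4]]

row_mins = [min(row) for row in A]          # worst case of each row
col_maxs = [max(col) for col in zip(*A)]    # worst case of each column
maximin = max(row_mins)                     # player 1's best guarantee
minimax = min(col_maxs)                     # player 2's best guarantee

print(maximin, minimax)        # 0 0: they coincide, so a saddle point exists
i = row_mins.index(maximin)    # row of the saddle point
j = col_maxs.index(minimax)    # column of the saddle point
print(i + 1, j + 1)            # 2 2: each player should use strategy 2
```

When `maximin != minimax`, no saddle point exists and the mixed-strategy analysis of the later sections is needed.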
In particular, when player 2 predicts or learns that player 1 is using strategy 2, player 2 would incur a loss instead of breaking even if he were to change from his original plan of using his strategy 2. Similarly, player 1 would only worsen his position if he were to change his plan. Thus, neither player has any motive to consider changing strategies, either to take advantage of his opponent or to prevent the opponent from taking advantage of him. Therefore, since this is a stable solution (also called an equilibrium solution), players 1 and 2 should exclusively use their maximin and minimax strategies, respectively.

As the next variation illustrates, some games do not possess a saddle point, in which case a more complicated analysis is required.

Variation 3 of the Example

Late developments in the campaign result in the final payoff table for player 1 (politician 1) given by Table 15.5. How should this game be played? Suppose that both players attempt to apply the minimax criterion in the same way as in variation 2. Player 1 can guarantee that he will lose no more than 2 by playing strategy 1. Similarly, player 2 can guarantee that he will lose no more than 2 by playing strategy 3. However, notice that the maximin value (−2) and the minimax value (2) do not coincide in this case. The result is that there is no saddle point.

What are the resulting consequences if both players plan to use the strategies just derived? It can be seen that player 1 would win 2 from player 2, which would make player 2 unhappy. Because player 2 is rational and can therefore foresee this outcome, he would then conclude that he can do much better, actually winning 2 rather than losing 2, by playing strategy 2 instead. Because player 1 is also rational, he would anticipate this switch and conclude that he can improve considerably, from −2 to 4, by changing to strategy 2. Realizing this, player 2 would then consider switching back to strategy 3 to convert a loss of 4 to a gain of 3.
This possibility of a switch would cause player 1 to consider again using strategy 1, after which the whole cycle would start over again. Therefore, even though this game is being played only once, any tentative choice of a strategy leaves that player with a motive to consider changing strategies, either to take advantage of his opponent or to prevent the opponent from taking advantage of him. In short, the originally suggested solution (player 1 to play strategy 1 and player 2 to play strategy 3) is an unstable solution, because the payoff table does not have a saddle point, so it is necessary to develop a more satisfactory solution. But what kind of solution should it be?

■ TABLE 15.5 Payoff table for player 1 for variation 3 of the political campaign problem

                              Player 2
            Strategy      1     2     3    Minimum
                  1       0    −2     2      −2  ← Maximin value
  Player 1        2       5     4    −3      −3
                  3       2     3    −4      −4
            Maximum:      5     4     2
                                      ↑
                                Minimax value

The key fact seems to be that whenever one player’s strategy is predictable, the opponent can take advantage of this information to improve his position. Therefore, an essential feature of a rational plan for playing a game such as this one is that neither player should be able to deduce which strategy the other will use. Hence, in this case, rather than applying some known criterion for determining a single strategy that will definitely be used, it is necessary to choose among alternative acceptable strategies on some kind of random basis. By doing this, neither player knows in advance which of his own strategies will be used, let alone what his opponent will do.

The same situation arises with the odds and evens game introduced in Sec. 15.1. The payoff table for this game shown in Table 15.1 does not have a saddle point, so the game does not have a stable solution regarding which strategy (show one finger or two fingers) each player should choose for each play of the game.
In fact, it would be foolish for a player to always show the same number of fingers, since then the opponent could begin to always show the number of fingers that would win every time. Even if a player’s strategy were to become only somewhat predictable because of past tendencies or patterns, the opponent can take advantage of this information to improve his chances of winning. According to game theory, the rational way to play the odds and evens game is to make the choice of the strategy completely randomly each time. This can be done, for example, by flipping a coin (without showing the result to the opponent) and then showing, say, one finger if the coin comes up heads and showing two fingers if the coin comes up tails.

This suggests, in very general terms, the kind of approach that is required for games lacking a saddle point. In the next section we discuss the approach more fully. Given this foundation, the following two sections will develop procedures for finding an optimal way of playing such games. Variation 3 of the political campaign problem will continue to be used to illustrate these ideas as they are developed.

■ 15.3 GAMES WITH MIXED STRATEGIES

Whenever a game does not possess a saddle point, game theory advises each player to assign a probability distribution over her set of strategies. To express this mathematically, let

  xi = probability that player 1 will use strategy i   (i = 1, 2, . . . , m),
  yj = probability that player 2 will use strategy j   (j = 1, 2, . . . , n),

where m and n are the respective numbers of available strategies. Thus, player 1 would specify her plan for playing the game by assigning values to x1, x2, . . . , xm. Because these values are probabilities, they would need to be nonnegative and add to 1. Similarly, the plan for player 2 would be described by the values she assigns to her decision variables y1, y2, . . . , yn. These plans (x1, x2, . . . , xm) and (y1, y2, . . .
, yn) are usually referred to as mixed strategies, and the original strategies are then called pure strategies. When the game is actually played, it is necessary for each player to use one of her pure strategies. However, this pure strategy would be chosen by using some random device to obtain a random observation from the probability distribution specified by the mixed strategy, where this observation would indicate which particular pure strategy to use.

To illustrate, suppose that players 1 and 2 in variation 3 of the political campaign problem (see Table 15.5) select the mixed strategies (x1, x2, x3) = (1/2, 1/2, 0) and (y1, y2, y3) = (0, 1/2, 1/2), respectively. This selection would say that player 1 is giving an equal chance (probability of 1/2) of choosing either (pure) strategy 1 or 2, but he is discarding strategy 3 entirely. Similarly, player 2 is randomly choosing between his last two pure strategies. To play the game, each player could then flip a coin to determine which of his two acceptable pure strategies he will actually use.

Although no completely satisfactory measure of performance is available for evaluating mixed strategies, a very useful one is the expected payoff. By applying the probability theory definition of expected value, this quantity is

  Expected payoff for player 1 = Σ_{i=1}^{m} Σ_{j=1}^{n} pij xi yj,

where pij is the payoff if player 1 uses pure strategy i and player 2 uses pure strategy j. In the example of mixed strategies just given, there are four possible payoffs (−2, 2, 4, −3), each occurring with a probability of 1/4, so the expected payoff is (1/4)(−2 + 2 + 4 − 3) = 1/4. Thus, this measure of performance does not disclose anything about the risks involved in playing the game, but it does indicate what the average payoff will tend to be if the game is played many times.
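As a check of this arithmetic, the double sum can be computed exactly with Python's `Fraction` type (a sketch; `P` holds Table 15.5 and the mixed strategies are the ones just given):

```python
from fractions import Fraction as F

# Table 15.5 (payoffs to player 1) and the mixed strategies above:
P = [[0, -2,  2],
     [5,  4, -3],
     [2,  3, -4]]
x = [F(1, 2), F(1, 2), F(0)]   # player 1: (1/2, 1/2, 0)
y = [F(0), F(1, 2), F(1, 2)]   # player 2: (0, 1/2, 1/2)

# Expected payoff = sum over i, j of p_ij * x_i * y_j
expected = sum(P[i][j] * x[i] * y[j]
               for i in range(3) for j in range(3))
print(expected)   # 1/4
```

Only the four cells where both probabilities are nonzero contribute, each with weight 1/4, which matches the text's calculation.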
By using this measure, game theory extends the concept of the minimax criterion to games that lack a saddle point and thus need mixed strategies. In this context, the minimax criterion says that a given player should select the mixed strategy that minimizes the maximum expected loss to himself. Equivalently, when we focus on payoffs (player 1) rather than losses (player 2), this criterion says to maximin instead, i.e., maximize the minimum expected payoff to the player. By the minimum expected payoff we mean the smallest possible expected payoff that can result from any mixed strategy with which the opponent can counter. Thus, the mixed strategy for player 1 that is optimal according to this criterion is the one that provides the guarantee (minimum expected payoff) that is best (maximal). (The value of this best guarantee is the maximin value, denoted by v̲.) Similarly, the optimal strategy for player 2 is the one that provides the best guarantee, where best now means minimal and guarantee refers to the maximum expected loss that can be administered by any of the opponent’s mixed strategies. (This best guarantee is the minimax value, denoted by v̄.)

Recall that when only pure strategies were used, games not having a saddle point turned out to be unstable (no stable solutions). The reason was essentially that v̲ < v̄, so that the players would want to change their strategies to improve their positions. Similarly, for games with mixed strategies, it is necessary that v̲ = v̄ for the optimal solution to be stable. Fortunately, according to the minimax theorem of game theory, this condition always holds for such games.

Minimax theorem: If mixed strategies are allowed, the pair of mixed strategies that is optimal according to the minimax criterion provides a stable solution with v̲ = v̄ = v (the value of the game), so that neither player can do better by unilaterally changing her or his strategy.

One proof of this theorem is included in Sec. 15.5.
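The theorem can be illustrated on variation 3. Using the optimal mixed strategies that Secs. 15.4 and 15.5 derive for this game, each player's guarantee can be verified to equal the same value, 2/11 (a sketch with exact fractions):

```python
from fractions import Fraction as F

# Table 15.5 (variation 3), payoffs to player 1:
P = [[0, -2,  2],
     [5,  4, -3],
     [2,  3, -4]]
# Optimal mixed strategies (derived in Secs. 15.4 and 15.5):
x = [F(7, 11), F(4, 11), F(0)]
y = [F(0), F(5, 11), F(6, 11)]

# Player 1's guarantee: the worst case over player 2's pure strategies
lower = min(sum(P[i][j] * x[i] for i in range(3)) for j in range(3))
# Player 2's guarantee: the worst case over player 1's pure strategies
upper = max(sum(P[i][j] * y[j] for j in range(3)) for i in range(3))
print(lower, upper)   # 2/11 2/11: the two guarantees coincide
```

Because the guarantees coincide, neither player can gain by unilaterally deviating, which is exactly the stability the theorem asserts.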
Although the concept of mixed strategies becomes quite intuitive if the game is played repeatedly, it requires some interpretation when the game is to be played just once. In this case, using a mixed strategy still involves selecting and using one pure strategy (randomly selected from the specified probability distribution), so it might seem more sensible to ignore this randomization process and just choose the one “best” pure strategy to be used. However, when a game does not have a saddle point, we have already illustrated in the preceding section for both variation 3 of the political campaign problem and the odds and evens game that a player must not allow the opponent to deduce what his strategy will be (i.e., the solution procedure under the rules of game theory must not definitely identify which pure strategy will be used when the game is unstable). Furthermore, even if the opponent is able to use only his knowledge of the tendencies of the first player to deduce probabilities (for the pure strategy chosen) that are different from those for the optimal mixed strategy, then the opponent still can take advantage of this knowledge to reduce the expected payoff to the first player. Therefore, the only way to guarantee attaining the optimal expected payoff v is to randomly select the pure strategy to be used from the probability distribution for the optimal mixed strategy. (Valid statistical procedures for making such a random selection are discussed in Sec. 20.4.)

Now we need to show how to find the optimal mixed strategy for each player. There are several methods of doing this. One is a graphical procedure that may be used whenever one of the players has only two (undominated) pure strategies; this approach is described in the next section. When larger games are involved, the usual method is to transform the problem to a linear programming problem that then can be solved by the simplex method on a computer; Sec.
15.5 discusses this approach.

■ 15.4 GRAPHICAL SOLUTION PROCEDURE

Consider any game with mixed strategies such that, after dominated strategies are eliminated, one of the players has only two pure strategies. To be specific, let this player be player 1. Because her mixed strategies are (x1, x2) and x2 = 1 − x1, it is necessary for her to solve only for the optimal value of x1. However, it is straightforward to plot the expected payoff as a function of x1 for each of her opponent’s pure strategies. This graph can then be used to identify the point that maximizes the minimum expected payoff. The opponent’s minimax mixed strategy can also be identified from the graph.

To illustrate this procedure, consider variation 3 of the political campaign problem (see Table 15.5). Notice that the third pure strategy for player 1 is dominated by her second, so the payoff table can be reduced to the form given in Table 15.6. Therefore, for each of the pure strategies available to player 2, the expected payoff for player 1 will be:

  (y1, y2, y3)    Expected Payoff
  (1, 0, 0)       0x1 + 5(1 − x1) = 5 − 5x1
  (0, 1, 0)       −2x1 + 4(1 − x1) = 4 − 6x1
  (0, 0, 1)       2x1 − 3(1 − x1) = −3 + 5x1

Now plot these expected-payoff lines on a graph, as shown in Fig. 15.1. For any given values of x1 and (y1, y2, y3), the expected payoff will be the appropriate weighted average of the corresponding points on these three lines. In particular,

  Expected payoff for player 1 = y1(5 − 5x1) + y2(4 − 6x1) + y3(−3 + 5x1).

Remember that player 2 wants to minimize this expected payoff for player 1. Given x1, player 2 can minimize this expected payoff by choosing the pure strategy that corresponds to the “bottom” line for that x1 in Fig. 15.1 (either −3 + 5x1 or 4 − 6x1, but never 5 − 5x1).
■ TABLE 15.6 Reduced payoff table for player 1 for variation 3 of the political campaign problem

                                    Player 2
                      Probability    y1    y2    y3
                      Pure Strategy   1     2     3
  Player 1   x1            1          0    −2     2
             1 − x1        2          5     4    −3

■ FIGURE 15.1 Graphical procedure for solving games. (The three expected-payoff lines 5 − 5x1, 4 − 6x1, and −3 + 5x1 are plotted against x1 over 0 ≤ x1 ≤ 1, with the maximin point at the intersection of the lines −3 + 5x1 and 4 − 6x1.)

According to the minimax criterion (which actually is a maximin criterion from the viewpoint of player 1), player 1 wants to maximize this minimum expected payoff. Consequently, player 1 should select the value of x1 where the bottom line peaks, i.e., where the (−3 + 5x1) and (4 − 6x1) lines intersect, which yields an expected payoff of

  v̲ = v̄ = max {min{−3 + 5x1, 4 − 6x1}},   where the maximum is taken over 0 ≤ x1 ≤ 1.

To solve algebraically for this optimal value of x1 at the intersection of the two lines −3 + 5x1 and 4 − 6x1, we set

  −3 + 5x1 = 4 − 6x1,

which yields x1 = 7/11. Thus, (x1, x2) = (7/11, 4/11) is the optimal mixed strategy for player 1, and

  v̲ = v̄ = −3 + 5(7/11) = 2/11

is the value of the game.

To find the corresponding optimal mixed strategy for player 2, we now reason as follows. According to the definition of the minimax value v̄ and the minimax theorem, the expected payoff resulting from the optimal strategy (y1, y2, y3) = (y1*, y2*, y3*) will satisfy the condition

  y1*(5 − 5x1) + y2*(4 − 6x1) + y3*(−3 + 5x1) ≤ v̄ = v = 2/11

for all values of x1 (0 ≤ x1 ≤ 1). Furthermore, when player 1 is playing optimally (that is, x1 = 7/11), this inequality will be an equality (by the minimax theorem), so that

  (20/11)y1* + (2/11)y2* + (2/11)y3* = v = 2/11.

Because (y1, y2, y3) is a probability distribution, it is also known that y1* + y2* + y3* = 1. Therefore, y1* = 0 because y1* > 0 would violate the next-to-last equation; i.e., the expected payoff on the graph at x1 = 7/11 would be above the maximin point.
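The algebra at the intersection point can also be verified numerically. A sketch with exact fractions (`min_expected` is my name for player 2's best-response value against a given x1):

```python
from fractions import Fraction as F

def min_expected(x1):
    """Player 2 counters x1 with his best pure strategy, so player 1's
    guarantee is the minimum of the three expected-payoff lines."""
    return min(5 - 5 * x1, 4 - 6 * x1, -3 + 5 * x1)

x_star = F(7, 11)            # intersection of -3 + 5*x1 and 4 - 6*x1
v = min_expected(x_star)
print(x_star, v)             # 7/11 2/11

# No other x1 on a fine grid gives a better guarantee than the
# intersection point does:
assert all(min_expected(F(k, 110)) <= v for k in range(111))
```

The grid check mirrors the picture in Fig. 15.1: the "bottom" envelope rises along −3 + 5x1, peaks at x1 = 7/11, and then falls along 4 − 6x1.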
(In general, any line that does not pass through the maximin point must be given a zero weight to avoid increasing the expected payoff above this point.) Hence,

  y2*(4 − 6x1) + y3*(−3 + 5x1)  ≤ 2/11   for 0 ≤ x1 ≤ 1,
                                = 2/11   for x1 = 7/11.

But y2* and y3* are numbers, so the left-hand side is the equation of a straight line, which is a fixed weighted average of the two “bottom” lines on the graph. Because the ordinate of this line must equal 2/11 at x1 = 7/11, and because it must never exceed 2/11, the line necessarily is horizontal. (This conclusion is always true unless the optimal value of x1 is either 0 or 1, in which case player 2 also should use a single pure strategy.) Therefore,

  y2*(4 − 6x1) + y3*(−3 + 5x1) = 2/11,   for 0 ≤ x1 ≤ 1.

Hence, to solve for y2* and y3*, select two values of x1 (say, 0 and 1), and solve the resulting two simultaneous equations. Thus,

  4y2* − 3y3* = 2/11,
  −2y2* + 2y3* = 2/11,

which has a simultaneous solution of y2* = 5/11 and y3* = 6/11. Therefore, the optimal mixed strategy for player 2 is (y1, y2, y3) = (0, 5/11, 6/11).

If, in another problem, there should happen to be more than two lines passing through the maximin point, so that more than two of the yj* values can be greater than zero, this condition would imply that there are many ties for the optimal mixed strategy for player 2. One such strategy can then be identified by setting all but two of these yj* values equal to zero and solving for the remaining two in the manner just described. For the remaining two, the associated lines must have positive slope in one case and negative slope in the other.

Although this graphical procedure has been illustrated for only one particular problem, essentially the same reasoning can be used to solve any game with mixed strategies that has only two undominated pure strategies for one of the players.
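The two simultaneous equations can be solved by elimination; a sketch with exact fractions (Cramer's rule is my choice of method, not the text's):

```python
from fractions import Fraction as F

# Evaluating y2*(4 - 6*x1) + y3*(-3 + 5*x1) = 2/11 at x1 = 0 and x1 = 1:
#   x1 = 0:   4*y2 - 3*y3 = 2/11
#   x1 = 1:  -2*y2 + 2*y3 = 2/11
a, b, e = F(4), F(-3), F(2, 11)
c, d, f = F(-2), F(2), F(2, 11)

det = a * d - b * c          # 4*2 - (-3)*(-2) = 2
y2 = (e * d - b * f) / det   # Cramer's rule for the 2x2 system
y3 = (a * f - e * c) / det
print(y2, y3, y2 + y3)       # 5/11 6/11 1
```

The probabilities sum to 1 automatically here because y1* = 0 was already established.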
The Solved Examples section of the book’s website provides another example where, in this case, it is player 2 that has only two undominated strategies, so the graphical solution procedure is applied initially from the viewpoint of that player.

■ 15.5 SOLVING BY LINEAR PROGRAMMING

Any game with mixed strategies can be solved by transforming the problem to a linear programming problem. As you will see, this transformation requires little more than applying the minimax theorem and using the definitions of the maximin value v̲ and minimax value v̄.

First, consider how to find the optimal mixed strategy for player 1. As indicated in Sec. 15.3,

  Expected payoff for player 1 = Σ_{i=1}^{m} Σ_{j=1}^{n} pij xi yj

and the strategy (x1, x2, . . . , xm) is optimal if

  Σ_{i=1}^{m} Σ_{j=1}^{n} pij xi yj ≥ v̲ = v

for all opposing strategies (y1, y2, . . . , yn). Thus, this inequality will need to hold, e.g., for each of the pure strategies of player 2, that is, for each of the strategies (y1, y2, . . . , yn) where one yj = 1 and the rest equal 0. Substituting these values into the inequality yields

  Σ_{i=1}^{m} pij xi ≥ v   for j = 1, 2, . . . , n,

so that the inequality implies this set of n inequalities. Furthermore, this set of n inequalities implies the original inequality (rewritten)

  Σ_{j=1}^{n} yj (Σ_{i=1}^{m} pij xi) ≥ Σ_{j=1}^{n} yj v = v,

since

  Σ_{j=1}^{n} yj = 1.

Because the implication goes in both directions, it follows that imposing this set of n linear inequalities is equivalent to requiring the original inequality to hold for all strategies (y1, y2, . . . , yn). But these n inequalities are legitimate linear programming constraints, as are the additional constraints

  x1 + x2 + ⋅⋅⋅ + xm = 1
  xi ≥ 0,   for i = 1, 2, . . . , m

that are required to ensure that the xi are probabilities. Therefore, any solution (x1, x2, . . . , xm) that satisfies this entire set of linear programming constraints is the desired optimal mixed strategy.
Consequently, the problem of finding an optimal mixed strategy has been reduced to finding a feasible solution for a linear programming problem, which can be done as described in Chap. 4. The two remaining difficulties are that (1) v is unknown and (2) the linear programming problem has no objective function. Fortunately, both these difficulties can be resolved at one stroke by replacing the unknown constant v by the variable x_{m+1} and then maximizing x_{m+1}, so that x_{m+1} automatically will equal v (by definition) at the optimal solution for the linear programming problem!

The Linear Programming Formulation

To summarize, player 1 would find his optimal mixed strategy by using the simplex method to solve the linear programming problem:

  Maximize x_{m+1},

  subject to
    p11x1 + p21x2 + ⋅⋅⋅ + pm1xm − x_{m+1} ≥ 0
    p12x1 + p22x2 + ⋅⋅⋅ + pm2xm − x_{m+1} ≥ 0
      ⋮
    p1nx1 + p2nx2 + ⋅⋅⋅ + pmnxm − x_{m+1} ≥ 0
    x1 + x2 + ⋅⋅⋅ + xm = 1

  and
    xi ≥ 0,   for i = 1, 2, . . . , m.

Note that x_{m+1} is not restricted to be nonnegative, whereas the simplex method can be applied only after all the variables have nonnegativity constraints. However, this matter can be easily rectified, as will be discussed shortly.

Now consider player 2. He could find his optimal mixed strategy by rewriting the payoff table as the payoff to himself rather than to player 1 and then by proceeding exactly as just described. However, it is enlightening to summarize his formulation in terms of the original payoff table. By proceeding in a way that is completely analogous to that just described, player 2 would conclude that his optimal mixed strategy is given by an optimal solution to the linear programming problem:

  Minimize y_{n+1},

  subject to
    p11y1 + p12y2 + ⋅⋅⋅ + p1nyn − y_{n+1} ≤ 0
    p21y1 + p22y2 + ⋅⋅⋅ + p2nyn − y_{n+1} ≤ 0
      ⋮
    pm1y1 + pm2y2 + ⋅⋅⋅ + pmnyn − y_{n+1} ≤ 0
    y1 + y2 + ⋅⋅⋅ + yn = 1

  and
    yj ≥ 0,   for j = 1, 2, . . . , n.
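To make player 1's formulation concrete, the following sketch builds his ≥ 0 constraint rows from an arbitrary payoff table (`player1_model` is my name; no solver is invoked, since solving would be done with the simplex method as described in Chap. 4):

```python
def player1_model(P):
    """Build player 1's LP constraints: maximize x_{m+1} subject to
    sum_i p_ij * x_i - x_{m+1} >= 0 for each column j,
    sum_i x_i = 1, and x_i >= 0.  Returns the >= 0 constraint rows,
    each over the variables (x_1, ..., x_m, x_{m+1})."""
    m, n = len(P), len(P[0])
    rows = []
    for j in range(n):   # one constraint per pure strategy of player 2
        rows.append([P[i][j] for i in range(m)] + [-1])
    return rows

# Table 15.6 (variation 3 after eliminating player 1's dominated strategy):
rows = player1_model([[0, -2,  2],
                      [5,  4, -3]])
for row in rows:
    print(row)
# [0, 5, -1]
# [-2, 4, -1]
# [2, -3, -1]
```

These three rows are exactly the coefficient rows of the model that is written out for this example later in this section.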
It is easy to show (see Prob. 15.5-6 and its hint) that this linear programming problem and the one given for player 1 are dual to each other in the sense described in Secs. 6.1 and 6.4. This fact has several important implications. One implication is that the optimal mixed strategies for both players can be found by solving only one of the linear programming problems because the optimal dual solution is an automatic by-product of the simplex method calculations to find the optimal primal solution. A second implication is that this brings all duality theory (described in Chap. 6) to bear upon the interpretation and analysis of games.

A related implication is that this provides a simple proof of the minimax theorem. Let x*_{m+1} and y*_{n+1} denote the value of x_{m+1} and y_{n+1} in the optimal solution of the respective linear programming problems. It is known from the strong duality property given in Sec. 6.1 that x*_{m+1} = y*_{n+1}. However, it is evident from the definition of v̲ and v̄ that v̲ = x*_{m+1} and v̄ = y*_{n+1}, so it follows that v̲ = v̄, as claimed by the minimax theorem.

One remaining loose end needs to be tied up, namely, what to do about x_{m+1} and y_{n+1} being unrestricted in sign in the linear programming formulations. If it is clear that v ≥ 0 so that the optimal values of x_{m+1} and y_{n+1} are nonnegative, then it is safe to introduce nonnegativity constraints for these variables for the purpose of applying the simplex method. However, if v < 0, then an adjustment needs to be made. One possibility is to use the approach described in Sec. 4.6 for replacing a variable without a nonnegativity constraint by the difference of two nonnegative variables. Another is to reverse players 1 and 2 so that the payoff table would be rewritten as the payoff to the original player 2, which would make the corresponding value of v positive.
A third, and the most commonly used, procedure is to add a sufficiently large fixed constant to all the entries in the payoff table that the new value of the game will be positive. (For example, setting this constant equal to the absolute value of the largest negative entry will suffice.) Because this same constant is added to every entry, this adjustment cannot alter the optimal mixed strategies in any way, so they can now be obtained in the usual manner. The indicated value of the game would be increased by the amount of the constant, but this value can be readjusted after the solution has been obtained.

Application to Variation 3 of the Political Campaign Problem

To illustrate this linear programming approach, consider again variation 3 of the political campaign problem after dominated strategy 3 for player 1 is eliminated (see Table 15.6). Because there are some negative entries in the reduced payoff table, it is unclear at the outset whether the value of the game v is nonnegative (it turns out to be). For the moment, let us assume that v ≥ 0 and proceed without making any of the adjustments discussed in the preceding paragraph.

To write out the linear programming model for player 1 for this example, note that pij in the general model is the entry in row i and column j of Table 15.6, for i = 1, 2 and j = 1, 2, 3. The resulting model is

  Maximize x3,

  subject to
           5x2 − x3 ≥ 0
    −2x1 + 4x2 − x3 ≥ 0
     2x1 − 3x2 − x3 ≥ 0
      x1 +  x2      = 1

  and
    x1 ≥ 0,  x2 ≥ 0.

Applying the simplex method to this linear programming problem (after adding the constraint x3 ≥ 0) yields x1* = 7/11, x2* = 4/11, x3* = 2/11 as the optimal solution. (See Probs. 15.5-8 and 15.5-9.) Consequently, just as was found by the graphical procedure in the preceding section, the optimal mixed strategy for player 1 according to the minimax criterion is (x1, x2) = (7/11, 4/11), and the value of the game is v = x3* = 2/11.
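Without running the simplex method, the optimality of this solution can be certified directly: it is feasible for player 1's model, the mixed strategy (0, 5/11, 6/11) found for player 2 in the preceding section is feasible for player 2's linear program, and the two objective values coincide, which is exactly the strong duality certificate. A sketch with exact fractions:

```python
from fractions import Fraction as F

# Claimed optimal solution of player 1's model:
x1, x2, x3 = F(7, 11), F(4, 11), F(2, 11)
assert 5 * x2 - x3 >= 0              # holds with slack
assert -2 * x1 + 4 * x2 - x3 >= 0    # binding
assert 2 * x1 - 3 * x2 - x3 >= 0     # binding
assert x1 + x2 == 1 and x1 >= 0 and x2 >= 0

# Claimed optimal solution of player 2's (dual) model:
y1, y2, y3, y4 = F(0), F(5, 11), F(6, 11), F(2, 11)
assert -2 * y2 + 2 * y3 - y4 <= 0
assert 5 * y1 + 4 * y2 - 3 * y3 - y4 <= 0
assert y1 + y2 + y3 == 1

# Equal objective values certify optimality of both solutions:
print(x3 == y4)   # True: v = 2/11
```

This is the same duality relationship exploited in the discussion above: a feasible primal solution and a feasible dual solution with equal objective values must both be optimal.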
The simplex method also yields the optimal solution for the dual (given next) of this problem, namely, y1* = 0, y2* = 5/11, y3* = 6/11, y4* = 2/11, so the optimal mixed strategy for player 2 is (y1, y2, y3) = (0, 5/11, 6/11).

The dual of the preceding problem is just the linear programming model for player 2 (the one with variables y1, y2, . . . , yn, y_{n+1}) shown earlier in this section. (See Prob. 15.5-7.) By plugging in the values of pij from Table 15.6, this model is

  Minimize y4,

  subject to
          −2y2 + 2y3 − y4 ≤ 0
    5y1 + 4y2 − 3y3 − y4 ≤ 0
     y1 +  y2 +  y3      = 1

  and
    y1 ≥ 0,  y2 ≥ 0,  y3 ≥ 0.

Applying the simplex method directly to this model (after adding the constraint y4 ≥ 0) yields the optimal solution: y1* = 0, y2* = 5/11, y3* = 6/11, y4* = 2/11 (as well as the optimal dual solution x1* = 7/11, x2* = 4/11, x3* = 2/11). Thus, the optimal mixed strategy for player 2 is (y1, y2, y3) = (0, 5/11, 6/11), and the value of the game is again seen to be v = y4* = 2/11.

Because we already had found the optimal mixed strategy for player 2 while dealing with the first model, we did not have to solve the second one. In general, you always can find optimal mixed strategies for both players by choosing just one of the models (either one) and then using the simplex method to solve for both an optimal solution and an optimal dual solution.

When the simplex method was applied to both of these linear programming models, a nonnegativity constraint was added that assumed that v ≥ 0. If this assumption were violated, both models would have no feasible solutions, so the simplex method would stop quickly with this message. To avoid this risk, we could have added a positive constant, say, 3 (the absolute value of the largest negative entry), to all the entries in Table 15.6. This then would increase by 3 all the coefficients of x1, x2, y1, y2, and y3 in the inequality constraints of the two models. (See Prob. 15.5-2.)
■ 15.6 EXTENSIONS

Although this chapter has considered only two-person, zero-sum games with a finite number of pure strategies, game theory extends far beyond this kind of game. In fact, extensive research has been done on a number of more complicated types of games, including the ones summarized in this section.

The simplest generalization is to the two-person, constant-sum game. In this case, the sum of the payoffs to the two players is a fixed constant (positive or negative) regardless of which combination of strategies is selected. The only difference from a two-person, zero-sum game is that, in the latter case, the constant must be zero. A nonzero constant may arise instead because, in addition to one player winning whatever the other one loses, the two players may share some reward (if the constant is positive) or some cost (if the constant is negative) for participating in the game. Adding this fixed constant does nothing to affect which strategies should be chosen. Therefore, the analysis for determining optimal strategies is exactly the same as described in this chapter for two-person, zero-sum games.

A more complicated extension is to the n-person game, where more than two players may participate in the game. This generalization is particularly important because, in many kinds of competitive situations, frequently more than two competitors are involved. This may occur, for example, in competition among business firms, in international diplomacy, and so forth. Unfortunately, the existing theory for such games is less satisfactory than it is for two-person games.

Another generalization is the nonzero-sum game, where the sum of the payoffs to the players need not be 0 (or any other fixed constant). This case reflects the fact that many competitive situations include noncompetitive aspects that contribute to the mutual advantage or mutual disadvantage of the players.
For example, the advertising strategies of competing companies can affect not only how they will split the market but also the total size of the market for their competing products. However, in contrast to a constant-sum game, the size of the mutual gain (or loss) for the players depends on the combination of strategies chosen.

Because mutual gain is possible, nonzero-sum games are further classified in terms of the degree to which the players are permitted to cooperate. At one extreme is the noncooperative game, where there is no preplay communication between the players. At the other extreme is the cooperative game, where preplay discussions and binding agreements are permitted. For example, competitive situations involving trade regulations between countries, or collective bargaining between labor and management, might be formulated as cooperative games. When there are more than two players, cooperative games also allow some of or all the players to form coalitions.

Still another extension is to the class of infinite games, where the players have an infinite number of pure strategies available to them. These games are designed for the kind of situation where the strategy to be selected can be represented by a continuous decision variable. For example, this decision variable might be the time at which to take a certain action, or the proportion of one’s resources to allocate to a certain activity, in a competitive situation.

However, the analysis required in these extensions beyond the two-person, zero-sum, finite game is relatively complex and will not be pursued further here. (See any of Selected References 6, 7, 8, and 10 for further information.)

■ 15.7 CONCLUSIONS

The general problem of how to make decisions in a competitive environment is a very common and important one.
The fundamental contribution of game theory is that it provides a basic conceptual framework for formulating and analyzing such problems in simple situations. However, there is a considerable gap between what the theory can handle and the complexity of most competitive situations arising in practice. Therefore, the conceptual tools of game theory usually play just a supplementary role in dealing with these situations. Because of the importance of the general problem, research is continuing with some success to extend the theory to more complex situations.

■ SELECTED REFERENCES
1. Bier, V. M., and M. N. Azaiez (eds.): Game Theoretic Risk Analysis of Security Threats, Springer, New York, 2009.
2. Chatterjee, K., and W. F. Samuelson (eds.): Game Theory and Business Applications, 2nd ed., Springer, New York, 2013.
3. Denardo, E. V.: Linear Programming and Generalizations: A Problem-Based Introduction with Spreadsheets, Springer, New York, 2012, chaps. 14–16.
4. Geckil, I. K., and P. L. Anderson: Applied Game Theory and Strategic Behavior, CRC Press, Boca Raton, FL, 2009.
5. Kimbrough, S.: Agents, Games, and Evolution: Strategies at Work and Play, Chapman and Hall/CRC Press, Boca Raton, FL, 2012.
6. Leyton-Brown, K., and Y. Shoham: Essentials of Game Theory: A Concise Multidisciplinary Introduction, Morgan and Claypool Publishers, San Rafael, CA, 2008.
7. Mendelson, E.: Introducing Game Theory and Its Applications, Chapman and Hall/CRC Press, Boca Raton, FL, 2005.
8. Myerson, R. B.: Game Theory: Analysis of Conflict, Harvard University Press, Cambridge, MA, 1991.
9. Washburn, A.: Two-Person Zero-Sum Games, 4th ed., Springer, New York, scheduled for publication in 2014.
10. Webb, J. N.: Game Theory: Decisions, Interaction and Evolution, Springer, New York, 2007.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)
Solved Examples: Examples for Chapter 15
“Ch. 15—Game Theory” Files for Solving the Examples:
Excel Files
LINGO/LINDO File
MPL/Solvers File
Glossary for Chapter 15
See Appendix 1 for documentation of the software.

■ PROBLEMS
The symbol to the left of some of the problems (or their parts) has the following meaning:
C: Use the computer with any of the software options available to you (or as instructed by your instructor) to solve the problem.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

15.1-1. The labor union and management of a particular company have been negotiating a new labor contract. However, negotiations have now come to an impasse, with management making a “final” offer of a wage increase of $1.10 per hour and the union making a “final” demand of a $1.60 per hour increase. Therefore, both sides have agreed to let an impartial arbitrator set the wage increase somewhere between $1.10 and $1.60 per hour (inclusively). The arbitrator has asked each side to submit to her a confidential proposal for a fair and economically reasonable wage increase (rounded to the nearest dime). From past experience, both sides know that this arbitrator normally accepts the proposal of the side that gives the most from its final figure. If neither side changes its final figure, or if they both give in the same amount, then the arbitrator normally compromises halfway between ($1.35 in this case). Each side now needs to determine what wage increase to propose for its own maximum advantage. Formulate this problem as a two-person, zero-sum game.

15.1-2. Two manufacturers currently are competing for sales in two different but equally profitable product lines. In both cases the sales volume for manufacturer 2 is three times as large as that for manufacturer 1. Because of a recent technological breakthrough, both manufacturers will be making a major improvement in both products.
However, they are uncertain as to what development and marketing strategy to follow. If both product improvements are developed simultaneously, either manufacturer can have them ready for sale in 12 months. Another alternative is to have a “crash program” to develop only one product first to try to get it marketed ahead of the competition. By doing this, manufacturer 2 could have one product ready for sale in 9 months, whereas manufacturer 1 would require 10 months (because of previous commitments for its production facilities). For either manufacturer, the second product could then be ready for sale in an additional 9 months.
For either product line, if both manufacturers market their improved models simultaneously, it is estimated that manufacturer 1 would increase its share of the total future sales of this product by 8 percent of the total (from 25 to 33 percent). Similarly, manufacturer 1 would increase its share by 20, 30, and 40 percent of the total if it marketed the product sooner than manufacturer 2 by 2, 6, and 8 months, respectively. On the other hand, manufacturer 1 would lose 4, 10, 12, and 14 percent of the total if manufacturer 2 marketed it sooner by 1, 3, 7, and 10 months, respectively. Formulate this problem as a two-person, zero-sum game, and then determine which strategy the respective manufacturers should use according to the minimax criterion.

15.1-3. Consider the following parlor game to be played between two players. Each player begins with three chips: one red, one white, and one blue. Each chip can be used only once. To begin, each player selects one of her chips and places it on the table, concealed. Both players then uncover the chips and determine the payoff to the winning player. In particular, if both players play the same kind of chip, it is a draw; otherwise, the following table indicates the winner and how much she receives from the other player.
Next, each player selects one of her two remaining chips and repeats the procedure, resulting in another payoff according to the following table. Finally, each player plays her one remaining chip, resulting in the third and final payoff.

Winning Chip        Payoff ($)
Red beats white         50
White beats blue        40
Blue beats red          30
Matching colors          0

Formulate this problem as a two-person, zero-sum game by identifying the form of the strategies and payoffs.

15.2-1. Reconsider Prob. 15.1-1.
(a) Use the concept of dominated strategies to determine the best strategy for each side.
(b) Without eliminating dominated strategies, use the minimax criterion to determine the best strategy for each side.

15.2-2.* For the game having the following payoff table, determine the optimal strategy for each player by successively eliminating dominated strategies. (Indicate the order in which you eliminated strategies.)
Player 2 Strategy Player 1 1 2 3 1 2 3 ⫺3 ⫺1 ⫺1 1 2 0 ⫺2 ⫺1 ⫺2

15.2-3. Consider the game having the following payoff table:
Player 2 Strategy Player 1 1 2 3 1 2 3 4 ⫺2 ⫺1 ⫺1 ⫺3 ⫺1 ⫺2 ⫺1 ⫺2 ⫺1 1 2 3
Determine the optimal strategy for each player by successively eliminating dominated strategies. Give a list of the dominated strategies (and the corresponding dominating strategies) in the order in which you were able to eliminate them.

15.2-4. Find the saddle point for the game having the following payoff table.
Player 2 Strategy Player 1 1 2 3 1 2 3 ⫺1 ⫺2 ⫺3 ⫺1 ⫺0 ⫺1 1 3 2
Use the minimax criterion to find the best strategy for each player. Does this game have a saddle point? Is it a stable game?

15.2-5. Find the saddle point for the game having the following payoff table.
Player 2 Strategy Player 1 1 2 3 1 2 3 4 ⫺3 ⫺4 ⫺1 ⫺3 ⫺2 ⫺1 ⫺2 ⫺1 ⫺2 ⫺4 ⫺1 ⫺0
Use the minimax criterion to find the best strategy for each player. Does this game have a saddle point? Is it a stable game?

15.2-6. Two companies share the bulk of the market for a particular kind of product. Each is now planning its new marketing plans for the next year in an attempt to wrest some sales away from the other company. (The total sales for the product are relatively fixed, so one company can increase its sales only by winning them away from the other.) Each company is considering three possibilities: (1) better packaging of the product, (2) increased advertising, and (3) a slight reduction in price. The costs of the three alternatives are quite comparable and sufficiently large that each company will select just one. The estimated effect of each combination of alternatives on the increased percentage of the sales for company 1 is as follows:
Player 2 Strategy 1 2 3 1 2 3 2 1 3 ⫺3 ⫺4 ⫺2 ⫺1 ⫺0 ⫺1 Player 1 1 2 3 1
Each company must make its selection before learning the decision of the other company.
(a) Without eliminating dominated strategies, use the minimax (or maximin) criterion to determine the best strategy for each company.
(b) Now identify and eliminate dominated strategies as far as possible. Make a list of the dominated strategies, showing the order in which you were able to eliminate them. Then show the resulting reduced payoff table with no remaining dominated strategies.

15.2-7.* Two politicians soon will be starting their campaigns against each other for a certain political office. Each must now select the main issue she will emphasize as the theme of her campaign. Each has three advantageous issues from which to choose, but the relative effectiveness of each one would depend upon the issue chosen by the opponent. In particular, the estimated increase in the vote for politician 1 (expressed as a percentage of the total vote) resulting from each combination of issues is as follows:
Issue for Politician 2 Issue for Politician 1 2 3 ⫺7 ⫺1 ⫺5 ⫺1 ⫺0 ⫺3 ⫺3 ⫺2 ⫺1
However, because considerable staff work is required to research and formulate the issue chosen, each politician must make her own choice before learning the opponent’s choice. Which issue should she choose?
For each of the situations described here, formulate this problem as a two-person, zero-sum game, and then determine which issue should be chosen by each politician according to the specified criterion.
(a) The current preferences of the voters are very uncertain, so each additional percent of votes won by one of the politicians has the same value to her. Use the minimax criterion.
(b) A reliable poll has found that the percentage of the voters currently preferring politician 1 (before the issues have been raised) lies between 45 and 50 percent. (Assume a uniform distribution over this range.) Use the concept of dominated strategies, beginning with the strategies for politician 1.
(c) Suppose that the percentage described in part (b) actually were 45 percent. Should politician 1 use the minimax criterion? Explain. Which issue would you recommend? Why?

15.2-8. Briefly describe what you feel are the advantages and disadvantages of the minimax criterion.

15.3-1. Consider the odds and evens game introduced in Sec. 15.1 and whose payoff table is shown in Table 15.1.
(a) Show that this game does not have a saddle point.
(b) Write an expression for the expected payoff for player 1 (the evens player) in terms of the probabilities of the two players using their respective pure strategies. Then show what this expression reduces to for the following three cases: (i) Player 2 definitely uses his first strategy, (ii) player 2 definitely uses his second strategy, (iii) player 2 assigns equal probabilities to using his two strategies.
(c) Repeat part (b) when player 1 becomes the odds player instead.

15.3-2. Consider the following parlor game between two players. It begins when a referee flips a coin, notes whether it comes up heads or tails, and then shows this result to player 1 only. Player 1 may then (i) pass and thereby pay $5 to player 2 or (ii) bet.
If player 1 passes, the game is terminated. However, if he bets, the game continues, in which case player 2 may then either (i) pass and thereby pay $5 to player 1 or (ii) call. If player 2 calls, the referee then shows him the coin; if it came up heads, player 2 pays $10 to player 1; if it came up tails, player 2 receives $10 from player 1.
(a) Give the pure strategies for each player. (Hint: Player 1 will have four pure strategies, each one specifying how he would respond to each of the two results the referee can show him; player 2 will have two pure strategies, each one specifying how he will respond if player 1 bets.)
(b) Develop the payoff table for this game, using expected values for the entries when necessary. Then identify and eliminate any dominated strategies.
(c) Show that none of the entries in the resulting payoff table are a saddle point. Then explain why any fixed choice of a pure strategy for each of the two players must be an unstable solution, so mixed strategies should be used instead.
(d) Write an expression for the expected payoff for player 1 in terms of the probabilities of the two players using their respective pure strategies. Then show what this expression reduces to for the following three cases: (i) Player 2 definitely uses his first strategy, (ii) player 2 definitely uses his second strategy, (iii) player 2 assigns equal probabilities to using his two strategies.

15.4-1. Consider the odds and evens game introduced in Sec. 15.1 and whose payoff table is shown in Table 15.1. Use the graphical procedure described in Sec. 15.4 from the viewpoint of player 1 (the evens player) to determine the optimal mixed strategy for each player according to the minimax criterion. Then do this again from the viewpoint of player 2 (the odds player). Also give the corresponding value of the game.

15.4-2. Reconsider Prob. 15.3-2. Use the graphical procedure described in Sec.
15.4 to determine the optimal mixed strategy for each player according to the minimax criterion. Also give the corresponding value of the game.

15.4-3. Consider the game having the following payoff table:
Player 2 Strategy Player 1 1 2 1 2 ⫺3 ⫺1 ⫺2 ⫺2
Use the graphical procedure described in Sec. 15.4 to determine the value of the game and the optimal mixed strategy for each player according to the minimax criterion. Check your answer for player 2 by constructing his payoff table and applying the graphical procedure directly to this table.

15.4-4.* For the game having the following payoff table, use the graphical procedure described in Sec. 15.4 to determine the value of the game and the optimal mixed strategy for each player according to the minimax criterion.
Player 2 Strategy 1 2 3 1 2 4 0 3 1 1 2 Player 1

15.4-5. The A. J. Swim Team soon will have an important swim meet with the G. N. Swim Team. Each team has a star swimmer (John and Mark, respectively) who can swim very well in the 100-yard butterfly, backstroke, and breaststroke events. However, the rules prevent them from being used in more than two of these events. Therefore, their coaches now need to decide how to use them to maximum advantage. Each team will enter three swimmers per event (the maximum allowed). For each event, the following table gives the best time previously achieved by John and Mark as well as the best time for each of the other swimmers who will definitely enter that event. (Whichever event John or Mark does not swim, his team’s third entry for that event will be slower than the two shown in the table.)

                     A. J. Swim Team                 G. N. Swim Team
                     Entry 1   Entry 2   John        Mark     Entry 1   Entry 2
Butterfly stroke     1:01.6    59.1      57.5        58.4     1:03.2    59.8
Backstroke           1:06.8    1:05.6    1:03.3      1:02.6   1:04.9    1:04.1
Breaststroke         1:13.9    1:12.5    1:04.7      1:06.1   1:15.3    1:11.8

The points awarded are 5 points for first place, 3 points for second place, 1 point for third place, and none for lower places.
Both coaches believe that all swimmers will essentially equal their best times in this meet. Thus, John and Mark each will definitely be entered in two of these three events.
(a) The coaches must submit all their entries before the meet without knowing the entries for the other team, and no changes are permitted later. The outcome of the meet is very uncertain, so each additional point has equal value for the coaches. Formulate this problem as a two-person, zero-sum game. Eliminate dominated strategies, and then use the graphical procedure described in Sec. 15.4 to find the optimal mixed strategy for each team according to the minimax criterion.
(b) The situation and assignment are the same as in part (a), except that both coaches now believe that the A. J. team will win the swim meet if it can win 13 or more points in these three events, but will lose with less than 13 points. [Compare the resulting optimal mixed strategies with those obtained in part (a).]
(c) Now suppose that the coaches submit their entries during the meet one event at a time. When submitting his entries for an event, the coach does not know who will be swimming that event for the other team, but he does know who has swum in preceding events. The three key events just discussed are swum in the order listed in the table. Once again, the A. J. team needs 13 points in these events to win the swim meet. Formulate this problem as a two-person, zero-sum game. Then use the concept of dominated strategies to determine the best strategy for the G. N. team that actually “guarantees” it will win under the assumptions being made.
(d) The situation is the same as in part (c). However, now assume that the coach for the G. N. team does not know about game theory and so may, in fact, choose any of his available strategies that have Mark swimming two events. Use the concept of dominated strategies to determine the best strategies from which the coach for the A. J. team should choose.
If this coach knows that the other coach has a tendency to enter Mark in the butterfly and the backstroke more often than in the breaststroke, which strategy should she choose?

15.5-1. Consider the odds and evens game introduced in Sec. 15.1 and whose payoff table is shown in Table 15.1.
(a) Use the approach described in Sec. 15.5 to formulate the problem of finding optimal mixed strategies according to the minimax criterion as two linear programming problems, one for player 1 (the evens player) and the other for player 2 (the odds player) as the dual of the first problem.
C (b) Use the simplex method to find these optimal mixed strategies.

15.5-2. Refer to the last paragraph of Sec. 15.5. Suppose that 3 were added to all the entries of Table 15.6 to ensure that the corresponding linear programming models for both players have feasible solutions with x3 ≥ 0 and y4 ≥ 0. Write out these two models. Based on the information given in Sec. 15.5, what are the optimal solutions for these two models? What is the relationship between x3* and y4*? What is the relationship between the value of the original game v and the values of x3* and y4*?

15.5-3.* Consider the game having the following payoff table:
Player 2 Strategy 1 2 3 4 1 2 3 5 2 3 0 4 2 3 3 0 1 2 4 Player 1
(a) Use the approach described in Sec. 15.5 to formulate the problem of finding optimal mixed strategies according to the minimax criterion as a linear programming problem.
C (b) Use the simplex method to find these optimal mixed strategies.

15.5-4. Follow the instructions of Prob. 15.5-3 for the game having the following payoff table:
Player 2 Strategy Player 1 1 2 3 1 2 3 ⫺4 ⫺1 ⫺2 2 0 3 ⫺3 ⫺3 ⫺2

15.5-5. Follow the instructions of Prob. 15.5-3 for the game having the following payoff table:
Player 2 Strategy Player 1 1 2 3 4 1 2 3 4 5 ⫺1 ⫺2 ⫺0 ⫺4 ⫺3 ⫺3 ⫺4 ⫺0 ⫺2 ⫺0 ⫺1 ⫺2 ⫺2 ⫺3 ⫺3 ⫺2 ⫺1 ⫺2 ⫺2 ⫺1

15.5-6.
Section 15.5 presents a general linear programming formulation for finding an optimal mixed strategy for player 1 and for player 2. Using Table 6.14, show that the linear programming problem given for player 2 is the dual of the problem given for player 1. (Hint: Remember that a dual variable with a nonpositivity constraint yi′ ≤ 0 can be replaced by yi = −yi′ with a nonnegativity constraint yi ≥ 0.)

15.5-7. Consider the linear programming models for players 1 and 2 given near the end of Sec. 15.5 for variation 3 of the political campaign problem (see Table 15.6). Follow the instructions of Prob. 15.5-6 for these two models.

15.5-8. Consider variation 3 of the political campaign problem (see Table 15.6). Refer to the resulting linear programming model for player 1 given near the end of Sec. 15.5. Ignoring the objective function variable x3, plot the feasible region for x1 and x2 graphically (as described in Sec. 3.1). (Hint: This feasible region consists of a single line segment.) Next, write an algebraic expression for the maximizing value of x3 for any point in this feasible region. Finally, use this expression to demonstrate that the optimal solution must, in fact, be the one given in Sec. 15.5.

C 15.5-9. Consider the linear programming model for player 1 given near the end of Sec. 15.5 for variation 3 of the political campaign problem (see Table 15.6). Verify the optimal mixed strategies for both players given in Sec. 15.5 by applying an automatic routine for the simplex method to this model to find both its optimal solution and its optimal dual solution.

15.5-10. Consider the general m × n, two-person, zero-sum game. Let pij denote the payoff to player 1 if he plays his strategy i (i = 1, . . . , m) and player 2 plays her strategy j (j = 1, . . . , n). Strategy 1 (say) for player 1 is said to be weakly dominated by strategy 2 (say) if p1j ≤ p2j for j = 1, . . . , n and p1j < p2j for one or more values of j.
(a) Assume that the payoff table possesses one or more saddle points, so that the players have corresponding optimal pure strategies under the minimax criterion. Prove that eliminating weakly dominated strategies from the payoff table cannot eliminate all these saddle points and cannot produce any new ones.
(b) Assume that the payoff table does not possess any saddle points, so that the optimal strategies under the minimax criterion are mixed strategies. Prove that eliminating weakly dominated pure strategies from the payoff table cannot eliminate all optimal mixed strategies and cannot produce any new ones.

CHAPTER 16
Decision Analysis

The previous chapters have focused mainly on decision making when the consequences of alternative decisions are known with a reasonable degree of certainty. This decision-making environment enabled formulating helpful mathematical models (linear programming, integer programming, nonlinear programming, etc.) with objective functions that specify the estimated consequences of any combination of decisions. Although these consequences usually cannot be predicted with complete certainty, they could at least be estimated with enough accuracy to justify using such models (along with sensitivity analysis, etc.).
However, decisions often must be made in environments that are much more fraught with uncertainty. Here are a few examples.
1. A manufacturer introducing a new product into the marketplace. What will be the reaction of potential customers? How much should be produced? Should the product be test marketed in a small region before deciding upon full distribution? How much advertising is needed to launch the product successfully?
2. A financial firm investing in securities. Which are the market sectors and individual securities with the best prospects? Where is the economy headed? How about interest rates? How should these factors affect the investment decisions?
3. A government contractor bidding on a new contract.
What will be the actual costs of the project? Which other companies might be bidding? What are their likely bids?
4. An agricultural firm selecting the mix of crops and livestock for the upcoming season. What will be the weather conditions? Where are prices headed? What will costs be?
5. An oil company deciding whether to drill for oil in a particular location. How likely is oil there? How much? How deep will they need to drill? Should geologists investigate the site further before drilling?
These are the kinds of decision making in the face of great uncertainty that decision analysis is designed to address. Decision analysis provides a framework and methodology for rational decision making when the outcomes are uncertain.
Chapter 15 describes how game theory also can be used for certain kinds of decision making in the face of uncertainty. There are some similarities in the approaches used by game theory and decision analysis. However, there also are differences because they are designed for different kinds of applications. We will describe these similarities and differences in Sec. 16.2.
Frequently, one question to be addressed with decision analysis is whether to make the needed decision immediately or to first do some testing (at some expense) to reduce the level of uncertainty about the outcome of the decision. For example, the testing might be field testing of a proposed new product to test consumer reaction before making a decision on whether to proceed with full-scale production and marketing of the product. This testing is referred to as performing experimentation. Therefore, decision analysis divides decision making between the cases of without experimentation and with experimentation. The first section introduces a prototype example that will be carried throughout the chapter for illustrative purposes.
Sections 16.2 and 16.3 then present the basic principles of decision making without experimentation and decision making with experimentation. We next describe decision trees, a useful tool for depicting and analyzing the decision process when a series of decisions needs to be made. Section 16.5 then discusses how spreadsheets are used to perform sensitivity analysis on decision trees. Section 16.6 introduces utility theory, which provides a way of calibrating the possible outcomes of the decision to reflect the true value of these outcomes to the decision maker. We then conclude the chapter by discussing the practical application of decision analysis and summarizing a variety of applications that have been very beneficial to the organizations involved.

■ 16.1 A PROTOTYPE EXAMPLE

The GOFERBROKE COMPANY owns a tract of land that may contain oil. A consulting geologist has reported to management that she believes there is one chance in four of oil. Because of this prospect, another oil company has offered to purchase the land for $90,000. However, Goferbroke is considering holding the land in order to drill for oil itself. The cost of drilling is $100,000. If oil is found, the resulting expected revenue will be $800,000, so the company’s expected profit (after deducting the cost of drilling) will be $700,000. A loss of $100,000 (the drilling cost) will be incurred if the land is dry (no oil). Table 16.1 summarizes these data. Section 16.2 discusses how to approach the decision of whether to drill or sell based just on these data. (We will refer to this as the first Goferbroke Co. problem.) However, before deciding whether to drill or sell, another option is to conduct a detailed seismic survey of the land to obtain a better estimate of the probability of finding oil. (This more involved decision process will be referred to as the full Goferbroke problem.)
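To see how the profit figures just given combine with the geologist's one-chance-in-four estimate, here is a minimal Python sketch (the dictionary layout and function name are ours, purely for illustration):

```python
# Payoffs in thousands of dollars, from the data above:
# drilling yields 700 if there is oil and -100 if the land is dry;
# selling yields 90 either way.
payoff = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}  # the geologist's 1-in-4 estimate

def expected_payoff(alternative):
    """Probability-weighted average profit of an alternative."""
    return sum(prior[state] * payoff[alternative][state] for state in prior)

for alt in payoff:
    print(alt, expected_payoff(alt))
# Drill for oil 100.0
# Sell the land 90.0
```

On expected profit alone, drilling edges out selling (100 versus 90 thousand dollars), but the chapter goes on to weigh this against the company's limited capital and its attitude toward a possible $100,000 loss.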
Section 16.3 discusses this case of decision making with experimentation, at which point the necessary additional data will be provided. This company is operating without much capital, so a loss of $100,000 would be quite serious. In Sec. 16.6, we describe how to refine the evaluation of the consequences of the various possible outcomes.

■ TABLE 16.1 Prospective profits for the Goferbroke Company

                               Status of Land
Alternative                Oil              Dry
Payoff:
  Drill for oil            $700,000         −$100,000
  Sell the land            $ 90,000         $ 90,000
Chance of status           1 in 4           3 in 4

■ 16.2 DECISION MAKING WITHOUT EXPERIMENTATION

Before seeking a solution to the first Goferbroke Co. problem, we will formulate a general framework for decision making.
In general terms, the decision maker must choose an alternative from a set of possible decision alternatives. The set contains all the feasible alternatives under consideration for how to proceed with the problem of concern. This choice of an alternative must be made in the face of uncertainty, because the outcome will be affected by random factors that are outside the control of the decision maker. These random factors determine what situation will be found at the time that the decision alternative is executed. Each of these possible situations is referred to as a possible state of nature.
For each combination of a decision alternative and a state of nature, the decision maker knows what the resulting payoff would be. The payoff is a quantitative measure of the value to the decision maker of the consequences of the outcome. For example, the payoff frequently is represented by the net monetary gain (profit), although other measures also can be used (as described in Sec. 16.6). If the consequences of the outcome do not become completely certain even when the state of nature is given, then the payoff becomes an expected value (in the statistical sense) of the measure of the consequences.
A payoff table commonly is used to provide the payoff for each combination of an action and a state of nature.
If you previously studied game theory (Chap. 15), we should point out an interesting analogy between this decision analysis framework and the two-person, zero-sum games described in Chap. 15. The decision maker and nature can be viewed as the two players of such a game. The alternatives and the possible states of nature can then be viewed as the available strategies for these respective players, where each combination of strategies results in some payoff to player 1 (the decision maker). From this viewpoint, the decision analysis framework can be summarized as follows:
1. The decision maker needs to choose one of the decision alternatives.
2. Nature then would choose one of the possible states of nature.
3. Each combination of a decision alternative and state of nature would result in a payoff, which is given as one of the entries in a payoff table.
4. This payoff table should be used to find an optimal alternative for the decision maker according to an appropriate criterion.
Soon we will present three possibilities for this criterion, where the first one (the maximin payoff criterion) comes from game theory.
However, this analogy to two-person, zero-sum games breaks down in one important respect. In game theory, both players are assumed to be rational and choosing their strategies to promote their own welfare. This description still fits the decision maker, but certainly not nature. By contrast, nature now is a passive player that chooses its strategies (states of nature) in some random fashion. This change means that the game theory criterion for how to choose an optimal strategy (alternative) will not appeal to many decision makers in the current context.
One additional element needs to be added to the decision analysis framework.
The decision maker generally will have some information that should be taken into account about the relative likelihood of the possible states of nature. Such information can usually be translated to a probability distribution, acting as though the state of nature is a random variable, in which case this distribution is referred to as a prior distribution. Prior distributions are often subjective in that they may depend upon the experience or intuition of an individual. The probabilities for the respective states of nature provided by the prior distribution are called prior probabilities.

An Application Vignette

Following the merger of Conoco Inc. and the Phillips Petroleum Company in 2002, ConocoPhillips became the third-largest integrated energy company in the United States. Especially active in exploration and production, it was the world's largest independent pure-play exploration and production company in 2013, with operations and activities in 30 countries. Like any company in this industry, the management of ConocoPhillips must grapple continually with decisions about the allocation of limited investment capital across a set of risky petroleum exploration projects. These decisions have a great impact on the profitability of the company.

In the early 1990s, the then Phillips Petroleum Company became an industry leader in the application of sophisticated OR methodology to aid these decisions by developing a decision analysis software package called DISCOVERY. The user interface allows a geologist or engineer to model the uncertainties associated with a project, and then the software interprets the inputs and constructs a decision tree that shows all the decision nodes (including opportunities to obtain additional seismic information) and the intervening event nodes. A key feature of the software is the use of an exponential utility function (to be introduced in Sec. 16.6) to incorporate management's attitudes about financial risk. An intuitive questionnaire is used to measure corporate risk preferences in order to determine an appropriate value of the risk tolerance parameter for this utility function. Management uses the software to (1) evaluate petroleum exploration projects with a consistent risk-taking policy across the company, (2) rank projects in terms of overall preference, (3) identify the firm's appropriate level of participation in these projects, and (4) stay within budget.

Source: M. R. Walls, G. T. Morahan, and J. S. Dyer: "Decision Analysis of Exploration Opportunities in the Onshore US at Phillips Petroleum Company," Interfaces, 25(6): 39–56, Nov.–Dec. 1995. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Formulation of the Prototype Example in This Framework

As indicated in Table 16.1, the Goferbroke Co. has two possible decision alternatives under consideration: drill for oil or sell the land. The possible states of nature are that the land contains oil and that it does not, as designated in the column headings of Table 16.1 by oil and dry. Since the consulting geologist has estimated that there is one chance in four of oil (and so three chances in four of no oil), the prior probabilities of the two states of nature are 0.25 and 0.75, respectively. Therefore, with the payoff in units of thousands of dollars of profit, the payoff table can be obtained directly from Table 16.1, as shown in Table 16.2. We will use this payoff table next to find the optimal alternative according to each of the three criteria described below.

■ TABLE 16.2 Payoff table for the decision analysis formulation of the first Goferbroke Co. problem

                          State of Nature
Alternative               Oil       Dry
1. Drill for oil          700      −100
2. Sell the land           90        90
Prior probability         0.25      0.75

CHAPTER 16 DECISION ANALYSIS

The Maximin Payoff Criterion

If the decision maker's problem were to be viewed as a game against nature, then game theory would say to choose the decision alternative according to the minimax criterion (as described in Sec. 15.2). From the viewpoint of player 1 (the decision maker), this criterion is more aptly named the maximin payoff criterion, as summarized below:

Maximin payoff criterion: For each possible decision alternative, find the minimum payoff over all possible states of nature. Next, find the maximum of these minimum payoffs. Choose the alternative whose minimum payoff gives this maximum.

Table 16.3 shows the application of this criterion to the prototype example. Thus, since the minimum payoff for selling (90) is larger than that for drilling (−100), the former alternative (sell the land) will be chosen.

■ TABLE 16.3 Application of the maximin payoff criterion to the first Goferbroke Co. problem

                          State of Nature
Alternative               Oil       Dry     Minimum
1. Drill for oil          700      −100      −100
2. Sell the land           90        90        90  ← Maximin value
Prior probability         0.25      0.75

The rationale for this criterion is that it provides the best guarantee of the payoff that will be obtained. Regardless of what the true state of nature turns out to be for the example, the payoff from selling the land cannot be less than 90, which provides the best available guarantee. Thus, this criterion takes the pessimistic viewpoint that, regardless of which alternative is selected, the worst state of nature for that alternative is likely to occur, so we should choose the alternative which provides the best payoff with its worst state of nature. This rationale is quite valid when one is competing against a rational and malevolent opponent. However, this criterion is not often used in games against nature because it is an extremely conservative criterion in this context. In effect, it assumes that nature is a conscious opponent that wants to inflict as much damage as possible on the decision maker.
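As a small illustrative sketch (in Python, not something from the book or its OR Courseware), the maximin payoff criterion can be applied to the payoff table above as follows; the payoffs are those of Table 16.2.

```python
# Maximin payoff criterion for the first Goferbroke Co. problem (Table 16.2).
# Payoffs are in thousands of dollars.
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}

# Step 1: for each alternative, find the minimum payoff over the states of nature.
minimums = {alt: min(row.values()) for alt, row in payoffs.items()}

# Step 2: choose the alternative whose minimum payoff is largest.
maximin_choice = max(minimums, key=minimums.get)

print(minimums)        # {'Drill for oil': -100, 'Sell the land': 90}
print(maximin_choice)  # Sell the land
```

This reproduces Table 16.3: the guaranteed payoff of 90 from selling beats the worst case of −100 from drilling.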
Nature is not a malevolent opponent, and the decision maker does not need to focus solely on the worst possible payoff from each alternative. This is especially true when the worst possible payoff from an alternative comes from a relatively unlikely state of nature. Thus, this criterion normally is of interest only to a very cautious decision maker.

The Maximum Likelihood Criterion

The next criterion focuses on the most likely state of nature, as summarized below.

Maximum likelihood criterion: Identify the most likely state of nature (the one with the largest prior probability). For this state of nature, find the decision alternative with the maximum payoff. Choose this decision alternative.

Applying this criterion to the example, Table 16.4 indicates that the Dry state has the largest prior probability. In the Dry column, the sell alternative has the maximum payoff, so the choice is to sell the land.

16.2 DECISION MAKING WITHOUT EXPERIMENTATION

■ TABLE 16.4 Application of the maximum likelihood criterion to the first Goferbroke Co. problem

                          State of Nature
Alternative               Oil       Dry
1. Drill for oil          700      −100
2. Sell the land           90        90  ← Maximum in this column
Prior probability         0.25      0.75
                                     ↑ Maximum

The appeal of this criterion is that the most important state of nature is the most likely one, so the alternative chosen is the best one for this particularly important state of nature. Basing the decision on the assumption that this state of nature will occur tends to give a better chance of a favorable outcome than assuming any other state of nature. Furthermore, the criterion does not rely on questionable subjective estimates of the probabilities of the respective states of nature other than identifying the most likely state. The major drawback of the criterion is that it completely ignores much relevant information. No state of nature is considered other than the most likely one.
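A similarly hedged sketch (illustrative Python, not from the book) of the maximum likelihood criterion, reusing the same payoff table together with the prior probabilities of Table 16.4:

```python
# Maximum likelihood criterion for the first Goferbroke Co. problem (Table 16.4).
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}

# Identify the most likely state of nature, then take the alternative
# with the maximum payoff in that state's column.
most_likely = max(prior, key=prior.get)
choice = max(payoffs, key=lambda alt: payoffs[alt][most_likely])

print(most_likely)  # Dry
print(choice)       # Sell the land
```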
In a problem with many possible states of nature, the probability of the most likely one may be quite small, so focusing on just this one state of nature is quite unwarranted. Even in the example, where the prior probability of the Dry state is 0.75, this criterion ignores the extremely attractive payoff of 700 if the company drills and finds oil. In effect, the criterion does not permit gambling on a low-probability big payoff, no matter how attractive the gamble may be.

Bayes' Decision Rule¹

Our third criterion, and the one commonly chosen, is Bayes' decision rule, described below:

Bayes' decision rule: Using the best available estimates of the probabilities of the respective states of nature (currently the prior probabilities), calculate the expected value of the payoff for each of the possible decision alternatives. Choose the decision alternative with the maximum expected payoff.

For the prototype example, these expected payoffs are calculated directly from Table 16.2 as follows:

E[Payoff(drill)] = 0.25(700) + 0.75(−100) = 100.
E[Payoff(sell)] = 0.25(90) + 0.75(90) = 90.

Since 100 is larger than 90, the alternative selected is to drill for oil. Note that this choice contrasts with the selection of the sell alternative under each of the two preceding criteria.

The big advantage of Bayes' decision rule is that it incorporates all the available information, including all the payoffs and the best available estimates of the probabilities of the respective states of nature. It is sometimes argued that these estimates of the probabilities necessarily are largely subjective and so are too shaky to be trusted. There is no accurate way of predicting the future, including a future state of nature, even in probability terms. This argument has some validity. The reasonableness of the estimates of the probabilities should be assessed in each individual situation. Nevertheless, under many circumstances, past experience and current evidence enable one to develop reasonable estimates of the probabilities. Using this information should provide better grounds for a sound decision than ignoring it. Furthermore, experimentation frequently can be conducted to improve these estimates, as described in the next section. Therefore, we will be using only Bayes' decision rule throughout the remainder of the chapter. To assess the effect of possible inaccuracies in the prior probabilities, it often is helpful to conduct sensitivity analysis, as described below.

¹The origin of this name is that this criterion is often credited to the Reverend Thomas Bayes, a nonconforming 18th-century English minister who won renown as a philosopher and mathematician. (The same basic idea has even longer roots in the field of economics.) This decision rule also is sometimes called the expected monetary value (EMV) criterion, although this is a misnomer for those cases where the measure of the payoff is something other than monetary value (as in Sec. 16.6).

Sensitivity Analysis with Bayes' Decision Rule

Sensitivity analysis commonly is used with various applications of operations research to study the effect if some of the numbers included in the mathematical model are not correct. In this case, the mathematical model is represented by the payoff table shown in Table 16.2. The numbers in this table that are most questionable are the prior probabilities. We will focus the sensitivity analysis on these numbers, although a similar approach could be applied to the payoffs given in the table. The sum of the two prior probabilities must equal 1, so increasing one of these probabilities automatically decreases the other one by the same amount, and vice versa. Goferbroke's management feels that the true chances of having oil on the tract of land are likely to lie somewhere between 15 and 35 percent.
In other words, the true prior probability of having oil is likely to be in the range from 0.15 to 0.35, so the corresponding prior probability of the land being dry would range from 0.85 to 0.65. Letting

p = prior probability of oil,

the expected payoff from drilling for any p is

E[Payoff(drill)] = 700p − 100(1 − p) = 800p − 100.

The slanting line in Fig. 16.1 shows the plot of this expected payoff versus p. Since the payoff from selling the land would be 90 for any p, the flat line in Fig. 16.1 gives E[Payoff(sell)] versus p. The four dots in Fig. 16.1 show the expected payoff for the two decision alternatives when p = 0.15 or p = 0.35. When p = 0.15, the decision swings over to selling the land by a wide margin (an expected payoff of 90 versus only 20 for drilling). However, when p = 0.35, the decision is to drill by a wide margin (expected payoff = 180 versus only 90 for selling). Thus, the decision is very sensitive to p. This sensitivity analysis has revealed that it is important to do more, if possible, to develop a more precise estimate of the true value of p.

■ FIGURE 16.1 Graphical display of how the expected payoff for each decision alternative changes when the prior probability of oil changes for the first Goferbroke Co. problem. [The line for drilling rises from −100 at p = 0 to 700 at p = 1, crossing the flat line at 90 for selling at the crossover point; selling should be the decision to the left of the crossover point and drilling to the right.]

The point in Fig. 16.1 where the two lines intersect is the crossover point where the decision shifts from one alternative (sell the land) to the other (drill for oil) as the prior probability increases. To find this point, we set

E[Payoff(drill)] = E[Payoff(sell)]
800p − 100 = 90
p = 190/800 = 0.2375

Conclusion:
Should sell the land if p < 0.2375.
Should drill for oil if p > 0.2375.

Thus, when trying to refine the estimate of the true value of p, the key question is whether it is smaller or larger than 0.2375.

For other problems that have more than two decision alternatives, the same kind of analysis can be applied. The main difference is that there now would be more than two lines (one per alternative) in the graphical display corresponding to Fig. 16.1. However, the top line for any particular value of the prior probability still indicates which alternative should be chosen. With more than two lines, there might be more than one crossover point where the decision shifts from one alternative to another. You can see another example of performing this kind of analysis with three decision alternatives in the Solved Examples section of the book's website. (This same example also illustrates the application of all three decision criteria considered in this section.)

For a problem with more than two possible states of nature, the most straightforward approach is to focus the sensitivity analysis on only two states at a time as described above. This again would involve investigating what happens when the prior probability of one state increases as the prior probability of the other state decreases by the same amount, holding fixed the prior probabilities of the remaining states. This procedure then can be repeated for as many other pairs of states as desired.

Because the decision the Goferbroke Co. should make depends so critically on the true probability of oil, serious consideration should be given to conducting a seismic survey to estimate this probability more closely. We will explore this option in the next two sections.
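This sensitivity analysis lends itself to a short sketch (illustrative Python only, not part of the book's software); the expressions for the expected payoffs are the ones derived above.

```python
# Bayes' decision rule and its sensitivity to the prior probability of oil, p.
# E[Payoff(drill)] = 700p - 100(1 - p) = 800p - 100, while E[Payoff(sell)] = 90.
def ep_drill(p):
    return 800 * p - 100

def ep_sell(p):
    return 90.0

# Baseline prior p = 0.25 reproduces Bayes' decision rule for Table 16.2.
assert ep_drill(0.25) > ep_sell(0.25)  # 100 > 90, so drill

# Crossover point where 800p - 100 = 90.
crossover = 190 / 800
print(crossover)                       # 0.2375

# At the ends of management's range for p, the decision flips.
print(ep_drill(0.15), ep_sell(0.15))   # 20.0 90.0 -> sell
print(ep_drill(0.35), ep_sell(0.35))   # 180.0 90.0 -> drill
```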
■ 16.3 DECISION MAKING WITH EXPERIMENTATION

Frequently, additional testing (experimentation) can be done to improve the preliminary estimates of the probabilities of the respective states of nature provided by the prior probabilities. These improved estimates are called posterior probabilities. We first update the Goferbroke Co. example to incorporate experimentation, then describe how to derive the posterior probabilities, and finally discuss how to decide whether it is worthwhile to conduct experimentation.

Continuing the Prototype Example

As mentioned at the end of Sec. 16.1, an available option before making a decision is to conduct a detailed seismic survey of the land to obtain a better estimate of the probability of oil. The cost is $30,000. A seismic survey obtains seismic soundings that indicate whether the geological structure is favorable to the presence of oil. We will divide the possible findings of the survey into the following two categories:

USS: Unfavorable seismic soundings; oil is fairly unlikely.
FSS: Favorable seismic soundings; oil is fairly likely.

Based on past experience, if there is oil, then the probability of unfavorable seismic soundings is

P(USS | State = Oil) = 0.4,  so  P(FSS | State = Oil) = 1 − 0.4 = 0.6.

Similarly, if there is no oil (i.e., the true state of nature is Dry), then the probability of unfavorable seismic soundings is estimated to be

P(USS | State = Dry) = 0.8,  so  P(FSS | State = Dry) = 1 − 0.8 = 0.2.

We soon will use these data to find the posterior probabilities of the respective states of nature given the seismic soundings.

Posterior Probabilities

Proceeding now in general terms, we let

n = number of possible states of nature;
P(State = state i) = prior probability that true state of nature is state i, for i = 1, 2, . . .
, n;
Finding = finding from experimentation (a random variable);
Finding j = one possible value of finding;
P(State = state i | Finding = finding j) = posterior probability that true state of nature is state i, given that Finding = finding j, for i = 1, 2, . . . , n.

The question currently being addressed is the following: Given P(State = state i) and P(Finding = finding j | State = state i), for i = 1, 2, . . . , n, what is P(State = state i | Finding = finding j)?

This question is answered by combining the following standard formulas of probability theory:

P(State = state i | Finding = finding j) = P(State = state i, Finding = finding j) / P(Finding = finding j),

P(Finding = finding j) = Σk P(State = state k, Finding = finding j), summed over k = 1, 2, . . . , n,

P(State = state i, Finding = finding j) = P(Finding = finding j | State = state i) P(State = state i).

Therefore, for each i = 1, 2, . . . , n, the desired formula for the corresponding posterior probability is

P(State = state i | Finding = finding j) = P(Finding = finding j | State = state i) P(State = state i) / Σk P(Finding = finding j | State = state k) P(State = state k),

where the sum in the denominator is over k = 1, 2, . . . , n. (This formula often is referred to as Bayes' theorem because it was developed by Thomas Bayes, the same 18th-century mathematician who is credited with developing Bayes' decision rule.)

An Application Vignette

The Workers' Compensation Board (WCB) of British Columbia, Canada, is responsible for the occupational health and safety, rehabilitation, and compensation interests of this province's workers and employers. In 2011, it accepted nearly 104,000 claims for compensation. A key factor in controlling WCB costs is to identify those short-term disability claims that pose a potentially high financial risk of converting into a far more expensive long-term disability claim unless there is intensive early claim-management intervention to provide the needed medical treatment and rehabilitation. The question was how to accurately identify these high-risk claims so as to minimize the expected total cost of claim compensation and claim-management intervention.

An OR team was formed to study this problem by applying decision analysis. For each of numerous categories of injury claims, based on the nature of the injury, the gender and age of the worker, etc., a decision tree was used to evaluate whether that category should be classified as low risk (not requiring intervention) or high risk (requiring intervention), depending on the severity of the injury. For each category, a calculation was made of the cutoff point on the critical number of short-term disability claim days paid that would trigger claim-management intervention, so as to minimize the expected cost of claim payments and intervention. A key in making this calculation was assessing the posterior probability that a claim would become a long-term disability claim, given the number of short-term disability claim days paid. This application of decision analysis with decision trees is now saving WCB approximately US $4 million per year while also enabling some injured workers to return to work sooner.

Source: E. Urbanovich, E. E. Young, M. L. Puterman, and S. O. Fattedad: "Early Detection of High-Risk Claims at the Workers' Compensation Board of British Columbia," Interfaces, 33(4): 15–26, July–Aug. 2003. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Now let us return to the prototype example and apply this formula. If the finding of the seismic survey is unfavorable seismic soundings (USS), then the posterior probabilities are

P(State = Oil | Finding = USS) = 0.4(0.25) / [0.4(0.25) + 0.8(0.75)] = 1/7,
P(State = Dry | Finding = USS) = 1 − 1/7 = 6/7.

Similarly, if the seismic survey gives favorable seismic soundings (FSS), then

P(State = Oil | Finding = FSS) = 0.6(0.25) / [0.6(0.25) + 0.2(0.75)] = 1/2,
P(State = Dry | Finding = FSS) = 1 − 1/2 = 1/2.

■ FIGURE 16.2 Probability tree diagram for the full Goferbroke Co. problem showing all the probabilities leading to the calculation of each posterior probability of the state of nature given the finding of the seismic survey. [Its columns give the prior probabilities P(state), the conditional probabilities P(finding | state), the joint probabilities P(state and finding), and the posterior probabilities P(state | finding):

Oil and FSS:  0.25(0.6) = 0.15,  so  Oil, given FSS:  0.15/0.3 = 0.5
Oil and USS:  0.25(0.4) = 0.10,  so  Oil, given USS:  0.1/0.7 ≈ 0.14
Dry and FSS:  0.75(0.2) = 0.15,  so  Dry, given FSS:  0.15/0.3 = 0.5
Dry and USS:  0.75(0.8) = 0.60,  so  Dry, given USS:  0.6/0.7 ≈ 0.86

Unconditional probabilities: P(FSS) = 0.15 + 0.15 = 0.3; P(USS) = 0.1 + 0.6 = 0.7.]

The probability tree diagram in Fig. 16.2 shows a nice way of organizing these calculations in an intuitive manner. The prior probabilities in the first column and the conditional probabilities in the second column are part of the input data for the problem. Multiplying each probability in the first column by a probability in the second column gives the corresponding joint probability in the third column. Each joint probability then becomes the numerator in the calculation of the corresponding posterior probability in the fourth column. Cumulating the joint probabilities with the same finding (as shown at the bottom of the figure) provides the denominator for each posterior probability with this finding. (If you would like to see another example of using a probability tree diagram to determine the posterior probabilities, one is included in the Solved Examples section of the book's website.) Your OR Courseware also includes an Excel template for computing these posterior probabilities, as shown in Fig. 16.3.
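The calculations organized by the probability tree can also be sketched in a few lines of Python (an illustration only; the book's own tool for this is the Excel template of Fig. 16.3):

```python
# Posterior probabilities for the Goferbroke seismic survey via Bayes' theorem,
# mirroring the columns of the probability tree diagram in Fig. 16.2.
prior = {"Oil": 0.25, "Dry": 0.75}
likelihood = {  # P(finding | state)
    "Oil": {"USS": 0.4, "FSS": 0.6},
    "Dry": {"USS": 0.8, "FSS": 0.2},
}

def posterior(finding):
    # Joint probabilities P(state and finding) = P(state) * P(finding | state).
    joint = {s: prior[s] * likelihood[s][finding] for s in prior}
    total = sum(joint.values())  # unconditional P(finding)
    return total, {s: joint[s] / total for s in joint}

p_uss, post_uss = posterior("USS")
p_fss, post_fss = posterior("FSS")
print(round(p_uss, 3), round(post_uss["Oil"], 3))  # 0.7 0.143
print(round(p_fss, 3), round(post_fss["Oil"], 3))  # 0.3 0.5
```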
After these computations have been completed, Bayes' decision rule can be applied just as before, with the posterior probabilities now replacing the prior probabilities. Again, by using the payoffs (in units of thousands of dollars) from Table 16.2 and subtracting the cost of the experimentation, we obtain the results shown below.

Expected payoffs if finding is unfavorable seismic soundings (USS):

E[Payoff(drill | Finding = USS)] = (1/7)(700) + (6/7)(−100) − 30 = −15.7.
E[Payoff(sell | Finding = USS)] = (1/7)(90) + (6/7)(90) − 30 = 60.

Expected payoffs if finding is favorable seismic soundings (FSS):

E[Payoff(drill | Finding = FSS)] = (1/2)(700) + (1/2)(−100) − 30 = 270.
E[Payoff(sell | Finding = FSS)] = (1/2)(90) + (1/2)(90) − 30 = 60.

■ FIGURE 16.3 This posterior probabilities template in your OR Courseware enables efficient calculation of posterior probabilities, as illustrated here for the full Goferbroke Co. problem.

Since the objective is to maximize the expected payoff, these results yield the optimal policy shown in Table 16.5.

■ TABLE 16.5 The optimal policy with experimentation, under Bayes' decision rule, for the full Goferbroke Co. problem

Finding from      Optimal          Expected Payoff            Expected Payoff
Seismic Survey    Alternative      Excluding Cost of Survey   Including Cost of Survey
USS               Sell the land           90                         60
FSS               Drill for oil          300                        270

However, what this analysis does not answer is whether it is worth spending $30,000 to conduct the experimentation (the seismic survey). Perhaps it would be better to forgo this major expense and just use the optimal solution without experimentation (drill for oil, with an expected payoff of $100,000). We address this issue next.

The Value of Experimentation

Before performing any experiment, we should determine its potential value. We present two complementary methods of evaluating its potential value.
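As an illustrative check on Table 16.5 (a sketch, not the book's template), the optimal policy for each finding can be recomputed from the posterior probabilities:

```python
# Expected payoffs (in $1000s) for each survey finding, net of the 30 survey cost,
# reproducing Table 16.5. Posterior probabilities are the ones derived above.
posterior = {"USS": {"Oil": 1 / 7, "Dry": 6 / 7},
             "FSS": {"Oil": 0.5, "Dry": 0.5}}
payoffs = {"Drill for oil": {"Oil": 700, "Dry": -100},
           "Sell the land": {"Oil": 90, "Dry": 90}}
survey_cost = 30

policy = {}
for finding, post in posterior.items():
    ep = {alt: sum(post[s] * payoffs[alt][s] for s in post) - survey_cost
          for alt in payoffs}
    best = max(ep, key=ep.get)
    policy[finding] = (best, round(ep[best], 1))

print(policy)  # {'USS': ('Sell the land', 60.0), 'FSS': ('Drill for oil', 270.0)}
```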
The first method assumes (unrealistically) that the experiment will remove all uncertainty about what the true state of nature is, and then this method makes a very quick calculation of what the resulting improvement in the expected payoff would be (ignoring the cost of the experiment). This quantity, called the expected value of perfect information, provides an upper bound on the potential value of the experiment. Therefore, if this upper bound is less than the cost of the experiment, the experiment definitely should be forgone. However, if this upper bound exceeds the cost of the experiment, then the second (slower) method should be used next. This method calculates the actual improvement in the expected payoff (ignoring the cost of the experiment) that would result from performing the experiment. Comparing this improvement (called the expected value of experimentation) with the cost indicates whether the experiment should be performed.

Expected Value of Perfect Information. Suppose now that the experiment could definitely identify what the true state of nature is, thereby providing "perfect" information. Whichever state of nature is identified, you naturally choose the action with the maximum payoff for that state. We do not know in advance which state of nature will be identified, so a calculation of the expected payoff with perfect information (ignoring the cost of the experiment) requires weighting the maximum payoff for each state of nature by the prior probability of that state of nature. This calculation is shown at the bottom of Table 16.6 for the full Goferbroke Co. problem, where the expected payoff with perfect information is 242.5. Thus, if the Goferbroke Co. could learn before choosing its action whether the land contains oil, the expected payoff as of now (before acquiring this information) would be $242,500 (excluding the cost of the experiment generating the information).
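A brief sketch of this calculation (illustrative Python only), using the payoff table and prior probabilities of Table 16.2 together with the expected payoff of 100 without experimentation found in Sec. 16.2:

```python
# Expected payoff with perfect information (Table 16.6) and the resulting
# expected value of perfect information (EVPI), in $1000s.
payoffs = {"Drill for oil": {"Oil": 700, "Dry": -100},
           "Sell the land": {"Oil": 90, "Dry": 90}}
prior = {"Oil": 0.25, "Dry": 0.75}

# Weight the best payoff for each state by that state's prior probability.
ep_perfect = sum(prior[s] * max(payoffs[alt][s] for alt in payoffs) for s in prior)
print(ep_perfect)  # 242.5

# Upper bound on the value of any experiment: EVPI = 242.5 - 100 = 142.5.
ep_without = max(sum(prior[s] * payoffs[alt][s] for s in prior) for alt in payoffs)
evpi = ep_perfect - ep_without
print(ep_without, evpi)  # 100.0 142.5
```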
To evaluate whether the experiment should be conducted, we now use this quantity to calculate the expected value of perfect information.

The expected value of perfect information, abbreviated EVPI, is calculated as

EVPI = expected payoff with perfect information − expected payoff without experimentation.²

■ TABLE 16.6 Expected payoff with perfect information for the full Goferbroke Co. problem

                          State of Nature
Alternative               Oil       Dry
1. Drill for oil          700      −100
2. Sell the land           90        90
Maximum payoff            700        90
Prior probability         0.25      0.75

Expected payoff with perfect information = 0.25(700) + 0.75(90) = 242.5

²The value of perfect information is a random variable equal to the payoff with perfect information minus the payoff without experimentation. EVPI is the expected value of this random variable.

Thus, since experimentation usually cannot provide perfect information, EVPI provides an upper bound on the expected value of experimentation. For this same example, we found in Sec. 16.2 that the expected payoff without experimentation (under Bayes' decision rule) is 100. Therefore,

EVPI = 242.5 − 100 = 142.5.

Since 142.5 far exceeds 30, the cost of experimentation (a seismic survey), it may be worthwhile to proceed with the seismic survey. To find out for sure, we now go to the second method of evaluating the potential benefit of experimentation.

Expected Value of Experimentation. Rather than just obtain an upper bound on the expected increase in payoff (excluding the cost of the experiment) due to performing experimentation, we now will do somewhat more work to calculate this expected increase directly. This quantity is called the expected value of experimentation. (It also is sometimes called the expected value of sample information.) Calculating this quantity requires first computing the expected payoff with experimentation (excluding the cost of the experiment).
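A sketch of this bookkeeping (illustrative Python, not from the book), using P(USS) = 0.7 and P(FSS) = 0.3 from Fig. 16.2 and the optimal expected payoffs per finding, excluding the survey cost, from Table 16.5:

```python
# Expected value of experimentation (EVE) for the Goferbroke problem, in $1000s.
p_finding = {"USS": 0.7, "FSS": 0.3}
best_ep = {"USS": 90, "FSS": 300}  # sell after USS, drill after FSS (Table 16.5)

# Weight each finding's optimal expected payoff by the finding's probability.
ep_with_experimentation = sum(p_finding[f] * best_ep[f] for f in p_finding)
eve = ep_with_experimentation - 100  # minus expected payoff without experimentation

print(round(ep_with_experimentation, 1))  # 153.0
print(round(eve, 1))                      # 53.0
print(eve > 30)                           # True: the survey is worth its cost
```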
Obtaining this latter quantity requires doing all the work described earlier to find all the posterior probabilities, the resulting optimal policy with experimentation, and the corresponding expected payoff (excluding the cost of the experiment) for each possible finding from the experiment. Then each of these expected payoffs needs to be weighted by the probability of the corresponding finding, that is,

Expected payoff with experimentation = Σj P(Finding = finding j) E[payoff | Finding = finding j],

where the summation is taken over all possible values of j.

For the prototype example, we have already done all the work to obtain the terms on the right side of this equation. The values of P(Finding = finding j) for the two possible findings from the seismic survey—unfavorable (USS) and favorable (FSS)—were calculated at the bottom of the probability tree diagram in Fig. 16.2 as

P(USS) = 0.7,  P(FSS) = 0.3.

For the optimal policy with experimentation, the corresponding expected payoff (excluding the cost of the seismic survey) for each finding was obtained in the third column of Table 16.5 as

E(Payoff | Finding = USS) = 90,
E(Payoff | Finding = FSS) = 300.

With these numbers,

Expected payoff with experimentation = 0.7(90) + 0.3(300) = 153.

Now we are ready to calculate the expected value of experimentation:

The expected value of experimentation, abbreviated EVE, is calculated as

EVE = expected payoff with experimentation − expected payoff without experimentation.

Thus, EVE identifies the potential value of experimentation. For the Goferbroke Co.,

EVE = 153 − 100 = 53.

Since this value exceeds 30, the cost of conducting a detailed seismic survey (in units of thousands of dollars), this experimentation should be done.

■ 16.4 DECISION TREES

Decision trees provide a useful way of visually displaying the problem and then organizing the computational work already described in the preceding two sections.
These trees are especially helpful when a sequence of decisions must be made.

Constructing the Decision Tree

The prototype example involves a sequence of two decisions:

1. Should a seismic survey be conducted before an action is chosen?
2. Which action (drill for oil or sell the land) should be chosen?

The corresponding decision tree (before adding numbers and performing computations) is displayed in Fig. 16.4. The junction points in the decision tree are referred to as nodes (or forks), and the lines are called branches. A decision node, represented by a square, indicates that a decision needs to be made at that point in the process. An event node (or chance node), represented by a circle, indicates that a random event occurs at that point.

■ FIGURE 16.4 The decision tree (before including any numbers) for the full Goferbroke Co. problem. [Decision node a branches into doing the seismic survey or not. With a survey, event node b branches on an unfavorable or favorable finding, leading to decision nodes c and d, respectively; without a survey, decision node e is reached directly. At each of nodes c, d, and e, the choices are to drill for oil, which leads to event nodes f, g, and h with branches Oil and Dry, or to sell the land.]

An Application Vignette

The Westinghouse Science and Technology Center historically has been the Westinghouse Electric Corporation's main research and development (R&D) arm to develop new technology. The process of evaluating R&D projects to decide which ones should be initiated and then which ones should be continued as progress is made (or not made) is particularly challenging for management because of the great uncertainties and very long time horizons involved. The actual launch date for an embryonic technology may be years, even decades, removed from its inception as a modest R&D proposal to investigate the technology's potential. As the Center came under increasing pressure to reduce costs and deliver high-impact technology quickly, the Center's controller funded an operations research project to improve this evaluation process.

The OR team developed a decision tree approach to analyzing any R&D proposal while considering its complete sequence of key decision points. The first decision point is whether to fund the proposed embryonic project for the first year or so. If its early technical milestones are reached, the next decision point is whether to continue funding the project for some period. This may then be repeated one or more times. If the late technical milestones are reached, the next decision point is whether to prelaunch because the innovation still meets strategic business objectives. If a strategic fit is achieved, the final decision point is whether to commercialize the innovation now or to delay its launch, or to abandon it altogether. A decision tree with a progression of decision nodes and intervening event nodes provides a natural way of depicting and analyzing such an R&D project.

Source: R. K. Perdue, W. J. McAllister, P. V. King, and B. G. Berkey: "Valuation of R and D Projects Using Options Pricing and Decision Analysis Models," Interfaces, 29(6): 57–74, Nov.–Dec. 1999. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Thus, in Fig. 16.4, the first decision is represented by decision node a. Node b is an event node representing the random event of the outcome of the seismic survey. The two branches emanating from event node b represent the two possible outcomes of the survey. Next comes the second decision (nodes c, d, and e) with its two possible choices. If the decision is to drill for oil, then we come to another event node (nodes f, g, and h), where its two branches correspond to the two possible states of nature. Note that the path followed from node a to reach any terminal branch (except the bottom one) is determined both by the decisions made and by random events that are outside the control of the decision maker. This is characteristic of problems addressed by decision analysis. The next step in constructing the decision tree is to insert numbers into the tree as shown in Fig. 16.5.
The numbers under or over the branches that are not in parentheses are the cash flows (in thousands of dollars) that occur at those branches. For each path through the tree from node a to a terminal branch, these same numbers then are added to obtain the resulting total payoff shown in boldface to the right of that branch. The last set of numbers gives the probabilities of random events. In particular, since each branch emanating from an event node represents a possible random event, the probability of this event occurring from this node has been inserted in parentheses along this branch. From event node h, the probabilities are the prior probabilities of these states of nature, since no seismic survey has been conducted to obtain more information in this case. However, event nodes f and g lead out of a decision to do the seismic survey (and then to drill). Therefore, the probabilities from these event nodes are the posterior probabilities of the states of nature, given the finding from the seismic survey, where these numbers are given in Figs. 16.2 and 16.3. Finally, we have the two branches emanating from event node b. The numbers here are the probabilities of these findings from the seismic survey, Favorable (FSS) or Unfavorable (USS), as given underneath the probability tree diagram in Fig. 16.2 or in cells C15:C16 of Fig. 16.3.

■ FIGURE 16.5 The decision tree in Fig. 16.4 after adding both the probabilities of random events and the payoffs.

Performing the Analysis

Having constructed the decision tree, including its numbers, we now are ready to analyze the problem by using the following procedure:
1.
Start at the right side of the decision tree and move left one column at a time. For each column, perform either step 2 or step 3, depending upon whether the nodes in that column are event nodes or decision nodes.
2. For each event node, calculate its expected payoff by multiplying the expected payoff of each branch (shown in boldface to the right of the branch) by the probability of that branch and then summing these products. Record this expected payoff for each event node in boldface next to the node, and designate this quantity as also being the expected payoff for the branch leading to this node.
3. For each decision node, compare the expected payoffs of its branches and choose the alternative whose branch has the largest expected payoff. In each case, record the choice on the decision tree by inserting a double dash as a barrier through each rejected branch.

To begin the procedure, consider the rightmost column of nodes, namely, event nodes f, g, and h. Applying step 2, their expected payoffs (EP) are calculated as

EP = (1/7)(670) + (6/7)(−130) = −15.7,  for node f,
EP = (1/2)(670) + (1/2)(−130) = 270,  for node g,
EP = (1/4)(700) + (3/4)(−100) = 100,  for node h.

■ FIGURE 16.6 The final decision tree that records the analysis for the full Goferbroke Co. problem when using monetary payoffs.

These expected payoffs then are placed above these nodes, as shown in Fig. 16.6. Next, we move one column to the left, which consists of decision nodes c, d, and e. The expected payoff for a branch that leads to an event node now is recorded in boldface over that event node.
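These step-2 calculations, together with the remaining rollback steps described next, can be sketched in a few lines of Python (a hand-rolled illustration, not the book's software; the node names follow Fig. 16.4, and the total payoffs and probabilities are those given in the text):

```python
# Rollback (backward induction) on the full Goferbroke tree, using the
# total payoffs (in $ thousands) and probabilities from the text.

def event(branches):
    """Step 2: expected payoff at an event node = sum of prob * payoff."""
    return sum(prob * payoff for prob, payoff in branches)

def decision(alternatives):
    """Step 3: best expected payoff at a decision node."""
    return max(alternatives)

f = event([(1/7, 670), (6/7, -130)])   # drill after unfavorable survey
g = event([(1/2, 670), (1/2, -130)])   # drill after favorable survey
h = event([(1/4, 700), (3/4, -100)])   # drill with no survey
c, d, e = decision([f, 60]), decision([g, 60]), decision([h, 90])
b = event([(0.7, c), (0.3, d)])        # survey-outcome event node
a = decision([b, e])                   # do survey vs. no survey

print(round(f, 1), round(b), round(a))   # -15.7 123 123
```

Rounding aside, these are exactly the numbers recorded above nodes f, b, and a in Fig. 16.6.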
Therefore, step 3 can be applied as follows:

Node c: Drill alternative has EP = −15.7. Sell alternative has EP = 60. 60 > −15.7, so choose the Sell alternative.
Node d: Drill alternative has EP = 270. Sell alternative has EP = 60. 270 > 60, so choose the Drill alternative.
Node e: Drill alternative has EP = 100. Sell alternative has EP = 90. 100 > 90, so choose the Drill alternative.

The expected payoff for each chosen alternative now would be recorded in boldface over its decision node, as already shown in Fig. 16.6. The chosen alternative also is indicated by inserting a double dash as a barrier through each rejected branch. Next, moving one more column to the left brings us to node b. Since this is an event node, step 2 of the procedure needs to be applied. The expected payoff for each of its branches is recorded over the following decision node. Therefore, the expected payoff is

EP = 0.7(60) + 0.3(270) = 123,  for node b,

as recorded over this node in Fig. 16.6. Finally, we move left to node a, a decision node. Applying step 3 yields

Node a: Do seismic survey has EP = 123. No seismic survey has EP = 100. 123 > 100, so choose Do seismic survey.

This expected payoff of 123 now would be recorded over the node, and a double dash inserted to indicate the rejected branch, as already shown in Fig. 16.6. This procedure has moved from right to left for analysis purposes. However, having completed the decision tree in this way, the decision maker now can read the tree from left to right to see the actual progression of events. The double dashes have closed off the undesirable paths. Therefore, given the payoffs for the final outcomes shown on the right side, Bayes' decision rule says to follow only the open paths from left to right to achieve the largest possible expected payoff. Following the open paths from left to right in Fig. 16.6 yields the following optimal policy, according to Bayes' decision rule:

Optimal policy: Do the seismic survey.
If the result is unfavorable, sell the land. If the result is favorable, drill for oil. The expected payoff (including the cost of the seismic survey) is 123 ($123,000).

This (unique) optimal solution naturally is the same as that obtained in the preceding section without the benefit of a decision tree. (See the optimal policy with experimentation given in Table 16.5 and the conclusion at the end of Sec. 16.3 that experimentation is worthwhile.) For any decision tree, this backward induction procedure always will lead to the optimal policy (or policies) after the probabilities are computed for the branches emanating from an event node. Another example of solving a decision tree in this way is included in the Solved Examples section of the book's website.

■ 16.5 USING SPREADSHEETS TO PERFORM SENSITIVITY ANALYSIS ON DECISION TREES

Some helpful spreadsheet software now is available for constructing and analyzing decision trees on spreadsheets. We will describe and illustrate how to use Analytic Solver Platform for Education (ASPE) to construct and analyze decision trees in Excel. Instructions for installing this software are on the very first page of the book (before the title page) and also on the book's website, www.mhhe.com/hillier. If you are a Mac user (ASPE is not compatible with Mac versions of Excel) or you or your instructor simply prefer to use different software, a supplement to this chapter on the website contains instructions for TreePlan, another popular Excel add-in for constructing and analyzing decision trees in Excel. To simplify the discussion, we will begin by illustrating the construction of a small decision tree for the first Goferbroke Co. problem (no consideration of conducting a seismic survey) before considering the full problem.

Using ASPE to Construct the Decision Tree for the First Goferbroke Co. Problem

Consider the first Goferbroke Co.
problem (no seismic survey) as summarized earlier in Table 16.2. To begin creating a decision tree using ASPE, select Add Node from the Decision Tree/Node menu. This brings up the dialog box shown in Fig. 16.7. Here you can choose the type of node (Decision or Event), give names to each of the branches, and specify a value for each branch (the partial payoff associated with that branch). The default names for the branches of a decision node in ASPE are Decision 1 and Decision 2. These can be changed (or more branches added) by double-clicking on the branch name (or in the next blank row to add a branch) and typing in a new name. The initial node in the first Goferbroke problem is a decision node with two branches: Drill and Sell. The payoff associated with drilling is –100 (the $100,000 cost of drilling) and the payoff associated with selling is 90 (the $90,000 selling price). After making all of these entries as shown in Fig. 16.7, clicking OK then yields the decision tree shown in Fig. 16.8. If the decision is to drill, the next event is to learn whether or not the land contains oil. To create an event node, click on the cell containing the triangle terminal node at the end of the drill branch (cell F3 in Fig. 16.8) and choose Add Node from the Decision Tree/Node menu on the ASPE ribbon to bring up the dialog box shown in Fig. 16.9. The node is an event node with two branches, Oil and Dry, with probabilities 0.25 and 0.75, respectively, and values (partial payoffs) of 800 and 0, respectively, as entered into the dialog box in Fig. 16.9. After clicking OK, the final decision tree is shown in Fig. 16.10. (Note that ASPE, by default, shows all probabilities as a percentage, with 25% and 75% in H1 and H6, rather than 0.25 and 0.75.) ■ FIGURE 16.7 The Decision Tree dialog box used to specify that the initial node of the first Goferbroke problem is a decision node with two branches, Drill and Sell, with values (partial payoffs) of –100 and 90, respectively. 
■ FIGURE 16.8 The initial, partial decision tree created by ASPE by selecting Add Node from the Decision Tree/Node menu on the ASPE ribbon and specifying a Decision node with two branches named Drill and Sell, with partial payoffs of –100 and 90, respectively.

■ FIGURE 16.9 The Decision Tree dialog box used to specify that the second node of the first Goferbroke problem is an event node with two branches, Oil and Dry, with values (partial payoffs) of 800 and 0, and with probabilities of 0.25 and 0.75, respectively.

■ FIGURE 16.10 The decision tree constructed and solved by ASPE for the first Goferbroke Co. problem as presented in Table 16.2, where the 1 in cell B9 indicates that the top branch (the Drill alternative) should be chosen.

At any time, you also can click on any existing node and make changes using various choices under the Decision Tree menu on the ASPE ribbon. For example, under the Node submenu, you can choose Add Node, Change Node, Delete Node, Copy Node, or Paste Node. Under the Branch submenu, you can choose Add Branch, Change Branch, or Delete Branch.

The Decision Tree for the Full Goferbroke Co. Problem

Now consider the full Goferbroke Co. problem, where the first decision to be made is whether to conduct a seismic survey. Continuing the procedure described above, ASPE would be used to construct and solve the decision tree shown in Fig. 16.11. Although the form is somewhat different, note that this decision tree is completely equivalent to the one in Fig. 16.6. Besides the convenience of constructing the tree directly on a spreadsheet, ASPE also provides the key advantage of automatically solving the decision tree. Rather than relying on hand calculations as in Fig. 16.6, ASPE instantaneously calculates all the expected payoffs at each stage of the tree, as shown next to each node, as soon as the decision tree is constructed.
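What ASPE computes automatically is easy to verify by hand for the small first-problem tree of Fig. 16.10; a minimal sketch (illustrative Python, not ASPE itself):

```python
# First Goferbroke problem: sum the partial payoffs along each branch,
# then compare the expected payoff of drilling with the sure sale price.

p_oil = 0.25
drill_payoffs = {"Oil": -100 + 800,   # drill cost plus oil revenue = 700
                 "Dry": -100 + 0}     # drill cost with no revenue = -100
ep_drill = p_oil * drill_payoffs["Oil"] + (1 - p_oil) * drill_payoffs["Dry"]
ep_sell = 90

print(ep_drill, ep_sell)   # 100.0 90 -> choose the top branch (Drill)
```

This matches the 1 that ASPE places in cell B9 to indicate that the Drill branch should be chosen.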
Instead of using double dashes, ASPE puts a number inside each decision node indicating which branch should be chosen (assuming the branches emanating from that node are numbered consecutively from top to bottom). Organizing the Spreadsheet to Perform Sensitivity Analysis The end of Sec. 16.2 illustrated how sensitivity analysis can be performed on a small problem (the first Goferbroke Co. problem), where only a single decision (drill or sell) needs to be made. In that case, the analysis was quite straightforward because the expected payoff for each decision alternative could be expressed as a simple function of the model parameter (the prior probability of oil) being considered. By contrast, when a sequence of decisions needs to be made, as for the full Goferbroke Co. problem, sensitivity analysis becomes somewhat more involved. There now are more model parameters (the various costs, revenues, and probabilities) that might have sufficient uncertainty to warrant performing sensitivity analysis. Furthermore, finding the maximum expected payoff for any particular values of the model parameters now requires solving a decision tree. Therefore, using spreadsheet software such as ASPE that automatically solves the decision tree becomes very helpful. Beginning with the spreadsheet that already contains the decision tree, the next step is to expand and organize this spreadsheet for performing sensitivity analysis. We now will illustrate this for the full Goferbroke Co. problem by starting with the spreadsheet in Fig. 16.11 that contains the decision tree constructed by ASPE. It is helpful to begin by consolidating the data and results into a new section, as shown on the right-hand side of Fig. 16.12. All the data cells in the decision tree now would need to make reference to the consolidated data cells (cells V4:V11), as illustrated by the formulas shown for cells P6 and P11 at the bottom of the figure. 
Similarly, the summarized results to the right of the decision tree make reference to the output cells within the decision tree (the decision nodes in cells B29, F41, J11, and J26, as well as the expected payoff in cell A30) by using the formulas for cells U19, V15, V26, and W19:W20 displayed at the bottom of Fig. 16.12. The probability data in the decision tree are complicated by the fact that the posterior probabilities will need to be updated any time a change is made in any of the prior probability data. Fortunately, the template for calculating posterior probabilities (as shown in Fig. 16.3) can be used to do these calculations. The relevant portion of this template (B3:H19) has been copied (using the Copy and Paste commands in the Edit menu) to the spreadsheet in Fig. 16.12 (now appearing in U30:AA46). The data for the template refer to the probability data in the data cells PriorProbabilityOfOil (V9), ProbFSSGivenOil (V10), and ProbUSSGivenDry (V11), as shown in the formulas for cells V33:X34 at the bottom of Fig. 16.12. The template automatically calculates the probability of each finding and the posterior probabilities (in cells V42:X43) based on these data. The decision tree then refers to these calculated probabilities when they are needed, as shown in the formulas for cells P3:P11 in Fig. 16.12.

■ FIGURE 16.11 The decision tree constructed and solved by ASPE for the full Goferbroke Co. problem that also considers whether to do a seismic survey.

Consolidating the data and results offers a couple of advantages. First, it ensures that each piece of data is in only one place. Each time that piece of data is needed in the decision tree, a reference is made to the single data cell. This greatly simplifies
sensitivity analysis. To change a piece of data, you need to change it in only one place rather than searching through the entire tree to find and change all occurrences of that piece of data. A second advantage of consolidating the data and results is that it makes it easy for anyone to interpret the model. It is not necessary to understand ASPE or how to read a decision tree in order to see what data were used in the model or what the suggested plan of action and expected payoff are. While it takes some time and effort to consolidate the data and results, including all the necessary cross-referencing, this step is truly essential for performing sensitivity analysis.

■ FIGURE 16.12 In preparation for performing sensitivity analysis on the full Goferbroke problem, the data and results have been consolidated on the spreadsheet to the right of the decision tree.

Many pieces of data are used in several places on the decision tree. For example, the revenue if Goferbroke finds oil appears in cells P6, P21, and L36. Performing sensitivity analysis on this piece of data now requires changing its value in only one place (cell V6) rather than three (cells P6, P21, and L36). The benefits of consolidation are even more important for the probability data. Changing any prior probability may cause all the posterior probabilities to change. By including the posterior probability template, you can change the prior probability in one place, and then all the other probabilities are calculated and updated appropriately. After making any change in the cost data, revenue data, or probability data in Fig. 16.12, the spreadsheet nicely summarizes the new results after the actual work to obtain these results is instantly done by the posterior probability template and the decision tree. Therefore, experimenting with alternative data values in a trial-and-error manner is one useful way of performing sensitivity analysis. Now let's see how this sensitivity analysis can be done more systematically by using a data table.
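The posterior-probability template is an application of Bayes' theorem. As a cross-check on what the template computes, here is a minimal Python sketch; the conditional probabilities P(FSS | Oil) = 0.6 and P(USS | Dry) = 0.8 are the Goferbroke survey data, which are consistent with the posterior probabilities quoted in the text (treat this as an illustration, not the spreadsheet template itself):

```python
# Bayes' theorem calculation behind the posterior-probability template.
# Prior P(Oil) = 0.25; survey conditionals P(FSS | Oil) = 0.6, P(USS | Dry) = 0.8.

def posteriors(p_oil, p_fss_given_oil, p_uss_given_dry):
    p_dry = 1 - p_oil
    p_fss = p_oil * p_fss_given_oil + p_dry * (1 - p_uss_given_dry)
    p_uss = 1 - p_fss
    return {
        "P(FSS)": p_fss,                                  # probability of each finding
        "P(USS)": p_uss,
        "P(Oil | FSS)": p_oil * p_fss_given_oil / p_fss,  # posterior probabilities
        "P(Oil | USS)": p_oil * (1 - p_fss_given_oil) / p_uss,
    }

for name, value in posteriors(0.25, 0.6, 0.8).items():
    print(name, round(value, 3))   # 0.3, 0.7, 0.5, 0.143
```

These are the values 0.3, 0.7, 0.5, and 0.143 that appear on the branches of event nodes b, f, and g in Fig. 16.6.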
Using a Data Table to Do Sensitivity Analysis Systematically

To systematically determine how the decisions and expected payoffs change as the prior probability of oil (or any other data) changes, we could continue selecting new trial values of the prior probability of oil at random. However, a better approach is to systematically consider a range of values. A feature built into Excel, called a data table, is designed to perform just this sort of analysis. Data tables are used to show the results of certain output cells for various trial values of a data cell. To use data tables, first make a table on the spreadsheet with headings as shown in columns Y through AD in Fig. 16.13. In the first column of the table (Y5:Y15), list the trial values for the data cell (the prior probability of oil), except leave the first row blank. The headings of the next columns specify which output will be evaluated. For each of these columns, use the first row of the table (cells Y4:AD4) to write an equation that refers to the relevant output cell. In this case, the cells of interest are (1) the decision of whether to do the survey (V15), (2) if so, whether to drill if the survey is favorable or unfavorable (W19 and W20), (3) if not, whether to drill (U19), and (4) the value of ExpectedPayoff (V26). The equations for Y4:AD4 referring to these output cells are shown below the spreadsheet in Fig. 16.13. Next, select the entire table (Y4:AD15) and then choose Data Table from the What-If Analysis menu of the Data tab. In the Data Table dialog box (as shown at the bottom right of Fig. 16.13), indicate the column input cell (V9), which refers to the data cell that is being changed in the first column of the table. Clicking OK then generates the table shown in Fig. 16.13. For each trial value for the data cell listed in the first column of the table, the corresponding output cell values are calculated and displayed in the other columns of the table.
Some of the output in the data table is not relevant. For example, when the decision is to not do the survey in column Z, the results in columns AA and AB (what to do given favorable or unfavorable survey results) are not relevant. Similarly, when the decision is to do the survey in column Z, the results in column AC (what to do if you don't do the survey) are not relevant. The relevant output has been formatted in boldface to make it stand out compared to the irrelevant output. Figure 16.13 reveals that the optimal initial decision switches from Sell without a survey to doing the survey somewhere between 0.1 and 0.2 for the prior probability of oil, and then switches again to Drill without a survey somewhere between 0.3 and 0.4. Using the spreadsheet in Fig. 16.12, trial-and-error analysis soon leads to the following conclusions about how the optimal policy depends on this probability.

■ FIGURE 16.13 The data table that shows the optimal policy and expected payoff for various trial values of the prior probability of oil. Its contents are:

Prior Probability of Oil | Do Survey? | If Survey Favorable | If Survey Unfavorable | If No Survey | Expected Payoff ($thousands)
(blank)  | Yes | Drill | Sell  | Drill | 123
0        | No  | Sell  | Sell  | Sell  | 90
0.1      | No  | Drill | Sell  | Sell  | 90
0.2      | Yes | Drill | Sell  | Sell  | 102.8
0.3      | Yes | Drill | Sell  | Drill | 143.2
0.4      | No  | Drill | Drill | Drill | 220
0.5      | No  | Drill | Drill | Drill | 300
0.6      | No  | Drill | Drill | Drill | 380
0.7      | No  | Drill | Drill | Drill | 460
0.8      | No  | Drill | Drill | Drill | 540
0.9      | No  | Drill | Drill | Drill | 620
1        | No  | Drill | Drill | Drill | 700

The equations in the first row of the table refer to the output cells: Z4 = V15 (Do Survey?), AA4 = W19 (If Survey Favorable), AB4 = W20 (If Survey Unfavorable), AC4 = U19 (If No Survey), and AD4 = ExpectedPayoff.

Optimal Policy

Let p = prior probability of oil.
If p ≤ 0.168, then sell the land (no seismic survey).
If 0.169 ≤ p ≤ 0.308, then do the survey: drill if favorable and sell if not.
If p ≥ 0.309, then drill for oil (no seismic survey).
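These thresholds can be cross-checked by solving the tree for a sweep of trial values of p, which is what the Excel data table automates. A hedged Python sketch, again assuming the survey conditionals P(FSS | Oil) = 0.6 and P(USS | Dry) = 0.8 from the Goferbroke data:

```python
# Solve the full Goferbroke tree for a range of prior probabilities of oil,
# mirroring the data table of Fig. 16.13 (an illustration, not ASPE).

def best_expected_payoff(p):
    no_survey = max(800 * p - 100, 90.0)          # drill vs. sell, no survey
    p_fss = 0.6 * p + 0.2 * (1 - p)               # P(favorable finding)
    oil_fss = 0.6 * p / p_fss                     # posterior P(Oil | FSS)
    oil_uss = 0.4 * p / (1 - p_fss)               # posterior P(Oil | USS)
    survey = (p_fss * max(800 * oil_fss - 130, 60.0)
              + (1 - p_fss) * max(800 * oil_uss - 130, 60.0))
    return max(survey, no_survey)

for p in (0.1, 0.2, 0.3, 0.4):
    print(p, round(best_expected_payoff(p), 1))   # 90.0, 102.8, 143.2, 220.0
```

The printed values reproduce the boldface column of the data table, and best_expected_payoff(0.25) returns the 123 found earlier by hand.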
■ 16.6 UTILITY THEORY

Thus far, when applying Bayes' decision rule, we have assumed that the expected payoff in monetary terms is the appropriate measure of the consequences of taking an action. However, in many situations this assumption is inappropriate. For example, suppose that an individual is offered the choice of (1) accepting a 50:50 chance of winning $100,000 or nothing or (2) receiving $40,000 with certainty. Many people would prefer the $40,000 even though the expected payoff on the 50:50 chance of winning $100,000 is $50,000. A company may be unwilling to invest a large sum of money in a new product even when the expected profit is substantial if there is a risk of losing its investment and thereby becoming bankrupt. People buy insurance even though it is a poor investment from the viewpoint of the expected payoff.

Do these examples invalidate Bayes' decision rule? Fortunately, the answer is no, because there is a way of transforming monetary values to an appropriate scale that reflects the decision maker's preferences. This scale is called the utility function for money.

Utility Functions for Money

Figure 16.14 shows a typical utility function U(M) for money M. It indicates that an individual having this utility function would value obtaining $30,000 twice as much as $10,000 and would value obtaining $100,000 twice as much as $30,000. This reflects the fact that the person's highest-priority needs would be met by the first $10,000. Having this decreasing slope of the function as the amount of money increases is referred to as having a decreasing marginal utility for money. Such an individual is referred to as being risk-averse. However, not all individuals have a decreasing marginal utility for money.
Some people are risk seekers instead of risk-averse, and they go through life looking for the "big score." The slope of their utility function increases as the amount of money increases, so they have an increasing marginal utility for money. The intermediate case is that of a risk-neutral individual, who prizes money at its face value. Such an individual's utility for money is simply proportional to the amount of money involved. Although some people appear to be risk-neutral when only small amounts of money are involved, it is unusual to be truly risk-neutral with very large amounts. It also is possible to exhibit a mixture of these kinds of behavior. For example, an individual might be essentially risk-neutral with small amounts of money, then become a risk seeker with moderate amounts, and then turn risk-averse with large amounts. In addition, one's attitude toward risk can shift over time depending upon circumstances. An individual's attitude toward risk also may be different when dealing with one's personal finances than when making decisions on behalf of an organization. For example, managers of a business firm need to consider the company's circumstances and the collective philosophy of top management in determining the appropriate attitude toward risk when making managerial decisions.3

The fact that different people have different utility functions for money has an important implication for decision making in the face of uncertainty:

When a utility function for money is incorporated into a decision analysis approach to a problem, this utility function must be constructed to fit the preferences and values of the decision maker involved. (The decision maker can be either a single individual or a group of people.)

The scale of the utility function is irrelevant. In other words, it doesn't matter whether the values of U(M) at the dashed lines in Fig. 16.14 are 0.25, 0.5, 0.75, 1 (as shown) or 10,000, 20,000, 30,000, 40,000, or whatever.
All the utilities can be multiplied by any positive constant without affecting which alternative course of action will have the largest expected utility. It also is possible to add the same constant (positive or negative) to all the utilities without affecting which course of action will have the largest expected utility. For these reasons, we have the liberty to set the value of U(M) arbitrarily for two values of M, so long as the higher monetary value has the higher utility. It is particularly convenient (although certainly not necessary) to set U(M) = 0 for the smallest value of M under consideration and to set U(M) = 1 for the largest M, as was done in Fig. 16.14. By assigning a utility of 0 to the worst outcome and a utility of 1 to the best outcome, and then determining the utilities of the other outcomes accordingly, it becomes easy to see the relative utility of each outcome along the scale from worst to best.

3. For a survey of the shape of the utility function for 332 owner-managers and the impact of this shape on organizational behavior, see J. M. E. Pennings and A. Smidts, "The Shape of Utility Functions and Organizational Behavior," Management Science, 49: 1251–1263, 2003.

■ FIGURE 16.14 A typical utility function for money, where U(M) is the utility of obtaining an amount of money M. The dashed lines mark U(M) = 0.25, 0.5, 0.75, and 1 at M = $10,000, $30,000, $60,000, and $100,000, respectively.

The key to constructing the utility function for money to fit the decision maker is the following fundamental property of utility functions:

Fundamental Property: Under the assumptions of utility theory, the decision maker's utility function for money has the property that the decision maker is indifferent between two alternative courses of action if the two alternatives have the same expected utility.

To illustrate how this fundamental property can be used, suppose that the decision maker has the utility function shown in Fig. 16.14. Thus, for example, the utility of receiving $10,000 is 0.25.
To see how this utility of 0.25 could have been obtained, suppose that the decision maker is asked what value of p would make her indifferent between the first alternative of definitely receiving the $10,000 or instead accepting the following offer:

Offer: An opportunity to obtain either $100,000 (utility = 1) with probability p or nothing (utility = 0) with probability 1 − p.

Thus, E(utility) = p, for this offer. Now see what happens if the decision maker chooses p = 0.25 as her point of indifference between the two alternatives:

One alternative: Accept the offer with p = 0.25. This yields E(utility) = 0.25.
The other alternative: Definitely receive $10,000.

Since the decision maker is indifferent between the two alternatives, the fundamental property says they must have the same expected utility. Therefore, this alternative's utility also is 0.25, just as shown in Fig. 16.14. This example illustrates one way in which the decision maker's utility function for money in Fig. 16.14 would have been constructed in the first place. The decision maker would be made the same hypothetical offer to obtain either a large amount of money ($100,000) with probability p or nothing. Then, for each of a few smaller amounts of money ($10,000, $30,000, and $60,000), the decision maker would be asked to choose a value of p that would make her indifferent between the offer and definitely obtaining that amount of money. The utility of the smaller amount of money then is p. Choosing p = 0.25, 0.5, and 0.75 when considering $10,000, $30,000, and $60,000, respectively, yields Fig. 16.14. This procedure, called the equivalent lottery method for determining utilities, is outlined below.

Equivalent Lottery Method

1. Determine the largest potential payoff, M = maximum, and assign it some utility, e.g., U(maximum) = 1.
2. Determine the smallest potential payoff, M = minimum, and assign it some utility smaller than in step 1, e.g., U(minimum) = 0.
3.
To determine the utility of another potential payoff M, the decision maker is offered the following two hypothetical alternatives:

A1: Obtain a payoff of maximum with probability p,
    Obtain a payoff of minimum with probability 1 − p.
A2: Definitely obtain a payoff of M.

Question to the decision maker: What value of p makes you indifferent between these two alternatives? The resulting utility of M then is

U(M) = p U(maximum) + (1 − p) U(minimum),

which simplifies to

U(M) = p,  if U(minimum) = 0, U(maximum) = 1.

Now we are ready to summarize the basic role of utility functions in decision analysis.

When the decision maker's utility function for money is used to measure the relative worth of the various possible monetary outcomes, Bayes' decision rule replaces monetary payoffs by the corresponding utilities. Therefore, the optimal action (or series of actions) is the one which maximizes the expected utility.

Only utility functions for money have been discussed here. However, we should mention that utility functions can sometimes still be constructed when some of or all the important consequences of the alternative courses of action are not monetary. (For example, the consequences of a doctor's decision alternatives in treating a patient involve the future health of the patient.) Nevertheless, under these circumstances, it is important to incorporate such value judgments into the decision process. This is not necessarily easy, since it may require making value judgments about the relative desirability of rather intangible consequences.

Applying Utility Theory to the Full Goferbroke Co. Problem

At the end of Sec. 16.1, we mentioned that the Goferbroke Co. was operating without much capital, so a loss of $100,000 would be quite serious. The owner of the company already has gone heavily into debt to keep going.
The worst-case scenario would be to come up with $30,000 for a seismic survey and then still lose $100,000 by drilling when there is no oil. This scenario would not bankrupt the company at this point, but definitely would leave it in a precarious financial position. On the other hand, striking oil is an exciting prospect, since earning $700,000 finally would put the company on a fairly solid financial footing.

To apply the owner's (decision maker's) utility function for money to the problem as described in Secs. 16.1 and 16.3, it is necessary to identify the utilities for all the possible monetary payoffs. In units of thousands of dollars, these possible payoffs and the corresponding utilities are given in Table 16.7. We now will discuss how these utilities were obtained.

As a starting point in constructing the utility function, since we have the liberty to set the value of U(M) arbitrarily for two values of M (so long as the higher monetary value has the higher utility), it was convenient to set U(−130) = 0 and U(700) = 1. Then the equivalent lottery method was applied to determine the utility for another of the possible monetary payoffs, M = 90, by posing the following question to the decision maker (the owner of the Goferbroke Co.).

Suppose you have only the following two alternatives. In units of thousands of dollars, alternative 1 is to obtain a payoff of 700 with probability p and a payoff of −130 (a loss of 130) with probability 1 − p. Alternative 2 is to definitely obtain a payoff of 90. What value of p makes you indifferent between these two alternatives?

The decision maker's choice: p = 1/3, so U(90) = 0.333. Next, the equivalent lottery method was applied in the same way to M = −100. In this case, the decision maker's point of indifference was p = 1/20, so U(−100) = 0.05.

At this point, a smooth curve was drawn through U(−130), U(−100), U(90), and U(700) to obtain the decision maker's utility function for money shown in Fig. 16.15.
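The elicitation and smooth-curve steps can be sketched numerically. The following is an illustrative reconstruction only: the book draws the curve by hand, so a piecewise-linear interpolant is used here as a stand-in, and the interpolated values differ slightly from those read off the hand-drawn curve.

```python
from bisect import bisect_right

# Equivalent lottery method: with U(minimum) = 0 and U(maximum) = 1,
# the utility of M is simply the elicited indifference probability p.
# Goferbroke elicitation (payoffs in $1000s): U(-130) = 0, U(700) = 1,
# p = 1/3 for M = 90, and p = 1/20 for M = -100.
elicited = {-130: 0.0, -100: 1 / 20, 90: 1 / 3, 700: 1.0}
points = sorted(elicited.items())

def u_linear(m):
    """Piecewise-linear stand-in for the smooth curve of Fig. 16.15."""
    xs = [x for x, _ in points]
    i = max(0, min(bisect_right(xs, m) - 1, len(points) - 2))
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    return y0 + (y1 - y0) * (m - x0) / (x1 - x0)

print(round(u_linear(60), 3), round(u_linear(670), 3))  # 0.289 0.967
```

The interpolated values are close to, though not identical with, the U(60) = 0.30 and U(670) = 0.97 read off the hand-drawn smooth curve.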
The values on this curve at M = 60 and M = 670 provide the corresponding utilities, U(60) = 0.30 and U(670) = 0.97, which completes the list of utilities given in the right column of Table 16.7.

■ TABLE 16.7 Utilities for the full Goferbroke Co. problem

Monetary Payoff    Utility
−130               0
−100               0.05
60                 0.30
90                 0.333
670                0.97
700                1

■ FIGURE 16.15 The utility function for money of the owner of the Goferbroke Co. (The curve labeled "Owner's utility function" plots U(M) against M in thousands of dollars, from −100 to 700; a dashed 45° line labeled "Risk-neutral utility function" is shown for comparison.)

The shape of this curve indicates that the owner of the Goferbroke Co. is moderately risk averse. By contrast, the dashed line drawn at 45° in Fig. 16.15 shows what his utility function would have been if he were risk-neutral. By nature, the owner of the Goferbroke Co. actually is inclined to be a risk seeker. However, the difficult financial circumstances of his company, which he badly wants to keep solvent, have forced him to adopt a moderately risk-averse stance in addressing his current decisions.

Another Approach for Estimating U(M)

The above procedure for constructing U(M) asks the decision maker to repeatedly make a difficult decision about which probability would make him or her indifferent between two alternatives. Many individuals would be uncomfortable with making this kind of decision. Therefore, an alternative approach is sometimes used instead to estimate the utility function for money. This approach is to assume that the utility function has a certain mathematical form, and then adjust this form to fit the decision maker's attitude toward risk as closely as possible. For example, one particularly popular form to assume (because of its relative simplicity) is the exponential utility function,

U(M) = 1 − e^(−M/R),

where R is the decision maker's risk tolerance.
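In code, this exponential form and the effect of R can be sketched as follows (a minimal illustration; the function name and sample numbers are ours):

```python
import math

def exp_utility(m, r):
    """Exponential utility U(M) = 1 - exp(-M/R) with risk tolerance R."""
    return 1.0 - math.exp(-m / r)

# A smaller R (greater risk aversion) bends the curve more sharply:
# compare the utility of a payoff of 500 under R = 500 vs. R = 2000.
print(round(exp_utility(500, 500), 3))   # 0.632  (1 - e^-1)
print(round(exp_utility(500, 2000), 3))  # 0.221  (1 - e^-0.25)
```

Note that U(0) = 0 and that utilities approach 1 as M grows, consistent with the scaling used throughout this section.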
This utility function has a decreasing marginal utility for money, so it is designed to fit a risk-averse individual. A great aversion to risk corresponds to a small value of R (which would cause the utility function curve to bend sharply), whereas a small aversion to risk corresponds to a large value of R (which gives a much more gradual bend in the curve).

Since the owner of the Goferbroke Co. has a relatively small aversion to risk, the utility function curve in Fig. 16.15 bends quite slowly. It bends particularly slowly for the large values of M near the right side of Fig. 16.15, so the corresponding value of R in this region is approximately R = 2000. On the other hand, the owner becomes much more risk-averse when large losses can occur, since this now would threaten bankruptcy, so the utility function curve has considerably more curvature in this region where M has large negative values. Therefore, the corresponding value of R is considerably smaller, only about R = 500, in this region. Unfortunately, it is not possible to use two different values of R for the same utility function. A drawback of the exponential utility function is that it assumes a constant aversion to risk (a fixed value of R), regardless of how much (or how little) money the decision maker currently has. This doesn't fit the Goferbroke Co. situation, since the current shortage of money makes the owner much more concerned than usual about incurring a large loss.

In other situations where the consequences of the potential losses are not as severe, assuming an exponential utility function may provide a reasonable approximation. In such a case, here is an easy (slightly approximate) way of estimating the appropriate value of R. The decision maker would be asked to choose the number R that would make him (or her) indifferent between the following two alternatives:

A1: A 50-50 gamble where he would gain R dollars with probability 0.5 and lose R/2 dollars with probability 0.5.
A2: Neither gain nor lose anything.

ASPE includes the option of using the exponential utility function. Clicking on the Options button on the ASPE ribbon reveals an Options dialog box. Under the Tree tab, choose Exponential Utility Function and specify the value of R in the Risk Tolerance box. Clicking OK then revises the decision tree to incorporate the exponential utility function. (We will not pursue this approach any further and now will return to the Goferbroke example while using the utilities obtained with the equivalent lottery method.)

Using a Decision Tree to Analyze the Goferbroke Co. Problem with Utilities

Now that the utility function for money of the owner of the Goferbroke Co. has been obtained in Table 16.7 (and Fig. 16.15), this information can be used with a decision tree as summarized next:

The procedure for using a decision tree to analyze the problem now is identical to that described in the preceding section except for substituting utilities for monetary payoffs. Therefore, the value obtained to evaluate each node of the tree now is the expected utility there rather than the expected (monetary) payoff. Consequently, the optimal decisions selected by Bayes' decision rule maximize the expected utility for the overall problem.

Thus, our final decision tree shown in Fig. 16.16 closely resembles the one in Fig. 16.6 given in Sec. 16.4. The nodes and branches are exactly the same, as are the probabilities for the branches emanating from the event nodes.

■ FIGURE 16.16 The final decision tree for the full Goferbroke Co. problem, using the owner's utility function for money to maximize expected utility. (The tree lists the monetary payoff and utility for each terminal branch, with expected utilities of 0.356 at node a, 0.356 at node b, 0.3 at node c, 0.485 at node d, 0.333 at node e, 0.139 at node f, 0.485 at node g, and 0.2875 at node h.)

For informational purposes, the total monetary payoffs still are given to the right of the terminal branches (but we no longer bother to show the individual monetary payoffs next to any of the branches). However, we now have added the utilities on the right side. It is these numbers that have been used to compute the expected utilities given next to all the nodes.

These expected utilities lead to the same decisions at nodes a, c, and d as in Fig. 16.6, but the decision at node e now switches to sell instead of drill. However, the backward induction procedure still leaves node e on a closed path. Therefore, the overall optimal policy remains the same as given at the end of Sec. 16.4 (do the seismic survey; sell if the result is unfavorable; drill if the result is favorable).

The approach used in the preceding sections of maximizing the expected monetary payoff amounts to assuming that the decision maker is risk-neutral, so that U(M) = M. By using utility theory, the optimal solution now reflects the decision maker's attitude about risk. Because the owner of the Goferbroke Co. adopted only a moderately risk-averse stance, the optimal policy did not change from before. For a somewhat more risk-averse owner, the optimal solution would switch to the more conservative approach of immediately selling the land (no seismic survey). (See Prob. 16.6-1.)

The current owner is to be commended for incorporating utility theory into a decision analysis approach to his problem. Utility theory helps to provide a rational approach to decision making in the face of uncertainty.
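The backward induction through this tree can be verified with a short sketch. The node labels, branch probabilities, and posterior probabilities below follow Fig. 16.16, and the utilities come from Table 16.7; this is an illustrative hand calculation, not ASPE output.

```python
# Backward induction on the Goferbroke decision tree with utilities.
# Utilities from Table 16.7, indexed by payoff in thousands of dollars.
U = {-130: 0.0, -100: 0.05, 60: 0.3, 90: 0.333, 670: 0.97, 700: 1.0}

def event(branches):
    """Expected utility at an event node: sum of probability * value."""
    return sum(p * v for p, v in branches)

# With the survey: P(unfavorable) = 0.7, P(favorable) = 0.3;
# posterior P(oil | unfavorable) = 1/7, P(oil | favorable) = 1/2.
f = event([(1/7, U[670]), (6/7, U[-130])])  # drill after unfavorable
c = max(f, U[60])                           # drill vs. sell -> sell (0.3)
g = event([(1/2, U[670]), (1/2, U[-130])])  # drill after favorable
d = max(g, U[60])                           # drill vs. sell -> drill (0.485)
b = event([(0.7, c), (0.3, d)])             # expected utility of surveying

# Without the survey: prior P(oil) = 0.25.
h = event([(0.25, U[700]), (0.75, U[-100])])  # drill with no survey
e = max(h, U[90])                             # drill vs. sell -> sell (0.333)

a = max(b, e)  # do the survey (0.356) vs. skip it (0.333)
print(f"{b:.4f} {e:.4f} {a:.4f}")  # 0.3555 0.3330 0.3555
```

The computed node values match those shown in Fig. 16.16 and reproduce the optimal policy stated in the text: do the seismic survey, sell if the result is unfavorable, and drill if it is favorable.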
However, many decision makers are not sufficiently comfortable with the relatively abstract notion of utilities, or with working with probabilities to construct a utility function, to be willing to use this approach. Consequently, utility theory is not yet used very widely in practice.

■ 16.7 THE PRACTICAL APPLICATION OF DECISION ANALYSIS

In one sense, this chapter's prototype example (the Goferbroke Co. problem) is a typical application of decision analysis. Like other applications, management needed to make some decisions (Do a seismic survey? Drill for oil or sell the land?) in the face of great uncertainty. The decisions were difficult because their payoffs were so unpredictable. The outcome depended on factors that were outside management's control (does the land contain oil or is it dry?). Therefore, management needed a framework and methodology for rational decision making in this uncertain environment. These are the usual characteristics of applications of decision analysis.

However, in other ways, the Goferbroke problem is not such a typical application. It was oversimplified to include only two possible states of nature (Oil and Dry), whereas there actually would be a considerable number of distinct possibilities. For example, the actual state might be dry, a small amount of oil, a moderate amount, a large amount, and a huge amount, plus different possibilities concerning the depth of the oil and soil conditions that impact the cost of drilling to reach the oil. Management also was considering only two alternatives for each of two decisions. Real applications commonly involve more decisions, more alternatives to be considered for each one, and many possible states of nature.

When dealing with larger problems, the decision tree can explode in size, with perhaps many thousands of terminal branches.
In this case, it clearly would not be feasible to construct the tree by hand, compute the posterior probabilities, calculate the expected payoffs (or utilities) for the various nodes, and then identify the optimal decisions. Fortunately, some excellent software packages (mainly for personal computers) are available specifically for doing this work. (See Selected Reference 11 for a survey of these software packages.) Furthermore, special algebraic techniques are being developed and incorporated into the computer solvers for dealing with ever larger problems.4

Sensitivity analysis also can become unwieldy on large problems. Although it normally is supported by the computer software, the amount of data generated can easily overwhelm an analyst or decision maker. Therefore, some graphical techniques, such as tornado charts, have been developed to organize the data in a readily understandable way.5 Other kinds of graphical techniques also are available to complement the decision tree in representing and solving decision analysis problems. One that has become quite popular is called the influence diagram, and researchers continue to develop others as well.6

4 For example, see C. W. Kirkwood, "An Algebraic Approach to Formulating and Solving Large Models for Sequential Decisions under Uncertainty," Management Science, 39: 900–913, July 1993.
5 For further information, see T. G. Eschenbach, "Spiderplots versus Tornado Diagrams for Sensitivity Analysis," Interfaces, 22: 40–46, Nov.–Dec. 1992. Also see Chapter 5 in Selected Reference 4.
6 For example, see C. Bielza and P. P. Shenoy, "A Comparison of Graphical Techniques for Asymmetric Decision Problems," Management Science, 45(11): 1552–1569, Nov. 1999. Also see Chapters 3 and 4 in Selected Reference 4.

Many strategic business decisions are made collectively by several members of management. One technique for group decision making is called decision conferencing.
This is a process where the group comes together for discussions in a decision conference with the help of an analyst and a group facilitator. The facilitator works directly with the group to help it structure and focus discussions, think creatively about the problem, bring assumptions to the surface, and address the full range of issues involved. The analyst uses decision analysis to assist the group in exploring the implications of the various decision alternatives. With the assistance of a computerized group decision support system, the analyst builds and solves models on the spot, and then performs sensitivity analysis to respond to what-if questions from the group.7

Applications of decision analysis commonly involve a partnership between the managerial decision maker (whether an individual or a group) and an analyst (whether an individual or a team) with training in OR. Some companies do not have a staff member who is qualified to serve as the analyst. Therefore, a considerable number of management consulting firms specializing in decision analysis have been formed to fill this role.

If you would like to do more reading about the practical application of decision analysis, we suggest that you turn to Selected Reference 9. This article was the leadoff paper in the first issue of the journal Decision Analysis that focuses on applied research in decision analysis. The article provides a detailed discussion of various publications that present applications of decision analysis.

■ 16.8 CONCLUSIONS

Decision analysis has become an important technique for decision making in the face of uncertainty. It is characterized by enumerating all the available decision alternatives, identifying the payoffs for all possible outcomes, and quantifying the subjective probabilities for all the possible random events. When these data are available, decision analysis becomes a powerful tool for determining an optimal course of action.
One option that can be readily incorporated into the analysis is to perform experimentation to obtain better estimates of the probabilities of the possible states of nature. Decision trees are a useful visual tool for analyzing this option or any series of decisions.

Utility theory provides a way of incorporating the decision maker's attitude toward risk into the analysis.

Good software (including ASPE in your OR Courseware) is becoming widely available for performing decision analysis. (Selected Reference 11 provides a survey of such software.)

7 For further information, see the two articles on decision conferencing in the November–December 1992 issue of Interfaces, where one describes an application in Australia and the other summarizes the experience of 26 decision conferences in Hungary. Although somewhat dated now, this issue of Interfaces is a special issue devoted entirely to decision analysis and risk analysis that contains many interesting articles.

■ SELECTED REFERENCES

1. Bleichrodt, H., J. M. Abellan-Perpiñan, J. L. Pinto-Prades, and I. Mendez-Martinez: "Resolving Inconsistencies in Utility Measurement Under Risk: Tests of Generalizations of Expected Utility," Management Science, 53(3): 469–482, March 2007.
2. Bleichrodt, H., U. Schmidt, and H. Zank: "Additive Utility in Prospect Theory," Management Science, 55(5): 863–873, May 2009.
3. Chelst, K., and B. Canbolat: Value-Added Decision Making for Managers, Chapman and Hall/CRC Press, Boca Raton, FL, 2012.
4. Clemen, R. T., and T. Reilly: Making Hard Decisions: with Decision Tools, Updated ed., Duxbury Press, Pacific Grove, CA, 2005.
5. Ehrgott, M., J. R. Figueira, and S. Greco (eds.): Trends in Multiple Criteria Decision Analysis, Springer, New York, 2010.
6. Fishburn, P. C.: Nonlinear Preference and Utility Theory, The Johns Hopkins Press, Baltimore, MD, 1988.
7. Hammond, J. S., R. L. Keeney, and H.
Raiffa: Smart Choices: A Practical Guide to Making Better Decisions, Harvard Business School Press, Cambridge, MA, 1999.
8. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, chap. 9.
9. Keefer, D. L., C. W. Kirkwood, and J. L. Corner: "Perspective on Decision Analysis Applications," Decision Analysis, 1(1): 4–22, 2004.
10. McGrayne, S. B.: The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines and Emerged Triumphant From Two Centuries of Controversy, Yale University Press, New Haven, CT, 2011.
11. Patchak, W. M.: "Decision Analysis Software Survey," OR/MS Today, 39(5): 38–49, October 2012.
12. Smith, J. Q.: Bayesian Decision Analysis: Principles and Practice, Cambridge University Press, Cambridge, UK, 2011.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 16

"Ch. 16—Decision Analysis" Excel Files:
Template for Posterior Probabilities
Decision Tree for First Goferbroke Co. Problem
Decision Tree for Full Goferbroke Problem
Data Table for Full Goferbroke Problem

"Ch. 16—Decision Analysis" LINGO File for Selected Examples

Excel Add-In:
Analytic Solver Platform for Education (ASPE)

Glossary for Chapter 16

Supplement to this Chapter:
Using TreePlan Software for Decision Trees

See Appendix 1 for documentation of the software.

■ PROBLEMS

The symbols to the left of some of the problems (or their parts) have the following meaning:
T: The Excel template for posterior probabilities can be helpful.
A: ASPE should be used.
An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

16.2-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 16.2.
Briefly describe how decision analysis was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study. 16.2-2.* Silicon Dynamics has developed a new computer chip that will enable it to begin producing and marketing a personal computer if it so desires. Alternatively, it can sell the rights to the computer chip for $15 million. If the company chooses to build computers, the profitability of the venture depends upon the company’s ability to market the computer during the first year. It has sufficient access to retail outlets that it can guarantee sales of 10,000 computers. On the other hand, if this computer catches on, the company can sell 100,000 computers. For analysis purposes, these two levels of sales are taken to be the two possible outcomes of marketing the computer, but it is unclear what their prior probabilities are. If the decision is to go ahead with producing and marketing the computer, the company will produce as many chips as it finds it will be able to sell, but not more. The cost of setting up the assembly line is $6 million. The difference between the selling price and the variable cost of each computer is $600. (a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table. (b) Develop a graph that plots the expected payoff for each of the decision alternatives versus the prior probability of selling 10,000 computers. (c) Referring to the graph developed in part (b), use algebra to solve for the crossover point. Explain the significance of this point. A (d) Develop a graph that plots the expected payoff (when using Bayes’ decision rule) versus the prior probability of selling 10,000 computers. (e) Assuming the prior probabilities of the two levels of sales are both 0.5, which decision alternative should be chosen? 16.2-3. Jean Clark is the manager of the Midtown Saveway Grocery Store. 
She now needs to replenish her supply of strawberries. Her regular supplier can provide as many cases as she wants. However, because these strawberries already are very ripe, she will need to sell them tomorrow and then discard any that remain unsold. Jean estimates that she will be able to sell 12, 13, 14, or 15 cases tomorrow. She can purchase the strawberries for $7 per case and sell them for $18 per case. Jean now needs to decide how many cases to purchase. Jean has checked the store’s records on daily sales of strawberries. On this basis, she estimates that the prior probabilities are 0.1, 0.3, 0.4, and 0.2 for being able to sell 12, 13, 14, and 15 cases of strawberries tomorrow. (a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table. (b) How many cases of strawberries should Jean purchase if she uses the maximin payoff criterion? (c) How many cases should be purchased according to the maximum likelihood criterion? (d) How many cases should be purchased according to Bayes’ decision rule? (e) Jean thinks she has the prior probabilities just about right for selling 12 cases and selling 15 cases, but is uncertain about how to split the prior probabilities for 13 cases and 14 cases. Reapply Bayes’ decision rule when the prior probabilities of 13 and 14 cases are (i) 0.2 and 0.5, (ii) 0.4 and 0.3, and (iii) 0.5 and 0.2. 16.2-4.* Warren Buffy is an enormously wealthy investor who has built his fortune through his legendary investing acumen. He currently has been offered three major investments and he would like to choose one. The first one is a conservative investment that would perform very well in an improving economy and only suffer a small loss in a worsening economy. The second is a speculative investment that would perform extremely well in an improving economy but would do very badly in a worsening economy. 
The third is a countercyclical investment that would lose some money in an improving economy but would perform well in a worsening economy.
Warren believes that there are three possible scenarios over the lives of these potential investments: (1) an improving economy, (2) a stable economy, and (3) a worsening economy. He is pessimistic about where the economy is headed, and so has assigned prior probabilities of 0.1, 0.5, and 0.4, respectively, to these three scenarios. He also estimates that his profits under these respective scenarios are those given by the following table:

                             Improving      Stable        Worsening
                             Economy        Economy       Economy
Conservative investment      $30 million    $ 5 million   −$10 million
Speculative investment       $40 million    $10 million   −$30 million
Countercyclical investment   −$10 million   0             $15 million

Prior probability            0.1            0.5           0.4

Which investment should Warren make under each of the following criteria?
(a) Maximin payoff criterion.
(b) Maximum likelihood criterion.
(c) Bayes' decision rule.

16.2-5. Reconsider Prob. 16.2-4. Warren Buffy decides that Bayes' decision rule is his most reliable decision criterion. He believes that 0.1 is just about right as the prior probability of an improving economy, but is quite uncertain about how to split the remaining probabilities between a stable economy and a worsening economy. Therefore, he now wishes to do sensitivity analysis with respect to these latter two prior probabilities.
(a) Reapply Bayes' decision rule when the prior probability of a stable economy is 0.3 and the prior probability of a worsening economy is 0.6.
(b) Reapply Bayes' decision rule when the prior probability of a stable economy is 0.7 and the prior probability of a worsening economy is 0.2.
(c) Graph the expected profit for each of the three investment alternatives versus the prior probability of a stable economy (with the prior probability of an improving economy fixed at 0.1).
Use this graph to identify the crossover points where the decision shifts from one investment to another.
(d) Use algebra to solve for the crossover points identified in part (c).
A (e) Develop a graph that plots the expected profit (when using Bayes' decision rule) versus the prior probability of a stable economy.

16.2-6. You are given the following payoff table (in units of thousands of dollars) for a decision analysis problem:

                    State of Nature
Alternative      S1      S2      S3
A1               220     170     110
A2               200     180     150

Prior probability  0.6    0.3     0.1

(a) Which alternative should be chosen under the maximin payoff criterion?
(b) Which alternative should be chosen under the maximum likelihood criterion?
(c) Which alternative should be chosen under Bayes' decision rule?
(d) Using Bayes' decision rule, do sensitivity analysis graphically with respect to the prior probabilities of states S1 and S2 (without changing the prior probability of state S3) to determine the crossover point where the decision shifts from one alternative to the other. Then use algebra to calculate this crossover point.
(e) Repeat part (d) for the prior probabilities of states S1 and S3.
(f) Repeat part (d) for the prior probabilities of states S2 and S3.
(g) If you feel that the true probabilities of the states of nature are within 10 percent of the given prior probabilities, which alternative would you choose?

16.2-7. Dwight Moody is the manager of a large farm with 1,000 acres of arable land. For greater efficiency, Dwight always devotes the farm to growing one crop at a time. He now needs to make a decision on which one of four crops to grow during the upcoming growing season. For each of these crops, Dwight has obtained the following estimates of crop yields and net incomes per bushel under various weather conditions.
                 Expected Yield, Bushels/Acre
Weather        Crop 1   Crop 2   Crop 3   Crop 4
Dry              20       15       30       40
Moderate         35       20       25       40
Damp             40       30       25       40
Net income
per bushel     $1.00    $1.50    $1.00    $0.50

After referring to historical meteorological records, Dwight also estimated the following prior probabilities for the weather during the growing season:

Dry        0.3
Moderate   0.5
Damp       0.2

(a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table.
(b) Use Bayes' decision rule to determine which crop to grow.
(c) Using Bayes' decision rule, do sensitivity analysis with respect to the prior probabilities of moderate weather and damp weather (without changing the prior probability of dry weather) by re-solving when the prior probability of moderate weather is 0.2, 0.3, 0.4, and 0.6.

16.2-8.* A new type of airplane is to be purchased by the Air Force, and the number of spare engines to be ordered must be determined. The Air Force must order these spare engines in batches of five, and it can choose among only 15, 20, or 25 spares. The supplier of these engines has two plants, and the Air Force must make its decision prior to knowing which plant will be used. However, the Air Force knows from past experience that two-thirds of all types of airplane engines are produced in Plant A, and only one-third are produced in Plant B. The Air Force also knows that the number of spare engines required when production takes place at Plant A is approximated by a Poisson distribution with mean θ = 21, whereas the number of spare engines required when production takes place at Plant B is approximated by a Poisson distribution with mean θ = 24. The cost of a spare engine purchased now is $400,000, whereas the cost of a spare engine purchased at a later date is $900,000. Spares must always be supplied if they are demanded, and unused engines will be scrapped when the airplanes become obsolete. Holding costs and interest are to be neglected.
From these data, the total costs (negative payoffs) have been computed as follows:

                    State of Nature
Alternative      θ = 21           θ = 24
Order 15         1.155 × 10^7     1.414 × 10^7
Order 20         1.012 × 10^7     1.207 × 10^7
Order 25         1.047 × 10^7     1.135 × 10^7

Determine the optimal alternative under Bayes' decision rule.

16.3-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 16.3. Briefly describe how decision analysis was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

16.3-2.* Reconsider Prob. 16.2-2. Management of Silicon Dynamics now is considering doing full-fledged market research at a cost of $1 million to predict which of the two levels of demand is likely to occur. Previous experience indicates that such market research is correct two-thirds of the time. Assume that the prior probabilities of the two levels of sales are both 0.5.
(a) Find EVPI for this problem.
(b) Does the answer in part (a) indicate that it might be worthwhile to perform this market research?
(c) Develop a probability tree diagram to obtain the posterior probabilities of the two levels of demand for each of the two possible outcomes of the market research.
T (d) Use the Excel template for posterior probabilities to check your answers in part (c).
(e) Find EVE. Is it worthwhile to perform the market research?

16.3-3. You are given the following payoff table (in units of thousands of dollars) for a decision analysis problem:

                    State of Nature
Alternative      S1     S2     S3
A1               4      0      0
A2               0      2      0
A3               3      0      1

Prior probability  0.2   0.5    0.3

(a) According to Bayes' decision rule, which alternative should be chosen?
(b) Find EVPI.
(c) You are given the opportunity to spend $1,000 to obtain more information about which state of nature is likely to occur. Given your answer to part (b), might it be worthwhile to spend this money?

16.3-4.* Betsy Pitzer makes decisions according to Bayes' decision rule. For her current problem, Betsy has constructed the following payoff table (in units of dollars):

                    State of Nature
Alternative      S1     S2     S3
A1               50     100    −100
A2               0      10     −10
A3               20     40     −40

Prior probability  0.5   0.3    0.2

(a) Which alternative should Betsy choose?
(b) Find EVPI.
(c) What is the most that Betsy should consider paying to obtain more information about which state of nature will occur?

16.3-5. Using Bayes' decision rule, consider the decision analysis problem having the following payoff table (in units of thousands of dollars):

                    State of Nature
Alternative      S1      S2     S3
A1               −100    10     100
A2               −10     20     50
A3               10      10     60

Prior probability  0.2    0.3    0.5

(a) Which alternative should be chosen? What is the resulting expected payoff?
(b) You are offered the opportunity to obtain information which will tell you with certainty whether the first state of nature S1 will occur. What is the maximum amount you should pay for the information? Assuming you will obtain the information, how should this information be used to choose an alternative? What is the resulting expected payoff (excluding the payment)?
(c) Now repeat part (b) if the information offered concerns S2 instead of S1.
(d) Now repeat part (b) if the information offered concerns S3 instead of S1.
(e) Now suppose that the opportunity is offered to provide information which will tell you with certainty which state of nature will occur (perfect information). What is the maximum amount you should pay for the information? Assuming you will obtain the information, how should this information be used to choose an alternative? What is the resulting expected payoff (excluding the payment)?
(f) If you have the opportunity to do some testing that will give you partial additional information (not perfect information) about the state of nature, what is the maximum amount you should consider paying for this information?

16.3-6. Reconsider the Goferbroke Co. prototype example, including its analysis in Sec. 16.3.
With the help of a consulting geologist, some historical data have been obtained that provide more precise information on the likelihood of obtaining favorable seismic soundings on similar tracts of land. Specifically, when the land contains oil, favorable seismic soundings are obtained 80 percent of the time. This percentage changes to 40 percent when the land is dry.
(a) Revise Fig. 16.2 to find the new posterior probabilities.
T (b) Use the Excel template for posterior probabilities to check your answers in part (a).
(c) What is the resulting optimal policy?

16.3-7. You are given the following payoff table (in units of dollars):

                    State of Nature
Alternative         S1      S2
A1                 400    −100
A2                   0     100
Prior probability   0.4    0.6

You have the option of paying $100 to have research done to better predict which state of nature will occur. When the true state of nature is S1, the research will accurately predict S1 60 percent of the time (but will inaccurately predict S2 40 percent of the time). When the true state of nature is S2, the research will accurately predict S2 80 percent of the time (but will inaccurately predict S1 20 percent of the time).
(a) Given that the research is not done, use Bayes’ decision rule to determine which decision alternative should be chosen.
(b) Find EVPI. Does this answer indicate that it might be worthwhile to do the research?
(c) Given that the research is done, find the joint probability of each of the following pairs of outcomes: (i) the state of nature is S1 and the research predicts S1, (ii) the state of nature is S1 and the research predicts S2, (iii) the state of nature is S2 and the research predicts S1, and (iv) the state of nature is S2 and the research predicts S2.
(d) Find the unconditional probability that the research predicts S1. Also find the unconditional probability that the research predicts S2.
(e) Given that the research is done, use your answers in parts (c) and (d) to determine the posterior probabilities of the states of nature for each of the two possible predictions of the research.
T (f) Use the Excel template for posterior probabilities to obtain the answers for part (e).
(g) Given that the research predicts S1, use Bayes’ decision rule to determine which decision alternative should be chosen and the resulting expected payoff.
(h) Repeat part (g) when the research predicts S2.
(i) Given that research is done, what is the expected payoff when using Bayes’ decision rule?
(j) Use the preceding results to determine the optimal policy regarding whether to do the research and the choice of the decision alternative.

16.3-8.* Reconsider Prob. 16.2-8. Suppose now that the Air Force knows that a similar type of engine was produced for an earlier version of the type of airplane currently under consideration. The order size for this earlier version was the same as for the current type. Furthermore, the probability distribution of the number of spare engines required, given the plant where production takes place, is believed to be the same for this earlier airplane model and the current one. The engine for the current order will be produced in the same plant as the previous model, although the Air Force does not know which of the two plants this is. The Air Force does have access to the data on the number of spares actually required for the older version, but the supplier has not revealed the production location.
(a) How much money is it worthwhile to pay for perfect information on which plant will produce these engines?
(b) Assume that the cost of the data on the old airplane model is free and that 30 spares were required. You are given that the probability of 30 spares, given a Poisson distribution with mean θ, is 0.013 for θ = 21 and 0.036 for θ = 24. Find the optimal action under Bayes’ decision rule.
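The posterior probabilities called for in several of these problems (and computed by the Excel template for posterior probabilities) follow directly from Bayes’ theorem. As an illustrative cross-check only, here is a minimal Python sketch using the data of Prob. 16.3-7 (priors 0.4 and 0.6; prediction accuracies 0.6 and 0.8); the function name and list layout are this sketch’s own conventions, not the book’s:

```python
# Hedged sketch: posterior probabilities via Bayes' theorem.
# priors[i] = P(state i); likelihoods[i][j] = P(finding j | state i).

def posteriors(priors, likelihoods):
    """Return (pred, post), where pred[j] = P(finding j) and
    post[j][i] = P(state i | finding j)."""
    n_states = len(priors)
    n_findings = len(likelihoods[0])
    # Unconditional probability of each finding (cf. part (d) of Prob. 16.3-7)
    pred = [sum(priors[i] * likelihoods[i][j] for i in range(n_states))
            for j in range(n_findings)]
    # Posterior probabilities (cf. part (e) of Prob. 16.3-7)
    post = [[priors[i] * likelihoods[i][j] / pred[j] for i in range(n_states)]
            for j in range(n_findings)]
    return pred, post

# Prob. 16.3-7 data: states S1, S2 with priors 0.4 and 0.6; the research
# predicts S1 with probability 0.6 given S1, and S2 with probability 0.8
# given S2.
pred, post = posteriors([0.4, 0.6], [[0.6, 0.4], [0.2, 0.8]])
print(pred)   # unconditional probabilities of each prediction
print(post)   # posterior probabilities of S1, S2 given each prediction
```

The same function applies to Prob. 16.3-8(b) by substituting the priors for the two plants and the Poisson probabilities 0.013 and 0.036 as the likelihoods of observing 30 spares.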
16.3-9.* Vincent Cuomo is the credit manager for the Fine Fabrics Mill. He is currently faced with the question of whether to extend $100,000 credit to a potential new customer, a dress manufacturer. Vincent has three categories for the creditworthiness of a company: poor risk, average risk, and good risk, but he does not know which category fits this potential customer. Experience indicates that 20 percent of companies similar to this dress manufacturer are poor risks, 50 percent are average risks, and 30 percent are good risks. If credit is extended, the expected profit for poor risks is −$15,000, for average risks $10,000, and for good risks $20,000. If credit is not extended, the dress manufacturer will turn to another mill. Vincent is able to consult a credit-rating organization for a fee of $5,000 per company evaluated. For companies whose actual credit record with the mill turns out to fall into each of the three categories, the following table shows the percentages that were given each of the three possible credit evaluations by the credit-rating organization.

                      Actual Credit Record
Credit Evaluation    Poor    Average    Good
Poor                 50%      40%       20%
Average              40%      50%       40%
Good                 10%      10%       40%

(a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table when the credit-rating organization is not used.
(b) Assuming the credit-rating organization is not used, use Bayes’ decision rule to determine which decision alternative should be chosen.
(c) Find EVPI. Does this answer indicate that consideration should be given to using the credit-rating organization?
(d) Assume now that the credit-rating organization is used. Develop a probability tree diagram to find the posterior probabilities of the respective states of nature for each of the three possible credit evaluations of this potential customer.
T (e) Use the Excel template for posterior probabilities to obtain the answers for part (d).
(f) Determine Vincent’s optimal policy.

16.3-10. An athletic league does drug testing of its athletes, 10 percent of whom use drugs. This test, however, is only 95 percent reliable. That is, a drug user will test positive with probability 0.95 and negative with probability 0.05, and a nonuser will test negative with probability 0.95 and positive with probability 0.05. Develop a probability tree diagram to determine the posterior probability of each of the following outcomes of testing an athlete.
(a) The athlete is a drug user, given that the test is positive.
(b) The athlete is not a drug user, given that the test is positive.
(c) The athlete is a drug user, given that the test is negative.
(d) The athlete is not a drug user, given that the test is negative.
T (e) Use the Excel template for posterior probabilities to check your answers in the preceding parts.

16.3-11. Management of the Telemore Company is considering developing and marketing a new product. It is estimated to be twice as likely that the product would prove to be successful as unsuccessful. If it were successful, the expected profit would be $1,500,000. If unsuccessful, the expected loss would be $1,800,000. A marketing survey can be conducted at a cost of $300,000 to predict whether the product would be successful. Past experience with such surveys indicates that successful products have been predicted to be successful 80 percent of the time, whereas unsuccessful products have been predicted to be unsuccessful 70 percent of the time.
(a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table when the market survey is not conducted.
(b) Assuming the market survey is not conducted, use Bayes’ decision rule to determine which decision alternative should be chosen.
(c) Find EVPI.
Does this answer indicate that consideration should be given to conducting the market survey?
T (d) Assume now that the market survey is conducted. Find the posterior probabilities of the respective states of nature for each of the two possible predictions from the market survey.
(e) Find the optimal policy regarding whether to conduct the market survey and whether to develop and market the new product.

16.3-12. The Hit-and-Miss Manufacturing Company produces items that have a probability p of being defective. These items are produced in lots of 150. Past experience indicates that p for an entire lot is either 0.05 or 0.25. Furthermore, in 80 percent of the lots produced, p equals 0.05 (so p equals 0.25 in 20 percent of the lots). These items are then used in an assembly, and ultimately their quality is determined before the final assembly leaves the plant. Initially the company can either screen each item in a lot at a cost of $10 per item and replace defective items or use the items directly without screening. If the latter action is chosen, the cost of rework is ultimately $100 per defective item. Because screening requires scheduling of inspectors and equipment, the decision to screen or not screen must be made 2 days before the screening is to take place. However, one item can be taken from the lot and sent to a laboratory for inspection, and its quality (defective or nondefective) can be reported before the screen/no screen decision must be made. The cost of this initial inspection is $125.
(a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table if the single item is not inspected in advance.
(b) Assuming the single item is not inspected in advance, use Bayes’ decision rule to determine which decision alternative should be chosen.
(c) Find EVPI. Does this answer indicate that consideration should be given to inspecting the single item in advance?
T (d) Assume now that the single item is inspected in advance. Find the posterior probabilities of the respective states of nature for each of the two possible outcomes of this inspection.
(e) Find EVE. Is inspecting the single item worthwhile?
(f) Determine the optimal policy.

T 16.3-13.* Consider two weighted coins. Coin 1 has a probability of 0.3 of turning up heads, and coin 2 has a probability of 0.6 of turning up heads. A coin is tossed once; the probability that coin 1 is tossed is 0.6, and the probability that coin 2 is tossed is 0.4. The decision maker uses Bayes’ decision rule to decide which coin is tossed. The payoff table is as follows:

                       State of Nature
Alternative          Coin 1 Tossed   Coin 2 Tossed
Say coin 1 tossed          0             −1
Say coin 2 tossed         −1              0
Prior probability         0.6            0.4

(a) What is the optimal alternative before the coin is tossed?
(b) What is the optimal alternative after the coin is tossed if the outcome is heads? If it is tails?

16.3-14. There are two biased coins with probabilities of landing heads of 0.8 and 0.4, respectively. One coin is chosen at random (each with probability 1/2) to be tossed twice. You are to receive $100 if you correctly predict how many heads will occur in two tosses.
(a) Using Bayes’ decision rule, what is the optimal prediction, and what is the corresponding expected payoff?
T (b) Suppose now that you may observe a practice toss of the chosen coin before predicting. Use the Excel template for posterior probabilities to find the posterior probabilities for which coin is being tossed.
(c) Determine your optimal prediction after observing the practice toss. What is the resulting expected payoff?
(d) Find EVE for observing the practice toss. If you must pay $30 to observe the practice toss, what is your optimal policy?

16.4-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 16.4. Briefly describe how decision analysis was applied in this study.
Then list the various financial and nonfinancial benefits that resulted from this study.

16.4-2.* Reconsider Prob. 16.3-2. The management of Silicon Dynamics now wants to see a decision tree displaying the entire problem. Construct and solve this decision tree by hand.

16.4-3. You are given the decision tree below, where the numbers in parentheses are probabilities and the numbers on the far right are payoffs at these terminal points. Analyze this decision tree to obtain the optimal policy.

[Decision tree figure, only partially recoverable from the text: it includes event-node branches with probabilities (0.4), (0.6), (0.2), and (0.8), and terminal payoffs of 2,500, 700, 900, 800, and 750.]

16.4-4.* The Athletic Department of Leland University is considering whether to hold an extensive campaign next year to raise funds for a new athletic field. The response to the campaign depends heavily upon the success of the football team this fall. In the past, the football team has had winning seasons 60 percent of the time. If the football team has a winning season (W) this fall, then many of the alumnae and alumni will contribute and the campaign will raise $3 million. If the team has a losing season (L), few will contribute and the campaign will lose $2 million. If no campaign is undertaken, no costs are incurred. On September 1, just before the football season begins, the Athletic Department needs to make its decision about whether to hold the campaign next year.
(a) Develop a decision analysis formulation of this problem by identifying the decision alternatives, the states of nature, and the payoff table.
(b) According to Bayes’ decision rule, should the campaign be undertaken?
(c) Find EVPI.
(d) A famous football guru, William Walsh, has offered his services to help evaluate whether the team will have a winning season. For $100,000, he will carefully evaluate the team throughout spring practice and then throughout preseason workouts. William then will provide his prediction on September 1 regarding what kind of season, W or L, the team will have. In similar situations in the past when evaluating teams that have winning seasons 50 percent of the time, his predictions have been correct 75 percent of the time. Considering that this team has more of a winning tradition, if William predicts a winning season, what is the posterior probability that the team actually will have a winning season? What is the posterior probability of a losing season? If William predicts a losing season instead, what is the posterior probability of a winning season? Of a losing season? Show how these answers are obtained from a probability tree diagram.
T (e) Use the Excel template for posterior probabilities to obtain the answers requested in part (d).
(f) Draw the decision tree for this entire problem by hand. Analyze this decision tree to determine the optimal policy regarding whether to hire William and whether to undertake the campaign.

16.4-5. The comptroller of the Macrosoft Corporation has $100 million of excess funds to invest. She has been instructed to invest the entire amount for one year in either stocks or bonds (but not both) and then to reinvest the entire fund in either stocks or bonds (but not both) for one year more. The objective is to maximize the expected monetary value of the fund at the end of the second year. The annual rates of return on these investments depend on the economic environment, as shown in the following table:

                          Rate of Return
Economic Environment    Stocks    Bonds
Growth                    20%       5%
Recession                −10%      10%
Depression               −50%     −20%

The probabilities of growth, recession, and depression for the first year are 0.7, 0.3, and 0, respectively. If growth occurs in the first year, these probabilities remain the same for the second year. However, if a recession occurs in the first year, these probabilities change to 0.2, 0.7, and 0.1, respectively, for the second year.
(a) Construct the decision tree for this problem by hand.
(b) Analyze the decision tree to identify the optimal policy.

16.4-6. On Monday, a certain stock closed at $10 per share.
On Tuesday, you expect the stock to close at $9, $10, or $11 per share, with respective probabilities 0.3, 0.3, and 0.4. On Wednesday, you expect the stock to close 10 percent lower, unchanged, or 10 percent higher than Tuesday’s close, with the following probabilities:

Today’s Close    10% Lower    Unchanged    10% Higher
$ 9                 0.4          0.3           0.3
$10                 0.2          0.2           0.6
$11                 0.1          0.2           0.7

On Tuesday, you are directed to buy 100 shares of the stock before Thursday. All purchases are made at the end of the day, at the known closing price for that day, so your only options are to buy at the end of Tuesday or at the end of Wednesday. You wish to determine the optimal strategy for whether to buy on Tuesday or defer the purchase until Wednesday, given the Tuesday closing price, to minimize the expected purchase price. Develop and evaluate a decision tree by hand for determining the optimal strategy.

16.4-7. Use the scenario given in Prob. 16.3-9.
(a) Draw and properly label the decision tree. Include all the payoffs but not the probabilities.
T (b) Find the probabilities for the branches emanating from the event nodes.
(c) Apply the backward induction procedure, and identify the resulting optimal policy.

16.4-8. Use the scenario given in Prob. 16.3-11.
(a) Draw and properly label the decision tree. Include all the payoffs but not the probabilities.
T (b) Find the probabilities for the branches emanating from the event nodes.
(c) Apply the backward induction procedure, and identify the resulting optimal policy.

16.4-9. Use the scenario given in Prob. 16.3-12.
(a) Draw and properly label the decision tree. Include all the payoffs but not the probabilities.
T (b) Find the probabilities for the branches emanating from the event nodes.
(c) Apply the backward induction procedure, and identify the resulting optimal policy.

16.4-10. Use the scenario given in Prob. 16.3-13.
(a) Draw and properly label the decision tree. Include all the payoffs but not the probabilities.
T (b) Find the probabilities for the branches emanating from the event nodes.
(c) Apply the backward induction procedure, and identify the resulting optimal policy.

16.4-11. The executive search being conducted for Western Bank by Headhunters Inc. may finally be bearing fruit. The position to be filled is a key one (Vice President for Information Processing) because this person will have responsibility for developing a state-of-the-art management information system that will link together Western’s many branch banks. However, Headhunters feels they have found just the right person, Matthew Fenton, who has an excellent record in a similar position for a midsized bank in New York. After a round of interviews, Western’s president believes that Matthew has a probability of 0.7 of designing the management information system successfully. If Matthew is successful, the company will realize a profit of $2 million (net of Matthew’s salary, training, recruiting costs, and expenses). If he is not successful, the company will realize a net loss of $400,000. For an additional fee of $20,000, Headhunters will provide a detailed investigative process (including an extensive background check, a battery of academic and psychological tests, etc.) that will further pinpoint Matthew’s potential for success. This process has been found to be 90 percent reliable; i.e., a candidate who would successfully design the management information system will pass the test with probability 0.9, and a candidate who would not successfully design the system will fail the test with probability 0.9. Western’s top management needs to decide whether to hire Matthew and whether to have Headhunters conduct the detailed investigative process before making this decision.
(a) Construct the decision tree for this problem.
T (b) Find the probabilities for the branches emanating from the event nodes.
(c) Analyze the decision tree to identify the optimal policy.
(d) Now suppose that the Headhunters’ fee for administering its detailed investigative process is negotiable. What is the maximum amount that Western Bank should pay?

A 16.5-1. Reconsider the original version of the Silicon Dynamics problem described in Prob. 16.2-2.
(a) Assuming the prior probabilities of the two levels of sales are both 0.5, use ASPE to construct and solve the decision tree for this problem. According to this analysis, which decision alternative should be chosen?
(b) Perform sensitivity analysis systematically by generating a data table that shows the optimal decision alternative and the expected payoff (when using Bayes’ decision rule) when the prior probability of selling 10,000 computers is 0, 0.1, 0.2, …, 1.

A 16.5-2. Now reconsider the expanded version of the Silicon Dynamics problem described in Probs. 16.3-2 and 16.4-2.
(a) Use ASPE to construct and solve the decision tree for this problem.
(b) Perform sensitivity analysis systematically by generating a data table that shows the optimal policy and the expected payoff (when using Bayes’ decision rule) when the prior probability of selling 10,000 computers is 0, 0.1, 0.2, …, 1.
(c) Assume now that the prior probabilities of the two levels of sales are both 0.5. However, there is some uncertainty in the financial data ($15 million, $6 million, and $600) stated in Prob. 16.2-2. Each could vary from its base value by as much as 10 percent. For each one, perform sensitivity analysis to find what would happen if its value were at either end of this range of variability (without any change in the other two pieces of data) by adjusting the values in the data cells accordingly. Then do the same for the eight cases where all these pieces of data are at one end or the other of their ranges of variability.

A 16.5-3. Reconsider the decision tree given in Prob. 16.4-3. Use ASPE to construct and solve this decision tree.

A 16.5-4. Reconsider Prob. 16.4-5.
Use ASPE to construct and solve the decision tree for this problem.

A 16.5-5. Reconsider Prob. 16.4-6. Use ASPE to construct and solve the decision tree for this problem.

A 16.5-6. Jose Morales manages a large outdoor fruit stand in one of the less affluent neighborhoods of San Jose, California. To replenish his supply, Jose buys boxes of fruit early each morning from a grower south of San Jose. About 90 percent of the boxes of fruit turn out to be of satisfactory quality, but the other 10 percent are unsatisfactory. A satisfactory box contains 80 percent excellent fruit and will earn $200 profit for Jose. An unsatisfactory box contains 30 percent excellent fruit and will produce a loss of $1,000. Before Jose decides to accept a box, he is given the opportunity to sample one piece of fruit to test whether it is excellent. Based on that sample, he then has the option of rejecting the box without paying for it. Jose wonders (1) whether he should continue buying from this grower, (2) if so, whether it is worthwhile sampling just one piece of fruit from a box, and (3) if so, whether he should be accepting or rejecting the box based on the outcome of this sampling. Use ASPE (and the Excel template for posterior probabilities) to construct and solve the decision tree for this problem.

A 16.5-7.* The Morton Ward Company is considering the introduction of a new product that is believed to have a 50-50 chance of being successful. One option is to try out the product in a test market, at a cost of $5 million, before making the introduction decision. Past experience shows that ultimately successful products are approved in the test market 80 percent of the time, whereas ultimately unsuccessful products are approved in the test market only 25 percent of the time. If the product is successful, the net profit to the company will be $40 million; if unsuccessful, the net loss will be $15 million.
(a) Discarding the option of trying out the product in a test market, develop a decision analysis formulation of the problem by identifying the decision alternatives, states of nature, and payoff table. Then apply Bayes’ decision rule to determine the optimal decision alternative.
(b) Find EVPI.
A (c) Now include the option of trying out the product in a test market. Use ASPE (and the Excel template for posterior probabilities) to construct and solve the decision tree for this problem.
(d) Perform sensitivity analysis systematically for the option considered in part (c) by generating a data table that shows the optimal policy and the expected payoff when the prior probability that the new product will be successful is 0, 0.1, 0.2, …, 1.
(e) Assume now that the prior probability that the new product will be successful is 0.5. However, there is some uncertainty in the stated profit and loss figures ($40 million and $15 million). Either could vary from its base by as much as 25 percent in either direction. Use ASPE calculations to generate a graph for each that plots the expected profit over this range of variability.

A 16.5-8. Chelsea Bush is an emerging candidate for her party’s nomination for President of the United States. She now is considering whether to run in the high-stakes Super Tuesday primaries. If she enters the Super Tuesday (S.T.) primaries, she and her advisers believe that she will either do well (finish first or second) or do poorly (finish third or worse) with probabilities 0.4 and 0.6, respectively. Doing well on Super Tuesday will net the candidate’s campaign approximately $16 million in new contributions, whereas a poor showing will mean a loss of $10 million after numerous TV ads are paid for. Alternatively, she may choose not to run at all on Super Tuesday and incur no costs. Chelsea’s advisers realize that her chances of success on Super Tuesday may be affected by the outcome of the smaller New Hampshire (N.H.)
primary occurring three weeks before Super Tuesday. Political analysts feel that the results of New Hampshire’s primary are correct two-thirds of the time in predicting the results of the Super Tuesday primaries. Among Chelsea’s advisers is a decision analysis expert who uses this information to calculate the following probabilities:

P{Chelsea does well in S.T. primaries, given she does well in N.H.} = 4/7
P{Chelsea does well in S.T. primaries, given she does poorly in N.H.} = 1/4
P{Chelsea does well in N.H. primary} = 7/15

The cost of entering and campaigning in the New Hampshire primary is estimated to be $1.6 million. Chelsea feels that her chance of winning the nomination depends largely on having substantial funds available after the Super Tuesday primaries to carry on a vigorous campaign the rest of the way. Therefore, she wants to choose the strategy (whether to run in the New Hampshire primary and then whether to run in the Super Tuesday primaries) that will maximize her expected funds after these primaries.
(a) Construct and solve the decision tree for this problem.
(b) Perform sensitivity analysis systematically by generating a data table that shows Chelsea’s optimal policy and expected payoff when the prior probability that she will do well in the New Hampshire primary is each of the following multiples of 1/15: 0, 1, 2, …, 15.
(c) Assume now that the prior probability that Chelsea will do well in the New Hampshire primary is indeed 7/15. However, there is some uncertainty in the estimates of a gain of $16 million or a loss of $10 million depending on the showing on Super Tuesday. Either amount could differ from this estimate by as much as 25 percent in either direction.
For each of these two financial figures, perform sensitivity analysis to check how the results in part (a) would change if the value of the financial figure were at either end of this range of variability (without any change in the value of the other financial figure). Then do the same for the four cases where both financial figures are at one end or the other of their ranges of variability.

16.6-1. Reconsider the Goferbroke Co. prototype example, including the application of utilities in Sec. 16.6. The owner now has decided that, given the company’s precarious financial situation, he needs to take a much more risk-averse approach to the problem. Therefore, he has revised the utilities given in Table 16.7 as follows: U(−130) = 0, U(−100) = 0.1, U(60) = 0.4, U(90) = 0.45, U(670) = 0.985, and U(700) = 1.
(a) Analyze the revised decision tree corresponding to Fig. 16.16 by hand to obtain the new optimal policy.
A (b) Use ASPE to construct and solve this revised decision tree.

16.6-2.* You live in an area that has a possibility of incurring a massive earthquake, so you are considering buying earthquake insurance on your home at an annual cost of $180. The probability of an earthquake damaging your home during one year is 0.001. If this happens, you estimate that the cost of the damage (fully covered by earthquake insurance) will be $160,000. Your total assets (including your home) are worth $250,000.
(a) Apply Bayes’ decision rule to determine which alternative (take the insurance or not) maximizes your expected assets after one year.
(b) You now have constructed a utility function that measures how much you value having total assets worth x dollars (x ≥ 0). This utility function is U(x) = √x. Compare the utility of reducing your total assets next year by the cost of the earthquake insurance with the expected utility next year of not taking the earthquake insurance. Should you take the insurance?

16.6-3.
For your graduation present from college, your parents are offering you your choice of two alternatives. The first alternative is to give you a money gift of $19,000. The second alternative is to make an investment in your name. This investment will quickly have the following two possible outcomes:

Outcome            Probability
Receive $10,000       0.3
Receive $30,000       0.7

Your utility for receiving M thousand dollars is given by the utility function U(M) = √(M + 6). Which choice should you make to maximize expected utility?

16.6-4.* Reconsider Prob. 16.6-3. You now are uncertain about what your true utility function for receiving money is, so you are in the process of constructing this utility function. So far, you have found that U(19) = 16.7 and U(30) = 20 are the utility of receiving $19,000 and $30,000, respectively. You also have concluded that you are indifferent between the two alternatives offered to you by your parents. Use this information to find U(10).

16.6-5. You wish to construct your personal utility function U(M) for receiving M thousand dollars. After setting U(0) = 0, you next set U(1) = 1 as your utility for receiving $1,000. You next want to find U(10) and then U(5).
(a) You offer yourself the following two hypothetical alternatives:
A1: Obtain $10,000 with probability p. Obtain 0 with probability (1 − p).
A2: Definitely obtain $1,000.
You then ask yourself the question: What value of p makes you indifferent between these two alternatives? Your answer is p = 0.125. Find U(10).
(b) You next repeat part (a) except for changing the second alternative to definitely receiving $5,000. The value of p that makes you indifferent between these two alternatives now is p = 0.5625. Find U(5).
(c) Repeat parts (a) and (b), but now use your personal choices for p.

16.6-6. You are given the following payoff table:

                    State of Nature
Alternative         S1     S2
A1                  25     36
A2                 100      0
A3                   0     49
Prior probability    p    1 − p

(a) Assume that your utility function for the payoffs is U(x) = √x.
Plot the expected utility of each alternative versus the value of p on the same graph. For each alternative, find the range of values of p over which this alternative maximizes the expected utility.
A (b) Now assume that your utility function is the exponential utility function with a risk tolerance of R = 50. Use ASPE to construct and solve the resulting decision tree in turn for p = 0.25, p = 0.5, and p = 0.75.

16.6-7. Dr. Switzer has a seriously ill patient but has had trouble diagnosing the specific cause of the illness. The doctor now has narrowed the cause down to two alternatives: disease A or disease B. Based on the evidence so far, she feels that the two alternatives are equally likely. Beyond the testing already done, there is no test available to determine if the cause is disease B. One test is available for disease A, but it has two major problems. First, it is very expensive. Second, it is somewhat unreliable, giving an accurate result only 80 percent of the time. Thus, it will give a positive result (indicating disease A) for only 80 percent of patients who have disease A, whereas it will give a positive result for 20 percent of patients who actually have disease B instead. Disease B is a very serious disease with no known treatment. It is sometimes fatal, and those who survive remain in poor health with a poor quality of life thereafter. The prognosis is similar for victims of disease A if it is left untreated. However, there is a fairly expensive treatment available that eliminates the danger for those with disease A, and it may return them to good health. Unfortunately, it is a relatively radical treatment that always leads to death if the patient actually has disease B instead. The probability distribution for the prognosis for this patient is given for each case in the following table, where the column headings (after the first one) indicate the disease for the patient.
                               Outcome Probabilities
                          No Treatment      Receive Treatment
                                              for Disease A
Outcome                     A      B           A       B
Die                        0.2    0.5          0      1.0
Survive with poor health   0.8    0.5          0.5     0
Return to good health       0      0           0.5     0

The patient has assigned the following utilities to the possible outcomes:

Outcome                     Utility
Die                            0
Survive with poor health      10
Return to good health         30

In addition, these utilities should be incremented by −2 if the patient incurs the cost of the test for disease A and by −1 if the patient (or the patient's estate) incurs the cost of the treatment for disease A. Use decision analysis with a complete decision tree to determine if the patient should undergo the test for disease A and then how to proceed (receive the treatment for disease A?) to maximize the patient's expected utility.

16.6-8. You want to choose between decision alternatives A1 and A2 in the following decision tree, but you are uncertain about the value of the probability p, so you need to perform sensitivity analysis of p as well.

[Figure for Prob. 16.6-8: the decision tree. Alternative A1 leads to chance nodes whose branch probabilities involve p (the values p, 1 − p, and 2p appear in the tree); alternative A2 leads to a chance node with branch probabilities 0.5 and 0.5. The payoff received is shown at the end of each branch.]

Your utility function for money (the payoff received) is

U(M) = M²    if M ≥ 0,
U(M) = −M²   if M < 0.

(a) For p = 0.25, determine which alternative is optimal in the sense that it maximizes the expected utility of the payoff.
(b) Determine the range of values of the probability p (0 ≤ p ≤ 0.5) for which this same alternative remains optimal.

CHAPTER 16 DECISION ANALYSIS

■ CASES

CASE 16.1 Brainy Business

While El Niño is pouring its rain on northern California, Charlotte Rothstein, CEO, major shareholder and founder of Cerebrosoft, sits in her office, contemplating the decision she faces regarding her company's newest proposed product, Brainet. This has been a particularly difficult decision. Brainet might catch on and sell very well. However, Charlotte is concerned about the risk involved. In this competitive market, marketing Brainet also could lead to substantial losses. Should she go ahead anyway and start the marketing campaign? Or just abandon the product?
Or perhaps buy additional marketing research information from a local market research company before deciding whether to launch the product? She has to make a decision very soon and so, as she slowly drinks from her glass of high protein-power multivitamin juice, she reflects on the events of the past few years. Cerebrosoft was founded by Charlotte and two friends after they had graduated from business school. The company is located in the heart of Silicon Valley. Charlotte and her friends managed to make money in their second year in business and continued to do so every year since. Cerebrosoft was one of the first companies to sell software over the Internet and to develop PC-based software tools for the multimedia sector. Two of the products generate 80 percent of the company’s revenues: Audiatur and Videatur. Each product has sold more than 100,000 units during the past year. Business is done over the Internet: customers can download a trial version of the software, test it, and if they are satisfied with what they see, they can purchase the product (by using a password that enables them to disable the time counter in the trial version). Both products are priced at $75.95 and are exclusively sold over the Internet. Users can “surf the Web,” accessing information available world wide. Users can also make files available on the Internet, and this is how Cerebrosoft generates its sales. Selling software over the Internet eliminates many of the traditional cost factors of consumer products: packaging, storage, distribution, sales force, and so on. Instead, potential customers can download a trial version, take a look at it (that is, use the product) before its trial period expires, and then decide whether to buy it. Furthermore, Cerebrosoft can always make the most recent files available to the customer, avoiding the problem of having outdated software in the distribution pipeline. Charlotte is interrupted in her thoughts by the arrival of Jeannie Korn. 
Jeannie is in charge of marketing for on-line products and Brainet has had her particular attention from the beginning. She is more than ready to provide the advice that Charlotte has requested. “Charlotte, I think we should really go ahead with Brainet. The software engineers have convinced me that the current version is robust and we want to be on the market with this as soon as possible! From the data for our product launches during the past two years we can get a rather reliable estimate of how the market will respond to the new product, don’t you think? And look!” She pulls out some presentation slides. “During that time period we launched 12 new products altogether and 4 of them sold more than 30,000 units during the first 6 months alone! Even better: the last two we launched even sold more than 40,000 copies during the first two quarters!” Charlotte knows these numbers as well as Jeannie does. After all, two of these launches have been products she herself helped to develop. But she feels uneasy about this particular product launch. The company has grown rapidly during the past three years and its financial capabilities are already rather stretched. A poor product launch for Brainet would cost the company a lot of money, something that isn’t available right now due to the investments Cerebrosoft has recently made. Later in the afternoon, Charlotte meets with Reggie Ruffin, a jack-of-all-trades and the production manager. Reggie has a solid track record in his field and Charlotte wants his opinion on the Brainet project. “Well, Charlotte, quite frankly I think that there are three main factors that are relevant to the success of this project: competition, units sold, and cost—ah, and of course our pricing. Have you decided on the price yet?” “I am still considering which of the three strategies would be most beneficial to us. Selling for $50.00 and trying to maximize revenues—or selling for $30.00 and trying to maximize market share. 
Of course, there is still your third alternative; we could sell for $40.00 and try to do both." At this point Reggie focuses on the sheet of paper in front of him. "And I still believe that the $40.00 alternative is the best one. Concerning the costs, I checked the records; basically we have to amortize the development costs we incurred for Brainet. So far we have spent $800,000 and we expect to spend another $50,000 per year for support and shipping the CDs to those who want a hard copy on top of their downloaded software."

Reggie next hands a report to Charlotte. "Here we have some data on the industry. I just received that yesterday, hot off the press. Let's see what we can learn about the industry here." He shows Charlotte some of the highlights. Reggie then agrees to compile the most relevant information contained in the report and have it ready for Charlotte the following morning. It takes him long into the night to gather the data from the pages of the report, but in the end he produces three tables, one for each of the three alternative pricing strategies. Each table shows the corresponding probability of various amounts of sales given the level of competition (high, medium, or low) that develops from other companies.

The next morning Charlotte is sipping from another power drink. Jeannie and Reggie will be in her office any moment now and, with their help, she will have to decide what to do with Brainet. Should they launch the product? If so, at what price?

When Jeannie and Reggie enter the office, Jeannie immediately bursts out: "Guys, I just spoke to our marketing research company. They say that they could do a study for us about the competitive situation for the introduction of Brainet and deliver the results within a week."

"How much do they want for the study?"

"I knew you'd ask that, Reggie. They want $10,000 and I think it's a fair deal."

At this point Charlotte steps into the conversation.
"Do we have any data on the quality of the work of this marketing research company?"

"Yes, I do have some reports here. After analyzing them, I have come to the conclusion that the marketing research company is not very good in predicting the competitive environment for medium or low pricing. Therefore, we should not ask them to do the study for us if we decide on one of these two pricing strategies. However, in the case of high pricing, they do quite well: given that the competition turned out to be high, they predicted it correctly 80 percent of the time, while 15 percent of the time they predicted medium competition in that setting. Given that the competition turned out to be medium, they predicted high competition 15 percent of the time and medium competition 80 percent of the time. Finally, for the case of low competition, the numbers were 90 percent of the time a correct prediction, 7 percent of the time a 'medium' prediction and 3 percent of the time a 'high' prediction."

Charlotte feels that all these numbers are too much for her. "Don't we have a simple estimate of how the market will react?"

"Some prior probabilities, you mean?

■ TABLE 1 Probability distribution of unit sales, given a high price ($50)

                    Level of Competition
Sales             High    Medium     Low
50,000 units      0.2      0.25      0.3
30,000 units      0.25     0.3       0.35
20,000 units      0.55     0.45      0.35

■ TABLE 2 Probability distribution of unit sales, given a medium price ($40)

                    Level of Competition
Sales             High    Medium     Low
50,000 units      0.25     0.30      0.40
30,000 units      0.35     0.40      0.50
20,000 units      0.40     0.30      0.10

■ TABLE 3 Probability distribution of unit sales, given a low price ($30)

                    Level of Competition
Sales             High    Medium     Low
50,000 units      0.35     0.40      0.50
30,000 units      0.40     0.50      0.45
20,000 units      0.25     0.10      0.05
Sure, from our past experience, the likelihood of facing high competition is 20 percent, whereas it is 70 percent for medium competition and 10 percent for low competition,” Jeannie has her numbers always ready when needed. All that is left to do now is to sit down and make sense of all this. . . . (a) For the initial analysis, ignore the opportunity of obtaining more information by hiring the marketing research company. Identify the decision alternatives and the states of nature. Construct the payoff table. Then formulate the decision problem in a decision tree. Clearly distinguish between decision and event nodes and include all the relevant data. (b) What is Charlotte’s decision if she uses the maximum likelihood criterion? The maximin payoff criterion? (c) What is Charlotte’s decision if she uses Bayes’ decision rule? (d) Now consider the possibility of doing the market research. Develop the corresponding decision tree. Calculate the relevant probabilities and analyze the decision tree. Should Cerebrosoft pay the $10,000 for the marketing research? What is the overall optimal policy? ■ PREVIEW OF ADDED CASES ON OUR WEBSITE (www.mhhe.com/hillier) CASE 16.2 Smart Steering Support The CEO of Bay Area Automobile Gadgets is contemplating whether to add a road scanning device to the company’s driver support system. A series of decisions need to be made. Should basic research into the road scanning device be undertaken? If the research is successful, should the company develop the product or sell the technology? In the case of successful product development, should the company market the product or sell the product concept? Decision analysis needs to be applied to address these issues. Part of the analysis will involve using the CEO’s utility function. CASE 16.3 Who Wants to be a Millionaire? You are a contestant on “Who Wants to be a Millionaire?” and have just answered the $250,000 question correctly. 
If you decide to go on to the $500,000 question and then to the $1,000,000 question, you still have the option available of using the "phone a friend" lifeline on one of the questions to improve your chances of answering correctly. You now want to use decision analysis (including a decision tree and utility theory) to decide how to proceed.

CASE 16.4 University Toys and the Engineering Professor Action Figures

University Toys has developed a series of Engineering Professor Action Figures for the local engineering school and management needs to decide how to market the dolls in the face of uncertainty about the demand. One option is to immediately ramp up for full production, advertising, and sales. Another option is to test-market the product first. A complication with this option is a rumor that a competitor is about to enter the market with a similar product. Decision analysis (including a decision tree and sensitivity analysis) now needs to be used to decide how to proceed.

CHAPTER 17 Queueing Theory

Queues (waiting lines) are a part of everyday life. We all wait in queues to buy a movie ticket, make a bank deposit, pay for groceries, mail a package, obtain food in a cafeteria, start a ride in an amusement park, etc. We have become accustomed to considerable amounts of waiting, but still get annoyed by unusually long waits. However, having to wait is not just a petty personal annoyance. The amount of time that a nation's populace wastes by waiting in queues is a major factor in both the quality of life there and the efficiency of the nation's economy. Great inefficiencies also occur because of other kinds of waiting than people standing in line. For example, making machines wait to be repaired may result in lost production. Vehicles (including ships and trucks) that need to wait to be unloaded may delay subsequent shipments. Airplanes waiting to take off or land may disrupt later travel schedules.
Delays in telecommunication transmissions due to saturated lines may cause data glitches. Causing manufacturing jobs to wait to be performed may disrupt subsequent production. Delaying service jobs beyond their due dates may result in lost future business. Queueing theory is the study of waiting in all these various guises. It uses queueing models to represent the various types of queueing systems (systems that involve queues of some kind) that arise in practice. Formulas for each model indicate how the corresponding queueing system should perform, including the average amount of waiting that will occur, under a variety of circumstances. Therefore, these queueing models are very helpful for determining how to operate a queueing system in the most effective way. Providing too much service capacity to operate the system involves excessive costs. But not providing enough service capacity results in excessive waiting and all its unfortunate consequences. The models enable finding an appropriate balance between the cost of service and the amount of waiting. After some general discussion, this chapter presents most of the more elementary queueing models and their basic results. Section 17.10 discusses how the information provided by queueing theory can be used to design queueing systems that minimize the total cost of service and waiting, and then Chap. 26 (on the book's website) elaborates considerably further on the application of queueing theory in this way.

■ 17.1 PROTOTYPE EXAMPLE

The emergency room of COUNTY HOSPITAL provides quick medical care for emergency cases brought to the hospital by ambulance or private automobile. At any hour there is always one doctor on duty in the emergency room. However, because of a growing tendency for emergency cases to use these facilities rather than go to a private physician, the hospital has been experiencing a continuing increase in the number of emergency room visits each year.
As a result, it has become quite common for patients arriving during peak usage hours (the early evening) to have to wait until it is their turn to be treated by the doctor. Therefore, a proposal has been made that a second doctor should be assigned to the emergency room during these hours, so that two emergency cases can be treated simultaneously. The hospital's management engineer has been assigned to study this question. The management engineer began by gathering the relevant historical data and then projecting these data into the next year. Recognizing that the emergency room is a queueing system, she applied several alternative queueing theory models to predict the waiting characteristics of the system with one doctor and with two doctors, as you will see in the latter sections of this chapter (see Tables 17.2 and 17.3).

■ 17.2 BASIC STRUCTURE OF QUEUEING MODELS

The Basic Queueing Process

The basic process assumed by most queueing models is the following. Customers requiring service are generated over time by an input source. These customers enter the queueing system and join a queue if service is not immediately available. At certain times, a member of the queue is selected for service by some rule known as the queue discipline. The required service is then performed for the customer by the service mechanism, after which the customer leaves the queueing system. This process is depicted in Fig. 17.1. Many alternative assumptions can be made about the various elements of the queueing process; they are discussed next.

Input Source (Calling Population)

One characteristic of the input source is its size. The size is the total number of customers that might require service from time to time, i.e., the total number of distinct potential customers. This population from which arrivals come is referred to as the calling population. The size may be assumed to be either infinite or finite (so that the input source also is said to be either unlimited or limited).
Because the calculations are far easier for the infinite case, this assumption often is made even when the actual size is some relatively large finite number; and it should be taken to be the implicit assumption for any queueing model that does not state otherwise. The finite case is more difficult analytically because the number of customers in the queueing system affects the number of potential customers outside the system at any time. However, the finite assumption must be made if the rate at which the input source generates new customers is significantly affected by the number of customers in the queueing system.

[■ FIGURE 17.1 The basic queueing process: an input source supplies customers to the queueing system, where they wait in a queue until the service mechanism serves them, after which they leave as served customers.]

The statistical pattern by which customers are generated over time must also be specified. The common assumption is that they are generated according to a Poisson process; i.e., the number of customers generated until any specific time has a Poisson distribution. As we discuss in Sec. 17.4, this case is the one where arrivals to the queueing system occur randomly but at a certain fixed mean rate, regardless of how many customers already are there (so the size of the input source is infinite). An equivalent assumption is that the probability distribution of the time between consecutive arrivals is an exponential distribution. (The properties of this distribution are described in Sec. 17.4.) The time between consecutive arrivals is referred to as the interarrival time. Any unusual assumptions about the behavior of arriving customers must also be specified. One example is balking, where the customer refuses to enter the system and is lost if the queue is too long.

Queue

The queue is where customers wait before being served. A queue is characterized by the maximum permissible number of customers that it can contain.
Queues are called infinite or finite, according to whether this number is infinite or finite. The assumption of an infinite queue is the standard one for most queueing models, even for situations where there actually is a (relatively large) finite upper bound on the permissible number of customers, because dealing with such an upper bound would be a complicating factor in the analysis. However, for queueing systems where this upper bound is small enough that it actually would be reached with some frequency, it becomes necessary to assume a finite queue.

Queue Discipline

The queue discipline refers to the order in which members of the queue are selected for service. For example, it may be first-come-first-served, random, according to some priority procedure, or some other order. First-come-first-served usually is assumed by queueing models, unless it is stated otherwise.

Service Mechanism

The service mechanism consists of one or more service facilities, each of which contains one or more parallel service channels, called servers. If there is more than one service facility, the customer may receive service from a sequence of these (service channels in series). At a given facility, the customer enters one of the parallel service channels and is completely serviced by that server. A queueing model must specify the arrangement of the facilities and the number of servers (parallel channels) at each one. Most elementary models assume one service facility with either one server or a finite number of servers. The time elapsed from the commencement of service to its completion for a customer at a service facility is referred to as the service time (or holding time). A model of a particular queueing system must specify the probability distribution of service times for each server (and possibly for different types of customers), although it is common to assume the same distribution for all servers (all models in this chapter make this assumption).
The service-time distribution that is most frequently assumed in practice (largely because it is far more tractable than any other) is the exponential distribution discussed in Sec. 17.4, and most of our models will be of this type. Other important service-time distributions are the degenerate distribution (constant service time) and the Erlang (gamma) distribution, as illustrated by models in Sec. 17.7.

An Elementary Queueing Process

As we have already suggested, queueing theory has been applied to many different types of waiting-line situations. However, the most prevalent type of situation is the following: A single waiting line (which may be empty at times) forms in the front of a single service facility, within which are stationed one or more servers. Each customer generated by an input source is serviced by one of the servers, perhaps after some waiting in the queue (waiting line). The queueing system involved is depicted in Fig. 17.2. Notice that the queueing process in the prototype example of Sec. 17.1 is of this type. The input source generates customers in the form of emergency cases requiring medical care. The emergency room is the service facility, and the doctors are the servers. A server need not be a single individual; it may be a group of persons, e.g., a repair crew that combines forces to perform simultaneously the required service for a customer. Furthermore, servers need not even be people. In many cases, a server can instead be a machine, a vehicle, an electronic device, etc. By the same token, the customers in the waiting line need not be people. For example, they may be items waiting for a certain operation by a given type of machine, or they may be cars waiting in front of a tollbooth. It is not necessary that there actually be a physical waiting line forming in front of a physical structure that constitutes the service facility.
The members of the queue may instead be scattered throughout an area, waiting for a server to come to them, e.g., machines waiting to be repaired. The server or group of servers assigned to a given area constitutes the service facility for that area. Queueing theory still gives the average number waiting, the average waiting time, and so on, because it is irrelevant whether the customers wait together in a group. The only essential requirement for queueing theory to be applicable is that changes in the number of customers waiting for a given service occur just as though the physical situation described in Fig. 17.2 (or a legitimate counterpart) prevailed.

[■ FIGURE 17.2 An elementary queueing system (each customer is indicated by a C and each server by an S): customers wait in a single queue in front of a service facility containing parallel servers, and leave as served customers.]

Except for Sec. 17.9, all the queueing models discussed in this chapter are of the elementary type depicted in Fig. 17.2. Many of these models further assume that all interarrival times are independent and identically distributed and that all service times are independent and identically distributed. Such models conventionally are labeled as follows:

Distribution of interarrival times / Distribution of service times / Number of servers,

where

M = exponential distribution (Markovian), as described in Sec. 17.4,
D = degenerate distribution (constant times), as discussed in Sec. 17.7,
Ek = Erlang distribution (shape parameter = k), as described in Sec. 17.7,
G = general distribution (any arbitrary distribution allowed),¹ as discussed in Sec. 17.7.

¹When we refer to interarrival times, it is conventional to replace the symbol G by GI = general independent distribution.

For example, the M/M/s model discussed in Sec. 17.6 assumes that both interarrival times and service times have an exponential distribution and that the number of servers is s (any positive integer). The M/G/1 model discussed in Sec.
17.7 assumes that interarrival times have an exponential distribution, but it places no restriction on what the distribution of service times must be, whereas the number of servers is restricted to be exactly 1. Various other models that fit this labeling scheme also are introduced in Sec. 17.7.

Terminology and Notation

Unless otherwise noted, the following standard terminology and notation will be used:

State of system = number of customers in queueing system.
Queue length = number of customers waiting for service to begin
             = state of system minus number of customers being served.
N(t) = number of customers in queueing system at time t (t ≥ 0).
Pn(t) = probability of exactly n customers in queueing system at time t, given number at time 0.
s = number of servers (parallel service channels) in queueing system.
λn = mean arrival rate (expected number of arrivals per unit time) of new customers when n customers are in system.
μn = mean service rate for overall system (expected number of customers completing service per unit time) when n customers are in system. Note: μn represents combined rate at which all busy servers (those serving customers) achieve service completions.
λ, μ, ρ = see following paragraph.

When λn is a constant for all n, this constant is denoted by λ. When the mean service rate per busy server is a constant for all n ≥ 1, this constant is denoted by μ. (In this case, μn = sμ when n ≥ s, that is, when all s servers are busy.) Under these circumstances, 1/λ and 1/μ are the expected interarrival time and the expected service time, respectively. Also, ρ = λ/(sμ) is the utilization factor for the service facility, i.e., the expected fraction of
time the individual servers are busy, because λ/(sμ) represents the fraction of the system's service capacity (sμ) that is being utilized on the average by arriving customers (λ).

Certain notation also is required to describe steady-state results. When a queueing system has recently begun operation, the state of the system (number of customers in the system) will be greatly affected by the initial state and by the time that has since elapsed. The system is said to be in a transient condition. However, after sufficient time has elapsed, the state of the system becomes essentially independent of the initial state and the elapsed time (except under unusual circumstances).² The system has now essentially reached a steady-state condition, where the probability distribution of the state of the system remains the same (the steady-state or stationary distribution) over time. Queueing theory has tended to focus largely on the steady-state condition, partially because the transient case is more difficult analytically. (Some transient results exist, but they are generally beyond the technical scope of this book.) The following notation assumes that the system is in a steady-state condition:

Pn = probability of exactly n customers in queueing system.
L = expected number of customers in queueing system = Σ nPn, summed over n = 0, 1, 2, . . . .
Lq = expected queue length (excludes customers being served) = Σ (n − s)Pn, summed over n = s, s + 1, . . . .
𝒲 = waiting time in system (includes service time) for each individual customer.
W = E(𝒲).
𝒲q = waiting time in queue (excludes service time) for each individual customer.
Wq = E(𝒲q).

Relationships between L, W, Lq, and Wq

Assume that λn is a constant λ for all n. It has been proved that in a steady-state queueing process,

L = λW.

(Because John D. C. Little provided the first rigorous proof, this equation sometimes is referred to as Little's formula.) Furthermore, the same proof also shows that

Lq = λWq.
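These relationships can be illustrated with a quick numerical sketch. The snippet below uses the standard single-server (M/M/1) steady-state results that are derived later in the chapter (Sec. 17.6), with hypothetical arrival and service rates chosen only for illustration, and then verifies the formulas just stated.

```python
# Numerical check of L = lam*W, Lq = lam*Wq, and W = Wq + 1/mu for a
# single-server queue, using the standard M/M/1 steady-state results
# derived in Sec. 17.6. The rates below are hypothetical.

lam = 2.0   # mean arrival rate (customers per hour)
mu = 3.0    # mean service rate (customers per hour)
rho = lam / mu          # utilization factor (must be < 1 for steady state)

L = lam / (mu - lam)    # expected number in system (M/M/1 result)
W = 1.0 / (mu - lam)    # expected time in system (M/M/1 result)
Wq = W - 1.0 / mu       # expected wait in queue, from W = Wq + 1/mu
Lq = lam * Wq           # expected queue length via Little's formula

print(rho, L, W, Wq, Lq)
assert abs(L - lam * W) < 1e-12      # Little's formula: L = lam * W
assert abs(Lq - (L - rho)) < 1e-12   # consistency: Lq = L - rho here
```

With these rates the four quantities follow from any one of them: W = 1 hour immediately gives L = 2 customers, Wq = 2/3 hour, and Lq = 4/3 customers, which is exactly the convenience the text describes.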
If the λn are not equal, then λ can be replaced in these equations by λ̄, the average arrival rate over the long run. (We shall show later how λ̄ can be determined for some basic cases.) Now assume that the mean service time is a constant, 1/μ for all n ≥ 1. It then follows that

W = Wq + 1/μ.

These relationships are extremely important because they enable all four of the fundamental quantities—L, W, Lq, and Wq—to be immediately determined as soon as one is found analytically. This situation is fortunate because some of these quantities often are much easier to find than others when a queueing model is solved from basic principles.

²When λ and μ are defined, these unusual circumstances are that ρ ≥ 1, in which case the state of the system tends to grow continually larger as time goes on.

■ 17.3 EXAMPLES OF REAL QUEUEING SYSTEMS

Our description of queueing systems in Sec. 17.2 may appear relatively abstract and applicable to only rather special practical situations. On the contrary, queueing systems are surprisingly prevalent in a wide variety of contexts. To broaden your horizons on the applicability of queueing theory, we shall briefly mention various examples of real queueing systems that fall into several broad categories. We then will describe queueing systems in several prominent companies (plus one city) and the award-winning studies that were conducted to design these systems.

Some Classes of Queueing Systems

One important class of queueing systems that we all encounter in our daily lives is commercial service systems, where outside customers receive service from commercial organizations. Many of these involve person-to-person service at a fixed location, such as a barber shop (the barbers are the servers), bank teller service, checkout stands at a grocery store, and a cafeteria line (service channels in series).
However, many others do not, such as home appliance repairs (the server travels to the customers), a vending machine (the server is a machine), and a gas station (the cars are the customers). Another important class is transportation service systems. For some of these systems the vehicles are the customers, such as cars waiting at a tollbooth or traffic light (the server), a truck or ship waiting to be loaded or unloaded by a crew (the server), and airplanes waiting to land or take off from a runway (the server). (An unusual example of this kind is a parking lot, where the cars are the customers and the parking spaces are the servers, but there is no queue because arriving customers go elsewhere to park if the lot is full.) In other cases, the vehicles, such as taxicabs, fire trucks, and elevators, are the servers. In recent years, queueing theory probably has been applied most to internal service systems, where the customers receiving service are internal to the organization. Examples include materials-handling systems, where materials-handling units (the servers) move loads (the customers); maintenance systems, where maintenance crews (the servers) repair machines (the customers); and inspection stations, where quality control inspectors (the servers) inspect items (the customers). Employee facilities and departments servicing employees also fit into this category. In addition, machines can be viewed as servers whose customers are the jobs being processed. A related example is a computer laboratory, where each computer is viewed as the server. There is now growing recognition that queueing theory also is applicable to social service systems. For example, a judicial system is a queueing network, where the courts are service facilities, the judges (or panels of judges) are the servers, and the cases waiting to be tried are the customers. A legislative system is a similar queueing network, where the customers are the bills waiting to be processed. 
Various health-care systems also are queueing systems. You already have seen one example in Sec. 17.1 (a hospital emergency room), but you can also view ambulances, X-ray machines, and hospital beds as servers in their own queueing systems. Similarly, families waiting for low- and moderate-income housing, or other social services, can be viewed as customers in a queueing system.

Although these are four broad classes of queueing systems, they still do not exhaust the list. In fact, queueing theory first began early in the 20th century with applications to telephone engineering (the founder of queueing theory, A. K. Erlang, was an employee of the Danish Telephone Company in Copenhagen), and telephone engineering still is an important application. Furthermore, we all have our own personal queues: homework assignments, books to be read, and so forth. However, these examples are sufficient to suggest that queueing systems do indeed pervade many areas of society.

Some Award-Winning Studies to Design Queueing Systems

The prestigious Franz Edelman Awards for Achievement in Operations Research and the Management Sciences are awarded annually by the Institute for Operations Research and the Management Sciences (INFORMS) for the year's best applications of OR. A rather substantial number of these awards have been given for innovative applications of queueing theory to the design of queueing systems. Two of these award-winning applications of queueing theory are described in application vignettes later in this chapter (Secs. 17.6 and 17.9). The selected references at the end of the chapter also include a sampling of articles describing some other award-winning applications. (A link to all these articles, including those for the application vignettes, is provided on the book's website.) We briefly describe below a few of these other applications of queueing theory that now are considered classics in the field.
As described in Selected Reference A1, one of the early first-prize winners of the Edelman competition was the Xerox Corporation. The company had recently introduced a major new duplicating system that was proving to be particularly valuable for its owners. Consequently, these customers were demanding that Xerox's tech reps reduce the waiting times to repair the machines. An OR team then applied queueing theory to study how to best meet the new service requirements. This resulted in replacing the previous one-person tech rep territories by larger three-person tech rep territories. This change had the dramatic effect of both substantially reducing the average waiting times of the customers and increasing the utilization of the tech reps by over 50 percent. (Chapter 11 of Selected Reference 9 presents a case study that is based on this application of queueing theory by the Xerox Corporation.)

L.L. Bean, Inc., the large telemarketer and mail-order catalog house, relied mainly on queueing theory for its award-winning study of how to allocate its telecommunications resources that is described in Selected Reference A5. The telephone calls coming in to its call center to place orders are the customers in a large queueing system, with the telephone agents as the servers. The key questions being asked during the study were the following:

1. How many telephone trunk lines should be provided for incoming calls to the call center?
2. How many telephone agents should be scheduled at various times?
3. How many hold positions should be provided for customers waiting for a telephone agent? (Note that the limited number of hold positions causes the system to have a finite queue.)

For each interesting combination of these three quantities, queueing models provide the measures of performance of the queueing system. Given these measures, the OR team carefully assessed the cost of lost sales due to making some customers either incur a busy signal or be placed on hold too long.
By adding the cost of the telemarketing resources, the team then was able to find the combination of the three quantities that minimizes the expected total cost. This resulted in cost savings of $9 to $10 million per year.

Another first prize in the Edelman competition was won by AT&T for a study that combined the use of queueing theory and simulation (the subject of Chap. 20). As described in Selected Reference A2, the queueing models are of both AT&T's telecommunication network and the call center environment for the typical business customers of AT&T that have such a center. The purpose of the study was to develop a user-friendly PC-based system that AT&T's business customers can use to guide them in how to design or redesign their call centers. Since call centers have been one of the United States' fastest-growing industries, this system had been used about 2,000 times by AT&T's business customers by the time of the article. This resulted in more than $750 million in annual profit for these customers.

Hewlett-Packard (HP) is a leading multinational manufacturer of electronic equipment. Some years ago, the company installed a mechanized assembly-line system for manufacturing ink-jet printers at its plant in Vancouver, Washington, to meet the exploding demand for such printers. It soon became apparent that the system installed would not be fast enough or reliable enough to meet the company's production goals. Therefore, a joint team of management scientists from HP and the Massachusetts Institute of Technology (MIT) was formed to study how to redesign the system to improve its performance. As described in Selected Reference A4 for this award-winning study, the HP/MIT team quickly realized that the assembly-line system could be modeled as a special kind of queueing system where the customers (the printers to be assembled) go through a series of servers (assembly operations) in a fixed sequence.
A special queueing model for this kind of system quickly provided the analytical results that were needed to determine how the system should be redesigned to achieve the required capacity in the most economical way. The changes included adding some buffer storage space at strategic points to better maintain the flow of work to the subsequent stations and to dampen the effect of machine failures. The new design increased productivity about 50 percent and yielded incremental revenues of approximately $280 million in printer sales as well as additional revenue from ancillary products. This innovative application of the special queueing model also provided HP with a new method for creating rapid and effective system designs subsequently in other areas of the company.

■ 17.4 THE ROLE OF THE EXPONENTIAL DISTRIBUTION

The operating characteristics of queueing systems are determined largely by two statistical properties, namely, the probability distribution of interarrival times (see "Input Source" in Sec. 17.2) and the probability distribution of service times (see "Service Mechanism" in Sec. 17.2). For real queueing systems, these distributions can take on almost any form. (The only restriction is that negative values cannot occur.) However, to formulate a queueing theory model as a representation of the real system, it often is necessary to specify the assumed form of each of these distributions. To be useful, the assumed form should be sufficiently realistic that the model provides reasonable predictions while, at the same time, being sufficiently simple that the model is mathematically tractable. Based on these considerations, the most important probability distribution in queueing theory is the exponential distribution.

Suppose that a random variable T represents either interarrival or service times. (We shall refer to the occurrences marking the end of these times, arrivals or service completions, as events.)
This random variable is said to have an exponential distribution with parameter α if its probability density function is

fT(t) = αe^(−αt)   for t ≥ 0,
fT(t) = 0          for t < 0,

as shown in Fig. 17.3. In this case, the cumulative probabilities are

P{T ≤ t} = 1 − e^(−αt),
P{T > t} = e^(−αt)    (t ≥ 0),

and the expected value and variance of T are, respectively,

E(T) = 1/α,    var(T) = 1/α².

What are the implications of assuming that T has an exponential distribution for a queueing model? To explore this question, let us examine six key properties of the exponential distribution.

Property 1: fT(t) is a strictly decreasing function of t (t ≥ 0).

One consequence of Property 1 is that

P{0 ≤ T ≤ Δt} > P{t ≤ T ≤ t + Δt}

for any strictly positive values of Δt and t. [This consequence follows from the fact that these probabilities are the area under the fT(t) curve over the indicated interval of length Δt, and the average height of the curve is less for the second probability than for the first.] Therefore, it is not only possible but also relatively likely that T will take on a small value near zero. In fact,

P{0 ≤ T ≤ (1/2)(1/α)} = 0.393

whereas

P{(1/2)(1/α) ≤ T ≤ (3/2)(1/α)} = 0.383,

so that the value T takes on is more likely to be "small" [i.e., less than half of E(T)] than "near" its expected value [i.e., no further away than half of E(T)], even though the second interval is twice as wide as the first.

[Figure 17.3: Probability density function for the exponential distribution. The curve starts at height α at t = 0 and decreases toward 0 as t grows, with E(T) = 1/α marked on the t axis.]

Is this really a reasonable property for T in a queueing model? If T represents service times, the answer depends upon the general nature of the service involved, as discussed next. If the service required is essentially identical for each customer, with the server always performing the same sequence of service operations, then the actual service times tend to be near the expected service time.
Small deviations from the mean may occur, but usually because of only minor variations in the efficiency of the server. A small service time far below the mean is essentially impossible, because a certain minimum time is needed to perform the required service operations even when the server is working at top speed. The exponential distribution clearly does not provide a close approximation to the service-time distribution for this type of situation. On the other hand, consider the type of situation where the specific tasks required of the server differ among customers. The broad nature of the service may be the same, but the specific type and amount of service differ. For example, this is the case in the County Hospital emergency room problem discussed in Sec. 17.1. The doctors encounter a wide variety of medical problems. In most cases, they can provide the required treatment rather quickly, but an occasional patient requires extensive care. Similarly, bank tellers and grocery store checkout clerks are other servers of this general type, where the required service is often brief but must occasionally be extensive. An exponential service-time distribution would seem quite plausible for this type of service situation. If T represents interarrival times, Property 1 rules out situations where potential customers approaching the queueing system tend to postpone their entry if they see another customer entering ahead of them. On the other hand, it is entirely consistent with the common phenomenon of arrivals occurring “randomly,” described by subsequent properties. Thus, when arrival times are plotted on a time line, they sometimes have the appearance of being clustered with occasional large gaps separating clusters, because of the substantial probability of small interarrival times and the small probability of large interarrival times, but such an irregular pattern is all part of true randomness. Property 2: Lack of memory. 
This property can be stated mathematically as

P{T > t + Δt | T > Δt} = P{T > t}

for any positive quantities t and Δt. In other words, the probability distribution of the remaining time until the event (arrival or service completion) occurs always is the same, regardless of how much time (Δt) already has passed. In effect, the process "forgets" its history. This surprising phenomenon occurs with the exponential distribution because

P{T > t + Δt | T > Δt} = P{T > Δt, T > t + Δt} / P{T > Δt}
                       = P{T > t + Δt} / P{T > Δt}
                       = e^(−α(t + Δt)) / e^(−αΔt)
                       = e^(−αt)
                       = P{T > t}.

For interarrival times, this property describes the common situation where the time until the next arrival is completely uninfluenced by when the last arrival occurred. For service times, the property is more difficult to interpret. We should not expect it to hold in a situation where the server must perform the same fixed sequence of operations for each customer, because then a long elapsed service time should imply that probably little remains to be done. However, in the type of situation where the required service operations differ among customers, the mathematical statement of the property may be quite realistic. For this case, if considerable service has already elapsed for a customer, the only implication may be that this particular customer requires more extensive service than most.

Property 3: The minimum of several independent exponential random variables has an exponential distribution.

To state this property mathematically, let T1, T2, . . . , Tn be independent exponential random variables with parameters α1, α2, . . . , αn, respectively. Also let U be the random variable that takes on the value equal to the minimum of the values actually taken on by T1, T2, . . . , Tn; that is,

U = min {T1, T2, . . . , Tn}.

Thus, if Ti represents the time until a particular kind of event occurs, then U represents the time until the first of the n different events occurs.
Now note that for any t ≥ 0,

P{U > t} = P{T1 > t, T2 > t, . . . , Tn > t}
         = P{T1 > t}P{T2 > t} ⋅⋅⋅ P{Tn > t}
         = e^(−α1t) e^(−α2t) ⋅⋅⋅ e^(−αnt)
         = exp(−(α1 + α2 + ⋅⋅⋅ + αn)t),

so that U indeed has an exponential distribution with parameter

α = α1 + α2 + ⋅⋅⋅ + αn.

This property has some implications for interarrival times in queueing models. In particular, suppose that there are several (n) different types of customers, but the interarrival times for each type (type i) have an exponential distribution with parameter αi (i = 1, 2, . . . , n). By Property 2, the remaining time from any specified instant until the next arrival of a customer of type i has this same distribution. Therefore, let Ti be this remaining time, measured from the instant a customer of any type arrives. Property 3 then tells us that U, the interarrival time for the queueing system as a whole, has an exponential distribution with parameter α defined by the last equation. As a result, you can choose to ignore the distinction between types of customers and still have exponential interarrival times for the queueing model.

However, the implications are even more important for service times in multiple-server queueing models than for interarrival times. For example, consider the situation where all the servers have the same exponential service-time distribution with parameter μ. For this case, let n be the number of servers currently providing service, and let Ti be the remaining service time for server i (i = 1, 2, . . . , n), which also has an exponential distribution with parameter αi = μ. It then follows that U, the time until the next service completion from any of these servers, has an exponential distribution with parameter α = nμ. In effect, the queueing system currently is performing just like a single-server system where service times have an exponential distribution with parameter nμ. We shall make frequent use of this implication for analyzing multiple-server models later in the chapter.
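As a quick empirical check of Property 3 (an illustrative sketch, not part of the text), a short simulation can confirm that the minimum of independent exponential random variables behaves like a single exponential variable whose parameter is the sum of the individual parameters. The rates below are arbitrary choices for the demonstration.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

alphas = [1.0, 2.0, 3.0]   # arbitrary parameters; their sum is 6.0
n_trials = 200_000

# U = min{T1, T2, T3}, where Ti is exponential with parameter alpha_i
mins = [min(random.expovariate(a) for a in alphas) for _ in range(n_trials)]

# If U is exponential with parameter sum(alphas), then E(U) = 1/sum(alphas),
# so the reciprocal of the sample mean should estimate 6.0.
estimated_rate = 1.0 / statistics.mean(mins)
```

With 200,000 trials the estimate typically lands within a few hundredths of the predicted rate of 6.0.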
When using this property, it sometimes is useful also to determine the probabilities for which of the exponential random variables will turn out to be the one that has the minimum value. For example, you might want to find the probability that a particular server j will finish serving a customer first among n busy exponential servers. It is fairly straightforward (see Prob. 17.4-9) to show that this probability is proportional to the parameter αj. In particular, the probability that Tj will turn out to be the smallest of the n random variables is

P{Tj = U} = αj / (α1 + α2 + ⋅⋅⋅ + αn),    for j = 1, 2, . . . , n.

Property 4: Relationship to the Poisson distribution.

Suppose that the time between consecutive occurrences of some particular kind of event (e.g., arrivals or service completions by a continuously busy server) has an exponential distribution with parameter α. Property 4 then has to do with the resulting implication about the probability distribution of the number of times this kind of event occurs over a specified time. In particular, let X(t) be the number of occurrences by time t (t ≥ 0), where time 0 designates the instant at which the count begins. The probability distribution of a random variable X(t) defined in this way is the Poisson distribution with parameter αt. The form of this distribution is

P{X(t) = n} = (αt)^n e^(−αt) / n!,    for n = 0, 1, 2, . . . .

For example, with n = 0,

P{X(t) = 0} = e^(−αt),

which is just the probability from the exponential distribution that the first event occurs after time t. The mean of this Poisson distribution is

E{X(t)} = αt,

so that the expected number of events per unit time is α. Thus, α is said to be the mean rate at which the events occur. When the events are counted on a continuing basis, the counting process {X(t); t ≥ 0} is said to be a Poisson process with parameter α (the mean rate).
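Property 4 can also be illustrated numerically (a sketch under arbitrary assumed values of α and t, not part of the text): counting how many exponential interevent times fit into a fixed interval should reproduce the Poisson mean αt.

```python
import random

random.seed(1)  # reproducible run

def count_events(alpha, t):
    """Count how many events of an exponential(alpha) renewal clock
    occur in the interval [0, t]."""
    elapsed, n = 0.0, 0
    while True:
        elapsed += random.expovariate(alpha)  # next interevent time
        if elapsed > t:
            return n
        n += 1

alpha, t = 4.0, 2.0           # arbitrary mean rate and time horizon
trials = 100_000
counts = [count_events(alpha, t) for _ in range(trials)]

# X(t) should follow a Poisson distribution with parameter alpha*t = 8,
# so the sample mean of the counts should be close to 8.
mean_count = sum(counts) / trials
```

The sample mean converges to αt as the number of trials grows, in line with E{X(t)} = αt.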
This property provides useful information about service completions when service times have an exponential distribution with parameter μ. We obtain this information by defining X(t) as the number of service completions achieved by a continuously busy server in elapsed time t, where α = μ. For multiple-server queueing models, X(t) can also be defined as the number of service completions achieved by n continuously busy servers in elapsed time t, where α = nμ.

The property is particularly useful for describing the probabilistic behavior of arrivals when interarrival times have an exponential distribution with parameter λ. In this case, X(t) is the number of arrivals in elapsed time t, where α = λ is the mean arrival rate. Therefore, arrivals occur according to a Poisson input process with parameter λ. Such queueing models also are described as assuming a Poisson input.

Arrivals sometimes are said to occur randomly, meaning that they occur in accordance with a Poisson input process. One intuitive interpretation of this phenomenon is that every time period of fixed length has the same chance of having an arrival regardless of when the preceding arrival occurred, as suggested by the following property.

Property 5: For all positive values of t, P{T ≤ t + Δt | T > t} ≈ αΔt, for small Δt.

Continuing to interpret T as the time from the last event of a certain type (arrival or service completion) until the next such event, we suppose that a time t already has elapsed without the event's occurring. We know from Property 2 that the probability that the event will occur within the next time interval of fixed length Δt is a constant (identified in the next paragraph), regardless of how large or small t is. Property 5 goes further to say that when the value of Δt is small, this constant probability can be approximated very closely by αΔt.
Furthermore, when considering different small values of Δt, this probability is essentially proportional to Δt, with proportionality factor α. In fact, α is the mean rate at which the events occur (see Property 4), so that the expected number of events in the interval of length Δt is exactly αΔt. The only reason that the probability of an event's occurring differs slightly from this value is the possibility that more than one event will occur, which has negligible probability when Δt is small.

To see why Property 5 holds mathematically, note that the constant value of our probability (for a fixed value of Δt > 0) is just

P{T ≤ t + Δt | T > t} = P{T ≤ Δt} = 1 − e^(−αΔt),

for any t > 0. Therefore, because the series expansion of e^x for any exponent x is

e^x = 1 + x + Σ (n = 2 to ∞) x^n / n!,

it follows that

P{T ≤ t + Δt | T > t} = 1 − 1 + αΔt − Σ (n = 2 to ∞) (−αΔt)^n / n!
                      ≈ αΔt,    for small Δt,³

because the summation terms become relatively negligible for sufficiently small values of αΔt.

Because T can represent either interarrival or service times in queueing models, this property provides a convenient approximation of the probability that the event of interest occurs in the next small interval (Δt) of time. An analysis based on this approximation also can be made exact by taking appropriate limits as Δt → 0.

Property 6: Unaffected by aggregation or disaggregation.

This property is relevant primarily for verifying that the input process is Poisson. Therefore, we shall describe it in these terms, although it also applies directly to the exponential distribution (exponential interarrival times) because of Property 4.

We first consider the aggregation (combining) of several Poisson input processes into one overall input process. In particular, suppose that there are several (n) different types of customers, where the customers of each type (type i) arrive according to a Poisson input process with parameter λi (i = 1, 2, . . . , n).
Assuming that these are independent Poisson processes, the property says that the aggregate input process (arrival of all customers without regard to type) also must be Poisson, with parameter (mean arrival rate) λ = λ1 + λ2 + ⋅⋅⋅ + λn. In other words, having a Poisson process is unaffected by aggregation.

This part of the property follows directly from Properties 3 and 4. The latter property implies that the interarrival times for customers of type i have an exponential distribution with parameter λi. For this identical situation, we already discussed for Property 3 that it implies that the interarrival times for all customers also must have an exponential distribution, with parameter λ = λ1 + λ2 + ⋅⋅⋅ + λn. Using Property 4 again then implies that the aggregate input process is Poisson.

The second part of Property 6 ("unaffected by disaggregation") refers to the reverse case, where the aggregate input process (the one obtained by combining the input processes for several customer types) is known to be Poisson with parameter λ, but the question now concerns the nature of the disaggregated input processes (the individual input processes for the individual customer types). Assuming that each arriving customer has a fixed probability pi of being of type i (i = 1, 2, . . . , n), with

λi = pi λ    and    Σ (i = 1 to n) pi = 1,

the property says that the input process for customers of type i also must be Poisson with parameter λi. In other words, having a Poisson process is unaffected by disaggregation.

As one example of the usefulness of this second part of the property, consider the following situation. Indistinguishable customers arrive according to a Poisson process with parameter λ. Each arriving customer has a fixed probability p of balking (leaving without entering the queueing system), so the probability of entering the system is 1 − p.

[Footnote 3: More precisely, lim (Δt → 0) of P{T ≤ t + Δt | T > t} / Δt = α.]
Thus, there are two types of customers: those who balk and those who enter the system. The property says that each type arrives according to a Poisson process, with parameters pλ and (1 − p)λ, respectively. Therefore, by using the latter Poisson process, queueing models that assume a Poisson input process can still be used to analyze the performance of the queueing system for those customers who enter the system.

Another example in the Solved Examples section of the book's website illustrates the application of several of the properties of the exponential distribution presented in this section.

■ 17.5 THE BIRTH-AND-DEATH PROCESS

Most elementary queueing models assume that the inputs (arriving customers) and outputs (leaving customers) of the queueing system occur according to the birth-and-death process. This important process in probability theory has applications in various areas. However, in the context of queueing theory, the term birth refers to the arrival of a new customer into the queueing system, and death refers to the departure of a served customer. The state of the system at time t (t ≥ 0), denoted by N(t), is the number of customers in the queueing system at time t. The birth-and-death process describes probabilistically how N(t) changes as t increases. Broadly speaking, it says that individual births and deaths occur randomly, where their mean occurrence rates depend only upon the current state of the system. More precisely, the assumptions of the birth-and-death process are the following:

Assumption 1. Given N(t) = n, the current probability distribution of the remaining time until the next birth (arrival) is exponential with parameter λn (n = 0, 1, 2, . . .).

Assumption 2. Given N(t) = n, the current probability distribution of the remaining time until the next death (service completion) is exponential with parameter μn (n = 1, 2, . . .).

Assumption 3.
The random variable of assumption 1 (the remaining time until the next birth) and the random variable of assumption 2 (the remaining time until the next death) are mutually independent. The next transition in the state of the process is either n → n + 1 (a single birth) or n → n − 1 (a single death), depending on whether the former or latter random variable is smaller.

[Figure 17.4: Rate diagram for the birth-and-death process. The states 0, 1, 2, . . . , n − 1, n, n + 1, . . . are arranged in a line; an arrow labeled λn leads from each state n to state n + 1, and an arrow labeled μn leads from each state n to state n − 1.]

For a queueing system, λn and μn respectively represent the mean arrival rate and the mean rate of service completions when there are n customers in the system. For some queueing systems, the values of the λn will be the same for all values of n, and the μn also will be the same for all n except for such small n (e.g., n = 0) that a server is idle. However, the λn and the μn also can vary considerably with n for some queueing systems.

For example, one of the ways in which λn can be different for different values of n is if potential arriving customers become increasingly likely to balk (refuse to enter the system) as n increases. Similarly, μn can be different for different n because customers in the queue become increasingly likely to renege (leave without being served) as the queue size increases. Another example in the Solved Examples section of the book's website illustrates a queueing system where both balking and reneging occur. This example then demonstrates how the general results for the birth-and-death process lead directly to various measures of performance for this queueing system.

Analysis of the Birth-and-Death Process

The assumptions of the birth-and-death process indicate that probabilities involving how the process will evolve in the future depend only on the current state of the process, and so are independent of events in the past. This "lack-of-memory property" is the key characteristic of any Markov chain.
Therefore, the birth-and-death process is a special type of continuous time Markov chain. (Section 29.8 provides a detailed description of continuous time Markov chains and their properties, including an introduction to the general procedure for finding steady-state probabilities that will be applied in the remainder of this section.) Recall that the exponential distribution has the lack-of-memory property (Property 2 in Sec. 17.4). Therefore, queueing models that are based exclusively on exponential distributions (which include all the models in the next section that are based on the birth-and-death process) can be represented by a continuous time Markov chain. Such queueing models are far more tractable analytically than any others. Thus, the rich theory of continuous time Markov chains plays a fundamental role in the background for the analysis of many queueing models, including those based on the birth-and-death process. However, we will not need to delve explicitly into this theory in this introductory chapter on queueing theory. Therefore, you will not need any prior background on continuous time Markov chains for this chapter and we will not mention them again.

Because Property 4 for the exponential distribution (see Sec. 17.4) implies that the λn and μn are mean rates, we can summarize these assumptions by the rate diagram shown in Fig. 17.4. The arrows in this diagram show the only possible transitions in the state of the system (as specified by assumption 3), and the entry for each arrow gives the mean rate for that transition (as specified by assumptions 1 and 2) when the system is in the state at the base of the arrow.

Except for a few special cases, analysis of the birth-and-death process is very difficult when the system is in a transient condition. Some results about the probability distribution of N(t) have been obtained, but they are too complicated to be of much practical use.
On the other hand, it is relatively straightforward to derive this distribution after the system has reached a steady-state condition (assuming that this condition can be reached). This derivation can be done directly from the rate diagram, as outlined next.

Consider any particular state of the system n (n = 0, 1, 2, . . .). Starting at time 0, suppose that a count is made of the number of times that the process enters this state and the number of times it leaves this state, as denoted below:

En(t) = number of times that process enters state n by time t.
Ln(t) = number of times that process leaves state n by time t.

Because the two types of events (entering and leaving) must alternate, these two numbers must always either be equal or differ by just 1; that is,

|En(t) − Ln(t)| ≤ 1.

Dividing through both sides by t and then letting t → ∞ gives

|En(t)/t − Ln(t)/t| ≤ 1/t,

so

lim (t → ∞) [En(t)/t − Ln(t)/t] = 0.

Dividing En(t) and Ln(t) by t gives the actual rate (number of events per unit time) at which these two kinds of events have occurred, and letting t → ∞ then gives the mean rate (expected number of events per unit time):

lim (t → ∞) En(t)/t = mean rate at which process enters state n.
lim (t → ∞) Ln(t)/t = mean rate at which process leaves state n.

These results yield the following key principle:

Rate In = Rate Out Principle. For any state of the system n (n = 0, 1, 2, . . .), mean entering rate = mean leaving rate.

The equation expressing this principle is called the balance equation for state n. After constructing the balance equations for all the states in terms of the unknown Pn probabilities, we can solve this system of equations (plus an equation stating that the probabilities must sum to 1) to find these probabilities.

To illustrate a balance equation, consider state 0. The process enters this state only from state 1.
Thus, the steady-state probability of being in state 1 (P1) represents the proportion of time that it would be possible for the process to enter state 0. Given that the process is in state 1, the mean rate of entering state 0 is μ1. (In other words, for each cumulative unit of time that the process spends in state 1, the expected number of times that it would leave state 1 to enter state 0 is μ1.) From any other state, this mean rate is 0. Therefore, the overall mean rate at which the process leaves its current state to enter state 0 (the mean entering rate) is

μ1P1 + 0(1 − P1) = μ1P1.

By the same reasoning, the mean leaving rate from state 0 must be λ0P0, so the balance equation for state 0 is

μ1P1 = λ0P0.

For every other state there are two possible transitions both into and out of the state. Therefore, each side of the balance equations for these states represents the sum of the mean rates for the two transitions involved. Otherwise, the reasoning is just the same as for state 0. These balance equations are summarized in Table 17.1.

TABLE 17.1 Balance equations for the birth-and-death process

State      Rate In = Rate Out
0          μ1P1 = λ0P0
1          λ0P0 + μ2P2 = (λ1 + μ1)P1
2          λ1P1 + μ3P3 = (λ2 + μ2)P2
⋮
n − 1      λn−2Pn−2 + μnPn = (λn−1 + μn−1)Pn−1
n          λn−1Pn−1 + μn+1Pn+1 = (λn + μn)Pn
⋮

Notice that the first balance equation contains two variables for which to solve (P0 and P1), the first two equations contain three variables (P0, P1, and P2), and so on, so that there always is one "extra" variable. Therefore, the procedure in solving these equations is to solve in terms of one of the variables, the most convenient one being P0. Thus, the first equation is used to solve for P1 in terms of P0; this result and the second equation are then used to solve for P2 in terms of P0; and so forth. At the end, the requirement that the sum of all the probabilities equal 1 can be used to evaluate P0.
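The iterative procedure just described is easy to carry out numerically. The following sketch (not from the text; the rates in the example are hypothetical) expresses each Pn as a multiple of P0 and then applies the normalization at the end, exactly as the text outlines.

```python
def solve_balance(lam, mu):
    """Solve the birth-and-death balance equations for a finite chain with
    states 0, 1, ..., len(mu).  lam[n] is the birth rate lambda_n and
    mu[n] is the death rate mu_{n+1}.  Each P_n is first expressed as a
    multiple of P_0; the requirement that the probabilities sum to 1 is
    then used to evaluate P_0."""
    ratios = [1.0]                      # P_n / P_0, starting with n = 0
    for n in range(len(mu)):
        # The balance equations reduce to P_{n+1} = (lambda_n / mu_{n+1}) P_n
        ratios.append(ratios[-1] * lam[n] / mu[n])
    p0 = 1.0 / sum(ratios)
    return [r * p0 for r in ratios]

# Hypothetical 3-state chain with lambda_0 = lambda_1 = 1 and mu_1 = mu_2 = 2
probs = solve_balance([1.0, 1.0], [2.0, 2.0])
```

For this example the ratios are 1, 1/2, 1/4, so P0 = 4/7, P1 = 2/7, and P2 = 1/7; substituting back confirms the balance equation μ1P1 = λ0P0 for state 0.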
Results for the Birth-and-Death Process

Applying this procedure yields the following results:

State 0: $P_1 = \dfrac{\lambda_0}{\mu_1} P_0$

State 1: $P_2 = \dfrac{\lambda_1}{\mu_2} P_1 + \dfrac{1}{\mu_2}(\mu_1 P_1 - \lambda_0 P_0) = \dfrac{\lambda_1}{\mu_2} P_1 = \dfrac{\lambda_1 \lambda_0}{\mu_2 \mu_1} P_0$

State 2: $P_3 = \dfrac{\lambda_2}{\mu_3} P_2 + \dfrac{1}{\mu_3}(\mu_2 P_2 - \lambda_1 P_1) = \dfrac{\lambda_2}{\mu_3} P_2 = \dfrac{\lambda_2 \lambda_1 \lambda_0}{\mu_3 \mu_2 \mu_1} P_0$

⋮

State $n-1$: $P_n = \dfrac{\lambda_{n-1}}{\mu_n} P_{n-1} + \dfrac{1}{\mu_n}(\mu_{n-1} P_{n-1} - \lambda_{n-2} P_{n-2}) = \dfrac{\lambda_{n-1}}{\mu_n} P_{n-1} = \dfrac{\lambda_{n-1} \lambda_{n-2} \cdots \lambda_0}{\mu_n \mu_{n-1} \cdots \mu_1} P_0$

State $n$: $P_{n+1} = \dfrac{\lambda_n}{\mu_{n+1}} P_n + \dfrac{1}{\mu_{n+1}}(\mu_n P_n - \lambda_{n-1} P_{n-1}) = \dfrac{\lambda_n}{\mu_{n+1}} P_n = \dfrac{\lambda_n \lambda_{n-1} \cdots \lambda_0}{\mu_{n+1} \mu_n \cdots \mu_1} P_0$

⋮

To simplify notation, let

$$C_n = \frac{\lambda_{n-1}\lambda_{n-2}\cdots\lambda_0}{\mu_n \mu_{n-1}\cdots\mu_1}, \quad\text{for } n = 1, 2, \ldots,$$

and then define $C_n = 1$ for $n = 0$. Thus, the steady-state probabilities are

$$P_n = C_n P_0, \quad\text{for } n = 0, 1, 2, \ldots.$$

The requirement that

$$\sum_{n=0}^{\infty} P_n = 1$$

implies that

$$\left( \sum_{n=0}^{\infty} C_n \right) P_0 = 1,$$

so that

$$P_0 = \left( \sum_{n=0}^{\infty} C_n \right)^{-1}.$$

When a queueing model is based on the birth-and-death process, so the state of the system $n$ represents the number of customers in the queueing system, the key measures of performance for the queueing system ($L$, $L_q$, $W$, and $W_q$) can be obtained immediately after calculating the $P_n$ from the above formulas. The definitions of $L$ and $L_q$ given in Sec. 17.2 specify that

$$L = \sum_{n=0}^{\infty} n P_n, \qquad L_q = \sum_{n=s}^{\infty} (n - s) P_n.$$

Furthermore, the relationships given at the end of Sec. 17.2 yield

$$W = \frac{L}{\bar{\lambda}}, \qquad W_q = \frac{L_q}{\bar{\lambda}},$$

where $\bar{\lambda}$ is the average arrival rate over the long run. Because $\lambda_n$ is the mean arrival rate while the system is in state $n$ ($n = 0, 1, 2, \ldots$) and $P_n$ is the proportion of time that the system is in this state,

$$\bar{\lambda} = \sum_{n=0}^{\infty} \lambda_n P_n.$$

Several of the expressions just given involve summations with an infinite number of terms. Fortunately, these summations have analytic solutions for a number of interesting special cases,⁴ as seen in the next section. Otherwise, they can be approximated by summing a finite number of terms on a computer.

These steady-state results have been derived under the assumption that the $\lambda_n$ and $\mu_n$ parameters have values such that the process actually can reach a steady-state condition.
This assumption always holds if $\lambda_n = 0$ for some value of $n$ greater than the initial state, so that only a finite number of states (those less than this $n$) are possible. It also always holds when $\lambda$ and $\mu$ are defined (see "Terminology and Notation" in Sec. 17.2) and $\rho = \lambda/(s\mu) < 1$. It does not hold if $\sum_{n=1}^{\infty} C_n = \infty$.

Section 17.6 describes several queueing models that are special cases of the birth-and-death process. Therefore, the general steady-state results just given in shaded boxes will be used over and over again to obtain the specific steady-state results for these models.

⁴These solutions are based on the following known results for the sum of any geometric series:

$$\sum_{n=0}^{N} x^n = \frac{1 - x^{N+1}}{1 - x}, \quad\text{for any } x \ne 1,$$
$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x}, \quad\text{if } |x| < 1.$$

■ 17.6 QUEUEING MODELS BASED ON THE BIRTH-AND-DEATH PROCESS

Because each of the mean rates $\lambda_0, \lambda_1, \ldots$ and $\mu_1, \mu_2, \ldots$ for the birth-and-death process can be assigned any nonnegative value, we have great flexibility in modeling a queueing system. Probably the most widely used models in queueing theory are based directly upon this process. Because of assumptions 1 and 2 (and Property 4 for the exponential distribution), these models are said to have a Poisson input and exponential service times. The models differ only in their assumptions about how the $\lambda_n$ and $\mu_n$ change with $n$. We present three of these models in this section for three important types of queueing systems.

The M/M/s Model

As described in Sec. 17.2, the M/M/s model assumes that all interarrival times are independently and identically distributed according to an exponential distribution (i.e., the input process is Poisson), that all service times are independent and identically distributed according to another exponential distribution, and that the number of servers is $s$ (any positive integer).
Consequently, this model is just the special case of the birth-and-death process where the queueing system's mean arrival rate and mean service rate per busy server are constant ($\lambda$ and $\mu$, respectively) regardless of the state of the system. When the system has just a single server ($s = 1$), the implication is that the parameters for the birth-and-death process are $\lambda_n = \lambda$ ($n = 0, 1, 2, \ldots$) and $\mu_n = \mu$ ($n = 1, 2, \ldots$). The resulting rate diagram is shown in Fig. 17.5a. However, when the system has multiple servers ($s > 1$), the $\mu_n$ cannot be expressed this simply, as explained below.

System Service Rate: The system service rate $\mu_n$ represents the mean rate of service completions for the overall queueing system when there are $n$ customers in the system. With multiple servers and $n > 1$, $\mu_n$ is not the same as $\mu$, the mean service rate per busy server. Instead,

$$\mu_n = n\mu \quad\text{when } n \le s, \qquad \mu_n = s\mu \quad\text{when } n \ge s.$$

Using these formulas for $\mu_n$, the rate diagram for the birth-and-death process shown in Fig. 17.4 reduces to the rate diagrams shown in Fig. 17.5 for the M/M/s model.

[Figure 17.5: Rate diagrams for the M/M/s model. Panel (a) shows the single-server case ($s = 1$), with $\lambda_n = \lambda$ for $n = 0, 1, 2, \ldots$ and $\mu_n = \mu$ for $n = 1, 2, \ldots$; panel (b) shows the multiple-server case ($s > 1$), with $\lambda_n = \lambda$ for $n = 0, 1, 2, \ldots$, $\mu_n = n\mu$ for $n = 1, 2, \ldots, s$, and $\mu_n = s\mu$ for $n = s, s + 1, \ldots$.]

When $s\mu$ exceeds the mean arrival rate $\lambda$, that is, when

$$\rho = \frac{\lambda}{s\mu} < 1,$$

An Application Vignette

KeyCorp is a major bank holding company in the United States. The company emphasizes consumer banking and, as of the beginning of 2013, it operated well over a thousand branch banks in 14 states. To help grow its business, KeyCorp management initiated an extensive OR study some years ago to determine how to improve customer service (defined primarily as reducing customer waiting time before beginning service) while also providing cost-effective staffing.
A service-quality goal was set that at least 90 percent of the customers should have waiting times of less than 5 minutes. The key tool in analyzing this problem was the M/M/s queueing model, which proved to fit this application very well.

To apply this model, data were gathered that revealed that the average service time required to process a customer was a distressingly high 246 seconds. With this average service time and typical mean arrival rates, the model indicated that a 30 percent increase in the number of tellers would be needed to meet the service-quality goal. This prohibitively expensive option led management to conclude that an extensive campaign needed to be undertaken to drastically reduce the average service time by both reengineering the customer session and providing better management of staff. Over a period of three years, this campaign led to a reduction in the average service time all the way down to 115 seconds. Frequent reapplication of the M/M/s model then revealed how the service-quality goal could be substantially surpassed while actually reducing personnel levels through improved scheduling of the personnel in the various branch banks. The net result has been savings of nearly $20 million per year with vastly improved service that enables 96 percent of the customers to wait less than 5 minutes. This improvement extended throughout the company, since the percentage of branch banks that meet the service-quality goal has increased from 42 percent to 94 percent. Surveys also confirm a great increase in customer satisfaction.

Source: S. K. Kotha, M. P. Barnum, and D. A. Bowen: "KeyCorp Service Excellence Management System," Interfaces, 26(1): 54–74, Jan.–Feb. 1996. (A link to this article is provided on our website, www.mhhe.com/hillier.)

a queueing system fitting this model will eventually reach a steady-state condition. (Recall from Sec.
17.2 that $\rho$ is referred to as the utilization factor because it represents the expected fraction of time that the individual servers are busy.) In this situation, the steady-state results derived in Sec. 17.5 for the general birth-and-death process are directly applicable. However, these results simplify considerably for this model and yield closed-form expressions for $P_n$, $L$, $L_q$, and so forth, as shown next.

Results for the Single-Server Case (M/M/1). For $s = 1$, the $C_n$ factors for the birth-and-death process reduce to

$$C_n = \left( \frac{\lambda}{\mu} \right)^n = \rho^n, \quad\text{for } n = 0, 1, 2, \ldots.$$

Therefore, using the results given in Sec. 17.5,

$$P_n = \rho^n P_0, \quad\text{for } n = 0, 1, 2, \ldots,$$

where

$$P_0 = \left( \sum_{n=0}^{\infty} \rho^n \right)^{-1} = \left( \frac{1}{1 - \rho} \right)^{-1} = 1 - \rho.$$

Thus,

$$P_n = (1 - \rho)\rho^n, \quad\text{for } n = 0, 1, 2, \ldots.$$

Consequently,

$$L = \sum_{n=0}^{\infty} n(1 - \rho)\rho^n = (1 - \rho)\rho \sum_{n=0}^{\infty} \frac{d}{d\rho}(\rho^n) = (1 - \rho)\rho \, \frac{d}{d\rho}\left( \sum_{n=0}^{\infty} \rho^n \right) = (1 - \rho)\rho \, \frac{d}{d\rho}\left( \frac{1}{1 - \rho} \right) = \frac{\rho}{1 - \rho} = \frac{\lambda}{\mu - \lambda}.$$

Similarly,

$$L_q = \sum_{n=1}^{\infty} (n - 1)P_n = L - 1(1 - P_0) = \frac{\lambda^2}{\mu(\mu - \lambda)}.$$

When $\lambda \ge \mu$, so that the mean arrival rate exceeds the mean service rate, the preceding solution "blows up" (because the summation for computing $P_0$ diverges). For this case, the queue would "explode" and grow without bound. If the queueing system begins operation with no customers present, the server might succeed in keeping up with arriving customers over a short period of time, but this is impossible in the long run. (Even when $\lambda = \mu$, the expected number of customers in the queueing system slowly grows without bound over time because, even though a temporary return to no customers present always is possible, the probabilities of huge numbers of customers present become increasingly significant over time.)

Assuming again that $\lambda < \mu$, we now can derive the probability distribution of the waiting time in the system (so including service time) $\mathcal{W}$ for a random arrival when the queue discipline is first-come-first-served.
If this arrival finds $n$ customers already in the system, then the arrival will have to wait through $n + 1$ exponential service times, including his or her own. (For the customer currently being served, recall the lack-of-memory property for the exponential distribution discussed in Sec. 17.4.) Therefore, let $T_1, T_2, \ldots$ be independent service-time random variables having an exponential distribution with parameter $\mu$, and let

$$S_{n+1} = T_1 + T_2 + \cdots + T_{n+1}, \quad\text{for } n = 0, 1, 2, \ldots,$$

so that $S_{n+1}$ represents the conditional waiting time given $n$ customers already in the system. As discussed in Sec. 17.7, $S_{n+1}$ is known to have an Erlang distribution.⁵ Because the probability that the random arrival will find $n$ customers in the system is $P_n$, it follows that

$$P\{\mathcal{W} > t\} = \sum_{n=0}^{\infty} P_n P\{S_{n+1} > t\},$$

which reduces after considerable manipulation (see Prob. 17.6-17) to

$$P\{\mathcal{W} > t\} = e^{-\mu(1-\rho)t}, \quad\text{for } t \ge 0.$$

The surprising conclusion is that $\mathcal{W}$ has an exponential distribution with parameter $\mu(1 - \rho)$. Therefore,

$$W = E(\mathcal{W}) = \frac{1}{\mu(1 - \rho)} = \frac{1}{\mu - \lambda}.$$

⁵Outside queueing theory, this distribution is known as the gamma distribution.

These results include service time in the waiting time. In some contexts (e.g., the County Hospital emergency room problem described in Sec. 17.1), the more relevant waiting time is just until service begins. Thus, consider the waiting time in the queue (so excluding service time) $\mathcal{W}_q$ for a random arrival when the queue discipline is first-come-first-served. If this arrival finds no customers already in the system, then the arrival is served immediately, so that

$$P\{\mathcal{W}_q = 0\} = P_0 = 1 - \rho.$$

If this arrival finds $n > 0$ customers already there instead, then the arrival has to wait through $n$ exponential service times until his or her own service begins, so that

$$P\{\mathcal{W}_q > t\} = \sum_{n=1}^{\infty} P_n P\{S_n > t\} = \sum_{n=1}^{\infty} (1 - \rho)\rho^n P\{S_n > t\} = \rho \sum_{n=0}^{\infty} P_n P\{S_{n+1} > t\} = \rho P\{\mathcal{W} > t\} = \rho e^{-\mu(1-\rho)t}, \quad\text{for } t \ge 0.$$
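The single-server results above can be collected in a short script as a rough numerical check; the rates $\lambda = 2$ and $\mu = 3$ are illustrative placeholders, not values from the text.

```python
import math

# Sketch of the M/M/1 formulas derived above (requires lam < mu).
lam, mu = 2.0, 3.0
rho = lam / mu                      # utilization factor

L  = rho / (1 - rho)                # = lam / (mu - lam)
Lq = lam**2 / (mu * (mu - lam))
W  = 1 / (mu - lam)                 # mean of the exponential system waiting time
Wq = lam / (mu * (mu - lam))

def p_wait_system_exceeds(t):
    """P{W > t} = exp(-mu (1 - rho) t)."""
    return math.exp(-mu * (1 - rho) * t)

def p_wait_queue_exceeds(t):
    """P{Wq > t} = rho * exp(-mu (1 - rho) t)."""
    return rho * math.exp(-mu * (1 - rho) * t)

print(L, W)   # Little's formula should hold: L = lam * W
```

Note that the identities $L = \lambda W$ and $W_q = W - 1/\mu$ provide built-in consistency checks on any such implementation.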
Note that $\mathcal{W}_q$ does not quite have an exponential distribution, because $P\{\mathcal{W}_q = 0\} > 0$. However, the conditional distribution of $\mathcal{W}_q$, given that $\mathcal{W}_q > 0$, does have an exponential distribution with parameter $\mu(1 - \rho)$, just as $\mathcal{W}$ does, because

$$P\{\mathcal{W}_q > t \mid \mathcal{W}_q > 0\} = \frac{P\{\mathcal{W}_q > t\}}{P\{\mathcal{W}_q > 0\}} = e^{-\mu(1-\rho)t}, \quad\text{for } t \ge 0.$$

By deriving the mean of the (unconditional) distribution of $\mathcal{W}_q$ (or applying either $L_q = \lambda W_q$ or $W_q = W - 1/\mu$),

$$W_q = E(\mathcal{W}_q) = \frac{\lambda}{\mu(\mu - \lambda)}.$$

If you would like to see another example that applies the M/M/1 model to determine which type of materials handling equipment a company should purchase, one is provided in the Solved Examples section of the book's website.

Results for the Multiple-Server Case ($s > 1$). When $s > 1$, the $C_n$ factors become

$$C_n = \frac{(\lambda/\mu)^n}{n!} \quad\text{for } n = 1, 2, \ldots, s,$$
$$C_n = \frac{(\lambda/\mu)^s}{s!} \left( \frac{\lambda}{s\mu} \right)^{n-s} = \frac{(\lambda/\mu)^n}{s! \, s^{n-s}} \quad\text{for } n = s, s + 1, \ldots.$$

Consequently, if $\lambda < s\mu$ [so that $\rho = \lambda/(s\mu) < 1$], then plugging these expressions into the results for the birth-and-death process given in Sec. 17.5 yields

$$P_0 = 1 \bigg/ \left[ \sum_{n=0}^{s-1} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^s}{s!} \sum_{n=s}^{\infty} \left( \frac{\lambda}{s\mu} \right)^{n-s} \right] = 1 \bigg/ \left[ \sum_{n=0}^{s-1} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^s}{s!} \, \frac{1}{1 - \lambda/(s\mu)} \right],$$

where the $n = 0$ term in the last summation yields the correct value of 1 because of the convention that $n! = 1$ when $n = 0$. These $C_n$ factors also give

$$P_n = \frac{(\lambda/\mu)^n}{n!} P_0 \quad\text{if } 0 \le n \le s, \qquad P_n = \frac{(\lambda/\mu)^n}{s! \, s^{n-s}} P_0 \quad\text{if } n \ge s.$$

Furthermore,

$$L_q = \sum_{n=s}^{\infty} (n - s)P_n = \sum_{j=0}^{\infty} j P_{s+j} = \sum_{j=0}^{\infty} j \, \frac{(\lambda/\mu)^s}{s!} \rho^j P_0 = P_0 \frac{(\lambda/\mu)^s}{s!} \rho \sum_{j=0}^{\infty} \frac{d}{d\rho}(\rho^j) = P_0 \frac{(\lambda/\mu)^s}{s!} \rho \, \frac{d}{d\rho}\left( \frac{1}{1 - \rho} \right) = \frac{P_0 (\lambda/\mu)^s \rho}{s!(1 - \rho)^2};$$

$$W_q = \frac{L_q}{\lambda}; \qquad W = W_q + \frac{1}{\mu}; \qquad L = \lambda\left( W_q + \frac{1}{\mu} \right) = L_q + \frac{\lambda}{\mu}.$$

Figure 17.6 shows how $L$ changes with $\rho$ for various values of $s$.

The single-server method for finding the probability distribution of waiting times also can be extended to the multiple-server case. This yields⁶ (for $t \ge 0$)

$$P\{\mathcal{W} > t\} = e^{-\mu t}\left[ 1 + \frac{P_0 (\lambda/\mu)^s}{s!(1 - \rho)} \left( \frac{1 - e^{-\mu t(s - 1 - \lambda/\mu)}}{s - 1 - \lambda/\mu} \right) \right]$$

and

$$P\{\mathcal{W}_q > t\} = (1 - P\{\mathcal{W}_q = 0\}) e^{-s\mu(1-\rho)t},$$

where

$$P\{\mathcal{W}_q = 0\} = \sum_{n=0}^{s-1} P_n.$$
⁶When $s - 1 - \lambda/\mu = 0$, $(1 - e^{-\mu t(s-1-\lambda/\mu)})/(s - 1 - \lambda/\mu)$ should be replaced by $\mu t$.

[Figure 17.6: Values of $L$ for the M/M/s model, plotting the steady-state expected number of customers in the queueing system against the utilization factor $\rho = \lambda/(s\mu)$ for values of $s$ from 1 up to 25.]

The above formulas for the various measures of performance (including the $P_n$) are relatively imposing for hand calculations. However, this chapter's Excel file in your OR Courseware includes an Excel template that performs all these calculations simultaneously for any values of $t$, $s$, $\lambda$, and $\mu$ you want, provided that $\lambda < s\mu$.

If $\lambda \ge s\mu$, so that the mean arrival rate exceeds the maximum mean rate of service completions, then the queue grows without bound, so the preceding steady-state solutions are not applicable.

The County Hospital Example with the M/M/s Model. For the County Hospital emergency room problem (see Sec. 17.1), the management engineer has concluded that the emergency cases arrive pretty much at random (a Poisson input process), so that interarrival times have an exponential distribution. She also has concluded that the time spent by a doctor treating the cases approximately follows an exponential distribution. Therefore, she has chosen the M/M/s model for a preliminary study of this queueing system.

By projecting the available data for the early evening shift into next year, she estimates that patients will arrive at an average rate of 1 every 1/2 hour. A doctor requires an average of 20 minutes to treat each patient. Thus, with one hour as the unit of time,

$$\frac{1}{\lambda} = \frac{1}{2} \text{ hour per customer} \qquad\text{and}\qquad \frac{1}{\mu} = \frac{1}{3} \text{ hour per customer},$$

so that $\lambda = 2$ customers per hour and $\mu = 3$ customers per hour. The two alternatives being considered are to continue having just one doctor during this shift ($s = 1$) or to add a second doctor ($s = 2$).
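The two alternatives can be compared with a minimal sketch implementing the M/M/s formulas given earlier, using $\lambda = 2$ and $\mu = 3$ from the text; the resulting figures should agree with Table 17.2.

```python
from math import factorial

def mms_metrics(lam, mu, s):
    """Steady-state P0, Lq, L, Wq, W for the M/M/s model (requires lam < s*mu)."""
    r = lam / mu                              # offered load lam/mu
    rho = lam / (s * mu)                      # utilization factor
    P0 = 1 / (sum(r**n / factorial(n) for n in range(s))
              + r**s / (factorial(s) * (1 - rho)))
    Lq = P0 * r**s * rho / (factorial(s) * (1 - rho)**2)
    Wq = Lq / lam
    W  = Wq + 1 / mu
    L  = lam * W
    return P0, Lq, L, Wq, W

# County Hospital data from the text: lam = 2, mu = 3 (customers per hour)
for s in (1, 2):
    P0, Lq, L, Wq, W = mms_metrics(2, 3, s)
    print(s, round(L, 3), round(Wq, 4))
# s = 1 gives L = 2 and Wq = 2/3 hour; s = 2 gives L = 3/4 and Wq = 1/24 hour
```

Note how sharply the congestion drops with the second server, even though each server's utilization only falls from 2/3 to 1/3.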
In both cases,

$$\rho = \frac{\lambda}{s\mu} < 1,$$

so that the system should approach a steady-state condition. (Actually, because $\lambda$ is somewhat different during other shifts, the system will never truly reach a steady-state condition, but the management engineer feels that steady-state results will provide a good approximation.) Therefore, the preceding equations are used to obtain the results shown in Table 17.2.

■ TABLE 17.2 Steady-state results from the M/M/s model for the County Hospital problem

                              s = 1                      s = 2
$\rho$                        2/3                        1/3
$P_0$                         1/3                        1/2
$P_1$                         2/9                        1/3
$P_n$ for $n \ge 2$           $(1/3)(2/3)^n$             $(1/3)^n$
$L_q$                         4/3                        1/12
$L$                           2                          3/4
$W_q$                         2/3 hour                   1/24 hour
$W$                           1 hour                     3/8 hour
$P\{\mathcal{W}_q > 0\}$      0.667                      0.167
$P\{\mathcal{W}_q > 1/2\}$    0.404                      0.022
$P\{\mathcal{W}_q > 1\}$      0.245                      0.003
$P\{\mathcal{W}_q > t\}$      $\frac{2}{3}e^{-t}$        $\frac{1}{6}e^{-4t}$
$P\{\mathcal{W} > t\}$        $e^{-t}$                   $\frac{1}{2}e^{-3t}(3 - e^{-t})$

On the basis of these results, she tentatively concluded that a single doctor would be inadequate next year for providing the relatively prompt treatment needed in a hospital emergency room. You will see later (Sec. 17.8) how she checked this conclusion by applying another queueing model that provides a better representation of the real queueing system in one crucial way.

You can see another example of an application of the M/M/1 model in the Solved Examples section of the book's website, where the issue in this case is whether three employees in a fast-food restaurant should work together as one fast server or separately as three considerably slower servers.

The Finite Queue Variation of the M/M/s Model (Called the M/M/s/K Model)

We mentioned in the discussion of queues in Sec. 17.2 that queueing systems sometimes have a finite queue; i.e., the number of customers in the system is not permitted to exceed some specified number (denoted by $K$), so the queue capacity is $K - s$. Any customer that arrives while the queue is "full" is refused entry into the system and so leaves forever.
From the viewpoint of the birth-and-death process, the mean input rate into the system becomes zero at these times. Therefore, the one modification needed in the M/M/s model to introduce a finite queue is to change the $\lambda_n$ parameters to

$$\lambda_n = \lambda \quad\text{for } n = 0, 1, 2, \ldots, K - 1, \qquad \lambda_n = 0 \quad\text{for } n \ge K.$$

Because $\lambda_n = 0$ for some values of $n$, a queueing system that fits this model always will eventually reach a steady-state condition, even when $\rho = \lambda/(s\mu) \ge 1$.

This model commonly is labeled M/M/s/K, where the presence of the fourth symbol distinguishes it from the M/M/s model. The single difference in the formulation of these two models is that $K$ is finite for the M/M/s/K model and $K = \infty$ for the M/M/s model.

The usual physical interpretation for the M/M/s/K model is that there is only limited waiting room that will accommodate a maximum of $K$ customers in the system. For example, for the County Hospital emergency room problem, this system actually would have a finite queue if the policy were to send arriving patients to another hospital whenever there already are $K$ patients in the emergency room.

Another possible interpretation is that arriving customers will leave and "take their business elsewhere" whenever they find too many customers ($K$) ahead of them in the system because they are not willing to incur a long wait. This balking phenomenon is quite common in commercial service systems. However, other models are available (e.g., see Prob. 17.5-5) that fit this interpretation even better.

The rate diagram for this model is identical to that shown in Fig. 17.5 for the M/M/s model, except that it stops with state $K$.

Results for the Single-Server Case (M/M/1/K). For this case,

$$C_n = \left( \frac{\lambda}{\mu} \right)^n = \rho^n \quad\text{for } n = 0, 1, 2, \ldots, K, \qquad C_n = 0 \quad\text{for } n > K.$$

Therefore, for $\rho \ne 1$,⁷ the results for the birth-and-death process in Sec. 17.5 reduce to

$$P_0 = \frac{1}{\sum_{n=0}^{K} (\lambda/\mu)^n} = \frac{1 - \lambda/\mu}{1 - (\lambda/\mu)^{K+1}} = \frac{1 - \rho}{1 - \rho^{K+1}},$$

so that

$$P_n = \frac{1 - \rho}{1 - \rho^{K+1}} \, \rho^n, \quad\text{for } n = 0, 1, 2, \ldots, K.$$
Hence,

$$L = \sum_{n=0}^{K} n P_n = \frac{1 - \rho}{1 - \rho^{K+1}} \rho \sum_{n=0}^{K} \frac{d}{d\rho}(\rho^n) = \frac{1 - \rho}{1 - \rho^{K+1}} \rho \, \frac{d}{d\rho}\left( \sum_{n=0}^{K} \rho^n \right) = \frac{1 - \rho}{1 - \rho^{K+1}} \rho \, \frac{d}{d\rho}\left( \frac{1 - \rho^{K+1}}{1 - \rho} \right) = \rho \, \frac{-(K + 1)\rho^K + K\rho^{K+1} + 1}{(1 - \rho^{K+1})(1 - \rho)} = \frac{\rho}{1 - \rho} - \frac{(K + 1)\rho^{K+1}}{1 - \rho^{K+1}}.$$

As usual (when $s = 1$),

$$L_q = L - (1 - P_0).$$

Notice that the preceding results do not require that $\lambda < \mu$ (i.e., that $\rho < 1$). When $\rho < 1$, it can be verified that the second term in the final expression for $L$ converges to 0 as $K \to \infty$, so that all the preceding results do indeed converge to the corresponding results given earlier for the M/M/1 model.

The waiting-time distributions can be derived by using the same reasoning as for the M/M/1 model (see Prob. 17.6-28). However, no simple expressions are obtained in this case, so computer calculations are required. Fortunately, even though $L \ne \lambda W$ and $L_q \ne \lambda W_q$ for the current model because the $\lambda_n$ are not equal for all $n$ (see the end of Sec. 17.2), the expected waiting times for customers entering the system still can be obtained directly from the expressions given at the end of Sec. 17.5:

$$W = \frac{L}{\bar{\lambda}}, \qquad W_q = \frac{L_q}{\bar{\lambda}},$$

where

$$\bar{\lambda} = \sum_{n=0}^{\infty} \lambda_n P_n = \sum_{n=0}^{K-1} \lambda P_n = \lambda(1 - P_K).$$

⁷If $\rho = 1$, then $P_n = 1/(K + 1)$ for $n = 0, 1, 2, \ldots, K$, so that $L = K/2$.

Results for the Multiple-Server Case ($s > 1$). Because this model does not allow more than $K$ customers in the system, $K$ is the maximum number of servers that could ever be used. Therefore, assume that $s \le K$. In this case, $C_n$ becomes

$$C_n = \frac{(\lambda/\mu)^n}{n!} \quad\text{for } n = 0, 1, 2, \ldots, s,$$
$$C_n = \frac{(\lambda/\mu)^s}{s!} \left( \frac{\lambda}{s\mu} \right)^{n-s} = \frac{(\lambda/\mu)^n}{s! \, s^{n-s}} \quad\text{for } n = s, s + 1, \ldots, K,$$
$$C_n = 0 \quad\text{for } n > K.$$

Hence,

$$P_n = \frac{(\lambda/\mu)^n}{n!} P_0 \quad\text{for } n = 1, 2, \ldots, s, \qquad P_n = \frac{(\lambda/\mu)^n}{s! \, s^{n-s}} P_0 \quad\text{for } n = s, s + 1, \ldots, K, \qquad P_n = 0 \quad\text{for } n > K,$$

where

$$P_0 = 1 \bigg/ \left[ \sum_{n=0}^{s} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^s}{s!} \sum_{n=s+1}^{K} \left( \frac{\lambda}{s\mu} \right)^{n-s} \right].$$

(These formulas continue to use the convention that $n! = 1$ when $n = 0$.)

Adapting the derivation of $L_q$ for the M/M/s model to this case yields

$$L_q = \frac{P_0 (\lambda/\mu)^s \rho}{s!(1 - \rho)^2} \left[ 1 - \rho^{K-s} - (K - s)\rho^{K-s}(1 - \rho) \right],$$
where $\rho = \lambda/(s\mu)$.⁸ It can then be shown that

$$L = \sum_{n=0}^{s-1} n P_n + L_q + s\left( 1 - \sum_{n=0}^{s-1} P_n \right).$$

$W$ and $W_q$ are obtained from these quantities just as shown for the single-server case.

This chapter's Excel file includes an Excel template for calculating the above measures of performance (including the $P_n$) for this model.

⁸If $\rho = 1$, it is necessary to apply L'Hôpital's rule twice to this expression for $L_q$. Otherwise, all these multiple-server results hold for all $\rho > 0$. The reason that this queueing system can reach a steady-state condition even when $\rho \ge 1$ is that $\lambda_n = 0$ for $n \ge K$, so that the number of customers in the system cannot continue to grow indefinitely.

One interesting special case of this model is where $K = s$, so the queue capacity is $K - s = 0$. In this case, customers who arrive when all servers are busy will leave immediately and be lost to the system. This would occur, for example, in a telephone network with $s$ trunk lines, so callers get a busy signal and hang up when all the trunk lines are busy. This kind of system (a "queueing system" with no queue) is referred to as Erlang's loss system because it was first studied in the early 20th century by A. K. Erlang. (As mentioned in Sec. 17.3, A. K. Erlang was a Danish telephone engineer who is considered the founder of queueing theory.)

It is common now for the telephone system at a call center to provide some extra trunk lines that place the caller on hold, but additional callers then get a busy signal. Such a system also fits this model, where $K - s$ is the number of extra trunk lines that place the caller on hold. Another example in the Solved Examples section of the book's website illustrates the application of this model to such a system.

The Finite Calling Population Variation of the M/M/s Model

Now assume that the only deviation from the M/M/s model is that (as defined in Sec. 17.2) the size of the calling population is finite.
For this case, let $N$ denote the size of the calling population. Thus, when the number of customers in the queueing system is $n$ ($n = 0, 1, 2, \ldots, N$), there are only $N - n$ potential customers remaining in the calling population.

The most important application of this model has been to the machine repair problem, where one or more maintenance people are assigned the responsibility of maintaining in operational order a certain group of $N$ machines by repairing each one that breaks down. The maintenance people are considered to be individual servers in the queueing system if they work individually on different machines, whereas the entire crew is considered to be a single server if crew members work together on each machine. The machines constitute the calling population. Each one is considered to be a customer in the queueing system when it is down waiting to be repaired, whereas it is outside the queueing system while it is operational.

Note that each member of the calling population alternates between being inside and outside the queueing system. Therefore, the analog of the M/M/s model that fits this situation assumes that each member's outside time (i.e., the elapsed time from leaving the system until returning for the next time) has an exponential distribution with parameter $\lambda$. When $n$ of the members are inside, and so $N - n$ members are outside, the current probability distribution of the remaining time until the next arrival to the queueing system is the distribution of the minimum of the remaining outside times for the latter $N - n$ members. Properties 2 and 3 for the exponential distribution imply that this distribution must be exponential with parameter $\lambda_n = (N - n)\lambda$. Hence, this model is just the special case of the birth-and-death process that has the rate diagram shown in Fig. 17.7.

Because $\lambda_n = 0$ for $n = N$, any queueing system that fits this model will eventually reach a steady-state condition.
The available steady-state results are summarized as follows:

Results for the Single-Server Case ($s = 1$). When $s = 1$, the $C_n$ factors in Sec. 17.5 reduce to

$$C_n = N(N - 1)\cdots(N - n + 1)\left( \frac{\lambda}{\mu} \right)^n = \frac{N!}{(N - n)!}\left( \frac{\lambda}{\mu} \right)^n \quad\text{for } n \le N, \qquad C_n = 0 \quad\text{for } n > N,$$

for this model.

[Figure 17.7: Rate diagrams for the finite calling population variation of the M/M/s model. Panel (a) shows the single-server case ($s = 1$), with $\lambda_n = (N - n)\lambda$ for $n = 0, 1, 2, \ldots, N$, $\lambda_n = 0$ for $n \ge N$, and $\mu_n = \mu$ for $n = 1, 2, \ldots$; panel (b) shows the multiple-server case ($s > 1$), with the same $\lambda_n$, $\mu_n = n\mu$ for $n = 1, 2, \ldots, s$, and $\mu_n = s\mu$ for $n = s, s + 1, \ldots$.]

Therefore, again using the convention that $n! = 1$ when $n = 0$,

$$P_0 = 1 \bigg/ \sum_{n=0}^{N} \frac{N!}{(N - n)!}\left( \frac{\lambda}{\mu} \right)^n;$$

$$P_n = \frac{N!}{(N - n)!}\left( \frac{\lambda}{\mu} \right)^n P_0, \quad\text{if } n = 1, 2, \ldots, N;$$

$$L_q = \sum_{n=1}^{N} (n - 1)P_n,$$

which can be reduced to

$$L_q = N - \frac{\lambda + \mu}{\lambda}(1 - P_0);$$

$$L = \sum_{n=0}^{N} n P_n = L_q + 1 - P_0 = N - \frac{\mu}{\lambda}(1 - P_0).$$

Finally,

$$W = \frac{L}{\bar{\lambda}} \qquad\text{and}\qquad W_q = \frac{L_q}{\bar{\lambda}},$$

where

$$\bar{\lambda} = \sum_{n=0}^{\infty} \lambda_n P_n = \sum_{n=0}^{N} (N - n)\lambda P_n = \lambda(N - L).$$

Results for the Multiple-Server Case ($s > 1$). For $N \ge s$,

$$C_n = \frac{N!}{(N - n)!\,n!}\left( \frac{\lambda}{\mu} \right)^n \quad\text{for } n = 0, 1, 2, \ldots, s,$$
$$C_n = \frac{N!}{(N - n)!\,s!\,s^{n-s}}\left( \frac{\lambda}{\mu} \right)^n \quad\text{for } n = s, s + 1, \ldots, N,$$
$$C_n = 0 \quad\text{for } n > N.$$

Hence, the results for the birth-and-death process in Sec. 17.5 yield

$$P_n = \frac{N!}{(N - n)!\,n!}\left( \frac{\lambda}{\mu} \right)^n P_0 \quad\text{if } 0 \le n \le s, \qquad P_n = \frac{N!}{(N - n)!\,s!\,s^{n-s}}\left( \frac{\lambda}{\mu} \right)^n P_0 \quad\text{if } s \le n \le N, \qquad P_n = 0 \quad\text{if } n > N,$$

where

$$P_0 = 1 \bigg/ \left[ \sum_{n=0}^{s-1} \frac{N!}{(N - n)!\,n!}\left( \frac{\lambda}{\mu} \right)^n + \sum_{n=s}^{N} \frac{N!}{(N - n)!\,s!\,s^{n-s}}\left( \frac{\lambda}{\mu} \right)^n \right].$$

Finally,

$$L_q = \sum_{n=s}^{N} (n - s)P_n$$

and

$$L = \sum_{n=0}^{s-1} n P_n + L_q + s\left( 1 - \sum_{n=0}^{s-1} P_n \right),$$

which then yield $W$ and $W_q$ by the same equations as in the single-server case.

This chapter's Excel files include an Excel template for performing all the above calculations.
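The finite-calling-population formulas above can be sketched in a few lines. The function below evaluates the $C_n$ factors for the machine repair problem and normalizes them; the numerical rates at the end are illustrative placeholders, not values from the text.

```python
from math import factorial

def machine_repair(N, lam, mu, s=1):
    """Steady-state measures for the finite-calling-population model:
    N machines, each failing at rate lam while running; s repair servers,
    each working at rate mu."""
    def C(n):
        # C_n factors from the formulas above (the n <= s branch also
        # covers the single-server case, since n! = 1 when n <= 1)
        if n <= s:
            return factorial(N) / (factorial(N - n) * factorial(n)) * (lam / mu)**n
        return factorial(N) / (factorial(N - n) * factorial(s) * s**(n - s)) * (lam / mu)**n

    c = [C(n) for n in range(N + 1)]
    total = sum(c)
    P = [cn / total for cn in c]                  # steady-state probabilities
    L = sum(n * p for n, p in enumerate(P))       # expected number down
    lam_bar = lam * (N - L)                       # effective arrival rate
    W = L / lam_bar                               # expected downtime per breakdown
    return P, L, W

# Illustrative example: 5 machines, failure rate 0.2/hour, one repairer at 1.0/hour
P, L, W = machine_repair(5, 0.2, 1.0)
print(round(L, 3))
```

A useful self-check for the single-server case is the identity $L = N - (\mu/\lambda)(1 - P_0)$ given above.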
Extensive tables of computational results also are available⁹ for this model for both the single-server and multiple-server cases.

For both cases, it has been shown¹⁰ that the preceding formulas for $P_n$ and $P_0$ (and so for $L_q$, $L$, $W$, and $W_q$) also hold for a generalization of this model. In particular, we can drop the assumption that the times spent outside the queueing system by the members of the calling population have an exponential distribution, even though this takes the model outside the realm of the birth-and-death process. As long as these times are identically distributed with mean $1/\lambda$ (and the assumption of exponential service times still holds), these outside times can have any probability distribution!

⁹L. G. Peck and R. N. Hazelwood, Finite Queueing Tables, Wiley, New York, 1958.
¹⁰B. D. Bunday and R. E. Scraton, "The G/M/r Machine Interference Model," European Journal of Operational Research, 4: 399–402, 1980.

■ 17.7 QUEUEING MODELS INVOLVING NONEXPONENTIAL DISTRIBUTIONS

Because all the queueing theory models in the preceding section (except for one generalization in the last paragraph) are based on the birth-and-death process, both their interarrival and service times are required to have exponential distributions. As discussed in Sec. 17.4, this type of probability distribution has many convenient properties for queueing theory, but it provides a reasonable fit for only certain kinds of queueing systems. In particular, the assumption of exponential interarrival times implies that arrivals occur randomly (a Poisson input process), which is a reasonable approximation in many situations but not when the arrivals are carefully scheduled or regulated. Furthermore, the actual service-time distribution frequently deviates greatly from the exponential form, particularly when the service requirements of the customers are quite similar.
Therefore, it is important to have available other queueing models that use alternative distributions.

Unfortunately, the mathematical analysis of queueing models with nonexponential distributions is much more difficult. However, it has been possible to obtain some useful results for a few such models. The derivations of these results are beyond the level of this book, but in this section we shall summarize the models and their results.

The M/G/1 Model

As introduced in Sec. 17.2, the M/G/1 model assumes that the queueing system has a single server and a Poisson input process (exponential interarrival times) with a fixed mean arrival rate $\lambda$. As usual, it is assumed that the customers have independent service times with the same probability distribution. However, no restrictions are imposed on what this service-time distribution can be. In fact, it is only necessary to know (or estimate) the mean $1/\mu$ and variance $\sigma^2$ of this distribution.

Any such queueing system can eventually reach a steady-state condition if $\rho = \lambda/\mu < 1$. The readily available steady-state results¹¹ for this general model are the following:

$$P_0 = 1 - \rho,$$
$$L_q = \frac{\lambda^2\sigma^2 + \rho^2}{2(1 - \rho)},$$
$$L = \rho + L_q,$$
$$W_q = \frac{L_q}{\lambda},$$
$$W = W_q + \frac{1}{\mu}.$$

Considering the complexity involved in analyzing a model that permits any service-time distribution, it is remarkable that such a simple formula can be obtained for $L_q$. This formula is one of the most important results in queueing theory because of its ease of use and the prevalence of M/G/1 queueing systems in practice. This equation for $L_q$ (or its counterpart for $W_q$) commonly is referred to as the Pollaczek-Khintchine formula, named after two pioneers in the development of queueing theory who derived the formula independently in the early 1930s.

For any fixed expected service time $1/\mu$, notice that $L_q$, $L$, $W_q$, and $W$ all increase as $\sigma^2$ is increased.
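As a sketch, the Pollaczek-Khintchine formula can be evaluated for service-time distributions that share the same mean $1/\mu$ but differ in variance; the rates used are illustrative placeholders, not values from the text.

```python
# Sketch: the Pollaczek-Khintchine formula for the M/G/1 queue.

def mg1_Lq(lam, mu, sigma2):
    """Lq = (lam^2 sigma^2 + rho^2) / (2 (1 - rho)), requires rho < 1."""
    rho = lam / mu
    return (lam**2 * sigma2 + rho**2) / (2 * (1 - rho))

lam, mu = 2.0, 3.0
# Same mean service time 1/mu, three levels of variability:
# sigma2 = 0 (constant), an intermediate value, and 1/mu^2 (exponential)
for sigma2 in (0.0, 0.5 / mu**2, 1 / mu**2):
    print(sigma2, mg1_Lq(lam, mu, sigma2))
```

With $\sigma^2 = 1/\mu^2$ this reproduces the M/M/1 value $L_q = \lambda^2/(\mu(\mu - \lambda))$, and with $\sigma^2 = 0$ it gives exactly half of it, so the queue length grows steadily with the variability of the server.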
This result is important because it indicates that the consistency of the server has a major bearing on the performance of the service facility, not just the server's average speed. This key point is illustrated in the next subsection.

When the service-time distribution is exponential, $\sigma^2 = 1/\mu^2$, and the preceding results reduce to the corresponding results for the M/M/1 model given at the beginning of Sec. 17.6.

The complete flexibility in the service-time distribution provided by this model is extremely useful, so it is unfortunate that efforts to derive similar results for the multiple-server case have been unsuccessful. However, some multiple-server results have been obtained for the important special cases described by the following two models. (Excel templates are available in this chapter's Excel file for performing the calculations for both the M/G/1 model and the two models considered below when $s = 1$.)

¹¹A recursion formula also is available for calculating the probability distribution of the number of customers in the system; see A. Hordijk and H. C. Tijms, "A Simple Proof of the Equivalence of the Limiting Distribution of the Continuous-Time and the Embedded Process of the Queue Size in the M/G/1 Queue," Statistica Neerlandica, 36: 97–100, 1976.

The M/D/s Model

When the service consists of essentially the same routine task to be performed for all customers, there tends to be little variation in the service time required. The M/D/s model often provides a reasonable representation for this kind of situation, because it assumes that all service times actually equal some fixed constant (the degenerate service-time distribution) and that we have a Poisson input process with a fixed mean arrival rate $\lambda$.
When there is just a single server, the M/D/1 model is just the special case of the M/G/1 model where σ² = 0, so that the Pollaczek-Khintchine formula reduces to

Lq = ρ² / [2(1 − ρ)],

where L, Wq, and W are obtained from Lq as just shown. Notice that these Lq and Wq are exactly half as large as those for the exponential service-time case of Sec. 17.6 (the M/M/1 model), where σ² = 1/μ², so decreasing σ² can greatly improve the measures of performance of a queueing system.

For the multiple-server version of this model (M/D/s), a complicated method is available¹² for deriving the steady-state probability distribution of the number of customers in the system and its mean [assuming ρ = λ/(sμ) < 1]. These results have been tabulated for numerous cases,¹³ and the means (L) also are given graphically in Fig. 17.8.

The M/Ek/s Model

The M/D/s model assumes zero variation in the service times (σ = 0), whereas the exponential service-time distribution assumes a very large variation (σ = 1/μ). Between these two rather extreme cases lies a long middle ground (0 < σ < 1/μ), where most actual service-time distributions fall. Another kind of theoretical service-time distribution that fills this middle ground is the Erlang distribution (named after the founder of queueing theory). The probability density function for the Erlang distribution is

f(t) = [(μk)^k / (k − 1)!] t^(k−1) e^(−kμt),  for t ≥ 0,

where μ and k are strictly positive parameters of the distribution and k is further restricted to be integer. (Except for this integer restriction and the definition of the parameters, this distribution is identical to the gamma distribution.) Its mean and standard deviation are

Mean = 1/μ

and

Standard deviation = (1/√k)(1/μ).

Thus, k is the parameter that specifies the degree of variability of the service times relative to the mean. It usually is referred to as the shape parameter.

¹²See N. U. Prabhu, Queues and Inventories, Wiley, New York, 1965, pp. 32–34; also see pp. 286–288 in Selected Reference 5.

¹³F. S.
Hillier and O. S. Yu, with D. Avis, L. Fossett, F. Lo, and M. Reiman, Queueing Tables and Graphs, Elsevier North-Holland, New York, 1981.

17.7 QUEUEING MODELS INVOLVING NONEXPONENTIAL DISTRIBUTIONS 765

■ FIGURE 17.8 Values of L (the steady-state expected number of customers in the queueing system) for the M/D/s model (Sec. 17.7), plotted against the utilization factor ρ = λ/(sμ) for s = 1, 2, 3, 4, 5, 7, 10, 15, and 25.

The Erlang distribution is a very important distribution in queueing theory for two reasons. To describe the first one, suppose that T1, T2, . . . , Tk are k independent random variables with an identical exponential distribution whose mean is 1/(kμ). Then their sum

T = T1 + T2 + ··· + Tk

has an Erlang distribution with parameters μ and k. The discussion of the exponential distribution in Sec. 17.4 suggested that the time required to perform certain kinds of tasks might well have an exponential distribution. However, the total service required by a customer may involve the server's performing not just one specific task but a sequence of k tasks. If the respective tasks have an independent and identical exponential distribution for their duration, the total service time will have an Erlang distribution. This will be the case, e.g., if the server must perform the same exponential task k independent times for each customer.

The Erlang distribution also is very useful because it is a large (two-parameter) family of distributions permitting only nonnegative values. Hence, empirical service-time distributions can usually be reasonably approximated by an Erlang distribution. In fact, both the exponential and the degenerate (constant) distributions are special cases of the Erlang distribution, with k = 1 and k = ∞, respectively.

■ FIGURE 17.9 A family of Erlang distributions (probability density functions of the service time t) with constant mean 1/μ, shown for k = 1, 2, 3, and ∞.
Intermediate values of k provide intermediate distributions with mean = 1/μ, mode = (k − 1)/(kμ), and variance = 1/(kμ²), as suggested by Fig. 17.9. Therefore, after estimating the mean and variance of an empirical service-time distribution, these formulas for the mean and variance can be used to choose the integer value of k that matches the estimates most closely.

Now consider the M/Ek/1 model, which is just the special case of the M/G/1 model where service times have an Erlang distribution with shape parameter = k. Applying the Pollaczek-Khintchine formula with σ² = 1/(kμ²) (and the accompanying results given for M/G/1) yields

Lq = [λ²/(kμ²) + ρ²] / [2(1 − ρ)] = [(1 + k)/(2k)] · λ²/[μ(μ − λ)],
Wq = [(1 + k)/(2k)] · λ/[μ(μ − λ)],
W = Wq + 1/μ,
L = λW.

With multiple servers (M/Ek/s), the relationship of the Erlang distribution to the exponential distribution just described can be exploited to formulate a modified birth-and-death process in terms of individual exponential service phases (k per customer) rather than complete customers. However, it has not been possible to derive a general steady-state solution [when ρ = λ/(sμ) < 1] for the probability distribution of the number of customers in the system as we did in Sec. 17.5. Instead, advanced theory is required to solve individual cases numerically. Once again, these results have been obtained and tabulated for numerous cases.¹⁴ The means (L) also are given graphically in Fig. 17.10 for some cases where s = 2. The Solved Examples section of the book's website includes another example that applies the M/Ek/s model for both s = 1 and s = 2 to choose the less costly alternative.

¹⁴Ibid.

■ FIGURE 17.10 Values of L (the steady-state expected number of customers in the queueing system) for the M/Ek/2 model (Sec. 17.7), plotted against the utilization factor ρ = λ/(sμ) for k = 1, 2, and 8.
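As a quick check on these expressions, the following sketch (the function name and the rates are our own, purely illustrative) evaluates the M/Ek/1 formulas and confirms that k = 1 reproduces the M/M/1 result while large k approaches the M/D/1 result:

```python
# Steady-state measures for the M/E_k/1 queue (illustrative sketch).
def mek1_measures(lam, mu, k):
    """Erlang service with shape parameter k; k = 1 is M/M/1, k -> infinity is M/D/1."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("steady state requires rho = lam/mu < 1")
    Lq = (1 + k) / (2 * k) * lam**2 / (mu * (mu - lam))
    Wq = (1 + k) / (2 * k) * lam / (mu * (mu - lam))
    W = Wq + 1 / mu
    L = lam * W
    return Lq, Wq, W, L

lam, mu = 2.0, 3.0                        # hypothetical rates
Lq1, *_ = mek1_measures(lam, mu, 1)       # equals rho^2/(1 - rho), the M/M/1 value
Lq100, *_ = mek1_measures(lam, mu, 100)   # close to rho^2/(2(1 - rho)), the M/D/1 value
print(Lq1, Lq100)
```

Sweeping k between these extremes traces out the middle ground of service-time variability described above.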
Models without a Poisson Input

All the queueing models presented thus far have assumed a Poisson input process (exponential interarrival times). However, this assumption is violated if the arrivals are scheduled or regulated in some way that prevents them from occurring randomly, in which case another model is needed. As long as the service times have an exponential distribution with a fixed parameter, three such models are readily available. These models are obtained by merely reversing the assumed distributions of the interarrival and service times in the preceding three models.

Thus, the first new model (GI/M/s) imposes no restriction on what the interarrival time distribution can be. In this case, there are some steady-state results available¹⁵ (particularly in regard to waiting-time distributions) for both the single-server and multiple-server versions of the model, but these results are not nearly as convenient as the simple expressions given for the M/G/1 model. The second new model (D/M/s) assumes that all interarrival times equal some fixed constant, which would represent a queueing system where arrivals are scheduled at regular intervals. The third new model (Ek/M/s) assumes an Erlang interarrival time distribution, which provides a middle ground between regularly scheduled (constant) and completely random (exponential) arrivals. Extensive computational results have been tabulated¹⁶ for these latter two models, including the values of L given graphically in Figs. 17.11 and 17.12.

¹⁵For example, see pp. 259–270 of Selected Reference 5.

■ FIGURE 17.11 Values of L (the steady-state expected number of customers in the queueing system) for the D/M/s model (Sec. 17.7), plotted against the utilization factor ρ = λ/(sμ) for s = 1, 2, 3, 4, 5, 7, 10, and 15.
If neither the interarrival times nor the service times for a queueing system have an exponential distribution, then there are three additional queueing models for which computational results also are available.¹⁷ One of these models (Em/Ek/s) assumes an Erlang distribution for both these times. The other two models (Ek/D/s and D/Ek/s) assume that one of these times has an Erlang distribution and the other time equals some fixed constant.

Other Models

Although you have seen in this section a large number of queueing models that involve nonexponential distributions, we have far from exhausted the list. For example, another distribution that occasionally is used for either interarrival times or service times is the hyperexponential distribution. The key characteristic of this distribution is that even though only nonnegative values are allowed, its standard deviation σ actually is larger than its mean 1/μ. This characteristic is in contrast to the Erlang distribution, where σ < 1/μ in every case except k = 1 (the exponential distribution), which has σ = 1/μ. To illustrate a typical situation where σ > 1/μ can occur, we suppose that the service involved in the queueing system is the repair of some kind of machine or vehicle. If many of the repairs turn out to be routine (small service times) but occasional repairs require an extensive overhaul (very large service times), then the standard deviation of service times will tend to be quite large relative to the mean, in which case the hyperexponential distribution may be used to represent the service-time distribution.

¹⁶Hillier and Yu, op. cit.
¹⁷Ibid.

■ FIGURE 17.12 Values of L (the steady-state expected number of customers in the queueing system) for the Ek/M/2 model (Sec. 17.7), plotted against the utilization factor for m = 1, 2, and 16.
Specifically, this distribution would assume that there are fixed probabilities, p and (1 − p), for which kind of repair will occur, that the time required for each kind has an exponential distribution, but that the parameters for these two exponential distributions are different. (In general, the hyperexponential distribution is such a composite of two or more exponential distributions.)

Another family of distributions coming into general use consists of phase-type distributions (some of which also are called generalized Erlangian distributions). These distributions are obtained by breaking down the total time into a number of phases, each having an exponential distribution, where the parameters of these exponential distributions may be different and the phases may be either in series or in parallel (or both). A group of phases being in parallel means that the process randomly selects one of the phases to go through each time according to specified probabilities. This approach is, in fact, how the hyperexponential distribution is derived, so this distribution is a special case of the phase-type distributions. Another special case is the Erlang distribution, which has the restrictions that all its k phases are in series and that these phases have the same parameter for their exponential distributions. Removing these restrictions means that phase-type distributions in general can provide considerably more flexibility than the Erlang distribution in fitting the actual distribution of interarrival times or service times observed in a real queueing system. This flexibility is especially valuable when using the actual distribution directly in the model is not analytically tractable and the ratio of the mean to the standard deviation for the actual distribution does not closely match the available ratios (√k for k = 1, 2, . . .) for the Erlang distribution.
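The claim that σ > 1/μ is easy to verify numerically from the two-branch mixture just described. The probabilities and rates below are hypothetical, chosen only to mimic the routine-repair/overhaul situation:

```python
# Mean and standard deviation of a two-branch hyperexponential
# distribution: exponential with rate mu1 with probability p, rate mu2 otherwise.
from math import sqrt

def hyperexp_mean_sd(p, mu1, mu2):
    mean = p / mu1 + (1 - p) / mu2
    # E[T^2] for an exponential distribution with rate m is 2/m^2.
    second_moment = 2 * p / mu1**2 + 2 * (1 - p) / mu2**2
    sd = sqrt(second_moment - mean**2)
    return mean, sd

# Hypothetical repair shop: 90% routine repairs (mean 0.5 hour),
# 10% overhauls (mean 10 hours).
mean, sd = hyperexp_mean_sd(p=0.9, mu1=2.0, mu2=0.1)
# The standard deviation exceeds the mean, as claimed for this distribution.
print(mean, sd)
```

For these hypothetical numbers the standard deviation comes out roughly three times the mean, which is exactly the heavy-variability situation the hyperexponential distribution is meant to capture.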
Since they are built up from combinations of exponential distributions, queueing models using phase-type distributions still can be formulated in terms of transitions that involve only exponential distributions. The resulting model generally will have an infinite number of states, so solving for the steady-state distribution of the state of the system requires solving an infinite system of linear equations that has a relatively complicated structure. Solving such a system is far from routine, but theoretical advances have enabled us to solve these queueing models numerically in some cases. An extensive tabulation of these results for models with various phase-type distributions (including the hyperexponential distribution) is available.¹⁸

■ 17.8 PRIORITY-DISCIPLINE QUEUEING MODELS

In priority-discipline queueing models, the queue discipline is based on a priority system. Thus, the order in which members of the queue are selected for service is based on their assigned priorities. Many real queueing systems fit these priority-discipline models much more closely than other available models. Rush jobs are taken ahead of other jobs, and important customers may be given precedence over others. Patients in a hospital emergency room also will generally be prioritized for treatment depending on the severity of their illness or injury. (We will return to the County Hospital example with priorities later in this section.) Therefore, the use of priority-discipline models often provides a very welcome refinement over the more usual queueing models.

We present two basic priority-discipline models here. Since both models make the same assumptions, except for the nature of the priorities, we first describe the models together and then summarize their results separately.
The Models

Both models assume that there are N priority classes (class 1 has the highest priority and class N has the lowest) and that whenever a server becomes free to begin serving a new customer from the queue, the one customer selected is that member of the highest priority class represented in the queue who has waited longest. In other words, customers are selected to begin service in the order of their priority classes, but on a first-come-first-served basis within each priority class. A Poisson input process and exponential service times are assumed for each priority class. Except for one special case considered later, the models also make the somewhat restrictive assumption that the expected service time is the same for all priority classes. However, the models do permit the mean arrival rate to differ among priority classes.

¹⁸L. P. Seelen, H. C. Tijms, and M. H. Van Hoorn, Tables for Multi-Server Queues, North-Holland, Amsterdam, 1985.

The distinction between the two models is whether the priorities are nonpreemptive or preemptive. With nonpreemptive priorities, a customer being served cannot be ejected back into the queue (preempted) if a higher-priority customer enters the queueing system. Therefore, once a server has begun serving a customer, the service must be completed without interruption. The first model assumes nonpreemptive priorities.

With preemptive priorities, the lowest-priority customer being served is preempted (ejected back into the queue) whenever a higher-priority customer enters the queueing system. A server is thereby freed to begin serving the new arrival immediately. (When a server does succeed in finishing a service, the next customer to begin receiving service is selected just as described at the beginning of this subsection, so a preempted customer normally will get back into service again and, after enough tries, will eventually finish.)
Because of the lack-of-memory property of the exponential distribution (see Sec. 17.4), we do not need to worry about defining the point at which service begins when a preempted customer returns to service; the distribution of the remaining service time always is the same. (For any other service-time distribution, it becomes important to distinguish between preemptive-resume systems, where service for a preempted customer resumes at the point of interruption, and preemptive-repeat systems, where service must start at the beginning again.) The second model assumes preemptive priorities. For both models, if the distinction between customers in different priority classes were ignored, Property 6 for the exponential distribution (see Sec. 17.4) implies that all customers would arrive according to a Poisson input process. Furthermore, all customers have the same exponential distribution for service times. Consequently, the two models actually are identical to the M/M/s model studied in Sec. 17.6 except for the order in which customers are served. Therefore, when we count just the total number of customers in the system, the steady-state distribution for the M/M/s model also applies to both models. Consequently, the formulas for L and Lq also carry over, as do the expected waiting-time results (by Little’s formula) W and Wq, for a randomly selected customer. What changes is the distribution of waiting times, which was derived in Sec. 17.6 under the assumption of a first-come-first-served queue discipline. With a priority discipline, this distribution has a much larger variance, because the waiting times of customers in the highest priority classes tend to be much smaller than those under a first-come-first-served discipline, whereas the waiting times in the lowest priority classes tend to be much larger. By the same token, the breakdown of the total number of customers in the system tends to be disproportionately weighted toward the lower-priority classes. 
But this condition is just the reason for imposing priorities on the queueing system in the first place. We want to improve the measures of performance for each of the higher-priority classes at the expense of performance for the lower-priority classes. To determine how much improvement is being made, we need to obtain such measures as expected waiting time in the system and expected number of customers in the system for the individual priority classes. Expressions for these measures are given next for the two models in turn.

Results for the Nonpreemptive Priorities Model

Let Wk be the steady-state expected waiting time in the system (including service time) for a member of priority class k. Then

Wk = 1/(A·Bk−1·Bk) + 1/μ,  for k = 1, 2, . . . , N,

where

A = s! [(sμ − λ)/r^s] Σ_{j=0}^{s−1} (r^j/j!) + sμ,
B0 = 1,
Bk = 1 − Σ_{i=1}^{k} λi/(sμ),
s = number of servers,
μ = mean service rate per busy server,
λi = mean arrival rate for priority class i,
λ = Σ_{i=1}^{N} λi,
r = λ/μ.

(This result assumes that

Σ_{i=1}^{k} λi < sμ,

so that priority class k can reach a steady-state condition.) Little's formula still applies to individual priority classes, so Lk, the steady-state expected number of members of priority class k in the queueing system (including those being served), is

Lk = λk·Wk,  for k = 1, 2, . . . , N.

To determine the expected waiting time in the queue (excluding service time) for priority class k, merely subtract 1/μ from Wk; the corresponding expected queue length is again obtained by multiplying by λk. For the special case where s = 1, the expression for A reduces to A = μ²/λ.

An Excel template is provided in your OR Courseware for performing the above calculations. The Solved Examples section of the book's website provides an example that illustrates the application of the nonpreemptive priorities model for determining how many turret lathes a factory should have when the jobs fall into three priority classes.
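These formulas translate directly into code. The sketch below is our own illustrative helper (the function name and the two-class arrival rates are hypothetical, not from the text); it computes A, the Bk, and each Wk:

```python
# Expected waiting times W_k in the nonpreemptive priorities model
# (illustrative sketch of the formulas above).
from math import factorial

def nonpreemptive_W(lams, mu, s):
    """lams[i] = arrival rate of priority class i+1; mu = service rate; s = servers."""
    lam = sum(lams)
    r = lam / mu
    A = factorial(s) * (s * mu - lam) / r**s * sum(r**j / factorial(j)
                                                  for j in range(s)) + s * mu
    W = []
    B_prev = 1.0   # B_0 = 1
    cum = 0.0      # cumulative arrival rate of classes 1..k
    for lam_k in lams:
        cum += lam_k
        if cum >= s * mu:
            raise ValueError("priority class cannot reach steady state")
        B_k = 1 - cum / (s * mu)
        W.append(1 / (A * B_prev * B_k) + 1 / mu)
        B_prev = B_k
    return A, W

# Hypothetical single-server example: two classes with lam1 = 1, lam2 = 2, mu = 4.
A, W = nonpreemptive_W([1.0, 2.0], mu=4.0, s=1)
# Here A reduces to mu^2/lam = 16/3 and W = [0.5, 1.25] hour; the
# arrival-rate-weighted average (1/3)(0.5) + (2/3)(1.25) = 1.0 equals the
# FCFS M/M/1 value W = 1/(mu - lam), as it must for any queue discipline.
```

The final comment illustrates a useful sanity check: priorities redistribute waiting among the classes without changing the overall average time in the system.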
A Single-Server Variation of the Nonpreemptive Priorities Model

The above assumption that the expected service time 1/μ is the same for all priority classes is a fairly restrictive one. In practice, this assumption sometimes is violated because of differences in the service requirements for the different priority classes. Fortunately, for the special case of a single server, it is possible to allow different expected service times and still obtain useful results. Let 1/μk denote the mean of the exponential service-time distribution for priority class k, so

μk = mean service rate for priority class k,  for k = 1, 2, . . . , N.

Then the steady-state expected waiting time in the system for a member of priority class k is

Wk = ak/(bk−1·bk) + 1/μk,  for k = 1, 2, . . . , N,

where

ak = Σ_{i=1}^{k} λi/μi²,
b0 = 1,
bk = 1 − Σ_{i=1}^{k} λi/μi.

This result holds as long as

Σ_{i=1}^{k} λi/μi < 1,

which enables priority class k to reach a steady-state condition. Little's formula can be used as described above to obtain the other main measures of performance for each priority class.

Results for the Preemptive Priorities Model

For the preemptive priorities model, we need to reinstate the assumption that the expected service time is the same for all priority classes. Using the same notation as for the original nonpreemptive priorities model, having the preemption changes the total expected waiting time in the system (including the total service time) to

Wk = (1/μ)/(Bk−1·Bk),  for k = 1, 2, . . . , N,

for the single-server case (s = 1). When s > 1, Wk can be calculated by an iterative procedure that will be illustrated soon by the County Hospital example. The Lk continue to satisfy the relationship

Lk = λk·Wk,  for k = 1, 2, . . . , N.

The corresponding results for the queue (excluding customers in service) also can be obtained from Wk and Lk as just described for the case of nonpreemptive priorities.
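The single-server variation with class-dependent service rates can be sketched the same way. The helper and the two-class data below are hypothetical, chosen only to show the arithmetic of the ak and bk:

```python
# Single-server nonpreemptive priorities with class-dependent service rates
# (illustrative sketch; the rates below are hypothetical).
def nonpreemptive_varied_W(lams, mus):
    """lams[k], mus[k]: arrival and service rates of priority class k+1."""
    W = []
    a = 0.0        # running sum of lam_i / mu_i^2
    util = 0.0     # running sum of lam_i / mu_i
    b_prev = 1.0   # b_0 = 1
    for lam_k, mu_k in zip(lams, mus):
        a += lam_k / mu_k**2
        util += lam_k / mu_k
        if util >= 1:
            raise ValueError("priority class cannot reach steady state")
        b_k = 1 - util
        W.append(a / (b_prev * b_k) + 1 / mu_k)
        b_prev = b_k
    return W

# Two hypothetical classes: rush jobs (lam = 1, mu = 5), regular jobs (lam = 2, mu = 4).
W = nonpreemptive_varied_W([1.0, 2.0], [5.0, 4.0])
print(W)
```

For these numbers the rush jobs average 0.25 hour in the system versus 0.9375 hour for the regular jobs, even though the rush jobs also have the shorter mean service time.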
Because of the lack-of-memory property of the exponential distribution (see Sec. 17.4), preemptions do not affect the service process (occurrence of service completions) in any way. The expected total service time for any customer still is 1/μ. This chapter's Excel files include an Excel template for calculating the above measures of performance for the single-server case.

The County Hospital Example with Priorities

For the County Hospital emergency room problem, the management engineer has noticed that the patients are not treated on a first-come-first-served basis. Rather, the admitting nurse seems to divide the patients into roughly three categories: (1) critical cases, where prompt treatment is vital for survival; (2) serious cases, where early treatment is important to prevent further deterioration; and (3) stable cases, where treatment can be delayed without adverse medical consequences. Patients are then treated in this order of priority, where those in the same category are normally taken on a first-come-first-served basis. A doctor will interrupt treatment of a patient if a new case in a higher-priority category arrives. Approximately 10 percent of the patients fall into the first category, 30 percent into the second, and 60 percent into the third. Because the more serious cases will be sent to the hospital for further care after receiving emergency treatment, the average treatment time by a doctor in the emergency room actually does not differ greatly among these categories. The management engineer has decided to use a priority-discipline queueing model as a reasonable representation of this queueing system, where the three categories of patients constitute the three priority classes in the model. Because treatment is interrupted by the arrival of a higher-priority case, the preemptive priorities model is the appropriate one. Given the previously available data (μ = 3 and λ = 2), the preceding percentages yield λ1 = 0.2, λ2 = 0.6, and λ3 = 1.2.
Table 17.3 gives the resulting expected waiting times in the queue (so excluding treatment time) for the respective priority classes¹⁹ when there is one (s = 1) or two (s = 2) doctors on duty. (The corresponding results for the nonpreemptive priorities model also are given in Table 17.3 to show the effect of preempting.)

¹⁹Note that these expected times can no longer be interpreted as the expected time before treatment begins when k > 1, because treatment may be interrupted at least once, causing additional waiting time before service is completed.

■ TABLE 17.3 Steady-state results from the priority-discipline models for the County Hospital problem

                Preemptive Priorities        Nonpreemptive Priorities
                s = 1         s = 2          s = 1         s = 2
A               —             —              4.5           36
B1              0.933         —              0.933         0.967
B2              0.733         —              0.733         0.867
B3              0.333         —              0.333         0.667
W1 − 1/μ        0.024 hour    0.00037 hour   0.238 hour    0.029 hour
W2 − 1/μ        0.154 hour    0.00793 hour   0.325 hour    0.033 hour
W3 − 1/μ        1.033 hours   0.06542 hour   0.889 hour    0.048 hour

Deriving the Preemptive Priority Results. These preemptive priority results for s = 2 were obtained as follows. Because the waiting times for priority class 1 customers are completely unaffected by the presence of customers in the lower-priority classes, W1 will be the same for any other values of λ2 and λ3, including λ2 = 0 and λ3 = 0. Therefore, W1 must equal W for the corresponding one-class model (the M/M/s model in Sec. 17.6) with s = 2, μ = 3, and λ = λ1 = 0.2, which yields

W1 = W = 0.33370 hour,  for λ = 0.2,

so

W1 − 1/μ = 0.33370 − 0.33333 = 0.00037 hour.

Now consider the first two priority classes. Again note that customers in these classes are completely unaffected by lower-priority classes (just priority class 3 in this case), which can therefore be ignored in the analysis.
Let W̄1−2 be the expected waiting time in the system (so including service time) of a random arrival in either of these two classes, so the probability is λ1/(λ1 + λ2) = 1/4 that this arrival is in class 1 and λ2/(λ1 + λ2) = 3/4 that it is in class 2. Therefore,

W̄1−2 = (1/4)W1 + (3/4)W2.

Furthermore, because the expected waiting time for this same random arrival is the same for any queue discipline, W̄1−2 must also equal W for the M/M/s model in Sec. 17.6, with s = 2, μ = 3, and λ = λ1 + λ2 = 0.8, which yields

W̄1−2 = W = 0.33937 hour,  for λ = 0.8.

Combining these facts gives

W2 = (4/3)[0.33937 − (1/4)(0.33370)] = 0.34126 hour,
W2 − 1/μ = 0.00793 hour.

Finally, let W̄1−3 be the expected waiting time in the system (so including service time) for a random arrival in any of the three priority classes, so the probabilities are 0.1, 0.3, and 0.6 that it is in classes 1, 2, and 3, respectively. Therefore,

W̄1−3 = 0.1W1 + 0.3W2 + 0.6W3.

Furthermore, W̄1−3 must also equal W for the M/M/s model in Sec. 17.6, with s = 2, μ = 3, and λ = λ1 + λ2 + λ3 = 2, so that (from Table 17.2)

W̄1−3 = W = 0.375 hour,  for λ = 2.

Consequently,

W3 = (1/0.6)[0.375 − 0.1(0.33370) − 0.3(0.34126)] = 0.39875 hour,
W3 − 1/μ = 0.06542 hour.

The corresponding Wq results for the M/M/s model in Sec. 17.6 also could have been used in exactly the same way to derive the Wk − 1/μ quantities directly.

Conclusions. When s = 1, the Wk − 1/μ values in Table 17.3 for the preemptive priorities case indicate that providing just a single doctor would cause critical cases to wait about 1½ minutes (0.024 hour) on the average, serious cases to wait more than 9 minutes, and stable cases to wait more than 1 hour. (Contrast these results with the average wait of Wq = 2/3 hour for all patients that was obtained in Table 17.2 under the first-come-first-served queue discipline.) However, these values represent statistical expectations, so some patients have to wait considerably longer than the average for their priority class.
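The iterative derivation above is easy to reproduce. The sketch below (our own helper function, not from the text) computes W for the M/M/2 model at each cumulative arrival rate and then peels off W1, W2, and W3 exactly as in the steps just shown:

```python
# Reproducing the preemptive-priority waiting times for s = 2
# in the County Hospital example (illustrative sketch).
from math import factorial

def mms_W(lam, mu, s):
    """Expected time in system W for the M/M/s queue."""
    r = lam / mu
    rho = r / s
    P0 = 1 / (sum(r**n / factorial(n) for n in range(s))
              + r**s / (factorial(s) * (1 - rho)))
    Lq = P0 * r**s * rho / (factorial(s) * (1 - rho)**2)
    return Lq / lam + 1 / mu

mu, s = 3.0, 2
lam1, lam2, lam3 = 0.2, 0.6, 1.2

W1 = mms_W(lam1, mu, s)                  # lower classes cannot delay class 1
W12 = mms_W(lam1 + lam2, mu, s)          # average over classes 1 and 2
W2 = (W12 - 0.25 * W1) / 0.75            # since W12 = (1/4)W1 + (3/4)W2
W13 = mms_W(lam1 + lam2 + lam3, mu, s)   # average over all three classes
W3 = (W13 - 0.1 * W1 - 0.3 * W2) / 0.6
# W1, W2, W3 are about 0.33370, 0.34126, and 0.39875 hour, matching the
# Table 17.3 entries after subtracting 1/mu = 0.33333 hour.
```

Changing s to 1 in the same sketch would instead reproduce the closed-form single-server column of Table 17.3.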
This wait would not be tolerable for the critical and serious cases, where a few minutes can be vital. By contrast, the s = 2 results in Table 17.3 (preemptive priorities case) indicate that adding a second doctor would virtually eliminate waiting for all but the stable cases. Therefore, the management engineer recommended that there be two doctors on duty in the emergency room during the early evening hours next year. The board of directors for County Hospital adopted this recommendation and simultaneously raised the charge for using the emergency room!

■ 17.9 QUEUEING NETWORKS

Thus far we have considered only queueing systems that have a single service facility with one or more servers. However, queueing systems encountered in OR studies are sometimes actually queueing networks, i.e., networks of service facilities where customers must receive service at some of or all these facilities. For example, orders being processed through a job shop must be routed through a sequence of machine groups (service facilities). It is therefore necessary to study the entire network to obtain such information as the expected total waiting time, expected number of customers in the entire system, and so forth.

Because of the importance of queueing networks, research into this area has been very active. However, this is a difficult area, so we limit ourselves to a brief introduction. One result is of such fundamental importance for queueing networks that this finding and its implications warrant special attention here. This fundamental result is the following equivalence property for the input process of arriving customers and the output process of departing customers for certain queueing systems.

Equivalence property: Assume that a service facility with s servers and an infinite queue has a Poisson input with parameter λ and the same exponential service-time distribution with parameter μ for each server (the M/M/s model), where sμ > λ.
Then the steady-state output of this service facility is also a Poisson process with parameter λ.

An Application Vignette

For many decades, General Motors Corporation (GM) enjoyed its position as the world's largest automotive manufacturer. However, ever since the late 1980s, when the productivity of GM's plants ranked near the bottom in the industry, the company's market position has been steadily eroding due to ever-increasing foreign competition. To counter this foreign competition, GM management initiated a long-term operations research project many years ago to predict and improve the throughput performance of the company's several hundred production lines throughout the world. The goal was to greatly increase the company's productivity throughout its manufacturing operations and thereby provide GM with a strategic competitive advantage.

The most important analytical tool used in this project has been a complicated queueing model that uses a simple single-server model as a building block. The overall model begins by considering a two-station production line where each station is modeled as a single-server queueing system with constant interarrival times and constant service times, with the following exceptions. The server (commonly a machine) at each station occasionally breaks down and does not resume serving until a repair is completed. The server at the first station also shuts down when it completes a service and the buffer between the stations is full. The server at the second station shuts down when it completes a service and has not yet received a job from the first station.

The next step in the analysis is to extend this queueing model for a two-station production line to one for a production line with any number of stations. This larger queueing model then is used to analyze how production lines should be designed to maximize their throughput. (The technique of simulation described in Chap.
20 also is used for this purpose for relatively complex production lines.)

This application of queueing theory (and simulation), along with supporting data-collection systems, has reaped remarkable benefits for GM. According to impartial industry sources, its plants, which once were among the least productive in the industry, now rank among the very best. The resulting improvements in production throughput in over 30 vehicle plants and 10 countries have yielded over $2.1 billion in documented savings and increased revenue. These dramatic results led to General Motors winning the prestigious First Prize in the 2005 international competition for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences.

Source: J. M. Alden, L. D. Burns, T. Costy, R. D. Hutton, C. A. Jackson, D. S. Kim, K. A. Kohls, J. H. Owen, M. A. Turnquist, and D. J. Vander Veen, "General Motors Increases Its Production Throughput," Interfaces, 36(1): 6–25, Jan.–Feb. 2006. (A link to this article is provided on our website, www.mhhe.com/hillier.)

Notice that this property makes no assumption about the type of queue discipline used. Whether it is first-come-first-served, random, or even a priority discipline as in Sec. 17.8, the served customers will leave the service facility according to a Poisson process. The crucial implication of this fact for queueing networks is that if these customers must then go to another service facility for further service, this second facility also will have a Poisson input. With an exponential service-time distribution, the equivalence property will hold for this facility as well, which can then provide a Poisson input for a third facility, etc. We discuss the consequences for two basic kinds of networks next.

Infinite Queues in Series

Suppose that customers must all receive service at a series of m service facilities in a fixed sequence.
Assume that each facility has an infinite queue (no limitation on the number of customers allowed in the queue), so that the series of facilities form a system of infinite queues in series. Assume further that the customers arrive at the first facility according to a Poisson process with parameter λ and that each facility i (i = 1, 2, . . . , m) has an exponential service-time distribution with parameter μi for its si servers, where si μi > λ. It then follows from the equivalence property that (under steady-state conditions) each service facility has a Poisson input with parameter λ. Therefore, the elementary M/M/s model of Sec. 17.6 (or its priority-discipline counterparts in Sec. 17.8) can be used to analyze each service facility independently of the others!

Being able to use the M/M/s model to obtain all measures of performance for each facility independently, rather than analyzing interactions between facilities, is a tremendous simplification. For example, the probability of having n customers at a given facility is given by the formula for Pn in Sec. 17.6 for the M/M/s model. The joint probability of n1 customers at facility 1, n2 customers at facility 2, . . . then is the product of the individual probabilities obtained in this simple way. In particular, this joint probability can be expressed as

P{(N1, N2, . . . , Nm) = (n1, n2, . . . , nm)} = Pn1 Pn2 · · · Pnm.

(This simple form for the solution is called the product form solution.) Similarly, the expected total waiting time and the expected number of customers in the entire system can be obtained by merely summing the corresponding quantities obtained at the respective facilities.

Unfortunately, the equivalence property and its implications do not hold for the case of finite queues discussed in Sec. 17.6. This case is actually quite important in practice, because there is often a definite limitation on the queue length in front of service facilities in networks.
For example, only a small amount of buffer storage space is typically provided in front of each facility (station) in a production-line system. For such systems of finite queues in series, no simple product form solution is available. The facilities must be analyzed jointly instead, and only limited results have been obtained.

Jackson Networks

Systems of infinite queues in series are not the only queueing networks where the M/M/s model can be used to analyze each service facility independently of the others. Another prominent kind of network with this property (a product form solution) is the Jackson network, named after James R. Jackson, who first characterized the network several decades ago and showed that this property holds. The characteristics of a Jackson network are the same as assumed above for the system of infinite queues in series, except now the customers visit the facilities in different orders (and may not visit them all). For each facility, its arriving customers come from both outside the system (according to a Poisson process) and the other facilities. These characteristics are summarized below:

A Jackson network is a system of m service facilities where facility i (i = 1, 2, . . . , m) has

1. An infinite queue
2. Customers arriving from outside the system according to a Poisson input process with parameter ai
3. si servers with an exponential service-time distribution with parameter μi.

A customer leaving facility i is routed next to facility j (j = 1, 2, . . . , m) with probability pij or departs the system with probability

qi = 1 − Σ_{j=1}^{m} pij.

Any such network has the following key property:

Under steady-state conditions, each facility j (j = 1, 2, . . . , m) in a Jackson network behaves as if it were an independent M/M/s queueing system with arrival rate

λj = aj + Σ_{i=1}^{m} λi pij,

where sj μj > λj.
This key property cannot be proved directly from the equivalence property this time (the reasoning would become circular), but its intuitive underpinning is still provided by the latter property. The intuitive viewpoint (not quite technically correct) is that, for each facility i, its input processes from the various sources (outside and other facilities) are independent Poisson processes, so the aggregate input process is Poisson with parameter λi (Property 6 in Sec. 17.4). The equivalence property then says that the aggregate output process for facility i must be Poisson with parameter λi. By disaggregating this output process (Property 6 again), the process for customers going from facility i to facility j must be Poisson with parameter λi pij. This process becomes one of the Poisson input processes for facility j, thereby helping to maintain the series of Poisson processes in the overall system.

The equation given for obtaining λj is based on the fact that λi is the departure rate as well as the arrival rate for all customers using facility i. Because pij is the proportion of customers departing from facility i who go next to facility j, the rate at which customers from facility i arrive at facility j is λi pij. Summing this product over all i, and then adding this sum to aj, gives the total arrival rate to facility j from all sources. To calculate λj from this equation requires knowing the λi for i ≠ j, but these λi also are unknowns given by the corresponding equations. Therefore, the procedure is to solve simultaneously for λ1, λ2, . . . , λm by obtaining the simultaneous solution of the entire system of linear equations for λj for j = 1, 2, . . . , m. Your IOR Tutorial includes an interactive procedure for solving for the λj in this way.

To illustrate these calculations, consider a Jackson network with three service facilities that have the parameters shown in Table 17.4.
Plugging into the formula for λj for j = 1, 2, 3, we obtain

λ1 = 1 + 0.1λ2 + 0.4λ3
λ2 = 4 + 0.6λ1 + 0.4λ3
λ3 = 3 + 0.3λ1 + 0.3λ2.

(Reason through each equation to see why it gives the total arrival rate to the corresponding facility.) The simultaneous solution for this system is

λ1 = 5,    λ2 = 10,    λ3 = 7 1/2.

■ TABLE 17.4 Data for the example of a Jackson network

                                      pij
Facility j    sj    μj    aj    i = 1    i = 2    i = 3
    1          1    10     1      0       0.1      0.4
    2          2    10     4     0.6       0       0.4
    3          1    10     3     0.3      0.3       0

Given this simultaneous solution, each of the three service facilities now can be analyzed independently by using the formulas for the M/M/s model given in Sec. 17.6. For example, to obtain the distribution of the number of customers Ni = ni at facility i, note that

ρi = λi/(si μi) = 1/2 for i = 1,    1/2 for i = 2,    3/4 for i = 3.

Plugging these values (and the parameters in Table 17.4) into the formula for Pn gives

Pn1 = (1/2)(1/2)^n1                          for facility 1,

Pn2 = 1/3                  for n2 = 0
Pn2 = 1/3                  for n2 = 1        for facility 2,
Pn2 = (1/3)(1/2)^(n2−1)    for n2 ≥ 2

Pn3 = (1/4)(3/4)^n3                          for facility 3.

The joint probability of (n1, n2, n3) then is given simply by the product form solution

P{(N1, N2, N3) = (n1, n2, n3)} = Pn1 Pn2 Pn3.

In a similar manner, the expected number of customers Li at facility i can be calculated from Sec. 17.6 as

L1 = 1,    L2 = 4/3,    L3 = 3.

The expected total number of customers in the entire system then is

L = L1 + L2 + L3 = 5 1/3.

Obtaining W, the expected total waiting time in the system (including service times) for a customer, is a little trickier. You cannot simply add the expected waiting times at the respective facilities, because a customer does not necessarily visit each facility exactly once. However, Little's formula can still be used, where the system arrival rate λ is the sum of the arrival rates from outside to the facilities, λ = a1 + a2 + a3 = 8. Thus,

W = L/(a1 + a2 + a3) = (16/3)/8 = 2/3.
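These calculations are easy to verify numerically. The sketch below is our supplement (not part of the original text): it assumes NumPy is available, solves the traffic equations in matrix form, (I − Pᵀ)λ = a, with the Table 17.4 data, and recomputes L and W with the standard M/M/s formulas of Sec. 17.6. The helper name `mms_L` is ours.

```python
# Numerical check of the Jackson-network example (data from Table 17.4).
import numpy as np
from math import factorial

a = np.array([1.0, 4.0, 3.0])   # external Poisson arrival rates a_j
mu = [10.0, 10.0, 10.0]         # service rates mu_j
s = [1, 2, 1]                   # numbers of servers s_j
# P[i][j] = p_ij, the probability that a customer leaving facility i+1
# goes next to facility j+1 (0-based indexing).
P = np.array([[0.0, 0.6, 0.3],
              [0.1, 0.0, 0.3],
              [0.4, 0.4, 0.0]])

# Traffic equations lambda_j = a_j + sum_i lambda_i p_ij,
# rewritten in matrix form as (I - P^T) lambda = a.
lam = np.linalg.solve(np.eye(3) - P.T, a)   # gives (5, 10, 7.5)

def mms_L(lam_i, mu_i, s_i):
    """Expected number in system, L, for an M/M/s queue (Sec. 17.6 formulas)."""
    rho = lam_i / (s_i * mu_i)              # utilization factor (must be < 1)
    r = lam_i / mu_i                        # offered load lambda/mu
    p0 = 1.0 / (sum(r**n / factorial(n) for n in range(s_i))
                + r**s_i / (factorial(s_i) * (1 - rho)))
    Lq = p0 * r**s_i * rho / (factorial(s_i) * (1 - rho)**2)
    return Lq + r                           # L = Lq + lambda/mu

L = sum(mms_L(l, m, n) for l, m, n in zip(lam, mu, s))  # 1 + 4/3 + 3 = 16/3
W = L / a.sum()   # Little's formula with the total external arrival rate, 8
print(lam, L, W)  # lambda = (5, 10, 7.5), L = 16/3, W = 2/3
```

Because each facility behaves as an independent M/M/s queue once the λj are known, the whole analysis reduces to one linear solve plus per-facility formulas.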
In conclusion, we should point out that there do exist other (more complicated) kinds of queueing networks where the individual service facilities can be analyzed independently from the others. In fact, finding queueing networks with a product form solution has been the Holy Grail for research on queueing networks. Some sources of additional information are Selected References 1 and 2.

■ 17.10 THE APPLICATION OF QUEUEING THEORY

Because of the wealth of information provided by queueing theory, it is widely used to guide the design (or redesign) of queueing systems. We now turn our focus to how queueing theory is applied in this way.

A number of decisions may need to be made when designing a queueing system. The possible decisions include

1. Number of servers at a service facility.
2. Efficiency of the servers.
3. Number of service facilities.
4. Amount of waiting space in the queue.
5. Any priorities for different categories of customers.

The first of these (how many servers?) is the decision that arises most frequently, and we will focus our attention on it a little later in this section.

The two primary considerations in making these kinds of decisions typically are (1) the cost of the service capacity provided by the queueing system and (2) the consequences of making the customers wait in the queueing system. Providing too much service capacity causes excessive costs. Providing too little causes excessive waiting. Therefore, the goal is to find an appropriate trade-off between the service cost and the amount of waiting.

Two basic approaches are available for seeking this trade-off. One is to establish one or more criteria for a satisfactory level of service in terms of how much waiting would be acceptable. For example, one possible criterion might be that the expected waiting time in the system should not exceed a certain number of minutes.
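Criteria of this kind can be evaluated directly from the waiting-time distribution of the chosen queueing model. The sketch below is our supplement, not part of the original text: it assumes the M/M/s expression for P{W > t} given in Sec. 17.6 (valid when s − 1 − λ/μ ≠ 0), and the function name and illustrative parameters are ours.

```python
# Checking a service-level criterion with the M/M/s model.
from math import exp, factorial

def p_wait_exceeds(lam, mu, s, t):
    """P{waiting time in system > t} for the M/M/s model (Sec. 17.6).

    Assumes s - 1 - lam/mu != 0 (otherwise the last factor needs its limit).
    """
    r = lam / mu                # offered load
    rho = lam / (s * mu)        # utilization factor (must be < 1)
    p0 = 1.0 / (sum(r**n / factorial(n) for n in range(s))
                + r**s / (factorial(s) * (1 - rho)))
    c = s - 1 - r
    return exp(-mu * t) * (1 + p0 * r**s / (factorial(s) * (1 - rho))
                           * (1 - exp(-mu * t * c)) / c)

# Illustrative check: with lam = 120 and mu = 80 customers per hour and
# s = 3 servers, is the chance of spending more than 3 minutes (0.05 hour)
# in the system below 5 percent?
p = p_wait_exceeds(120.0, 80.0, 3, 0.05)
print(p < 0.05)   # the criterion is satisfied here (p is about 0.026)
```

Once such a function is available, trial and error over candidate designs amounts to evaluating it for each value of s until every criterion is met.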
Another might be that at least 95 percent of the customers should wait no longer than a certain number of minutes in the system. Similar criteria in terms of the expected number of customers in the system (or the probability distribution of this number) also could be used. The criteria also might be stated in terms of the waiting time or the number of customers in the queue instead of in the system. Once the criterion or criteria have been selected, it then is usually straightforward to use trial and error to find the least costly design of the queueing system that satisfies all the criteria.

The other basic approach for seeking the best trade-off involves assessing the costs associated with the consequences of making customers wait. For example, suppose that the queueing system is an internal service system (as described in Sec. 17.3), where the customers are the employees of a for-profit company. Making these employees wait at the queueing system causes lost productivity, which results in lost profit. This lost profit is the waiting cost associated with the queueing system. By expressing this waiting cost as a function of the amount of waiting, the problem of determining the best design of the queueing system can now be posed as minimizing the expected total cost (service cost plus waiting cost) per unit time.

We spell out this latter approach below for the problem of determining the optimal number of servers to provide.

How Many Servers Should Be Provided?

To formulate the objective function when the decision variable is the number of servers s at a particular service facility, let

E(TC) = expected total cost per unit time,
E(SC) = expected service cost per unit time,
E(WC) = expected waiting cost per unit time.

Then the objective is to choose the number of servers so as to

Minimize E(TC) = E(SC) + E(WC).

When each server costs the same, the service cost is

E(SC) = Cs s,

where Cs is the marginal cost of a server per unit time.
To evaluate E(WC) for any value of s, note that L = λW gives the expected total amount of waiting in the queueing system per unit time. Therefore, when the waiting cost is proportional to the amount of waiting, this cost can be expressed as

E(WC) = Cw L,

where Cw is the waiting cost per unit time for each customer in the queueing system. Therefore, after estimating the constants Cs and Cw, the goal is to choose the value of s so as to

Minimize E(TC) = Cs s + Cw L.

By choosing the queueing model that fits the queueing system, the value of L can be obtained for various values of s. Increasing s decreases L, at first rapidly and then gradually more slowly. Figure 17.13 shows the general shape of the E(SC), E(WC), and E(TC) curves versus the number of servers s. (For better conceptualization, we have drawn these as smooth curves even though the only feasible values of s are s = 1, 2, . . . .) By calculating E(TC) for consecutive values of s until E(TC) stops decreasing and starts increasing instead, it is straightforward to find the number of servers that minimizes total cost. The following example illustrates this process.

■ FIGURE 17.13 The shape of the expected cost curves for determining the number of servers to provide. (The vertical axis is expected cost per unit time.)

An Example

The Acme Machine Shop has a tool crib to store tools required by the shop mechanics. Two clerks run the tool crib. The clerks hand out the tools as the mechanics arrive and request them. The tools then are returned to the clerks when they are no longer needed. There have been complaints from supervisors that their mechanics have had to waste too much time waiting to be served at the tool crib, so it appears as if there should be more clerks. On the other hand, management is exerting pressure to reduce overhead in the plant, and this reduction would lead to fewer clerks.
To resolve these conflicting pressures, an OR study is being conducted to determine just how many clerks the tool crib should have.

The tool crib constitutes a queueing system, with the clerks as its servers and the mechanics as its customers. After gathering some data on interarrival times and service times, the OR team has concluded that the queueing model that fits this queueing system best is the M/M/s model. The estimates of the mean arrival rate λ and the mean service rate (per server) μ are

λ = 120 customers per hour,
μ = 80 customers per hour,

so the utilization factor for the two clerks is

ρ = λ/(sμ) = 120/(2(80)) = 0.75.

The total cost to the company of each tool crib clerk is about $20 per hour, so Cs = $20. While a mechanic is busy, the value to the company of his or her output averages about $48 per hour, so Cw = $48. Therefore, the OR team now needs to find the number of servers (tool crib clerks) s that will

Minimize E(TC) = $20s + $48L.

An Excel template has been provided in your OR Courseware for calculating these costs with the M/M/s model. All you need to do is enter the data for the model along with the unit service cost Cs, the unit waiting cost Cw, and the number of servers s you want to try. The template then calculates E(SC), E(WC), and E(TC). This is illustrated in Fig. 17.14 with s = 3 for this example. By repeatedly entering alternative values of s, the template then can reveal which value minimizes E(TC) in a matter of seconds.

Table 17.5 shows the data that would be generated from this template by repeating these calculations for s = 1, 2, 3, 4, and 5. Since the utilization factor for s = 1 is ρ = 1.5, a single clerk would be unable to keep up with the customers, so this option is ruled out. All larger values of s are feasible, but s = 3 has the smallest expected total cost.
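The same trial-and-error search can be scripted rather than run one template entry at a time. The sketch below is our supplement (not the book's Excel template; the helper name `mms_L` is ours) and reproduces the calculations behind Table 17.5 using the M/M/s formulas of Sec. 17.6.

```python
# Reproducing the economic analysis of Table 17.5 for the Acme example.
from math import factorial, inf

lam, mu = 120.0, 80.0   # mean arrival and service rates (customers per hour)
Cs, Cw = 20.0, 48.0     # server cost and waiting cost (dollars per hour)

def mms_L(lam, mu, s):
    """Expected number in system, L, for the M/M/s model of Sec. 17.6."""
    rho = lam / (s * mu)
    if rho >= 1:
        return inf       # the servers cannot keep up, so L grows without bound
    r = lam / mu
    p0 = 1.0 / (sum(r**n / factorial(n) for n in range(s))
                + r**s / (factorial(s) * (1 - rho)))
    Lq = p0 * r**s * rho / (factorial(s) * (1 - rho)**2)
    return Lq + r        # L = Lq + expected number in service

for s in range(1, 6):
    L = mms_L(lam, mu, s)
    etc = Cs * s + Cw * L
    print(f"s={s}  L={L:.2f}  E(TC)=${etc:.2f}")
# E(TC) decreases through s = 3 (about $143.37 per hour) and then rises
# again for s = 4 and s = 5, so three clerks minimize the expected total cost.
```

The loop makes the stopping rule in the text concrete: compute E(TC) for consecutive s until it stops decreasing and starts increasing.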
Furthermore, s = 3 would decrease the current expected total cost for s = 2 by $61 per hour. Therefore, despite management's current drive to reduce overhead (which includes the cost of tool crib clerks), the OR team recommends that a third clerk be added to the tool crib. Note that this recommendation would decrease the utilization factor for the clerks from an already modest 0.75 all the way down to 0.5. However, because of the large improvement in the productivity of the mechanics (who are much more expensive than the clerks) through decreasing their time wasted waiting at the tool crib, management adopts the recommendation.

Other Issues

Chapter 26 on the book's website expands considerably further on the application of queueing theory, including how to deal with some other issues not considered above. For example, the analysis displayed in Fig. 17.14 and Table 17.5 assumed that the waiting cost is proportional to the amount of waiting, but this sometimes is not the case. If a company has one or two of its employees in a queueing system, this may not be very serious in terms of their lost productivity because others may be able to handle all of the available productive work. However, having additional employees in the queueing system may result in a sharp increase in lost productivity and the resulting lost profit, so the waiting cost becomes a nonlinear function of the number in the system. Similarly, the consequences to a commercial service system for making its customers wait may be minimal for short waits but much more serious for long waits. In this case, the waiting cost becomes a nonlinear function of the waiting time. Section 26.3 describes the formulation of nonlinear waiting-cost functions and then the calculation of E(WC) with such functions.

Section 26.4 discusses a decision model where the decision variables are both the number of servers and the mean service rate for the servers.
■ FIGURE 17.14 This Excel template for using economic analysis to choose the number of servers with the M/M/s model is applied here to the Acme Machine Shop example with s = 3. (For s = 3, the template reports L = 1.7368, W = 0.0145, ρ = 0.5, E(SC) = $60.00, E(WC) = $83.37, and E(TC) = $143.37.)

■ TABLE 17.5 Calculation of E(TC) for alternative s in the Acme Machine Shop example

s    ρ       L      E(SC) = Cs s    E(WC) = Cw L    E(TC) = E(SC) + E(WC)
1    1.50    ∞      $20             ∞               ∞
2    0.75    3.43   $40             $164.57         $204.57
3    0.50    1.74   $60             $83.37          $143.37
4    0.375   1.54   $80             $74.15          $154.15
5    0.30    1.51   $100            $72.41          $172.41

An interesting issue that arises here is whether it is better to have one fast server (several people working together to serve each customer rapidly) or several slow servers (several people working separately to serve different customers). Section 26.4 also presents a decision model where the decision variables are the number of service facilities and the number of servers per facility to provide service to a calling population of potential customers. Given the mean arrival rate for the entire calling population, increasing the number of facilities enables decreasing the mean arrival rate
The number of service facilities also affects how much time each customer will need to spend in traveling to and from the nearest facility. The waiting cost now needs to be a function of the total time lost by a customer by either waiting at a service facility or traveling to and from the facility. Therefore, Sec. 26.5 presents some travel-time models for determining the expected round-trip travel time for each customer. ■ 17.11 CONCLUSIONS Queueing systems are prevalent throughout society. The adequacy of these systems can have an important effect on the quality of life and productivity. Queueing theory studies queueing systems by formulating mathematical models of their operation and then using these models to derive measures of performance. This analysis provides vital information for effectively designing queueing systems that achieve an appropriate balance between the cost of providing a service and the cost associated with waiting for that service. This chapter presented the most basic models of queueing theory for which particularly useful results are available. However, many other interesting models could be considered if space permitted. In fact, several thousand research papers formulating and/or analyzing queueing models have already appeared in the technical literature, and many more are being published each year! The exponential distribution plays a fundamental role in queueing theory for representing the distribution of interarrival and service times. One reason is that interarrival times commonly have this distribution and assuming this distribution for service times often provides a reasonable approximation as well. Another reason is that queueing models based on the exponential distribution are far more tractable than any others. For example, extensive results can be obtained for queueing models based on the birth-and-death process, which requires that both interarrival times and service times have exponential distributions. 
Phase-type distributions such as the Erlang distribution, where the total time is broken down into individual phases having an exponential distribution, also are somewhat tractable. Useful analytical results have been obtained for only a relatively few queueing models making other assumptions.

Priority-discipline queueing models are useful for the common situation where some categories of customers are given priority over others for receiving service. In another common situation, customers must receive service at several different service facilities. Models for queueing networks are gaining widespread use for such situations. This is an area of especially active ongoing research.

When no tractable model that provides a reasonable representation of the queueing system under study is available, a common approach is to obtain relevant performance data by developing a computer program for simulating the operation of the system. This technique is discussed in Chap. 20. Section 17.10 briefly describes how queueing theory can be used to help design effective queueing systems and then Chap. 26 (on the book's website) expands considerably further on this subject.

■ SELECTED REFERENCES

1. Boucherie, R. J., and N. M. van Dijk (eds.): Queueing Networks: A Fundamental Approach, Springer, New York, 2011.
2. Chen, H., and D. D. Yao: Fundamentals of Queueing Networks: Performance, Asymptotics, and Optimization, Springer, New York, 2001.
3. El-Taha, M., and S. Stidham, Jr.: Sample-Path Analysis of Queueing Systems, Kluwer Academic Publishers (now Springer), Boston, 1998.
4. Gautam, N.: Analysis of Queues: Methods and Applications, CRC Press, Boca Raton, FL, 2012.
5. Gross, D., J. F. Shortle, J. M. Thompson, and C. M. Harris: Fundamentals of Queueing Theory, 4th ed., Wiley, Hoboken, NJ, 2008.
6. Hall, R. W. (ed.): Patient Flow: Reducing Delay in Healthcare Delivery, Springer, New York, 2006.
7. Hall, R. W.: Queueing Methods: For Services and Manufacturing, Prentice-Hall, Upper Saddle River, NJ, 1991.
8. Haviv, M.: Queues: A Course in Queueing Theory, Springer, New York, 2013.
9. Hillier, F. S., and M. S. Hillier: Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets, 5th ed., McGraw-Hill/Irwin, Burr Ridge, IL, 2014, Chap. 11.
10. Jain, J. L., S. G. Mohanty, and W. Bohm: A Course on Queueing Models, Chapman & Hall/CRC, Boca Raton, FL, 2007.
11. Kaczynski, W. H., L. M. Leemis, and J. H. Drew: "Transient Queueing Analysis," INFORMS Journal on Computing, 24(1): 10–28, Winter 2012.
12. Lipsky, L.: Queueing Theory: A Linear Algebraic Approach, 2nd ed., Springer, New York, 2009.
13. Little, J. D. C.: "Little's Law as Viewed on Its 50th Anniversary," Operations Research, 59(3): 536–549, May–June 2011.
14. Stidham, S., Jr.: "Analysis, Design, and Control of Queueing Systems," Operations Research, 50: 197–216, 2002.
15. Stidham, S., Jr.: Optimal Design of Queueing Systems, CRC Press, Boca Raton, FL, 2009.

Some Award-Winning Applications of Queueing Theory:
(A link to all these articles is provided on our website, www.mhhe.com/hillier.)

A1. Bleuel, W. H.: "Management Science's Impact on Service Strategy," Interfaces, 5(1, Part 2): 4–12, November 1975.
A2. Brigandi, A. J., D. R. Dargon, M. J. Sheehan, and T. Spencer III: "AT&T's Call Processing Simulator (CAPS) Operational Design for Inbound Call Centers," Interfaces, 24(1): 6–28, January–February 1994.
A3. Brown, S. M., T. Hanschke, I. Meents, B. R. Wheeler, and H. Zisgen: "Queueing Model Improves IBM's Semiconductor Capacity and Lead-Time Management," Interfaces, 40(5): 397–407, September–October 2010.
A4. Burman, M., S. B. Gershwin, and C. Suyematsu: "Hewlett-Packard Uses Operations Research to Improve the Design of a Printer Production Line," Interfaces, 28(1): 24–36, January–February 1998.
A5. Quinn, P., B. Andrews, and H. Parsons: "Allocating Telecommunications Resources at L.L. Bean, Inc.," Interfaces, 21(1): 75–91, January–February 1991.
A6. Ramaswami, V., D. Poole, S. Ahn, S. Byers, and A. Kaplan: "Ensuring Access to Emergency Services in the Presence of Long Internet Dial-Up Calls," Interfaces, 35(5): 411–422, September–October 2005.
A7. Samuelson, D. A.: "Predictive Dialing for Outbound Telephone Call Centers," Interfaces, 29(5): 66–81, September–October 1999.
A8. Swersy, A. J., L. Goldring, and E. D. Geyer, Sr.: "Improving Fire Department Productivity: Merging Fire and Emergency Medical Units in New Haven," Interfaces, 23(1): 109–129, January–February 1993.
A9. Vandaele, N. J., M. R. Lambrecht, N. De Schuyter, and R. Cremmery: "Spicer Off-Highway Products Division—Brugge Improves Its Lead-Time and Scheduling Performance," Interfaces, 30(1): 83–95, January–February 2000.

■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE (www.mhhe.com/hillier)

Solved Examples:
Examples for Chapter 17

An Interactive Procedure in IOR Tutorial:
Jackson Network

"Ch. 17—Queueing Theory" Excel Files:
Template for M/M/s Model
Template for Finite Queue Variation of M/M/s Model
Template for Finite Calling Population Variation of M/M/s Model
Template for M/G/1 Model
Template for M/D/1 Model
Template for M/Ek/1 Model
Template for Nonpreemptive Priorities Model
Template for Preemptive Priorities Model
Template for M/M/s Economic Analysis of Number of Servers

"Ch. 17—Queueing Theory" LINGO File for Selected Examples

Glossary for Chapter 17

See Appendix 1 for documentation of the software.

■ PROBLEMS

To the left of each of the following problems (or their parts), we have inserted a T whenever one of the templates listed above can be helpful. An asterisk on the problem number indicates that at least a partial answer is given in the back of the book.

17.2-1.* Consider a typical barber shop. Demonstrate that it is a queueing system by describing its components.
17.2-2.* Newell and Jeff are the two barbers in a barber shop they own and operate. They provide two chairs for customers who are waiting to begin a haircut, so the number of customers in the shop varies between 0 and 4. For n = 0, 1, 2, 3, 4, the probability Pn that exactly n customers are in the shop is P0 = 1/16, P1 = 4/16, P2 = 6/16, P3 = 4/16, P4 = 1/16.
(a) Calculate L. How would you describe the meaning of L to Newell and Jeff?
(b) For each of the possible values of the number of customers in the queueing system, specify how many customers are in the queue. Then calculate Lq. How would you describe the meaning of Lq to Newell and Jeff?
(c) Determine the expected number of customers being served.
(d) Given that an average of 4 customers per hour arrive and stay to receive a haircut, determine W and Wq. Describe these two quantities in terms meaningful to Newell and Jeff.
(e) Given that Newell and Jeff are equally fast in giving haircuts, what is the average duration of a haircut?

17.2-3. Mom-and-Pop's Grocery Store has a small adjacent parking lot with three parking spaces reserved for the store's customers. During store hours, cars enter the lot and use one of the spaces at a mean rate of 2 per hour. For n = 0, 1, 2, 3, the probability Pn that exactly n spaces currently are being used is P0 = 0.2, P1 = 0.3, P2 = 0.3, P3 = 0.2.
(a) Describe how this parking lot can be interpreted as being a queueing system. In particular, identify the customers and the servers. What is the service being provided? What constitutes a service time? What is the queue capacity?
(b) Determine the basic measures of performance—L, Lq, W, and Wq—for this queueing system.
(c) Use the results from part (b) to determine the average length of time that a car remains in a parking space.

17.2-4. For each of the following statements about the queue in a queueing system, label the statement as true or false and then justify your answer by referring to a specific statement in the chapter.
(a) The queue is where customers wait in the queueing system until their service is completed.
(b) Queueing models conventionally assume that the queue can hold only a limited number of customers.
(c) The most common queue discipline is first-come-first-served.

17.2-5. Midtown Bank always has two tellers on duty. Customers arrive to receive service from a teller at a mean rate of 40 per hour. A teller requires an average of 2 minutes to serve a customer. When both tellers are busy, an arriving customer joins a single line to wait for service. Experience has shown that customers wait in line an average of 1 minute before service begins.
(a) Describe why this is a queueing system.
(b) Determine the basic measures of performance—Wq, W, Lq, and L—for this queueing system. (Hint: We don't know the probability distributions of interarrival times and service times for this queueing system, so you will need to use the relationships between these measures of performance to help answer the question.)

See also the end of Chap. 26 (on the book's website) for additional problems involving the application of queueing theory.

17.2-6. Explain why the utilization factor ρ for the server in a single-server queueing system must equal 1 − P0, where P0 is the probability of having 0 customers in the system.

17.2-7. You are given two queueing systems, Q1 and Q2. The mean arrival rate, the mean service rate per busy server, and the steady-state expected number of customers for Q2 are twice the corresponding values for Q1. Let Wi = the steady-state expected waiting time in the system for Qi, for i = 1, 2. Determine W2/W1.

17.2-8. Consider a single-server queueing system with any service-time distribution and any distribution of interarrival times (the GI/G/1 model). Use only basic definitions and the relationships given in Sec. 17.2 to verify the following general relationships:
(a) L = Lq + (1 − P0).
(b) L = Lq + ρ.
(c) P0 = 1 − ρ.

17.2-9.
Show that

L = Σ_{n=0}^{s−1} n Pn + Lq + s(1 − Σ_{n=0}^{s−1} Pn)

by using the statistical definitions of L and Lq in terms of the Pn.

17.3-1. Identify the customers and the servers in the queueing system in each of the following situations:
(a) The checkout stand in a grocery store.
(b) A fire station.
(c) The tollbooth for a bridge.
(d) A bicycle repair shop.
(e) A shipping dock.
(f) A group of semiautomatic machines assigned to one operator.
(g) The materials-handling equipment in a factory area.
(h) A plumbing shop.
(i) A job shop producing custom orders.
(j) A secretarial typing pool.

17.4-1. Suppose that a queueing system has two servers, an exponential interarrival time distribution with a mean of 2 hours, and an exponential service-time distribution with a mean of 2 hours for each server. Furthermore, a customer has just arrived at 12:00 noon.
(a) What is the probability that the next arrival will come (i) before 1:00 P.M., (ii) between 1:00 and 2:00 P.M., and (iii) after 2:00 P.M.?
(b) Suppose that no additional customers arrive before 1:00 P.M. Now what is the probability that the next arrival will come between 1:00 and 2:00 P.M.?
(c) What is the probability that the number of arrivals between 1:00 and 2:00 P.M. will be (i) 0, (ii) 1, and (iii) 2 or more?
(d) Suppose that both servers are serving customers at 1:00 P.M. What is the probability that neither customer will have service completed (i) before 2:00 P.M., (ii) before 1:10 P.M., and (iii) before 1:01 P.M.?

17.4-2.* The jobs to be performed on a particular machine arrive according to a Poisson input process with a mean rate of two per hour. Suppose that the machine breaks down and will require 1 hour to be repaired. What is the probability that the number of new jobs that will arrive during this time is (a) 0, (b) 2, and (c) 5 or more?

17.4-3. The time required by a mechanic to repair a machine has an exponential distribution with a mean of 4 hours. However, a special tool would reduce this mean to 2 hours.
If the mechanic repairs a machine in less than 2 hours, he is paid $100; otherwise, he is paid $80. Determine the mechanic's expected increase in pay per machine repaired if he uses the special tool.

17.4-4. A three-server queueing system has a controlled arrival process that provides customers in time to keep the servers continuously busy. Service times have an exponential distribution with mean 0.5. You observe the queueing system starting up with all three servers beginning service at time t = 0. You then note that the first completion occurs at time t = 1. Given this information, determine the expected amount of time after t = 1 until the next service completion occurs.

17.4-5. A queueing system has three servers with expected service times of 20 minutes, 15 minutes, and 10 minutes. The service times have an exponential distribution. Each server has been busy with a current customer for 5 minutes. Determine the expected remaining time until the next service completion.

17.4-6. Consider a queueing system with two types of customers. Type 1 customers arrive according to a Poisson process with a mean rate of 5 per hour. Type 2 customers also arrive according to a Poisson process with a mean rate of 5 per hour. The system has two servers, both of which serve both types of customers. For both types, service times have an exponential distribution with a mean of 10 minutes. Service is provided on a first-come-first-served basis. (a) What is the probability distribution (including its mean) of the time between consecutive arrivals of customers of any type? (b) When a particular type 2 customer arrives, she finds two type 1 customers there in the process of being served but no other customers in the system. What is the probability distribution (including its mean) of this type 2 customer's waiting time in the queue?

17.4-7.
Consider a two-server queueing system where all service times are independent and identically distributed according to an exponential distribution with a mean of 10 minutes. Service is provided on a first-come-first-served basis. When a particular customer arrives, he finds that both servers are busy and no one is waiting in the queue. (a) What is the probability distribution (including its mean and standard deviation) of this customer's waiting time in the queue? (b) Determine the expected value and standard deviation of this customer's waiting time in the system. (c) Suppose that this customer still is waiting in the queue 5 minutes after his arrival. Given this information, how does this change the expected value and the standard deviation of this customer's total waiting time in the system from the answers obtained in part (b)?

CHAPTER 17 QUEUEING THEORY

17.4-8. For each of the following statements regarding service times modeled by the exponential distribution, label the statement as true or false and then justify your answer by referring to specific statements in the chapter. (a) The expected value and variance of the service times are always equal. (b) The exponential distribution always provides a good approximation of the actual service-time distribution when each customer requires the same service operations. (c) At an s-server facility, s > 1, with exactly s customers already in the system, a new arrival would have an expected waiting time before entering service of 1/μ time units, where μ is the mean service rate for each busy server.

17.4-9. As for Property 3 of the exponential distribution, let T1, T2, . . . , Tn be independent exponential random variables with parameters α1, α2, . . . , αn, respectively, and let U = min{T1, T2, . . . , Tn}. Show that the probability that a particular random variable Tj will turn out to be the smallest of the n random variables is

    P{Tj = U} = αj / Σ_{i=1}^{n} αi,    for j = 1, 2, . . . , n.
(Hint: P{Tj = U} = ∫_0^∞ P{Ti > Tj for all i ≠ j | Tj = t} αj e^{−αj·t} dt.)

17.5-1. Consider the birth-and-death process with all μn = 2 (n = 1, 2, . . .), λ0 = 3, λ1 = 2, λ2 = 1, and λn = 0 for n = 3, 4, . . . . (a) Display the rate diagram. (b) Calculate P0, P1, P2, P3, and Pn for n = 4, 5, . . . . (c) Calculate L, Lq, W, and Wq.

17.5-2. Consider a birth-and-death process with just three attainable states (0, 1, and 2), for which the steady-state probabilities are P0, P1, and P2, respectively. The birth-and-death rates are summarized in the following table:

    State   Birth Rate   Death Rate
      0         1            —
      1         1            2
      2         0            2

(a) Construct the rate diagram for this birth-and-death process. (b) Develop the balance equations. (c) Solve these equations to find P0, P1, and P2. (d) Use the general formulas for the birth-and-death process to calculate P0, P1, and P2. Also calculate L, Lq, W, and Wq.

17.5-3. Consider the birth-and-death process with the following mean rates. The birth rates are λ0 = 2, λ1 = 3, λ2 = 2, λ3 = 1, and λn = 0 for n > 3. The death rates are μ1 = 3, μ2 = 4, μ3 = 1, and μn = 2 for n > 4. (a) Construct the rate diagram for this birth-and-death process. (b) Develop the balance equations. (c) Solve these equations to find the steady-state probability distribution P0, P1, . . . . (d) Use the general formulas for the birth-and-death process to calculate P0, P1, . . . . Also calculate L, Lq, W, and Wq.

17.5-4. Consider the birth-and-death process with all λn = 2 (n = 0, 1, . . .), μ1 = 2, and μn = 4 for n = 2, 3, . . . . (a) Display the rate diagram. (b) Calculate P0 and P1. Then give a general expression for Pn in terms of P0 for n = 2, 3, . . . . (c) Consider a queueing system with two servers that fits this process. What is the mean arrival rate for this queueing system? What is the mean service rate for each server when it is busy serving customers?

17.5-5.* A service station has one gasoline pump. Cars wanting gasoline arrive according to a Poisson process at a mean rate of 15 per hour.
However, if the pump already is being used, these potential customers may balk (drive on to another service station). In particular, if there are n cars already at the service station, the probability that an arriving potential customer will balk is n/3 for n = 1, 2, 3. The time required to service a car has an exponential distribution with a mean of 4 minutes. (a) Construct the rate diagram for this queueing system. (b) Develop the balance equations. (c) Solve these equations to find the steady-state probability distribution of the number of cars at the station. Verify that this solution is the same as that given by the general solution for the birth-and-death process. (d) Find the expected waiting time (including service) for those cars that stay.

17.5-6. A maintenance person has the job of keeping two machines in working order. The amount of time that a machine works before breaking down has an exponential distribution with a mean of 10 hours. The time then spent by the maintenance person to repair the machine has an exponential distribution with a mean of 8 hours. (a) Show that this process fits the birth-and-death process by defining the states, specifying the values of the λn and μn, and then constructing the rate diagram. (b) Calculate the Pn. (c) Calculate L, Lq, W, and Wq. (d) Determine the proportion of time that the maintenance person is busy. (e) Determine the proportion of time that any given machine is working.

17.5-7. Consider a single-server queueing system where interarrival times have an exponential distribution with parameter λ and service times have an exponential distribution with parameter μ. In addition, customers renege (leave the queueing system without being served) if their waiting time in the queue grows too large. In particular, assume that the time each customer is willing to wait in the queue before reneging has an exponential distribution with a mean of 1/θ. (a) Construct the rate diagram for this queueing system.
(b) Develop the balance equations.

17.5-8.* A certain small grocery store has a single checkout stand with a full-time cashier. Customers arrive at the stand "randomly" (i.e., a Poisson input process) at a mean rate of 30 per hour. When there is only one customer at the stand, she is processed by the cashier alone, with an expected service time of 1.5 minutes. However, the stock boy has been given standard instructions that whenever there is more than one customer at the stand, he is to help the cashier by bagging the groceries. This help reduces the expected time required to process a customer to 1 minute. In both cases, the service-time distribution is exponential. (a) Construct the rate diagram for this queueing system. (b) What is the steady-state probability distribution of the number of customers at the checkout stand? (c) Derive L for this system. (Hint: Refer to the derivation of L for the M/M/1 model at the beginning of Sec. 17.6.) Use this information to determine Lq, W, and Wq.

17.5-9. A department has one word-processing operator. Documents produced in the department are delivered for word processing according to a Poisson process with an expected interarrival time of 20 minutes. When the operator has just one document to process, the expected processing time is 15 minutes. When she has more than one document, then editing assistance that is available reduces the expected processing time for each document to 10 minutes. In both cases, the processing times have an exponential distribution. (a) Construct the rate diagram for this queueing system. (b) Find the steady-state distribution of the number of documents that the operator has received but not yet completed. (c) Derive L for this system. (Hint: Refer to the derivation of L for the M/M/1 model at the beginning of Sec. 17.6.) Use this information to determine Lq, W, and Wq.

17.5-10.
Customers arrive at a queueing system according to a Poisson process with a mean arrival rate of 2 customers per minute. The service time has an exponential distribution with a mean of 1 minute. An unlimited number of servers are available as needed, so customers never wait for service to begin. Calculate the steady-state probability that exactly 1 customer is in the system.

17.5-11. Suppose that a single-server queueing system fits all the assumptions of the birth-and-death process except that customers always arrive in pairs. The mean arrival rate is 2 pairs per hour (4 customers per hour) and the mean service rate (when the server is busy) is 5 customers per hour. (a) Construct the rate diagram for this queueing system. (b) Develop the balance equations. (c) For comparison purposes, display the rate diagram for the corresponding queueing system that completely fits the birth-and-death process, i.e., where customers arrive individually at a mean rate of 4 per hour.

17.5-12. Consider a single-server queueing system with a finite queue that can hold a maximum of 2 customers excluding any being served. The server can provide batch service to 2 customers simultaneously, where the service time has an exponential distribution with a mean of 1 unit of time regardless of the number being served. Whenever the queue is not full, customers arrive individually according to a Poisson process at a mean rate of 1 per unit of time. (a) Assume that the server must serve 2 customers simultaneously. Thus, if the server is idle when only 1 customer is in the system, the server must wait for another arrival before beginning service. Formulate the queueing model in terms of transitions that only involve exponential distributions by defining the appropriate states and then constructing the rate diagram. Give the balance equations, but do not solve further.
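As an aside, the Sec. 17.5 problems above repeatedly ask for the steady-state probabilities of a birth-and-death process, and once the rate diagram is drawn the general solution is mechanical: Cn = (λ0·λ1···λ_{n−1})/(μ1·μ2···μn) and Pn = Cn·P0, with P0 fixed by normalization. A minimal sketch of that computation for a finite chain (the rates used here are hypothetical illustration values, not taken from any problem in this section):

```python
def birth_death_steady_state(lam, mu):
    """Steady-state distribution of a finite birth-and-death chain.

    lam[n] is the birth rate in state n (n = 0, ..., K-1) and mu[n] is the
    death rate in state n+1, so the chain has states 0, ..., K.  Builds
    C_n = (lam_0 ... lam_{n-1}) / (mu_1 ... mu_n), then normalizes so that
    P_n = C_n * P_0 sums to 1.
    """
    C = [1.0]
    for birth, death in zip(lam, mu):
        C.append(C[-1] * birth / death)
    total = sum(C)
    return [c / total for c in C]

# Hypothetical rates: lambda_0 = 2, lambda_1 = 1; mu_1 = mu_2 = 3.
P = birth_death_steady_state([2, 1], [3, 3])
L = sum(n * p for n, p in enumerate(P))  # expected number in system
```

The same loop handles any of the rate diagrams asked for above once the λn and μn are read off the diagram; truncating at a large K approximates the infinite-state cases.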
(b) Now assume that the batch size for a service is 2 only if 2 customers are in the queue when the server finishes the preceding service. Thus, if the server is idle when only 1 customer is in the system, the server must serve this single customer, and any subsequent arrivals must wait in the queue until service is completed for this customer. Formulate the resulting queueing model in terms of transitions that only involve exponential distributions by defining the appropriate states and then constructing the rate diagram. Give the balance equations, but do not solve further.

17.5-13. Consider a queueing system that has two classes of customers, two clerks providing service, and no queue. Potential customers from each class arrive according to a Poisson process, with a mean arrival rate of 10 customers per hour for class 1 and 5 customers per hour for class 2, but these arrivals are lost to the system if they cannot immediately enter service. Each customer of class 1 that enters the system will receive service from either one of the clerks that is free, where the service times have an exponential distribution with a mean of 5 minutes. Each customer of class 2 that enters the system requires the simultaneous use of both clerks (the two clerks work together as a single server), where the service times have an exponential distribution with a mean of 5 minutes. Thus, an arriving customer of this kind would be lost to the system unless both clerks are free to begin service immediately. (a) Formulate the queueing model in terms of transitions that only involve exponential distributions by defining the appropriate states and constructing the rate diagram. (b) Now describe how the formulation in part (a) can be fitted into the format of the birth-and-death process. (c) Use the results for the birth-and-death process to calculate the steady-state joint distribution of the number of customers of each class in the system.
(d) For each of the two classes of customers, what is the expected fraction of arrivals who are unable to enter the system?

17.6-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 17.6. Briefly describe how queueing theory was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

17.6-2.* The 4M Company has a single turret lathe as a key work center on its factory floor. Jobs arrive at this work center according to a Poisson process at a mean rate of 2 per day. The processing time to perform each job has an exponential distribution with a mean of 1/4 day. Because the jobs are bulky, those not being worked on are currently being stored in a room some distance from the machine. However, to save time in fetching the jobs, the production manager is proposing to add enough in-process storage space next to the turret lathe to accommodate 3 jobs in addition to the one being processed. (Excess jobs will continue to be stored temporarily in the distant room.) Under this proposal, what proportion of the time will this storage space next to the turret lathe be adequate to accommodate all waiting jobs? (a) Use available formulas to calculate your answer. T (b) Use the corresponding Excel template to obtain the probabilities needed to answer the question.

17.6-3. Customers arrive at a single-server queueing system according to a Poisson process at a mean rate of 10 per hour. If the server works continuously, the number of customers that can be served in an hour has a Poisson distribution with a mean of 15. Determine the proportion of time during which no one is waiting to be served.

17.6-4. Consider the M/M/1 model, with λ < μ. (a) Determine the steady-state probability that a customer's actual waiting time in the system is longer than the expected waiting time in the system, i.e., P{𝒲 > W}.
(b) Determine the steady-state probability that a customer's actual waiting time in the queue is longer than the expected waiting time in the queue, i.e., P{𝒲q > Wq}.

17.6-5. Verify the following relationships for an M/M/1 queueing system:

    λ = (1 − P0)² / (Wq·P0),    μ = (1 − P0) / (Wq·P0).

17.6-6. It is necessary to determine how much in-process storage space to allocate to a particular work center in a new factory. Jobs arrive at this work center according to a Poisson process with a mean rate of 3 per hour, and the time required to perform the necessary work has an exponential distribution with a mean of 0.5 hour. Whenever the waiting jobs require more in-process storage space than has been allocated, the excess jobs are stored temporarily in a less convenient location. If each job requires 1 square foot of floor space while it is in in-process storage at the work center, how much space must be provided to accommodate all waiting jobs (a) 50 percent of the time, (b) 90 percent of the time, and (c) 99 percent of the time? Derive an analytical expression to answer these three questions. (Hint: The sum of a geometric series is Σ_{n=0}^{N} x^n = (1 − x^{N+1}) / (1 − x).)

17.6-7. Consider the following statements about an M/M/1 queueing system and its utilization factor ρ. Label each of the statements as true or false, and then justify your answer. (a) The probability that a customer has to wait before service begins is proportional to ρ. (b) The expected number of customers in the system is proportional to ρ. (c) If ρ has been increased from ρ = 0.9 to ρ = 0.99, the effect of any further increase in ρ on L, Lq, W, and Wq will be relatively small as long as ρ < 1.

17.6-8. Customers arrive at a single-server queueing system in accordance with a Poisson process with an expected interarrival time of 25 minutes. Service times have an exponential distribution with a mean of 30 minutes. Label each of the following statements about this system as true or false, and then justify your answer.
(a) The server definitely will be busy forever after the first customer arrives. (b) The queue will grow without bound. (c) If a second server with the same service-time distribution is added, the system can reach a steady-state condition.

17.6-9. For each of the following statements about an M/M/1 queueing system, label the statement as true or false and then justify your answer by referring to specific statements in the chapter. (a) The waiting time in the system has an exponential distribution. (b) The waiting time in the queue has an exponential distribution. (c) The conditional waiting time in the system, given the number of customers already in the system, has an Erlang (gamma) distribution.

17.6-10. The Friendly Neighbor Grocery Store has a single checkout stand with a full-time cashier. Customers arrive randomly at the stand at a mean rate of 30 per hour. The service-time distribution is exponential, with a mean of 1.5 minutes. This situation has resulted in occasional long lines and complaints from customers. Therefore, because there is no room for a second checkout stand, the manager is considering the alternative of hiring another person to help the cashier by bagging the groceries. This help would reduce the expected time required to process a customer to 1 minute, but the distribution still would be exponential. The manager would like to have the percentage of time that there are more than two customers at the checkout stand down below 25 percent. She also would like to have no more than 5 percent of the customers needing to wait at least 5 minutes before beginning service, or at least 7 minutes before finishing service. (a) Use the formulas for the M/M/1 model to calculate L, W, Wq, Lq, P0, P1, and P2 for the current mode of operation. What is the probability of having more than two customers at the checkout stand? T (b) Use the Excel template for this model to check your answers in part (a).
Also find the probability that the waiting time before beginning service exceeds 5 minutes, and the probability that the waiting time before finishing service exceeds 7 minutes. (c) Repeat part (a) for the alternative being considered by the manager. (d) Repeat part (b) for this alternative. (e) Which approach should the manager use to satisfy her criteria as closely as possible?

T 17.6-11. The Centerville International Airport has two runways, one used exclusively for takeoffs and the other exclusively for landings. Airplanes arrive in the Centerville air space to request landing instructions according to a Poisson process at a mean rate of 10 per hour. The time required for an airplane to land after receiving clearance to do so has an exponential distribution with a mean of 3 minutes, and this process must be completed before giving clearance to do so to another airplane. Airplanes awaiting clearance must circle the airport. The Federal Aviation Administration has a number of criteria regarding the safe level of congestion of airplanes waiting to land. These criteria depend on a number of factors regarding the airport involved, such as the number of runways available for landing. For Centerville, the criteria are (1) the average number of airplanes waiting to receive clearance to land should not exceed 1, (2) 95 percent of the time, the actual number of airplanes waiting to receive clearance to land should not exceed 4, and (3) for 99 percent of the airplanes, the amount of time spent circling the airport before receiving clearance to land should not exceed 30 minutes (since exceeding this amount of time often would require rerouting the plane to another airport for an emergency landing before its fuel runs out). (a) Evaluate how well these criteria are currently being satisfied. (b) A major airline is considering adding this airport as one of its hubs. This would increase the mean arrival rate to 15 airplanes per hour.
Evaluate how well the above criteria would be satisfied if this happens. (c) To attract additional business [including the major airline mentioned in part (b)], airport management is considering adding a second runway for landings. It is estimated that this eventually would increase the mean arrival rate to 25 airplanes per hour. Evaluate how well the above criteria would be satisfied if this happens.

T 17.6-12. The Security & Trust Bank employs 4 tellers to serve its customers. Customers arrive according to a Poisson process at a mean rate of 2 per minute. However, business is growing and management projects that the mean arrival rate will be 3 per minute a year from now. The transaction time between the teller and customer has an exponential distribution with a mean of 1 minute. Management has established the following guidelines for a satisfactory level of service to customers. The average number of customers waiting in line to begin service should not exceed 1. At least 95 percent of the time, the number of customers waiting in line should not exceed 5. For at least 95 percent of the customers, the time spent in line waiting to begin service should not exceed 5 minutes. (a) Use the M/M/s model to determine how well these guidelines are currently being satisfied. (b) Evaluate how well the guidelines will be satisfied a year from now if no change is made in the number of tellers. (c) Determine how many tellers will be needed a year from now to completely satisfy these guidelines.

17.6-13. Consider the M/M/s model. T (a) Suppose there is one server and the expected service time is exactly 1 minute. Compare L for the cases where the mean arrival rate is 0.5, 0.9, and 0.99 customers per minute, respectively. Do the same for Lq, W, Wq, and P{𝒲 > 5}. What conclusions do you draw about the impact of increasing the utilization factor ρ from small values (e.g., ρ = 0.5) to fairly large values (e.g., ρ = 0.9) and then to even larger values very close to 1 (e.g., ρ = 0.99)?
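As an aside, a comparison of this kind can be tabulated directly from the M/M/1 formulas of Sec. 17.6 (L = ρ/(1 − ρ), Lq = ρ²/(1 − ρ), W = 1/(μ − λ), Wq = λ/[μ(μ − λ)], and P{𝒲 > t} = e^{−μ(1−ρ)t}). A minimal sketch, with t = 5 chosen to match the P{𝒲 > 5} comparison asked for above:

```python
import math

def mm1_measures(lam, mu, t):
    """Steady-state measures for the M/M/1 model (Sec. 17.6); requires lam < mu."""
    rho = lam / mu
    return {
        "L":  rho / (1 - rho),
        "Lq": rho ** 2 / (1 - rho),
        "W":  1 / (mu - lam),
        "Wq": lam / (mu * (mu - lam)),
        "P(W>t)": math.exp(-mu * (1 - rho) * t),  # P{waiting time in system > t}
    }

# Expected service time 1 minute (mu = 1); compare lam = 0.5, 0.9, 0.99.
table = {lam: mm1_measures(lam, 1.0, t=5.0) for lam in (0.5, 0.9, 0.99)}
```

Printing `table` makes the nonlinear blow-up of every measure as ρ → 1 immediately visible, which is the qualitative conclusion the comparison is probing for.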
(b) Now suppose there are two servers and the expected service time is exactly 2 minutes. Follow the instructions for part (a).

17.6-14. Consider the M/M/s model with a mean arrival rate of 10 customers per hour and an expected service time of 5 minutes. Use the Excel template for this model to obtain and print out the various measures of performance (with t = 10 and t = 0, respectively, for the two waiting time probabilities) when the number of servers is 1, 2, 3, 4, and 5. Then, for each of the following possible criteria for a satisfactory level of service (where the unit of time is 1 minute), use the printed results to determine how many servers are needed to satisfy this criterion. (a) Lq ≤ 0.25, (b) L ≤ 0.9, (c) Wq ≤ 0.1, (d) W ≤ 6, (e) P{𝒲q > 0} ≤ 0.01, (f) P{𝒲 > 10} ≤ 0.2, T (g) Σ_{n=0}^{s} Pn ≥ 0.95.

17.6-15. A gas station with only one gas pump employs the following policy: If a customer has to wait, the price is $3.50 per gallon; if she does not have to wait, the price is $4.00 per gallon. Customers arrive according to a Poisson process with a mean rate of 20 per hour. Service times at the pump have an exponential distribution with a mean of 2 minutes. Arriving customers always wait until they can eventually buy gasoline. Determine the expected price of gasoline per gallon.

17.6-16. You are given an M/M/1 queueing system with mean arrival rate λ and mean service rate μ. An arriving customer receives n dollars if n customers are already in the system. Determine the expected cost in dollars per customer.

17.6-17. Section 17.6 gives the following equations for the M/M/1 model:

    (1) P{𝒲 > t} = Σ_{n=0}^{∞} Pn·P{S_{n+1} > t}.
    (2) P{𝒲 > t} = e^{−μ(1−ρ)t}.

Show that Eq. (1) reduces algebraically to Eq. (2). (Hint: Use differentiation, algebra, and integration.)

17.6-18. Derive Wq directly for the following cases by developing and reducing an expression analogous to Eq. (1) in Prob. 17.6-17.
(Hint: Use the conditional expected waiting time in the queue given that a random arrival finds n customers already in the system.) (a) The M/M/1 model. (b) The M/M/s model.

17.6-19. Consider an M/M/2 queueing system with λ = 4 and μ = 3. Determine the mean rate at which service completions occur during the periods when no customers are waiting in the queue.

T 17.6-20. You are given an M/M/2 queueing system with λ = 4 per hour and μ = 6 per hour. Determine the probability that an arriving customer will wait more than 30 minutes in the queue, given that at least 2 customers are already in the system.

T 17.6-21.* In the Blue Chip Life Insurance Company, the deposit and withdrawal functions associated with a certain investment product are separated between two clerks, Clara and Clarence. Deposit slips arrive randomly (a Poisson process) at Clara's desk at a mean rate of 16 per hour. Withdrawal slips arrive randomly (a Poisson process) at Clarence's desk at a mean rate of 14 per hour. The time required to process either transaction has an exponential distribution with a mean of 3 minutes. To reduce the expected waiting time in the system for both deposit slips and withdrawal slips, the actuarial department has made the following recommendations: (1) Train each clerk to handle both deposits and withdrawals, and (2) put both deposit and withdrawal slips into a single queue that is accessed by both clerks. (a) Determine the expected waiting time in the system under current procedures for each type of slip. Then combine these results to calculate the expected waiting time in the system for a random arrival of either type of slip. T (b) If the recommendations are adopted, determine the expected waiting time in the system for arriving slips. T (c) Now suppose that adopting the recommendations would result in a slight increase in the expected processing time.
Use the Excel template for the M/M/s model to determine by trial and error the expected processing time (within 0.001 hour) that would cause the expected waiting time in the system for a random arrival to be essentially the same under current procedures and under the recommendations.

17.6-22. People's Software Company has just set up a call center to provide technical assistance on its new software package. Two technical representatives are taking the calls, where the time required by either representative to answer a customer's questions has an exponential distribution with a mean of 8 minutes. Calls are arriving according to a Poisson process at a mean rate of 10 per hour. By next year, the mean arrival rate of calls is expected to decline to 5 per hour, so the plan is to reduce the number of technical representatives to one then. T (a) Assuming that μ will continue to be 7.5 calls per hour for next year's queueing system, determine L, Lq, W, and Wq for both the current system and next year's system. For each of these four measures of performance, which system yields the smaller value? (b) Now assume that μ will be adjustable when the number of technical representatives is reduced to one. Solve algebraically for the value of μ that would yield the same value of W as for the current system. (c) Repeat part (b) with Wq instead of W.

17.6-23. Consider a generalization of the M/M/1 model where the server needs to "warm up" at the beginning of a busy period, and so serves the first customer of a busy period at a slower rate than other customers. In particular, if an arriving customer finds the server idle, the customer experiences a service time that has an exponential distribution with parameter μ1. However, if an arriving customer finds the server busy, that customer joins the queue and subsequently experiences a service time that has an exponential distribution with parameter μ2, where μ1 < μ2. Customers arrive according to a Poisson process with mean rate λ.
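Because this warm-up model involves only exponential transitions, a crude discrete-event simulation gives a useful sanity check on whatever balance-equation solution is obtained for it. The sketch below is a simulation, not the analytic solution the problem asks for, and the rates in the sample call are hypothetical illustration values:

```python
import random

def simulate_warmup_mm1(lam, mu1, mu2, horizon=100000.0, seed=1):
    """Event-driven simulation of the warm-up M/M/1 variant: a customer who
    finds the server idle is served at rate mu1, all others at rate mu2.
    Returns the time-average number in the system (an estimate of L)."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0          # clock, number in system, integral of n dt
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    while t < horizon:
        if next_arrival < next_departure:
            area += n * (next_arrival - t)
            t = next_arrival
            if n == 0:                # arrival finds server idle: slow warm-up service
                next_departure = t + rng.expovariate(mu1)
            n += 1
            next_arrival = t + rng.expovariate(lam)
        else:
            area += n * (next_departure - t)
            t = next_departure
            n -= 1
            if n > 0:                 # next customer in queue starts at the fast rate
                next_departure = t + rng.expovariate(mu2)
            else:
                next_departure = float("inf")
    return area / t

# Hypothetical rates: lam = 1, mu1 = 2 (warm-up), mu2 = 3.
L_est = simulate_warmup_mm1(1.0, 2.0, 3.0)
```

Setting mu1 = mu2 reduces the model to an ordinary M/M/1 system, so the estimate can be checked against L = ρ/(1 − ρ) before trusting it on the warm-up case.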
(a) Formulate this model in terms of transitions that only involve exponential distributions by defining the appropriate states and constructing the rate diagram accordingly. (b) Develop the balance equations. (c) Suppose that numerical values are specified for μ1, μ2, and λ, and that λ < μ2 (so that a steady-state distribution exists). Since this model has an infinite number of states, the steady-state distribution is the simultaneous solution of an infinite number of balance equations (plus the equation specifying that the sum of the probabilities equals 1). Suppose that you are unable to obtain this solution analytically, so you wish to use a computer to solve the model numerically. Considering that it is impossible to solve an infinite number of equations numerically, briefly describe what still can be done with these equations to obtain an approximation of the steady-state distribution. Under what circumstances will this approximation be essentially exact? (d) Given that the steady-state distribution has been obtained, give explicit expressions for calculating L, Lq, W, and Wq. (e) Given this steady-state distribution, develop an expression for P{𝒲 > t} that is analogous to Eq. (1) in Prob. 17.6-17.

17.6-24. For each of the following models, write the balance equations and show that they are satisfied by the solution given in Sec. 17.6 for the steady-state distribution of the number of customers in the system. (a) The M/M/1 model. (b) The finite queue variation of the M/M/1 model, with K = 2. (c) The finite calling population variation of the M/M/1 model, with N = 2.

T 17.6-25. Consider a telephone system with three lines. Calls arrive according to a Poisson process at a mean rate of 6 per hour. The duration of each call has an exponential distribution with a mean of 15 minutes. If all lines are busy, calls will be put on hold until a line becomes available.
(a) Print out the measures of performance provided by the Excel template for this queueing system (with t = 1 hour and t = 0, respectively, for the two waiting time probabilities). (b) Use the printed result giving P{𝒲q > 0} to identify the steady-state probability that a call will be answered immediately (not put on hold). Then verify this probability by using the printed results for the Pn. (c) Use the printed results to identify the steady-state probability distribution of the number of calls on hold. (d) Print out the new measures of performance if arriving calls are lost whenever all lines are busy. Use these results to identify the steady-state probability that an arriving call is lost.

17.6-26.* Janet is planning to open a small car-wash operation, and she must decide how much space to provide for waiting cars. Janet estimates that customers would arrive randomly (i.e., a Poisson input process) with a mean rate of 1 every 4 minutes, unless the waiting area is full, in which case the arriving customers would take their cars elsewhere. The time that can be attributed to washing one car has an exponential distribution with a mean of 3 minutes. Compare the expected fraction of potential customers that will be lost because of inadequate waiting space if (a) 0 spaces (not including the car being washed), (b) 2 spaces, and (c) 4 spaces were provided.

17.6-27. Consider the finite queue variation of the M/M/s model. Derive the expression for Lq given in Sec. 17.6 for this model.

17.6-28. For the finite queue variation of the M/M/1 model, develop an expression analogous to Eq. (1) in Prob. 17.6-17 for the following probabilities: (a) P{𝒲 > t}. (b) P{𝒲q > t}. [Hint: Arrivals can occur only when the system is not full, so the probability that a random arrival finds n customers already there is Pn/(1 − PK).]

17.6-29.
George is planning to open a drive-through photo-developing booth with a single service window that will be open approximately 200 hours per month in a busy commercial area. Space for a drive-through lane is available for a rental of $200 per month per car length. George needs to decide how many car lengths of space to provide for his customers. Excluding this rental cost for the drive-through lane, George believes that he will average a profit of $4 per customer served (nothing for a drop-off of film and $8 when the photographs are picked up). He also estimates that customers will arrive randomly (a Poisson process) at a mean rate of 20 per hour, although those who find the drive-through lane full will be forced to leave. Half of the customers who find the drive-through lane full wanted to drop off film, and the other half wanted to pick up their photographs. The half who wanted to drop off film will take their business elsewhere instead. The other half of the customers who find the drive-through lane full will not be lost because they will keep trying later until they can get in and pick up their photographs. George assumes that the time required to serve a customer will have an exponential distribution with a mean of 2 minutes.
T (a) Find L and the mean rate at which customers are lost when the number of car lengths of space provided is 2, 3, 4, and 5.
(b) Calculate W from L for the cases considered in part (a).
(c) Use the results from part (a) to calculate the decrease in the mean rate at which customers are lost when the number of car lengths of space provided is increased from 2 to 3, from 3 to 4, and from 4 to 5. Then calculate the increase in expected profit per hour (excluding space rental costs) for each of these three cases.
(d) Compare the increases in expected profit found in part (c) with the cost per hour of renting each car length of space. What conclusion do you draw about the number of car lengths of space that George should provide?

17.6-30.
At the Forrester Manufacturing Company, one repair technician has been assigned the responsibility of maintaining three machines. For each machine, the probability distribution of the running time before a breakdown is exponential, with a mean of 9 hours. The repair time also has an exponential distribution, with a mean of 2 hours.
(a) Which queueing model fits this queueing system?
T (b) Use this queueing model to find the probability distribution of the number of machines not running, and the mean of this distribution.
(c) Use this mean to calculate the expected time between a machine breakdown and the completion of the repair of that machine.
(d) What is the expected fraction of time that the repair technician will be busy?
T (e) As a crude approximation, assume that the calling population is infinite and that machine breakdowns occur randomly at a mean rate of 3 every 9 hours. Compare the result from part (b) with that obtained by making this approximation while using (i) the M/M/s model and (ii) the finite queue variation of the M/M/s model with K = 3.
T (f) Repeat part (b) when a second repair technician is made available to repair a second machine whenever more than one of these three machines require repair.

17.6-31. Reconsider the specific birth-and-death process described in Prob. 17.5-1.
(a) Identify a queueing model (and its parameter values) in Sec. 17.6 that fits this process.
T (b) Use the corresponding Excel template to obtain the answers for parts (b) and (c) of Prob. 17.5-1.

17.6-32.* The Dolomite Corporation is making plans for a new factory. One department has been allocated 12 semiautomatic machines. A small number (yet to be determined) of operators will be hired to provide the machines the needed occasional servicing (loading, unloading, adjusting, setup, and so on). A decision now needs to be made on how to organize the operators to do this. Alternative 1 is to assign each operator to her own machines.
Alternative 2 is to pool the operators so that any idle operator can take the next machine needing servicing. Alternative 3 is to combine the operators into a single crew that will work together on any machine needing servicing. The running time (time between completing service and the machine's requiring service again) of each machine is expected to have an exponential distribution, with a mean of 150 minutes. The service time is assumed to have an exponential distribution, with a mean of 15 minutes (for Alternatives 1 and 2) or 15 minutes divided by the number of operators in the crew (for Alternative 3). For the department to achieve the required production rate, the machines must be running at least 89 percent of the time on average.
(a) For Alternative 1, what is the maximum number of machines that can be assigned to an operator while still achieving the required production rate? What is the resulting utilization of each operator?
(b) For Alternative 2, what is the minimum number of operators needed to achieve the required production rate? What is the resulting utilization of the operators?
(c) For Alternative 3, what is the minimum size of the crew needed to achieve the required production rate? What is the resulting utilization of the crew?

T 17.6-33. A shop contains three identical machines that are subject to a failure of a certain kind. Therefore, a maintenance system is provided to perform the maintenance operation (recharging) required by a failed machine. The time required by each operation has an exponential distribution with a mean of 30 minutes. However, with probability 1/3, the operation must be performed a second time (with the same distribution of time) in order to bring the failed machine back to a satisfactory operational state. The maintenance system works on only one failed machine at a time, performing all the operations (one or two) required by that machine, on a first-come-first-served basis.
After a machine is repaired, the time until its next failure has an exponential distribution with a mean of 3 hours.
(a) How should the states of the system be defined in order to formulate a model for this queueing system in terms of transitions that only involve exponential distributions? (Hint: Given that a first operation is being performed on a failed machine, completing this operation successfully and completing it unsuccessfully are two separate events of interest. Then use Property 6 regarding disaggregation for the exponential distribution.)
(b) Construct the corresponding rate diagram.
(c) Develop the balance equations.
(d) Among all possible service-time distributions (with λ and μ fixed), the exponential distribution yields the largest value of Lq.

17.7-4. Marsha operates an espresso stand. Customers arrive according to a Poisson process at a mean rate of 30 per hour. The time needed by Marsha to serve a customer has an exponential distribution with a mean of 75 seconds.
(a) Use the M/G/1 model to find L, Lq, W, and Wq.
(b) Suppose Marsha is replaced by an espresso vending machine that requires exactly 75 seconds for each customer to operate. Find L, Lq, W, and Wq.
(c) What is the ratio of Lq in part (b) to Lq in part (a)?
T (d) Use trial and error with the Excel template for the M/G/1 model to see approximately how much Marsha would need to reduce her expected service time to achieve the same Lq as with the espresso vending machine.

T 17.7-5. Antonio runs a shoe repair store by himself. Customers arrive to bring a pair of shoes to be repaired according to a Poisson process at a mean rate of 1 per hour. The time Antonio requires to repair each individual shoe has an exponential distribution with a mean of 15 minutes.
(a) Consider the formulation of this queueing system where the individual shoes (not pairs of shoes) are considered to be the customers.
For this formulation, construct the rate diagram and develop the balance equations, but do not solve further.
(b) Now consider the formulation of this queueing system where the pairs of shoes are considered to be the customers. Identify the specific queueing model that fits this formulation.
(c) Calculate the expected number of pairs of shoes in the shop.
(d) Calculate the expected amount of time from when a customer drops off a pair of shoes until they are repaired and ready to be picked up.
T (e) Use the corresponding Excel template to check your answers in parts (c) and (d).

17.7-3. Consider the following statements about an M/G/1 queueing system, where σ² is the variance of service times. Label each statement as true or false, and then justify your answer.
(a) Increasing σ² (with fixed λ and μ) will increase Lq and L, but will not change Wq and W.
(b) When choosing between a tortoise (small μ and σ²) and a hare (large μ and σ²) to be the server, the tortoise always wins by providing a smaller Lq.
(c) With λ and μ fixed, the value of Lq with an exponential service-time distribution is twice as large as with constant service times.

17.7-6.* The maintenance base for Friendly Skies Airline has facilities for overhauling only one airplane engine at a time. Therefore, to return the airplanes to use as soon as possible, the policy has been to stagger the overhauling of the four engines of each airplane. In other words, only one engine is overhauled each time an airplane comes into the shop. Under this policy, airplanes have arrived according to a Poisson process at a mean rate of 1 per day. The time required for an engine overhaul (once work has begun) has an exponential distribution with a mean of 1/2 day. A proposal has been made to change the policy so that all four engines are overhauled consecutively each time an airplane comes into the shop.
Although this would quadruple the expected service time, each plane would need to come to the maintenance base only one-fourth as often. Management now needs to decide whether to continue the status quo or adopt the proposal. The objective is to minimize the average amount of flying time lost by the entire fleet per day due to engine overhauls.
(a) Compare the two alternatives with respect to the average amount of flying time lost by an airplane each time it comes to the maintenance base.

17.7-1.* Consider the M/G/1 model.
(a) Compare the expected waiting time in the queue if the service-time distribution is (i) exponential, (ii) constant, (iii) Erlang with the amount of variation (i.e., the standard deviation) halfway between the constant and exponential cases.
(b) What is the effect on the expected waiting time in the queue and on the expected queue length if both λ and μ are doubled and the scale of the service-time distribution is changed accordingly?

17.7-2. Consider the M/G/1 model with λ = 0.2 and μ = 0.25.
(a) Use the Excel template for this model (or hand calculations) to find the main measures of performance—L, Lq, W, Wq—for each of the following values of σ: 4, 3, 2, 1, 0.
(b) What is the ratio of Lq with σ = 4 to Lq with σ = 0? What does this say about the importance of reducing the variability of the service times?
(c) Calculate the reduction in Lq when σ is reduced from 4 to 3, from 3 to 2, from 2 to 1, and from 1 to 0. Which is the largest reduction? Which is the smallest?
(d) Use trial and error with the template to see approximately how much μ would need to be increased with σ = 4 to achieve the same Lq as with μ = 0.25 and σ = 0.

(b) Compare the two alternatives with respect to the average number of airplanes losing flying time due to being at the maintenance base.
(c) Which of these two comparisons is the appropriate one for making management's decision? Explain.

17.7-7. Reconsider Prob. 17.7-6.
Management has adopted the proposal but now wants further analysis conducted of this new queueing system.
(a) How should the state of the system be defined in order to formulate the queueing model in terms of transitions that only involve exponential distributions?
(b) Construct the corresponding rate diagram.

17.7-8. The McAllister Company factory currently has two tool cribs, each with a single clerk, in its manufacturing area. One tool crib handles only the tools for the heavy machinery; the second one handles all other tools. However, for each crib the mechanics arrive to obtain tools at a mean rate of 24 per hour, and the expected service time is 2 minutes. Because of complaints that the mechanics coming to the tool crib have to wait too long, it has been proposed that the two tool cribs be combined so that either clerk can handle either kind of tool as the demand arises. It is believed that the mean arrival rate to the combined two-clerk tool crib would double to 48 per hour and that the expected service time would continue to be 2 minutes. However, information is not available on the form of the probability distributions for interarrival and service times, so it is not clear which queueing model would be most appropriate. Compare the status quo and the proposal with respect to the total expected number of mechanics at the tool crib(s) and the expected waiting time (including service) for each mechanic. Do this by tabulating these data for the four queueing models considered in Figs. 17.6, 17.8, 17.10, and 17.11 (use k = 2 when an Erlang distribution is appropriate).

17.7-9.* Consider a single-server queueing system with a Poisson input, Erlang service times, and a finite queue. In particular, suppose that k = 2, the mean arrival rate is 2 customers per hour, the expected service time is 0.25 hour, and the maximum permissible number of customers in the system is 2.
This system can be formulated in terms of transitions that only involve exponential distributions by dividing each service time into two consecutive phases, each having an exponential distribution with a mean of 0.125 hour, and then defining the state of the system as (n, p), where n is the number of customers in the system (n = 0, 1, 2), and p indicates the phase of the customer being served (p = 0, 1, 2, where p = 0 means that no customer is being served).
(a) Construct the corresponding rate diagram. Write the balance equations, and then use these equations to solve for the steady-state distribution of the state of this queueing system.
(b) Use the steady-state distribution obtained in part (a) to identify the steady-state distribution of the number of customers in the system (P0, P1, P2) and the steady-state expected number of customers in the system (L).
(c) Compare the results from part (b) with the corresponding results when the service-time distribution is exponential.

17.7-10. Consider the E2/M/1 model with λ = 4 and μ = 5. This model can be formulated in terms of transitions that only involve exponential distributions by dividing each interarrival time into two consecutive phases, each having an exponential distribution with a mean of 1/(2λ) = 0.125, and then defining the state of the system as (n, p), where n is the number of customers in the system (n = 0, 1, 2, . . .) and p indicates the phase of the next arrival (not yet in the system) (p = 1, 2). Construct the corresponding rate diagram (but do not solve further).

17.7-11. A company has one repair technician to keep a large group of machines in running order. Treating this group as an infinite calling population, individual breakdowns occur according to a Poisson process at a mean rate of 1 per hour. For each breakdown, the probability is 0.9 that only a minor repair is needed, in which case the repair time has an exponential distribution with a mean of 1/2 hour.
Otherwise, a major repair is needed, in which case the repair time has an exponential distribution with a mean of 5 hours. Because both of these conditional distributions are exponential, the unconditional (combined) distribution of repair times is hyperexponential.
(a) Compute the mean and standard deviation of this hyperexponential distribution. [Hint: Use the general relationships from probability theory that, for any random variable X and any pair of mutually exclusive and exhaustive events E1 and E2, E(X) = E(X|E1)P(E1) + E(X|E2)P(E2) and var(X) = E(X²) − E(X)².] Compare this standard deviation with that for an exponential distribution having this mean.
(b) What are P0, Lq, L, Wq, and W for this queueing system?
(c) What is the conditional value of W, given that the machine involved requires major repair? A minor repair? What is the division of L between machines requiring the two types of repairs? (Hint: Little's formula still applies for the individual categories of machines.)
(d) How should the states of the system be defined in order to formulate this queueing system in terms of transitions that only involve exponential distributions? (Hint: Consider what additional information must be given, besides the number of machines down, for the conditional distribution of the time remaining until the next event of each kind to be exponential.)
(e) Construct the corresponding rate diagram.

17.7-12. Consider the finite queue variation of the M/G/1 model, where K is the maximum number of customers allowed in the system. For n = 1, 2, . . . , let the random variable Xn be the number of customers in the system at the moment tn when the nth customer has just finished being served. (Do not count the departing customer.) The times {t1, t2, . . .} are called regeneration points. Furthermore, {Xn} (n = 1, 2, . . .) is a discrete time Markov chain and is known as an embedded Markov chain.
Embedded Markov chains are useful for studying the properties of continuous time stochastic processes such as for an M/G/1 model.

Now consider the particular special case where K = 4, the service time of successive customers is a fixed constant, say, 10 minutes, and the mean arrival rate is 1 every 50 minutes. Therefore, {Xn} is an embedded Markov chain with states 0, 1, 2, 3. (Because there are never more than 4 customers in the system, there can never be more than 3 in the system at a regeneration point.) Because the system is observed at successive departures, Xn can never decrease by more than 1. Furthermore, the probabilities of transitions that result in increases in Xn are obtained directly from the Poisson distribution.
(a) Find the one-step transition matrix for the embedded Markov chain. (Hint: In obtaining the transition probability from state 3 to state 3, use the probability of 1 or more arrivals rather than just 1 arrival, and similarly for other transitions to state 3.)
(b) Use the corresponding routine in the Markov chains area of your IOR Tutorial to find the steady-state probabilities for the number of customers in the system at regeneration points.
(c) Compute the expected number of customers in the system at regeneration points, and compare it to the value of L for the M/D/1 model (with K = ∞) in Sec. 17.7.

17.8-1.* Southeast Airlines is a small commuter airline serving primarily the state of Florida. Their ticket counter at a certain airport is staffed by a single ticket agent. There are two separate lines—one for first-class passengers and one for coach-class passengers. When the ticket agent is ready for another customer, the next first-class passenger is served if there are any in line. If not, the next coach-class passenger is served. Service times have an exponential distribution with a mean of 3 minutes for both types of customers.
During the 12 hours per day that the ticket counter is open, passengers arrive randomly at a mean rate of 2 per hour for first-class passengers and 10 per hour for coach-class passengers.
(a) What kind of queueing model fits this queueing system?
T (b) Find the main measures of performance—L, Lq, W, and Wq—for both first-class passengers and coach-class passengers.
(c) What is the expected waiting time before service begins for first-class customers as a fraction of this waiting time for coach-class customers?
(d) Determine the average number of hours per day that the ticket agent is busy.

T 17.8-2. Consider the model with nonpreemptive priorities presented in Sec. 17.8. Suppose there are two priority classes, with λ1 = 2 and λ2 = 3. In designing this queueing system, you are offered the choice between the following alternatives: (1) one fast server (μ = 6) and (2) two slow servers (μ = 3). Compare these alternatives with the usual four mean measures of performance (W, L, Wq, Lq) for the individual priority classes (W1, W2, L1, L2, and so forth). Which alternative is preferred if your primary concern is expected waiting time in the system for priority class 1 (W1)? Which is preferred if your primary concern is expected waiting time in the queue for priority class 1?

17.8-3. Consider the single-server variation of the nonpreemptive priorities model presented in Sec. 17.8. Suppose there are three priority classes, with λ1 = 1, λ2 = 1, and λ3 = 1. The expected service times for priority classes 1, 2, and 3 are 0.4, 0.3, and 0.2, respectively, so μ1 = 2.5, μ2 = 3 1/3, and μ3 = 5.
(a) Calculate W1, W2, and W3.
(b) Repeat part (a) when using the approximation of applying the general model for nonpreemptive priorities presented in Sec. 17.8 instead. Since this general model assumes that the expected service time is the same for all priority classes, use an expected service time of 0.3, so μ = 3 1/3.
Compare the results with those obtained in part (a) and evaluate how good an approximation is provided by making this assumption.

T 17.8-4.* A particular work center in a job shop can be represented as a single-server queueing system, where jobs arrive according to a Poisson process, with a mean rate of 8 per day. Although the arriving jobs are of three distinct types, the time required to perform any of these jobs has the same exponential distribution, with a mean of 0.1 working day. The practice has been to work on arriving jobs on a first-come-first-served basis. However, it is important that jobs of type 1 not wait very long, whereas the wait is only moderately important for jobs of type 2 and is relatively unimportant for jobs of type 3. These three types arrive with a mean rate of 2, 4, and 2 per day, respectively. Because all three types have experienced rather long delays on average, it has been proposed that the jobs be selected according to an appropriate priority discipline instead. Compare the expected waiting time (including service) for each of the three types of jobs if the queue discipline is (a) first-come-first-served, (b) nonpreemptive priority, and (c) preemptive priority.

T 17.8-5. Reconsider the County Hospital emergency room problem as analyzed in Sec. 17.8. Suppose that the definitions of the three categories of patients are tightened somewhat in order to move marginal cases into a lower category. Consequently, only 5 percent of the patients will qualify as critical cases, 20 percent as serious cases, and 75 percent as stable cases. Develop a table showing the data presented in Table 17.3 for this revised problem.

17.8-6. Reconsider the queueing system described in Prob. 17.4-6. Suppose now that type 1 customers are more important than type 2 customers.
If the queue discipline were changed from first-come-first-served to a priority system with type 1 customers being given nonpreemptive priority over type 2 customers, would this increase, decrease, or keep unchanged the expected total number of customers in the system?
(a) Determine the answer without any calculations, and then present the reasoning that led to your conclusion.
T (b) Verify your conclusion in part (a) by finding the expected total number of customers in the system under each of these two queue disciplines.

17.8-7. Consider the queueing model with a preemptive priority queue discipline presented in Sec. 17.8. Suppose that s = 1, N = 2, and (λ1 + λ2) < μ; and let Pij be the steady-state probability that there are i members of the higher-priority class and j members of the lower-priority class in the queueing system (i = 0, 1, 2, . . . ; j = 0, 1, 2, . . .). Use a method analogous to that presented in Sec. 17.5 to derive a system of linear equations whose simultaneous solution is the Pij. Do not actually obtain this solution.

17.9-1. Read the referenced article that fully describes the OR study summarized in the application vignette presented in Sec. 17.9. Briefly describe how queueing theory was applied in this study. Then list the various financial and nonfinancial benefits that resulted from this study.

17.9-2. Consider a queueing system with two servers, where the customers arrive from two different sources. From source 1, the customers always arrive 2 at a time, where the time between consecutive arrivals of pairs of customers has an exponential distribution with a mean of 20 minutes. Source 2 is itself a two-server queueing system, which has a Poisson input process with a mean rate of 7 customers per hour, and the service time from each of these two servers has an exponential distribution with a mean of 15 minutes.
When a customer completes service at source 2, he or she immediately enters the queueing system under consideration for another type of service. In the latter queueing system, the queue discipline is preemptive priority where customers from source 1 always have preemptive priority over customers from source 2. However, service times are independent and identically distributed for both types of customers according to an exponential distribution with a mean of 6 minutes.
(a) First focus on the problem of deriving the steady-state distribution of only the number of source 1 customers in the queueing system under consideration. Define the states and construct the rate diagram for most efficiently deriving this distribution (but do not actually derive it).
(b) Now focus on the problem of deriving the steady-state distribution of the total number of customers of both types in the queueing system under consideration. Define the states and construct the rate diagram for most efficiently deriving this distribution (but do not actually derive it).
(c) Now focus on the problem of deriving the steady-state joint distribution of the number of customers of each type in the queueing system under consideration. Define the states and construct the rate diagram for deriving this distribution (but do not actually derive it).

17.9-3. Consider a system of two infinite queues in series, where each of the two service facilities has a single server. All service times are independent and have an exponential distribution, with a mean of 3 minutes at facility 1 and 4 minutes at facility 2. Facility 1 has a Poisson input process with a mean rate of 10 per hour.
(a) Find the steady-state distribution of the number of customers at facility 1 and then at facility 2. Then show the product form solution for the joint distribution of the number at the respective facilities.
(b) What is the probability that both servers are idle?
(c) Find the expected total number of customers in the system and the expected total waiting time (including service times) for a customer.

17.9-4. Under the assumptions specified in Sec. 17.9 for a system of infinite queues in series, this kind of queueing network actually is a special case of a Jackson network. Demonstrate that this is true by describing this system as a Jackson network, including specifying the values of the aj and the pij, given λ for this system.

17.9-5. Consider a Jackson network with three service facilities having the parameter values shown below.

                                        pij
Facility j    sj    μj    aj    i = 1    i = 2    i = 3
  j = 1        1    40    10      0       0.3      0.4
  j = 2        1    50    15     0.5       0       0.5
  j = 3        1    30     3     0.3      0.2       0

T (a) Find the total arrival rate at each of the facilities.
(b) Find the steady-state distribution of the number of customers at facility 1, facility 2, and facility 3. Then show the product form solution for the joint distribution of the number at the respective facilities.
(c) What is the probability that all the facilities have empty queues (no customers waiting to begin service)?
(d) Find the expected total number of customers in the system.
(e) Find the expected total waiting time (including service times) for a customer.

T 17.10-1. When describing economic analysis of the number of servers to provide in a queueing system, Sec. 17.10 introduces a basic cost model where the objective is to minimize E(TC) = Cs·s + Cw·L. The purpose of this problem is to enable you to explore the effect that the relative sizes of Cs and Cw have on the optimal number of servers. Suppose that the queueing system under consideration fits the M/M/s model with λ = 8 customers per hour and μ = 10 customers per hour. Use the Excel template in your OR Courseware for economic analysis with the M/M/s model to find the optimal number of servers for each of the following cases.
(a) Cs = $100 and Cw = $10.
(b) Cs = $100 and Cw = $100.
(c) Cs = $10 and Cw = $100.
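As an independent check on the Excel template for Prob. 17.10-1, the E(TC) = Cs·s + Cw·L objective can be evaluated directly from the standard M/M/s steady-state formulas of Sec. 17.6. The following is a minimal sketch, not part of the problem itself; the function names are our own:

```python
from math import factorial

def mms_L(lam, mu, s):
    """Expected number in system L for an M/M/s queue (requires lam < s*mu)."""
    rho = lam / (s * mu)          # server utilization
    # P0 from the standard M/M/s formula
    inv_p0 = sum((lam / mu) ** n / factorial(n) for n in range(s))
    inv_p0 += (lam / mu) ** s / (factorial(s) * (1 - rho))
    p0 = 1.0 / inv_p0
    # Lq, then L = Lq + lambda/mu
    Lq = p0 * (lam / mu) ** s * rho / (factorial(s) * (1 - rho) ** 2)
    return Lq + lam / mu

def expected_total_cost(lam, mu, s, Cs, Cw):
    """E(TC) = Cs*s + Cw*L, the basic cost model of Sec. 17.10."""
    return Cs * s + Cw * mms_L(lam, mu, s)

# Case (a): Cs = $100, Cw = $10, with lam = 8 and mu = 10 per hour.
for s in range(1, 6):
    print(s, round(expected_total_cost(8, 10, s, 100, 10), 2))
```

Tabulating E(TC) over s for each of the three cost cases reveals how the optimal number of servers shifts as Cw grows relative to Cs.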
17.10-2.* Jim McDonald, manager of the fast-food hamburger restaurant McBurger, realizes that providing fast service is a key to the success of the restaurant. Customers who have to wait very long are likely to go to one of the other fast-food restaurants in town next time. He estimates that each minute a customer has to wait in line before completing service costs him an average of 30 cents in lost future business. Therefore, he wants to be sure that enough cash registers always are open to keep waiting to a minimum. Each cash register is operated by a part-time employee who obtains the food ordered by each customer and collects the payment. The total cost for each such employee is $9 per hour. During lunch time, customers arrive according to a Poisson process at a mean rate of 66 per hour. The time needed to serve a customer is estimated to have an exponential distribution with a mean of 2 minutes. Determine how many cash registers Jim should have open during lunch time to minimize his expected total cost per hour.

T 17.10-3. The Garrett-Tompkins Company provides three copy machines in its copying room for the use of its employees. However, due to recent complaints about considerable time being wasted waiting for a copier to become free, management is considering adding one or more additional copy machines. During the 2,000 working hours per year, employees arrive at the copying room according to a Poisson process at a mean rate of 30 per hour. The time each employee needs with a copy machine is believed to have an exponential distribution with a mean of 5 minutes. The lost productivity due to an employee spending time in the copying room is estimated to cost the company an average of $25 per hour. Each copy machine is leased for $3,000 per year. Determine how many copy machines the company should have to minimize its expected total cost per hour.

17.11-1.
From the bottom part of the selected references given at the end of the chapter, select one of these award-winning applications of queueing theory. Read this article and then write a two-page summary of the application and the benefits (including nonfinancial benefits) it provided.

17.11-2. From the bottom part of the selected references given at the end of the chapter, select three of these award-winning applications of queueing theory. For each one, read the article and then write a one-page summary of the application and the benefits (including nonfinancial benefits) it provided.

■ CASES

CASE 17.1 Reducing In-Process Inventory

Jim Wells, vice-president for manufacturing of the Northern Airplane Company, is exasperated. His walk through the company's most important plant this morning has left him in a foul mood. However, he now can vent his temper at Jerry Carstairs, the plant's production manager, who has just been summoned to Jim's office.

"Jerry, I just got back from walking through the plant, and I am very upset."

"What is the problem, Jim?"

"Well, you know how much I have been emphasizing the need to cut down on our in-process inventory."

"Yes, we've been working hard on that," responds Jerry.

"Well, not hard enough!" Jim raises his voice even higher. "Do you know what I found by the presses?"

"No."

"Five metal sheets still waiting to be formed into wing sections. And then, right next door at the inspection station, 13 wing sections! The inspector was inspecting one of them, but the other 12 were just sitting there. You know we have a couple hundred thousand dollars tied up in each of those wing sections. So between the presses and the inspection station, we have a few million bucks worth of terribly expensive metal just sitting there. We can't have that!"

The chagrined Jerry Carstairs tries to respond. "Yes, Jim, I am well aware that that inspection station is a bottleneck. It usually isn't nearly as bad as you found it this morning, but it is a bottleneck.
Much less so for the presses. You really caught us on a bad morning." "I sure hope so," retorts Jim, "but you need to prevent anything nearly this bad happening even occasionally. What do you propose to do about it?" Jerry now brightens noticeably in his response. "Well actually, I've already been working on this problem. I have a couple proposals on the table and I have asked an operations research analyst on my staff to analyze these proposals and report back with recommendations." "Great," responds Jim, "glad to see you are on top of the problem. Give this your highest priority and report back to me as soon as possible." "Will do," promises Jerry.

Here is the problem that Jerry and his OR analyst are addressing. Each of 10 identical presses is being used to form wing sections out of large sheets of specially processed metal. The sheets arrive randomly to the group of presses at a mean rate of 7 per hour. The time required by a press to form a wing section out of a metal sheet has an exponential distribution with a mean of 1 hour. When finished, the wing sections arrive randomly at an inspection station at the same mean rate as the metal sheets arrived at the presses (7 per hour). A single inspector has the full-time job of inspecting these wing sections to make sure they meet specifications. Each inspection takes her 7½ minutes, so she can inspect 8 wing sections per hour. This inspection rate has resulted in a substantial average amount of in-process inventory at the inspection station (i.e., the average number of wing sections waiting to complete inspection is fairly large), in addition to that already found at the group of machines. The cost of this in-process inventory is estimated to be $8 per hour for each metal sheet at the presses or each wing section at the inspection station. Therefore, Jerry Carstairs has made two alternative proposals to reduce the average level of in-process inventory.
Proposal 1 is to use slightly less power for the presses (which would increase their average time to form a wing section to 1.2 hours), so that the inspector can keep up with their output better. This also would reduce the cost of the power for running each machine from $7.00 to $6.50 per hour. (By contrast, increasing to maximum power would increase this cost to $7.50 per hour while decreasing the average time to form a wing section to 0.8 hour.) Proposal 2 is to substitute a certain younger inspector for this task. He is somewhat faster (albeit with some variability in his inspection times because of less experience), so he should keep up better. (His inspection time would have an Erlang distribution with a mean of 7.2 minutes and a shape parameter k = 2.) This inspector is in a job classification that calls for a total compensation (including benefits) of $19 per hour, whereas the current inspector is in a lower job classification where the compensation is $17 per hour. (The inspection times for each of these inspectors are typical of those in the same job classification.) You are the OR analyst on Jerry Carstairs' staff who has been asked to analyze this problem. He wants you to "use the latest OR techniques to see how much each proposal would cut down on in-process inventory and then make your recommendations."

(a) To provide a basis of comparison, begin by evaluating the status quo. Determine the expected amount of in-process inventory at the presses and at the inspection station. Then calculate the expected total cost per hour when considering all of the following: the cost of the in-process inventory, the cost of the power for running the presses, and the cost of the inspector.
(b) What would be the effect of proposal 1? Why? Make specific comparisons to the results from part (a). Explain this outcome to Jerry Carstairs.
(c) Determine the effect of proposal 2. Make specific comparisons to the results from part (a). Explain this outcome to Jerry Carstairs.
(d) Make your recommendations for reducing the average level of in-process inventory at the inspection station and at the group of machines. Be specific in your recommendations, and support them with quantitative analysis like that done in part (a). Make specific comparisons to the results from part (a), and cite the improvements that your recommendations would yield.

■ PREVIEW OF AN ADDED CASE ON OUR WEBSITE (www.mhhe.com/hillier)

CASE 17.2 Queueing Quandary

Many angry customers are complaining about the long waits needed to get through to a call center. It appears that more service representatives are needed to answer the calls. Another option is to train the service representatives further to enable them to answer calls more efficiently. Some possible criteria for satisfactory levels of service have been proposed. Queueing theory needs to be applied to determine how the operation of the call center should be redesigned.

CHAPTER 18 Inventory Theory

"Sorry, we're out of that item." How often have you heard that during shopping trips? In many of these cases, what you have encountered are stores that aren't doing a very good job of managing their inventories (stocks of goods being held for future use or sale). They aren't placing orders to replenish inventories soon enough to avoid shortages. These stores could benefit from the kinds of techniques of scientific inventory management that are described in this chapter.

It isn't just retail stores that must manage inventories. In fact, inventories pervade the business world. Maintaining inventories is necessary for any company dealing with physical products, including manufacturers, wholesalers, and retailers. For example, manufacturers need inventories of the materials required to make their products. They also need inventories of the finished products awaiting shipment. Similarly, both wholesalers and retailers need to maintain inventories of goods to be available for purchase by customers.
The annual costs associated with storing ("carrying") inventory can be very large, ranging as high as a quarter of the value of the inventory. Therefore, the costs being incurred for the storage of inventory in the United States run into the hundreds of billions of dollars annually. Reducing storage costs by avoiding unnecessarily large inventories can enhance any firm's competitiveness.

Some Japanese companies were pioneers in introducing the just-in-time inventory system—a system that emphasizes planning and scheduling so that the needed materials arrive "just-in-time" for their use. Huge savings are thereby achieved by reducing inventory levels to a bare minimum. Many companies in other parts of the world also have been revamping the way in which they manage their inventories. The application of operations research techniques in this area (sometimes called scientific inventory management) is providing a powerful tool for gaining a competitive edge.

How do companies use operations research to improve their inventory policy for when and how much to replenish their inventory? They use scientific inventory management comprising the following steps:

1. Formulate a mathematical model describing the behavior of the inventory system.
2. Seek an optimal inventory policy with respect to this model.
3. Use a computerized information processing system to maintain a record of the current inventory levels.
4. Using this record of current inventory levels, apply the optimal inventory policy to signal when and how much to replenish inventory.

The mathematical inventory models used with this approach can be divided into two broad categories—deterministic models and stochastic models—according to the predictability of demand involved. The demand for a product in inventory is the number of units that will need to be withdrawn from inventory for some use (e.g., sales) during a specific period.
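As a minimal sketch of how steps 3 and 4 fit together, the loop below tracks a running inventory record and lets a reorder-point policy signal replenishment. The demand stream and the values of R and Q are purely illustrative, not from the text.

```python
# Hypothetical sketch of steps 3 and 4: maintain a record of the inventory
# level and let a reorder-point policy signal when and how much to replenish
# (order Q more units whenever the level drops to R or below).
def simulate_policy(demands, start_level, R, Q):
    level = start_level
    history, orders = [], []
    for t, d in enumerate(demands):
        level -= d                  # step 3: update the inventory record
        if level <= R:              # step 4: the policy signals an order
            orders.append((t, Q))
            level += Q              # delivery assumed immediate (no lead time)
        history.append(level)
    return history, orders

history, orders = simulate_policy([30, 40, 50, 20], start_level=100, R=25, Q=80)
# history is [70, 30, 60, 40]; one order of 80 units is placed in period 2
```

In practice the record in step 3 would come from an information system rather than a list of demands, but the signaling logic is the same.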
If the demand in future periods can be forecast with considerable precision, it is reasonable to use an inventory policy that assumes that all forecasts will always be completely accurate. This is the case of known demand where a deterministic inventory model would be used. However, when demand cannot be predicted very well, it becomes necessary to use a stochastic inventory model where the demand in any period is a random variable rather than a known constant.

There are several basic considerations involved in determining an inventory policy that must be reflected in the mathematical inventory model. These are illustrated in the examples presented in the first section and then are described in general terms in Sec. 18.2. Section 18.3 develops and analyzes deterministic inventory models for situations where the inventory level is under continuous review. Section 18.4 does the same for situations where the planning is being done for a series of periods rather than continuously. Section 18.5 extends certain deterministic models to coordinate the inventories at various points along a company's supply chain. The following two sections present stochastic models, first under continuous review, and then for dealing with a perishable product over a single period. (A supplement to this chapter on the book's website introduces stochastic periodic-review models for multiple periods.) Section 18.8 then introduces a relatively new area of inventory theory, called revenue management, that is concerned with maximizing a company's expected revenue when dealing with the special kind of perishable product whose entire inventory must be provided to customers at a designated point in time or be lost forever. (Certain service industries, such as an airline company providing its entire inventory of seats on a particular flight at the designated time for the flight, now make extensive use of revenue management.)
■ 18.1 EXAMPLES

We present two examples in rather different contexts (a manufacturer and a wholesaler) where an inventory policy needs to be developed.

EXAMPLE 1 Manufacturing Speakers for TV Sets

A television manufacturing company produces its own speakers, which are used in the production of its television sets. The television sets are assembled on a continuous production line at a rate of 8,000 per month, with one speaker needed per set. The speakers are produced in batches because they do not warrant setting up a continuous production line, and relatively large quantities can be produced in a short time. Therefore, the speakers are placed into inventory until they are needed for assembly into television sets on the production line. The company is interested in determining when to produce a batch of speakers and how many speakers to produce in each batch. Several costs must be considered:

1. Each time a batch is produced, a setup cost of $12,000 is incurred. This cost includes the cost of "tooling up," administrative costs, record keeping, and so forth. Note that the existence of this cost argues for producing speakers in large batches.
2. The unit production cost of a single speaker (excluding the setup cost) is $10, independent of the batch size produced. (In general, however, the unit production cost need not be constant and may decrease with batch size.)
3. The production of speakers in large batches leads to a large inventory. The estimated holding cost of keeping a speaker in stock is $0.30 per month. This cost includes the cost of capital tied up in inventory. Since the money invested in inventory cannot be used in other productive ways, this cost of capital consists of the lost return (referred to as the opportunity cost) because alternative uses of the money must be forgone.
Other components of the holding cost include the cost of leasing the storage space, the cost of insurance against loss of inventory by fire, theft, or vandalism, taxes based on the value of the inventory, and the cost of personnel who oversee and protect the inventory.

4. Company policy prohibits deliberately planning for shortages of any of its components. However, a shortage of speakers occasionally crops up, and it has been estimated that each speaker that is not available when required costs $1.10 per month. This shortage cost includes the extra cost of installing speakers after the television set is otherwise fully assembled, the interest lost because of the delay in receiving sales revenue, the cost of extra record keeping, and so forth.

We will develop the inventory policy for this example with the help of the first inventory model presented in Sec. 18.3.

EXAMPLE 2 Wholesale Distribution of Bicycles

A wholesale distributor of bicycles is having trouble with shortages of its most popular model and is currently reviewing the inventory policy for this model. The distributor purchases this model bicycle from the manufacturer monthly and then supplies it to various bicycle shops in the western United States in response to purchase orders. What the total demand from bicycle shops will be in any given month is quite uncertain. Therefore, the question is, How many bicycles should be ordered from the manufacturer for any given month, given the stock level leading into that month? The distributor has analyzed her costs and has determined that the following are important:

1. The ordering cost, i.e., the cost of placing an order plus the cost of the bicycles being purchased, has two components: The administrative cost involved in placing an order is estimated as $2,000, and the actual cost of each bicycle is $350 for this wholesaler.
2. The holding cost, i.e., the cost of maintaining an inventory, is $10 per bicycle remaining at the end of the month.
This cost represents the costs of capital tied up, warehouse space, insurance, taxes, and so on.

3. The shortage cost is the cost of not having a bicycle on hand when needed. This particular model is easily reordered from the manufacturer, and stores usually accept a delay in delivery. Still, although shortages are permissible, the distributor feels that she incurs a loss, which she estimates to be $150 per bicycle per month of shortage. This estimated cost takes into account the possible loss of future sales because of the loss of customer goodwill. Other components of this cost include lost interest on delayed sales revenue, and additional administrative costs associated with shortages. If some stores were to cancel orders because of delays, the lost revenues from these lost sales would need to be included in the shortage cost. Fortunately, such cancellations normally do not occur for this distributor.

We will return to a variation of this example again in Sec. 18.7.

These examples illustrate that there are two possibilities for how a firm replenishes inventory, depending on the situation. One possibility is that the firm produces the needed units itself (like the television manufacturer producing speakers). The other is that the firm orders the units from a supplier (like the bicycle distributor ordering bicycles from the manufacturer). Inventory models do not need to distinguish between these two ways of replenishing inventory, so we will use such terms as producing and ordering interchangeably.

Both examples deal with one specific product (speakers for a certain kind of television set or a certain bicycle model). In most inventory models, just one product is being considered at a time. All the inventory models presented in this chapter assume a single product. (Multiproduct models also are important, but are beyond the scope of this introduction to inventory theory.)
Both examples indicate that there exists a trade-off between the costs involved. The next section discusses the basic cost components of inventory models for determining the optimal trade-off between these costs.

■ 18.2 COMPONENTS OF INVENTORY MODELS

Because inventory policies affect profitability, the choice among policies depends upon their relative profitability. As already seen in Examples 1 and 2, some of the costs that determine this profitability are (1) the ordering costs, (2) holding costs, and (3) shortage costs. Other relevant factors include (4) revenues, (5) salvage costs, and (6) discount rates. These six factors are described in turn below.

The cost of ordering an amount z (either through purchasing or producing this amount) can be represented by a function c(z). The simplest form of this function is one that is directly proportional to the amount ordered, that is, c · z, where c represents the unit price paid. Another common assumption is that c(z) is composed of two parts: a term that is directly proportional to the amount ordered and a term that is a constant K for z positive and is 0 for z = 0. For this case,

c(z) = cost of ordering z units = 0 if z = 0, and K + cz if z > 0,

where K = setup cost and c = unit cost. The constant K includes the administrative cost of ordering or, when producing, the costs involved in setting up to start a production run. There are other assumptions that can be made about the cost of ordering, but this chapter is restricted to the cases just described.

In Example 1, the speakers are produced and the setup cost for a production run is $12,000. Furthermore, each speaker costs $10, so that the production cost when ordering a production run of z speakers is given by c(z) = 12,000 + 10z, for z > 0. In Example 2, the distributor orders bicycles from the manufacturer and the ordering cost is given by c(z) = 2,000 + 350z, for z > 0.
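The two-part ordering-cost function can be written out directly. The setup and unit costs below come from the two examples; the order quantities (a run of 1,000 speakers, an order of 50 bicycles) are illustrative choices.

```python
def ordering_cost(z, K, c):
    """c(z): zero if nothing is ordered, otherwise setup cost K plus c per unit."""
    return 0 if z == 0 else K + c * z

# Example 1 (speakers): c(z) = 12,000 + 10z, so a run of 1,000 costs $22,000.
speaker_run = ordering_cost(1000, K=12_000, c=10)
# Example 2 (bicycles): c(z) = 2,000 + 350z, so an order of 50 costs $19,500.
bicycle_order = ordering_cost(50, K=2_000, c=350)
```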
The holding cost (sometimes called the storage cost) represents all the costs associated with the storage of the inventory until it is sold or used. Included are the cost of capital tied up, space, insurance, protection, and taxes attributed to storage. The holding cost can be assessed either continuously or on a period-by-period basis. In the latter case, the cost may be a function of the maximum quantity held during a period, the average amount held, or the quantity in inventory at the end of the period. The end-of-period option simplifies the analysis, so it usually will be adopted when assessing the holding cost on a period-by-period basis in this chapter.

Applying this end-of-period option to the bicycle example, the holding cost is $10 per bicycle remaining at the end of the month. However, in the TV speakers example, the holding cost is assessed continuously, where the rate of assessment is $0.30 per speaker in inventory per month, so the average holding cost per month is $0.30 times the average number of speakers in inventory.

The shortage cost (sometimes called the unsatisfied demand cost) is incurred when the amount of the commodity required (demand) exceeds the available stock. This cost depends upon which of the following two cases applies. In one case, called backlogging, the excess demand is not lost, but instead is held until it can be satisfied when the next normal delivery replenishes the inventory. For a firm incurring a temporary shortage in supplying its customers (as for the bicycle example), the shortage cost then can be interpreted as the loss of customers' goodwill and the subsequent reluctance to do business with the firm plus the cost of delayed revenue and the extra administrative costs.
For a manufacturer incurring a temporary shortage in materials needed for production (such as a shortage of speakers for assembly into television sets), the shortage cost becomes the cost associated with delaying the completion of the production process. In the second case, called no backlogging, if any excess of demand over available stock occurs, the firm cannot wait for the next normal delivery to meet the excess demand. Either (1) the excess demand is met by a priority shipment, or (2) it is not met at all because the orders are canceled. For situation 1, the shortage cost can be viewed as the cost of the priority shipment. For situation 2, the shortage cost is the loss of current revenue from not meeting the demand plus the cost of losing future business because of lost goodwill.1 Revenue may or may not be included in the model. If both the price and the demand for the product are established by the market and so are outside the control of the company, the revenue from sales (assuming demand is met) is independent of the firm’s inventory policy and may be neglected. However, if revenue is neglected in the model, the loss in revenue must then be included in the shortage cost whenever the firm cannot meet the demand and the sale is lost. Furthermore, even in the case where demand is backlogged, the cost of the delay in revenue must also be included in the shortage cost. With these interpretations, revenue will not be considered explicitly in the remainder of this chapter. The salvage value of an item is the value of a leftover item when no further inventory is desired. The salvage value represents the disposal value of the item to the firm, perhaps through a discounted sale. The negative of the salvage value is called the salvage cost. If there is a cost associated with the disposal of an item, the salvage cost may be positive. We assume hereafter that any salvage cost is incorporated into the holding cost. 
Finally, the discount rate takes into account the time value of money. When a firm ties up capital in inventory, the firm is prevented from using this money for alternative purposes. For example, it could invest this money in secure investments, say, government bonds, and have a return on investment 1 year hence of, say, 3 percent. Thus, $1 invested today would be worth $1.03 in year 1, or alternatively, a $1 profit 1 year hence is equivalent to α = $1/$1.03 today. The quantity α is known as the discount factor. Thus, in adding up the total profit from an inventory policy, the profit or costs 1 year hence should be multiplied by α; in 2 years hence by α²; and so on. (Units of time other than 1 year also can be used.) The total profit calculated in this way normally is referred to as the net present value.

In problems having short time horizons, α may be assumed to be 1 (and thereby neglected) because the current value of $1 delivered during this short time horizon does not change very much. However, in problems having long time horizons, the discount factor should be included.

1 An analysis of situation 2 is provided by E. T. Anderson, G. J. Fitzsimons, and D. Simester, "Measuring and Mitigating the Costs of Stockouts," Management Science, 52(11): 1751–1763, Nov. 2006. For an analysis of whether backlogging or no backlogging provides a less costly policy under various circumstances, see B. Janakiraman, S. Seshadri, and J. G. Shanthikumar, "A Comparison of the Optimal Costs of Two Canonical Inventory Systems," Operations Research, 55(5): 866–875, Sept.–Oct. 2007.

In using quantitative techniques to seek optimal inventory policies, we use the criterion of minimizing the total (expected) cost (or discounted cost if the time horizon is a long one).
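The discount-factor arithmetic just described can be checked with a short calculation; the three-year profit stream below is made up for illustration.

```python
def net_present_value(profits, rate):
    """Discount a profit received t years hence by alpha**t, alpha = 1/(1 + rate)."""
    alpha = 1 / (1 + rate)
    return sum(p * alpha**t for t, p in enumerate(profits, start=1))

# At 3 percent, $1 one year hence is worth 1/1.03, about $0.97, today.
npv = net_present_value([100, 100, 100], rate=0.03)  # roughly 282.86
```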
Under the assumptions that the price and demand for the product are not under the control of the company and that the lost or delayed revenue is included in the shortage penalty cost, minimizing cost is equivalent to maximizing net income. Another useful criterion is to keep the inventory policy simple, i.e., keep the rule for indicating when to order and how much to order both understandable and easy to implement. Most of the policies considered in this chapter possess this property. As mentioned at the beginning of the chapter, inventory models are usually classified as either deterministic or stochastic according to whether the demand for a period is known or is a random variable having a known probability distribution. The production of batches of speakers in Example 1 of Sec. 18.1 illustrates deterministic demand because the speakers are used in television assemblies at a fixed rate of 8,000 per month. The bicycle shops’ purchases of bicycles from the wholesale distributor in Example 2 of Sec. 18.1 illustrates random demand because the total monthly demand varies from month to month according to some probability distribution. Another component of an inventory model is the lead time, which is the amount of time between the placement of an order to replenish inventory (through either purchasing or producing) and the receipt of the goods into inventory. If the lead time always is the same (a fixed lead time), then the replenishment can be scheduled just when desired. Most models in this chapter assume that each replenishment occurs just when desired, either because the delivery is nearly instantaneous or because it is known when the replenishment will be needed and there is a fixed lead time. Another classification refers to whether the current inventory level is being monitored continuously or periodically. In continuous review, an order is placed as soon as the stock level falls down to the prescribed reorder point. 
In periodic review, the inventory level is checked at discrete intervals, e.g., at the end of each week, and ordering decisions are made only at these times even if the inventory level dips below the reorder point between the preceding and current review times. (In practice, a periodic review policy can be used to approximate a continuous review policy by making the time interval sufficiently small.)

■ 18.3 DETERMINISTIC CONTINUOUS-REVIEW MODELS

The most common inventory situation faced by manufacturers, retailers, and wholesalers is that stock levels are depleted over time and then are replenished by the arrival of a batch of new units. A simple model representing this situation is the following economic order quantity model or, for short, the EOQ model. (It sometimes is also referred to as the economic lot-size model.)

Units of the product under consideration are assumed to be withdrawn from inventory continuously at a known constant rate, denoted by d; that is, the demand is d units per unit time. It is further assumed that inventory is replenished when needed by ordering (through either purchasing or producing) a batch of fixed size (Q units), where all Q units arrive simultaneously at the desired time. For the basic EOQ model to be presented first, the only costs to be considered are

K = setup cost for ordering one batch,
c = unit cost for producing or purchasing each unit,
h = holding cost per unit per unit of time held in inventory.

The objective is to determine when and by how much to replenish inventory so as to minimize the sum of these costs per unit time.

We assume continuous review, so that inventory can be replenished whenever the inventory level drops sufficiently low. We shall first assume that shortages are not allowed (but later we will relax this assumption).
With the fixed demand rate, shortages can be avoided by replenishing inventory each time the inventory level drops to zero, and this also will minimize the holding cost. Figure 18.1 depicts the resulting pattern of inventory levels over time when we start at time 0 by ordering a batch of Q units in order to increase the initial inventory level from 0 to Q and then repeat this process each time the inventory level drops back down to 0. Example 1 in Sec. 18.1 (manufacturing speakers for TV sets) fits this model and will be used to illustrate the following discussion.

The Basic EOQ Model

To summarize, in addition to the costs specified above, the basic EOQ model makes the following assumptions.

Assumptions (Basic EOQ Model)
1. A known constant demand rate of d units per unit time.
2. The order quantity (Q) to replenish inventory arrives all at once just when desired, namely, when the inventory level drops to 0.
3. Planned shortages are not allowed.

In regard to assumption 2, there usually is a lag between when an order is placed and when it arrives in inventory. As indicated in Sec. 18.2, the amount of time between the placement of an order and its receipt is referred to as the lead time. The inventory level at which the order is placed is called the reorder point. To satisfy assumption 2, this reorder point needs to be set at

Reorder point = (demand rate) × (lead time).

Thus, assumption 2 is implicitly assuming a constant lead time.

The time between consecutive replenishments of inventory (the vertical line segments in Fig. 18.1) is referred to as a cycle. For the speaker example, a cycle can be viewed as the time between production runs. Thus, if 24,000 speakers are produced in each production run and are used at the rate of 8,000 per month, then the cycle length is 24,000/8,000 = 3 months. In general, the cycle length is Q/d.

The total cost per unit time T is obtained from the following components:

Production or ordering cost per cycle = K + cQ.
■ FIGURE 18.1 Diagram of inventory level as a function of time for the basic EOQ model. (The inventory level starts at the batch size Q, declines along the line Q − dt, and jumps back to Q at times Q/d, 2Q/d, . . . .)

The average inventory level during a cycle is (Q + 0)/2 = Q/2 units, and the corresponding cost is hQ/2 per unit time. Because the cycle length is Q/d,

Holding cost per cycle = hQ²/(2d).

Therefore,

Total cost per cycle = K + cQ + hQ²/(2d),

so the total cost per unit time is

T = [K + cQ + hQ²/(2d)] / (Q/d) = dK/Q + dc + hQ/2.

The value of Q, say Q*, that minimizes T is found by setting the first derivative to zero (and noting that the second derivative is positive), which yields

−dK/Q² + h/2 = 0,

so that

Q* = √(2dK/h),

which is the well-known EOQ formula.2 (It also is sometimes referred to as the square root formula.) The corresponding cycle time, say t*, is

t* = Q*/d = √(2K/(dh)).

It is interesting to observe that Q* and t* change in intuitively plausible ways when a change is made in K, h, or d. As the setup cost K increases, both Q* and t* increase (fewer setups). When the unit holding cost h increases, both Q* and t* decrease (smaller inventory levels). As the demand rate d increases, Q* increases (larger batches) but t* decreases (more frequent setups).

These formulas for Q* and t* will now be applied to the speaker example. The appropriate parameter values from Sec. 18.1 are

K = 12,000, h = 0.30, d = 8,000,

so that

Q* = √((2)(8,000)(12,000)/0.30) ≈ 25,298

and

t* = 25,298/8,000 ≈ 3.2 months.

Hence, the optimal solution is to set up the production facilities to produce speakers once every 3.2 months and to produce 25,298 speakers each time. (The total cost curve is rather

2 At the time of this writing, we can celebrate the 100th anniversary of this famous formula. An interesting historical account of this model and formula, including a reprint of a 1913 paper that started it all, is given by D.
Erlenkotter, "Ford Whitman Harris and the Economic Order Quantity Model," Operations Research, 38: 937–950, 1990.

flat near this optimal value, so any similar production run that might be more convenient, say 24,000 speakers every 3 months, would be nearly optimal.)

The Solved Examples section of the book's website includes another example of applying the basic EOQ model when considerable sensitivity analysis also needs to be performed.

The EOQ Model with Planned Shortages

One of the banes of any inventory manager is the occurrence of an inventory shortage (sometimes referred to as a stockout)—demand that cannot be met currently because the inventory is depleted. This causes a variety of headaches, including dealing with unhappy customers and having extra record keeping to arrange for filling the demand later (backorders) when the inventory can be replenished. By assuming that planned shortages are not allowed, the basic EOQ model presented above satisfies the common desire of managers to avoid shortages as much as possible. (Nevertheless, unplanned shortages can still occur if the demand rate and deliveries do not stay on schedule.)

However, there are situations where permitting limited planned shortages makes sense from a managerial perspective. The most important requirement is that the customers generally are able and willing to accept a reasonable delay in filling their orders if need be. If so, the costs of incurring shortages described in Secs. 18.1 and 18.2 (including lost future business) should not be exorbitant. If the cost of holding inventory is high relative to these shortage costs, then lowering the average inventory level by permitting occasional brief shortages may be a sound business decision.

The EOQ model with planned shortages addresses this kind of situation by replacing only the third assumption of the basic EOQ model with the following new assumption: Planned shortages now are allowed.
When a shortage occurs, the affected customers will wait for the product to become available again. Their backorders are filled immediately when the order quantity arrives to replenish inventory.

Under these assumptions, the pattern of inventory levels over time has the appearance shown in Fig. 18.2. The saw-toothed appearance is the same as in Fig. 18.1. However, now the inventory levels extend down to negative values that reflect the number of units of the product that are backordered.

Let

p = shortage cost per unit short per unit of time short,
S = inventory level just after a batch of Q units is added to inventory,
Q - S = shortage in inventory just before a batch of Q units is added.

The total cost per unit time now is obtained from the following components:

Production or ordering cost per cycle = K + cQ.

■ FIGURE 18.2 Diagram of inventory level as a function of time for the EOQ model with planned shortages. (The level starts at S, declines at rate d, reaches zero at time S/d, continues down to S - Q, and then jumps back to S when the next batch of Q units arrives at time Q/d.)

During each cycle, the inventory level is positive for a time S/d. The average inventory level during this time is (S + 0)/2 = S/2 units, and the corresponding cost is hS/2 per unit time. Hence,

Holding cost per cycle = (hS/2)(S/d) = hS^2/(2d).

Similarly, shortages occur for a time (Q - S)/d. The average amount of shortages during this time is (0 + Q - S)/2 = (Q - S)/2 units, and the corresponding cost is p(Q - S)/2 per unit time. Hence,

Shortage cost per cycle = [p(Q - S)/2][(Q - S)/d] = p(Q - S)^2/(2d).

Therefore,

Total cost per cycle = K + cQ + hS^2/(2d) + p(Q - S)^2/(2d),

and the total cost per unit time is

T = [K + cQ + hS^2/(2d) + p(Q - S)^2/(2d)] / (Q/d)
  = dK/Q + dc + hS^2/(2Q) + p(Q - S)^2/(2Q).

In this model, there are two decision variables (S and Q), so the optimal values (S* and Q*) are found by setting the partial derivatives ∂T/∂S and ∂T/∂Q equal to zero. Thus,

∂T/∂S = hS/Q - p(Q - S)/Q = 0,
∂T/∂Q = -dK/Q^2 - hS^2/(2Q^2) + p(Q - S)/Q - p(Q - S)^2/(2Q^2) = 0.

Solving these equations simultaneously leads to

S* = sqrt(2dK/h) sqrt(p/(p + h)),   Q* = sqrt(2dK/h) sqrt((p + h)/p).

The optimal cycle length t* is given by

t* = Q*/d = sqrt(2K/(dh)) sqrt((p + h)/p).

The maximum shortage is

Q* - S* = sqrt(2dK/p) sqrt(h/(p + h)).

In addition, from Fig. 18.2, the fraction of time that no shortage exists is given by

(S*/d)/(Q*/d) = p/(p + h),

which is independent of K.

When either p or h is made much larger than the other, the above quantities behave in intuitive ways. In particular, when p → ∞ with h constant (so shortage costs dominate holding costs), Q* - S* → 0 whereas both Q* and t* converge to their values for the basic EOQ model. Even though the current model permits shortages, p → ∞ implies that having them is not worthwhile.

On the other hand, when h → ∞ with p constant (so holding costs dominate shortage costs), S* → 0. Thus, having h → ∞ makes it uneconomical to have positive inventory levels, so each new batch of Q* units goes no further than removing the current shortage in inventory.

If planned shortages are permitted in the speaker example, the shortage cost is estimated in Sec. 18.1 as p = 1.10. As before, K = 12,000, h = 0.30, d = 8,000, so now

S* = sqrt((2)(8,000)(12,000)/0.30) sqrt(1.1/(1.1 + 0.3)) = 22,424,
Q* = sqrt((2)(8,000)(12,000)/0.30) sqrt((1.1 + 0.3)/1.1) = 28,540,

and

t* = 28,540/8,000 = 3.6 months.

Hence, the production facilities are to be set up every 3.6 months to produce 28,540 speakers. The maximum shortage is 6,116 speakers. Note that Q* and t* are not very different from the no-shortage case. The reason is that p is much larger than h.

The EOQ Model with Quantity Discounts

When specifying their cost components, the preceding models have assumed that the unit cost of an item is the same regardless of the quantity in the batch. In fact, this assumption resulted in the optimal solutions being independent of this unit cost.
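Before taking up quantity discounts, the formulas above can be checked numerically. The following is a minimal sketch (our own illustration, not the book's software; the function names are ours) that reproduces the speaker example's results for both the basic EOQ model and the EOQ model with planned shortages:

```python
from math import sqrt

def eoq_basic(d, K, h):
    """Basic EOQ model: optimal batch size Q* and cycle time t* = Q*/d."""
    q = sqrt(2 * d * K / h)
    return q, q / d

def eoq_planned_shortages(d, K, h, p):
    """EOQ with planned shortages: returns (Q*, S*, max shortage Q* - S*, t*)."""
    base = sqrt(2 * d * K / h)
    q = base * sqrt((p + h) / p)   # optimal batch size
    s = base * sqrt(p / (p + h))   # maximum inventory level
    return q, s, q - s, q / d

# Speaker example from Sec. 18.1: d = 8,000/month, K = $12,000, h = $0.30, p = $1.10
q_basic, t_basic = eoq_basic(8000, 12000, 0.30)
q, s, short, t = eoq_planned_shortages(8000, 12000, 0.30, 1.10)
print(round(q_basic), round(t_basic, 1))              # 25298 3.2
print(round(q), round(s), round(short), round(t, 1))  # 28540 22424 6116 3.6
```

The output matches the text: permitting shortages raises the batch size from 25,298 to 28,540 and stretches the cycle from 3.2 to 3.6 months, with a maximum shortage of 6,116 speakers.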
The EOQ model with quantity discounts replaces this assumption with the following new assumption:

The unit cost of an item now depends on the quantity in the batch. In particular, an incentive is provided to place a large order by replacing the unit cost for a small quantity by a smaller unit cost for every item in a larger batch, and perhaps by even smaller unit costs for even larger batches.

Otherwise, the assumptions are the same as for the basic EOQ model.

To illustrate this model, consider the TV speakers example introduced in Sec. 18.1. Suppose now that the unit cost for every speaker is c1 = $11 if less than 10,000 speakers are produced, c2 = $10 if production falls between 10,000 and 80,000 speakers, and c3 = $9.50 if production exceeds 80,000 speakers. What is the optimal policy? The solution to this specific problem will reveal the general method.

From the results for the basic EOQ model, the total cost per unit time Tj if the unit cost is cj is given by

Tj = dK/Q + dcj + hQ/2,   for j = 1, 2, 3.

(This expression assumes that h is independent of the unit cost of the items, but a common small refinement would be to make h proportional to the unit cost to reflect the fact that the cost of capital tied up in inventory varies in this way.) A plot of Tj versus Q is shown in Fig. 18.3 for each j, where the solid part of each curve extends over the feasible range of values of Q for that discount category.

■ FIGURE 18.3 Total cost per unit time for the speaker example with quantity discounts: T1 (unit cost equals $11), T2 (unit cost equals $10), and T3 (unit cost equals $9.50) plotted against the batch size Q.

For each curve, the value of Q that minimizes Tj is found just as for the basic EOQ model. For K = 12,000, h = 0.30, and d = 8,000, this value is

Q = sqrt((2)(8,000)(12,000)/0.30) = 25,298.

(If h were not independent of the unit cost of the items, then the minimizing value of Q would be slightly different for the different curves.) This minimizing value of Q is a feasible value for the cost function T2. For any fixed Q, T2 < T1, so T1 can be eliminated from further consideration. However, T3 cannot be immediately discarded. Its minimum feasible value (which occurs at Q = 80,000) must be compared to T2 evaluated at 25,298 (which is $87,589). Because T3 evaluated at 80,000 equals $89,200, it is better to produce in quantities of 25,298, so this quantity is the optimal value for this set of quantity discounts. If the quantity discount led to a unit cost of $9 (instead of $9.50) when production exceeded 80,000, then T3 evaluated at 80,000 would equal $85,200, and the optimal production quantity would become 80,000.

Although this analysis concerned a specific problem, the same approach is applicable to any similar problem. Here is a summary of the general procedure:

1. For each available unit cost cj, use the EOQ formula for the EOQ model to calculate its optimal order quantity Q*j.
2. For each cj where Q*j is within the feasible range of order quantities for cj, calculate the corresponding total cost per unit time Tj.
3. For each cj where Q*j is not within this feasible range, determine the order quantity Qj that is at the endpoint of this feasible range that is closest to Q*j. Calculate the total cost per unit time Tj for Qj and cj.
4. Compare the Tj obtained for all the cj and choose the minimum Tj. Then choose the order quantity Qj obtained in step 2 or 3 that gives this minimum Tj.

A similar analysis can be used for other types of quantity discounts, such as incremental quantity discounts where a cost c0 is incurred for the first q0 units, c1 for the next q1 units, and so on.
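The four-step procedure can be sketched in code. This is our own illustration, not the book's template: each discount category is represented as a (unit cost, low, high) closed interval, h is taken to be independent of the unit cost as in the example, and, following the text, the $9.50 curve is evaluated at its endpoint Q = 80,000:

```python
from math import sqrt

def eoq_quantity_discounts(d, K, h, categories):
    """EOQ model with quantity discounts (all-units discounts).
    categories: list of (unit_cost, q_lo, q_hi) feasible order-quantity ranges.
    Returns (best_Q, best_total_cost_per_unit_time)."""
    def T(Q, c):
        return d * K / Q + d * c + h * Q / 2
    best = None
    for c, q_lo, q_hi in categories:
        q_star = sqrt(2 * d * K / h)        # unconstrained EOQ (h independent of c)
        q = min(max(q_star, q_lo), q_hi)    # clamp to the nearest feasible endpoint
        cost = T(q, c)
        if best is None or cost < best[1]:
            best = (q, cost)
    return best

# Speaker example: $11 below 10,000; $10 from 10,000 to 80,000; $9.50 above 80,000
cats = [(11.00, 1, 9_999), (10.00, 10_000, 80_000), (9.50, 80_000, 10**9)]
q, cost = eoq_quantity_discounts(8000, 12000, 0.30, cats)
print(round(q), round(cost))     # 25298 87589

# With the deeper $9.00 discount above 80,000, the large batch wins, as in the text:
cats9 = [(11.00, 1, 9_999), (10.00, 10_000, 80_000), (9.00, 80_000, 10**9)]
q9, cost9 = eoq_quantity_discounts(8000, 12000, 0.30, cats9)
print(round(q9), round(cost9))   # 80000 85200
```

The clamping step implements steps 2 and 3 of the procedure in one line: an interior Q*j is left alone, while an infeasible one is moved to the nearest endpoint of its category's range.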
Some Useful Excel Templates

For your convenience, we have included five Excel templates for the EOQ models in this chapter’s Excel file on the book’s website. Two of these templates are for the basic EOQ model. In both cases, you enter basic data (d, K, and h), as well as the lead time for the deliveries and the number of working days per year for the firm. The template then calculates the firm’s total annual expenditures for setups and for holding costs, as well as the sum of these two costs (the total variable cost). It also calculates the reorder point—the inventory level at which the order needs to be placed to replenish inventory so the replenishment will arrive when the inventory level drops to 0. One template (the Solver version) enables you to enter any order quantity you want and then see what the annual costs and reorder point would be. This version also enables you to use Solver to solve for the optimal order quantity. The second template (the analytical version) uses the EOQ formula to obtain the optimal order quantity.

The corresponding pair of templates also is provided for the EOQ model with planned shortages. After entering the data (including the unit shortage cost p), each of these templates will obtain the various annual costs (including the annual shortage cost). With the Solver version, you can either enter trial values of the order quantity Q and maximum shortage Q - S or solve for the optimal values, whereas the analytical version uses the formulas for Q* and Q* - S* to obtain the optimal values. The corresponding maximum inventory level S* also is included in the results.

The final template is an analytical version for the EOQ model with quantity discounts. This template includes the refinement that the unit holding cost h is proportional to the unit cost c, so h = Ic, where the proportionality factor I is referred to as the inventory holding cost rate. Thus, the data entered includes I along with d and K.
You also need to enter the number of discount categories (where the lowest-quantity category with no discount counts as one of these), as well as the unit price and range of order quantities for each of the categories. The template then finds the feasible order quantity that minimizes the total annual cost for each category, and also shows the individual annual costs (including the annual purchase cost) that would result. Using this information, the template identifies the overall optimal order quantity and the resulting total annual cost.

All these templates can be helpful for calculating a lot of information quickly after entering the basic data for the problem. However, perhaps a more important use is for performing sensitivity analysis on these data. You can immediately see how the results would change for any specific change in the data by entering the new data values in the spreadsheet. Doing this repeatedly for a variety of changes in the data is a convenient way to perform sensitivity analysis.

Observations about EOQ Models

1. If it is assumed that the unit cost of an item is constant throughout time, independent of the batch size (as with the first two EOQ models), the unit cost does not appear in the optimal solution for the batch size. This result occurs because no matter what inventory policy is used, the same number of units is required per unit time, so this cost per unit time is fixed.
2. The analysis of the EOQ models assumed that the batch size Q is constant from cycle to cycle. The resulting optimal batch size Q* actually minimizes the total cost per unit time for any cycle, so the analysis shows that this constant batch size should be used from cycle to cycle even if a constant batch size is not assumed.
3. The optimal inventory level at which inventory should be replenished can never be greater than zero under these models.
Waiting until the inventory level drops to zero (or less than zero when planned shortages are permitted) reduces both holding costs and the frequency of incurring the setup cost K. However, if the assumptions of a known constant demand rate and of an order quantity that arrives just when desired (because of a constant lead time) are not completely satisfied, it may become prudent to plan to have some “safety stock” left when the inventory is scheduled to be replenished. This is accomplished by increasing the reorder point above that implied by the model.
4. The basic assumptions of the EOQ models are rather demanding ones. They seldom are satisfied completely in practice. For example, even when a constant demand rate is planned (as with the production line in the TV speakers example in Sec. 18.1), interruptions and variations in the demand rate still are likely to occur. It also is very difficult to satisfy the assumption that the order quantity to replenish inventory arrives just when desired. Although the schedule may call for a constant lead time, variations in the actual lead times often will occur. Fortunately, the EOQ models have been found to be robust in the sense that they generally still provide nearly optimal results even when their assumptions are only rough approximations of reality. This is a key reason why these models are so widely used in practice. However, in those cases where the assumptions are significantly violated, it is important to do some preliminary analysis to evaluate the adequacy of an EOQ model before it is used. This preliminary analysis should focus on calculating the total cost per unit time provided by the model for various order quantities and then assessing how this cost curve would change under more realistic assumptions.
5. Selected Reference 4 provides much more information about a variety of deterministic and stochastic EOQ models and their applications.
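The flatness of the cost curve noted in observation 4 (and used earlier to justify a convenient run of 24,000 speakers) is easy to check numerically. A small sketch for the speaker example, evaluated at a few hypothetical alternative batch sizes:

```python
from math import sqrt

d, K, h = 8000, 12000, 0.30   # speaker example

def variable_cost(Q):
    # Setup cost plus holding cost per unit time; the fixed dc term is omitted
    # because it does not depend on Q.
    return d * K / Q + h * Q / 2

q_star = sqrt(2 * d * K / h)  # about 25,298
for q in (20_000, 24_000, q_star, 30_000):
    ratio = variable_cost(q) / variable_cost(q_star)
    print(round(q), round(variable_cost(q)), round(ratio, 4))
```

A batch of 24,000 costs only about 0.1 percent more per unit time than the optimal 25,298, confirming that nearby round numbers are nearly optimal.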
Different Types of Demand for a Product

Example 2 (wholesale distribution of bicycles) introduced in Sec. 18.1 focused on managing the inventory of one model of bicycle. The demand for this product is generated by the wholesaler’s customers (various retailers) who purchase these bicycles to replenish their inventories according to their own schedules. The wholesaler has no control over this demand. Because this model is sold separately from other models, its demand does not even depend on the demand for any of the company’s other products. Such demand is referred to as independent demand.

The situation is different for the speaker example introduced in Sec. 18.1. Here, the product under consideration—television speakers—is just one component being assembled into the company’s final product—television sets. Consequently, the demand for the speakers depends on the demand for the television set. The pattern of this demand for the speakers is determined internally by the production schedule that the company establishes for the television sets by adjusting the production rate for the production line producing the sets. Such demand is referred to as dependent demand.

The television manufacturing company produces a considerable number of products—various parts and subassemblies—that become components of the television sets. Like the speakers, these various products also are dependent-demand products. Because of the dependencies and interrelationships involved, managing the inventories of dependent-demand products can be considerably more complicated than for independent-demand products. A popular technique for assisting in this task is material requirements planning, abbreviated as MRP. MRP is a computer-based system for planning, scheduling, and controlling the production of all the components of a final product. The system begins by “exploding” the product by breaking it down into all its subassemblies and then into all its individual component parts.
A production schedule is then developed, using the demand and lead time for each component to determine the demand and lead time for the subsequent component in the process. In addition to a master production schedule for the final product, a bill of materials provides detailed information about all its components. Inventory status records give the current inventory levels, number of units on order, etc., for all the components. When more units of a component need to be ordered, the MRP system automatically generates either a purchase order to the vendor or a work order to the internal department that produces the component.³

The Role of Just-In-Time (JIT) Inventory Management

When the basic EOQ model was used to calculate the optimal production lot size for the speaker example, a very large quantity (25,298 speakers) was obtained. This enables having relatively infrequent setups to initiate production runs (only once every 3.2 months). However, it also causes large average inventory levels (12,649 speakers), which leads to a large total holding cost per year of over $45,000. The basic reason for this large cost is the high setup cost of K = $12,000 for each production run. The setup cost is so sizable because the production facilities need to be set up again from scratch each time. Consequently, even with less than four production runs per year, the annual setup cost is over $45,000, just like the annual holding costs.

Rather than continuing to tolerate a $12,000 setup cost each time in the future, another option for the company is to seek ways to reduce this setup cost. One possibility is to develop methods for quickly transferring machines from one use to another. Another is to dedicate a group of production facilities to the production of speakers so they would remain set up between production runs in preparation for beginning another run whenever needed.
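The payoff from reducing K follows directly from the EOQ formula: Q* grows with the square root of K, so cutting the setup cost by a factor of 100 cuts the optimal lot size (and the matching annual setup and holding costs) by a factor of 10. A quick sketch for the speaker example, using a few hypothetical setup costs:

```python
from math import sqrt

d, h = 8000, 0.30   # speaker example: demand rate per month, unit holding cost

def eoq(K):
    return sqrt(2 * d * K / h)

for K in (12_000, 1_200, 120):
    q = eoq(K)
    # setup cost, optimal lot size, cycle length in months
    print(K, round(q), round(q / d, 2))
```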
Suppose the setup cost could be drastically reduced from $12,000 all the way down to K = $120. This would reduce the optimal production lot size from 25,298 speakers down to Q* = 2,530 speakers, so a new production run lasting only a brief time would be initiated more than 3 times per month. This also would reduce both the annual setup cost and the annual holding cost from over $45,000 down to only slightly over $4,500 each. By having such frequent (but inexpensive) production runs, the speakers would be produced essentially just in time for their assembly into television sets.

Just in time actually is a well-developed philosophy for managing inventories. A just-in-time (JIT) inventory system places great emphasis on reducing inventory levels to a bare minimum, and so providing the items just in time as they are needed. This philosophy was first developed in Japan, beginning with the Toyota Company in the late 1950s, and is given part of the credit for the remarkable gains in Japanese productivity through much of the late 20th century. The philosophy also has become popular in other parts of the world, including the United States, in more recent years.⁴

Although the just-in-time philosophy sometimes is misinterpreted as being incompatible with using an EOQ model (since the latter gives a large order quantity when the setup cost is large), they actually are complementary. A JIT inventory system focuses on finding ways to greatly reduce the setup costs so that the optimal order quantity will be small. Such a system also seeks ways to reduce the lead time for the delivery of an order, since this reduces the uncertainty about the number of units that will be needed when the delivery occurs. Another emphasis is on improving preventive maintenance so that the required production facilities will be available to produce the units when they are needed.

³ A series of articles on pp. 32–44 of the September 1996 issue of IIE Solutions provides further information about MRP.
⁴ For further information about applications of JIT in the United States, see R. E. White, J. N. Pearson, and J. R. Wilson, “JIT Manufacturing: A Survey of Implementations in Small and Large U.S. Manufacturing,” Management Science, 45: 1–15, 1999. Also see H. Chen, M. Z. Frank, and O. Q. Wu, “What Actually Happened to the Inventories of American Companies Between 1981 and 2000,” Management Science, 51(7): 1015–1031, July 2005.

Still another emphasis is on improving the production process to guarantee good quality. Providing just the right number of units just in time does not provide any leeway for including defective units.

In more general terms, the focus of the just-in-time philosophy is on avoiding waste wherever it might occur in the production process. One form of waste is unnecessary inventory. Others are unnecessarily large setup costs, unnecessarily long lead times, production facilities that are not operational when they are needed, and defective items. Minimizing these forms of waste is a key component of superior inventory management.

■ 18.4 A DETERMINISTIC PERIODIC-REVIEW MODEL

The preceding section explored the basic EOQ model and some of its variations. The results were dependent upon the assumption of a constant demand rate. When this assumption is relaxed, i.e., when the amounts that need to be withdrawn from inventory are allowed to vary from period to period, the EOQ formula no longer ensures a minimum-cost solution.

Consider the following periodic-review model. Planning is to be done for the next n periods regarding how much (if any) to produce or order to replenish inventory at the beginning of each of the periods. (The order to replenish inventory can involve either purchasing the units or producing them, but the latter case is far more common with applications of this model, so we mainly will use the terminology of producing the units.)
The demands for the respective periods are known (but not the same in every period) and are denoted by

ri = demand in period i,   for i = 1, 2, . . . , n.

These demands must be met on time. There is no stock on hand initially, but there is still time for a delivery at the beginning of period 1.

The costs included in this model are similar to those for the basic EOQ model:

K = setup cost for producing or purchasing any units to replenish inventory at beginning of period,
c = unit cost for producing or purchasing each unit,
h = holding cost for each unit left in inventory at end of period.

Note that this holding cost h is assessed only on inventory left at the end of a period. There also are holding costs for units that are in inventory for a portion of the period before being withdrawn to satisfy demand. However, these are fixed costs that are independent of the inventory policy and so are not relevant to the analysis. Only the variable costs that are affected by which inventory policy is chosen, such as the extra holding costs that are incurred by carrying inventory over from one period to the next, are relevant for selecting the inventory policy. By the same reasoning, the unit cost c is an irrelevant fixed cost because, over all the time periods, all inventory policies produce the same number of units at the same cost. Therefore, c will be dropped from the analysis hereafter.

The objective is to minimize the total cost over the n periods. This is accomplished by ignoring the fixed costs and minimizing the total variable cost over the n periods, as illustrated by the following example.

An Example

An airplane manufacturer specializes in producing small airplanes. It has just received an order from a major corporation for 10 customized executive jet airplanes for the use of the corporation’s upper management.
The order calls for three of the airplanes to be delivered (and paid for) during the upcoming winter months (period 1), two more to be delivered during the spring (period 2), three more during the summer (period 3), and the final two during the fall (period 4). Setting up the production facilities to meet the corporation’s specifications for these airplanes requires a setup cost of $2 million. The manufacturer has the capacity to produce all 10 airplanes within a couple of months, when the winter season will be under way. However, this would necessitate holding seven of the airplanes in inventory, at a cost of $200,000 per airplane per period, until their scheduled delivery times. To reduce or eliminate these substantial holding costs, it may be worthwhile to produce a smaller number of these airplanes now and then to repeat the setup (again incurring the cost of $2 million) in some or all of the subsequent periods to produce additional small numbers. Management would like to determine the least costly production schedule for filling this order.

Thus, using the notation of the model, the demands for this particular airplane during the four upcoming periods (seasons) are

r1 = 3,   r2 = 2,   r3 = 3,   r4 = 2.

Using units of millions of dollars, the relevant costs are

K = 2,   h = 0.2.

The problem is to determine how many airplanes to produce (if any) during the beginning of each of the four periods in order to minimize the total variable cost.

The high setup cost K gives a strong incentive not to produce airplanes every period and preferably just once. However, the significant holding cost h makes it undesirable to carry a large inventory by producing the entire demand for all four periods (10 airplanes) at the beginning. Perhaps the best approach would be an intermediate strategy where airplanes are produced more than once but less than four times. For example, one such feasible solution (but not an optimal one) is depicted in Fig. 18.4, which shows the evolution of the inventory level over the next year that results from producing three airplanes at the beginning of the first period, six airplanes at the beginning of the second period, and one airplane at the beginning of the fourth period. The dots give the inventory levels after any production at the beginning of the four periods.

■ FIGURE 18.4 The inventory levels that result from one sample production schedule for the airplane example. (Inventory level plotted against the four periods.)

How can the optimal production schedule be found? For this model in general, production (or purchasing) is automatic in period 1, but a decision on whether to produce must be made for each of the other n - 1 periods. Therefore, one approach to solving this model is to enumerate, for each of the 2^(n-1) combinations of production decisions, the possible quantities that can be produced in each period where production is to occur. This approach is rather cumbersome, even for moderate-sized n, so a more efficient method is desirable. Such a method is described next in general terms, and then we will return to finding the optimal production schedule for the example. Although the general method can be used when either producing or purchasing to replenish inventory, we now will only use the terminology of producing for definiteness.

An Algorithm

The key to developing an efficient algorithm for finding an optimal inventory policy (or equivalently, an optimal production schedule) for the above model is the following insight into the nature of an optimal policy.

An optimal policy (production schedule) produces only when the inventory level is zero.

To illustrate why this result is true, consider the policy shown in Fig. 18.4 for the example. (Call it policy A.)
Policy A violates the above characterization of an optimal policy because production occurs at the beginning of period 4 when the inventory level is greater than zero (namely, one airplane). However, this policy can easily be adjusted to satisfy the above characterization by simply producing one less airplane in period 2 and one more airplane in period 4. This adjusted policy (call it B) is shown by the dashed line in Fig. 18.5 wherever B differs from A (the solid line). Now note that policy B must have less total cost than policy A. The setup costs (and the production costs) for both policies are the same. However, the holding cost is smaller for B than for A because B has less inventory than A in periods 2 and 3 (and the same inventory in the other periods). Therefore, B is better than A, so A cannot be optimal.

■ FIGURE 18.5 Comparison of two inventory policies (production schedules) for the airplane example. (The inventory levels for policies A and B are plotted against the four periods.)

This characterization of optimal policies can be used to identify policies that are not optimal. In addition, because it implies that the only choices for the amount produced at the beginning of the ith period are 0, ri, ri + ri+1, . . . , or ri + ri+1 + ··· + rn, it can be exploited to obtain an efficient algorithm that is related to the deterministic dynamic programming approach described in Sec. 11.3.

In particular, define

Ci = total variable cost of an optimal policy for periods i, i + 1, . . . , n when period i starts with zero inventory (before producing), for i = 1, 2, . . . , n.

By using the dynamic programming approach of solving backward period by period, these Ci values can be found by first finding Cn, then finding Cn-1, and so on. Thus, after Cn, Cn-1, . . . , Ci+1 are found, then Ci can be found from the recursive relationship

Ci = min over j = i, i + 1, . . . , n of {Cj+1 + K + h[ri+1 + 2ri+2 + 3ri+3 + ··· + (j - i)rj]},

where j can be viewed as an index that denotes the (end of the) period when the inventory reaches a zero level for the first time after production at the beginning of period i. In the time interval from period i through period j, the term with coefficient h represents the total holding cost over this interval. When j = n, the term Cn+1 = 0. The minimizing value of j indicates that if the inventory level does indeed drop to zero upon entering period i, then the production in period i should cover all demand from period i through this period j.

The algorithm for solving the model consists basically of solving for Cn, Cn-1, . . . , C1 in turn. For i = 1, the minimizing value of j then indicates that the production in period 1 should cover the demand through period j, so the second production will be in period j + 1. For i = j + 1, the new minimizing value of j identifies the time interval covered by the second production, and so forth to the end. We will illustrate this approach with the example.

The application of this algorithm is much quicker than the full dynamic programming approach.⁵ As in dynamic programming, Cn, Cn-1, . . . , C2 must be found before C1 is obtained. However, the number of calculations is much smaller, and the number of possible production quantities is greatly reduced.

Application of the Algorithm to the Example

Returning to the airplane example, first we consider the case of finding C4, the cost of the optimal policy from the beginning of period 4 to the end of the planning horizon:

C4 = C5 + 2 = 0 + 2 = 2.

To find C3, we must consider two cases, namely, the first time after period 3 when the inventory reaches a zero level occurs at (1) the end of the third period or (2) the end of the fourth period. In the recursive relationship for C3, these two cases correspond to (1) j = 3 and (2) j = 4.
Denote the corresponding costs (the right-hand side of the recursive relationship with this j) by C3^(3) and C3^(4), respectively. The policy associated with C3^(3) calls for producing only for period 3 and then following the optimal policy for period 4, whereas the policy associated with C3^(4) calls for producing for periods 3 and 4. The cost C3 is then the minimum of C3^(3) and C3^(4). These cases are reflected by the policies given in Fig. 18.6.

■ FIGURE 18.6 Alternative production schedules when production is required at the beginning of period 3 for the airplane example: the schedule resulting in C3^(3) and the schedule resulting in C3^(4).

C3^(3) = C4 + 2 = 2 + 2 = 4.
C3^(4) = C5 + 2 + 0.2(2) = 0 + 2 + 0.4 = 2.4.
C3 = min{4, 2.4} = 2.4.

Therefore, if the inventory level drops to zero upon entering period 3 (so production should occur then), the production in period 3 should cover the demand for both periods 3 and 4.

⁵ The full dynamic programming approach is useful, however, for solving generalizations of the model (e.g., nonlinear production cost and holding cost functions) where the above algorithm is no longer applicable. (See Probs. 18.4-3 and 18.4-4 for examples where dynamic programming would be used to deal with generalizations of the model.)

To find C2, we must consider three cases, namely, the first time after period 2 when the inventory reaches a zero level occurs at (1) the end of the second period, (2) the end of the third period, or (3) the end of the fourth period. In the recursive relationship for C2, these cases correspond to (1) j = 2, (2) j = 3, and (3) j = 4, where the corresponding costs are C2^(2), C2^(3), and C2^(4), respectively. The cost C2 is then the minimum of C2^(2), C2^(3), and C2^(4).

C2^(2) = C3 + 2 = 2.4 + 2 = 4.4.
C2^(3) = C4 + 2 + 0.2(3) = 2 + 2 + 0.6 = 4.6.
C2^(4) = C5 + 2 + 0.2[3 + 2(2)] = 0 + 2 + 1.4 = 3.4.
C2 = min{4.4, 4.6, 3.4} = 3.4.

Consequently, if production occurs in period 2 (because the inventory level drops to zero), this production should cover the demand for all the remaining periods.

Finally, to find C1, we must consider four cases, namely, the first time after period 1 when the inventory reaches zero occurs at the end of (1) the first period, (2) the second period, (3) the third period, or (4) the fourth period. These cases correspond to j = 1, 2, 3, 4 and to the costs C1^(1), C1^(2), C1^(3), C1^(4), respectively. The cost C1 is then the minimum of C1^(1), C1^(2), C1^(3), and C1^(4).

C1^(1) = C2 + 2 = 3.4 + 2 = 5.4.
C1^(2) = C3 + 2 + 0.2(2) = 2.4 + 2 + 0.4 = 4.8.
C1^(3) = C4 + 2 + 0.2[2 + 2(3)] = 2 + 2 + 1.6 = 5.6.
C1^(4) = C5 + 2 + 0.2[2 + 2(3) + 3(2)] = 0 + 2 + 2.8 = 4.8.
C1 = min{5.4, 4.8, 5.6, 4.8} = 4.8.

Note that C1^(2) and C1^(4) tie as the minimum, giving C1. This means that the policies corresponding to C1^(2) and C1^(4) tie as being the optimal policies. The C1^(4) policy says to produce enough in period 1 to cover the demand for all four periods. The C1^(2) policy covers only the demand through period 2. Since the latter policy has the inventory level drop to zero at the end of period 2, the C3 result is used next, namely, produce enough in period 3 to cover the demand for periods 3 and 4. The resulting production schedules are summarized below.

Optimal Production Schedules
1. Produce 10 airplanes in period 1. Total variable cost = $4.8 million.
2. Produce 5 airplanes in period 1 and 5 airplanes in period 3. Total variable cost = $4.8 million.

CHAPTER 18 INVENTORY THEORY

If you would like to see another example applying this algorithm, one is provided in the Solved Examples section of the book's website.

■ 18.5 DETERMINISTIC MULTIECHELON INVENTORY MODELS FOR SUPPLY CHAIN MANAGEMENT

Our growing global economy has caused a dramatic shift in inventory management in recent years.
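The backward recursion illustrated above is easy to automate. The following Python sketch reproduces C1 through C4 for the airplane example of Sec. 18.4; the per-period demands (3, 2, 3, 2 airplanes) are inferred from the holding-cost terms in the calculations above, K = 2 and h = 0.2 are in millions of dollars, and the function name is illustrative rather than from the text.

```python
def periodic_review_costs(r, K, h):
    """Backward recursion C_i = min over j >= i of
    C_{j+1} + K + h * sum_{t=i+1..j} (t - i) * r_t.
    r is the list of per-period demands (r[0] is the period-1 demand);
    returns (C, cover), where C[i] is the cost from the start of period
    i+1 onward (with C[n] = 0 playing the role of C_{n+1}) and cover[i]
    is the minimizing j, i.e., the last period covered by that production."""
    n = len(r)
    C = [0.0] * (n + 1)
    cover = [0] * n
    for i in range(n - 1, -1, -1):          # solve C_n, C_{n-1}, ..., C_1 in turn
        best = None
        for j in range(i, n):               # inventory hits zero at end of period j+1
            holding = h * sum((t - i) * r[t] for t in range(i + 1, j + 1))
            cost = C[j + 1] + K + holding
            if best is None or cost < best:
                best, cover[i] = cost, j
        C[i] = best
    return C, cover

# Airplane example: demands 3, 2, 3, 2; K = $2 million; h = $0.2 million
C, cover = periodic_review_costs([3, 2, 3, 2], K=2.0, h=0.2)
```

The list `cover` records the minimizing j for each i, from which an optimal production schedule can be read off exactly as described above (ties, such as the one between C1^(2) and C1^(4), are broken in favor of the earlier j).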
Now, as never before, the inventory of many manufacturers is scattered throughout the world. Even the inventory of an individual product may be dispersed globally. A manufacturer’s inventory may be stored initially at the point or points of manufacture (one echelon of the inventory system), then at national or regional warehouses (a second echelon), then at field distribution centers (a third echelon), and so on. Thus, each stage at which inventory is held in the progression through a multistage inventory system is called an echelon of the inventory system. Such a system with multiple echelons of inventory is referred to as a multiechelon inventory system. In the case of a fully integrated corporation that both manufactures its products and sells them at the retail level, its echelons will extend all the way to its retail outlets. Some coordination is needed between the inventories of any particular product at the different echelons. Since the inventory at each echelon (except the last one) is used to replenish the inventory at the next echelon as needed, the inventory level currently needed at an echelon is affected by how soon replenishment will be needed at the various locations for the next echelon. The analysis of multiechelon inventory systems is a major challenge. However, considerable innovative research (with roots tracing back to the middle of the 20th century) has been conducted to develop tractable multiechelon inventory models. With the growing prominence of multiechelon inventory systems, this undoubtedly will continue to be a very active area of research. Another key concept that has emerged in the global economy is that of supply chain management. This concept pushes the management of a multiechelon inventory system one step further by also considering what needs to happen to bring a product into the inventory system in the first place. 
However, as with inventory management, the main purpose still is to win the competitive battle against other companies in bringing the product to the customers as promptly as possible.

A supply chain is a network of facilities that procure raw materials, transform them into intermediate goods and then final products, and finally deliver the products to customers through a distribution system that includes a multiechelon inventory system. Thus, a supply chain spans procurement, manufacturing, and distribution. Since inventories are needed at all these stages, effective inventory management is one key element in managing the supply chain. To fill orders efficiently, it is necessary to understand the linkages and interrelationships of all the key elements of the supply chain. Therefore, integrated management of the supply chain has become a key success factor for some of today's leading companies.

An Application Vignette

Founded in 1837, Deere & Company (also commonly known as John Deere) is a leading worldwide producer of equipment for agriculture, forestry, and consumer use. The company sells its products through an international network of independently owned dealers and retailers. Its quarterly worldwide net sales and revenues set a new record of nearly $11 billion in the second quarter of 2013.

For decades, the Commercial and Consumer Equipment (C&CE) Division of Deere pushed inventories to the dealers, booked the revenues, and hoped that the dealers had the right products to sell at the right time. However, the division had an inventory-to-annual-sales ratio of 58 percent based on inventories at Deere and at its dealers in 2001, so inventory costs were getting badly out of control. Ironically, although dealers had large inventories, they often did not have the right products in stock. C&CE's supply chain managers needed to cut inventory levels while improving product availability and delivery performance. They had read about inventory optimization successes in Fortune, so they hired a leading OR consulting firm (SmartOps) to tackle this challenge.

With 300 products, 2,500 North American dealers, five plants and associated warehouses, seven European warehouses, and several retailers' consignment warehouses, the coordination and optimization of C&CE's supply chain was indeed a formidable challenge. However, SmartOps rose to this challenge very successfully by applying state-of-the-art inventory optimization techniques embedded in its multistage inventory planning and optimization software product to set trustworthy targets. C&CE used these targets, together with appropriate dealer incentives, to transform the operation of its entire supply chain on an enterprise-wide basis. In the process, Deere improved its factories' on-time shipments from 63 percent to 92 percent, while maintaining customer service levels at 90 percent. By the end of 2004, the C&CE Division also had exceeded its goal of $1 billion of inventory reduction or avoidance.

Source: Troyer, L., J. Smith, S. Marshall, E. Yaniv, S. Tayur, M. Barkman, A. Kaya, and Y. Liu: "Improving Asset Management and Order Fulfillment at Deere & Company's C&CE Division," Interfaces, 35(1): 76–87, Jan.–Feb. 2005. (A link to this article is provided on our website, www.mhhe.com/hillier.)

To aid in supply chain management, multiechelon inventory models now are likely to include echelons that incorporate the early part of the supply chain as well as the echelons for the distribution of the finished product. Thus, the first echelon might be the inventory of raw materials or components that eventually will be used to produce the product. A second echelon could be the inventory of subassemblies that are produced from the raw materials or components in preparation for later assembling the subassemblies into the final product. This might then lead into the echelons for the distribution of the
finished product, starting with storage at the point or points of manufacture, then at national or regional warehouses, then at field distribution centers, and so on.

The usual objective for a multiechelon inventory model is to coordinate the inventories at the various echelons so as to minimize the total cost associated with the entire multiechelon inventory system. This is a natural objective for a fully integrated corporation that operates this entire system. It might also be a suitable objective when certain echelons are managed by either the suppliers or the customers of the company. The reason is that a key concept of supply chain management is that a company should strive to develop an informal partnership relationship with its suppliers and customers that enables them jointly to maximize their total profit. This often leads to developing mutually beneficial supply contracts that enable reducing the total cost of operating a jointly managed multiechelon inventory system.

The analysis of multiechelon inventory models tends to be considerably more complicated than that of the single-facility inventory models considered elsewhere in this chapter. However, we present two relatively tractable multiechelon inventory models below that illustrate the relevant concepts.

A Model for a Serial Two-Echelon System

The simplest possible multiechelon inventory system is one where there are only two echelons and only a single installation at each echelon. Figure 18.7 depicts such a system, where the inventory at installation 1 is used to periodically replenish the inventory at installation 2. For example, installation 1 might be a factory producing a certain product with occasional production runs, and installation 2 might be the distribution center for that product.
Alternatively, installation 2 might be the factory producing the product, and then installation 1 is another facility where the components needed to produce that product are themselves either produced or received from suppliers.

■ FIGURE 18.7 A serial two-echelon inventory system. [Inventory at installation 1 feeds inventory at installation 2.]

Since the items at installation 1 and installation 2 may be somewhat different, we will refer to them as item 1 and item 2, respectively. The units of item 1 and item 2 are defined so that exactly one unit of item 1 is needed to obtain one unit of item 2. For example, if item 1 collectively consists of the components needed to produce the final product (item 2), then one set of components needed to produce one unit of the final product is defined as one unit of item 1. The model makes the following assumptions.

Assumptions for Serial Two-Echelon Model

1. The assumptions of the basic EOQ model (see Sec. 18.3) hold at installation 2. Thus, there is a known constant demand rate of d units per unit time, an order quantity of Q2 units is placed in time to replenish inventory when the inventory level drops to zero, and planned shortages are not allowed.
2. The relevant costs at installation 2 are a setup cost of K2 each time an order is placed and a holding cost of h2 per unit per unit time.
3. Installation 1 uses its inventory to provide a batch of Q2 units to installation 2 immediately each time an order is received.
4. An order quantity of Q1 units is placed in time to replenish inventory at installation 1 before a shortage would occur.
5. Similarly to installation 2, the relevant costs at installation 1 are a setup cost of K1 each time an order is placed and a holding cost of h1 per unit per unit time.
6. The units increase in value when they are received and processed at installation 2, so h1 < h2.
7. The objective is to minimize the sum of the variable costs per unit time at the two installations.
(This will be denoted by C.) The word “immediately” in assumption 3 implies that there is essentially zero lead time between when installation 2 places an order for Q2 units and installation 1 fills that order. In reality, it would be common to have a significant lead time because of the time needed for installation 1 to receive and process the order and then to transport the batch to installation 2. However, as long as the lead time is essentially fixed, this is equivalent to assuming zero lead time for modeling purposes because the order would be placed just in time to have the batch arrive when the inventory level drops to zero. For example, if the lead time is one week, the order would be placed one week before the inventory level drops to zero. Although a zero lead time and a fixed lead time are equivalent for modeling purposes, we specifically are assuming a zero lead time because it simplifies the conceptualization of how the inventory levels at the two installations vary simultaneously over time. Figure 18.8 depicts this conceptualization. Because the assumptions of the basic EOQ model hold at installation 2, the inventory levels there vary according to the familiar saw-tooth pattern first shown in Fig. 18.1. Each time installation 2 needs to replenish its inventory, installation 1 ships Q2 units of item 1 to installation 2. Item 1 may be identical to item 2 (as in the case of a factory shipping the final product to a distribution center). If not (as in the case of a supplier shipping the components needed to produce the final product to a factory), installation 2 immediately uses the shipment of Q2 units of item 1 to produce Q2 units of item 2 (the final product). 
The inventory at installation 2 then gets depleted at the constant demand rate of d units per unit time until the next replenishment, which occurs just as the inventory level drops to 0.

■ FIGURE 18.8 The synchronized inventory levels at the two installations when Q1 = 3Q2. The installation stock is the stock that is physically being held at the installation, whereas the echelon stock includes both the installation stock and the stock of the same item that already is downstream at the next installation (if any). [The top panel plots the installation stock and echelon stock of item 1 at installation 1 over time, with levels Q1, Q1 − Q2, and Q1 − 2Q2; the bottom panel plots the stock of item 2 at installation 2 (installation stock = echelon stock), with peak Q2.]

The pattern of inventory levels over time for installation 1 is somewhat more complicated than for installation 2. Q2 units need to be withdrawn from the inventory of installation 1 to supply installation 2 each time installation 2 needs to add Q2 units to replenish its inventory. This necessitates replenishing the inventory of installation 1 occasionally, so an order quantity of Q1 units is placed periodically. Using the same kind of reasoning as employed in the preceding section (including in Figs. 18.4 and 18.5), the deterministic nature of our model implies that installation 1 should replenish its inventory only at the instant when its inventory level is zero and it is time to make a withdrawal from the inventory in order to supply installation 2. The reasoning involves checking what would happen if installation 1 were to replenish its inventory any later or any earlier than this instant. If the replenishment were any later than this instant, installation 1 could not supply installation 2 in time to continue following the optimal inventory policy there, so this is unacceptable.
If the replenishment were any earlier than this instant, installation 1 would incur the extra cost of holding this inventory until it is time to supply installation 2, so it is better to delay the replenishment at installation 1 until this instant. This leads to the following insight:

An optimal policy should have Q1 = nQ2, where n is a fixed positive integer. Furthermore, installation 1 should replenish its inventory with a batch of Q1 units only when its inventory level is zero and it is time to supply installation 2 with a batch of Q2 units.

This is the kind of policy depicted in Fig. 18.8, which shows the case where n = 3. In particular, each time installation 1 receives a batch of Q1 units, it simultaneously supplies installation 2 with a batch of Q2 units, so the amount of stock left on hand (called the installation stock) at installation 1 becomes (Q1 − Q2) units. After later supplying installation 2 with two more batches of Q2 units, Fig. 18.8 shows that the next cycle begins with installation 1 receiving another batch of Q1 units at the same time as when it needs to supply installation 2 with yet another batch of Q2 units. The dashed line in the top part of Fig. 18.8 shows another quantity called the echelon stock for installation 1. The echelon stock of a particular item at any installation in a multiechelon inventory system consists of the stock of the item that is physically on hand at the installation (referred to as the installation stock) plus the stock of the same item that already is downstream (and perhaps incorporated into a more finished product) at subsequent echelons of the system. Since the stock of item 1 at installation 1 is shipped periodically to installation 2, where it is transformed immediately into item 2, the echelon stock at installation 1 in Fig. 18.8 is the sum of the installation stock there and the inventory level at installation 2.
At time 0, the echelon stock of item 1 at installation 1 is Q1 because (Q1 − Q2) units remain on hand and Q2 units have just been shipped to installation 2 to replenish the inventory there. As the constant demand rate at installation 2 withdraws inventory there accordingly, the echelon stock of item 1 at installation 1 decreases at this same constant rate until the next shipment of Q1 units is received there. If the echelon stock of item 1 at installation 1 were to be plotted over a longer period than shown in Fig. 18.8, you would see the same saw-tooth pattern of inventory levels as in Fig. 18.1.

You will see soon that echelon stock plays a fundamental role in the analysis of multiechelon inventory systems. The reason is that the saw-tooth pattern of inventory levels for echelon stock enables using an analysis similar to that for the basic EOQ model.

Since the objective is to minimize the sum of the variable costs per unit time at the two installations, the easiest (and commonly used) approach would be to solve separately for the values of Q2 and Q1 = nQ2 that minimize the total variable cost per unit time at installation 2 and installation 1, respectively. Unfortunately, this approach overlooks (or ignores) the connections between the variable costs at the two installations. Because the batch size Q2 for item 2 affects the pattern of inventory levels for item 1 at installation 1, optimizing Q2 separately without considering the consequences for item 1 does not lead to an overall optimal solution.

To better understand this subtle point, it may be instructive to begin by optimizing separately at the two installations. We will do this and then demonstrate that this can lead to fairly large errors.

The Trap of Optimizing the Two Installations Separately. Let us begin by optimizing installation 2 by itself. Since the assumptions for installation 2 fit the basic EOQ model precisely, the results presented in Sec. 18.3 for this model can be used directly.
The total variable cost per unit time at this installation is

C2 = dK2/Q2 + h2Q2/2.

(This expression for total variable cost differs from the one for total cost given in Sec. 18.3 for the basic EOQ model by deleting the fixed cost, dc, where c is the unit cost of acquiring the item.) The EOQ formula indicates that the optimal order quantity for this installation by itself is

Q2* = sqrt(2dK2/h2),

so the resulting value of C2 with Q2 = Q2* is

C2* = sqrt(2dK2h2).

Now consider installation 1 with an order quantity of Q1 = nQ2. Figure 18.8 indicates that the average inventory level of installation stock is (n − 1)Q2/2. Therefore, since installation 1 needs to replenish its inventory with Q1 units every Q1/d = nQ2/d units of time, the total variable cost per unit time at installation 1 is

C1 = dK1/(nQ2) + h1(n − 1)Q2/2.

To find the order quantity Q1 = nQ2 that minimizes C1, given Q2 = Q2*, we need to solve for the value of n that minimizes C1. Ignoring the requirement that n be an integer, this is done by differentiating C1 with respect to n, setting the derivative equal to zero (while noting that the second derivative is positive for positive n), and solving for n, which yields

n* = (1/Q2*) sqrt(2dK1/h1) = sqrt(K1h2/(K2h1)).

If n* is an integer, then Q1 = n*Q2* is the optimal order quantity for installation 1, given Q2 = Q2*. If n* is not an integer, then n* needs to be rounded either up or down to an integer. The rule for doing this is the following.

Rounding Procedure for n*
If n* ≤ 1, choose n = 1.
If n* > 1, let [n*] be the largest integer ≤ n*, so [n*] ≤ n* ≤ [n*] + 1, and then round as follows.
If n*/[n*] ≤ ([n*] + 1)/n*, choose n = [n*].
If n*/[n*] > ([n*] + 1)/n*, choose n = [n*] + 1.

The formula for n* indicates that its value depends on both K1/K2 and h2/h1. If both of these quantities are considerably greater than 1, then n* also will be considerably greater than 1. Recall that assumption 6 of the model is that h1 < h2.
This implies that h2/h1 exceeds 1, perhaps substantially so. The reason assumption 6 usually holds is that item 1 normally increases in value when it gets converted into item 2 (the final product) after item 1 is transferred to installation 2 (the location where the demand can be met for the final product). This means that the cost of capital tied up in each unit in inventory (usually a primary component in holding costs) also will increase as the units move from installation 1 to installation 2. Similarly, if a production run needs to be set up to produce each batch at installation 1 (so K1 is large), whereas only a relatively small administrative cost of K2 is required for installation 2 to place each order, then K1/K2 will be considerably greater than 1.

The flaw in the above analysis comes in the first step when choosing the order quantity for installation 2. Rather than considering only the costs at installation 2 when doing this, the resulting costs at installation 1 also should have been taken into account. Let us turn now to the valid analysis that simultaneously considers both installations by minimizing the sum of the costs at the two locations.

Optimizing the Two Installations Simultaneously. By adding the costs at the individual installations obtained above, the total variable cost per unit time at the two installations is

C = C1 + C2 = (K1/n + K2)(d/Q2) + [(n − 1)h1 + h2](Q2/2).

The holding costs on the right have an interesting interpretation in terms of the holding costs for the echelon stock at the two installations. In particular, let

e1 = h1 = echelon holding cost per unit per unit time for installation 1,
e2 = h2 − h1 = echelon holding cost per unit per unit time for installation 2.

Then the holding costs can be expressed as

[(n − 1)h1 + h2](Q2/2) = h1(nQ2/2) + (h2 − h1)(Q2/2) = e1(Q1/2) + e2(Q2/2),

where Q1/2 and Q2/2 are the average inventory levels of the echelon stock at installations 1 and 2, respectively. (See Fig. 18.8.)
The reason that e2 = h2 − h1 rather than e2 = h2 is that e1Q1/2 = h1Q1/2 already includes the holding cost for the units of item 1 that are downstream at installation 2, so e2 = h2 − h1 only needs to reflect the value added by converting the units of item 1 to units of item 2 at installation 2. (This concept of using echelon holding costs based on the value added at each installation will play an even more important role in our next model where there are more than two echelons.) Using these echelon holding costs, we now have

C = (K1/n + K2)(d/Q2) + (ne1 + e2)(Q2/2).

Differentiating with respect to Q2, setting the derivative equal to zero (while verifying that the second derivative is positive for positive Q2), and solving for Q2 yields

Q2* = sqrt(2d(K1/n + K2)/(ne1 + e2))

as the optimal order quantity (given n) at installation 2. Note that this is identical to the EOQ formula for the basic EOQ model where the total setup cost is K1/n + K2 and the total unit holding cost is ne1 + e2. Inserting this expression for Q2* into C and performing some algebraic simplification yields

C = sqrt(2d(K1/n + K2)(ne1 + e2)).

To solve for the optimal value of the order quantity at installation 1, Q1 = nQ2*, we need to find the value of n that minimizes C. The usual approach for doing this would be to differentiate C with respect to n, set this derivative equal to zero, and solve for n. However, because the expression for C involves taking a square root, doing this directly is not very convenient. A more convenient approach is to get rid of the square root sign by squaring C and minimizing C^2 instead, since the value of n that minimizes C^2 also is the value that minimizes C. Therefore, we differentiate C^2 with respect to n, set this derivative equal to zero, and solve this equation for n. Since the second derivative is positive for positive n, this yields the minimizing value of n as

n* = sqrt(K1e2/(K2e1)).

This is identical to the expression for n* obtained in the preceding subsection except that h1 and h2 have been replaced here by e1 and e2, respectively. When n* is not an integer, the procedure for rounding n* to an integer also is the same as described in the preceding subsection. Obtaining n in this way enables calculating Q2* with the above expression and then setting Q1* = nQ2*.

An Example. To illustrate these results, suppose that the parameters of the model are

K1 = $1,000, K2 = $100, h1 = $2, h2 = $3, d = 600.

Table 18.1 gives the values of Q2*, n*, n (the rounded value of n*), Q1*, and C* (the resulting total variable cost per unit time) when solving in the two ways described in this section. Thus, the second column gives the results when using the imprecise approach of optimizing the two installations separately, whereas the third column uses the valid method of optimizing the two installations simultaneously.

Note that simultaneous optimization yields rather different results than separate optimization. The biggest difference is that the order quantity at installation 2 is nearly twice as large. In addition, the total variable cost C* is nearly 3 percent smaller. With different parameter values, the error from separate optimization can sometimes lead to a considerably larger percentage difference in the total variable cost. Thus, this approach provides a pretty rough approximation. There is no reason to use it since simultaneous optimization can be performed just as readily.

A Model for a Serial Multiechelon System

We now will extend the preceding analysis to serial systems with more than two echelons. Figure 18.9 depicts this kind of system, where installation 1 has its inventory replenished periodically, then the inventory at installation 1 is used to replenish the inventory at installation 2 periodically, then installation 2 does the same for installation 3, and so on down to the final installation (installation N).
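The calculations behind Table 18.1's example (K1 = $1,000, K2 = $100, h1 = $2, h2 = $3, d = 600) can be scripted directly from the formulas above. This is a minimal sketch; the function names are illustrative, not from the text.

```python
from math import sqrt, floor

def round_n(n_star):
    """Round a noninteger multiplier n* per the rounding procedure above."""
    if n_star <= 1:
        return 1
    k = floor(n_star)
    # choose [n*] when n*/[n*] <= ([n*] + 1)/n*, else [n*] + 1
    return k if n_star / k <= (k + 1) / n_star else k + 1

def separate(d, K1, K2, h1, h2):
    """Imprecise approach: optimize installation 2 alone, then installation 1."""
    Q2 = sqrt(2 * d * K2 / h2)                # EOQ at installation 2 by itself
    n = round_n(sqrt(K1 * h2 / (K2 * h1)))    # n* = sqrt(K1 h2 / (K2 h1))
    C = (d * K2 / Q2 + h2 * Q2 / 2) + (d * K1 / (n * Q2) + h1 * (n - 1) * Q2 / 2)
    return Q2, n, C

def simultaneous(d, K1, K2, h1, h2):
    """Valid approach: minimize C = C1 + C2 using echelon holding costs."""
    e1, e2 = h1, h2 - h1
    n = round_n(sqrt(K1 * e2 / (K2 * e1)))    # n* = sqrt(K1 e2 / (K2 e1))
    Q2 = sqrt(2 * d * (K1 / n + K2) / (n * e1 + e2))
    C = sqrt(2 * d * (K1 / n + K2) * (n * e1 + e2))
    return Q2, n, C

print(separate(600, 1000, 100, 2, 3))      # Q2* = 200, n = 4, C* = 1950
print(simultaneous(600, 1000, 100, 2, 3))  # Q2* = 379.47..., n = 2, C* = 1897.36...
```

The two calls reproduce the separate-optimization and simultaneous-optimization columns of Table 18.1, including the roughly 3 percent cost gap between them.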
Some or all of the installations might be processing centers that process the items received from the preceding installation and transform them into something closer to the finished product. Installations also are used to store items until they are ready to be moved to the next processing center or to the next storage facility that is closer to the customers for the final product. Installation N does any needed final processing and also stores the final product at a location where it can immediately meet the demand for that product on a continuous basis.

■ TABLE 18.1 Application of the serial two-echelon model to the example

Quantity | Separate Optimization of the Installations | Simultaneous Optimization of the Installations
Q2*      | 200           | 379
n*       | sqrt(15) ≈ 3.87 | sqrt(5) ≈ 2.24
n        | 4             | 2
Q1*      | 800           | 758
C*       | $1,950        | $1,897

■ FIGURE 18.9 A serial multiechelon inventory system. [Inventory at installation 1 feeds inventory at installation 2, and so on through installation N.]

Since the items may be somewhat different at the different installations as they are being processed into something closer to the finished product, we will refer to them as item 1 while they are at installation 1, item 2 while at installation 2, and so forth. The units of the different items are defined so that exactly one unit of the item from one installation is needed to obtain one unit of the next item at the next installation. Our model for a serial multiechelon inventory system is a direct generalization of the preceding one for a serial two-echelon inventory system, as indicated by the following assumptions for the model.

Assumptions for Serial Multiechelon Model

1. The assumptions of the basic EOQ model (see Sec. 18.3) hold at installation N. Thus, there is a known constant demand of d units per unit time, an order quantity of QN units is placed in time to replenish inventory when the inventory level drops to zero, and planned shortages are not allowed.
2.
An order quantity of Q1 units is placed in time to replenish inventory at installation 1 before a shortage would occur.
3. Each installation except installation N uses its inventory to periodically replenish the inventory of the next installation. Thus, installation i (i = 1, 2, . . . , N − 1) provides a batch of Qi+1 units to installation (i + 1) immediately each time an order is received from installation (i + 1).
4. The relevant costs at each installation i (i = 1, 2, . . . , N) are a setup cost of Ki each time an order is placed and a holding cost of hi per unit per unit time.
5. The units increase in value each time they are received and processed at the next installation, so h1 < h2 < ··· < hN.
6. The objective is to minimize the sum of the variable costs per unit time at the N installations. (This will be denoted by C.)

The word "immediately" in assumption 3 implies that there is essentially zero lead time between when an installation places an order and the preceding installation fills that order, although a positive lead time that is fixed causes no complication.

With zero lead time, Fig. 18.10 extends Fig. 18.8 to show how the inventory levels would vary simultaneously at the installations when there are four installations instead of only two. In this case, Qi = 2Qi+1 for i = 1, 2, 3, so each of the first three installations needs to replenish its inventory once for every two times it replenishes the inventory of the next installation. Consequently, when a complete cycle of replenishments at all four installations begins at time 0, Fig. 18.10 shows an order of Q1 units arriving at installation 1 when the inventory level had been zero. Half of this order then is immediately used to replenish the inventory at installation 2. Installation 2 then does the same for installation 3, and installation 3 does the same for installation 4.
Therefore, at time 0, some of the units that just arrived at installation 1 get transferred downstream as far as to the last installation as quickly as possible. The last installation then immediately starts using its replenished inventory of the final product to meet the demand of d units per unit time for that product.

■ FIGURE 18.10 The synchronized inventory levels at four installations (N = 4) when Qi = 2Qi+1 (i = 1, 2, 3), where the solid lines show the levels of the installation stock and the dashed lines do the same for the echelon stock. [Four panels plot the inventory level over time at installations 1 through 4, with peaks Q1, Q2, Q3, Q4; at the first three installations the installation stock drops to Q1 − Q2, Q2 − Q3, and Q3 − Q4, respectively.]

Recall that the echelon stock at installation 1 is defined as the stock that is physically on hand there (the installation stock) plus the stock that already is downstream (and perhaps incorporated into a more finished product) at subsequent echelons of the inventory system. Therefore, as the dashed lines in Fig. 18.10 indicate, the echelon stock at installation 1 begins at Q1 units at time 0 and then decreases at the rate of d units per unit time until it is time to order another batch of Q1 units, after which the saw-tooth pattern continues. The echelon stock at installations 2 and 3 follows the same saw-tooth pattern, but with shorter cycles. The echelon stock coincides with the installation stock at installation 4, so the echelon stock again follows a saw-tooth pattern there.

This saw-tooth pattern made the analysis particularly straightforward in the basic EOQ model in Sec. 18.3. For the same reason, it is convenient to focus on the echelon stock instead of the installation stock at the respective installations when analyzing the current model.
To do this, we need to use the echelon holding costs, e_1 = h_1, e_2 = h_2 − h_1, e_3 = h_3 − h_2, . . . , e_N = h_N − h_{N−1}, where e_i is interpreted as the holding cost per unit per unit time on the value added by converting item (i − 1) from installation (i − 1) into item i at installation i.

Figure 18.10 assumes that the replenishment cycles at the respective installations are carefully synchronized, so that, for example, a replenishment at installation 1 occurs at the same time as some of the replenishments at the other installations. This makes sense since it would be wasteful to replenish inventory at an installation before that inventory is needed. To avoid having inventory left over at the end of a replenishment cycle at an installation, it also is logical to order only enough to supply the next installation an integer number of times. An optimal policy should have Q_i = n_i Q_{i+1} (i = 1, 2, . . . , N − 1), where n_i is a positive integer, for any replenishment cycle. (The value of n_i can be different for different replenishment cycles.) Furthermore, installation i (i = 1, 2, . . . , N − 1) should replenish its inventory with a batch of Q_i units only when its inventory level is zero and it is time to supply installation (i + 1) with a batch of Q_{i+1} units.

A Revised Problem That Is Easier to Solve. Unfortunately, it is surprisingly difficult to solve for an optimal solution for this model when N > 2. For example, an optimal solution can have order quantities that change from one replenishment cycle to the next at the same installation. Therefore, two simplifying approximations normally are made to derive a solution.

Simplifying Approximation 1: Assume that the order quantity at an installation must be the same on every replenishment cycle. Thus, Q_i = n_i Q_{i+1} (i = 1, 2, . . . , N − 1), where n_i is a fixed positive integer.

Simplifying Approximation 2: n_i = 2^{m_i} (i = 1, 2, . . . , N − 1), where m_i is a nonnegative integer, so the only values considered for n_i are 1, 2, 4, 8, . . . .
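As a concrete illustration, the conversion from installation holding costs to echelon holding costs is a one-line computation. The following Python sketch (the function name is ours, not the book's) applies it to the holding costs used in the worked example later in this section:

```python
def echelon_holding_costs(h):
    """Convert installation holding costs h_1, ..., h_N into echelon
    holding costs: e_1 = h_1 and e_i = h_i - h_{i-1} for i >= 2."""
    return [h[0]] + [h[i] - h[i - 1] for i in range(1, len(h))]

# Holding costs from the four-installation example later in this section:
e = echelon_holding_costs([0.50, 0.55, 3.55, 7.55])
print([round(x, 2) for x in e])  # [0.5, 0.05, 3.0, 4.0]
```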
In effect, these simplifying approximations revise the original problem by imposing some new constraints that reduce the size of the feasible region that needs to be considered. This revised problem has some additional structure (including the relatively simple cyclic schedule implied by simplifying approximation 2) that makes it considerably easier to solve than the original problem. Furthermore, it has been shown that an optimal solution for the revised problem always is nearly optimal for the original problem, because of the following key result.

Roundy's 98 Percent Approximation Property: The revised problem is guaranteed to provide at least a 98 percent approximation of the original problem in the following sense. The amount by which the cost of an optimal solution for the revised problem exceeds the cost of an optimal solution for the original problem never is more than 2 percent (and usually will be much less). Specifically, if

C* = total variable cost per unit time of an optimal solution for the original problem,
C̄ = total variable cost per unit time of an optimal solution for the revised problem,

then

C̄ − C* ≤ 0.02 C*.

This often is referred to as Roundy's 98 percent approximation because the formulation and proof of this fundamental property (which also holds for some more general types of multiechelon inventory systems) was developed by Professor Robin Roundy of Brigham Young University.6

One implication of the two simplifying approximations is that the order quantities for the revised problem must satisfy the weak inequalities Q_1 ≥ Q_2 ≥ · · · ≥ Q_N. The procedure for solving the revised problem has two phases, where these inequalities play a key role in phase 1. In particular, consider the following variation of both the original problem and the revised problem.

A Relaxation of the Problem: Continue to assume that the order quantity at an installation must be the same on every replenishment cycle.
However, replace simplifying approximation 2 by the less restrictive requirement that Q_1 ≥ Q_2 ≥ · · · ≥ Q_N. Thus, the only restriction on n_i in simplifying approximation 1 is that each n_i ≥ 1 (i = 1, 2, . . . , N − 1), without even requiring that n_i be an integer. When n_i is not an integer, the resulting lack of synchronization between the installations is ignored. It is instead assumed that each installation satisfies the basic EOQ model with inventory being replenished when the echelon inventory level reaches zero, regardless of what the other installations do, so that the installations can be optimized separately. Although this relaxation is not a realistic representation of the real problem because it ignores the need to coordinate replenishments at the installations (and so understates the true holding costs), it provides an approximation that is very easy to solve.

Phase 1 of the solution procedure for solving the revised problem consists of solving the relaxation of the problem. Phase 2 then modifies this solution by reimposing simplifying approximation 2.

The weak inequalities Q_i ≥ Q_{i+1} (i = 1, 2, . . . , N − 1) allow for the possibility that Q_i = Q_{i+1}. (This corresponds to having m_i = 0 in simplifying approximation 2.) As suggested by Fig. 18.10, if Q_i = Q_{i+1}, whenever installation (i + 1) needs to replenish its inventory with Q_{i+1} units, installation i needs to simultaneously order the same number of units and then (after any necessary processing) immediately transfer the entire batch to installation (i + 1). Therefore, even though these are separate installations in reality, for modeling purposes we can treat them as a single combined installation that places one order for Q_i = Q_{i+1} units with a setup cost of K_i + K_{i+1} and an echelon holding cost of e_i + e_{i+1}. This merging of installations (for modeling purposes) is incorporated into phase 1 of the solution procedure. We describe and outline the two phases of the solution procedure in turn below.

Phase 1 of the Solution Procedure.
Recall that assumption 6 for the model indicates that the objective is to minimize C, the total variable cost per unit time for all the installations. By using the echelon holding costs, the total variable cost per unit time at installation i is

C_i = dK_i/Q_i + e_i Q_i/2,  for i = 1, 2, . . . , N,

so that

C = Σ_{i=1}^{N} C_i.

6 R. Roundy, "A 98%-Effective Lot-Sizing Rule for a Multi-Product, Multi-Stage Production/Inventory System," Mathematics of Operations Research, 11: 699–727, 1986.

(This expression for C_i assumes that the echelon inventory is replenished just as its level reaches zero, which holds for the original and revised problems, but is only an approximation for the relaxation of the problem because the lack of coordination between installations in setting order quantities tends to lead to premature replenishments.) Note that C_i is just the total variable cost per unit time for a single installation that satisfies the basic EOQ model when e_i is the relevant holding cost per unit time at the installation. Therefore, since solving the relaxed problem only requires optimizing the installations separately (when using echelon holding costs instead of installation holding costs), the EOQ formula simply would be used to obtain the order quantity at each installation. It turns out that this provides a reasonable first approximation of the optimal order quantities when optimizing the installations simultaneously for the revised problem. Therefore, applying the EOQ formula in this way is the key step in phase 1 of the solution procedure. Phase 2 then applies the needed coordination between the order quantities by applying simplifying approximation 2.

When applying the EOQ formula to the respective installations, a special situation arises when K_i/e_i < K_{i+1}/e_{i+1}, since this would lead to Q*_i < Q*_{i+1}, which is prohibited by the relaxation of the problem.
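Before handling that special situation, note that each installation's relaxed subproblem is just the basic EOQ model with e_i as the holding cost, so both C_i and the minimizing order quantity are one-line formulas. A quick numeric sketch (function names are ours), using the d, K_1, and e_1 values from the example later in this section:

```python
import math

def installation_cost(d, K, e, Q):
    """Total variable cost per unit time: C_i = d*K_i/Q_i + e_i*Q_i/2."""
    return d * K / Q + e * Q / 2

def eoq(d, K, e):
    """The order quantity minimizing C_i: Q_i = sqrt(2*d*K_i/e_i)."""
    return math.sqrt(2 * d * K / e)

# With d = 4000, K_1 = 250, and e_1 = 0.50, the EOQ is 2000 units,
# giving a minimum cost of $1,000 per unit time.
Q1 = eoq(4000, 250, 0.50)
print(Q1, installation_cost(4000, 250, 0.50, Q1))  # 2000.0 1000.0
```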
To satisfy the relaxation, which requires that Q_i ≥ Q_{i+1}, the best that can be done is to set Q_i = Q_{i+1}. As described at the end of the preceding subsection, this implies that the two installations should be merged for modeling purposes.

Outline of Phase 1 (Solve the Relaxation)

1. If K_i/e_i < K_{i+1}/e_{i+1} for any i = 1, 2, . . . , N − 1, treat installations i and i + 1 as a single merged installation (for modeling purposes) with a setup cost of K_i + K_{i+1} and an echelon holding cost of e_i + e_{i+1} per unit per unit time. After the merger, repeat this step as needed for any other pairs of consecutive installations (which might include a merged installation). Then renumber the installations accordingly, with N reset as the new total number of installations.
2. Set Q_i = sqrt(2dK_i/e_i), for i = 1, 2, . . . , N.
3. Set C_i = dK_i/Q_i + e_i Q_i/2, for i = 1, 2, . . . , N, and C̲ = Σ_{i=1}^{N} C_i.

Phase 2 of the Solution Procedure. Phase 2 now is used to coordinate the order quantities to obtain a convenient cyclic schedule of replenishments, such as the one illustrated in Fig. 18.10. This is done mainly by rounding the order quantities obtained in phase 1 to fit the pattern prescribed in the simplifying approximations. After tentatively determining the values of n_i = 2^{m_i} such that Q_i = n_i Q_{i+1} in this way, the final step is to refine the value of Q_N to attempt to obtain an overall optimal solution for the revised problem.

This final step involves expressing each Q_i in terms of Q_N. In particular, given each n_i such that Q_i = n_i Q_{i+1}, let p_i be the product

p_i = n_i n_{i+1} · · · n_{N−1},  for i = 1, 2, . . . , N − 1,

so that

Q_i = p_i Q_N,  for i = 1, 2, . . . , N − 1,

where p_N = 1. Therefore, the total variable cost per unit time at all the installations is

C = Σ_{i=1}^{N} [dK_i/(p_i Q_N) + e_i p_i Q_N/2].

Since C includes only the single order quantity Q_N, this expression also can be interpreted as the total variable cost per unit time for a single inventory facility that satisfies the basic EOQ model with a setup cost and unit holding cost of

Setup cost = Σ_{i=1}^{N} K_i/p_i,
Unit holding cost = Σ_{i=1}^{N} e_i p_i.

Hence, the value of Q_N that minimizes C is given by the EOQ formula as

Q*_N = sqrt( 2d Σ_{i=1}^{N} (K_i/p_i) / Σ_{i=1}^{N} e_i p_i ).

Because this expression requires knowing the n_i, phase 2 begins by using the value of Q_N calculated in phase 1 as an approximation of Q*_N, and then uses this Q_N to determine the n_i (tentatively), before using this formula to calculate Q*_N.

Outline of Phase 2 (Solve the Revised Problem)

1. Set Q*_N to the value of Q_N obtained in phase 1.
2. For i = N − 1, N − 2, . . . , 1 in turn, do the following. Using the value of Q_i obtained in phase 1, determine the nonnegative integer value of m such that
2^m Q*_{i+1} ≤ Q_i ≤ 2^{m+1} Q*_{i+1}.
If Q_i/(2^m Q*_{i+1}) ≤ 2^{m+1} Q*_{i+1}/Q_i, set n_i = 2^m and Q*_i = n_i Q*_{i+1}.
If Q_i/(2^m Q*_{i+1}) > 2^{m+1} Q*_{i+1}/Q_i, set n_i = 2^{m+1} and Q*_i = n_i Q*_{i+1}.
3. Use the values of the n_i obtained in step 2 and the above formulas for p_i and Q*_N to calculate Q*_N. Then use this Q*_N to repeat step 2.7 If none of the n_i change, use (Q*_1, Q*_2, . . . , Q*_N) as the solution for the revised problem and calculate the corresponding cost C̄. If any of the n_i did change, repeat step 2 (starting with the current Q*_N) and then step 3 one more time. Use the resulting solution and calculate C̄.

This procedure provides a very good solution for the revised problem. Although the solution is not guaranteed to be optimal, it often is, and if not, it is usually close. Since the revised problem is itself an approximation of the original problem, obtaining such a solution for the revised problem is entirely adequate for all practical purposes. Available theory guarantees that this solution will provide a good approximation of an optimal solution for the original problem.
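The two phases can be sketched in code. The following Python implementation is our own reading of the procedure (function names and structure are not from the book): it performs a single pass of the step-3 refinement and assumes the footnoted complication (the phase 1 value of Q_{N−1} falling below the refined Q*_N) does not arise:

```python
import math

def phase1(d, K, e):
    """Phase 1: merge consecutive installations while K_i/e_i < K_{i+1}/e_{i+1},
    then apply the EOQ formula to each (possibly merged) installation."""
    K, e = list(K), list(e)
    i = 0
    while i < len(K) - 1:
        if K[i] / e[i] < K[i + 1] / e[i + 1]:
            K[i] += K.pop(i + 1)   # merged setup cost K_i + K_{i+1}
            e[i] += e.pop(i + 1)   # merged echelon holding cost e_i + e_{i+1}
            i = max(i - 1, 0)      # a merge may let an earlier pair merge too
        else:
            i += 1
    Q = [math.sqrt(2 * d * Ki / ei) for Ki, ei in zip(K, e)]
    return K, e, Q

def phase2(d, K, e, Q):
    """Phase 2: round each relaxed Q_i to a power-of-two multiple of Q*_{i+1},
    refine Q*_N by the single-facility EOQ formula, and round once more."""
    N = len(K)

    def round_step(QN):
        n, Qstar = [1] * N, [0.0] * N
        Qstar[N - 1] = QN
        for i in range(N - 2, -1, -1):
            # m such that 2^m * Q*_{i+1} <= Q_i <= 2^(m+1) * Q*_{i+1}
            m = max(0, math.floor(math.log2(Q[i] / Qstar[i + 1])))
            if Q[i] / (2 ** m * Qstar[i + 1]) <= 2 ** (m + 1) * Qstar[i + 1] / Q[i]:
                n[i] = 2 ** m
            else:
                n[i] = 2 ** (m + 1)
            Qstar[i] = n[i] * Qstar[i + 1]
        return n, Qstar

    n, _ = round_step(Q[N - 1])          # tentative n_i from the phase 1 Q_N
    p = [1] * N                          # p_i = n_i * n_{i+1} * ... * n_{N-1}
    for i in range(N - 2, -1, -1):
        p[i] = n[i] * p[i + 1]
    QN = math.sqrt(2 * d * sum(Ki / pi for Ki, pi in zip(K, p))
                   / sum(ei * pi for ei, pi in zip(e, p)))
    _, Qstar = round_step(QN)            # re-round with the refined Q*_N
    return Qstar

# The four-echelon example from this section: d = 4000, with the setup
# costs K and echelon holding costs e before merging.
K, e, Q = phase1(4000, [250, 6, 30, 110], [0.50, 0.05, 3.0, 4.0])
print([round(q) for q in Q])                      # [2000, 980, 400]
print([round(q) for q in phase2(4000, K, e, Q)])  # [1700, 850, 425]
```

On the example data, phase 1 merges installations 3 and 4 and returns the relaxed quantities (2000, 980, 400); phase 2 then reproduces the final revised-problem quantities (1700, 850, 425).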
Recall that Roundy's 98 percent approximation property guarantees that the cost of an optimal solution for the revised problem is within 2 percent of C*, the cost of the unknown optimal solution for the original problem. In practice, this difference usually is far less than 2 percent. If the solution obtained by the above procedure is not optimal for the revised problem, Roundy's results still guarantee that its cost C̄ is within 6 percent of C*. Again, the actual difference in practice usually is far less than 6 percent and often is considerably less than 2 percent.

7 A possible complication that would prevent repeating step 2 is if Q_{N−1} < Q*_N with this new value of Q*_N. If this occurs, you can simply stop and use the previous value of (Q*_1, Q*_2, . . . , Q*_N) as the solution for the revised problem. This same provision also applies for a subsequent attempt to repeat step 2.

It would be nice to be able to check how close C̄ is on any particular problem even though C* is unknown. The relaxation of the problem provides an easy way of doing this. Because the relaxed problem does not require coordinating the inventory replenishments at the installations, the cost that is calculated for its optimal solution, C̲, is a lower bound on C*. Furthermore, C̲ normally is extremely close to C*. Therefore, checking how close C̄ is to C̲ gives a conservative estimate of how close C̄ must be to C*, as summarized below.

Cost Relationships: C̲ ≤ C* ≤ C̄, so C̄ − C* ≤ C̄ − C̲, where

C̲ = cost of an optimal solution for the relaxed problem,
C* = cost of an (unknown) optimal solution for the original problem,
C̄ = cost of the solution obtained for the revised problem.

You will see in the following rather typical example that, because C̄ = 1.0047 C̲ for the example, it is known that C̄ is within 0.47 percent of C*.

An Example. Consider a serial system with four installations that have the setup costs and unit holding costs shown in Table 18.2.
The first step in applying the model is to convert the unit holding cost h_i at each installation into the corresponding unit echelon holding cost e_i that reflects the value added at each installation. Thus,

e_1 = h_1 = $0.50,
e_2 = h_2 − h_1 = $0.05,
e_3 = h_3 − h_2 = $3,
e_4 = h_4 − h_3 = $4.

We now can apply step 1 of phase 1 of the solution procedure to compare each K_i/e_i with K_{i+1}/e_{i+1}:

K_1/e_1 = 500,  K_2/e_2 = 120,  K_3/e_3 = 10,  K_4/e_4 = 27.5.

These ratios decrease from left to right with the exception that

K_3/e_3 = 10 < K_4/e_4 = 27.5,

so we need to treat installations 3 and 4 as a single merged installation for modeling purposes. After combining their setup costs and their echelon holding costs, we now have the adjusted data shown in Table 18.3. Using the adjusted data, Table 18.4 shows the results of applying the rest of the solution procedure to this example.

■ TABLE 18.2 Data for the example of a four-echelon inventory system

Installation i    K_i      h_i
1                 $250     $0.50
2                 $6       $0.55
3                 $30      $3.55
4                 $110     $7.55

d = 4,000

■ TABLE 18.3 Adjusted data for the four-echelon example after merging installations 3 and 4 for modeling purposes

Installation i    K_i      e_i
1                 $250     $0.50
2                 $6       $0.05
3 (+4)            $140     $7

d = 4,000

■ TABLE 18.4 Results from applying the solution procedure to the four-echelon example

                  Solution of           Initial Solution of    Final Solution of
                  Relaxed Problem       Revised Problem        Revised Problem
Installation i    Q_i      C_i          Q*_i     C_i           Q*_i     C_i
1                 2,000    $1,000       1,600    $1,025        1,700    $1,013
2                 980      $49          800      $50           850      $49
3 (+4)            400      $2,800       400      $2,800        425      $2,805
                  C̲ = $3,849            C = $3,875             C̄ = $3,867

The second and third columns present the straightforward calculations from steps 2 and 3 of phase 1. For step 1 of phase 2, Q_3 = 400 in the second column is carried over to Q*_3 = 400 in the fourth column. For step 2, we find that

2^1 Q*_3 ≤ Q_2 ≤ 2^2 Q*_3,  since  2(400) = 800 ≤ 980 ≤ 4(400) = 1,600.

Because

Q_2/(2^1 Q*_3) = 980/800 ≤ 2^2 Q*_3/Q_2 = 1,600/980,

we set n_2 = 2^1 = 2 and Q*_2 = n_2 Q*_3 = 800.
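The costs in Table 18.4 can be checked numerically by recomputing C_i = dK_i/Q_i + e_iQ_i/2 with the merged data of Table 18.3. The following self-contained Python sketch (variable names are ours) does this; the tiny difference from the 0.47 percent figure quoted above comes from the rounding of the table entries:

```python
import math

d = 4000
K = [250, 6, 140]       # setup costs after merging installations 3 and 4
e = [0.50, 0.05, 7.0]   # echelon holding costs after merging

def total_cost(Qs):
    """C = sum over installations of d*K_i/Q_i + e_i*Q_i/2."""
    return sum(d * Ki / Qi + ei * Qi / 2 for Ki, ei, Qi in zip(K, e, Qs))

# Relaxed problem: each installation at its own EOQ (a lower bound on C*).
C_lower = total_cost([math.sqrt(2 * d * Ki / ei) for Ki, ei in zip(K, e)])
# Final solution of the revised problem from Table 18.4.
C_bar = total_cost([1700, 850, 425])

print(round(C_lower), round(C_bar))   # 3849 3868
# Since C_lower <= C* <= C_bar, the solution is within this fraction of optimal:
print(round(C_bar / C_lower - 1, 4))  # 0.0049, well under Roundy's 2 percent
```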