Lecture Notes in Computer Science 2570
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

Michael Jünger, Gerhard Reinelt, Giovanni Rinaldi (Eds.)

Combinatorial Optimization – Eureka, You Shrink!
Papers Dedicated to Jack Edmonds
5th International Workshop, Aussois, France, March 5-9, 2001
Revised Papers

Series Editors:
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors:
Michael Jünger, Institut für Informatik, Universität zu Köln, Pohligstr. 1, 50969 Köln, Germany. E-mail: mjuenger@informatik.uni-koeln.de
Gerhard Reinelt, Institut für Informatik, Universität Heidelberg, Im Neuenheimer Feld 368, 69120 Heidelberg, Germany. E-mail: gerhard.reinelt@informatik.uni-heidelberg.de
Giovanni Rinaldi, Istituto di Analisi dei Sistemi ed Informatica "Antonio Ruberti" – CNR, viale Manzoni 30, 00185 Rome, Italy. E-mail: rinaldi@iasi.rm.cnr.it

Cataloging-in-Publication Data applied for. A catalog record for this book is available from the Library of Congress. Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet at <http://dnb.ddb.de>.

CR Subject Classification (1998): G.1.6, G.2.1, F.2.2, I.3.5
ISSN 0302-9743
ISBN 3-540-00580-3 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks.
Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de
© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany
Typesetting: Camera-ready by author, data conversion by PTP-Berlin, Stefan Sossna e.K.
Printed on acid-free paper. SPIN: 10872263

Preface

A legend says that Jack Edmonds shouted "Eureka – you shrink!" when he found a good characterization for matching (and the matching algorithm) in 1963, the day before his talk at a summer workshop at RAND Corporation, with celebrities like George Dantzig, Ralph Gomory, and Alan Hoffman in the audience. During Aussois 2001, Jack confirmed: "'Eureka – you shrink!' is really true, except that instead of 'Eureka' it maybe was some less dignified word."

Aussois 2001 was the fifth in an annual series of workshops on combinatorial optimization that are alternately organized by Thomas Liebling, Denis Naddef, and Laurence Wolsey – the initiators of this series – in even years and by the editors of this book in odd years (except 1997). We decided to dedicate Aussois 2001 to Jack Edmonds in appreciation of his groundbreaking work, which laid the foundations of much of what combinatorial optimizers have done in the last 35 years. Luckily, Jack is a regular participant of the Aussois workshops and, as ever, he cares a lot for young combinatorial optimizers, who traditionally play a major rôle in the Aussois series.

Fig. 1.
Handout for the Aussois 2001 participants

Highlights of Aussois 2001 included a special session entitled "Eureka – you shrink!" in honor of Jack and a special lecture by Jack on "Submodular Functions, Matroids, and Certain Polyhedra" that closed the workshop. In this book, we give an account of the "Eureka – you shrink!" session, as well as reprints of three hard-to-find papers that Jack gave as a handout to the Aussois 2001 participants and that were originally published in the Proceedings of the Calgary International Conference on Combinatorial Structures and Their Applications 1969, Gordon and Breach (1970) – newly typeset by the editors and reprinted by permission.

We are happy that 13 speakers of Aussois 2001 agreed to dedicate revisions of the papers they presented during the workshop to Jack Edmonds. Their contributions have made this book possible. As organizers and editors, we would like to thank Jack Edmonds, the authors, Miguel Anjos for transcribing Bill Pulleyblank's speech, and, in particular, Bill Pulleyblank for support that went well beyond the contribution you will find in the "Eureka – you shrink!" chapter!

When Jack gave an interview for the Kitchener-Waterloo Record of May 30, 1985, on the occasion of the award of the John von Neumann Theory Prize to him a month earlier, he said: "I hit it lucky by putting a poetic title on a paper that was mathematically a hit," referring to his seminal paper entitled "Paths, Trees, and Flowers." With the help of Matthias Elf, we found an artist whose proposal for a cover illustration immediately convinced all three of us: people who know Jack's work will have the right association without reading the words. A special "Thank You!" to Thorsten Felden, who contributed his own hommage à Jack!
January 2003
Cologne, Heidelberg, Rome

Michael Jünger
Gerhard Reinelt
Giovanni Rinaldi

Participants

Aardal, Karen (University of Utrecht)
Ahr, Dino (University of Heidelberg)
Amaldi, Edoardo (Politecnico di Milano)
Békési, Jozsef (University of Szeged)
Bixby, Robert E. (Rice University, Houston)
Buchheim, Christoph (University of Cologne)
Cameron, Kathie (Wilfrid Laurier University, Waterloo)
Caprara, Alberto (University of Bologna)
Edmonds, Jack (University of Waterloo)
Elf, Matthias (University of Cologne)
Euler, Reinhardt (University of Brest)
Evans, Lisa (Georgia Institute of Technology, Atlanta)
Farias, Ismael de (University at Buffalo)
Feremans, Corinne (University of Brussels)
Fischetti, Matteo (University of Padova)
Fleischer, Lisa (Columbia University, New York)
Fortz, Bernard (University of Brussels)
Galambos, Gábor (University of Szeged)
Gruber, Gerald (Carinthia Tech Institute, Villach)
Hemmecke, Raymond (University of Duisburg)
Johnson, Ellis (Georgia Institute of Technology, Atlanta)
Jünger, Michael (University of Cologne)
Kaibel, Volker (University of Technology, Berlin)
Labbé, Martine (University of Brussels)
Lemaréchal, Claude (INRIA Rhône-Alpes)
Letchford, Adam (Lancaster University)
Liers, Frauke (University of Cologne)
Lodi, Andrea (University of Bologna)
Lübbecke, Marco (University of Braunschweig)
Luzzi, Ivan (University of Padova)
Maffioli, Francesco (Politecnico di Milano)
Martin, Alexander (University of Technology, Darmstadt)
Maurras, Jean François (Université de la Méditerranée, Marseille)
Meurdesoif, Philippe (INRIA Rhône-Alpes)
Möhring, Rolf H.
(University of Technology, Berlin)
Monaci, Michele (University of Bologna)
Mutzel, Petra (University of Technology, Vienna)
Naddef, Denis (ENSIMAG, Montbonnot Saint Martin)
Nemhauser, George (Georgia Institute of Technology, Atlanta)
Nguyen, Viet Hung (Université de la Méditerranée, Marseille)
Ortega, Francois (CORE, Louvain-la-Neuve)
Oswald, Marcus (University of Heidelberg)
Percan, Merijam (University of Cologne)
Perregaard, Michael (Carnegie Mellon University, Pittsburgh)
Pulleyblank, William (IBM Yorktown Heights)
Reinelt, Gerhard (University of Heidelberg)
Remshagen, Anja (University of Texas at Dallas)
Rendl, Franz (University of Klagenfurt)
Richard, Jean Philippe (Georgia Institute of Technology, Atlanta)
Riis, Morten (University of Aarhus)
Rinaldi, Giovanni (IASI-CNR Rome)
Rote, Günter (Free University of Berlin)
Salazar-González, Juan-José (University of La Laguna, Tenerife)
Schultz, Rüdiger (University of Duisburg)
Skutella, Martin (University of Technology, Berlin)
Spille, Bianca (EPFL-DMA Lausanne)
Stork, Frederik (University of Technology, Berlin)
Toth, Paolo (University of Bologna)
Uetz, Marc (University of Technology, Berlin)
Vandenbussche, Dieter (Georgia Institute of Technology, Atlanta)
Verweij, Bram (Georgia Institute of Technology, Atlanta)
Weismantel, Robert (University of Magdeburg)
Wenger, Klaus (University of Heidelberg)
Woeginger, Gerhard (University of Twente)
Wolsey, Laurence (CORE, Louvain-la-Neuve)
Zimmermann, Uwe (University of Technology, Braunschweig)

Table of Contents

"Eureka – You Shrink!"

"Eureka – You Shrink!": Surprise Session for Jack Edmonds . . . 1
Submodular Functions, Matroids, and Certain Polyhedra . . . 11
  Jack Edmonds (National Bureau of Standards, Washington)
Matching: A Well-Solved Class of Integer Linear Programs
  Jack Edmonds (National Bureau of Standards, Washington), Ellis L. Johnson (I.B.M.
Research Center, Yorktown Heights) . . . 27
Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems . . . 31
  Jack Edmonds (National Bureau of Standards, Washington), Richard M. Karp (University of California, Berkeley)
Connected Matchings . . . 34
  Kathie Cameron (Wilfrid Laurier University, Waterloo)
Hajós' Construction and Polytopes . . . 39
  Reinhardt Euler (Université de Brest)
Algorithmic Characterization of Bipartite b-Matching and Matroid Intersection . . . 48
  Robert T. Firla (University of Magdeburg), Bianca Spille (EPFL-DMA, Lausanne), Robert Weismantel (University of Magdeburg)
Solving Real-World ATSP Instances by Branch-and-Cut . . . 64
  Matteo Fischetti (University of Padova), Andrea Lodi (University of Bologna), Paolo Toth (University of Bologna)
The Bundle Method for Hard Combinatorial Optimization Problems . . . 78
  Gerald Gruber (Carinthia Tech Institute, Villach), Franz Rendl (University of Klagenfurt)
The One-Commodity Pickup-and-Delivery Travelling Salesman Problem . . . 89
  Hipólito Hernández-Pérez, Juan-José Salazar-González (University of La Laguna, Tenerife)
Reconstructing a Simple Polytope from Its Graph . . . 105
  Volker Kaibel (University of Technology, Berlin)
An Augment-and-Branch-and-Cut Framework for Mixed 0-1 Programming . . . 119
  Adam N. Letchford (Lancaster University), Andrea Lodi (University of Bologna)
A Procedure of Facet Composition for the Symmetric Traveling Salesman Polytope
. . . 134
  Jean François Maurras, Viet Hung Nguyen (Université de Marseille)
Constructing New Facets of the Consecutive Ones Polytope . . . 147
  Marcus Oswald, Gerhard Reinelt (University of Heidelberg)
A Simplex-Based Algorithm for 0-1 Mixed Integer Programming . . . 158
  Jean-Philippe P. Richard (Georgia Institute of Technology, Atlanta), Ismael R. de Farias (CORE, Louvain-la-Neuve), George L. Nemhauser (Georgia Institute of Technology, Atlanta)
Mixed-Integer Value Functions in Stochastic Programming . . . 171
  Rüdiger Schultz (University of Duisburg)
Exact Algorithms for NP-Hard Problems: A Survey . . . 185
  Gerhard J. Woeginger (University of Twente, Enschede)
Author Index . . . 209

Matching: A Well-Solved Class of Integer Linear Programs

Jack Edmonds¹ and Ellis L. Johnson²
¹ National Bureau of Standards, Washington, D.C., U.S.A.
² I.B.M. Research Center, Yorktown Heights, NY, U.S.A.

A main purpose of this work is to give a good algorithm for a certain well-described class of integer linear programming problems, called matching problems (or the matching problem). Methods developed for simple matching [2,3], a special case to which these problems can be reduced [4], are applied directly to the larger class. In the process, we derive a description of a system of linear inequalities whose polyhedron is the convex hull of the admissible solution vectors to the given matching problem. At the same time, various combinatorial results about matchings are derived and discussed in terms of graphs.
(1) The general integer linear programming problem can be stated as: minimize z = ∑_{j∈E} c_j x_j, where c_j is a given real number, subject to
(2) x_j an integer for each j ∈ E;
(3) 0 ≤ x_j ≤ α_j, j ∈ E, where α_j is a given positive integer or +∞;
(4) ∑_{j∈E} a_ij x_j = b_i, i ∈ V, where a_ij and b_i are given integers. V and E are index sets having cardinalities |V| and |E|.
(5) The integer program (1) is called a matching problem whenever ∑_{i∈V} |a_ij| ≤ 2 holds for all j ∈ E.
(6) A solution to the integer program (1) is a vector [x_j], j ∈ E, satisfying (2), (3), and (4), and an optimum solution is a solution which minimizes z among all solutions. When the integer program is a matching problem, a solution is called a matching and an optimum solution is an optimum matching.

If the integer restriction (2) is omitted, the problem becomes a linear program. An optimum solution to that linear program will typically have fractional values. There is an important class of linear programs, called transportation or network flow problems, which have the property that for any integer right-hand side b_i, i ∈ V, and any cost vector c_j, j ∈ E, there is an optimum solution which has all integer x_j, j ∈ E. The class of matching problems includes that class of linear programs but, in addition, includes problems for which omitting the integer restriction (2) results in a linear program with no optimum solution which is all integer.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 27–30, 2003. © Springer-Verlag Berlin Heidelberg 2003

Many interesting and practical combinatorial problems can be formulated as integer linear programs. However, limitations in the known methods for treating general integer linear programs have made such formulations of limited value. By contrast with general integer linear programming, the matching problem is well-solved.
(7) Theorem.
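Condition (5) is a purely mechanical test on the constraint matrix. As an illustrative sketch (ours, not part of the paper; the matrices are made-up examples), one can check whether a given integer program is a matching problem:

```python
# Sketch (not from the paper): checking condition (5) for an integer program
# given by its constraint matrix A = [a_ij].  The program is a matching
# problem exactly when every column has absolute entries summing to at most 2.

def is_matching_problem(A):
    """A is a list of rows; rows are indexed by V, columns by E."""
    ncols = len(A[0])
    return all(sum(abs(row[j]) for row in A) <= 2 for j in range(ncols))

# Node-edge incidence matrix of an (undirected) triangle: each edge meets
# two nodes, so each column sums to 2 -- a matching problem.
triangle = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
]
print(is_matching_problem(triangle))       # True

# A column with three nonzero entries violates (5).
not_matching = [
    [1, 1],
    [1, 0],
    [1, 0],
]
print(is_matching_problem(not_matching))   # False
```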
There is an algorithm for the general matching problem such that an upper bound on the amount of work which it requires for any input is on the order of the product of (8), (9), (10), and (11). An upper bound on the memory required is on the order of (8) times (11) plus (9) times (11).
(8) |V|², the number of nodes squared;
(9) |E|, the number of edges;
(10) ∑_{i∈V} |b_i| + 2 ∑_{α_j<∞} α_j;
(11) log(|V| max |b_i| + |E| max_{α_j<∞} α_j) + log(∑_{j∈E} |c_j|).
(12) Theorem. For any matching problem (1), the convex hull P of the matchings, i.e., of the solutions to [(2), (3), and (4)], is the polyhedron of solutions to the linear constraints (3) and (4) together with additional inequalities:
(13) ∑_{j∈W} x_j − ∑_{j∈U} x_j ≥ 1 − ∑_{j∈U} α_j.
There is an inequality (13) for every pair (T, U) where T is a subset of V and U is a subset of E such that
(14) ∑_{i∈T} |a_ij| = 1 for each j ∈ U;
(15) ∑_{i∈T} b_i + ∑_{j∈U} α_j is an odd integer.
The W in (13) is given by
(16) W = {j ∈ E : ∑_{i∈T} |a_ij| = 1} − U.
(17) Let Q denote the set of pairs (T, U). The inequalities (13), one for each (T, U) ∈ Q, are called the blossom inequalities of the matching problem (1). By Theorem (12), the matching problem is the linear program:
(18) Minimize z = ∑_{j∈E} c_j x_j subject to (3), (4), and (13).
(19) Theorem. If each c_j is an integral multiple of ∑_{i∈V} |a_ij|, and if the l.p. dual of (18) has an optimum solution, then it has an optimum solution which is integer-valued.
Using l.p. duality, theorems (12) and (19) yield a variety of combinatorial existence and optimality theorems.
To treat matching more graphically, we use what we will call bidirected graphs. All of our graphs are bidirected, so graph is used to mean bidirected graph.
(20) A graph G consists of a set V = V(G) of nodes and a set E = E(G) of edges. Each edge has one or two ends and each end meets one node. Each end of an edge is either a head or a tail.
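The construction (13)-(16) can be traced on a tiny instance. The following sketch (the function and example are ours, not the paper's) builds the blossom inequality for a chosen pair (T, U); on the perfect-matching problem for a triangle it produces the inequality 0 ≥ 1, certifying infeasibility:

```python
# Sketch (assumptions ours, following (13)-(16)): build the blossom
# inequality for a pair (T, U) of a matching problem with incidence
# matrix A, right-hand sides b, and capacities alpha.

def blossom_inequality(A, b, alpha, T, U, E):
    """Return (W, rhs) for the inequality  x(W) - x(U) >= rhs,
    or None if (T, U) fails condition (14) or the parity condition (15)."""
    def col_sum(j):                        # sum over i in T of |a_ij|
        return sum(abs(A[i][j]) for i in T)
    if any(col_sum(j) != 1 for j in U):    # condition (14)
        return None
    parity = sum(b[i] for i in T) + sum(alpha[j] for j in U)
    if parity % 2 == 0:                    # condition (15): must be odd
        return None
    W = {j for j in E if col_sum(j) == 1} - set(U)   # definition (16)
    rhs = 1 - sum(alpha[j] for j in U)
    return W, rhs

# Perfect matching on a triangle: b = (1,1,1), alpha = (1,1,1).
A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
b, alpha = [1, 1, 1], [1, 1, 1]
# Take T = all three nodes, U empty: every edge has both ends in T,
# so W is empty and the inequality reads 0 >= 1 -- a certificate that
# the triangle has no perfect matching.
W, rhs = blossom_inequality(A, b, alpha, T=[0, 1, 2], U=[], E=range(3))
print(W, rhs)    # set() 1
```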
(21) If an edge has two ends which meet the same node, it is called a loop. If it has two ends which meet different nodes, it is called a link. If it has only one end, it is called a lobe. An edge is called directed if it has one head and one tail. Otherwise it is called undirected, all-head, or all-tail accordingly.
(22) The node-edge incidence matrix of a graph is a matrix A = [a_ij] with a row for each node i ∈ V and a column for each edge j ∈ E, such that a_ij = +2, +1, 0, −1, or −2, according to whether edge j has two tails, one tail, no end, one head, or two heads meeting node i. (Directed loops are not needed for the matching problem.)
(23) If we interpret the capacity α_j to mean that α_j copies of edge j are present in graph G_α, then x_j copies of j for each j ∈ E, where x = [x_j] is a solution of [(2), (3), (4)], gives a subgraph G_x of G_α. The degree of node i in G_x is b_i, the number of tails of G_x which meet i minus the number of heads of G_x which meet i. Thus, where x is an optimum matching, G_x can be regarded as an "optimum degree-constrained subgraph" of G_α, where the b_i's are the degree constraints.
(24) A Fortran code of the algorithm is available from either author. It was written in large part by Scott C. Lockhart, who also wrote many comments interspersed through the deck to make it understandable. Several random problem generators are included. It has been run on a variety of problems on a Univac 1108, IBM 7094, and IBM 360. On the latter, problems of 300 nodes, 1500 edges, b = 1 or 2, α = 1, and random c_j's from 1 to 10 take about 30 seconds. Running times fit rather closely a formula which is an order of magnitude better than our theoretical upper bound.

References

1. Berge, C., Théorie des graphes et ses applications, Dunod, Paris, 1958.
2. Edmonds, J., Paths, trees, and flowers, Canad. J. Math. 17 (1965), 449–467.
3. Edmonds, J., Maximum matching and a polyhedron with 0,1-vertices, J. Res. Nat. Bur. Standards 69B (1965), 125–130.
4.
Edmonds, J., An introduction to matching, preprinted lectures, Univ. of Mich. Summer Engineering Conf. 1967.
5. Johnson, E.L., Programming in networks and graphs, Operations Research Center Report 65-1, Etcheverry Hall, Univ. of Calif., Berkeley.
6. Tutte, W.T., The factorization of linear graphs, J. London Math. Soc. 22 (1947), 107–111.
7. Tutte, W.T., The factors of graphs, Canad. J. Math. 4 (1952), 314–328.
8. Witzgall, C. and Zahn, C.T. Jr., Modification of Edmonds' matching algorithm, J. Res. Nat. Bur. Standards 69B (1965), 91–98.
9. White, L.J., A parametric study of matchings, Ph.D. Thesis, Dept. of Elec. Engineering, Univ. of Mich., 1967.
10. Balinski, M., A labelling method for matching, Combinatorics Conference, Univ. of North Carolina, 1967.
11. Balinski, M., Establishing the matching polytope, preprint, City Univ. of New York, 1969.

Submodular Functions, Matroids, and Certain Polyhedra⋆

Jack Edmonds
National Bureau of Standards, Washington, D.C., U.S.A.

I

The viewpoint of the subject of matroids, and related areas of lattice theory, has always been, in one way or another, abstraction of algebraic dependence or, equivalently, abstraction of the incidence relations in geometric representations of algebra. Often one of the main derived facts is that all bases have the same cardinality. (See Van der Waerden, Section 33.)

From the viewpoint of mathematical programming, the equal cardinality of all bases has special meaning — namely, that every basis is an optimum-cardinality basis. We are thus prompted to study this simple property in the context of linear programming. It turns out to be useful to regard "pure matroid theory", which is only incidentally related to the aspects of algebra which it abstracts, as the study of certain classes of convex polyhedra.
(1) A matroid M = (E, F) can be defined as a finite set E and a nonempty family F of so-called independent subsets of E such that
(a) Every subset of an independent set is independent, and
(b) For every A ⊆ E, every maximal independent subset of A, i.e., every basis of A, has the same cardinality, called the rank, r(A), of A (with respect to M).
(This definition is not standard. It is prompted by the present interest.)
(2) Let R_E denote the space of real-valued vectors x = [x_j], j ∈ E. Let R⁺_E = {x : 0 ≤ x ∈ R_E}.
(3) A polymatroid P in the space R_E is a compact non-empty subset of R⁺_E such that
(a) 0 ≤ x⁰ ≤ x¹ ∈ P =⇒ x⁰ ∈ P.
(b) For every a ∈ R⁺_E, every maximal x ∈ P such that x ≤ a, i.e., every basis x of a, has the same sum ∑_{j∈E} x_j, called the rank, r(a), of a (with respect to P).
Here maximal x means that there is no x′ > x having the properties of x.

⋆ Synopsis for the Instructional Series of Lectures, "Polyhedral Combinatorics".

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 11–26, 2003. © Springer-Verlag Berlin Heidelberg 2003

(4) A polymatroid is called integral if (b) holds also when a and x are restricted to being integer-valued, i.e., for every integer-valued vector a ∈ R⁺_E, every maximal integer-valued x, such that x ∈ P and x ≤ a, has the same sum ∑_{j∈E} x_j = r(a). (Sometimes it may be convenient to regard an integral polymatroid as consisting only of its integer-valued members.)
(5) Clearly, the 0–1 valued vectors in an integral polymatroid are the "incidence vectors" of the sets J ∈ F of a matroid M = (E, F).

II

(6) Let f be a real-valued function on a lattice L. Call it a β₀-function if
(a) f(a) ≥ 0 for every a ∈ K = L − {∅};
(b) f is non-decreasing: a ≤ b =⇒ f(a) ≤ f(b); and
(c) submodular: f(a ∨ b) + f(a ∧ b) ≤ f(a) + f(b) for every a ∈ L and b ∈ L.
(d) Call it a β-function if, also, f(∅) = 0. In this case, f is also subadditive, i.e., f(a ∨ b) ≤ f(a) + f(b).
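The defining properties in (6) can be verified exhaustively on a small ground set. The sketch below (our example; the lattice is the Boolean lattice of subsets with A ∨ B = A ∪ B and A ∧ B = A ∩ B) checks them for the rank function of the uniform matroid U_{2,3} and for a function that fails submodularity:

```python
# Sketch (example ours): brute-force check of the beta-function properties (6)
# for a set function f on the lattice of all subsets of E.
from itertools import chain, combinations

def subsets(E):
    E = list(E)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(E, k) for k in range(len(E) + 1))]

def is_beta_function(f, E):
    S = subsets(E)
    nonneg = all(f(A) >= 0 for A in S if A)                        # (6a) on K
    monotone = all(f(A) <= f(B) for A in S for B in S if A <= B)   # (6b)
    submod = all(f(A | B) + f(A & B) <= f(A) + f(B)
                 for A in S for B in S)                            # (6c)
    return nonneg and monotone and submod and f(frozenset()) == 0  # (6d)

E = {0, 1, 2}
rank = lambda A: min(len(A), 2)        # rank function of the uniform matroid U_{2,3}
print(is_beta_function(rank, E))       # True
not_submod = lambda A: len(A) ** 2     # fails submodularity
print(is_beta_function(not_submod, E)) # False
```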
(We take the liberty of using the prefixes sub and super rather than "upper semi" and "lower semi". Semi refers to either. The term semi-modular is taken from lattice theory, where it refers to a type of lattice on which there exists a semimodular function f such that if a is a maximal element less than element b then f(a) + 1 = f(b). See [1].)
(7) For any x = [x_j] ∈ R_E and any A ⊆ E, let x(A) denote ∑_{j∈A} x_j.
(8) Theorem. Let L be a family of subsets of E, containing E and ∅, and closed under intersections, A ∩ B = A ∧ B. Let f be a β₀-function on L. Then the following polyhedron is a polymatroid:
P(E, f) = {x ∈ R⁺_E : x(A) ≤ f(A) for every A ∈ L − {∅} = K}.
Its rank function r is, for any a = [a_j] ∈ R⁺_E,
r(a) = min [∑_{j∈E} a_j z_j + ∑_{A∈K} f(A) y_A],
where the z_j's and y_A's are 0's and 1's such that for every j ∈ E,
z_j + ∑_{j∈A∈K} y_A ≥ 1.
Where f(∅) ≥ 0, only one non-zero y_A is needed. Where f is integer-valued, P(E, f) is an integral polymatroid.
(9) Theorem. A function f of all sets A ⊆ E is itself the rank function of a matroid M = (E, F) iff it is an integral β-function such that f({j}) = 1 or 0 for every j ∈ E. Such an f determines M by: J ∈ F ⇐⇒ J ⊆ E and |J| = f(J).
(10) For any a = [a_j] ∈ R⁺_E and b = [b_j] ∈ R⁺_E, let a ∨ b = [u_j] ∈ R⁺_E and a ∧ b = [v_j] ∈ R⁺_E, where u_j = max(a_j, b_j) and v_j = min(a_j, b_j).
(11) Theorem. The rank function r(a), a ∈ R⁺_E, of any polymatroid P ⊂ R⁺_E is a β-function on R⁺_E relative to the above ∨ and ∧.
(12) For any x = [x_j] ∈ R⁺_E and any A ⊆ E, let x/A = [(x/A)_j] ∈ R⁺_E denote the vector such that (x/A)_j = x_j for j ∈ A, and (x/A)_j = 0 for j ∉ A.
(13) Given a polymatroid P ⊂ R⁺_E, let α ∈ R⁺_E be an integer-valued vector such that x < α for every x ∈ P. Where r is the rank function of P, let f_P(A) = r(α/A) for every A ⊆ E. Let L_E = {A : A ⊆ E}. Clearly, by (11), f_P is a β-function on L_E.
Furthermore, if P is integral, then f_P is integral.
(14) Theorem. For any polymatroid P ⊂ R⁺_E, P = P(E, f_P). Thus, all polymatroids P ⊂ R⁺_E are polyhedra, and they correspond to certain β-functions on L_E.
Theorem (8) provides a useful way of constructing matroids which is quite different from the usual algebraic constructions.
(15) For any given integral β₀-function f as in (8), let a set J ⊆ E be a member of F iff for every A ∈ K = L − {∅}, |J ∩ A| ≤ f(A). In particular, where L = L_E, let a set J ⊆ E be a member of F when for every ∅ ≠ A ⊆ J, |A| ≤ f(A). Then (8) implies that M = (E, F) is a matroid, and gives a formula for its rank function in terms of f. (This generalizes a construction given by Dilworth [1].)

III

In this section, K will denote L_E − {∅} = {A : ∅ ≠ A ⊆ E}.
(16) Given any c = {c_j} ∈ R_E, and given a β-function f on L_E, we show how to solve the linear program: maximize c · x = ∑_{j∈E} c_j x_j over x ∈ P(E, f).
(17) Let j(1), j(2), . . . be an ordering of E such that c_{j(1)} ≥ c_{j(2)} ≥ · · · ≥ c_{j(k)} > 0 ≥ c_{j(k+1)} ≥ · · · .
(18) For each integer i, 1 ≤ i ≤ k, let A_i = {j(1), j(2), . . . , j(i)}.
(19) Theorem. (The Greedy Algorithm.) c · x is maximized over x ∈ P(E, f) by the following vector x⁰:
x⁰_{j(1)} = f(A_1);
x⁰_{j(i)} = f(A_i) − f(A_{i−1}) for 2 ≤ i ≤ k;
x⁰_{j(i)} = 0 for k < i ≤ |E|.
(There is a well-known non-polyhedral version of this for graphs, given by Kruskal [9]. A related theorem for matroids is given by Rado [15].)
The dual l.p. is to minimize f · y = ∑_{A∈K} f(A) y(A) where
(20) y(A) ≥ 0; and for every j ∈ E, ∑_{j∈A} y(A) ≥ c_j.
(21) Theorem. An optimum solution, y⁰ = [y⁰(A)], A ∈ K, to the dual l.p. is
y⁰(A_i) = c_{j(i)} − c_{j(i+1)} for 1 ≤ i ≤ k − 1;
y⁰(A_k) = c_{j(k)}; and
y⁰(A) = 0 for all other A ∈ K.
(22) Theorem. (Corollary to (19).) The vertices of the polyhedron P(E, f) are precisely the vectors of the form x⁰ in (19) for some sequence j(1), j(2), . . . , j(k).
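Theorem (19) translates directly into code. The sketch below (our example; f is taken to be the rank function of the uniform matroid U_{2,4}, given as an oracle on subsets) follows (17)-(19) literally:

```python
# Sketch (example ours): the greedy algorithm of Theorem (19), maximizing
# c.x over P(E, f) for a beta-function f given as an oracle on subsets.

def greedy(E, f, c):
    """Return the optimal vertex x^0 of P(E, f) for the objective c."""
    order = sorted(E, key=lambda j: -c[j])          # (17): decreasing c_j
    x = {j: 0 for j in E}
    A = set()
    for j in order:
        if c[j] <= 0:                               # only j(1),...,j(k) are used
            break
        prev = f(frozenset(A))
        A.add(j)                                    # (18): A_i = {j(1),...,j(i)}
        x[j] = f(frozenset(A)) - prev               # x0_{j(i)} = f(A_i) - f(A_{i-1})
    return x

E = [0, 1, 2, 3]
f = lambda A: min(len(A), 2)         # rank function of the uniform matroid U_{2,4}
c = {0: 5, 1: 4, 2: 3, 3: -1}
x = greedy(E, f, c)
print(x)                             # {0: 1, 1: 1, 2: 0, 3: 0}
print(sum(c[j] * x[j] for j in E))   # 9: the best two elements, 5 + 4
```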
(23) Where f is the rank function of a matroid M = (E, F), (9) and (22) imply that the vertices of P(E, f) are precisely the incidence vectors of the members of F, i.e., the independent sets of M. Such a P(E, f) is called a matroid polyhedron.
(24) Let f be a β-function on L_E. A set A ∈ L_E is called f-closed, or an f-flat, when, for any C ∈ L_E which properly contains A, f(A) < f(C).
(25) Theorem. If A and B are f-closed then A ∩ B is f-closed. (In particular, for the f of (9), the f-flats form a "geometric" or "matroid" lattice.)
Proof: Suppose that C properly contains A ∩ B. Then either C ⊄ A or C ⊄ B. Since f is non-decreasing, we have f(A ∩ B) ≤ f(A ∩ C) and f(A ∩ B) ≤ f(B ∩ C). Thus, since f is submodular, we have either
0 < f(A ∪ C) − f(A) ≤ f(C) − f(A ∩ C) ≤ f(C) − f(A ∩ B), or
0 < f(B ∪ C) − f(B) ≤ f(C) − f(B ∩ C) ≤ f(C) − f(A ∩ B).
Either way, f(A ∩ B) < f(C), so A ∩ B is f-closed.
(26) A set A ∈ K is called f-separable when f(A) = f(A_1) + f(A_2) for some partition of A into non-empty subsets A_1 and A_2. Otherwise A is called f-inseparable.
(27) Theorem. Any A ∈ K partitions in only one way into a family of f-inseparable sets A_i such that f(A) = ∑ f(A_i). The A_i's are called the f-blocks of A.
If a polyhedron P ⊂ R_E has dimension equal to |E|, then there is a unique minimal system of linear inequalities having P as its set of solutions. These inequalities are called the faces of P. It is obvious that a polymatroid P ⊂ R⁺_E has dimension |E| if and only if the set ∅ is f-closed, where f is the β-function which determines it. It is obvious that inequality x(A) ≤ f(A), A ∈ K, is a face of P(E, f) only if A is f-closed and f-inseparable.
(28) Theorem. Where f is a β-function on L_E such that the empty set is f-closed, the faces of polymatroid P(E, f) are: x_j ≥ 0 for every j ∈ E; and x(A) ≤ f(A) for every A ∈ K which is f-closed and f-inseparable.
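The face description of Theorem (28) can be enumerated by brute force on a small example. The sketch below (ours; f is again the rank function of U_{2,3}) finds exactly the singletons and E itself as the f-closed, f-inseparable sets, recovering the well-known description x_j ≥ 0, x_j ≤ 1, x(E) ≤ 2 of that matroid polyhedron:

```python
# Sketch (example ours): enumerate, per Theorem (28), the sets A that give
# faces x(A) <= f(A) of P(E, f): those which are f-closed (24) and
# f-inseparable (26).
from itertools import combinations

def proper_supersets(A, E):
    rest = E - A
    return [A | frozenset(c) for k in range(1, len(rest) + 1)
            for c in combinations(rest, k)]

def nonempty_subsets(A):
    return [frozenset(c) for k in range(1, len(A) + 1)
            for c in combinations(A, k)]

def f_closed(f, A, E):
    return all(f(C) > f(A) for C in proper_supersets(A, E))

def f_inseparable(f, A):
    return not any(f(A) == f(A1) + f(A - A1)
                   for A1 in nonempty_subsets(A) if A1 != A)

E = frozenset({0, 1, 2})
f = lambda A: min(len(A), 2)          # rank function of U_{2,3}
faces = [set(A) for A in nonempty_subsets(E)
         if f_closed(f, A, E) and f_inseparable(f, A)]
print(faces)   # [{0}, {1}, {2}, {0, 1, 2}]
```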
IV

(29) Let each V_p, p = 1 and 2, be a family of disjoint subsets of E. Where [a_ij], i ∈ H, j ∈ E, is the 0–1 incidence matrix of V_1 ∪ V_2 = H, the following l.p. is known as the Hitchcock problem:
(30) Maximize c · x = ∑_{j∈E} c_j x_j, where
(31) x_j ≥ 0 for every j ∈ E, and ∑_{j∈E} a_ij x_j ≤ b_i for every i ∈ H.
The dual l.p. is
(32) Minimize b · y = ∑_{i∈H} b_i y_i, where
(33) y_i ≥ 0 for every i ∈ H, and ∑_{i∈H} a_ij y_i ≥ c_j for every j ∈ E.
Denote the polyhedron of solutions of a system Q by P[Q]. The following properties of the Hitchcock problem are important in its combinatorial use.
(34) Theorem. (a) Where the b_i's are integers, the vertices of P[(31)] are integer-valued. (b) Where the c_j's are integers, the vertices of P[(33)] are integer-valued.
Theorem (34a) generalizes to the following.
(35) Theorem. For any two integral polymatroids P_1 and P_2 in R⁺_E, the vertices of P_1 ∩ P_2 are integer-valued.
The following technique for proving theorems like (34) is due to Alan Hoffman [7].
(36) Theorem. The matrix [a_ij] of the Hitchcock problem is totally unimodular — that is, the determinant of every square submatrix has value 0, 1, or −1.
(37) Theorem. Theorem (34) holds whenever [a_ij] is totally unimodular.
(38) Let each V_p, p = 1 and 2, be a family of subsets of E such that any two members of V_p are either disjoint or else one is a subset of the other.
(39) Theorem. The incidence matrix of the V_1 ∪ V_2 of (38) is totally unimodular.
Property (29) is a special case of (38). Property (38) is a special case of the following.
(40) Let each V_p, p = 1 and 2, be a family of subsets of E such that for any R ∈ V_p and S ∈ V_p, either R ∩ S = ∅ or R ∩ S ∈ V_p.
The incidence matrix of the V_1 ∪ V_2 of (40) is generally not totally unimodular. However,
(41) Theorem. From the incidence matrix of each V_p of (40), one can obtain, by subtracting certain rows from others, the incidence matrix of a family of mutually disjoint subsets of E.
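The definition in (36) can be tested directly, if slowly, on small matrices by enumerating every square submatrix. A minimal sketch (our examples; exponential-time, suitable only for tiny matrices):

```python
# Sketch (examples ours): testing total unimodularity (36) by brute force --
# every square submatrix must have determinant 0, 1, or -1.
from itertools import combinations

def det(M):                      # integer determinant by cofactor expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:]
               for row in M[1:]]) for c in range(len(M)))

def totally_unimodular(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Interval matrix (consecutive ones in each column): totally unimodular.
print(totally_unimodular([[1, 1, 0], [0, 1, 1]]))                # True
# Incidence matrix of a triangle: it has a 3x3 submatrix of determinant -2.
print(totally_unimodular([[1, 1, 0], [1, 0, 1], [0, 1, 1]]))     # False
```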
Thus, in the same way, one can obtain from the incidence matrix of the V_1 ∪ V_2 of (40) a matrix of the Hitchcock type.
(42) Theorem. For any polymatroid P(E, f) and any x ∈ P(E, f), if x(A) = f(A) and x(B) = f(B), then either A ∩ B = ∅ or x(A ∩ B) = f(A ∩ B).
Theorems (42), (41), and (34a) imply (35).
(43) Assuming that each V_p of (38) contains the set E, L_p = V_p ∪ {∅} is a particularly simple lattice. For any non-negative non-decreasing function f(i) = b_i, i ∈ V_p, let f(∅) = −f(E). Then f is a β₀-function on L_p.
(44) The only integer vectors in a matroid polyhedron P are the vectors of the independent sets of the matroid, and these vectors are all vertices of P. Thus, (35) implies:
(45) Theorem. Where P_1 and P_2 are the polyhedra of any two matroids M_1 and M_2 on E, the vertices of P_1 ∩ P_2 are precisely the vectors which are vertices of both P_1 and P_2 — namely, the incidence vectors of sets which are independent in both M_1 and M_2.
Where P_1, P_2, and P_3 are the polyhedra of three matroids on E, polyhedron P_1 ∩ P_2 ∩ P_3 generally has many vertices besides those which are vertices of P_1, P_2, and P_3.
Let c = [c_j], j ∈ E, be any numerical weighting of the elements of E. In view of (45), the problem:
(46) Find a set J, independent in both M_1 and M_2, that has maximum weight-sum, ∑_{j∈J} c_j,
is equivalent to the l.p. problem:
(47) Find a vertex x of P_1 ∩ P_2 that maximizes c · x.
(48) Assuming there is a good algorithm for recognizing whether or not a set J ⊆ E is independent in M_1 or in M_2, there is a good algorithm for problem (46). This seems remarkable in view of the apparent complexity of matroid polyhedra in other respects.
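For tiny ground sets, problem (46) can of course be solved by exhaustive search, which is a convenient way to see what the problem asks. A sketch (our example: a partition matroid and a uniform matroid on four elements):

```python
# Sketch (example ours): problem (46) by brute force -- a maximum-weight set
# independent in two matroids at once, here a partition matroid M1 and a
# uniform matroid M2 on E = {0, 1, 2, 3}.
from itertools import chain, combinations

def subsets(E):
    return chain.from_iterable(combinations(E, k) for k in range(len(E) + 1))

blocks = [{0, 1}, {2, 3}]
indep1 = lambda J: all(len(set(J) & B) <= 1 for B in blocks)   # M1: one per block
indep2 = lambda J: len(J) <= 2                                 # M2 = U_{2,4}

c = [4, 3, 2, 1]
best = max((J for J in subsets(range(4)) if indep1(J) and indep2(J)),
           key=lambda J: sum(c[j] for j in J))
print(sorted(best), sum(c[j] for j in best))    # [0, 2] 6
```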
For example, a good algorithm is not known for the problem:
(49) Given a matroid M_1 = (E, F_1) and given an element e ∈ E, minimize |D|, D ⊆ E, where e ∈ D ∉ F_1;
or the problems:
(50) Given three matroids M_1, M_2, and M_3 on E, and given an objective vector c ∈ R_E, maximize c · x where x ∈ P_1 ∩ P_2 ∩ P_3. Or maximize ∑_{j∈J} c_j where J ∈ F_1 ∩ F_2 ∩ F_3.

V

Where f_1 and f_2 are β-functions on L_E, the dual of the l.p.:
(51) Maximize c · x = ∑_{j∈E} c_j x_j, where
(52) for every j ∈ E, x_j ≥ 0; and for every A ∈ K, x(A) ≤ f_1(A) and x(A) ≤ f_2(A);
is the l.p.:
(53) Minimize f · y = ∑_{A∈K} [f_1(A) y_1(A) + f_2(A) y_2(A)], where
(54) for every A ∈ K, y_1(A) ≥ 0 and y_2(A) ≥ 0; and for every j ∈ E, ∑_{j∈A∈K} [y_1(A) + y_2(A)] ≥ c_j.
Combining systems (52) and (54) we get
(55) ∑_{j∈E} x_j [∑_{j∈A∈K} (y_1(A) + y_2(A)) − c_j] + ∑_{A∈K} y_1(A)[f_1(A) − ∑_{j∈A} x_j] + ∑_{A∈K} y_2(A)[f_2(A) − ∑_{j∈A} x_j] ≥ 0.
Expanding and cancelling we get
(56) c · x ≤ f · y
for any x satisfying (52) and any y = (y_1, y_2) satisfying (54).
(57) Equality holds in (56) if and only if equality holds in (55).
The l.p. duality theorem says that
(58) If there is an x⁰, a vertex of P[(52)], which maximizes c · x, then there is a y⁰ = (y⁰_1, y⁰_2), a vertex of P[(54)], such that c · x⁰ = f · y⁰,
(59) and hence such that y⁰ minimizes f · y.
For the present problem obviously there is such an x⁰. The vertices of (54) are not generally all integer-valued when the c_j's are. However,
(60) Theorem. If the c_j's are all integers, then, regardless of whether f_1 and f_2 are integral, there is an integer-valued solution y⁴ = (y⁴_1, y⁴_2) of (54) which minimizes f · y.
(61) Let y³ = (y³_1, y³_2) be any solution of (54) which minimizes f · y. For every j ∈ E and p = 1, 2, let c^p_j = ∑_{j∈A∈K} y³_p(A). For each p = 1, 2 consider the problem:
(62) Minimize f_p · y_p = ∑_{A∈K} f_p(A) y_p(A), where
(63) for every A ∈ K, y_p(A) ≥ 0; and for every j ∈ E, ∑_{j∈A∈K} y_p(A) ≥ c^p_j.
j∈A∈K (64) By (21) for each p, there is an optimum solution, say yp4 , to (62) having the following form: (65) The sets A ∈ K, such that yp4 (A) > 0, form a nested sequence, A1 ⊂ A2 ⊂ A3 ⊂ . . . . Since yp3 is a solution of (63), we have fp yp4 ≤ fp yp3 , for each p, and thus f · y 4 ≤ f · y 3 . Since c1j + c2j ≥ cj for every j ∈ E, y 4 is a solution of (54), and hence y 4 is an optimum solution of (54). Thus, we have that (66) Theorem. There exists a solution y 4 of (54) which minimizes f · y and which has property (65) for each p = 1, 2. The problem, minimize f · y subject to (54) and also subject to yp (A) = 0 for every yp4 (A) = 0, has the form [(32), (33)] where [aij ] is the incidence matrix of a V1 ∪ V2 as in (38). Thus, by (39) and (37), we have: (67) Theorem. If the cj ’s are all integers then the y 4 of (66) can be taken to be integer-valued. In particular this proves (60). An immediate consequence of (35), (60), and the l.p. duality theorem is (68) Theorem. max c · x = min f · y where x ∈ P [(52)] and y ∈ P [(54)]. If f is integral, x can be integral. If c is integral, y can be integral. In particular, where f1 and f2 are the rank functions, r1 and r2 , of any two matroids, M1 = (E, F1 ) and M2 = (E, F2 ), and where every cj = 1, (68) implies: (69) Theorem. where S ⊆ E. max |J| = min[r1 (S) + r2 (E − S)], where J ∈ F1 ∩ F2 , and (A related result is given by Tutte [16]). VI (70) Theorem. For each i ∈ E ′ , let Qi be a subset of E. For each A′ ⊆ E ′ ,  ′ let u(A ) = i∈A′ Qi . Let f be any integral β-function on LE . 20 J. Edmonds Then f ′ (A′ ) = f (u(A′ )) is an integral β-function on LE ′ = {A′ : A′ ⊆ E ′ }. (71) This follows from the relations u(A′ ∪ B ′ ) = u(A′ ) ∪ u(B ′ ) and u(A′ ∩ B ′ ) ⊆ u(A′ ) ∩ u(B ′ ). (72) Applying (15) to f ′ yields a matroid on E ′ . (73) In particular, taking f to mean cardinality, if we let J ′ ⊆ E ′ be a member of F ′ iff |A′ | ≤ |u(A′ )| for every A′ ⊆ J ′ , then M ′ = (E ′ , F ′ ) is a matroid. 
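The independence test of (73) can be sketched directly. The family below is hypothetical data; `independent` checks |A′| ≤ |u(A′)| for every A′ ⊆ J′ by enumeration.

```python
from itertools import combinations

# Hypothetical family Q_i, i ∈ E' = {0,1,2,3}, of subsets of E = {a,b,d}.
Q = {0: {'a', 'b'}, 1: {'b'}, 2: {'a', 'b'}, 3: {'b', 'd'}}

def u(A):
    """u(A') = union of the Q_i over i in A', as in (70)."""
    return set().union(*(Q[i] for i in A)) if A else set()

def independent(J):
    """(73): J' is independent iff |A'| <= |u(A')| for every A' ⊆ J'."""
    return all(len(u(set(A))) >= r
               for r in range(1, len(J) + 1)
               for A in combinations(J, r))

print(independent({0, 1, 3}), independent({0, 1, 2}))
```

Here {0, 1, 2} is dependent because Q0 ∪ Q1 ∪ Q2 = {a, b} has only two elements, so those three sets cannot have distinct representatives.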
(74) Hall's SDR theorem says that: |A′| ≤ |u(A′)| for every A′ ⊆ J′ iff the family {Qi}, i ∈ J′, has a system of distinct representatives, i.e., a transversal. A transversal of a family {Qi}, i ∈ J′, is a set {ji}, i ∈ J′, of distinct elements such that ji ∈ Qi. Thus, (75) Theorem. For any finite family {Qi}, i ∈ E′, of subsets of E, the sets J′ ⊆ E′ such that {Qi}, i ∈ J′, has a transversal are the independent sets of a matroid on E′ (called a transversal matroid). There are a number of interesting ways to derive (75). Some others are in [2], [3], [5], and [12]. The present derivation is the way (75) was first obtained and communicated. The following is the same result with the roles of elements and sets interchanged. (76) Let J ∈ F0 iff, for some J′ ⊆ E′, J is a transversal of {Qi}, i ∈ J′. That is, let J ∈ F0 iff J is a partial transversal of {Qi}, i ∈ E′. Then M0 = (E, F0) is a matroid. (77) Thus, where P0 is the polyhedron of M0 and where P is the polyhedron of any other matroid, M = (E, F), on E, the vertices of P0 ∩ P are the incidence vectors of the M-independent partial transversals of {Qi}, i ∈ E′. By (8), the rank function r0 of M0 is, for each A ⊆ E, (78) r0(A) = min [|A0| + |{i : (A − A0) ∩ Qi ≠ ∅}|] where A0 ⊆ A. Combining (69) and (78), we get

(79) max |J| = min [r(A1) + |A0| + |E′| − |{i : Qi ⊆ A1 ∪ A0}|] = min [r(u(A′)) + |E′| − |A′|],

where J ∈ F0 ∩ F, A0 ∪ A1 ⊆ E, A0 ∩ A1 = ∅, and A′ ⊆ E′. In particular, (79) implies the following theorem of Rado [14], given in 1942. (80) For any matroid M on E, a family {Qi}, i ∈ E′, of subsets of E has a transversal which is independent in M iff |A′| ≤ r(u(A′)) for every A′ ⊆ E′. Taking the f of (70) to be r, (70), (15), and (80) imply: (81) Theorem.
For any matroid M on E, and any family {Qi}, i ∈ E′, of subsets of E, the sets J′ ⊆ E′ such that {Qi}, i ∈ J′, has an M-independent transversal are the independent sets of a matroid on E′. (82) A bipartite graph G consists of two disjoint finite sets, V1 and V2, of nodes and a finite set E(G) of edges such that each member of E(G) meets one node in V1 and one node in V2. The following theorem of König is a prototype of (69). (83) Theorem. For any bipartite graph G, max |J|, J ⊆ E(G), such that (a) no two members of J meet the same node in V1, and (b) no two members of J meet the same node in V2, equals min(|T1| + |T2|), T1 ⊆ V1 and T2 ⊆ V2, such that every member of E(G) meets a node in T1 or a node in T2. (84) To get the Hall theorem, (74), from (83), let V1 be the E′ of (70), let V2 be the E of (70), and let there be an edge in E(G) which meets i ∈ V1 and j ∈ V2 iff j ∈ Qi. Clearly, if the family {Qi}, i ∈ E′, has no transversal then, in (83), max |J| < |V1|. If the latter holds, then by (83), the T1 of min(|T1| + |T2|), in (83), is such that |V1 − T1| > |u(V1 − T1)|. (85) For the König-theorem instance, (83), of (69), the matroids M1 = (E, F1) and M2 = (E, F2) are particularly simple: Let E = E(G). For p = 1 and p = 2, let J ⊆ E(G) be a member of Fp iff no two members of J meet the same node in Vp. (86) Where P1 and P2 are the polyhedra of these two matroids, finding a vertex x of P1 ∩ P2 which maximizes c · x is essentially the optimal assignment problem. That is, the Hitchcock problem where every bi = 1. (87) Clearly, the inequality x(A) ≤ rp(A) is a face of Pp, that is, A is rp-closed and rp-inseparable, iff, for some node v ∈ Vp, A is the set of edges which meet v.

VII

(88) Let {Mi}, i ∈ I, be a family of matroids, Mi = (E, Fi), having rank functions ri. Let J ⊆ E be a member of F iff: (89) |A| ≤ ∑_i ri(A) for every A ⊆ J. Since f(A) = ∑_i ri(A) is a β-function on LE, (90) Theorem. The M = (E, F) of (88) is a matroid, called the sum of the matroids Mi. In [5], and in [2], it is shown that (91) Theorem. J ⊆ E satisfies (89) iff J can be partitioned into sets Ji such that Ji ∈ Fi. (92) An algorithm, MPAR, is given there for either finding such a partition of J or else finding an A ⊆ J which violates (89). That is, for recognizing whether or not J ∈ F. (93) The algorithm is a good one, assuming: (94) that a good algorithm is available for recognizing, for any K ⊆ E and for each i ∈ I, whether or not K ∈ Fi. (95) The definition of a matroid M = (E, F) is essentially that, modulo the ease of recognizing, for any J ⊆ E, whether or not J ∈ F, one has what is perhaps the easiest imaginable algorithm for finding, in any A ⊆ E, a maximum cardinality subset J of A such that J ∈ F. (96) In particular, by virtue of (90), assuming (94), MPAR provides a good algorithm for finding a maximum cardinality set J ⊆ E which is partitionable into sets Ji ∈ Fi. (97) Assuming (94), MPAR combined with (19) is a good algorithm for, given numbers cj, j ∈ E, finding a set J which is partitionable into sets Ji ∈ Fi and such that ∑_{j∈J} cj is maximum. Where r is the rank function of matroid M = (E, F), let (98) r*(A) = |A| + r(E − A) − r(E) for every A ⊆ E. Substituting r(E) = |E| − r*(E), and A for E − A, in (98), yields (99) r(A) = |A| + r*(E − A) − r*(E). (100) It is easy to verify that r* is the rank function of a matroid M* = (E, F*), e.g., that r* satisfies (9). M* is called the dual of M. By (99), M** = M. (101) By (98), |J| = r*(J) iff r(E − J) = r(E). Therefore, J ∈ F* iff E − J contains an M-basis of E, i.e., a basis of M. Thus, it can be determined whether or not J ∈ F* by obtaining an M-basis of E − J and observing whether or not its cardinality equals r(E). Where r is the rank function of a matroid M = (E, F), and where n is a non-negative integer, let (102) r^(n)(A) = min[n, r(A)] for every A ⊆ E.
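The duality construction of (98)–(101) can be checked mechanically on a small example. The sketch below picks the uniform matroid U_{2,4} (an arbitrary choice, made up for illustration) and verifies that M** = M and that J is independent in M* exactly when E − J contains a basis of M.

```python
from itertools import combinations

E = frozenset({0, 1, 2, 3})

def r(A):
    """Rank function of the uniform matroid U_{2,4}: r(A) = min(2, |A|)."""
    return min(2, len(A))

def dual(rank):
    """The dual rank function r*(A) = |A| + r(E - A) - r(E), as in (98)."""
    return lambda A: len(A) + rank(E - A) - rank(E)

r_star = dual(r)
r_star_star = dual(r_star)

subsets = [frozenset(A) for k in range(5) for A in combinations(E, k)]
# (100): M** = M
assert all(r_star_star(A) == r(A) for A in subsets)
# (101): J is independent in M* iff E - J contains a basis of M
for J in subsets:
    assert (r_star(J) == len(J)) == (r(E - J) == r(E))
print("dual checks passed")
```

Incidentally, U_{2,4} is self-dual here: r* comes out equal to r, which makes the M** = M check easy to confirm by hand.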
(103) Clearly, r^(n) is the rank function of a matroid M^(n) = (E, F^(n)), called the n-truncation of M, such that J ∈ F^(n) iff J ∈ F and |J| ≤ n. (104) For matroids M1 = (E, F1) and M2 = (E, F2), and any integer n ≤ r2(E), by (103) and (101), there is a set J ∈ F1 ∩ F2 such that |J| = n iff E can be partitioned into a set J1 ∈ F1^(n) and a set J2 ∈ F2*. Theorem (91) says this is possible iff |A| ≤ r1^(n)(A) + r2*(A) for every A ⊆ E. Using (102) and (98), this implies (69). (105) Using MPAR, a maximum cardinality J ∈ F1 ∩ F2 can be found as follows: Find a maximum cardinality set H = J1 ∪ J2 such that J1 ∈ F1 and J2 ∈ F2*. Extend J2 to B, an M2*-basis of H. Clearly, B is an M2*-basis of E, and so H − B ∈ F1 ∩ F2. It is easy to verify that |H − B| = max |J|, J ∈ F1 ∩ F2. (106) It is more practical to go in the other direction, obtaining for a given family of matroids Mi = (E, Fi), i ∈ I, an "optimum" family of mutually disjoint sets Ji ∈ Fi, by using the "matroid intersection algorithm" of (48) on the following two matroids M1 = (EI, F1) and M2 = (EI, F2). Let EI consist of all pairs (j, i), j ∈ E and i ∈ I. There is a 1–1 correspondence between sets J ⊆ EI and families {Ji}, i ∈ I, of sets Ji ⊆ E, where J corresponds to the family {Ji} such that j ∈ Ji ⇐⇒ (j, i) ∈ J. Let M1 = (EI, F1) be the matroid such that J ⊆ EI is a member of F1 iff the corresponding sets Ji are mutually disjoint — that is, if and only if the j's of the members of J are distinct. Let M2 = (EI, F2) be the matroid such that J ⊆ EI is a member of F2 iff the corresponding sets Ji are such that Ji ∈ Fi. (Nash-Williams has developed the present subject in another interesting way [13].)

VIII

(107) If f(a) is a β-function on L and k is a not-too-large constant, then f(a) − k is a β0-function on L. It is useful to apply (15) to non-β β0-functions.
(108) For example, let G be a graph having edge-set E = E(G) and node-set V = V(G). For each j ∈ E, let Qj be the set of nodes which j meets. For every A ⊆ E, let f(A) = |u(A)| − 1. Then, by (70), f(A) is a β0-function on LE. (109) Applying (15) to this f yields a matroid, M(G) = (E, F(G)). (110) The minimal dependent sets of a matroid M = (E, F), i.e., the minimal subsets of E which are not members of F, are called the circuits of M. (111) The circuits of M(G) are the minimal non-empty sets A ⊆ E such that |A| = |u(A)|. (112) A set J ⊆ E is a member of F(G) iff J together with the set u(J) of nodes is a forest in G.

IX

(113) Let G be a directed graph. For any R ⊆ V(G), a branching B of G rooted at R is a forest of G such that, for every v ∈ V(G), there is a unique directed path in B (possibly having zero edges) from some node in R to v. (114) The following problem is solved using matroid intersection. (115) Given any directed graph G, given a numerical weight cj for each j ∈ E = E(G), and given sets Ri ⊆ V(G), i ∈ I, find edge-disjoint branchings Bi, i ∈ I, rooted respectively at Ri, which minimize s = ∑ cj, j ∈ ∪_{i∈I} Bi. (116) The problem easily reduces to the case where each Ri consists of the same single node, v0 ∈ V(G). That is, find n = |I| edge-disjoint branchings Bi, each rooted at node v0, which minimize s. (117) Where F(G) is as defined in (109), let J ⊆ E be a member of F1 iff it is the union of n members of F(G). By (91), M1 = (E, F1) is a matroid. (118) Let J ⊆ E be a member of F2 iff no more than n edges of J are directed toward the same node in V(G) and no edge of J is directed toward v0. Clearly, M2 = (E, F2) is a matroid: (119) Theorem. A set J ⊆ E is the edge-set of n edge-disjoint branchings of G, rooted at node v0 ∈ V(G), iff |J| = n(|V(G)| − 1) and J ∈ F1 ∩ F2. This is a consequence of the following. (120) Theorem.
The maximum number of edge-disjoint branchings of G, rooted at v0, equals the minimum over all C, v0 ∈ C ⊂ V(G), of the number of edges having their tails in C and their heads not in C. (121) There is an algorithm for finding such a family of branchings in G, and in particular for partitioning a set J as described in (119) into branchings as described in (119). (122) Let P1 and P2 be the polyhedra of matroids M1 and M2 respectively. Let H = {x : x(E) = n(|V(G)| − 1)}. It follows from (45) that (123) A vector x ∈ R^E is a vertex of P1 ∩ P2 ∩ H iff it is the incidence vector of a set J as described in (119). (124) A variant of the matroid-intersection algorithm will find such an x which minimizes c · x. The case n = 1 is treated in [4].

X

(125) Let each Li be a commutative semigroup. We say a ≤ b, for {a, b} ⊆ Li, iff a + d = b for some d ∈ Li. (126) A function f from L0 into L1 is called a ψ-function iff (127) for every {a, d} ⊆ L0, f(a) ≤ f(a + d); and (128) for every {a, b, c} ⊆ L0, f(a + b + c) + f(c) ≤ f(a + c) + f(b + c). (129) Li is called a ψ-semigroup iff, for {a, b, c} ⊆ Li, a + c + c = b + c + c =⇒ a + c = b + c. For example, Li is a ψ-semigroup if it is cancellative or if it is idempotent. (130) Theorem. If f(·) is a ψ-function from L0 into L1, g(·) is a ψ-function from L1 into L2, and L1 is a ψ-semigroup, then g(f(·)) is a ψ-function from L0 into L2. (131) Theorem. A function f from a lattice, L0, into the non-negative reals, L1, satisfies (128), where "+" in L0 means "∨" and "+" in L1 means ordinary addition, iff f is non-decreasing, i.e., satisfies (127), and f is submodular. (132) Thus, β-functions can be obtained by composing ψ-functions. (133) Theorem. A function f from the non-negative reals into the non-negative reals is a ψ-function, relative to addition in the image and preimage, iff it is non-decreasing and concave. (134) Theorem.
A function f from a lattice, L0, into a lattice, L1, is a ψ-function, relative to joins "∨" in each, iff it is a join-homomorphism, i.e., for every {a, b} ⊆ L0, f(a ∨ b) = f(a) ∨ f(b). Let h(S) be any real (integer)-valued function of the elements S ∈ L of a finite lattice L. In principle, an (integral) non-decreasing submodular function f on L can be obtained recursively from h as follows: (135) Theorem. For each S ∈ L, let g(S) = min[h(S), g(A) + g(B) − g(A ∧ B)] where A < S, B < S, and A ∨ B = S. Then g is submodular. For each S ∈ L, let f(S) = min g(A) where S ≤ A ∈ L. Then f is submodular and non-decreasing. If h is submodular then g = h. If h is submodular and non-decreasing then f = h. (A similar construction was communicated to me by D. A. Higgs.) (136) The β-functions on a finite lattice L correspond to the members of a polyhedral cone β(L) in the space of vectors y = [yA], A ∈ L − {∅}. Where y∅ = 0, β(L) is the set of solutions to the system: (137) yA + yB − yA∨B − yA∧B ≥ 0 and yA∨B − yA ≥ 0 for every A ∈ L and B ∈ L. (138) Characterizing the extreme rays of β(L), in particular for L = {A : A ⊆ E}, appears to be difficult.

References

1. Dilworth, R.P., Dependence Relations in a Semimodular Lattice, Duke Math. J., 11 (1944), 575–587.
2. Edmonds, J. and Fulkerson, D.R., Transversals and Matroid Partition, J. Res. Nat. Bur. Standards, 69B (1965), 147–153.
3. Edmonds, J., Systems of Distinct Representatives and Linear Algebra, J. Res. Nat. Bur. Standards, 71B (1967), 241–245.
4. Edmonds, J., Optimum Branchings, J. Res. Nat. Bur. Standards, 71B (1967), 233–240, reprinted with [5], 346–361.
5. Edmonds, J., Matroid Partition, Math. of the Decision Sciences, Amer. Math. Soc. Lectures in Appl. Math., 11 (1968), 335–345.
6. Gale, D., Optimal assignments in an ordered set: an application of matroid theory, J. Combin. Theory, 4 (1968), 176–180.
7.
Hoffman, A.J., Some Recent Applications of the Theory of Linear Inequalities to Extremal Combinatorial Analysis, Proc. Amer. Math. Soc. Symp. on Appl. Math., 10 (1960), 113–127.
8. Ingleton, A.W., A Note on Independence Functions and Rank, J. London Math. Soc., 34 (1959), 49–56.
9. Kruskal, J.B., On the shortest spanning subtree of a graph, Proc. Amer. Math. Soc., 7 (1956), 48–50.
10. Kuhn, H.W. and Tucker, A.W., eds., Linear inequalities and related systems, Annals of Math. Studies, no. 38, Princeton Univ. Press, 1956.
11. Lehman, A., A Solution of the Shannon Switching Game, J. Soc. Indust. Appl. Math., 12 (1964), 687–725.
12. Mirsky, L. and Perfect, H., Applications of the Notion of Independence to Problems in Combinatorial Analysis, J. Combin. Theory, 2 (1967), 327–357.
13. Nash-Williams, C.St.J.A., An application of matroids to graph theory, Proc. Int'l. Symposium on the Theory of Graphs, Rome 1966, Dunod.
14. Rado, R., A theorem on Independence Relations, Quart. J. Math., 13 (1942), 83–89.
15. Rado, R., A Note on Independence Functions, Proc. London Math. Soc., 7 (1957), 300–320.
16. Tutte, W.T., Menger's Theorem for Matroids, J. Res. Nat. Bur. Standards, 69B (1965), 49–53.

Matching: A Well-Solved Class of Integer Linear Programs

Jack Edmonds¹ and Ellis L. Johnson²

¹ National Bureau of Standards, Washington, D.C., U.S.A.
² I.B.M. Research Center, Yorktown Heights, NY, U.S.A.

A main purpose of this work is to give a good algorithm for a certain well-described class of integer linear programming problems, called matching problems (or the matching problem). Methods developed for simple matching [2,3], a special case to which these problems can be reduced [4], are applied directly to the larger class. In the process, we derive a description of a system of linear inequalities whose polyhedron is the convex hull of the admissible solution vectors to the given matching problem.
At the same time, various combinatorial results about matchings are derived and discussed in terms of graphs. (1) The general integer linear programming problem can be stated as: Minimize z = ∑_{j∈E} cj xj, where cj is a given real number, subject to (2) xj an integer for each j ∈ E; (3) 0 ≤ xj ≤ αj, j ∈ E, where αj is a given positive integer or +∞; (4) ∑_{j∈E} aij xj = bi, i ∈ V, where aij and bi are given integers. V and E are index sets having cardinalities |V| and |E|. (5) The integer program (1) is called a matching problem whenever ∑_{i∈V} |aij| ≤ 2 holds for all j ∈ E. (6) A solution to the integer program (1) is a vector [xj], j ∈ E, satisfying (2), (3), and (4), and an optimum solution is a solution which minimizes z among all solutions. When the integer program is a matching problem, a solution is called a matching and an optimum solution is an optimum matching. If the integer restriction (2) is omitted, the problem becomes a linear program. An optimum solution to that linear program will typically have fractional values. There is an important class of linear programs, called transportation or network flow problems, which have the property that for any integer right-hand side bi, i ∈ V, and any cost vector cj, j ∈ E, there is an optimum solution which has all integer xj, j ∈ E. The class of matching problems includes that class of linear programs but, in addition, includes problems for which omitting the integer restriction (2) results in a linear program with no optimum solution which is all integer. M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 27–30, 2003. © Springer-Verlag Berlin Heidelberg 2003. Many interesting and practical combinatorial problems can be formulated as integer linear programs. However, limitations in the known methods for treating general integer linear programs have made such formulations of limited value.
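Definitions (1)–(6) can be made concrete with a toy instance; all the numbers below are made up for illustration. The sketch checks condition (5) and then enumerates the integer vectors satisfying (2), (3), (4) to find an optimum matching.

```python
from itertools import product

# A tiny hypothetical instance of (1)-(4): 3 nodes, 3 edges.
V, Eidx = range(3), range(3)
a = [[1, 0, 1],        # a[i][j]: node-edge coefficients of (4)
     [1, 1, 0],
     [0, 1, 1]]
b = [1, 1, 2]          # right-hand sides b_i
alpha = [1, 1, 1]      # capacities of (3)
c = [2, 3, 1]          # costs c_j

# (5): a matching problem requires sum_i |a_ij| <= 2 for every column j.
assert all(sum(abs(a[i][j]) for i in V) <= 2 for j in Eidx)

def solutions():
    """Enumerate the integer vectors satisfying (2), (3), (4)."""
    for x in product(*(range(alpha[j] + 1) for j in Eidx)):
        if all(sum(a[i][j] * x[j] for j in Eidx) == b[i] for i in V):
            yield x

# (6): an optimum matching minimizes z among all matchings.
best = min(solutions(), key=lambda x: sum(c[j] * x[j] for j in Eidx))
print(best, sum(c[j] * best[j] for j in Eidx))
```

The enumeration is exponential in |E|; the point of the paper is precisely that matching problems admit a good (polynomially bounded) algorithm instead.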
By contrast with general integer linear programming, the matching problem is well-solved. (7) Theorem. There is an algorithm for the general matching problem such that an upper bound on the amount of work which it requires for any input is on the order of the product of (8), (9), (10), and (11). An upper bound on the memory required is on the order of (8) times (11) plus (9) times (11). (8) |V|², the number of nodes squared; (9) |E|, the number of edges; (10) ∑_{i∈V} |bi| + 2 ∑_{αj<∞} αj; (11) log(|V| max |bi| + |E| max_{αj<∞} αj) + log(∑_{j∈E} |cj|). (12) Theorem. For any matching problem, (1), the convex hull P of the matchings, i.e., of the solutions to [(2), (3), and (4)], is the polyhedron of solutions to the linear constraints (3) and (4) together with additional inequalities: (13) ∑_{j∈W} xj − ∑_{j∈U} xj ≥ 1 − ∑_{j∈U} αj. There is an inequality (13) for every pair (T, U) where T is a subset of V and U is a subset of E such that (14) ∑_{i∈T} |aij| = 1 for each j ∈ U; (15) ∑_{i∈T} bi + ∑_{j∈U} αj is an odd integer. The W in (13) is given by (16) W = {j ∈ E : ∑_{i∈T} |aij| = 1} − U. (17) Let Q denote the set of pairs (T, U). The inequalities (13), one for each (T, U) ∈ Q, are called the blossom inequalities of the matching problem (1). By Theorem (12), the matching problem is the linear program: (18) Minimize z = ∑_{j∈E} cj xj subject to (3), (4), and (13). (19) Theorem. If cj is an integral multiple of ∑_{i∈V} |aij|, and if the l.p. dual of (18) has an optimum solution, then it has an optimum solution which is integer-valued. Using l.p. duality, theorems (12) and (19) yield a variety of combinatorial existence and optimality theorems. To treat matching more graphically, we use what we will call bidirected graphs. All of our graphs are bidirected, so graph is used to mean bidirected graph. (20) A graph G consists of a set V = V(G) of nodes and a set E = E(G) of edges.
Each edge has one or two ends and each end meets one node. Each end of an edge is either a head or a tail. (21) If an edge has two ends which meet the same node it is called a loop. If it has two ends which meet different nodes, it is called a link. If it has only one end it is called a lobe. An edge is called directed if it has one head and one tail. Otherwise it is called undirected, all-head, or all-tail accordingly. (22) The node-edge incidence matrix of a graph is a matrix A = [aij] with a row for each node i ∈ V and a column for each edge j ∈ E, such that aij = +2, +1, 0, −1, or −2, according to whether edge j has two tails, one tail, no end, one head, or two heads meeting node i. (Directed loops are not needed for the matching problem.) (23) If we interpret the capacity αj to mean that αj copies of edge j are present in graph Gα, then xj copies of j for each j ∈ E, where x = [xj] is a solution of [(2), (3), (4)], gives a subgraph Gx of Gα. The degree of node i in Gx is bi, the number of tails of Gx which meet i minus the number of heads of Gx which meet i. Thus, where x is an optimum matching, Gx can be regarded as an "optimum degree-constrained subgraph" of Gα, where the bi's are the degree constraints. (24) A Fortran code of the algorithm is available from either author. It was written in large part by Scott C. Lockhart, who also wrote many comments interspersed through the deck to make it understandable. Several random problem generators are included. It has been run on a variety of problems on a Univac 1108, IBM 7094, and IBM 360. On the latter, problems of 300 nodes, 1500 edges, b = 1 or 2, α = 1, and random cj's from 1 to 10, take about 30 seconds. Running times fit rather closely a formula which is an order of magnitude better than our theoretical upper bound.

References

1. Berge, C., Théorie des graphes et ses applications, Dunod, Paris, 1958.
2. Edmonds, J., Paths, trees, and flowers, Canad. J. Math. 17 (1965), 449–467.
3.
Edmonds, J., Maximum matching and a polyhedron with 0,1-vertices, J. Res. Nat. Bur. Standards 69B (1965), 125–130.
4. Edmonds, J., An introduction to matching, preprinted lectures, Univ. of Mich. Summer Engineering Conf., 1967.
5. Johnson, E.L., Programming in networks and graphs, Operations Research Center Report 65-1, Etcheverry Hall, Univ. of Calif., Berkeley.
6. Tutte, W.T., The factorization of linear graphs, J. London Math. Soc. 22 (1947), 107–111.
7. Tutte, W.T., The factors of graphs, Canad. J. Math. 4 (1952), 314–328.
8. Witzgall, C. and Zahn, C.T. Jr., Modification of Edmonds' matching algorithm, J. Res. Nat. Bur. Standards 69B (1965), 91–98.
9. White, L.J., A parametric study of matchings, Ph.D. Thesis, Dept. of Elec. Engineering, Univ. of Mich., 1967.
10. Balinski, M., A labelling method for matching, Combinatorics Conference, Univ. of North Carolina, 1967.
11. Balinski, M., Establishing the matching polytope, preprint, City Univ. of New York, 1969.

Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems

Jack Edmonds¹ and Richard M. Karp²

¹ National Bureau of Standards, Washington, D.C., U.S.A.
² University of California, Berkeley, CA, U.S.A., formerly at I.B.M. Thomas J. Watson Research Center

This paper presents new algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem. Upper bounds on the number of steps in these algorithms are derived, and are shown to improve on the upper bounds of earlier algorithms.

1 The Maximum Flow Problem

A network N is a directed graph together with an assignment of nonnegative capacity c(u, v) to each arc (u, v). A flow is an assignment of a real number f(u, v) to each arc so that (i) 0 ≤ f(u, v) ≤ c(u, v); (ii) for any fixed u, ∑_v f(u, v) = ∑_v f(v, u). One arc (t, s) is distinguished, and a flow which maximizes f(t, s) is called maximum. Let f* denote the maximum value of f(t, s).
Ford and Fulkerson [1] have given a labelling algorithm to compute a maximum flow by repeated flow changes along "flow-augmenting paths". They do not specify which flow-augmenting path to choose. In Fig. 1, let M be any positive integer.

Fig. 1 (a network on s, t, and two intermediate nodes; arc capacities M, M, 1, M, M)

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 31–33, 2003. © Springer-Verlag Berlin Heidelberg 2003.

Then, if a flow-augmenting path of length 3 is selected at each step, 2M augmentations will be needed to determine that f* = 2M. Let n denote the number of nodes of the network N.

Theorem 1. If each flow augmentation is made along an augmenting path having a minimum number of arcs, then a maximum flow will be obtained after no more than (n³ − n)/4 augmentations.

Let c̄ be the average capacity of an arc (excluding the distinguished arc (t, s)).

Theorem 2. If each flow augmentation is chosen to produce a maximum increase in f(t, s), then the maximum flow will be obtained after no more than 1 + (n²/4)(1 + 2/(n² − 2))(2 ln n + ln c̄) augmentations.

2 The Minimum-Cost Flow Problem

Assign to each arc (u, v) a nonnegative cost d(u, v). Define the cost of a flow f as ∑ d(u, v) f(u, v). We seek a flow f of minimum cost subject to the constraint that f(t, s) = f*. Define the cost of a flow-augmenting path P as

∑_{(u,v) a forward arc in P} d(u, v) − ∑_{(u,v) a reverse arc in P} d(u, v).

The following algorithm solves the minimum-cost flow problem: start with the zero flow, and repeatedly augment along a minimum-cost flow-augmenting path. Stop when a maximum flow is obtained. In a direct implementation of this algorithm, the selection of each flow-augmenting path requires the calculation of a minimum-cost path through a network which includes arcs of negative cost (namely, the reverse arcs of flow-augmenting paths). We show, however, that the algorithm can be modified so that, in each minimum-cost calculation, all arcs have nonnegative cost.
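The selection rule of Theorem 1 can be sketched with a breadth-first implementation (stated here in the usual s-to-t form, with hypothetical names a and b for the two intermediate nodes of Fig. 1). On that network it reaches f* = 2M after only two augmentations instead of 2M.

```python
from collections import deque

def max_flow(cap, s, t):
    """Ford-Fulkerson with breadth-first search, so every augmenting path
    has a minimum number of arcs (the rule of Theorem 1).
    cap: dict u -> {v: capacity}. Returns (flow value, augmentation count)."""
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse residual arcs
    value = augmentations = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:            # BFS for a shortest path
            u = queue.popleft()
            for v, r in res[u].items():
                if r > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value, augmentations
        path, v = [], t
        while parent[v] is not None:                # walk back to s
            path.append((parent[v], v))
            v = parent[v]
        delta = min(res[u][v] for u, v in path)     # bottleneck capacity
        for u, v in path:
            res[u][v] -= delta
            res[v][u] += delta
        value += delta
        augmentations += 1

M = 10**6
cap = {'s': {'a': M, 'b': M}, 'a': {'b': 1, 't': M}, 'b': {'t': M}}
print(max_flow(cap, 's', 't'))
```

The two shortest s-to-t paths each have two arcs and capacity M, so the length-3 path through the middle arc is never chosen.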
This is done by replacing d(u, v) at the kth augmentation by d(u, v) + π^k(u) − π^k(v), where the "node potentials" π^k(u) are derived as a by-product of finding the (k − 1)th augmenting path. This refinement is significant, since minimum-cost path problems with all costs nonnegative can be solved in O(n²) steps (where a step is an operation on scalar quantities, such as addition or comparison), whereas existing methods for general minimum-cost path problems require O(n³) steps. This refinement reduces the number of steps in the assignment problem, for example, from O(n⁴) to O(n³).

3 A Scaling Method for the Hitchcock Problem

An instance of the Hitchcock transportation problem is specified as follows:

minimize ∑_{i=1}^m ∑_{j=1}^n cij xij
subject to ∑_i xij = bj, j = 1, 2, . . . , n;
∑_j xij = ai, i = 1, 2, . . . , m;
xij ≥ 0.

Here the "supplies" ai and "demands" bj are nonnegative integers such that ∑_{i=1}^m ai = ∑_{j=1}^n bj = B. A Hitchcock problem can be expressed as a minimum-cost flow problem on a network with m + n + 2 nodes, and can be solved within B flow augmentations. The scaling method reduces this bound by applying the technique of flow augmentations to a sequence of approximate problems. In the pth approximate problem the ith supply is ⌊ai/2^p⌋, and the jth demand is ⌊bj/2^p⌋. A fictitious supply or demand is added in a standard way to establish a balance of supplies and demands. The original problem is the zeroth approximate problem. Approximate problems are solved successively, using the flow-augmenting path technique. A saving is effected by using, as a starting solution for approximate problem p − 1, twice the optimum solution for problem p. The over-all process is shown to require only n log₂(1 + B/n) flow augmentations.
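The effect of the potentials can be sketched on a small made-up digraph: taking π equal to shortest-path distances from s, every reweighted cost d(u, v) + π(u) − π(v) is nonnegative, and arcs on shortest paths get reduced cost 0.

```python
# Hypothetical arc costs d(u, v); all names are made up for illustration.
edges = {('s', 'a'): 2, ('s', 'b'): 5, ('a', 'b'): 1, ('a', 'c'): 4, ('b', 'c'): 1}

# Bellman-Ford relaxation from s gives the potentials pi (it also handles
# the negative reverse-arc costs that arise after augmentations).
nodes = {u for e in edges for u in e}
pi = {v: float('inf') for v in nodes}
pi['s'] = 0
for _ in range(len(nodes) - 1):
    for (u, v), d in edges.items():
        pi[v] = min(pi[v], pi[u] + d)

# Reduced costs d(u,v) + pi(u) - pi(v) are all nonnegative.
reduced = {(u, v): d + pi[u] - pi[v] for (u, v), d in edges.items()}
assert all(rc >= 0 for rc in reduced.values())
print(pi, reduced)
```

Since path lengths under the reduced costs differ from the original ones only by the constant π(s) − π(v), minimum-cost paths are preserved, which is why the next search can use a nonnegative-cost method.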
(The material of this section was presented by the present authors under the title "A Technique for Accelerating the Solution of Transportation Problems" at the Second Annual Princeton Conference on Information Sciences and Systems, March, 1968.)

References

1. Ford, L.R. and Fulkerson, D.R., Flows in Networks, Princeton University Press, 1962.
2. Busacker, R.G. and Saaty, T.L., Finite Graphs and Networks, McGraw-Hill, 1965.

Connected Matchings

Kathie Cameron
Department of Mathematics, Wilfrid Laurier University, Waterloo, Ontario N2L 3C5, Canada
kcameron@wlu.ca

Abstract. A connected matching M in a graph G is a matching such that every pair of edges of M is joined by an edge of G. Plummer, Stiebitz and Toft introduced connected matchings in connection with their study of the famous Hadwiger Conjecture. In this paper, I prove that the connected matching problem is NP-complete for 0-1-weighted bipartite graphs, but polytime-solvable for chordal graphs and for graphs with no circuits of size 4.

1 Introduction

A matching is a set of edges, no two of which meet a common node. The optimum matching problem was well-solved by Edmonds [6,7]. Here I consider a particular type of matching. We say edges e and f are joined by an edge if an endpoint of e is joined to an endpoint of f. A connected matching M in a graph G is a matching such that every pair of edges of M is joined by an edge of G. Plummer, Stiebitz and Toft [14] introduced connected matchings in connection with their study of the famous Hadwiger Conjecture, which says that the chromatic number of a graph G is at most the maximum number k for which G has a complete subgraph on k nodes, Kk, as a minor. In other words, any graph either has a (k − 1)-colouring of its nodes, or has Kk as a minor.
Their work suggests that connected matchings will play an important role in the solution of this conjecture. Note that by contracting the edges of a connected matching of size m, we obtain a complete subgraph of size m, and thus in graphs with no Km minor, the maximum size of a connected matching is at most m − 1. In particular, the maximum size of a connected matching in a planar graph is at most 4. Plummer, Stiebitz and Toft [14] showed that, for a given integer m, the problem of finding a connected matching of size at least m is NP-complete for general graphs. Independently, I showed the problem of finding a connected matching of weight at least m is NP-complete for 0-1-weighted bipartite graphs. The proof is in Section 2 below. In this paper, I will give a polytime algorithm for finding a largest connected matching in chordal graphs. I will prove that in graphs with no circuit on four nodes, the maximum size of a connected matching is at most 5, and thus a largest one can be found in an obvious way.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 34–38, 2003. © Springer-Verlag Berlin Heidelberg 2003.

A notion closely related to that of connected matchings is induced matchings. An induced matching M in a graph G is a matching such that no two edges of M are joined by an edge of G; that is, an induced matching is a matching which forms an induced subgraph. Induced matchings have been studied by several authors. Stockmeyer and Vazirani [15] and I [1] independently proved that the problem of finding an induced matching of size at least m is NP-complete for bipartite graphs, and Ko and Shepherd [12] gave an easy proof that it is NP-complete for planar graphs.
It has been shown that the problem of finding a largest induced matching can be solved in polytime in chordal graphs by me [1], in circular-arc graphs by Golumbic and Laskar [9], in cocomparability graphs by Golumbic and Lewenstein [10], in asteroidal-triple-free graphs by Jou-Ming Chang [4] and me [2], in weakly chordal graphs by me, Sritharan, and Tang [3], and in Gavril's interval-filament graphs by me [2]. Interval-filament graphs include cocomparability graphs [8] and polygon-circle graphs [8], and polygon-circle graphs include chordal graphs [11], circular-arc graphs [11], circle graphs, and outer-planar graphs [13].

The line-graph, L(G), of graph G has node-set E(G), and an edge joining two nodes exactly when the edges of G they correspond to meet a common node. The square, G2, of graph G has node-set V(G), and two nodes are joined in G2 exactly when they are joined by an edge or a path of two edges in G. A set of nodes is called independent if no two of them are joined by an edge. As pointed out in [1], for any graph G, every induced matching in G is an independent set of nodes in [L(G)]2, and conversely. A connected matching in G becomes a clique in [L(G)]2. In [1], a set N of edges in a graph G was defined to be neighbourly if every pair of edges of N either meet a common node or are joined by an edge of G. For any graph G, neighbourly sets of edges correspond precisely to cliques in [L(G)]2. Every connected matching is a neighbourly set, and a matching in a neighbourly set is a connected matching. Connected matchings in G correspond precisely to independent sets of nodes in L(G) which are cliques in [L(G)]2. I will use uv to denote an edge with ends u and v.

2 The Connected Matching Problem Is NP-Complete for 0-1 Weighted Bipartite Graphs

Consider the problems

(C) Max clique: Given a positive integer m and an arbitrary graph G, does this graph have a clique of size at least m?
and

(WBCM) Max connected matching in 0-1 weighted bipartite graphs: Given a positive integer m and a 0-1 weighted bipartite graph B, does B have a connected matching of weight at least m?

(WBCM) is clearly in NP. Here is a formulation of (C) as an instance of (WBCM). Since (C) is NP-complete, it follows that (WBCM) is NP-complete. Given a graph G, construct a bipartite graph B as follows. For each node v of G, put two nodes v′ and v′′ in B, and join them by an edge with weight 1. For each edge uv of G, put edges u′v′′ and v′u′′ in B, and give these edges weight 0. If C is a clique of size m in G, {v′v′′ : v ∈ C} is a connected matching in B of weight m. Conversely, if M is a connected matching in B with weight m, then M contains m edges of the form v′v′′, and {v : v′v′′ ∈ M} is a clique in G of size m.

3 Finding Largest Connected Matchings in Chordal Graphs

A graph is called chordal if it has no chordless circuits on four or more nodes. As mentioned in the introduction, connected matchings are precisely matchings in neighbourly sets. In [1] it is proved that neighbourly sets in chordal graphs are what I call clique-neighbourhoods: a clique together with edges meeting the nodes of the clique. Thus, to find a largest connected matching in a chordal graph, we can find a largest matching contained in a clique-neighbourhood. Note that for a clique-neighbourhood N whose clique C has k nodes, the size of a largest matching in N equals l + ⌊(k − l)/2⌋, where l is the size of a largest matching in the bipartite graph of edges from nodes of C to nodes met by N − C. To find a largest matching contained in a clique-neighbourhood, we can restrict ourselves to clique-neighbourhoods where the clique is a maximal clique of the graph.
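This approach is short enough to sketch in code. The following is our own illustrative Python, not the paper's; it assumes a simplicial (perfect elimination) ordering of the chordal graph is given, enumerates candidate maximal cliques from it in one standard way, and for each clique computes l with a simple augmenting-path bipartite matching, returning the best l + ⌊(k − l)/2⌋.

```python
def max_bipartite_matching(left, adj):
    """Augmenting-path maximum matching; adj maps a left node to its right neighbours."""
    match = {}                               # right node -> matched left node
    def augment(u, seen):
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in left:
        augment(u, set())
    return len(match)

def largest_connected_matching(order, adj):
    """Size of a largest connected matching in a chordal graph.

    order: a simplicial (perfect elimination) ordering of the nodes;
    adj:   node -> set of neighbours.
    """
    pos = {v: i for i, v in enumerate(order)}
    # candidate maximal cliques: each node together with its after-neighbours
    cands = [frozenset({v}) | {u for u in adj[v] if pos[u] > pos[v]} for v in order]
    cliques = [C for C in cands if not any(C < D for D in cands)]
    best = 0
    for C in cliques:
        # l = largest matching from clique nodes to nodes outside the clique
        bip = {v: {u for u in adj[v] if u not in C} for v in C}
        l = max_bipartite_matching(list(C), bip)
        # leftover clique nodes can be paired up among themselves
        best = max(best, l + (len(C) - l) // 2)
    return best
```

On K4 this returns 2 (two matching edges inside the clique); on the star K1,3 it returns 1.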
It follows from the fact that chordal graphs have a simplicial ordering [5] (an ordering such that the after-neighbours of a node form a clique) that chordal graphs have only a linear number of maximal cliques; they are of the form: a node together with its after-neighbours in the simplicial ordering. Thus, given a chordal graph, enumerate all maximal cliques (using the simplicial ordering) and then, for each, find the size of a largest matching from the clique nodes to other nodes. The largest connected matching is obtained when (the number of edges in the matching from clique nodes to other nodes) plus (the round-down of one half the number of nodes of the clique not met by the matching) is largest.

4 Connected Matchings in Graphs with No Circuits on Four Nodes

C4 is the circuit with four nodes. We consider graphs which do not have C4 as a subgraph. The Petersen graph is such a graph with a connected matching of size 5. The following theorem shows this is best possible.

Theorem 1. The maximum size of a connected matching in a graph with no C4 as a subgraph is at most 5.

Proof. Let G = (V, E) be a graph with no C4 as a subgraph. Suppose G contains a connected matching M of size 6, say M = {si ti : 1 ≤ i ≤ 6}. There is an edge between every pair of edges of M. Without loss of generality, s1 is joined to s2, s3, and s4. The following simple observations will be used below.

(1) If s1 is joined to si and sj, i ≠ j, and si tj ∈ E, then s1, si, tj, sj would be a C4.
(2) If s1 is joined to si, sj, and sk, and si sj, sj sk ∈ E, then s1, si, sj, sk would be a C4.

Suppose first that s1 is also joined to s5. By (1), there are no edges between {si : 2 ≤ i ≤ 5} and {ti : 2 ≤ i ≤ 5} other than the M-edges. Without loss of generality, by (2), we can assume the only edges with both ends in {si : 2 ≤ i ≤ 5} are possibly s2 s3 and s4 s5. Thus there are all possible edges with both ends in {ti : 2 ≤ i ≤ 5} except possibly t2 t3 and t4 t5.
But then t2, t4, t3, t5 is a C4. So s1 is not joined to s5 or s6, and thus, without loss of generality, t1 is joined to t5 and t6. By (1), the only edges between {si : 2 ≤ i ≤ 4} and {ti : 2 ≤ i ≤ 4} are the M-edges, and analogously, the only edges between {si : 5 ≤ i ≤ 6} and {ti : 5 ≤ i ≤ 6} are the M-edges. By (2), there is at most one edge with both ends in {si : 2 ≤ i ≤ 4}. Without loss of generality, we have either structure (A) s2 s4 ∈ E and t2 t3, t3 t4 ∈ E, or else (B) t2 t4, t2 t3, t3 t4 ∈ E. Edge s5 t5 is joined to each of the M-edges si ti, 2 ≤ i ≤ 4. It may be joined by an edge of the form s5 si, s5 ti, or t5 ti; call these type 1, type 2, and type 3 edges respectively. An edge t5 si would create a C4: s1, si, t5, t1. If there are two type 1 edges, say s5 si and s5 sj, then s1, si, s5, sj is a C4. Suppose there is both a type 1 edge, s5 si, and a type 2 edge, s5 tj. If ti tj ∈ E, then s5, si, ti, tj is a C4. If ti tj ∉ E, then we have structure (A), and without loss of generality, i = 2 and j = 4. Then s2, s4, t4, s5 is a C4. Suppose there are two type 2 edges, say s5 ti and s5 tj. Then ti, s5, tj, tk is a C4, where k is the one of 2, 3, 4 different from i and j, unless we have structure (A) and {i, j} = {2, 3} or {i, j} = {3, 4}. So, without loss of generality, say s5 t2 and s5 t3 are type 2 edges, s5 t4 ∉ E, and we have structure (A). If the edge joining s4 t4 and s5 t5 is a type 1 edge, s5 s4, then s5, s4, t4, t3 is a C4. If the edge joining s4 t4 and s5 t5 is a type 3 edge, t5 t4, then t3, t4, t5, s5 is a C4. Thus there must be two type 3 edges, say t5 ti and t5 tj. Then ti, t5, tj, tk is a C4, where k is the one of 2, 3, 4 different from i and j, unless we have structure (A) and {i, j} = {2, 3} or {i, j} = {3, 4}. So, without loss of generality, say t5 t2 and t5 t3 are type 3 edges, t5 t4 ∉ E, and we have structure (A).
If the edge joining s4 t4 and s5 t5 is a type 2 edge, s5 t4, then t3, t4, s5, t5 is a C4. The one remaining case is that the edge joining s4 t4 and s5 t5 is a type 1 edge, s5 s4. Edge s6 t6 must be joined to each of s2 t2, s3 t3, and s4 t4. Let us use the same notion of type for these edges as we did for the edges joining s5 t5 to s2 t2, s3 t3, and s4 t4. It must be that s6 t6 is joined to s2 t2, s3 t3, and s4 t4 by two type 3 edges and one type 1 edge, since otherwise, replacing 5 by 6 in the argument above, we are finished. In fact, it must be that either t6 t2 and t6 t3 are type 3 edges and s6 s4 is type 1, or t6 t3 and t6 t4 are type 3 edges and s6 s2 is type 1. In the first case, t2, t5, t3, t6 is a C4. In the second case, since s5 t5 and s6 t6 must be joined by an edge, it follows from the analogue of (1) obtained by interchanging s's and t's that either s5 s6 ∈ E or t5 t6 ∈ E. In the first case, s2, s4, s5, s6 is a C4. In the second case, t3, t4, t6, t5 is a C4. It follows that G does not contain a connected matching of size 6.

Acknowledgements. Research supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Équipe Combinatoire, Université Pierre et Marie Curie (Paris VI), France.

References

1. Kathie Cameron, Induced matchings, Discrete Applied Mathematics 24 (1989), 97–102.
2. Kathie Cameron, Induced matchings in intersection graphs, 3-page abstract in: Electronic Notes in Discrete Mathematics 5 (2000); paper submitted for publication.
3. Kathie Cameron, R. Sritharan, and Yingwen Tang, accepted for publication in Discrete Mathematics.
4. Jou-Ming Chang, Induced matchings in asteroidal triple-free graphs, manuscript, April 2001.
5. G. A. Dirac, On rigid circuit graphs, Abh. Math. Sem. Univ. Hamburg 25 (1961), 71–76.
6. Jack Edmonds, Paths, trees, and flowers, Canad. J. Math. 17 (1965), 449–467.
7. Jack Edmonds, Maximum matching and a polyhedron with 0,1-vertices, J. Res. Nat. Bur.
Standards Sect. B 69B (1965), 125–130.
8. F. Gavril, Maximum weight independent sets and cliques in intersection graphs of filaments, Information Processing Letters 73 (2000), 181–188.
9. Martin Charles Golumbic and Renu C. Laskar, Irredundancy in circular-arc graphs, Discrete Applied Mathematics 44 (1993), 79–89.
10. Martin Charles Golumbic and Moshe Lewenstein, New results on induced matchings, Discrete Applied Mathematics 101 (2000), 157–165.
11. S. Janson and J. Kratochvíl, Threshold functions for classes of intersection graphs, Discrete Mathematics 108 (1992), 307–326.
12. C. W. Ko and F. B. Shepherd, Adding an identity to a totally unimodular matrix, London School of Economics Operations Research Working Paper, LSEOR 94.14, July 1994.
13. Alexandr Kostochka and Jan Kratochvíl, Covering and coloring polygon-circle graphs, Discrete Mathematics 163 (1997), 299–305.
14. Michael D. Plummer, Michael Stiebitz and Bjarne Toft, On a special case of Hadwiger's conjecture, manuscript, June 2001.
15. Larry J. Stockmeyer and Vijay V. Vazirani, NP-completeness of some generalizations of the maximum matching problem, Information Processing Letters 15 (1982), 14–19.

Hajós' Construction and Polytopes

Reinhardt Euler
Faculté des Sciences, 20 Avenue Le Gorgeu, 29285 Brest Cedex, France
Reinhardt.Euler@univ-brest.fr

Abstract. Odd cycles are well known to induce facet-defining inequalities for the stable set polytope. In graph coloring, odd cycles represent the class of 3-critical graphs. We study Hajós' construction to obtain a large class of n-critical graphs (n > 3), which properly generalize both cliques and odd cycles, and which again turn out to be facet-inducing for the associated stable set polytope.

1 Introduction

Given a finite graph G = (V, E) and a set of colors C = {1, ..., n}, an n-coloring of G is a function f : V → C such that f(u) ≠ f(v) for all edges uv of G. Every color class f⁻¹(i) forms a stable set, i.e.
a subset of vertices no two of which are joined by an edge. An n-coloring thus represents a partition of V (or V(G)) into n stable sets. The minimum number n for which G has an n-coloring is called the chromatic number χ(G) of G, and G is called χ(G)-chromatic. A homomorphism of a graph G into a graph H is a mapping ϕ : V(G) → V(H) such that ϕ(x)ϕ(y) is an edge of H if xy is an edge of G. Therefore, an n-coloring of G may be considered as a homomorphism of G into the complete graph (or clique) Kn. Finally, we call a connected graph G n-critical if χ(G) = n but χ(G \ e) = n − 1 for any edge e of G. This definition of criticality allows us to establish a close relationship with a number of concepts from polyhedral theory such as "rank-criticality" (due to Chvátal [3]) or "bipartite subgraphs", which might now be generalized to "n-colorable subgraphs" and whose associated polytopes are an interesting new subject in their own right: we just mention at this place that an n-critical graph G = (V, E) gives rise to an inequality of the form Σ_{e∈E} x_e ≤ |E| − 1, which is valid for the (n−1)-colorable subgraph polytope and which can be lifted to a global facet-defining inequality. In this paper we will concentrate on the stable set polytope P(G) associated with certain graphs G, i.e. the convex hull of the incidence vectors of all stable sets in G. It will turn out that any n-critical graph presented gives rise to a facet-defining inequality of the induced stable set polytope, a result indicating that homomorphisms might be an interesting subject to study within polyhedral combinatorics. Let us return to n-critical graphs and recall that there is just one 2-critical graph, K2, and that odd cycles are well known to constitute the class of 3-critical graphs. Little is known for n exceeding 3, and it is one of our aims to describe a large class of n-critical graphs for n > 3. The basic tool to be used is Hajós' construction (cf.
[1], [4]): call a graph Hajós-n-constructible if it can be obtained

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 39–47, 2003. © Springer-Verlag Berlin Heidelberg 2003

from complete graphs Kn by repeated application of the following two operations (see Figure 1 for an illustration):

(a) Let G1 and G2 be already obtained disjoint graphs with edges x1 y1 and x2 y2. Remove x1 y1 and x2 y2, identify x1 and x2, and join y1 and y2 by a new edge.
(b) Identify independent vertices (i.e. apply a homomorphism).

Fig. 1. Hajós' construction for G1 = G2 = K4, after operation (a) and identification of u and v

We call any such application (part (b) may be empty) a Hajós step. As shown by Hajós in 1961, this construction allows a characterization of n-chromatic graphs in the following way:

Theorem 1 (Hajós (1961)) A graph has chromatic number at least n if and only if it contains a Hajós-n-constructible subgraph. Every n-critical graph is Hajós-n-constructible.

According to [4], no interesting applications of this theorem have been found so far; in particular, no explicit description of n-critical graphs is available for n > 3. (How about the automatic generation of all such graphs for small n?) In a recent paper [5] we have generalized the notion of odd cycles to that of (Kn \ e)-cycles, a new class of n-critical graphs that allowed us to fully characterize the 3-colorability of infinite planar Toeplitz graphs. These graphs have also been shown to be facet-inducing for the associated stable set polytope. In this paper we are going to present even more general classes of n-critical graphs: the idea is to repeatedly apply Hajós' construction to the initial graph Kn. We will present our results in two parts: first we will study the case that part (b) in Hajós' construction is always empty. Afterwards we will allow part (b) as part of a Hajós step, but slightly refined in order to avoid redundancies.
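Operation (a) is mechanical enough to sketch in code. The following is our own illustrative Python (names and encoding are ours, not the paper's); it performs one application of (a) on edge sets and includes a brute-force colorability check so that the chromatic behaviour of small results can be verified directly.

```python
from itertools import combinations, product

def hajos_step_a(E1, x1, y1, E2, x2, y2):
    """Operation (a): delete edges x1y1 and x2y2, identify x1 with x2
    (x2 is renamed to x1), and join y1 and y2 by a new edge."""
    def ren(v):
        return x1 if v == x2 else v
    cut1, cut2 = frozenset({x1, y1}), frozenset({x2, y2})
    E = {e for e in E1 if e != cut1}
    E |= {frozenset(map(ren, e)) for e in E2 if e != cut2}
    E.add(frozenset({y1, y2}))
    return E

def complete_graph(nodes):
    return {frozenset(p) for p in combinations(nodes, 2)}

def colorable(E, k):
    """Brute-force proper k-colorability test (tiny graphs only)."""
    V = sorted(set().union(*E))
    idx = {v: i for i, v in enumerate(V)}
    return any(all(c[idx[u]] != c[idx[v]] for u, v in map(tuple, E))
               for c in product(range(k), repeat=len(V)))
```

Applied to two disjoint copies of K4, one Hajós step yields a 7-vertex, 11-edge graph that is not 3-colorable but becomes 3-colorable after deleting any single edge, i.e. a 4-critical graph.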
As already done with respect to (Kn \ e)-cycles, we will describe the consequences for the associated stable set polytopes.

2 Hajós' Construction without Homomorphisms

Let G1 be an n-critical graph, G2 = Kn, and let x1 y1, x2 y2 be arbitrary edges of G1 and G2, respectively. If G∗ is the graph resulting from one application of operation (a), we have the following:

Theorem 2 G∗ is again n-critical.

Proof: a) Suppose G∗ is (n−1)-colorable. Then y1 and y2 must be colored differently with respect to this (n−1)-coloring, and the same holds for x1 and y1 (because x1 has to be colored as y2). But then G1 would have an (n−1)-coloring, a contradiction. Since G∗ has an n-coloring, χ(G∗) = n.

b) We have to show that χ(G∗ \ e) = n − 1 for any edge e of G∗.

b1) e is an edge of G1 \ x1 y1: then G1 \ e has an (n−1)-coloring in which y1 is colored differently from x1. Now extend this (n−1)-coloring to G∗ by giving y2 the color of x1 and by coloring the remaining vertices with the (n−2) colors left.

b2) e = y1 y2: G1 \ x1 y1 has an (n−1)-coloring which can be extended to G∗ by coloring y2 as y1 (or x1) and the remaining (n−2) vertices as in b1).

b3) e is an edge of Kn \ x2 y2: then Kn \ {x2 y2, e} can be colored with (n−1) colors such that x2 and y2 receive different colors; this coloring can be patched with an (n−1)-coloring of G1 \ x1 y1 (giving the same color to x1 and y1) to produce an (n−1)-coloring of G∗.

This terminates our proof.

Starting with G1 = Kn, which is n-critical, we hereby obtain a first class of n-critical graphs G = {G1, G2, ..., Gm, ...}. For n = 3 we don't get anything new: Gm is just an odd cycle of length 2m+1.

Fig. 2. A series of 4-critical graphs G1, G2, G3, G4

If n = 4, however (cf. Figure 2 for the first members of such a sequence), we obtain an interesting class of planar graphs: all of their faces are odd, i.e. delimited by odd cycles.
They are of particular importance for a polyhedral study of the stable set problem in planar graphs. For arbitrary n we get a class of graphs that properly generalize cliques and odd cycles. That they are again facet-inducing is shown by our next theorem:

Theorem 3 Let Gm = (Vm, Em) be obtained from G1 = Kn by (m−1) applications of operation (a) as indicated above. Then the inequality x(Vm) := Σ_{v∈Vm} x_v ≤ m defines a facet of the induced stable set polytope P(Gm).

Proof: By Theorem 2, Gm is n-critical, i.e. Gm \ e has an (n−1)-coloring for any edge e ∈ Em. Since |Vm| = (n−1)m + 1, there is a stable set S in Gm \ e with |S| = m + 1. The size of a largest stable set in Gm, however, is m. It follows that Gm is rank-critical, and since it is also connected, Chvátal's result [3] implies that the inequality x(Vm) ≤ m defines a facet of P(Gm).

Theorem 3 leads to the following questions: 1) Is any planar graph with only odd faces facet-inducing for the associated stable set polytope? 2) Is there an efficient separation algorithm for this class of inequalities?

3 Hajós' Construction – The General Case

In this section we are going to refine operation (b) of Hajós' construction: the aim is to allow a non-redundant generation of n-critical graphs. Such redundancies may occur if, for instance, a vertex of Kn \ {x2, y2} is identified with a non-neighbor of x1, or if the vertices of Kn \ {x2, y2} are identified with the complete G1-neighborhood of x1 (so that this vertex becomes superfluous). Our refinement is as follows:

(b*) Identify at most (n−3) vertices of Kn \ {x2, y2} with neighbors of x1 that induce a complete subgraph of G1.

Fig. 3. Refinement of operation (b)

Figure 3 illustrates this refinement: Kn \ {x2, y2} now splits up into an overlap with vertices of G1, say A, and a set of new vertices, say B, which is supposed to be non-empty.
We do not know whether all n-critical graphs can be constructed this way, but we certainly properly generalize the notion of (Kn \ e)-cycles as introduced in [5]: just observe that we get such a (Kn \ e)-cycle if we start with an arbitrary edge in G1 and then systematically choose x1 to be y2 and y1 as before. As in the previous section we can show:

Theorem 4 If G1 is n-critical, then G2 is n-critical, too.

To prove this we proceed basically as in Theorem 2; the only difference occurs when the edge e of G1 is incident with A: if e is not incident with x1, then we can always extend the (n−1)-coloring of G1 \ e to G2, and if e is incident with x1, we color one new vertex with the color of y1 (different from that of x1) in order to have one color left for vertex y2. Again we can start with H1 = Kn to obtain a second class of n-critical graphs H = {H1, H2, ..., Hm, ...}, and, as before, we don't get anything but odd cycles if n = 3. Just note that we may lose planarity if "overlaps" are allowed. To come back to polyhedral aspects, we may ask whether and how these graphs Hm are facet-inducing for the associated stable set polytope. For this we first present a procedure that adjusts the coefficients of the linear inequality supposed to be facet-defining at each Hajós step:

Procedure "Facet-extension":
Step 1: Choose any edge x1 y1 in Hm.
Step 2: Determine w_0^{m+1} := w^m(Hm \ x1 y1), the maximum w^m-weight of a stable set in Hm \ x1 y1, and set δm := w_0^{m+1} − w_0^m, the increase in weight caused by the deletion of x1 y1.
Step 3: For all new vertices v ∈ (B ∪ {y2}) set w^{m+1}(v) := δm, and for all vertices v ∈ A modify the current coefficient w^m(v) to w^{m+1}(v) := w^m(v) + δm.

Figure 4 illustrates a series of 4-critical graphs together with the coefficients of their associated inequality. It is not difficult to see that 1 ≤ δm ≤ min(w^m(x1), w^m(y1)). We are now going to show that the inequality (w^m)^T x ≤ w_0^m defines a facet of P(Hm) for all m ∈ IN.
For this we need two (technical) results:

Lemma 1 For every vertex v in Hm there is a stable set S of weight w_0^m containing v.

For a proof we proceed by induction on m. The case m = 1 is clear; so let the assumption be true for Hm−1 and consider the graph Hm obtained by an additional Hajós step (cf. Figure 3). For v ∈ A, by induction hypothesis, there is a stable set of weight w_0^{m−1} containing v, and by definition of w^m(v) we are done. For v = y2, observe that it can be added to the stable set of weight w_0^{m−1} containing x1, and if v is one of the remaining new vertices, one may add it to the corresponding stable set containing y1. Finally, since B is nonempty, the assumption clearly holds for x1 and y1.

Fig. 4. The graphs H1, H2, H3, H4 with the coefficients of their associated inequalities (w_0^1 = 1, w_0^2 = 2, w_0^3 = 3, w_0^4 = 4)

Lemma 2 For every edge uv in Hm and clique-inducing neighborhood N(u) ⊆ Vm with |N(u)| ≤ (n−3) there is a stable set S of weight w_0^m such that S ∩ ({u, v} ∪ N(u)) = ∅.

Again, we proceed by induction on m. The case m = 1 is obvious. Suppose that the statement is true for the graph Hm−1:

a) uv is an edge in Hm−1 and u ∉ A, u ≠ x1: by induction hypothesis there is a stable set S of weight w_0^{m−1} in Hm−1 such that S ∩ ({u, v} ∪ N(u)) = ∅; in particular, N(u) ⊆ Vm−1. Now if S ∩ A ≠ ∅, then S is a stable set of weight w_0^m in Hm, and we are done. If, however, S ∩ A = ∅ and x1 ∈ S, then S ∪ {y2} is a stable set of weight w_0^m in Hm having the desired property; if x1 ∉ S, there is a vertex x ∈ B that we can add to S in a similar way.

b) uv is an edge in Hm−1 and u ∈ A:

b1) N(u) ∩ B = ∅: by induction hypothesis there is a stable set S of weight w_0^{m−1} in Hm−1 such that S ∩ ({u, v} ∪ N(u)) = ∅. If S ∩ A ≠ ∅, S is the desired stable set. So let S ∩ A = ∅. If y1 ∈ S, add x ∈ B to S, and we are done. If y1 ∉ S, it is sufficient to add y2 to S.

b2) N(u) ∩ B ≠ ∅: then N(u) ⊆ ({x1, y2} ∪ A ∪ B).
Moreover, x1 ∈ N(u) implies y2 ∉ N(u), and by induction hypothesis there is a stable set S of weight w_0^{m−1} in Hm−1 such that S ∩ ({x1, y1} ∪ A) = ∅: now add y2 to S and we are done. If, finally, x1 ∉ N(u), the stable set S of Hm of weight w_0^m containing both x1 and y1 has the desired property.

c) uv is an edge in Hm−1 and u = x1 (hence v ∈ A): by induction hypothesis there is a stable set S of weight w_0^{m−1} in Hm−1 with S ∩ ({u, y1} ∪ N(u)) = ∅. If S ∩ A ≠ ∅, S has the desired property. If, however, S ∩ A = ∅, because of y2 ∉ N(u) we can add y2 to S to obtain a stable set of weight w_0^m with the desired property.

d) uv is an edge in Hm \ Hm−1:

d1) u = x1 and v ∈ B: by Lemma 1, we may suppose that A ⊆ N(u), and by induction hypothesis there is a stable set S of weight w_0^{m−1} in Hm−1 with S ∩ ({u, y1} ∪ A) = ∅. Add y2 to S and we are done. The converse situation, v = x1 and u ∈ B, can be handled in a similar way.

d2) u ∈ A and v ∈ B ∪ {y2}: again with A ⊆ N(u), there is a vertex in B ∪ {y2} and a stable set S of weight w_0^{m−1} in Hm−1 with S ∩ ({x1, y1} ∪ A) = ∅; just add that vertex to S. The converse situation can be treated analogously.

d3) the case that both u and v are in B ∪ {y2} is similar to case d2).

d4) u = y1 and v = y2: in Hm−1 there is a stable set S of weight w_0^{m−1} with S ∩ ({x1, y1} ∪ N(u)) = ∅. But there is also a vertex x ∈ B that we can add to S to obtain a stable set with the desired property. The converse case, i.e. u = y2 and v = y1, is similar to case d2).

This completes the proof of Lemma 2. We are now able to show:

Theorem 5 The inequality (w^m)^T x ≤ w_0^m defines a facet of P(Hm), for all m ∈ IN.

The proof is again by induction on m. Validity of our inequality is clear by construction, and for m = 1 the inequality x(V1) ≤ 1 is well known to be facet-defining.
So suppose that Hm = (Vm, Em) is the n-critical graph obtained after (m−1) Hajós steps (with or without homomorphism) and that (w^m)^T x ≤ w_0^m is a facet-defining inequality for P(Hm). It is sufficient to exhibit a linearly independent set of |Vm+1| incidence vectors of stable sets all having weight w_0^{m+1}. Figure 5 illustrates the corresponding matrix Mm+1. Observe that Mm+1 is defined inductively: Mm is the submatrix surrounded by heavy lines. Also, the second last block of lines of Mm+1 comes from the stable set S whose existence is guaranteed by Lemma 2, and the last line corresponds to the stable set of weight w_0^{m+1} that is obtained by deleting the edge x1 y1 from Hm. It is not difficult to check that Mm+1 has full rank, and this completes our proof.

Fig. 5. Matrix Mm+1

4 Conclusion

We are not sure at this moment whether we really grasp all of the n-critical graphs by our "general" construction. Figure 6 illustrates a class of 4-critical graphs, interesting for further study in this direction. Again, any member of this class is facet-inducing for the associated stable set polytope; the corresponding inequality is

(k+1) Σ_{i=1}^{2k+1} x_i + k Σ_{i=1′}^{(2k+1)′} x_i + k² x_{2k+2} ≤ 2k² + k.

Fig. 6. Another infinite class of 4-critical graphs

Moreover, it would be interesting to establish König-type theorems for other classes of graphs and general n-colorability. Finally, a promising direction of research could be the generalization of Hajós' construction to hypergraphs (for which Jack Edmonds' work on matroid coloring [2] could be one of the starting points) and, beyond that, the study of hypergraph coloring in relation to polyhedral combinatorics.

References

1. G. Hajós.
Über eine Konstruktion nicht n-färbbarer Graphen. Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg, Math.-Naturw. Reihe 10, 116–117, 1961.
2. J. Edmonds. Minimum Partition of a Matroid into Independent Subsets. Journal of Research of the National Bureau of Standards 69B, 67–72, 1965.
3. V. Chvátal. On certain polytopes associated with graphs. J. Comb. Theory B 18, 138–154, 1975.
4. T. R. Jensen and B. Toft. Graph Coloring Problems. Wiley, New York, 1995.
5. R. Euler. Coloring planar Toeplitz graphs and the stable set polytope. Working paper, presented at the 17th International Symposium on Mathematical Programming, Atlanta, 2000, and the 6th International Conference on Graph Theory, Marseille, 2000 (submitted).

Algorithmic Characterization of Bipartite b-Matching and Matroid Intersection

Robert T. Firla1⋆, Bianca Spille2, and Robert Weismantel1
1 Institute for Mathematical Optimization, University of Magdeburg, Universitätsplatz 2, D-39106 Magdeburg, Germany, {firla,weismantel}@imo.math.uni-magdeburg.de
2 EPFL-DMA, CH-1015 Lausanne, Switzerland, bianca.spille@epfl.ch

Abstract. An algorithmic characterization of a particular combinatorial optimization problem means that there is an algorithm that is exact if and only if it is applied to the combinatorial optimization problem under investigation. According to Jack Edmonds, the Greedy algorithm leads to an algorithmic characterization of matroids. We deal here with the algorithmic characterization of the intersection of two matroids. To this end we introduce two different augmentation digraphs for the intersection of any two independence systems. Paths and cycles in these digraphs correspond to candidates for improving feasible solutions. The first digraph gives rise to an algorithmic characterization of bipartite b-matching. The second digraph leads to a polynomial-time augmentation algorithm for the (weighted) matroid intersection problem and to a conjecture about an algorithmic characterization of matroid intersection.
1 Introduction

This paper deals with algorithmic characterizations of some combinatorial optimization problems. It is motivated by the pioneering work of Jack Edmonds on matroids, see [5,6,7,8]. He showed that an independence system is a matroid if and only if the Greedy algorithm applied to the independence system yields an optimal solution for any weight function. This result can be understood as an algorithmic characterization of matroids with respect to independence systems. Moreover, Edmonds investigated the matroid intersection problems, characterized the corresponding polytopes and gave polynomial-time algorithms to solve the problems. We focus on an algorithmic characterization of the intersection of two matroids.

Let S be a finite set and let F consist of families of subsets of S. Any element F of F consists of a collection of subsets of S. For any F ∈ F, we are interested in the family of maximization problems

max{c(J) : J ∈ F},  c ∈ IR^S.  (1)

⋆ Supported by a "Gerhard-Hess-Forschungsförderpreis" (WE 1462/2-2) of the German Science Foundation (DFG) awarded to R. Weismantel.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 48–63, 2003. © Springer-Verlag Berlin Heidelberg 2003

Let DF(J) be an augmentation digraph that is defined for any F ∈ F, any J ∈ F and any c ∈ IR^S. Its node set is S ∪ {r, s}; a node a ∈ J has weight ca, a node b ∉ J has weight −cb, and r, s have weight 0. The definition of arcs is specified in the examples. Negative (r, s)-dipaths and dicycles in this digraph correspond to candidates to augment J. We denote an (r, s)-dipath (r, P, s) by P, i.e., P consists of the inner nodes of such a sequence.

Definition 1. An (r, s)-dipath or dicycle T in DF(J) is feasible for J if J △ T ∈ F.

A corresponding augmentation algorithm A can be defined as follows.

Augmentation Algorithm A
Input: F ∈ F, J ∈ F, c ∈ IR^S
Construct DF(J).
Find a negative feasible dicycle or (r, s)-dipath T in DF(J). Set J∗ := J △ T.

For the examples to be discussed in this paper, we may always resort to a polynomial-time algorithm to find a negative feasible dicycle or (r, s)-dipath in DF(J) when DF(J) contains such an object. Then, by the polynomial-time equivalence of augmentation and optimization for 0/1-programs [16,10], the augmentation algorithm A applied to F and c that uses this polynomial-time algorithm as a subalgorithm leads to a generic polynomial-time algorithm to find a locally maximal solution of (1). In general, one may expect that J ∈ F is not globally maximal although the corresponding augmentation digraph DF(J) does not contain a negative dicycle or (r, s)-dipath. This raises the question of characterizing those F ∈ F for which the corresponding augmentation digraph DF is exact.

Definition 2. DF is exact if for any J ∈ F and any c ∈ IR^S, J is maximal if and only if there does not exist a negative (r, s)-dipath or dicycle in DF(J). Let F∗ be a subset of F. {DF : F ∈ F} is an algorithmic characterization of F∗ with respect to F if for any F ∈ F, DF is exact if and only if F ∈ F∗.

If DF is exact and there is a polynomial-time algorithm available to detect the negative feasible dicycles and dipaths in DF(J), then A leads to a polynomial-time algorithm to solve the maximization problem (1). We mentioned that the Greedy algorithm is an algorithmic characterization of matroids with respect to independence systems. Regarding Definition 2 this can be interpreted as follows: Let F consist of all independence systems on S and let F∗ consist of all matroids on S.

Definition 3. Let I be an independence system on S. I is a matroid on S if for every A ⊆ S every maximal independent subset of A (a basis of A) has the same cardinality.

For an independence system I on S, J ∈ I and c ∈ IR^S, let DI(J) be as in Fig. 1.
If there exists a negative (r, s)-dipath or dicycle in DI(J), then there also exists one of the following form:

Fig. 1. The augmentation digraph DI(J): arcs (r, i) and (i, s) for i ∉ J with J ∪ {i} ∈ I, and arcs (j, i) for j ∈ J, i ∉ J with J ∪ {i} \ {j} ∈ I

– a negative (r, s)-dipath (b) with b ∉ J and J ∪ {b} ∈ I,
– a negative (r, s)-dipath (a) with a ∈ J, or
– a negative dicycle (a, b) with a ∈ J, b ∉ J and J ∪ {b} \ {a} ∈ I.

Hence, for a matroid I on S, DI is exact. For an independence system I on S that is not a matroid there exist A ⊆ S and a basis J of A that is not of maximum cardinality. For c = χ^A (the characteristic vector of A), J is not maximal but DI(J) does not contain a negative (r, s)-dipath or dicycle. Hence, DI is not exact. Therefore, {DI : I ∈ F} is an algorithmic characterization of matroids (F*) with respect to independence systems (F).

In this paper we deal with pairs of independence systems on S and their intersection. We define F := {(I1, I2) : I1, I2 independence systems on S}. We introduce an augmentation digraph D(J, I1, I2) = D(I1,I2)(J) for any two independence systems I1 and I2 on S and any common independent set J, see Fig. 2. Let I1 and I2 be relations on S with respect to I1 and I2, respectively. The augmentation digraph D(J, I1, I2) has node set S ∪ {r, s} and the arcs

– (r, i) for i ∉ J with J ∪ {i} ∈ I1;
– (j, i) for j ∈ J, i ∉ J with J ∪ {i} ∉ I1 and i I1 j;
– (j, s) for j ∈ J;
– (r, j) for j ∈ J;
– (i, j) for j ∈ J, i ∉ J with J ∪ {i} ∉ I2 and i I2 j;
– (i, s) for i ∉ J with J ∪ {i} ∈ I2.

We call the first three types of arcs I1-arcs (illustrated by solid arcs) and the last three I2-arcs (dashed arcs). The arcs of any (r, s)-dipath or dicycle in this digraph alternately fulfill conditions with respect to I1 and I2.

The remaining part of this paper is divided into two sections. Each section deals with a separate family of augmentation digraphs, arising from different conditions on the relations I1 and I2. In Sect.
2, the relations I1 and I2 are quite restrictive and ensure that all (r, s)-dipaths and dicycles in D(J, I1, I2) are feasible for J. The set of b-matchings of a bipartite graph can be represented by a pair of independence systems for which the corresponding augmentation digraph is exact. Conversely, the augmentation digraph is exact only for pairs of this kind. Hence, we obtain an algorithmic characterization of bipartite b-matching.

Fig. 2. The augmentation digraph D(J, I1, I2)

The augmentation digraph of Sect. 3 contains the augmentation digraph of Sect. 2 as a subdigraph. The relations I1 and I2 are weaker and not all dicycles or dipaths are feasible. In general, it is difficult to characterize the feasible dicycles and dipaths and hence to define the augmentation algorithm A. In the case where both independence systems are matroids, we are able to give a polynomial-time algorithm for finding negative feasible dipaths and dicycles, we prove that the augmentation digraph is exact, and thereby we obtain a polynomial-time algorithm to solve the matroid intersection problem. Moreover, we conjecture that this augmentation digraph supplies an algorithmic characterization of matroid intersection. More precisely, we conjecture that if for two independence systems I1 and I2 on S the augmentation digraph D(I1,I2) is exact, then the intersection of I1 and I2 can also be represented as the intersection of two matroids defined on the same ground set S. We present several partial results that support this conjecture.

2 Algorithmic Characterization of Bipartite b-Matching

Let A and C be nonnegative integral matrices, and b, d nonnegative integral vectors such that I1 := {J ⊆ S : Aχ^J ≤ b} and I2 := {J ⊆ S : Cχ^J ≤ d}.
The definition of the relations I1 and I2 depends on the special choice of A and C:

  i I1 j :⇔ Ae_i ≤ Ae_j,   i I2 j :⇔ Ce_i ≤ Ce_j.

To be more precise, we denote the augmentation digraph that corresponds to the common independent set J by D(J, A, C) or DA,C(J), see Fig. 3 for an illustration. The following lemma asserts that all dipaths and dicycles are feasible.

Lemma 1. Every (r, s)-dipath or dicycle in D(J, A, C) is feasible for J.

Fig. 3. The augmentation digraph D(J, A, C) with x = χ^J

Proof. Let T be an (r, s)-dipath or dicycle of the form T = (i_0, j_1, i_1, ..., j_k, i_k), where i_0 and i_k are optional, depending on the structure of the dipath or dicycle. Let

  x := χ^J  and  t := Σ_{i ∈ T\J} e_i − Σ_{j ∈ T∩J} e_j.

Then x + t = χ^{J△T} and we have

  A(x + t) = A(x + e_{i_0}) + Σ_{l=1}^{k−1} (Ae_{i_l} − Ae_{j_l}) + (Ae_{i_k} − Ae_{j_k}) ≤ b,

since A(x + e_{i_0}) ≤ b and each bracketed difference is ≤ 0. Analogously, C(x + t) = (Ce_{i_0} − Ce_{j_1}) + Σ_{l=2}^{k} (Ce_{i_{l−1}} − Ce_{j_l}) + C(x + e_{i_k}) ≤ d and therefore J △ T is a common independent set of I1 and I2. ⊓⊔

We illustrate the augmentation digraph on an example.

Example 1. Consider the intersection of two 0/1-programs,

  max c^T x  subject to  Ax ≤ b,  Cx ≤ d,  x ∈ {0, 1}^8,

with objective c = (2, 1, 9, 8, 4, 5, 4, 6), where A is a 3 × 8 0/1 matrix with integral right-hand side b and Cx ≤ d is the single inequality (3 1 4 2 5 4 4 5) x ≤ 13. The first, 3 × 8 matrix represents the system Ax ≤ b and thereby the independence system I1 = {J ⊆ {1, 2, ..., 8} : Aχ^J ≤ b}. The system Cx ≤ d corresponds in our example to the second, 1 × 8 matrix and describes the independence system I2 = {J ⊆ {1, 2, ..., 8} : Cχ^J ≤ d}. We start with the feasible solution x := e_1 + e_3 + e_8 that represents the common independent set J = {1, 3, 8} and has weight 17. It may be checked that x cannot be improved by a two-exchange, i.e., by a vector of the form (e_i − e_j) for i ∈ {2, 4, 5, 6, 7} and j ∈ {1, 3, 8}.
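The numbers in Example 1 are easy to verify directly. The sketch below (ours, for illustration) checks the objective value and the constraint Cx ≤ d for the starting solution and for the augmented solution that the text derives next via the dipath P = (7, 3, 4, 1, 2); the entries of A are not reproduced here, so only the C-side is tested.

```python
c = [2, 1, 9, 8, 4, 5, 4, 6]   # objective coefficients of Example 1
C = [3, 1, 4, 2, 5, 4, 4, 5]   # the single row of Cx <= d, with d = 13
J = {1, 3, 8}                  # starting common independent set, elements indexed 1..8

weight = sum(c[i - 1] for i in J)
load = sum(C[i - 1] for i in J)
assert weight == 17            # as stated in Example 1
assert load <= 13              # J satisfies Cx <= d

# The augmenting dipath P = (7, 3, 4, 1, 2) used in the sequel yields J' = J symdiff P:
P = {7, 3, 4, 1, 2}
J2 = J ^ P                     # symmetric difference
assert J2 == {2, 4, 7, 8}
assert sum(c[i - 1] for i in J2) == 19 and sum(C[i - 1] for i in J2) <= 13
```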
The augmentation digraph D(J, A, C) is depicted in Fig. 4. The numbers attached to the nodes correspond to the node weights.

Fig. 4. D(J, A, C) for J = {1, 3, 8} and x = e_1 + e_3 + e_8

The augmentation digraph contains no negative dicycle. The (r, s)-dipath of minimal weight is P := (7, 3, 4, 1, 2) with weight −2. It is represented by thick lines. Let J′ := J △ P = {2, 4, 7, 8} and y := χ^{J′} = e_2 + e_4 + e_7 + e_8. Then y is feasible, attaining a weight of 19.

We next show that the set of b-matchings in a bipartite graph is the intersection of two independence systems I1 = {J ⊆ S : Aχ^J ≤ b} and I2 = {J ⊆ S : Cχ^J ≤ d} such that the augmentation digraph D(A,C) is exact. In fact, the definition of the relations I1 and I2 and hence of the augmentation digraph is motivated by the bipartite matching problem.

Definition 4. Let G = (V, E) be a bipartite graph with bipartition V = V1 ∪ V2 and b ∈ ZZ^V_+. J ⊆ E is a b-matching in G if |J ∩ δ(v)| ≤ b_v for all v ∈ V. The bipartite b-matching problem is to find a maximal b-matching in G for any c ∈ IR^E.

Let S := E and for i = 1, 2, define

  Ii := {J ⊆ E : |J ∩ δ(v)| ≤ b_v for all v ∈ Vi}.  (2)

Then I1 and I2 are independence systems on S and their common independent sets are precisely the b-matchings of G. Let A and C be the incidence matrices of (V1, E) and (V2, E), respectively, b := (b_v)_{v ∈ V1} and d := (b_v)_{v ∈ V2}. Then I1 = {J ⊆ E : Aχ^J ≤ b} and I2 = {J ⊆ E : Cχ^J ≤ d}. We have i I1 j if and only if the end node of i in V1 coincides with the end node of j in V1, and i I2 j if and only if the end nodes of i and j in V2 are the same.

Definition 5. Let J be a b-matching in G. A node v ∈ V is called J-exposed if |J ∩ δ(v)| < b_v.
A path P (or cycle C) in G is called J-alternating if (i) the edges of P (or C) are alternately in and not in J and (ii) if the first (or last) edge of the path P is not in J, then the corresponding first (or last) node of P is J-exposed.

Note that this definition implies that the symmetric difference of the b-matching J with a J-alternating path or cycle is again a b-matching in G. As a slight extension of Berge's augmenting-path theorem for matchings [2] we obtain

Theorem 1. A b-matching J in a bipartite graph G is maximal if and only if there does not exist a negative (r, s)-dipath or dicycle in D(J, A, C).

Proof. The symmetric difference J △ J′ of two b-matchings J and J′ in G is the edge-disjoint union of J-alternating paths and cycles in G. Since every J-alternating path or cycle in G splits into the node-disjoint union of (r, s)-dipaths and dicycles in D(J, A, C), the claim follows. ⊓⊔

Corollary 1. The augmentation digraph D(A,C) that corresponds to the set of bipartite b-matchings is exact.

We now address the converse question: if the augmentation digraph is exact for a problem, what can we say about the problem under investigation? For technical reasons, we require that the inequalities of Ax ≤ b, Cx ≤ d define 0/1-facets of the polytope conv{x ∈ {0, 1}^n : Ax ≤ b, Cx ≤ d}. Several examples fulfill this additional assumption. For the bipartite b-matching problem, all inequalities x(δ(v)) ≤ b_v with b_v < deg(v) (and these are the only inequalities needed) define 0/1-facets of the bipartite b-matching polytope. Hence, for this problem, the required description is available. Moreover, any matroid intersection polytope has only 0/1-facets. Last but not least, for the stable set problem in a graph G = (V, E), let P be the convex hull of all stable sets in G and let Ax ≤ b, Cx ≤ d be the clique inequalities that correspond to all maximal cliques in G.
Then these inequalities define 0/1-facets of P = conv{x ∈ {0, 1}^n : Ax ≤ b, Cx ≤ d}.

Under this further assumption, Theorem 2 implies that the exactness of the augmentation digraph for such a problem forces the problem to be a bipartite b-matching problem. Together with Corollary 1 this yields an algorithmic characterization of bipartite b-matching.

Theorem 2. Let I1 = {J ⊆ S : Aχ^J ≤ b} and I2 = {J ⊆ S : Cχ^J ≤ d} be two independence systems on S such that each inequality of Ax ≤ b, Cx ≤ d defines a 0/1-facet of the polytope P = conv{x ∈ {0, 1}^n : Ax ≤ b, Cx ≤ d}. If the augmentation digraph D(A,C) is exact then I1 ∩ I2 is the set of b-matchings in a bipartite graph.

Proof. Let I := I1 ∩ I2 = {J ⊆ S : χ^J ∈ P}. Since I is an independence system, any inequality of Ax ≤ b, Cx ≤ d has the form x(F) ≤ r(F) with F ⊆ S, where r(F) is the rank of F in I. We claim that each column of A and of C contains at most one nonzero entry. Assume that there are inequalities x(F) ≤ r(F) and x(F′) ≤ r(F′) of Ax ≤ b with F ≠ F′ and F ∩ F′ ≠ ∅. Then S := P ∩ {x : x(F) = r(F)} and S′ := P ∩ {x : x(F ∩ F′) = r(F ∩ F′)} define faces of P with S ⊈ S′, since S is even a facet, F ≠ F′, and S′ ≠ P. Therefore, there exists y ∈ (S \ S′) ∩ {0, 1}^n. For J := supp(y) ∩ F, we have J ∈ I, J ⊆ F, |J| = r(F), and |J ∩ F′| < r(F ∩ F′). Let J* be a maximal independent set in F ∩ F′ and c := χ^{(J ∪ J*) ∩ (F ∩ F′)}. Then c(J) = |J ∩ F′| < r(F ∩ F′) = |J*| = c(J*). Hence J is not maximal. Since the augmentation digraph is exact, there exists a negative (r, s)-dipath or dicycle T in D(J, A, C), i.e., we have

  |T ∩ J ∩ (F ∩ F′)| < |T ∩ (J* \ J) ∩ (F ∩ F′)|.

Since J is maximally independent in F it follows that J* \ J ⊆ {i ∉ J : A(y + e_i) ≤ b}. Thus, there exists an I1-arc (j, i) in T with j ∈ J, i ∉ J such that c(j) = 0 and c(i) = 1. This implies that j ∈ J \ (F ∩ F′) and i ∈ J* \ J ⊆ F ∩ F′.
We obtain j ∉ F′ and i ∈ F′, i.e., χ^{F′}_i = 1 and χ^{F′}_j = 0. This proves the claim, since it means that Ae_i ≤ Ae_j fails, contradicting the condition of an I1-arc. Extend the systems Ax ≤ b and Cx ≤ d by adding upper-bound inequalities x_i ≤ 1 such that each of the new systems A′x ≤ b′ and C′x ≤ d′ contains exactly one 1-entry in each column. These systems now represent the incidence matrix of a bipartite graph G = (V1 ∪ V2, E), where V1 and V2 correspond to the rows of A′ and C′, respectively. With b := (b′, d′) the elements of I are exactly the b-matchings in G. ⊓⊔

3 About an Algorithmic Characterization of Matroid Intersection

In this section we consider different conditions on the relations I1 and I2 and therefore a different augmentation digraph than in the previous section. The conditions on the relations are motivated by the matroid intersection problem.

Definition 6. Let I1 and I2 be matroids on S. The matroid intersection problem is to find a maximum-weight common independent set of I1, I2 for any c ∈ IR^S.

Definition 7. Let I be an independence system on S and C, A ⊆ S. C is a circuit of I if it is minimally dependent. The set of all circuits of I is the circuit system of I. The rank r(A) of A is the size of a maximum basis of A, whereas the lower rank r_l(A) of A is the size of a minimum basis of A. The rank quotient of I is min{r_l(A)/r(A) : A ⊆ S, r(A) > 0}. We denote by span(A) the maximal superset of A having the same rank as A.

The rank quotient of a matroid is 1. In general, the rank quotient of an independence system is not easy to determine. It is known that if an independence system I on S is the intersection of m matroids on S, then the rank quotient of I is at least 1/m. The bipartite b-matching problem is a special case of the matroid intersection problem since the independence systems defined as in (2) are matroids.
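On small ground sets, the rank quotient of Definition 7 can be computed by exhaustive enumeration. The sketch below (ours, for illustration; the toy independence system with bases {1} and {2,3} is a hypothetical example) finds the rank quotient 1/2 attained at A = {1,2,3}.

```python
from itertools import combinations

def rank_quotient(ground, independent):
    """min over nonempty A of (lower rank of A) / (rank of A), by brute force."""
    ground = sorted(ground)
    q = 1.0
    for k in range(1, len(ground) + 1):
        for A in combinations(ground, k):
            A = set(A)
            # independent subsets of A; the bases of A are the inclusion-maximal ones
            subs = [set(T) for r in range(len(A) + 1)
                    for T in combinations(sorted(A), r) if independent(set(T))]
            bases = [B for B in subs if not any(B < T for T in subs)]
            r, rl = max(map(len, bases)), min(map(len, bases))
            if r > 0:
                q = min(q, rl / r)
    return q

# Independence system with bases {1} and {2,3}: for A = {1,2,3} the bases have
# sizes 1 and 2, so the rank quotient is 1/2.
system = {frozenset(), frozenset({1}), frozenset({2}), frozenset({3}), frozenset({2, 3})}
assert rank_quotient({1, 2, 3}, lambda T: frozenset(T) in system) == 0.5
```

Consistently with the 1/m bound quoted above, this system is the intersection of two matroids but not a matroid itself.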
We want to define the relations I1 and I2 in such a way that the augmentation digraph D(I1,I2) is exact for two matroids I1, I2 on S. Hence, we have to weaken the conditions on the relations of the previous section: if i I1 j according to the relation introduced in Sect. 2, then i I1 j according to the relation introduced in this section, but now there exist more pairs of elements of S that are in relation with respect to I1; the same holds for the relation I2.

Let I1, I2 be two independence systems on S and J a common independent set. The relations I1 and I2 of this section are defined as follows:

  b I1 a :⇔ J \ {a} ∪ {b} ∈ I1,   b I2 a :⇔ J \ {a} ∪ {b} ∈ I2.

The corresponding augmentation digraph D(J, I1, I2) is illustrated in Fig. 5.

Fig. 5. The augmentation digraph D(J, I1, I2)

For I1 = I2 =: I and J ∈ I, the augmentation digraph D(J, I1, I2) coincides with DI(J) defined in Sect. 1. We remark that the difference to the augmentation digraph used in the algorithm for the cardinality matroid intersection problem, or in Frank's weight-splitting algorithm for the matroid intersection problem, lies in the additional arcs (r, a) and (a, s), see e.g. [3]. In the case of bipartite b-matching, let I1 and I2 be defined as in (2). Then for a b-matching J and a ∈ J, b ∉ J with J ∪ {b} ∉ I1 we have J ∪ {b} \ {a} ∈ I1 if and only if a and b have the same end node in V1; the same holds for I2. Hence, the relations and the augmentation digraphs of the two sections coincide.

The following lemma implies that for all pairs of matroids I1, I2 on S, we can use a shortest-path algorithm to find feasible negative dicycles or dipaths in D(J, I1, I2) if the digraph contains any negative dipath or dicycle.

Lemma 2.
Let I1, I2 be matroids on S, c ∈ IR^S, J a common independent set, and D(J) := D(J, I1, I2). Any minimal (w.r.t. cardinality) negative dicycle in D(J) is feasible for J. If there does not exist a negative dicycle in D(J), then any minimal shortest (r, s)-dipath in D(J) is feasible for J.

The proof of this lemma makes use of the following proposition, which can be proved by induction.

Proposition 1. [12] Let I be a matroid on S, J ∈ I, and b_1, a_1, ..., b_n, a_n a sequence of distinct elements of S such that

(i) b_i ∉ J, a_i ∈ J for 1 ≤ i ≤ n;
(ii) J ∪ {b_i} ∉ I, J ∪ {b_i} \ {a_i} ∈ I for 1 ≤ i ≤ n;
(iii) J ∪ {b_i} \ {a_j} ∉ I for 1 ≤ i < j ≤ n.

Then J′ := J △ {b_1, a_1, ..., b_n, a_n} ∈ I and span(J′) = span(J).

We next present the proof of Lemma 2. In the special case where J is maximal among all common independent sets of cardinality |J|, a similar proof is presented in [14] and in [9].

Proof (of Lemma 2). Let C be a negative dicycle in D(J) of minimal cardinality. Suppose there exists an undirected cycle K in D(J) that consists only of I1-arcs and is C-alternating, meaning that its edges are alternately in and not in C, see Fig. 6 (a). We claim that there exist k := |K|/2 dicycles C_1, ..., C_k in D(J) such that their node sets are subsets of the node set of C and the sum of their weights is a nonnegative integral multiple of the weight of C, a contradiction to the minimality of C. The construction of these dicycles is illustrated in Fig. 6.

Fig. 6. (a) A dicycle C = (a_1, b_1, ..., a_6, b_6) and a C-alternating cycle K in D(J), (b) the union of C and K, (c) the dicycles C_1, ..., C_4, which cover each I2-arc twice

Let C = (a_1, b_1, ..., a_n, b_n) with a_1, ..., a_n ∈ J and b_1, ..., b_n ∉ J, and let {d_1, ..., d_k} be the I1-arcs of K that are not in C.
For each d_i = (a_{i1}, b_{i2}) define C_i as the unique alternating dicycle in C ∪ K with C_i ∩ {d_1, ..., d_k} = {d_i}. Then |C_i| < |C| and V(C_i) ⊆ V(C) for 1 ≤ i ≤ k. We next show that every I2-arc of C (and hence every node of C) is contained in exactly λ := |{d_i : i1 < i2}| dicycles of C_1, ..., C_k. For (b_n, a_1) this is obvious, since (b_n, a_1) is an arc of C_i if and only if i1 < i2. Let (b_j, a_{j+1}) be an I2-arc of C with j < n. Since K is a cycle, we have

  |{d_i : i1 ≤ j < i2}| = |{d_i : i2 ≤ j < i1}|.  (3)

Moreover, (b_j, a_{j+1}) ∈ C_i if and only if j + 1 ≤ i1 < i2, i1 < i2 ≤ j, or i2 ≤ j < i1. Since

  |{d_i : j + 1 ≤ i1 < i2}| + |{d_i : i1 < i2 ≤ j}| + |{d_i : i1 ≤ j < i2}| = λ,

it follows with (3) that (b_j, a_{j+1}) is contained in exactly λ dicycles of C_1, ..., C_k. Hence, the sum of the weights of C_1, ..., C_k is λ times the weight of C. Therefore, there does not exist a cycle in D(J) that consists only of I1-arcs and is C-alternating. Proposition 1 implies that J △ C is independent in I1. Similarly, J △ C is independent in I2. Consequently, C is feasible for J. The proof for the (r, s)-dipath is similar. ⊓⊔

Lemma 2 implies that J is not maximal if there exists a negative (r, s)-dipath or dicycle in the augmentation digraph D(J, I1, I2). The following decomposition result implies the existence of a negative dicycle or (r, s)-dipath in D(J, I1, I2) for any non-maximal common independent set J.

Lemma 3. Let I1, I2 be matroids on S, c ∈ IR^S, and let J, J* be two common independent sets with D(J) := D(J, I1, I2). Then J △ J* is the union of pairwise node-disjoint dicycles and (r, s)-dipaths in D(J).

This result has also been obtained by Krogdahl [13,14] in the special case where |J*| = |J| + 1 and by Fujishige [9] for |J*| = |J|.

Proof.
For i = 1, 2, we define matroids Ii′ := {I ⊆ J △ J* : I ∪ (J ∩ J*) ∈ Ii} on S′ := J △ J*, which we obtain from I1 and I2 by contracting J ∩ J* and then deleting all elements e ∈ S with e ∉ J ∪ J*. For i = 1, 2, let span′_i and r′_i denote the span and the rank in Ii′, respectively, and let D_i be the digraph D(J) restricted to the Ii-arcs. Obviously, D_1 and D_2 are bipartite. We show that there is a matching in D_1 of B := {b ∈ J* \ J : J ∪ {b} ∉ I1} into A := J \ J*. By Hall's Theorem [11], it suffices to show that |X| ≤ |Γ(X)| for every X ⊆ B, where Γ(X) denotes the set of all nodes in A that are adjacent to at least one node of X. Let X ⊆ B. Then Γ(b) ∪ {b} is a circuit in I1′ for each b ∈ X. Thus, b ∈ span′_1(Γ(b)) ⊆ span′_1(Γ(X)) for all b ∈ X and hence X ⊆ span′_1(Γ(X)). Because X ⊆ B ⊆ J* \ J, X is independent in I1′. Hence, |X| ≤ r′_1(span′_1(Γ(X))). On the other hand, r′_1(span′_1(Γ(X))) = |Γ(X)| since Γ(X) is independent in I1′ (because Γ(X) ⊆ J \ J*). This yields |X| ≤ |Γ(X)|. Hence, there is a matching in D_1 of B into A. We expand this matching to a matching in D_1 that covers exactly J △ J*. Similarly, there is a matching in D_2 that covers J △ J*. The union of these matchings in D_1 and D_2 leads to the union of node-disjoint (r, s)-dipaths and dicycles in D(J) whose node set is J △ J*, see Fig. 8. ⊓⊔

Fig. 7. A perfect matching on D_1 and D_2 restricted to the node set J △ J*

Fig. 8. J △ J* as the union of node-disjoint dicycles and (r, s)-dipaths in D(J)

Consequently, we obtain the following two equivalent statements.

Theorem 3. Let I1, I2 be matroids on S and c ∈ IR^S. A common independent set J is maximal if and only if there does not exist a negative (r, s)-dipath or dicycle in D(J, I1, I2).

Theorem 4. D(I1,I2) is exact for two matroids I1, I2 on S.
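As a sanity check of Theorems 3 and 4 (not the paper's polynomial algorithm), a maximum-weight common independent set of two matroids can be found by naive enumeration on a toy instance; the two partition matroids below are hypothetical examples of ours.

```python
from itertools import combinations

# Two partition matroids on S = {1,2,3,4}: at most one element per part.
parts1 = [{1, 2}, {3, 4}]        # parts of the first matroid
parts2 = [{1, 3}, {2, 4}]        # parts of the second matroid
w = {1: 1, 2: 2, 3: 2, 4: 1}

def independent(T, parts):
    return all(len(T & p) <= 1 for p in parts)

# Exponential brute force over all subsets; fine for |S| = 4.
best = max((set(T) for k in range(5) for T in combinations([1, 2, 3, 4], k)
            if independent(set(T), parts1) and independent(set(T), parts2)),
           key=lambda T: sum(w[e] for e in T))
assert best == {2, 3} and sum(w[e] for e in best) == 4
```

For this instance, starting from the common independent set {1, 4} of weight 2, the augmentation framework would improve it via a negative dicycle to the optimum {2, 3}; the enumeration merely confirms the optimal value.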
Hence, we can solve the matroid intersection problem by a polynomial-time algorithm that is based on an augmentation strategy. The dipaths and dicycles TJ that are used in each augmentation step to augment the current common independent set J are irreducible, i.e., they are not decomposable: there do not exist T_1, T_2 such that TJ = T_1 ∪ T_2 and J △ T_1, J △ T_2 are common independent sets. Fujishige [9] as well as Brezovec, Cornuejols, and Glover [1] and Camerini and Hamacher [4] presented a different digraph than D(J, I1, I2) for a common independent set J to solve the augmentation problem for the matroid intersection problem. Their construction of the digraph corresponding to J depends on whether J is optimal among all independent sets of cardinality |J|. The dipaths and dicycles they augment with are in general not irreducible.

In the following we derive some properties of independence systems I1, I2 on S for which the augmentation digraph is exact. First we obtain a decomposition result similar to Lemma 3.

Theorem 5. Let I1, I2 be two independence systems on S such that D(I1,I2) is exact. Let J, J* ∈ I1 ∩ I2 and D(J) := D(J, I1, I2). Then J △ J* is the union of pairwise node-disjoint dicycles and (r, s)-dipaths in D(J).

Proof. For i = 1, 2, let D_i be the digraph obtained from D(J) by restricting to the Ii-arcs. Obviously, D_1 and D_2 are bipartite digraphs. We show that there is a matching in D_1 of B := {b ∈ J* \ J : J ∪ {b} ∉ I1} into A := J \ J*. By Hall's Theorem [11], it suffices to show that |X| ≤ |Γ(X)| for every X ⊆ B, where Γ(X) denotes the set of all nodes in A that are adjacent to at least one node of X. Let X ⊆ B and suppose |X| > |Γ(X)|. Let c := χ^{X ∪ Γ(X)} + χ^{J ∩ J*}. Then c(J) = |Γ(X)| + |J ∩ J*| < |X| + |J ∩ J*| = c(J*), i.e., J is not maximal. Since D(I1,I2) is exact there exists a negative (r, s)-dipath or dicycle Q in D(J, I1, I2). For every b ∈ Q ∩ B, Q contains an ingoing arc (a, b) with a ∈ Q ∩ J.
Since Q is negative, there exists an arc (a′, b′) of Q such that c_{a′} = 0 and c_{b′} = 1, i.e., a′ ∈ A \ Γ(X) and b′ ∈ X, a contradiction to the definition of Γ(X). Consequently, there is a matching in D_1 of B into A. Similarly, there is a matching in D_2 of {b ∈ J* \ J : J ∪ {b} ∉ I2} into A. The union of these matchings leads to the union of node-disjoint (r, s)-dipaths and dicycles in D(J) whose node set is J △ J*. ⊓⊔

Next we show a result similar to Lemma 2. It implies that if D(I1,I2) is exact, we can use a variant of a shortest-path algorithm to find a negative feasible (r, s)-dipath or dicycle in D(J, I1, I2) if J is non-maximal. Hence, for independence systems I1 and I2 on S for which the corresponding augmentation digraph is exact, we can solve the maximization problem (1) with F = I1 ∩ I2 in polynomial time for any c ∈ IR^S.

Theorem 6. Let I1, I2 be two independence systems on S such that D(I1,I2) is exact. Let c ∈ IR^S, J ∈ I1 ∩ I2, and D(J) := D(J, I1, I2). Any minimal (w.r.t. cardinality) negative dicycle in D(J) is feasible for J. If there does not exist a negative dicycle in D(J), then any negative shortest (r, s)-dipath of minimal cardinality in D(J) is feasible for J.

Proof. Let M := 2|S| · max{1, max{|c_i| : i ∈ S}}. Let C be a minimal negative dicycle in D(J). For i ∈ S, define

  c′_i := −M²    if i ∉ J ∪ C,
  c′_i := M²     if i ∈ J \ C,
  c′_i := c_i + M  if i ∈ C.

The weight of C w.r.t. c′ is equal to its weight w.r.t. c and hence negative. Since D(I1,I2) is exact, there is J* ∈ I1 ∩ I2 such that c′(J*) > c′(J). The definition of c′ implies that J \ C ⊆ J* ⊆ J ∪ C, i.e., J △ J* ⊆ C. By Theorem 5, J △ J* is the union of pairwise node-disjoint dicycles and (r, s)-dipaths in D(J). Since no (r, s)-dipath is negative w.r.t. c′, this union contains a dicycle C′ that is negative w.r.t. c′. Hence, C′ is negative w.r.t. c and its node set is a subset of C.
By the minimality of C, we obtain that the node sets of C and C′ coincide. Consequently, J △ J* = C, i.e., J △ C = J* ∈ I1 ∩ I2. The other cases can be proved similarly. ⊓⊔

Definition 8. For sets S_1, ..., S_k such that S_i ⊈ S_j for i ≠ j, we denote by <S_1, ..., S_k> the independence system on S_1 ∪ ... ∪ S_k with bases S_1, ..., S_k.

Let I1, I2 be independence systems on S such that D(I1,I2) is exact. Then I1 and I2 need not be matroids. To see this, consider, for instance, I1 = <{1, 2}, {3}, {4}> and I2 = <{1}, {2}, {3, 4}>. In general, D(I1,I2) is not exact, even if I1 ∩ I2 is a matroid. As an example, let S = {1, 2, 3}, I1 = <{1, 2}, {3}>, I2 = <{1}, {2, 3}>, and let c = 1 be the all-ones vector. Then I1 ∩ I2 = <{1}, {2}, {3}> is a matroid on S. {2} is a maximal common independent set but D({2}, I1, I2) contains the negative (r, s)-dipath (1, 2, 3). Suppose that D(I1,I2) were exact. Then Theorem 6 would imply {1, 3} ∈ I1 ∩ I2, a contradiction. Hence, D(I1,I2) is not exact.

Consequently, the fact that the intersection of two independence systems I1 and I2 on S can also be represented as the intersection of two matroids on S (or is even a matroid) does not imply the exactness of D(I1,I2). On the other hand, the following theorem can be interpreted as an indication that the exactness of D(I1,I2) for two independence systems I1 and I2 on S forces the intersection of I1 and I2 to be an intersection of two matroids. This interpretation arises from the fact that the rank quotient of an independence system is at least 1/m if the independence system is the intersection of m matroids.

Theorem 7. If D(I1,I2) is exact for I1, I2, then the rank quotient of I1 ∩ I2 is at least 1/2.

Proof. Let I := I1 ∩ I2, let A ⊆ S, and let J, J* ∈ I with J, J* ⊆ A. Suppose |J*| > 2 · |J|. It follows that |J* \ J| ≥ 2 · |J \ J*| + 1. Let m := |J \ J*|. We consider subsets of J* \ J of cardinality m + 1.
Let J_1 be one of them and J̄_1 := (J ∩ J*) ∪ J_1. Then J̄_1 ⊆ J* ∈ I and |J̄_1| = |J| + 1. Since D(I1,I2) is exact, there is i_1 ∈ J_1 such that J ∪ {i_1} ∈ I1. Let J_2 be a subset of J* \ J of cardinality m + 1 with i_1 ∈ J_2. Analogously there exists i_2 ∈ J_2 such that J ∪ {i_2} ∈ I1. Continuing this argument we obtain elements i_1, i_2, ..., i_m, i_{m+1} ∈ J* \ J such that J ∪ {i_j} ∈ I1 for 1 ≤ j ≤ m + 1. Let J′ := (J ∩ J*) ∪ {i_1, i_2, ..., i_m, i_{m+1}}. Then J′ ⊆ J* ∈ I and |J′| = |J| + 1. Hence, there exists a 1 ≤ j ≤ m + 1 such that J ∪ {i_j} ∈ I2 and thus J ∪ {i_j} ∈ I. Consequently, J is not a basis of A. ⊓⊔

Conjecture 1. Let I1, I2 be independence systems on S such that D(I1,I2) is exact. Then the intersection of I1 and I2 can also be represented as the intersection of two matroids on S.

In general, the exactness of D(·, I1, I2) does not imply the existence of matroids M1, M2 such that I1 ∩ I2 = M1 ∩ M2 and the corresponding augmentation digraphs coincide. To see this, consider S = {1, 2, 3}, I1 = <{1, 2}, {3}>, and I2 = <{1}, {2}, {3}>. Then D(I1,I2) is exact. Suppose there existed matroids M1, M2 with coinciding augmentation digraphs. Then {1, 2} and {3} would both be elements of M1 but {1, 3} and {2, 3} would not, a contradiction to M1 being a matroid. This fact already indicates a difficulty in proving Conjecture 1: since the augmentation digraph of the conjectured matroids is in general different from the one of the original independence systems, it is not obvious how to define the matroids. Nevertheless, we present partial results for Conjecture 1. In particular, the validity of this conjecture has been verified in the following special cases, see [15]: r(I1 ∩ I2) ∈ {1, 2, n − 1, n}, r(I1 ∩ I2) = r_l(I1 ∩ I2) = n − 2, or n ≤ 5, where n is the cardinality of S. We refrain from giving the technical proofs here since they provide no insight towards a general proof.
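The small counterexamples above, e.g. I1 = <{1,2},{3}> and I2 = <{1},{2,3}>, are easy to verify by brute force; a sketch (ours, for illustration):

```python
from itertools import combinations

def down_closure(bases):
    """Independence system <S1,...,Sk>: all subsets of the given bases."""
    return {frozenset(T) for B in bases for r in range(len(B) + 1)
            for T in combinations(sorted(B), r)}

I1 = down_closure([{1, 2}, {3}])
I2 = down_closure([{1}, {2, 3}])
I = I1 & I2
# The intersection is <{1},{2},{3}>, a matroid (all bases have cardinality 1) ...
assert I == down_closure([{1}, {2}, {3}])
assert max(len(T) for T in I) == 1
# ... {2} is maximal w.r.t. the all-ones weights, yet {1,3} = {2} symdiff {1,2,3}
# is not a common independent set, so the dipath (1, 2, 3) is negative but infeasible.
assert frozenset({1, 3}) not in I1 and frozenset({1, 3}) not in I
```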
We next mention one possible attempt to prove Conjecture 1. We exhibit a construction for the supposedly existent matroids M1 and M2 on S such that I1 ∩ I2 = M1 ∩ M2. Let I := I1 ∩ I2 and let C be the circuit system of I. For i = 1, 2, let

  Ci := {C ∈ C : C ∉ Ii}.  (4)

We define Mi to be the matroid on S with the maximum number of independent sets such that any circuit in Ci is dependent in Mi. Then M1 ∩ M2 ⊆ I. We conjecture that even M1 ∩ M2 = I = I1 ∩ I2 holds. This is, however, quite difficult to prove, for the following reason. It has to be shown that no element of I is dependent in either of the matroids M1, M2. We only know that D(I1,I2) is exact; hence, we have some information about the elements of I and the augmentation digraphs D(J, I1, I2) for J ∈ I. Consequently, proving results for M1 and M2, whose elements have in general nothing to do with any of these augmentation digraphs, is a difficult task.

We consider an example. In Fig. 9 and Fig. 10 we picture two graphic matroids M′1 and M′2 on S = {a, b, c, d, e, f, g, h}. Theorem 4 guarantees the exactness of D(M′1,M′2).

Fig. 9. Graphic matroid M′1   Fig. 10. Graphic matroid M′2

Define I1 := M′1 \ {abef, abgh, cdef, cdgh, efgh} and I2 := M′2, where s1s2...sk is short for {s1, s2, ..., sk}. Then I1 ∩ I2 = M′1 ∩ M′2. I1 is an independence system on S which is not a matroid, since aef and abfg are bases of abefg of different cardinality. The independent sets of M′1 that are not in I1 have no effect on the augmentation digraphs. Consequently, the family of digraphs D(I1,I2) is exact. The circuit system C of I := I1 ∩ I2 is C = {ab, cd, ef, gh, aeg, bfh, cfg, deh}. Then by (4), C1 = {aeg, bfh, cfg, deh} and C2 = {ab, cd, ef, gh}. C2 is the circuit system of the matroid M2 := M′2. C1 is not the circuit system of a matroid.
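That C1 fails to be the circuit system of a matroid can be checked against the circuit elimination axiom (for circuits C ≠ C′ sharing an element e, (C ∪ C′) \ {e} must contain a circuit). A brute-force check (our code, for illustration):

```python
def circuit_axiom(circuits):
    """Circuit elimination: for C != C2 sharing e, (C | C2) - {e} contains a circuit."""
    for C in circuits:
        for C2 in circuits:
            if C != C2:
                for e in C & C2:
                    union = (C | C2) - {e}
                    if not any(D <= union for D in circuits):
                        return False
    return True

C1 = {frozenset(s) for s in ("aeg", "bfh", "cfg", "deh")}
C2 = {frozenset(s) for s in ("ab", "cd", "ef", "gh")}
assert circuit_axiom(C2)        # pairwise disjoint circuits: the axiom holds vacuously
assert not circuit_axiom(C1)    # aeg and cfg share g, but {a,e,c,f} contains no circuit
```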
The matroid M1 on S with the maximum number of independent sets such that any circuit in C1 is dependent in M1 is M1 := M′1 ∪ {abcd}. We have M1 ∩ M2 = I1 ∩ I2.

Finding an algorithmic characterization of matroid intersection remains a challenging open problem.

References

1. C. Brezovec, G. Cornuejols, and F. Glover, Two algorithms for weighted matroid intersection, Mathematical Programming 36 (1986), 39–53.
2. C. Berge, Two theorems in graph theory, Proc. of the National Academy of Sciences (U.S.A.) 43 (1957), 842–844.
3. W.J. Cook, W.H. Cunningham, W.R. Pulleyblank, and A. Schrijver, Combinatorial optimization, Wiley-Interscience, New York, 1998.
4. P.M. Camerini and H.W. Hamacher, Intersection of two matroids: (condensed) border graphs and ranking, SIAM Journal on Discrete Mathematics 2 (1989), no. 1, 16–27.
5. J. Edmonds, Matroid partition, Math. Decision Sciences, Proceedings 5th Summer Seminar Stanford 1967, Part 1 (Lectures of Applied Mathematics 11) (1968), 335–345.
6. J. Edmonds, Submodular functions, matroids, and certain polyhedra, Combinatorial Structures and their Applications (R.K. Guy, H. Hanani, N. Sauer and J. Schönheim, eds.), Gordon and Breach, New York (1970), 69–87.
7. J. Edmonds, Matroids and the greedy algorithm, Mathematical Programming 1 (1971), 127–136.
8. J. Edmonds, Matroid intersection, Annals of Discrete Mathematics 4 (1979), 39–49.
9. S. Fujishige, A primal approach to the independent assignment problem, Journal of the Operations Research Society of Japan 20 (1977), 1–15.
10. M. Grötschel and L. Lovász, Combinatorial optimization, Handbook of Combinatorics (R. Graham, M. Grötschel, and L. Lovász, eds.), North-Holland, Amsterdam, 1995, pp. 1541–1598.
11. P. Hall, On representatives of subsets, Journal of the London Mathematical Society 10 (1935), 26–30.
12. M. Iri and N.
Tomizawa, An algorithm for finding an optimal "independent assignment", Journal of the Operations Research Society of Japan 19 (1976), 32–57.
13. S. Krogdahl, A combinatorial proof of Lawler's matroid intersection algorithm, unpublished manuscript (partly published in [14]), 1975.
14. E.L. Lawler, Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, New York etc., 1976.
15. B. Spille, Primal characterizations of combinatorial optimization problems, PhD thesis, University of Magdeburg, Germany, 2001.
16. A.S. Schulz, R. Weismantel, and G.M. Ziegler, 0/1 integer programming: optimization and augmentation are equivalent, Algorithms – ESA '95 (P. Spirakis, ed.), Lecture Notes in Computer Science 979, Springer, Berlin, 1995, pp. 473–483.

Solving Real-World ATSP Instances by Branch-and-Cut

Matteo Fischetti¹, Andrea Lodi², and Paolo Toth²
¹ DEI, University of Padova, Via Gradenigo 6/A, 35100 Padova, Italy
matteo.fischetti@unipd.it
² DEIS, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
{alodi,ptoth}@deis.unibo.it

Abstract. Recently, Fischetti, Lodi and Toth [15] surveyed exact methods for the Asymmetric Traveling Salesman Problem (ATSP) and computationally compared branch-and-bound and branch-and-cut codes. The results of this comparison showed that branch-and-cut is the most effective method for solving hard ATSP instances. In the present paper the branch-and-cut algorithms by Fischetti and Toth [17] and by Applegate, Bixby, Chvátal and Cook [2] are considered and tested on a set of 35 real-world instances, including 16 new instances recently presented in [12].

1 Introduction

Let G = (V, A) be a given complete digraph, where V = {1, ..., n} is the vertex set and A = {(i, j) : i, j ∈ V} the arc set, and let cij be the cost associated with arc (i, j) ∈ A (with cii = +∞ for each i ∈ V). A Hamiltonian circuit (tour) of G is a circuit visiting each vertex of V exactly once.
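The objects just defined are easy to make concrete: a tour can be stored as a vertex permutation and its cost accumulated arc by arc. A minimal sketch (the cost matrix is a made-up toy instance):

```python
def tour_cost(c, tour):
    """Cost of the circuit that visits `tour` in order and closes up;
    c[i][j] is the cost of arc (i, j)."""
    n = len(tour)
    return sum(c[tour[k]][tour[(k + 1) % n]] for k in range(n))

# toy 4-vertex asymmetric cost matrix (values are made up)
c = [[0, 1, 9, 9],
     [9, 0, 1, 9],
     [9, 9, 0, 1],
     [1, 9, 9, 0]]
print(tour_cost(c, [0, 1, 2, 3]))  # 4
```

Note that reversing the tour changes its cost here (tour [0, 3, 2, 1] costs 36), which is exactly the asymmetry the ATSP model has to capture.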
The Asymmetric Traveling Salesman Problem (ATSP) is to find a Hamiltonian circuit G∗ = (V, A∗) of G whose cost Σ_{(i,j)∈A∗} cij is a minimum. It can be formulated as the following Integer Linear Program (ILP):

v(ATSP) = min Σ_{(i,j)∈A} cij xij (1)

subject to

Σ_{i∈V} xij = 1   for j ∈ V (2)
Σ_{j∈V} xij = 1   for i ∈ V (3)
Σ_{i∈S} Σ_{j∈S} xij ≤ |S| − 1   for S ⊂ V, S ≠ ∅ (4)
xij ≥ 0   for (i, j) ∈ A (5)
xij integer   for (i, j) ∈ A (6)

where xij = 1 if and only if arc (i, j) is in the optimal tour. Constraints (2) and (3) impose that the in-degree and the out-degree, respectively, of each vertex equal one, while constraints (4) are the Subtour Elimination Constraints (SECs), which impose that no partial circuit exists.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 64–77, 2003. © Springer-Verlag Berlin Heidelberg 2003

To simplify notation, for any f : A → R and S1, S2 ⊆ V, we write f(S1, S2) for Σ_{i∈S1} Σ_{j∈S2} fij; moreover, we write f(i, S2) or f(S1, i) whenever S1 = {i} or S2 = {i}, respectively.

Fischetti, Lodi and Toth [15] recently surveyed exact methods for the ATSP and computationally compared four exact codes, namely the branch-and-bound code by Carpaneto, Dell'Amico and Toth [11] based on the Assignment Problem (AP) relaxation¹, the additive branch-and-bound code by Fischetti and Toth [16], the branch-and-cut code by Fischetti and Toth [17], and the branch-and-cut Symmetric TSP (STSP) code by Applegate, Bixby, Chvátal and Cook [1]. These codes were extensively tested on a set of 86 ATSP instances. The results of this comparison showed that branch-and-cut is the most effective method for solving hard ATSP instances. In the present paper the two branch-and-cut codes considered in [15] are tested on a set of 35 real-world instances, including 16 new instances recently presented in [12]. The paper is organized as follows.
In Section 2 the branch-and-cut method by Fischetti and Toth [17] is briefly discussed, while in Section 3 the adaptations of the STSP branch-and-cut code by Applegate, Bixby, Chvátal and Cook [1] needed to solve ATSP instances are presented. In Section 4 the codes are compared on the set of instances. Some conclusions are finally drawn in Section 5.

2 The Branch-and-Cut Algorithm by Fischetti and Toth

We next outline the polyhedral method of Fischetti and Toth [17]. Branch-and-cut methods for the ATSP with side constraints have been proposed recently by Ascheuer [3], Ascheuer, Jünger and Reinelt [6], and Ascheuer, Fischetti and Grötschel [4,5], among others. The Fischetti–Toth method is based on model (1)–(6), and exploits additional classes of facet-inducing inequalities for the ATSP polytope P that proved to be of crucial importance for the solution of some real-world instances. For each class, one needs to address the associated separation problem (in its optimization version), defined as follows: Given a point x∗ ≥ 0 satisfying the degree equations, along with a family F of ATSP inequalities, find a most violated member of F, i.e., an inequality αx ≤ α0 belonging to F and maximizing the degree of violation αx∗ − α0.

2.1 Separation of Symmetric Inequalities

An ATSP inequality αx ≤ α0 is called symmetric when αij = αji for all (i, j) ∈ A. Symmetric inequalities can be thought of as derived from valid inequalities for the STSP, defined as the problem of finding a minimum-cost Hamiltonian cycle in a given undirected graph GE = (V, E). Indeed, let ye = 1 if edge e ∈ E belongs to the optimal STSP solution, and ye = 0 otherwise. Every inequality Σ_{e∈E} αe ye ≤ α0 for the STSP can be transformed into a valid ATSP inequality by simply replacing ye by xij + xji for all edges e = (i, j) ∈ E.

¹ The AP is the relaxation obtained by dropping constraints (4).
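The replacement just described is purely mechanical; a sketch (helper name and dict encoding are ours) that maps the edge coefficients of an STSP inequality to the corresponding symmetric ATSP arc coefficients:

```python
def symmetrize(alpha_edge):
    """Map STSP edge coefficients {frozenset({i, j}): alpha_e} to ATSP arc
    coefficients alpha[i, j] = alpha[j, i] = alpha_e; substituting
    y_e = x_ij + x_ji leaves the right-hand side alpha_0 unchanged."""
    alpha_arc = {}
    for e, a in alpha_edge.items():
        i, j = tuple(e)
        alpha_arc[i, j] = alpha_arc[j, i] = a
    return alpha_arc

# STSP subtour inequality y(S) <= |S| - 1 on S = {0, 1, 2}
stsp = {frozenset({0, 1}): 1, frozenset({0, 2}): 1, frozenset({1, 2}): 1}
atsp = symmetrize(stsp)
assert atsp[0, 1] == atsp[1, 0] == 1 and len(atsp) == 6
```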
This produces the symmetric inequality αx ≤ α0, where αij = αji = α(i,j) for all i, j ∈ V, i ≠ j. Conversely, every symmetric ATSP inequality αx ≤ α0 corresponds to the valid STSP inequality Σ_{(i,j)∈E} αij y(i,j) ≤ α0.

The above correspondence implies that every separation algorithm for the STSP can be used, as a "black box", for the ATSP as well. To this end, given the (fractional) ATSP point x∗, one first defines the undirected counterpart y∗ of x∗ by means of the transformation ye∗ := x∗ij + x∗ji for all edges e = (i, j) ∈ E, and then applies the STSP separation algorithm to y∗. On return, the detected most violated STSP inequality is transformed into its ATSP counterpart, both inequalities having the same degree of violation.

Several exact/heuristic separation algorithms for the STSP have been proposed in recent years, all of which can be used for the ATSP. In [17] only two such separation tools are used, namely: (i) the Padberg–Rinaldi [26] exact algorithm for SECs; and (ii) the simplest heuristic scheme for comb (actually, 2-matching) constraints, in which the components of the graph induced by the edges e ∈ E with fractional ye∗ are considered as potential handles of the comb.

2.2 Separation of Dk+ and Dk− Inequalities

The following Dk+ inequalities have been proposed by Grötschel and Padberg [19]:

x_{i1 ik} + Σ_{h=2}^{k} x_{ih i(h−1)} + 2 Σ_{h=2}^{k−1} x_{i1 ih} + Σ_{h=3}^{k−1} x({i2, ..., i(h−1)}, ih) ≤ k − 1 (7)

where (i1, ..., ik) is any sequence of k ∈ {3, ..., n − 1} distinct vertices. Dk+ inequalities are facet-inducing for the ATSP polytope [13], and are obtained by lifting the cycle inequality Σ_{(i,j)∈C} xij ≤ k − 1 associated with the subtour C := {(i1, ik), (ik, i(k−1)), ..., (i2, i1)}. The separation problem for the class of Dk+ inequalities calls for a vertex sequence (i1, ..., ik), 3 ≤ k ≤ n − 1, for which the degree of violation x∗_{i1 ik} + Σ_{h=2}^{k} x∗_{ih i(h−1)} + 2 Σ_{h=2}^{k−1} x∗_{i1 ih} + Σ_{h=3}^{k−1} x∗({i2, ..., i(h−1)}, ih) − k + 1 is as large as possible.
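For a fixed vertex sequence, the degree of violation of (7) is straightforward to evaluate; a sketch (x∗ is assumed given as a dict on arcs, function name ours):

```python
def dk_plus_violation(x, seq):
    """Degree of violation of the D_k^+ inequality (7) for the vertex
    sequence seq = (i_1, ..., i_k); x[i, j] holds the fractional values
    (missing arcs are treated as 0)."""
    k = len(seq)
    v = lambda i, j: x.get((i, j), 0.0)
    lhs = v(seq[0], seq[k - 1])                                     # x_{i1 ik}
    lhs += sum(v(seq[h], seq[h - 1]) for h in range(1, k))          # h = 2..k
    lhs += 2 * sum(v(seq[0], seq[h]) for h in range(1, k - 1))      # h = 2..k-1
    lhs += sum(v(seq[g], seq[h]) for h in range(2, k - 1)           # h = 3..k-1
                                 for g in range(1, h))              # g over {i2..i_{h-1}}
    return lhs - (k - 1)

# the subtour (i1,i3),(i3,i2),(i2,i1) itself violates (7) by exactly 1
x = {(0, 2): 1.0, (2, 1): 1.0, (1, 0): 1.0}
assert abs(dk_plus_violation(x, (0, 1, 2)) - 1.0) < 1e-9
```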
This is itself a combinatorial optimization problem that can be effectively solved in practice by an implicit enumeration scheme enhanced by suitable pruning conditions [17]. Also addressed in [17] are the following Dk− inequalities:

x_{ik i1} + Σ_{h=2}^{k} x_{i(h−1) ih} + 2 Σ_{h=2}^{k−1} x_{ih i1} + Σ_{h=3}^{k−1} x(ih, {i2, ..., i(h−1)}) ≤ k − 1 (8)

where (i1, ..., ik) is any sequence of k ∈ {3, ..., n − 1} distinct nodes. Dk− inequalities are valid [19] and facet-inducing [13] for P; they can be obtained by lifting the cycle inequality Σ_{(i,j)∈C} xij ≤ k − 1 associated with the circuit C := {(i1, i2), ..., (i(k−1), ik), (ik, i1)}. Dk− inequalities can be thought of as derived from Dk+ inequalities by swapping the coefficients of the two arcs (i, j) and (j, i) for all i, j ∈ V, i < j. This is a perfectly general operation, called transposition in [19], which works as follows. For every α ∈ R^A, let αT ∈ R^A be defined by αT_ij := αji for all (i, j) ∈ A. Clearly, inequality αx ≤ α0 is valid (or facet-inducing) for the ATSP polytope P if and only if its transposed version, αT x ≤ α0, is. This follows from the obvious fact that αT x = α xT, where x ∈ P if and only if xT ∈ P. Moreover, every separation procedure for αx ≤ α0 can also be used, as a black box, to deal with αT x ≤ α0. To this end one gives the transposed point (x∗)T (instead of x∗) on input to the procedure, and then transposes the returned inequality. The above considerations show that both the heuristic and the exact separation algorithms designed for Dk+ inequalities can be used for Dk− inequalities as well.

2.3 Separation of Odd CAT Inequalities

The following class of inequalities has been proposed by Balas [7]. Two distinct arcs (i, j) and (u, v) are called incompatible if i = u, or j = v, or i = v and j = u; compatible otherwise. A Closed Alternating Trail (CAT, for short) is a sequence T = {a1, ..., at} of t distinct arcs such that, for k = 1, ..., t, arc ak is incompatible with arcs a(k−1) and a(k+1), and compatible with all other arcs in T (with a0 := at and a(t+1) := a1). Let δ+(v) and δ−(v) denote the sets of arcs of G leaving and entering any vertex v ∈ V, respectively. Given a CAT T, a node v is called a source if |δ+(v) ∩ T| = 2, whereas it is called a sink if |δ−(v) ∩ T| = 2. Notice that a node can play both the source and the sink role. Let Q be the set of arcs (i, j) ∈ A \ T such that i is a source node and j is a sink node. For any CAT T of odd length t, the odd CAT inequality

Σ_{(i,j)∈T∪Q} xij ≤ (|T| − 1)/2 (9)

is valid and facet-defining (except in two pathological cases arising for n ≤ 6) for the ATSP polytope [7].

The following heuristic separation algorithm is based on the known fact that odd CAT inequalities correspond to odd cycles in an auxiliary "incompatibility" graph [7]. Given the point x∗, we set up an edge-weighted undirected graph G̃ = (Ñ, Ẽ) having a node νa for each arc a ∈ A with x∗a > 0, and an edge e = (νa, νb) for each pair a, b of incompatible arcs, whose weight is defined as we := 1 − (x∗a + x∗b). We assume that x∗ satisfies all degree equations as well as all trivial SECs of the form xij + xji ≤ 1; this implies we ≥ 0 for all e ∈ Ẽ. Let δ̃(v) contain the edges in Ẽ incident with a given node v ∈ Ñ. A cycle C̃ of G̃ is an edge subset of Ẽ inducing a connected subgraph of G̃ and such that |C̃ ∩ δ̃(v)| is even for all v ∈ Ñ. Cycle C̃ is called (i) odd if |C̃| is odd; (ii) simple if |C̃ ∩ δ̃(v)| ∈ {0, 2} for all v ∈ Ñ; and (iii) chordless if the subgraph of G̃ induced by the nodes covered by C̃ has no edges other than those in C̃.

By construction, every simple and chordless odd cycle C̃ in G̃ corresponds to an odd CAT T, where a ∈ T if and only if νa is covered by C̃.
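Building the incompatibility graph is straightforward; a dict-based sketch (helper names ours) of the node, edge, and weight construction just described:

```python
def incompatible(a, b):
    """Arcs a = (i, j) and b = (u, v) are incompatible if i = u,
    or j = v, or (i = v and j = u)."""
    (i, j), (u, v) = a, b
    return i == u or j == v or (i == v and j == u)

def incompatibility_graph(x):
    """Nodes: arcs a with x[a] > 0; edges: incompatible pairs,
    weighted w_e = 1 - (x_a + x_b)."""
    nodes = [a for a, val in x.items() if val > 0]
    edges = {}
    for p, a in enumerate(nodes):
        for b in nodes[p + 1:]:
            if incompatible(a, b):
                edges[a, b] = 1.0 - (x[a] + x[b])
    return nodes, edges

# small fractional fragment: (1,2) clashes with both (2,1) and (1,3),
# while (2,1) and (1,3) are compatible
x = {(1, 2): 0.5, (2, 1): 0.5, (1, 3): 0.5}
nodes, edges = incompatibility_graph(x)
assert len(edges) == 2
```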
In addition, the total weight of C̃ is w(C̃) := Σ_{e∈C̃} we = Σ_{(νa,νb)∈C̃} (1 − x∗a − x∗b) = |T| − 2 Σ_{a∈T} x∗a; hence (1 − w(C̃))/2 gives a lower bound on the degree of violation of the corresponding odd CAT inequality, computed as φ(T) := (2 Σ_{a∈T∪Q} x∗a − |T| + 1)/2.

The heuristic separation algorithm used in [17] computes, for each e ∈ Ẽ, a minimum-weight odd cycle C̃e that uses edge e. If C̃e happens to be simple and chordless, then it corresponds to an odd CAT, say T. If, in addition, the lower bound (1 − w(C̃e))/2 exceeds a given threshold θ = −1/2, then the corresponding inequality is hopefully violated; hence one evaluates its actual degree of violation, φ(T), and stores the inequality if φ(T) > 0. In order to avoid detecting the same inequality twice, edge e is removed from G̃ after the computation of each C̃e.

The key step of the algorithm is the computation in G̃ of a minimum-weight odd cycle going through a given edge. Assuming that the edge weights are all nonnegative, this problem is known to be polynomially solvable, as it can be transformed into a shortest-path problem; see Gerards and Schrijver [18]. To this end one constructs an auxiliary bipartite undirected graph GB = (N′B ∪ N″B, EB) obtained from G̃ as follows. For each node ν in G̃ there are two nodes in GB, say ν′ and ν″. For each edge e = (ν1, ν2) of G̃ there are two edges in GB, namely edge (ν′1, ν″2) and edge (ν′2, ν″1), both having weight we. By construction, every minimum-weight odd cycle C̃e of G̃ going through edge e = (ν1, ν2) corresponds in GB to a shortest path from ν′1 to ν′2, plus the edge (ν′2, ν″1). Hence, the computation of all the C̃e's can be performed efficiently by computing, for each ν′1, the shortest paths from ν′1 to all other nodes in N′B.

2.4 Clique Lifting and Shrinking

Clique lifting can be described as follows; see Balas and Fischetti [9] for details. Let P(G′) denote the ATSP polytope associated with a given complete digraph G′ = (V′, A′).
Given a valid inequality βy ≤ β0 for P(G′), we define

βhh := max{βih + βhj − βij : i, j ∈ V′ \ {h}, i ≠ j}

for all h ∈ V′, and construct an enlarged complete digraph G = (V, A) obtained from G′ by replacing each node h ∈ V′ by a clique Sh containing at least one node (hence |V| = Σ_{h∈V′} |Sh| ≥ |V′|). In other words, (S1, ..., S|V′|) is a proper partition of V, in which the h-th set corresponds to the h-th node of V′. For all v ∈ V, let h(v) be such that v ∈ Sh(v). We define a new clique-lifted inequality for P(G), say αx ≤ α0, where α0 := β0 + Σ_{h∈V′} βhh(|Sh| − 1) and αij := βh(i)h(j) for each (i, j) ∈ A. It is shown in [9] that the new inequality is always valid for P(G); in addition, if the starting inequality βy ≤ β0 defines a facet of P(G′), then αx ≤ α0 is guaranteed to be facet-inducing for P(G).

Clique lifting is a powerful theoretical tool for extending known classes of inequalities. It also has important applications in the design of separation algorithms, in that it allows one to simplify the separation problem through the following shrinking procedure [27].

Let S ⊂ V, 2 ≤ |S| ≤ n − 2, be a vertex subset saturated by x∗, in the sense that x∗(S, S) = |S| − 1, and suppose S is shrunk into a single node, say σ, and x∗ is updated accordingly. Let G′ = (V′, A′) denote the shrunken digraph, where V′ := (V \ S) ∪ {σ}, and let y∗ be the shrunken counterpart of x∗. Every valid inequality βy ≤ β0 for P(G′) that is violated by y∗ corresponds in G to a violated inequality, say αx ≤ α0, obtained through clique lifting by replacing back σ with the original set S. As observed by Padberg and Rinaldi [27], however, this shrinking operation can affect the possibility of detecting violated cuts on G′, as it may produce a point y∗ belonging to P(G′) even when x∗ ∉ P(G). The above observation shows that shrinking has to be applied with some care.
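The lifted coefficients can be computed directly from the definition; a sketch (β given as a dict over ordered pairs, helper names ours, |V′| ≥ 3 assumed so the max is over a nonempty set):

```python
def clique_lift(beta, beta0, parts):
    """Clique-lift an inequality beta*y <= beta0 on the nodes of V' to the
    enlarged digraph whose node set is partitioned into cliques parts[h]
    (one clique per node h of V'); beta[h, k] is given for h != k and
    beta_hh is derived as in the definition above."""
    Vp = list(parts)
    bhh = {h: max(beta[i, h] + beta[h, j] - beta[i, j]
                  for i in Vp for j in Vp
                  if i != h and j != h and i != j)
           for h in Vp}
    owner = {v: h for h in Vp for v in parts[h]}       # h(v)
    V = list(owner)
    alpha = {(u, v): (bhh[owner[u]] if owner[u] == owner[v]
                      else beta[owner[u], owner[v]])
             for u in V for v in V if u != v}
    alpha0 = beta0 + sum(bhh[h] * (len(parts[h]) - 1) for h in Vp)
    return alpha, alpha0

# subtour inequality sum x_ij <= 2 on V' = {0, 1, 2}; blowing node 0 up
# into the clique {0, 3} yields the subtour inequality on 4 nodes
beta = {(i, j): 1 for i in range(3) for j in range(3) if i != j}
alpha, alpha0 = clique_lift(beta, 2, {0: [0, 3], 1: [1], 2: [2]})
assert alpha0 == 3 and len(alpha) == 12 and set(alpha.values()) == {1}
```

The usage example reproduces the expected behavior on a case where the lifted inequality is known: a subtour constraint lifts to a subtour constraint on the enlarged vertex set.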
There are, however, simple conditions on the choice of S that guarantee y∗ ∉ P(G′), provided x∗ ∉ P(G), as in the cases of interest for separation. The simplest such condition concerns the shrinking of 1-arcs (i.e., arcs (i, j) with x∗ij = 1), and requires S = {i, j} for a node pair i, j with x∗ij = 1. In [17], 1-arc shrinking is applied iteratively, so as to replace each path of 1-arcs by a single node. As a result of this pre-processing of x∗, all the nonzero variables are fractional. Notice that a similar result cannot be obtained for the symmetric TSP, where each chain of 1-edges (i.e., edges whose associated x∗ variable takes value 1) can be replaced by a single 1-edge, but not by a single node.

2.5 Pricing with Degeneracy

Pricing is an important ingredient of branch-and-cut codes, in that it allows one to handle effectively LP problems involving a huge number of columns. Let

z := min{cx : Mx ≡ b, x ≥ 0} (10)

be the LP problem to be solved, where M is an m × |A| matrix whose columns are indexed by the arcs (i, j) ∈ A. The first 2n − 1 rows of M correspond to the degree equations (2)–(3) (with the redundant constraint x(1, V) = 1 omitted), whereas the remaining rows, if any, correspond to some of the cuts generated through separation. The notation "≡" stands for "=" for the first 2n − 1 rows of M, and for "≤" for the remaining rows. Let M^h_ij denote the entry of M indexed by row h and column (i, j).

In order to keep the size of the LP as small as possible, the following pricing scheme is commonly used. We determine a (small) core set of arcs, say Ã, and decide to temporarily fix xij = 0 for all (i, j) ∈ A \ Ã. We then solve the restricted LP problem

z̃ := min{c̃x̃ : M̃x̃ ≡ b, x̃ ≥ 0} (11)

where c̃, x̃, and M̃ are obtained from c, x, and M, respectively, by removing all entries indexed by A \ Ã. Assume problem (11) is feasible, and let x̃∗ and ũ∗ be the optimal primal and dual basic solutions found, respectively. Clearly, z̃ ≥ z. We are interested in easily checkable conditions that guarantee z̃ = z, thus proving that x̃∗ (with x̃∗ij := 0 for all (i, j) ∈ A \ Ã) is an optimal basic solution to (10), and hence that its value z̃ is a valid lower bound on v(ATSP). To this end we compute the LP reduced costs associated with ũ∗, namely

c̄ij := cij − Σ_{h=1}^{m} M^h_ij ũ∗h   for (i, j) ∈ A,

and check whether c̄ij ≥ 0 for all (i, j) ∈ A. If this is indeed the case, then z̃ = z and we are done. Otherwise, the current core set Ã is enlarged by adding (some of) the arcs with negative reduced cost, and the whole procedure is iterated. This iterative solution of (11), followed by the possible updating of Ã, is generally referred to as the pricing loop.

According to common computational experience, the first iterations of the pricing loop tend to add a very large number of new columns to the LP even when z̃ = z, due to the typically high primal degeneracy of (11). A different technique, called AP pricing in [17], exploits the fact that no feasible solution to (10) can select the arcs with negative reduced cost in an arbitrary way, as the degree equations, among other constraints, have to be fulfilled. The technique is related to the so-called Lagrangian pricing introduced independently by Löbel [23] as a powerful method for solving large-scale vehicle scheduling problems.

Let us consider the dual solution ũ∗ to (11) as a vector of Lagrangian multipliers, and the LP reduced costs c̄ij as the corresponding Lagrangian costs. In this view, standard pricing consists of solving the following trivial relaxation of (10):

LB1 := min_{x≥0} [cx + ũ∗(b − Mx)] = ũ∗b + min_{x≥0} c̄x (12)

where ũ∗b = z̃ by LP duality. Therefore one has z̃ + min_{x≥0} c̄x ≤ z ≤ z̃, from which z̃ = z in case min_{x≥0} c̄x = 0, i.e., c̄ij ≥ 0 for all i, j.
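In matrix-free form, the reduced-cost check of the standard pricing loop amounts to the following (a dense-list sketch with our own names; a negative entry is what triggers enlarging Ã):

```python
def reduced_costs(c, M, u):
    """c_bar[a] = c[a] - sum_h M[h][a] * u[h] for every column a."""
    m, ncols = len(M), len(c)
    return [c[a] - sum(M[h][a] * u[h] for h in range(m)) for a in range(ncols)]

# two columns, one constraint row: c = (3, 1), M = ((1, 1),), u = (2,)
cbar = reduced_costs([3.0, 1.0], [[1.0, 1.0]], [2.0])
assert cbar == [1.0, -1.0]   # the second column prices out negative
```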
The strengthening then consists in replacing condition x ≥ 0 in (12) by

x ∈ F(AP) := {x ∈ {0, 1}^A : x(i, V) = x(V, i) = 1 for all i ∈ V}.

In this way one computes an improved lower bound on z, namely

LB2 := ũ∗b + min_{x∈F(AP)} c̄x = z̃ + ∆AP

where ∆AP := min_{x∈F(AP)} c̄x is computed efficiently by solving the AP on the Lagrangian costs c̄ij. As before, z̃ + ∆AP ≤ z ≤ z̃; hence ∆AP = 0 implies z̃ = z. When ∆AP < 0, instead, one has to iterate the procedure, after having added to the core set Ã the arcs in A \ Ã that are selected in the optimal AP solution found.

AP pricing has two main advantages over the standard scheme, namely: (1) an improved check for proving z̃ = z; and (2) a better rule for selecting the arcs to be added to the core arc set. Moreover, LB2 always gives a lower bound on z (and hence on v(ATSP)), which can in some cases succeed in fathoming the current branching node even when ∆AP < 0. Finally, the nonnegative vector of AP reduced costs available after solving min_{x∈F(AP)} c̄x can be used for fixing xij = 0 for all (i, j) ∈ A such that LB2 plus the AP reduced cost of arc (i, j) is at least as large as the value of the best known ATSP solution.

2.6 The Overall Algorithm

The algorithm is a lowest-first branch-and-cut procedure. At each node of the branching tree, the LP relaxation is initialized by taking all the constraints present in the last LP solved at the father node (for the root node, only the degree equations are taken). As to the variables, one retrieves from a scratch file the optimal basis associated with the last LP solved at the father node, and initializes the core variable set Ã by taking all the arcs belonging to this basis (for the root node, Ã contains the 2n − 1 variables in the optimal AP basis found by solving the AP on the original costs cij). In addition, Ã contains all the arcs of the best known ATSP solution.
Starting with the above advanced basis, one iteratively solves the current LP, applies the AP pricing (and variable fixing) procedure described in Section 2.5, and repeats if needed. Observe that the pricing/fixing procedure is applied after each LP solution.

On exit from the pricing loop (case ∆AP = 0), the cuts whose associated slack exceeds 0.01 are removed from the current LP (unless the number of these cuts is less than 10), and the LP basis is updated accordingly. Moreover, separation algorithms are applied to find facet-defining ATSP inequalities, if any, that cut off the current LP optimal solution, say x∗. As a heuristic rule, the violated cuts with degree of violation less than 0.1 (0.01 for SECs) are skipped, and the separation phase is interrupted as soon as 20 + ⌊n/5⌋ violated cuts are found. One first checks for violation the cuts generated during the processing of the current or previous nodes, all of which are stored in a global data structure called the constraint pool. If some of these cuts are indeed violated by x∗, the separation phase ends. Otherwise, the Padberg–Rinaldi [26] MINCUT algorithm for SEC separation is applied, and the separation phase is interrupted if violated SECs are found. When this is not the case, one shrinks the 1-arc paths of x∗ (as described in Section 2.4), and applies the separation algorithms for comb (Section 2.1), Dk+ and Dk− (Section 2.2), and odd CAT (Section 2.3) inequalities. In order to avoid finding equivalent inequalities, D3− inequalities (which are the same as D3+ inequalities) are never separated, and odd CAT separation is skipped when a violated comb is found (as the classes of comb and odd CAT inequalities overlap).

When violated cuts are found, one adds them to the current LP, and repeats. When separation fails and x∗ is integer, the current best ATSP solution is updated, and a backtracking step occurs.
If x∗ is fractional, instead, the current LP basis is saved to file, and one branches on the variable xij with 0 < x∗ij < 1 that maximizes the score σ(i, j) := cij · min{x∗ij, 1 − x∗ij}. As a heuristic rule, a large priority is given to the variables with 0.4 ≤ x∗ij ≤ 0.6 (if any), so as to produce a significant change in both descendant nodes. As a heuristic tailing-off rule, one also branches when the current x∗ is fractional and the lower bound has not increased in the last 5 (10 for the root node) LP/pricing/separation iterations.

In addition, a simple heuristic algorithm is used to hopefully update the current best ATSP solution. The algorithm is based on the information associated with the current LP, and consists of a complete enumeration of the Hamiltonian circuits in the support graph of x∗, defined as G∗ := (V, {(i, j) ∈ A : x∗ij > 0}). To this end, Martello's [24] implicit enumeration algorithm HC is used, with at most 100 + 10n backtracking steps allowed. As G∗ is typically very sparse, this upper bound on the number of backtracking steps is seldom attained, and HC almost always succeeds in completing the enumeration within a short computing time. The heuristic is applied whenever SEC separation fails, since in this case G∗ is guaranteed to be strongly connected.

3 Using an STSP Code for ATSP Instances

It is easy to see that a code for the ATSP can be invoked to solve symmetric TSP instances. In fact, the reverse also holds, by means of one of the following two transformations:

– the 3-node transformation proposed by Karp [22].
A complete undirected graph with 3n vertices is obtained from the original complete directed one by adding two copies, n + i and 2n + i, of each vertex i ∈ V, and by (i) setting to 0 the cost of the edges (i, n + i) and (n + i, 2n + i) for each i ∈ V, (ii) setting to cij the cost of edge (2n + i, j) for all i, j ∈ V, and (iii) setting to +∞ the costs of all the remaining edges;

– the 2-node transformation proposed by Jonker and Volgenant [20] (see also Jünger, Reinelt and Rinaldi [21]). A complete undirected graph with 2n vertices is obtained from the original complete directed one by adding a copy, n + i, of each vertex i ∈ V, and by (i) setting to 0 the cost of the edge (i, n + i) for each i ∈ V, (ii) setting to cij + M the cost of edge (n + i, j) for all i, j ∈ V, where M is a sufficiently large positive value, and (iii) setting to +∞ the costs of all the remaining edges. The transformation value nM has to be subtracted from the STSP optimal cost.

The most effective branch-and-cut algorithm for the STSP is currently the one by Applegate, Bixby, Chvátal and Cook [2], and the corresponding code, Concorde [1], is publicly available. As already done in [15], we used this code to test the effectiveness of the approach based on the ATSP-to-STSP transformation. The code has been run with default parameters, setting only the random-seed parameter ("-s 123") so as to be able to reproduce each run. The results in [15] showed that the 2-node transformation is in general more effective than the 3-node one, and preliminary experiments on the new real-world instances we considered confirmed this indication. Thus, the computational results reported in the next section are given only for the 2-node transformation, for which the parameter M has been set to 100,000.
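The 2-node transformation is easy to state in code. The sketch below builds the symmetric cost matrix and checks the cost relation on a toy instance by brute force (function names and the toy costs are ours; the enumeration is only meant for tiny instances):

```python
from itertools import permutations

INF = float("inf")

def two_node_transform(c, M):
    """Build the symmetric 2n x 2n cost matrix of the 2-node
    transformation from an asymmetric n x n cost matrix c."""
    n = len(c)
    d = [[INF] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        d[i][n + i] = d[n + i][i] = 0              # zero-cost edge (i, n+i)
        for j in range(n):
            if j != i:
                d[n + i][j] = d[j][n + i] = c[i][j] + M
    return d

def brute_tsp(d):
    """Cost of a cheapest Hamiltonian cycle (symmetric, brute force)."""
    N = len(d)
    return min(sum(d[t[k]][t[(k + 1) % N]] for k in range(N))
               for t in ((0,) + p for p in permutations(range(1, N))))

def brute_atsp(c):
    """Cost of a cheapest directed tour (brute force)."""
    n = len(c)
    return min(sum(c[t[k]][t[(k + 1) % n]] for k in range(n))
               for t in ((0,) + p for p in permutations(range(1, n))))

# toy asymmetric instance (costs are made up for illustration)
c = [[INF, 2, 9],
     [7, INF, 4],
     [3, 8, INF]]
M = 100000
assert brute_tsp(two_node_transform(c, M)) - 3 * M == brute_atsp(c)
```

The check reflects the statement in the text: every tour of the transformed graph uses the n zero-cost edges plus n edges of cost cij + M, so subtracting nM from the STSP optimum recovers the ATSP optimum.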
Although a fine tuning of the Concorde parameters is beyond the scope of this paper, we also analyzed the sensitivity of the code to the "chunk size" parameter, which controls the implementation of the local cuts paradigm used for separation (see Applegate, Bixby, Chvátal and Cook [2] and Naddef [25] for details). In particular, setting this size to 0 ("-C 0") disables the generation of the "local" cuts and lets Concorde behave as a pure STSP code, whereas option "-C 16" (the default) allows for the generation of additional cuts based on the "instance-specific" enumeration of partial solutions over vertex subsets of size up to 16. (Other values of the chunk size, namely 8, 24 and 32, were considered in preliminary tests, but the results suggested that the default size is, in general, the most effective one.) In our computational experiments we considered both versions of Concorde, with and without the generation of "local" cuts, so as to investigate the capability of the "local" cuts method to automatically generate cuts that play an important role for the ATSP instance to be solved. Thus, the computational results of the next section are given for both the "-C 0" and the "-C 16" settings.

4 Computational Experiments

The two branch-and-cut codes described in the previous sections have been computationally tested on a set of pretty large² real-world ATSP instances (see Table 1), namely:

– 5 instances provided by Balas [8];
– the 13 ATSP instances with n ≥ 100 collected in the TSPLIB [28];
– 1 instance (ftv180) introduced in [15];
– 16 instances by Cirasella, Johnson, McGeoch and Zhang [12].

All instances have integer nonnegative costs, and are available, on request, from the authors. (For a detailed description of the instances the reader is referred to [12] and [15].)
For each instance we report in Table 1 the name (Name), the size (n), the optimal (or best known) solution value (OptVal), and the source of the instance (source). All tests have been executed on a Digital Alpha 533 MHz with 512 MB of RAM under the Unix operating system, with Cplex 6.5.3 as LP solver. In Table 2 we report the percentage gaps corresponding to the lower bound at the root node (Root), the final lower bound (fLB), and the final upper bound (fUB), all computed with respect to the optimal (or best known) solution value. Moreover, the number of nodes of the search tree (Nodes) and the computing time in seconds (Time) are given.

² Instances with fewer than 100 vertices have not been considered for space reasons.

Code comparison. Table 2 reports a comparison among the two versions of Concorde, corresponding to chunk sizes of 0 and 16, respectively, and the branch-and-cut code implementing the algorithm by Fischetti and Toth described in Section 2. This latter code is enhanced, for the branching scheme, by

Table 1. Real-world ATSP instances.
Name       n    OptVal  source
balas108   108     152  [8]
balas120   120     286  [8]
balas160   160     397  [8]
balas200   200     403  [8]
ftv100     101    1788  TSPLIB
ftv110     111    1958  TSPLIB
ftv120     121    2166  TSPLIB
ftv130     131    2307  TSPLIB
ftv140     141    2420  TSPLIB
ftv150     151    2611  TSPLIB
ftv160     161    2683  TSPLIB
ftv170     171    2755  TSPLIB
ftv180     181    2918  [15]
kro124p    100   36230  TSPLIB
rbg323     323    1326  TSPLIB
rbg358     358    1163  TSPLIB
rbg403     403    2465  TSPLIB
rbg443     443    2720  TSPLIB

Name        n    OptVal   source
atex8       600    39982  [12]
big702      702    79081  [12]
code198     198     4541  [12]
code253     253   106957  [12]
dc112       112    11109  [12]
dc126       126   123235  [12]
dc134       134     5612  [12]
dc176       176     8587  [12]
dc188       188    10225  [12]
dc563       563    25951  [12]
dc849       849    37476  [12]
dc895       895   107699  [12]
dc932       932   479900  [12]
td100.1     101   268636  [12]
td316.10    317   691502  [12]
td1000.20  1001  1242183  [12]

the "Fractionality Persistency" mechanism proposed by Fischetti, Lodi, Martello and Toth [14]; it is called FT-b&c in the sequel and "FT-b&c + FP" in Table 2. As to the time limit, we imposed 10,000 CPU seconds for all tests.

In its pure STSP version ("-C 0"), the Concorde code obtains a root-node lower bound which is dominated by the FT-b&c one, thus showing the effectiveness of addressing the ATSP in its original (directed) version. Of course, one can expect to improve the performance of FT-b&c by exploiting additional classes of ATSP-specific cuts, e.g., lifted cycle inequalities [10]. As to Concorde, we observe that the use of the "local" cuts leads to a considerable improvement of the root-node lower bound, which becomes generally better than that of FT-b&c. Not surprisingly, this improvement appears more substantial than in the case of pure STSP instances. In our view, this is again an indication of the importance of exploiting the structure of the original asymmetric problem, which results in a very special structure of its STSP counterpart that is not captured adequately by the usual classes of STSP cuts (combs, clique-tree inequalities, etc.).
Both FT-b&c and Concorde (the "-C 16" version) turn out to be quite effective, and only fail to solve to optimality (within the time limit) 3 hard instances of large size. Specifically, the first code is considerably faster than the second one (with the only exception of instance ftv180), though it often requires more branching nodes. We believe this is mainly due to the faster (ATSP-specific) separation and pricing tools used in FT-b&c. The Concorde implementation proved to be very robust for hard instances of large size (the final gap for the three unsolved instances being smaller than that of FT-b&c), as it has been designed and engineered to address very large STSP instances.

Table 2. Comparison of branch-and-cut codes. Time limit of 10,000 seconds.

                 FT-b&c + FP (2-node)        Concorde -s 123 -C 0 (2-node)   Concorde -s 123 -C 16
Name        Root  fLB  fUB  Nodes    Time    Root  fLB  fUB  Nodes    Time    Root  fLB   fUB  Nodes    Time
balas108    2.63    –    –   1023  1269.9    2.63    –    –    423  1416.0    1.97    –     –    267    89.0
balas120    2.10 0.00 1.40   2849 10007.3    1.05    –    –    755  7186.9    1.05    –     –   1339  1276.3
balas160    2.02 0.25 4.03   2165 10007.1    1.26    –    –    739  7848.0    1.26    –     –    737   671.1
balas200    2.48 0.74 5.96   1955 10008.0    0.74    –    –    239  2294.2    1.24    –     –   1495  1712.8
ftv100      0.73    –    –      9     9.5    0.00    –    –      1    12.6    0.39    –     –     21     2.2
ftv110      0.97    –    –     17    17.7    0.05    –    –      3    25.6    0.77    –     –     77     7.4
ftv120      1.62    –    –     69    45.0    0.28    –    –      7    54.4    0.97    –     –    123    13.1
ftv130      0.78    –    –     43    24.0    0.00    –    –      1    16.6    0.35    –     –      7     1.6
ftv140      0.45    –    –      5     7.2    0.00    –    –      3    25.6    0.25    –     –      9     2.1
ftv150      0.73    –    –     13    15.9    0.00    –    –      5    27.0    0.27    –     –     21     2.6
ftv160      0.93    –    –     29    70.0    0.30    –    –      7    55.7    0.67    –     –     17     3.8
ftv170      1.27    –    –     19    36.5    0.40    –    –      3    41.9    0.87    –     –     15     4.1
ftv180      1.58    –    –     91   204.0    0.69    –    –     29   236.2    1.20    –     –    939   366.0
kro124p     0.46    –    –     21    23.9    0.00    –    –      1     9.9    0.04    –     –      3     1.0
rbg323      0.00    –    –      7    34.4    0.00    –    –      3    23.9    0.00    –     –      1     0.4
rbg358      0.00    –    –      3    22.0    0.00    –    –      3    29.3    0.00    –     –      1     0.5
rbg403      0.00    –    –      1    19.7    0.00    –    –      5    49.3    0.00    –     –      1     1.3
rbg443      0.00    –    –      1    20.7    0.00    –    –      3    34.5    0.00    –     –      1     1.4
atex8       1.16 0.95 7.39    595 10080.8    1.09 0.65 7.62    143 10188.0    1.01 0.99 39.97    919 10000.5
big702      0.00    –    –      7    70.4    0.00    –    –      5    67.6    0.00    –     –      1     1.7
code198     0.00    –    –      1    23.3    0.00    –    –      1    29.6    0.00    –     –      1     0.4
code253     0.74    –    –      3    26.3    0.00    –    –      1    69.8    0.00    –     –      1     3.2
dc112       0.02    –    –     49    77.6    0.00    –    –      9    63.3    0.00    –     –     10     2.7
dc126       0.00    –    –     11    26.9    0.00    –    –      1    47.6    0.00    –     –     10     4.4
dc134       0.00    –    –     27    44.7    0.00    –    –      3    25.1    0.00    –     –      7     2.4
dc176       0.01    –    –     27    83.6    0.01    –    –     11   152.4    0.01    –     –      6     7.2
dc188       0.02    –    –     25    91.3    0.00    –    –      7    72.1    0.01    –     –      6    11.2
dc563       0.01    –    –    175  2615.6    0.00    –    –     35   827.7    0.00    –     –     69   343.5
dc849       0.00    –    –     45  1633.4    0.00    –    –     15   713.8    0.00    –     –     45   302.9
dc895       0.03 0.00 0.12    201 10298.1    0.01 0.00 0.07     89 10871.2    0.01 0.00  0.59    298 10008.4
dc932       0.01 0.00 0.03    237 10238.1    0.01 0.00 0.06    115 10457.4    0.01 0.01  0.27    211 10000.0
td100.1     0.00    –    –      1     2.0    0.00    –    –      1     2.4    0.00    –     –      1     0.0
td316.10    0.00    –    –      1    10.7    0.00    –    –      5    29.1    0.00    –     –      1     0.2
td1000.20   0.00    –    –      1    57.0    0.00    –    –      1    56.6    0.00    –     –      1     2.3

5 Conclusions

We considered a set of 35 real-world ATSP instances (with n ≥ 100), and we computationally tested the effectiveness of branch-and-cut approaches designed either for the asymmetric version of the problem (Fischetti and Toth [17], the FT-b&c code) or for its symmetric special case (Applegate, Bixby, Chvátal and Cook [2], the Concorde code).

The fact that the performance of FT-b&c is generally better than that of the very sophisticated Concorde code (and considerably better than that of the pure-STSP Concorde "-C 0") indicates the effectiveness of exploiting the ATSP-specific separation procedures. This suggests that enriching the Concorde arsenal of STSP separation tools with ATSP-specific separation procedures would be the road to go for the effective solution of hard ATSP instances.

Acknowledgments. Work supported by Ministero dell'Istruzione, dell'Università e della Ricerca (M.I.U.R.)
and by Consiglio Nazionale delle Ricerche (C.N.R.), Italy.

References

1. D. Applegate, R.E. Bixby, V. Chvátal, and W. Cook. Concorde - a code for solving traveling salesman problems. 12/15/1999 Release. http://www.keck.caam.rice.edu/concorde.html.
2. D. Applegate, R.E. Bixby, V. Chvátal, and W. Cook. On the solution of traveling salesman problems. Documenta Mathematica, Extra Volume ICM III:645–656, 1998.
3. N. Ascheuer. Hamiltonian Path Problems in the On-line Optimization of Flexible Manufacturing Systems. PhD thesis, Technische Universität Berlin, Germany, 1995.
4. N. Ascheuer, M. Fischetti, and M. Grötschel. A polyhedral study of the asymmetric travelling salesman problem with time windows. Networks, 36:69–79, 2000.
5. N. Ascheuer, M. Fischetti, and M. Grötschel. Solving the asymmetric travelling salesman problem with time windows by branch-and-cut. Mathematical Programming, Ser. A, 90:475–506, 2001.
6. N. Ascheuer, M. Jünger, and G. Reinelt. A branch & cut algorithm for the asymmetric traveling salesman problem with precedence constraints. Computational Optimization and Applications, 17:61–84, 2000.
7. E. Balas. The asymmetric assignment problem and some new facets of the traveling salesman polytope on a directed graph. SIAM Journal on Discrete Mathematics, 2:425–451, 1989.
8. E. Balas. Personal communication, 2000.
9. E. Balas and M. Fischetti. A lifting procedure for the asymmetric traveling salesman polytope and a large new class of facets. Mathematical Programming, Ser. A, 58:325–352, 1993.
10. E. Balas and M. Fischetti. Polyhedral theory for the asymmetric traveling salesman problem. In G. Gutin and A. Punnen, editors, The Traveling Salesman Problem and its Variations, pages 117–168. Kluwer Academic Publishers, 2002.
11. G. Carpaneto, M. Dell’Amico, and P. Toth. Algorithm CDT: a subroutine for the exact solution of large-scale asymmetric traveling salesman problems. ACM Transactions on Mathematical Software, 21:410–415, 1995.
12. J. Cirasella, D.S. Johnson, L.A. McGeoch, and W. Zhang. The asymmetric traveling salesman problem: algorithms, instance generators, and tests. In A.L. Buchsbaum and J. Snoeyink, editors, Proceedings of ALENEX’01, volume 2153 of Lecture Notes in Computer Science, pages 32–59. Springer-Verlag, Heidelberg, 2001.
13. M. Fischetti. Facets of the asymmetric traveling salesman polytope. Mathematics of Operations Research, 16:42–56, 1991.
14. M. Fischetti, A. Lodi, S. Martello, and P. Toth. A polyhedral approach to simplified crew scheduling and vehicle scheduling problems. Management Science, 47:833–850, 2001.
15. M. Fischetti, A. Lodi, and P. Toth. Exact methods for the asymmetric traveling salesman problem. In G. Gutin and A. Punnen, editors, The Traveling Salesman Problem and its Variations, pages 169–205. Kluwer Academic Publishers, 2002.
16. M. Fischetti and P. Toth. An additive bounding procedure for the asymmetric travelling salesman problem. Mathematical Programming, Ser. A, 53:173–197, 1992.
17. M. Fischetti and P. Toth. A polyhedral approach to the asymmetric traveling salesman problem. Management Science, 43:1520–1536, 1997.
18. A.M.H. Gerards and A. Schrijver. Matrices with the Edmonds-Johnson property. Combinatorica, 6:365–379, 1986.
19. M. Grötschel and M.W. Padberg. Polyhedral theory. In E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, editors, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, pages 251–305. Wiley, Chichester, 1985.
20. R. Jonker and T. Volgenant. Transforming asymmetric into symmetric traveling salesman problems. Operations Research Letters, 2:161–163, 1983.
21. M. Jünger, G. Reinelt, and G. Rinaldi. The traveling salesman problem. In M. Ball, T.L. Magnanti, C.L. Monma, and G. Nemhauser, editors, Network Models, volume 7 of Handbooks in Operations Research and Management Science, pages 255–330. North Holland, Amsterdam, 1995.
22. R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, New York, 1972.
23. A. Löbel. Vehicle scheduling in public transit and Lagrangean pricing. Management Science, 44:1637–1649, 1998.
24. S. Martello. An enumerative algorithm for finding Hamiltonian circuits in a directed graph. ACM Transactions on Mathematical Software, 9:131–138, 1983.
25. D. Naddef. Polyhedral theory and branch-and-cut algorithms for the symmetric TSP. In G. Gutin and A. Punnen, editors, The Traveling Salesman Problem and its Variations, pages 29–116. Kluwer Academic Publishers, 2002.
26. M.W. Padberg and G. Rinaldi. An efficient algorithm for the minimum capacity cut problem. Mathematical Programming, Ser. A, 47:19–36, 1990.
27. M.W. Padberg and G. Rinaldi. Facet identification for the symmetric traveling salesman polytope. Mathematical Programming, Ser. A, 47:219–257, 1990.
28. G. Reinelt. TSPLIB - a traveling salesman problem library. ORSA Journal on Computing, 3:376–384, 1991. http://www.crpc.rice.edu/softlib/tsplib/.

The Bundle Method for Hard Combinatorial Optimization Problems

Gerald Gruber (1) and Franz Rendl (2)

(1) Carinthia Tech Institute, School of Geoinformation, A-9524 Villach, Austria, g.gruber@cti.ac.at
(2) University of Klagenfurt, Department of Mathematics, A-9020 Klagenfurt, Austria, franz.rendl@uni-klu.ac.at

Abstract. Solving the well-known relaxations of large-scale combinatorial optimization problems directly is out of reach. We use Lagrangian relaxation and solve it with the bundle method. The cutting plane model that approximates the original problem at each iteration can be kept moderately small, so we can solve it very quickly. We report successful numerical results for approximating maximum cut.

1 Introduction

Many problems in combinatorial optimization are very hard to solve.
They have various tractable relaxations, e.g., linear programming relaxations based on the study of the convex hull of the integer solutions. Unfortunately, for most problems no practically efficient method is known to optimize exactly over these relaxations, and therefore usually a partial description by linear inequalities is used. Another valid refinement is semidefinite programming, where striking advances were achieved in the nineties ([6,23]). Nevertheless, these approaches share one drawback: for large-scale problems it still remains hard to solve the resulting relaxations directly (high computational effort, large memory requirements, etc.). Recently, great efforts have been made to tackle this problem and several papers have been published. For instance, Benson, Ye and Zhang [4] present a dual-scaling interior point algorithm and show how it exploits the structure and sparsity of some large-scale problems. Helmberg and Rendl propose the spectral bundle method [12], which also exploits the sparsity structure of the problems. In fact, they consider eigenvalue optimization problems arising from semidefinite programs with constant trace and present a method which yields reasonable bounds on the optimal solution of large problems. Barahona and Anbil propose the volume algorithm [2] for dealing with Lagrangian relaxations of combinatorial optimization problems. The volume algorithm is an extension of the subgradient method, an iterative technique in which the iterates are updated using a current subgradient and a carefully chosen step size. In [2] very successful experiments with linear programs coming from combinatorial problems are presented.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 78–88, 2003.
© Springer-Verlag Berlin Heidelberg 2003

We follow the idea of using Lagrangian relaxation, but solve it with the bundle method instead of simple subgradient methods. Our main contribution is that we allow inequalities as constraints. We show how they can be handled and present promising numerical results. This paper is organized as follows: in section 2 the problem is stated in general form; there we also show how the maximum cut problem fits into this general setting. In section 3 we review Lagrangian relaxation, the bundle method and the bundle algorithm. In section 4 we present first numerical experiments for the maximum cut problem on various randomly generated graphs. We conclude with final remarks.

2 The Problem in General Form

We consider the problem of maximizing a concave function c : IR^n → IR subject to finitely many linear constraints. In particular, we deal with a problem of the following form. Given c : IR^n → IR, A ∈ IR^{m×n}, b ∈ IR^m and a convex set X of “nice structure”, solve

    (P)   max c(x)
          s.t. x ∈ X,
               Ax ≤ b.

We do not specify the set X exactly; it could be a simple polyhedron or some other simple structure. By “nice structure” we mean that optimizing over x ∈ X can be done efficiently, while maintaining the additional inequality constraints Ax ≤ b explicitly may be computationally prohibitive. Solving (P) directly is out of reach. The question is: how can we deal with (P) better, and how can we select important inequalities of Ax ≤ b to obtain as good an approximation of the original problem as possible? In this paper we present a method which leads to such an approximation in reasonable time.

2.1 The Maximum Cut Problem

Many combinatorial problems fit exactly into this framework. To give a specific example, we consider the maximum cut problem (max-cut). Given an edge-weighted undirected graph G = (V, E) with vertex set V = {1, . . .
, n} and edge set E ⊆ {(ij) : i, j ∈ V, i < j}, the max-cut problem is the problem of finding a partition (S, V \ S) that maximizes the sum of the weights of the edges connecting S with V \ S. The max-cut problem is known to be NP-hard, see Karp [16,17]. There are several classes of graphs for which the maximum cut problem can be solved in polynomial time, for instance graphs with no long odd cycles (see [9]), planar graphs (see [10,26]), or, more generally, graphs with no K5-minor (see [1]). For planar graphs, or more generally graphs not contractible to K5, it is known that the cut polytope coincides with the metric polytope, hence this relaxation yields the optimal solution of max-cut. Several relaxations have been studied and proposed in the literature (see e.g. [1,3,15] for LP-based relaxations, [25,27] for eigenvalue-based relaxations and [6,8] for semidefinite relaxations). More recently, the following well-known semidefinite relaxation of max-cut was introduced, see e.g. [6,8,24,5]:

    max trace(LX)   s.t.  diag(X) = e,  X ⪰ 0.                                  (1)

The quality of this bound was analyzed by Goemans and Williamson [8]. Assuming nonnegative weights, they have shown that the ratio between the optimal value of max-cut and the semidefinite upper bound is at least .878. For completeness we mention that it is NP-complete to approximate the maximum cut problem within a factor better than .9412, see [14]. This model can be strengthened by imposing in addition that X is contained in the metric polytope (X ∈ MET). By definition,

    X ∈ MET  :⇔   Xii = 1                  for i = 1, . . . , n,
                  Xij − Xik − Xjk ≥ −1     for 1 ≤ i, j, k ≤ n,                  (2)
                  Xij + Xik + Xjk ≥ −1     for 1 ≤ i, j, k ≤ n.

Poljak and Rendl [28] were the first to propose strengthening the semidefinite relaxation by including the triangle inequalities. Computational results are presented for instance in [13,29]. The semidefinite model is of the form described above.
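In practice, the triangle inequalities in (2) are separated by complete enumeration over all vertex triples. A minimal Python sketch of such a separation routine (the function name and the encoding of the inequalities are ours, not from the paper):

```python
import itertools
import numpy as np

def violated_triangles(X, tol=1e-6):
    """Enumerate the O(n^3) triangle inequalities of the metric polytope (2)
    and return those violated by the symmetric matrix X, most violated first.
    Each result is (violation, (i, j, k), coeffs), where coeffs are the signs
    of (X[i,j], X[i,k], X[j,k]) in the inequality  sum >= -1."""
    n = X.shape[0]
    cuts = []
    for i, j, k in itertools.combinations(range(n), 3):
        # The four sign patterns of the metric polytope's triangle inequalities.
        for coeffs in ((1, -1, -1), (-1, 1, -1), (-1, -1, 1), (1, 1, 1)):
            lhs = coeffs[0] * X[i, j] + coeffs[1] * X[i, k] + coeffs[2] * X[j, k]
            if lhs < -1.0 - tol:
                cuts.append((-1.0 - lhs, (i, j, k), coeffs))
    cuts.sort(reverse=True)  # largest violation first
    return cuts

# The identity matrix lies in MET, so no triangle inequality is violated.
assert violated_triangles(np.eye(4)) == []
```

In a cutting-plane loop one would add only the most violated inequalities (the head of the returned list), matching the strategy described in section 3.2 below.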
(Ax ≤ b corresponds to X ∈ MET, while the easy subproblem is a semidefinite program.)

3 The Bundle Method

In this section we derive the Lagrangian dual associated with (P), which is our starting point. We use the following corollary, an easy consequence of [30, Theorem 37.3].

Corollary 1. Let X and Y be non-empty closed convex sets in IR^m and IR^n, respectively, and let f be a continuous finite concave-convex function on X × Y. If either X or Y is bounded, one has

    inf_{y∈Y} sup_{x∈X} f(x, y) = sup_{x∈X} inf_{y∈Y} f(x, y).

(In general, for a non-empty product set X × Y and f : X × Y → [−∞, +∞] one has inf_{x∈X} sup_{y∈Y} f(x, y) ≥ sup_{y∈Y} inf_{x∈X} f(x, y), see [30, Lemma 36.1].)

Let γ ∈ IR^m, γ ≥ 0, denote the Lagrangian multiplier for b − Ax ≥ 0. Then

    max {c(x) : x ∈ X, Ax ≤ b} = max_{x∈X} min_{γ≥0} c(x) + γ^T(b − Ax)
                               = min_{γ≥0} max_{x∈X} c(x) + γ^T(b − Ax)
                               ≤ f(γ)   for all γ ≥ 0,

where f(γ) := max_{x∈X} c(x) + γ^T(b − Ax). Now consider the dual problem min_{γ≥0} f(γ). It is still too difficult to solve directly. As an abbreviation for c(x) + γ^T(b − Ax) we write L(γ, x). Given γ, we set

    L(γ, x(γ)) := f(γ) = max_{x∈X} L(γ, x).

Fact 1. Suppose that L(γ, x) is affine linear in γ and X is non-empty. Then f(γ) := max_{x∈X} L(γ, x) is convex.

Proof. Let α ∈ [0, 1], choose γ1, γ2 ≥ 0 and set γ3 := αγ1 + (1 − α)γ2. Since L is affine linear in γ,

    f(γ3) = L(γ3, x(γ3)) = αL(γ1, x(γ3)) + (1 − α)L(γ2, x(γ3))
          ≤ αL(γ1, x(γ1)) + (1 − α)L(γ2, x(γ2)) = αf(γ1) + (1 − α)f(γ2),

because x(γi) maximizes L(γi, x) for i = 1, 2. ⊓⊔

Moreover, f(γ) is non-smooth, and therefore classical methods from smooth analysis for minimizing f(γ) will fail. A suitable method for handling optimization problems with non-differentiable convex cost functions is the bundle concept, see e.g. [31,19,21]. Bundle methods were first proposed by Lemaréchal [21]. The method has developed over time based on the papers of Kiwiel [18] and Lemaréchal [22].
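For intuition, the dual function f(γ) = max_{x∈X} c(x) + γ^T(b − Ax) can be evaluated pointwise whenever the "nice" subproblem over X is solvable, and b − Ax(γ) is then a subgradient of f at γ. A toy Python sketch with X a small finite set; all data and names here are illustrative, not taken from the paper:

```python
import numpy as np

def eval_f(gamma, c_vals, A, b, X_points):
    """Evaluate f(gamma) = max_{x in X} c(x) + gamma^T (b - A x) over a
    finite candidate set X_points, returning the value and a subgradient
    b - A x(gamma) at a maximizer x(gamma)."""
    best_val, best_g = -np.inf, None
    for cx, x in zip(c_vals, X_points):
        g = b - A @ x                 # subgradient contributed by this x
        val = cx + gamma @ g
        if val > best_val:
            best_val, best_g = val, g
    return best_val, best_g

# Toy data: X = {0,1}^2, c(x) = x1 + x2, one coupling constraint x1 + x2 <= 1.
X_points = [np.array(p, float) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
c_vals = [x.sum() for x in X_points]
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
val, g = eval_f(np.array([0.0]), c_vals, A, b, X_points)
# At gamma = 0: f(0) = max c(x) = 2, attained at (1,1), subgradient b - Ax = -1.
```

Here f(1) = 1, which equals the optimal value of the constrained toy problem, illustrating that minimizing f(γ) over γ ≥ 0 recovers a tight bound in this example.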
We follow the approach of Helmberg, Kiwiel and Rendl [11] and present the details in the following subsection.

3.1 Minimizing f(γ)

Given γ0, we compute f(γ0) = L(γ0, x0). Clearly, L(γ, x0) ≤ max_{x∈X} L(γ, x) = f(γ). Hence, for given γ ≥ 0, L(γ, x0) is a minorant of f(γ) which approximates f(γ) near γ0. Now assume that f(γ) has been evaluated at γ0, . . . , γk. Let

    f̂_k(γ) = max {L(γ, x_j) : 0 ≤ j ≤ k}.

Note that f̂_k(γ) need not be bounded from below. To ensure boundedness from below we augment f̂_k(γ) by a quadratic regularization term, resulting in

    f_k(γ) = f̂_k(γ) + (1/2t) ||γ − γ̄||²,   t > 0 fixed.

Here γ̄ denotes the current best approximation of the minimizer of f(γ). (f_k(γ) attains its minimum, since f̂_k(γ) decreases at most linearly while ||γ − γ̄||² increases quadratically.) The fixed parameter t controls the step size: large steps are penalized. The choice of t turns out to be very critical; capable estimates for t are given in [20]. We next show how we use Lagrangian methods for solving the subproblem min_{γ≥0} f_k(γ). Let X_k = (x0, . . . , xk), c(X_k) = (c(x0), . . . , c(xk)), and let γ_{k+1} ≥ 0 be such that f_k(γ_{k+1}) = min_{γ≥0} f_k(γ). Hence, introducing a multiplier η ≥ 0 for the constraint γ ≥ 0,

    f_k(γ_{k+1}) = min_{γ≥0} max_{0≤j≤k} L(γ, x_j) + (1/2t) ||γ − γ̄||²
                 = min_γ max_{0≤j≤k} max_{η≥0} L(γ, x_j) + (1/2t) ||γ − γ̄||² − γ^T η
                 = max_{η≥0} max_{λ≥0, Σ_j λ_j = 1} min_γ Σ_j λ_j L(γ, x_j) + (1/2t) ||γ − γ̄||² − γ^T η.

Now observe that the inner minimization is unconstrained, so ∂/∂γ(·) = 0 is necessary and sufficient for optimality:

    ∂/∂γ(·) = 0  ⇔  Σ_{j=0}^{k} λ_j (b − Ax_j) + (1/t)(γ − γ̄) − η = 0,

where we abbreviate g_j := b − Ax_j. From this it easily follows that

    γ = γ̄ + t(η − Gλ),

where G := (g0, . . . , gk) ∈ IR^{m×(k+1)}.
Using this, our problem translates into

    max_{η≥0} max_{λ≥0, Σ_j λ_j = 1}  Σ_j λ_j c(x_j) + ⟨γ̄ + t(η − Gλ), Gλ⟩ + (1/2t) ||t(η − Gλ)||² − ⟨γ̄ + t(η − Gλ), η⟩
  = max_{η≥0} max_{λ≥0, Σ_j λ_j = 1}  Σ_j λ_j c(x_j) + ⟨γ̄, Gλ⟩ − ⟨γ̄, η⟩ − (t/2) ⟨η − Gλ, η − Gλ⟩
  = max_{η≥0} max_{λ≥0, Σ_j λ_j = 1}  −(t/2) ⟨η − Gλ, η − Gλ⟩ − ⟨γ̄, η⟩ − ⟨λ, β⟩,

where β := −c(X_k)^T − G^T γ̄. Next we show how to solve this problem efficiently. First we fix η ≥ 0 and maximize over λ ≥ 0; then we fix the optimal λ and maximize over η. This process is iterated several times. Suppose η̄ ≥ 0 is fixed; then the inner maximization is

    λ̄ := argmax_{λ≥0, Σ_j λ_j = 1}  −(t/2) λ^T G^T G λ + λ^T (−β + t G^T η̄)   (+ const).

In the second step we keep λ̄ fixed and have to solve

    η̄ := argmax_{η≥0}  −(t/2) η^T η + t λ̄^T G^T η − γ̄^T η   (+ const).

Consider the problem max_{u≥0} h(u) with h(u) := −(t/2)u² + au, a ∈ IR. Obviously

    argmax_{u≥0} h(u) = a/t  if a ≥ 0,  and 0 otherwise.

Now it is easy to see that

    η̄_i = (1/t)(−γ_{k,i} + t(Gλ̄)_i)  if −γ_{k,i} + t(Gλ̄)_i ≥ 0,  and η̄_i = 0 otherwise.

These “zig zag” steps yield a pair (η̄, λ̄), which we use to compute the step direction d := t(η̄ − Gλ̄). Hence

    (γ_k + d)_i = 0  if −γ_{k,i} + t(Gλ̄)_i ≥ 0,  and (γ_k + d)_i = γ_{k,i} − t(Gλ̄)_i otherwise.

We use γ_{k+1} := γ_k + d as the new trial point.

3.2 The Algorithm

In this section we discuss the essential ingredients of our implementation. Our initial relaxation is the original (P) without the inequality constraints. Its optimal solution gives us the first separation point x_s, which in general violates a bunch of the inequalities Ax ≤ b. We choose only the most violated ones, and for those we perform the following steps:

(i) construct the model in the k-th step:  f_k(γ) := max_{0≤i≤k} L(γ, x_i) + (1/2t) ||γ − γ̄||²;
(ii) compute γ_{k+1}, the minimizer of f_k(γ);
(iii) compute x_{k+1} such that f(γ_{k+1}) = L(γ_{k+1}, x_{k+1});
(iv) compute the current best approximation of the minimizer of f(γ):  γ̄ := γ_{k+1} if γ_{k+1} is better than γ̄, and γ̄ is kept unchanged otherwise.

The minimizer in step (ii) is computed using the “zig zag” idea proposed above. The optimal pair (η̄, λ̄) gives us some information, namely:

(i) λ̄_j ≈ 0 tells us that the corresponding subgradient is inactive;
(ii) η̄_i ≈ 0 tells us that the corresponding inequality in the current set of hard constraints is inactive.

In other words, we can control the bundle size and we are able to keep only the important inequalities from Ax ≤ b. After the k-th pass through the steps above we update x_s to a certain convex combination of x_s and x_{k+1}; our next separation point is this x_s. Convergence follows from the traditional approach and therefore the proof is not included in this paper.

4 Computational Results

In section 2.1 we introduced max-cut; in this section we report computational experience with this NP-hard problem. As initial relaxation we use the semidefinite relaxation introduced in section 2.1, strengthened by adding triangle inequalities from (2). Our numerical experiments were carried out on several randomly generated graphs of various sizes and fixed density of 50%. Our test data are available at http://www-sci.uni-klu.ac.at/math-or/home/index.htm. All our codes are implemented in Matlab using some interfaces in C. All computations were done on a Pentium II 400 MHz computer. The primary objective of this section is to compare the bundle method with the interior point method. The computational results on our problem sets are reported in Table 1 and Table 2: Table 1 corresponds to the interior point approach and Table 2 to the bundle approach. Column n gives the number of vertices. The column labeled ib stands for “initial bound” and contains the optimal objective value from (1). The column headed ub stands for “upper bound” and shows the bound on the optimal value for max-cut achieved after some stopping criterion.
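The closed-form η-step and trial-point update of section 3.1 can be sketched in a few lines. The following Python illustration uses our own naming; the λ-step, a small quadratic program over the simplex, is omitted here and its result λ̄ is taken as input:

```python
import numpy as np

def eta_step(gamma_k, G, lam, t):
    """Coordinate-wise closed-form maximization over eta >= 0, derived from
    max_{u>=0} -(t/2)u^2 + a u  =  a/t if a >= 0, else 0.
    Here eta_i = max(0, (G lam)_i - gamma_{k,i}/t)."""
    return np.maximum(0.0, G @ lam - gamma_k / t)

def next_trial_point(gamma_k, G, lam, t):
    """gamma_{k+1} = gamma_k + t (eta - G lam); with the closed-form eta this
    collapses to the projected step max(0, gamma_k - t * G lam)."""
    eta = eta_step(gamma_k, G, lam, t)
    return gamma_k + t * (eta - G @ lam)

# Tiny example: m = 2 multipliers, bundle of size 1, t = 1.
gamma = np.array([1.0, 1.0])
G = np.array([[2.0], [0.1]])       # columns are subgradients g_j = b - A x_j
lam = np.array([1.0])
new_gamma = next_trial_point(gamma, G, lam, 1.0)
# new trial point: coordinate 0 is clipped to 0, coordinate 1 becomes 0.9
```

The collapse to a projected step is exactly the case distinction for (γ_k + d)_i given in section 3.1: coordinates where η̄_i is active land at 0, the others take the plain step γ_{k,i} − t(Gλ̄)_i.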
The computation time using interior point methods increases drastically with the number of added inequalities, and while solving the extended model a large part of them become inactive and can be dropped from the model. Therefore we stopped the interior point code after twelve rounds of adding a limited number of violated inequalities. The bundle code was stopped after four rounds of adding inequalities. Column bc shows the best cut which we found using a heuristic method. Column gap gives the relative error of the upper bound ub in % with respect to the values given in column bc. Computation times in minutes are given in column CPU-time. The numbers given in acttri represent the number of active inequality constraints of the final relaxation. Column viol contains the average violation of the triangle inequalities by the current primal solution. The numbers in Table 1 and Table 2 indicate not only the limitations of pure interior point methods but also the potential of the bundle method. Problems of moderate size can be solved by both approaches, and the qualitative behaviour is quite the same. On the other hand, it is self-evident that large-scale problems should not be tackled by pure interior point methods: the small improvement of the bound is bought very dearly. One important benefit of the bundle approach is that the cost of solving the basic relaxation in each iteration does not depend on the number of inequalities; there is only a small change in the objective function. We point out this feature in Table 3. Again we compare our interior point method with our bundle method. The graphs for this experiment are those from above with 100 and 200 nodes, respectively. Column tri denotes the number of triangles from MET in the current model. In Table 3 the numbers given in acttri represent the number of active inequality constraints at the optimum of the current model.
The columns assigned to the interior point results show how the computation time increases as more and more triangle inequalities are included. The effect is far less pronounced if we use our bundle code.

Table 1. Bounds on max-cut on random graphs using interior point methods, edge density = 50%

  n      ib          ub          bc/gap         CPU-time[min]   acttri   viol
  100      8299.8      7436.1     6983/6.1         8.1            499    0.17
  200     26904.2     24941.4    22675/9.1        56.1           1066    0.13
  300     47661.6     44696.0    38648/13.5      116.5           1389    0.13
  400     77056.1     73574.7    62714/14.8      213.1           1618    0.12
  500    104518.7    100147.7    83491/16.6      337.7           1822    0.12

Table 2. Bounds on max-cut on random graphs using the bundle method, edge density = 50%

  n      ib          ub          bc/gap         CPU-time[min]   acttri   viol
  100      8299.8      7520.9     6983/7.2         2.4            805    0.036
  200     26904.2     25093.6    22675/9.6        10.1           1551    0.026
  300     47661.6     45081.3    38648/14.3       26.3           1636    0.026
  400     77056.1     73971.3    62714/15.2       55.5           1957    0.024
  500    104518.7    100818.9    83491/17.2      100.2           1946    0.024

Table 3. Comparing the behaviour of the interior point approach and the bundle method on a random weighted graph with 100 nodes and density = 50%

         interior point method               bundle method
  ub        tri/acttri   CPU[min]      ub        tri/acttri   CPU[min]
  8299.8    0/0           0.02         8299.8    0/0          0.02
  7882.7    140/106       0.3          7874.9    500/243      0.5
  7701.7    246/201       0.9          7671.9    743/484      1.1
  7605.3    341/277       1.9          7574.9    984/686      1.7
  7535.1    417/353       3.2          7514.3    928/764      2.3
  7480.5    493/435       5.1          7473.2    1027/903     2.9
  7435.6    575/502       8.0          7449.9    1108/992     3.6
  7407.5    642/580      11.0          7440.2    1267/1129    4.3
  7387.4    720/666      15.6          7434.1    1322/1171    5.0
  7374.5    806/745      21.2          7425.6    1393/1259    5.7
  7366.9    885/827      28.2          7420.3    1387/1271    6.4
  7362.4    967/908      37.4          7415.1    1453/1335    7.2
  7360.2    1048/993     47.8          7412.4    1501/1377    7.9
  7359.6    1091/1067    59.3          7410.9    1544/1405    8.7
  7359.6    1118/1105    75.5          7409.5    1561/1415    9.5

Table 4.
Comparing the behaviour of the interior point approach and the bundle method on a random weighted graph with 200 nodes and density = 50%

         interior point method                bundle method
  ub         tri/acttri   CPU[min]      ub         tri/acttri   CPU[min]
  26904.2    0/0            0.1         26904.2    0/0           0.1
  25982.9    280/193        1.9         26070.7    1000/409      2.2
  25521.7    473/380        5.0         25510.3    1409/824      4.8
  25297.6    660/556       10.9         25267.6    1824/1295     7.4
  25145.9    836/744       21.3         25126.3    1640/1404    10.1
  25036.6    1024/904      36.6         25040.2    1885/1670    12.8
  24948.4    1184/1068     57.1         24999.1    2173/1932    15.6
  24893.4    1348/1243     86.2         24969.9    2135/1979    18.4
  24857.7    1523/1411    126.0         24946.0    2178/2013    21.3
  24834.1    1691/1586    178.2         24932.0    2195/2073    24.2
  24821.7    1866/1789    248.0         24924.0    2304/2173    27.0
  ...
  24815.7    2069/2004    330.6         24820.6    2907/2669    44.7

5 Concluding Comments

Bundle methods are an effective solution approach for hard combinatorial optimization problems. In this paper we have presented an algorithm for approximating such problems. The most important feature is that the set of subgradients (the bundle), which is updated at each iteration, can be kept moderately small. Therefore this approach allows us to solve large-scale problems very quickly. In more detail, we concentrated on max-cut and performed our numerical experiments on this problem. Our computational tests give rise to the following conclusions. Compared with semidefinite programming via interior point methods, we obtained acceptable bounds in moderate time, even for large-scale problems. The algorithm is easy to implement, but has slow convergence.

References

1. F. Barahona. The max-cut problem on graphs not contractible to K5. Operations Research Letters, 2(3):107–111, 1983.
2. F. Barahona and R. Anbil. The volume algorithm: producing primal solutions with a subgradient method. Mathematical Programming, 87(3):385–399, 2000.
3. F. Barahona and A.R. Mahjoub. On the cut polytope. Mathematical Programming, 36:157–173, 1986.
4. S. J.
Benson, Y. Ye, and X. Zhang. Solving large-scale sparse semidefinite programs for combinatorial optimization. SIAM Journal on Optimization, 10(2):443–461 (electronic), 2000.
5. C. Delorme and S. Poljak. Laplacian eigenvalues and the maximum cut problem. Mathematical Programming, 62(3):557–574, 1993.
6. M.X. Goemans. Semidefinite programming in combinatorial optimization. Mathematical Programming, 79(1–3):143–161, 1997.
7. M.X. Goemans and D.P. Williamson. .878-approximation algorithms for MAX CUT and MAX 2SAT. In Proceedings of the Twenty-Sixth Annual ACM Symposium on the Theory of Computing, pages 422–431, Montréal, Québec, Canada, 1994.
8. M.X. Goemans and D.P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995. Preliminary version, see [7].
9. M. Grötschel and G.L. Nemhauser. A polynomial algorithm for the max-cut problem on graphs without long odd cycles. Mathematical Programming, 29:28–40, 1984.
10. F. Hadlock. Finding a maximum cut of a planar graph in polynomial time. SIAM Journal on Computing, 4:221–225, 1975.
11. C. Helmberg, K.C. Kiwiel, and F. Rendl. Incorporating inequality constraints in the spectral bundle method. In R.E. Bixby, E.A. Boyd, and R.Z. Ríos-Mercado, editors, Integer Programming and Combinatorial Optimization, volume 1412 of Lecture Notes in Computer Science, pages 423–435. Springer, 1998.
12. C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10(3):673–696, 2000.
13. C. Helmberg, F. Rendl, R.J. Vanderbei, and H. Wolkowicz. An interior point method for semidefinite programming. SIAM Journal on Optimization, 6(2):342–361, 1996.
14. J. Håstad. Some optimal inapproximability results. In Proceedings of the 29th ACM Symposium on Theory of Computing (STOC), pages 1–10, 1997.
15. M. Jünger, F. Barahona, and G. Reinelt.
Experiments in quadratic 0-1 programming. Mathematical Programming, 44(2):127–137, 1989.
16. R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, New York, 1972.
17. R.M. Karp. On the computational complexity of combinatorial problems. Networks, 5(1):45–68, 1975.
18. K.C. Kiwiel. Methods of Descent for Nondifferentiable Optimization, volume 1133 of Lecture Notes in Mathematics. Springer, Berlin, 1985.
19. K.C. Kiwiel. Survey of bundle methods for nondifferentiable optimization. Math. Appl. Jan. Ser.6, 6:263–282, 1989.
20. K.C. Kiwiel. Proximity control in bundle methods for convex nondifferentiable minimization. Mathematical Programming, 46:105–122, 1990.
21. C. Lemaréchal. Bundle methods in nonsmooth optimization. In C. Lemaréchal and R. Mifflin, editors, Nonsmooth Optimization (Proceedings of an IIASA Workshop, March 28–April 8, 1977), pages 79–102. Pergamon Press, 1978.
22. C. Lemaréchal, A. Nemirovskii, and Yu. Nesterov. New variants of bundle methods. Mathematical Programming, 69:111–147, 1995.
23. L. Lovász. Semidefinite programs and combinatorial optimization. Lecture Notes, 1995.
24. M. Laurent, S. Poljak, and F. Rendl. Connections between semidefinite relaxations of the max-cut and stable set problems. Mathematical Programming, 77(2):225–246, 1997.
25. B. Mohar and S. Poljak. Eigenvalues in combinatorial optimization. In Combinatorial and Graph-Theoretical Problems in Linear Algebra, IMA Vol. 50. Springer-Verlag, 1993.
26. G.I. Orlova and Ya.G. Dorfman. Finding the maximum cut in a graph. Engineering Cybernetics, 10(3):502–506, 1972.
27. S. Poljak and F. Rendl. Node and edge relaxations for the max-cut problem. Computing, 52:123–127, 1994.
28. S. Poljak and F. Rendl. Nonpolyhedral relaxations of graph-bisection problems. SIAM Journal on Optimization, 5(3):467–487, 1995.
29. F. Rendl.
Semidefinite programming and combinatorial optimization. Applied Numerical Mathematics, 29:255–281, 1999.
30. R. Tyrrell Rockafellar. Convex Analysis, volume 28 of Princeton Mathematics Series. Princeton University Press, Princeton, 1970.
31. H. Schramm and J. Zowe. A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM Journal on Optimization, 2:121–152, 1992.

The One-Commodity Pickup-and-Delivery Travelling Salesman Problem ⋆

Hipólito Hernández-Pérez and Juan-José Salazar-González

DEIOC, Faculty of Mathematics, University of La Laguna
Av. Astrofísico Francisco Sánchez, s/n; 38271 La Laguna, Tenerife, Spain
{hhperez,jjsalaza}@ull.es

Abstract. This article deals with a new generalization of the well-known “Travelling Salesman Problem” (TSP) in which cities correspond to customers providing or requiring known amounts of a product, and the vehicle has a given capacity and is located in a special city called the depot. Each customer and the depot must be visited exactly once by the vehicle, which serves the demands while minimizing the total travel distance. It is assumed that the product collected from pickup customers can be delivered to delivery customers. The new problem is called the “one-commodity Pickup-and-Delivery TSP” (1-PDTSP). We introduce a 0-1 Integer Linear Programming model for the 1-PDTSP and describe a simple branch-and-cut procedure for finding an optimal solution. The proposal can be easily adapted to the classical “TSP with Pickup-and-Delivery” (PDTSP). To our knowledge, this is the first work on an exact method to solve the classical PDTSP. Preliminary computational experiments on test-bed PDTSP instances from the literature show the good performance of our proposal.
1 Introduction

This article presents a branch-and-cut algorithm for a routing problem called the one-commodity Pickup-and-Delivery Travelling Salesman Problem (1-PDTSP), which is closely related to the well-known Travelling Salesman Problem (TSP). A novelty of the 1-PDTSP compared to the TSP is that one specified city is considered to be a depot, while the other cities are associated with customers divided into two groups according to the type of required service. Each delivery customer requires a given amount of the product, while each pickup customer provides a given amount of the product. The product collected from a pickup customer can be supplied to a delivery customer, on the assumption that there will be no deterioration of the product. It is assumed that the vehicle has a fixed upper-limit capacity, starting and ending its route at the depot. The travel distance between each pair of locations is known. The 1-PDTSP calls for a minimum-distance route for the vehicle satisfying the customer requirements without ever exceeding the vehicle capacity.

⋆ Work partially supported by “Gobierno de Canarias” (PI2000/116) and by “Ministerio de Ciencia y Tecnología” (TIC2000-1750-C06-02), Spain.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 89–104, 2003. © Springer-Verlag Berlin Heidelberg 2003

This optimization problem finds applications in the transportation of a product from customer to customer when no unit of product has a precise origin and/or destination. This is the case, for example, when a bank must move money between branch offices, some of them providing money and the others requiring it; the main office (i.e., the vehicle depot) provides or collects the remaining money.
Another example occurs when milk must be distributed from farms to factories by a capacitated vehicle, assuming that each factory is only interested in receiving a stipulated demand of milk, not in the providers of this demand. Clearly, the 1-PDTSP is an NP-hard problem in the strong sense, since it coincides with the TSP when the vehicle capacity is large enough. A closely related problem in the literature is the Travelling Salesman Problem with Pickup and Delivery (PDTSP). As in the 1-PDTSP, there are two types of customers, each with a given (unrestricted in sign) demand, and a vehicle with a given capacity stationed at a depot. Travel distances are also given. The main difference is that in the PDTSP the product collected from pickup customers is different from the product supplied to delivery customers. Therefore, the total amount of items collected from pickup customers must be delivered only to the depot, while other, different items go from the depot to the delivery customers. In other words, the positive-demand customers provide items of a first product that must be transported to the depot, while the negative-demand customers require items of a different, second product that must be transported from the depot. Both products must share the vehicle capacity during the route. An application of the PDTSP is the collection of empty bottles from customers for delivery to a warehouse, with full bottles being delivered from the warehouse to the customers. An immediate difference when compared with the 1-PDTSP is that the PDTSP is feasible only when the vehicle capacity is at least the maximum of the total sum of the pickup demands and the total sum of the delivery demands. This condition is not required for the feasibility of the 1-PDTSP, in which the vehicle capacity could even be equal to the biggest customer demand. Mosheiov [15] introduced the PDTSP and proposed applications and heuristic approaches.
Anily and Mosheiov [3] and Gendreau, Laporte and Vigo [9] presented approximation algorithms for the PDTSP. Anily and Hassin [2] introduced the Swapping Problem, the particular case of the 1-PDTSP in which the customer requirements and the vehicle capacity are identical, and presented a polynomial approximation algorithm. Another related problem is the TSP with Backhauls-and-Linehauls, which has the additional constraint that delivery customers must be visited before any pickup customer. In the Dial-a-Ride TSP there is a one-to-one correspondence between pickup customers and delivery customers, and each delivery customer must be visited only after the corresponding pickup customer has been visited. When there is no vehicle capacity restriction, the Dial-a-Ride TSP is also known as the Stacker Crane Problem, and it is a particular case of the TSP with Precedence Constraints. See Savelsbergh and Sol [18] for references and other variants including time windows, several vehicles, etc. The 1-PDTSP is also related to the Capacitated Vehicle Routing Problem (CVRP), where a homogeneous capacitated vehicle fleet located at a depot must collect a product from a set of pickup customers. The CVRP combines structure from the TSP and from the Bin Packing Problem, two well-known and quite different combinatorial problems. See Toth and Vigo [19] for a recent survey on this routing problem. To our knowledge this is the first article introducing and solving the 1-PDTSP. Section 2 shows that even finding a feasible solution of this combinatorial problem is NP-hard in the strong sense, and Section 3 relates the PDTSP to the 1-PDTSP. Section 4 presents 0-1 integer linear programming models for the asymmetric and symmetric cases, and Section 5 describes a branch-and-cut algorithm for finding an optimal solution.
Some preliminary computational results on a test-bed PDTSP instance from Mosheiov [15] are presented in Section 6 to analyze the performance of our proposal in solving both 1-PDTSP and classical PDTSP instances.

2 Computational Complexity

As mentioned before, the 1-PDTSP is an NP-hard optimization problem, since it reduces to the TSP when Q is large enough (e.g., bigger than both the sum of the delivery demands and the sum of the pickup demands). This section shows that even finding a feasible 1-PDTSP solution (not necessarily an optimal one) can be a very complex task. Indeed, let us consider the so-called 3-partitioning problem, defined as follows. Let us consider a positive integer number P and a set I of 3m items. Each item i is associated with a positive integer number p_i such that P/4 ≤ p_i ≤ P/2 and Σ_{i∈I} p_i = mP. Can the set I be partitioned into m disjoint subsets I_1, ..., I_m such that Σ_{i∈I_k} p_i = P for all k ∈ {1, ..., m}? Clearly, this problem coincides with the problem of checking whether there is (or not) a feasible solution of the following 1-PDTSP instance. Consider a pickup customer with demand q_i = p_i for each item i ∈ I, m delivery customers with demand −P, and a vehicle with capacity equal to P. Since we are interested only in a feasible solution, the travel costs are irrelevant. It is known (see Garey and Johnson [8]) that the 3-partitioning problem is NP-hard in the strong sense; hence checking whether a 1-PDTSP instance admits a feasible solution has the same computational complexity.

3 Transforming PDTSP into 1-PDTSP

The PDTSP involves the distribution of two commodities in a very special situation: one location (the depot) is the only source of one product and the only destination of the other product. This property allows us to solve the PDTSP by using an algorithm for the 1-PDTSP.
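The feasibility reduction of Section 2 can be sketched in a few lines of Python. The function below (the function name and the small example instance are ours, not from the paper) builds the 1-PDTSP demand vector for a given 3-partitioning instance.

```python
def one_pdtsp_from_3partition(p, P):
    """Build the 1-PDTSP feasibility instance of the Section 2 reduction.

    p: list of 3m item sizes with P/4 <= p_i <= P/2 and sum(p) == m*P.
    Returns (demands, Q): one pickup customer of demand p_i per item,
    m delivery customers of demand -P, and vehicle capacity Q = P.
    """
    m = len(p) // 3
    assert len(p) == 3 * m and sum(p) == m * P
    assert all(4 * x >= P and 2 * x <= P for x in p)
    demands = list(p) + [-P] * m  # pickup customers, then deliveries
    return demands, P

# m = 2 groups of total size 2 * 20; travel costs are irrelevant here.
demands, Q = one_pdtsp_from_3partition([5, 5, 10, 6, 6, 8], 20)
print(demands, Q)
```

A feasible tour of the resulting instance exists exactly when the items admit a 3-partition, which is what makes even the feasibility question strongly NP-hard.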
The transformation can be done as follows:

– duplicate the PDTSP depot into two dummy 1-PDTSP customers, one collecting all the PDTSP pickup-customer demands and the other providing all the PDTSP delivery-customer demands;
– impose that the vehicle goes directly from one dummy customer to the other with no load.

The second requirement is not necessary when the travel costs are Euclidean distances and Q = max{Σ_{q_i>0} q_i, −Σ_{q_i<0} q_i}, as usually occurs in the test-bed instances of the PDTSP literature (see, e.g., Mosheiov [3] and Gendreau, Laporte and Vigo [9]).

4 Mathematical Model

This section presents an integer linear programming formulation for the 1-PDTSP. Let us start by introducing some notation. The depot will be denoted by 1 and each customer by i for i = 2, ..., n. To avoid trivial instances, we assume n ≥ 5. For each pair of locations (i, j), the travel distance (or cost) c_ij of going from i to j is given. A demand q_i is also given for each customer i, where q_i < 0 if i is a delivery customer and q_i > 0 if i is a pickup customer. A customer with zero demand is assumed to be (say) a pickup customer that must also be visited by the vehicle. The capacity of the vehicle is represented by Q and is assumed to be a positive number. Let V := {1, 2, ..., n} be the node set, A the arc set between all nodes, and E the edge set between all nodes. For simplicity of notation, the arc a ∈ A with tail i and head j is also denoted by (i, j), and the edge e ∈ E with end nodes i and j by [i, j]. For each subset S ⊂ V, let δ+(S) := {(i, j) ∈ A : i ∈ S, j ∉ S}, δ−(S) := {(i, j) ∈ A : i ∉ S, j ∈ S} and δ(S) := {[i, j] ∈ E : i ∈ S, j ∉ S}. Without loss of generality, the depot can be considered a customer by defining q_1 := −Σ_{i=2}^{n} q_i, i.e., a customer absorbing or providing the necessary amount of product to ensure product conservation. From now on we assume this simplification.
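The depot convention just introduced is easy to operationalize. The helper below (a sketch of our own; names and the example demands are not from the paper) balances the depot demand and computes the constant K defined next.

```python
def balance_depot(customer_demands):
    """Assign the depot demand q1 := -sum(q_i, i = 2..n), so the total
    demand is zero, and return (demands, K) with K the total pickup load.
    Positive entries are pickup customers, negative ones deliveries."""
    q1 = -sum(customer_demands)
    demands = [q1] + list(customer_demands)
    K = sum(q for q in demands if q > 0)
    assert K == -sum(q for q in demands if q < 0)  # product conservation
    return demands, K

# A small example of our own: customers 2..7.
demands, K = balance_depot([6, -10, 2, -6, 3, -1])
print(demands[0], K)  # depot demand 6, K = 17
```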
Finally, let K := Σ_{i∈V: q_i>0} q_i = −Σ_{i∈V: q_i<0} q_i. To provide a mathematical model of the 1-PDTSP, for each arc a ∈ A we introduce a 0-1 variable

x_a := 1 if and only if a is routed, 0 otherwise,

and the continuous variable

f_a := load of the vehicle going through arc a.

Then the asymmetric 1-PDTSP can be formulated as:

min Σ_{a∈A} c_a x_a

subject to

Σ_{a∈δ−({i})} x_a = 1   for all i ∈ V   (1)
Σ_{a∈δ+({i})} x_a = 1   for all i ∈ V   (2)
Σ_{a∈δ+(S)} x_a ≥ 1   for all S ⊂ V   (3)
x_a ∈ {0, 1}   for all a ∈ A   (4)
Σ_{a∈δ+({i})} f_a − Σ_{a∈δ−({i})} f_a = q_i   for all i ∈ V   (5)
0 ≤ f_a ≤ Q x_a   for all a ∈ A.   (6)

From the model it is clear that the x_a variables in a 1-PDTSP solution represent a Hamiltonian cycle in the directed graph G = (V, A), but not all Hamiltonian cycles in G define feasible 1-PDTSP solutions. Nevertheless, a trivial observation is that whenever a Hamiltonian cycle defined by a characteristic vector [x′_a : a ∈ A] is a feasible 1-PDTSP solution (because there exist the appropriate loads [f′_a : a ∈ A]), then the cycle in the other direction, i.e.,

x′′_(i,j) := 1 if x′_(j,i) = 1, 0 otherwise, for all (i, j) ∈ A,

is also a feasible 1-PDTSP solution. Indeed, the vector [x′′_a : a ∈ A] admits the appropriate loads defined by

f′′_(i,j) := Q − f′_(j,i)   for all (i, j) ∈ A.

If c_ij = c_ji for all i, j ∈ V (i ≠ j), a smaller model is possible by considering the new edge-decision variable

x_e := 1 if and only if e is routed, 0 otherwise,

for each e ∈ E, and a continuous non-negative variable g_a for each a ∈ A. Then the symmetric 1-PDTSP can be formulated as:

min Σ_{e∈E} c_e x_e   (7)

subject to

Σ_{e∈δ({i})} x_e = 2   for all i ∈ V   (8)
Σ_{e∈δ(S)} x_e ≥ 2   for all S ⊂ V   (9)
x_e ∈ {0, 1}   for all e ∈ E   (10)
Σ_{a∈δ+({i})} g_a − Σ_{a∈δ−({i})} g_a = q_i   for all i ∈ V   (11)
0 ≤ g_(i,j) ≤ (Q/2) x_[i,j]   for all (i, j) ∈ A.   (12)

Constraints (8) require that each customer be visited exactly once, and Constraints (9) require 2-connectivity between customers. Constraints (11) and (12) guarantee the existence of a certificate [g_a : a ∈ A] proving that a vector [x_e : e ∈ E] defines a feasible 1-PDTSP cycle. In fact, if there exist a Hamiltonian tour [x′_e : e ∈ E] and a vector [g′_a : a ∈ A] satisfying (8)–(12), then there also exist an oriented cycle [x′_a : a ∈ A] and a vector [f′_a : a ∈ A] satisfying the constraints of the original model (1)–(6), thus guaranteeing a feasible 1-PDTSP solution. This can be done by simply taking any orientation of the tour defined by [x′_e : e ∈ E], creating an oriented cycle represented by the characteristic vector [x′_a : a ∈ A], and by defining

f′_(i,j) := g′_(i,j) + (Q/2 − g′_(j,i))

for each arc (i, j) in the oriented tour. Reciprocally, each feasible 1-PDTSP solution (i.e., each [x′_a : a ∈ A] and [f′_a : a ∈ A] satisfying (1)–(6)) corresponds to a feasible solution of model (7)–(12). Indeed, this solution is defined by

x′_[i,j] := x′_(i,j) + x′_(j,i)   for each [i, j] ∈ E

and

g′_(i,j) := f′_(i,j)/2 if x′_(i,j) = 1;  (Q − f′_(j,i))/2 if x′_(j,i) = 1;  0 otherwise,

for each (i, j) ∈ A. Therefore, the continuous variable g_a in the symmetric 1-PDTSP model (7)–(12) is not properly the load of the vehicle going through arc a, as in the asymmetric model, but a certificate that a Hamiltonian cycle [x_e : e ∈ E] is a feasible 1-PDTSP solution. Without the above observation, a quick overview could lead to the mistake of thinking that the upper limit on the load through arc (i, j) in Constraints (12) should be Q x_[i,j] instead of Q x_[i,j]/2.
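The correspondence between loads f and certificates g can be verified numerically. The sketch below (toy instance and helper name are ours, not from the paper) builds g from a feasible directed solution and checks the flow equations (11) and the bounds (12).

```python
def g_from_f(tour, f, Q):
    """Certificate g for the symmetric model from directed loads f.

    tour: directed Hamiltonian cycle as a list of arcs (i, j);
    f:    dict arc -> vehicle load on that arc, satisfying (5)-(6).
    Sets g[(i,j)] = f[(i,j)]/2 and g[(j,i)] = (Q - f[(i,j)])/2, which
    then satisfies (11) and the Q/2 bounds (12)."""
    g = {}
    for (i, j) in tour:
        g[(i, j)] = f[(i, j)] / 2
        g[(j, i)] = (Q - f[(i, j)]) / 2
    return g

# Toy instance of our own: Q = 4, demands q1 = -4 (depot), q2 = +3,
# q3 = +1, and the feasible oriented tour 1 -> 2 -> 3 -> 1.
Q = 4
f = {(1, 2): 0, (2, 3): 3, (3, 1): 4}
g = g_from_f([(1, 2), (2, 3), (3, 1)], f, Q)
for i, q in [(1, -4), (2, 3), (3, 1)]:
    out = sum(v for (a, b), v in g.items() if a == i)
    inc = sum(v for (a, b), v in g.items() if b == i)
    assert out - inc == q                        # constraints (11)
assert all(0 <= v <= Q / 2 for v in g.values())  # bounds (12)
print("certificate verified")
```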
In other words, it would be a mistake to try to use variables f_a instead of g_a and to replace Constraints (11)–(12) by

Σ_{a∈δ+({i})} f_a − Σ_{a∈δ−({i})} f_a = q_i   for all i ∈ V   (13)
0 ≤ f_(i,j) ≤ Q x_[i,j]   for all (i, j) ∈ A.

Indeed, let us consider an instance with a depot and two customers, one with demand q_2 = +4 and the other with demand q_3 = −2. If the vehicle capacity is (say) Q = 2, then the 1-PDTSP instance is not feasible, but the mathematical model (7)–(10) and (13) has the integer solution x_[1,2] = x_[2,3] = x_[1,3] = 1 with f_(1,2) = 0, f_(2,3) = 2, f_(3,1) = 1 and f_(1,3) = 1, f_(3,2) = 0, f_(2,1) = 2. Therefore, model (7)–(10) and (13) is not valid.

By Benders’ decomposition (see Benders [7]) it is possible to project out the continuous variables g_a in model (7)–(12), obtaining a pure 0-1 ILP model on the decision variables. Indeed, a given Hamiltonian cycle [x_e : e ∈ E] defines a feasible 1-PDTSP solution if there exists a vector [g_a : a ∈ A] satisfying (11) and (12). According to Farkas’ Lemma, the polytope described by (11) and (12) for a fixed vector [x_e : e ∈ E] is non-empty if, and only if, all extreme directions [α_i : i ∈ V, β_a : a ∈ A] of the polyhedral cone

α_i − α_j ≤ β_(i,j)   for all (i, j) ∈ A
β_a ≥ 0   for all a ∈ A   (14)

satisfy

Σ_{i∈V} α_i q_i − Σ_{(i,j)∈A} β_(i,j) (Q/2) x_[i,j] ≤ 0.

Clearly, the lineality space of (14) is the 1-dimensional space generated by the vector defined by α̃_i = 1 for all i ∈ V and β̃_a = 0 for all a ∈ A. Therefore, it is possible to assume α_i ≥ 0 for all i ∈ V in (14) to simplify the characterization of the extreme rays. In fact, this assumption also follows immediately from the fact that the equalities “=” in the linear system (11) can be replaced by inequalities “≥” without adding new solutions. Therefore, let us characterize the extreme rays of the cone:

α_i − α_j ≤ β_(i,j)   for all (i, j) ∈ A
α_i ≥ 0   for all i ∈ V
β_a ≥ 0   for all a ∈ A.   (15)
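The counterexample can be checked mechanically. The snippet below (a sketch of our own) verifies that the stated flows satisfy every constraint of the invalid system (13), although the three-node instance admits no feasible tour.

```python
# q1 = -2 (depot), q2 = +4, q3 = -2, Q = 2: infeasible as a 1-PDTSP,
# yet the flows below satisfy the (invalid) relaxation (13).
q = {1: -2, 2: 4, 3: -2}
Q = 2
f = {(1, 2): 0, (2, 3): 2, (3, 1): 1, (1, 3): 1, (3, 2): 0, (2, 1): 2}
for i in q:
    out = sum(v for (a, b), v in f.items() if a == i)
    inc = sum(v for (a, b), v in f.items() if b == i)
    assert out - inc == q[i]                 # flow conservation in (13)
assert all(0 <= v <= Q for v in f.values())  # capacity bound in (13)
print("all constraints of (13) hold, yet the instance is infeasible")
```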
Theorem 1. Up to multiplication by positive scalars, all the extreme directions of the polyhedral cone defined by (15) are:

(i) for each a′ ∈ A, the vector (α, β) defined by α_i = 0 for all i ∈ V, β_a = 0 for all a ∈ A \ {a′} and β_a′ = 1;
(ii) for each S ⊂ V, the vector (α, β) defined by α_i = 1 for all i ∈ S, α_i = 0 for all i ∈ V \ S, β_a = 1 for all a ∈ δ+(S) and β_a = 0 for all a ∈ A \ δ+(S).

Proof: It is evident that the two families of vectors are directions of (15). First, we show that they are extreme; second, we show that they are the only extreme ones. Clearly, the vectors of family (i) are extreme. Let us now consider a vector (α, β) of family (ii), and two other vectors (α′, β′) and (α′′, β′′) satisfying (15) such that

(α, β) = (1/2)(α′, β′) + (1/2)(α′′, β′′).

By definition, α_i = 0 for all i ∉ S and β_a = 0 for all a ∉ δ+(S). Because of the non-negativity of all the components, it follows that α′_i = 0 = α′′_i for all i ∉ S and β′_a = 0 = β′′_a for all a ∉ δ+(S). Considering a = (i, j) with i, j ∈ S, the last result implies α′_i = α′_j and α′′_i = α′′_j. Moreover, for all (i, j) ∈ δ+(S), α_i = 1 and β_(i,j) = 1, thus α′_i + α′′_i = 2 = β′_(i,j) + β′′_(i,j) whenever (i, j) ∈ δ+(S). Since α′_i ≤ β′_(i,j) and α′′_i ≤ β′′_(i,j), then α′_i = β′_(i,j) and α′′_i = β′′_(i,j) whenever (i, j) ∈ δ+(S). In conclusion, (α′, β′) and (α′′, β′′) coincide with (α, β) up to multiplication by a scalar, thus (α, β) is an extreme direction of the cone described by (15). Let us now prove that the two families of vectors contain all the extreme directions of the cone defined by (15). To do this, let us consider any vector (α′, β′) satisfying (15) and prove that it is a conic combination of vectors involving at least one from the two families. This is obvious when α′ = 0, by considering the vectors of family (i), so let us assume that α′ has a positive component.
Set

S′ := {i ∈ V : α′_i > 0}   and   λ′ := min{α′_i : i ∈ S′}.

Let (α′′, β′′) be the vector of family (ii) defined by S := S′, and set (α′′′, β′′′) := (α′, β′) − λ′(α′′, β′′). The proof concludes since λ′ > 0 and (α′′′, β′′′) satisfies (15). ✷

Therefore, a given Hamiltonian cycle [x_e : e ∈ E] defines a feasible 1-PDTSP solution if and only if

Σ_{e∈δ(S)} x_e ≥ (2/Q) Σ_{i∈S} q_i   for all S ⊂ V

(the case S = V is unnecessary). These inequalities are known in the CVRP literature as capacity constraints (see, e.g., Toth and Vigo [19]). Since δ(S) = δ(V \ S) and Σ_{i∈S} q_i = Σ_{i∈V\S} (−q_i), the above inequalities are equivalent to

Σ_{e∈δ(S)} x_e ≥ (2/Q) Σ_{i∈S} (−q_i)   for all S ⊂ V.

An immediate consequence is that the 1-PDTSP can be formulated as the classical TSP model (7)–(10) plus the following Benders’ cuts:

Σ_{e∈δ(S)} x_e ≥ (2/Q) |Σ_{i∈S} q_i|   for all S ⊆ V : |S| ≤ |V|/2.   (16)

Even if there is an exponential number of linear inequalities in (16), today’s state-of-the-art cutting-plane approaches allow us to manage all of them in a very effective way. To be more precise, as Section 5.1 discusses in detail, Constraints (16) can be efficiently incorporated by finding (if any) a feasible solution of the linear system (11)–(12) with [x_e : e ∈ E] as parameters. Therefore, any cutting-plane approach for solving the TSP can be adapted to solve the 1-PDTSP by also considering Constraints (16). This means that it could be possible to insert the new inequalities into a software package like CONCORDE (see Applegate, Bixby, Chvátal and Cook [4]) and obtain an effective program to solve instances of the 1-PDTSP. Unfortunately, the source code of this particular TSP software is very complex to modify and we did not succeed in the adaptation. That was a motivation to develop the “ad hoc” implementation described in the next section.
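For intuition, on very small instances the cuts (16) can be separated by brute-force enumeration of all subsets S. The function below is our own illustration (names and the toy instance are not from the paper, which separates these cuts via max-flow).

```python
from itertools import combinations

def violated_capacity_cut(n, demands, Q, x):
    """Search all S with |S| <= n/2 for a violated Benders cut (16).

    x: dict frozenset({i, j}) -> value of the edge variable x_e.
    Exponential in n; only sensible for tiny instances."""
    V = range(1, n + 1)
    for size in range(1, n // 2 + 1):
        for S in combinations(V, size):
            lhs = sum(v for e, v in x.items() if len(set(e) & set(S)) == 1)
            rhs = 2.0 * abs(sum(demands[i] for i in S)) / Q
            if lhs < rhs - 1e-9:
                return set(S)  # first violated subset found
    return None

# A 4-node toy instance: tour 1-2-3-4-1, demands summing to zero.
demands = {1: -4, 2: 4, 3: 4, 4: -4}
x = {frozenset(e): 1.0 for e in [(1, 2), (2, 3), (3, 4), (1, 4)]}
print(violated_capacity_cut(4, demands, 4, x))
```

Here {1, 4} is violated: only two edge units cross it, while 2|q_1 + q_4|/Q = 4.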
5 Algorithm for 1-PDTSP

In this section an enumerative algorithm is proposed for the exact solution of the problem. The algorithm follows a branch-and-bound scheme, in which lower bounds are computed by solving an LP relaxation of the problem. The relaxation is iteratively tightened by adding valid inequalities to the current LP, according to the so-called cutting-plane approach. The overall method is commonly known as a branch-and-cut algorithm; we refer to Padberg and Rinaldi [17] and Jünger, Reinelt and Rinaldi [12] for a thorough description of the technique. Some important implementation issues are described next.

5.1 Separating Benders’ Cuts

Due to the large number of inequalities in (16), not all of them can be considered in an LP relaxation of the problem. Useful constraints must be identified dynamically, which is typically called the separation problem of (16): “given a (possibly fractional) solution [x_e : e ∈ E] of an LP relaxation, is there a violated cut in (16)? If so, provide (at least) one”. As mentioned in Section 4, this question can be answered by checking the non-emptiness of the polytope described by (11)–(12). Indeed, in Benders’ decomposition terminology, the problem of finding (if any) a vector [g_a : a ∈ A] satisfying (11)–(12) for a given [x_e : e ∈ E] is called the subproblem. An easy way to solve the subproblem is to solve a linear program (LP) consisting of (11)–(12) with a dummy objective function. If it is not feasible, then its dual program is unbounded, and an unbounded extreme direction defines a violated Benders’ cut; otherwise, all constraints in (16) are satisfied. Therefore the separation problem of (16) can be solved in polynomial time by using, for example, an implementation of the ellipsoid method for Linear Programming (see [13]). Nevertheless, in practice the efficiency of the overall algorithm strongly depends on this phase.
A better way of solving the separation problem for Constraints (16) follows the idea presented by Harche and Rinaldi (see [5]) and is addressed next. Let [x*_e : e ∈ E] be a given solution of a linear relaxation of model (7)–(10) and (16). In order to check whether there is a violated Benders’ cut, let us write the constraints in a different form. For each S ⊂ V,

Σ_{e∈δ(S)} x*_e ≥ Σ_{i∈S} 2q_i/Q

is algebraically equivalent to

Σ_{e∈δ(S)} x*_e + Σ_{i∈V\S: q_i>0} 2q_i/Q + Σ_{i∈S: q_i<0} (−2q_i)/Q ≥ Σ_{i∈V: q_i>0} 2q_i/Q.

The right-hand side of the inequality is the positive constant 2K/Q, and the coefficients on the left-hand side are also non-negative. Therefore, this observation yields an algorithm for the separation problem of the Benders’ cuts based on solving a max-flow problem on the capacitated undirected graph G* = (V*, E*) defined as follows. Consider two dummy nodes n+1 and n+2, and let V* := V ∪ {n+1, n+2}. The edge set E* contains the edges e ∈ E such that x*_e > 0 in the given solution, with capacity x*_e; the edge [i, n+1] for each pickup customer i, with capacity 2q_i/Q; and the edge [i, n+2] for each delivery customer i, with capacity −2q_i/Q. Finding a most-violated Benders’ inequality in (16) calls for a minimum-capacity cut (S*, V* \ S*) with n+1 ∈ S* and n+2 ∈ V* \ S* in the capacitated undirected graph G*. This can be done in O(n³) time, as it amounts to finding a maximum flow from n+1 to n+2 (see Ahuja, Magnanti and Orlin [1]). If the maximum flow value is no less than 2K/Q, then all the inequalities (16) are satisfied; otherwise the capacity of the minimum cut separating n+1 and n+2 is strictly less than 2K/Q and a most-violated inequality (16) has been detected. The subset S defining a most-violated Benders’ inequality is either S* \ {n+1} or V* \ (S* ∪ {n+2}).
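The construction just described can be sketched with a textbook BFS-augmenting (Edmonds-Karp) max-flow. The routine below is a minimal illustration with our own names and a toy test; it is not the O(n³) implementation the paper cites from [1].

```python
from collections import deque

def min_cut_separation(n, demands, Q, xstar):
    """Exact separation of the Benders cuts (16) via max-flow on G*.

    Dummy source s = n+1 is tied to pickup customers, dummy sink
    t = n+2 to delivery customers; undirected support edges get
    capacity x*_e in both directions.  Returns a violated subset S
    (the source side of a min cut, minus s), or None."""
    s, t = n + 1, n + 2
    cap = {}

    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0.0) + c
        cap.setdefault((v, u), 0.0)  # residual arc

    for (i, j), val in xstar.items():
        if val > 0:
            add(i, j, val)
            add(j, i, val)
    K = sum(q for q in demands.values() if q > 0)
    for i, q in demands.items():
        if q > 0:
            add(s, i, 2.0 * q / Q)
        elif q < 0:
            add(i, t, -2.0 * q / Q)

    def bfs():  # shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 1e-9 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        return parent

    flow, parent = 0.0, bfs()
    while t in parent:
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[a] for a in path)
        for (u, v) in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push
        parent = bfs()
    if flow >= 2.0 * K / Q - 1e-9:
        return None            # every cut (16) is satisfied
    return set(parent) - {s}   # S* \ {n+1}: a most-violated cut

# Same 4-node toy instance as before; {2, 3} is a violated set.
demands = {1: -4, 2: 4, 3: 4, 4: -4}
xstar = {(1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (1, 4): 1.0}
print(min_cut_separation(4, demands, 4, xstar))
```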
5.2 Strengthening the LP Relaxation

Even if a solution [x*_e : e ∈ E] satisfies all constraints of the LP relaxation of model (7)–(10) and (16), its objective value can still be far from the objective value of an optimal 1-PDTSP solution. Therefore, it is always important to provide ideas to strengthen the LP relaxation and ensure better lower bounds. Some ideas are addressed next. A first improvement of the LP relaxation arises by rounding up to the next even number the right-hand side of constraints (16), i.e., by considering the following rounded Benders’ cuts:

Σ_{e∈δ(S)} x_e ≥ 2 r_1(S)   for all S ⊆ V,   (17)

where

r_1(S) := max{ 1, ⌈|Σ_{i∈S} q_i| / Q⌉ }.

This lifting of the right-hand side of Constraints (16) is possible because the left-hand side represents the number of times the vehicle crosses δ(S), and this is always a positive even integer. Unfortunately, the polynomial procedures described in Section 5.1 to solve the separation problem for (16) cannot be easily adapted to find a most-violated rounded Benders’ cut. Nevertheless, in practice it is very useful to insert a Benders’ cut in rounded form whenever a constraint (9) or (16) is separated using the polynomial procedures. A further improvement of (9) and (16) in the 1-PDTSP formulation can be obtained by defining r_2(S) as the smallest number of times the vehicle with capacity Q must go inside S to meet the demands q_i of the customers in S. The new valid inequalities are the following:

Σ_{e∈δ(S)} x_e ≥ 2 r_2(S)   for all S ⊆ V.   (18)

Notice that computing r_2(S) is not a Bin Packing Problem, since q_i is allowed to be negative. Observe that r_1(S) ≤ r_2(S), and the inequality can hold strictly, as in the following example. Let S = {1, 2, 3, 4} be a set of four customers with demands q_1 = q_2 = +3 and q_3 = q_4 = −2. If Q = 3, then r_1(S) = 1 and r_2(S) = 2.
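The rounded right-hand side r_1(S) is a one-liner; the snippet below (our own sketch) reproduces the example just given.

```python
from math import ceil

def r1(S, demands, Q):
    """Right-hand-side coefficient of the rounded Benders cut (17)."""
    return max(1, ceil(abs(sum(demands[i] for i in S)) / Q))

# The example from the text: q1 = q2 = +3, q3 = q4 = -2, Q = 3.
demands = {1: 3, 2: 3, 3: -2, 4: -2}
print(r1({1, 2, 3, 4}, demands, 3))  # 1, although r2(S) = 2
```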
The computation of r_2(S), even for a fixed subset S, is an NP-hard problem in the strong sense (see Section 2). Therefore, we do not consider constraints (18) in our algorithm. Even if the computation of r_1(S) is trivial for a given S ⊂ V, it is very unlikely that a polynomial algorithm exists for the separation problem of (17) (similar constraints were proven to have an NP-complete separation problem by Harche and Rinaldi for the Capacitated Vehicle Routing Problem; see [5]). Therefore, we have implemented some simple heuristic approaches to separate (17), in a similar way to the one described in [5] for the CVRP:

– The first heuristic procedure looks for the most-violated constraint (16) by using the exact procedure described in Section 5.1. If S* is the output of this procedure, constraint (17) is checked for violation.
– The second procedure compares the current best feasible solution and the solution of the current LP relaxation. By removing edges of the best feasible cycle associated with variables close to value 1 in the current LP solution, the resulting connected components are considered as potential node sets for defining violated rounded Benders’ constraints.
– Whenever a subset S′ defining a violated constraint is identified, a third procedure checks for violation the inequality (17) associated with the subsets S′′ := S′ ∪ {v} for each v ∉ S′ and S′′ := S′ \ {v} for each v ∈ S′. Notice that when r_1(S′′) > r_1(S′), the constraint (17) defined by S = S′′ dominates the one defined by S = S′.

These heuristic approaches will be denoted by Sep1, Sep2 and Sep3, respectively. Another strengthening arises by considering all valid inequalities known for the TSP. Indeed, as mentioned before, the 1-PDTSP is a TSP plus additional constraints, so all TSP constraints (e.g., 2-matching inequalities) can be used to improve the lower bound from LP relaxations of the 1-PDTSP.
See Naddef [16] for the most common facet-defining inequalities of the TSP polytope. Generalizing this idea, it is also possible to use polyhedral results known for the CVRP. Indeed, constraints (16), (17) and (18) are extensions of the so-called capacity constraints in the CVRP literature (see, e.g., Toth and Vigo [19]). In the same way, other inequalities can be adapted for the 1-PDTSP. This is the case of the multistar constraints (see, e.g., Gouveia [14]), which for the 1-PDTSP are:

Σ_{e∈δ(S)} x_e ≥ (2/Q) |Σ_{i∈S} q_i + Σ_{i∈S} Σ_{j∈V\S} q_j x_[i,j]|   (19)

for all S ⊂ V. The validity follows from the observation that each time the vehicle visits S and uses an edge [i, j] with i ∈ S and j ∉ S, it must have enough free capacity for visiting the nodes in S and also for j. Differently from the situation in the CVRP, the multistar inequalities (19) do not necessarily dominate the capacity inequalities (16). As for the separation, it is clear that the procedure described in Section 5.1 can be easily adapted to solve in polynomial time the separation of (19) when q_j ≤ Q/2 for all j ∈ V.

[Figure: a seven-node fractional solution with vehicle capacity Q = 10; single-line edges carry value 0.5 and double-line edges value 1.0.]
Fig. 1. Fractional solution of model (7)–(10) and (17)

A final important improvement is based on the existence of incompatibilities between some edges of the graph G. Figure 1 shows an example of a typical fractional vector [x_e : e ∈ E] satisfying all linear constraints in (7)–(10) and (17). Edges drawn with single lines represent variables with value 0.5, edges drawn with double lines represent variables with value 1, and edges not drawn represent variables with value 0. This fractional solution is a convex combination of two Hamiltonian cycles, characterized by the node sequences (1, 2, 3, 4, 5, 6, 7, 1) and (1, 3, 2, 4, 7, 6, 5, 1), where the first is a feasible 1-PDTSP solution but the second is not.
Clearly, the vehicle capacity requirement forces some variables to be fixed at zero. This is the case of the variable associated with edge [1, 6]. Moreover, Q also produces incompatibilities between pairs of edge variables. In the example, each pair of edges in {[2, 4], [4, 5], [4, 7]} is incompatible. Indeed, the vehicle cannot route, e.g., [2, 4] and [4, 5] consecutively, since 8 + 8 − 2 > Q. This can be written mathematically as

x_[2,4] + x_[4,5] ≤ 1,   x_[2,4] + x_[4,7] ≤ 1,   x_[4,5] + x_[4,7] ≤ 1.

None of the above inequalities is violated by the fractional solution in Figure 1, but there is a stronger way of imposing the three pairwise incompatibilities:

x_[2,4] + x_[4,5] + x_[4,7] ≤ 1,

which is a cut violated by the fractional solution. Therefore, in a 1-PDTSP with low capacity Q there is an underlying Set-Packing structure, and all known valid inequalities for it can be used to strengthen the LP relaxation of the 1-PDTSP. The interested reader is referred to Balas and Padberg [6] for a survey on the Set-Packing Problem. Basic constraints of the Set-Packing polytope are the so-called clique inequalities, defined for our problem as follows. Consider the conflict graph G^c = (V^c, E^c), where there is a node in V^c for each edge-decision variable, and an edge in E^c connecting two nodes when the corresponding variables cannot be consecutively routed by the vehicle. Then each subset W ⊆ V^c inducing a complete subgraph of G^c defines the following clique inequality:

Σ_{e∈W} x_e ≤ 1.   (20)

Only inequalities associated with inclusion-maximal cliques are computationally useful (and also facet-defining for the Set-Packing polytope). Since finding a maximum clique in a general graph is an NP-hard problem, we solve the separation problem of (20) heuristically. Our procedure consists of a simple exhaustive search over stars of the support graph G* associated with the solution.
In particular, given a fractional solution [x*_e : e ∈ E] of an LP relaxation of the 1-PDTSP, for each customer i we enumerate the sets of three edges in δ({i}) in G* defining a violated clique inequality.

5.3 Heuristic Algorithms

To speed up the branch-and-bound algorithm it is very important to have not only good lower bounds but also good feasible solutions. In order to achieve this second aim, we have developed two main heuristic algorithms: an initial heuristic, executed at the beginning of the enumerative algorithm, and a primal heuristic, executed at each node of the branch-and-bound tree using the solution of the current LP relaxation. A number of known tour-construction and tour-improvement heuristic algorithms for the TSP (see, e.g., Golden and Stewart [10]) can easily be adapted to the 1-PDTSP, even if for this problem we cannot always ensure that a tour-construction procedure ends with a feasible 1-PDTSP cycle. Indeed, we have implemented nearest-insertion, farthest-insertion, cheapest-insertion, two-optimality and three-optimality TSP-like procedures, trying to guarantee feasibility and low cost in the final solutions. See [11] for more details on each procedure.

5.4 Branching

When the solution [x*_e : e ∈ E] of the LP relaxation has non-integer values, we apply branching on variables, the standard approach in branch-and-cut. It consists of selecting a fractional edge-decision variable x_e and generating two descendant nodes by fixing x_e to either 0 or 1. In our implementation we chose the variable with value x*_e as close as possible to 0.5 (ties are broken by choosing the edge of maximum cost). We also performed experiments with a branching scheme based on subsets (selecting a subset S previously generated within a separation procedure and such that Σ_{e∈δ(S)} x*_e was as close as possible to an odd number), but we obtained worse results on our benchmark instances. See [11] for more details.
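The variable-selection rule of Section 5.4 can be sketched as follows (function and data names are ours, not from the paper):

```python
def branching_variable(xstar, costs):
    """Pick the edge variable to branch on: fractional value closest
    to 0.5, ties broken by maximum edge cost."""
    fractional = [e for e, v in xstar.items() if 1e-6 < v < 1 - 1e-6]
    if not fractional:
        return None  # the LP solution is already integer
    return min(fractional, key=lambda e: (abs(xstar[e] - 0.5), -costs[e]))

xstar = {"a": 1.0, "b": 0.5, "c": 0.3, "d": 0.5}
costs = {"a": 10, "b": 2, "c": 7, "d": 9}
print(branching_variable(xstar, costs))  # 'd': value 0.5, costlier than 'b'
```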
6 Preliminary Computational Results

The enumerative algorithm described in Section 5 has been implemented in ANSI C, and it was run on an AMD 1333 MHz personal computer. CPLEX 7.0 was used as the LP solver. To test the performance of our approach on both the 1-PDTSP and the PDTSP, we have considered a classical test-bed PDTSP instance introduced in Mosheiov [15]. It consists of the depot, 12 pickup customers and 12 delivery customers. The capacity of the vehicle in the PDTSP instance is given by

Q := max{ ∑_{i∈V: qi>0} qi , −∑_{i∈V: qi<0} qi } = 45,

and the cost cij by the Euclidean distance between points i and j. By solving the TSP on this instance to optimality, we obtained the Hamiltonian cycle illustrated in [15] but with a different objective value (we obtained 4431 while Mosheiov got 4434). We tried different roundings of the Euclidean distances but did not succeed in obtaining the same optimal TSP value. In our final implementation we computed the costs using the same code instructions as in CONCORDE [4]. We also generated 1-PDTSP instances by using the same demands and distances as the Mosheiov instance and reducing the capacity of the vehicle. In particular, we noticed that when Q ≥ 16 an optimal solution of the 1-PDTSP instance is an optimal TSP solution using the costs cij, while there is a customer with demand 7, so no instance with Q < 7 is feasible. Therefore, we generated ten instances, one for each Q ∈ {7, 8, . . . , 15, 16}. Table 1 summarizes the results of our experiments.
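The capacity formula for Q used above can be sketched as follows (the demand vector here is made up for illustration; it is not Mosheiov's data):

```python
def pdtsp_capacity(q):
    """Q := max( total pickup demand, total delivery amount ), where
    q[i] > 0 encodes a pickup and q[i] < 0 a delivery, as in the text."""
    pickups = sum(d for d in q if d > 0)
    deliveries = -sum(d for d in q if d < 0)
    return max(pickups, deliveries)
```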
For each instance, the meaning of the columns is as follows:

Sep1: the percentage ratio of the lower bound over the optimal objective value, using the separation procedure Sep1 described in Section 5.1;
Sep2: the percentage ratio of the lower bound when the heuristic procedure Sep2 described in Section 5.2 is applied;
Sep3: the percentage ratio of the lower bound when the heuristic separation procedure Sep3 described in Section 5.2 is also applied;
r-LB: the percentage ratio after additionally considering some clique inequalities according to the heuristic separation procedure described in Section 5.2;
UB0: the percentage ratio of the initial heuristic described in Section 5.3;
Optimum: the optimal objective function value (the denominator in all ratios);
Heu: the time in seconds on the AMD 1333 MHz to perform the heuristic procedures;
Root: the time in seconds on the AMD 1333 MHz to process the root node of the branch-and-bound tree, excluding the heuristic procedures;
Time: the overall time of the algorithm in seconds on the AMD 1333 MHz;
Deep: the maximum depth explored in the branch-and-bound tree;
Nodes: the number of explored branch-and-bound nodes.

Table 1.
Ten 1-PDTSP instances and the PDTSP instance from data in [15]

 n   Q   Sep1   Sep2   Sep3   r-LB    UB0   Optimum  Heu  Root  Time  Deep  Nodes
25   7  88.18  93.43  94.16  95.36  100.00   5734   0.17  0.16  3.46   10    155
25   8  90.43  97.49  97.51  98.00  100.00   5341   0.17  0.11  0.82    7     35
25   9  92.30  96.92  98.85  98.85  100.00   5038   0.12  0.11  0.33    2      5
25  10  88.55  99.40  99.44  99.44  100.00   4979   0.05  0.11  0.16    2      5
25  11  91.59  97.76  99.88  99.88  100.00   4814   0.06  0.11  0.17    1      3
25  12  91.59  96.41  97.76  97.76  100.00   4814   0.06  0.05  0.17    2      5
25  13  95.29  98.88  99.61  99.61  100.00   4627   0.05  0.05  0.16    1      3
25  14  98.50 100.00 100.00 100.00  100.00   4476   0.06  0.05  0.11    0      1
25  15  98.50 100.00 100.00 100.00  100.00   4476   0.05  0.06  0.11    0      1
25  16  99.50  99.50  99.50  99.50  100.00   4431   0.06  0.05  0.11    3      7
25  45 100.00 100.00 100.00 100.00  100.00   4467   0.06  0.06  0.17    0      1

Table 1 shows that, on the Mosheiov data, the complexity of the classical PDTSP is similar to that of the classical TSP, while the 1-PDTSP turns out to be harder as Q decreases. In fact, we found 26 and 10 violated clique inequalities when Q = 7 and Q = 8, respectively, while none was found when Q ≥ 9. Without the clique inequality separation, the number of explored branch-and-bound nodes needed to solve the instance with Q = 7 increases from 117 to 279 and the computational time from 106.5 to 244.5 seconds. The initial heuristic consumed about 0.2 seconds on each instance and the quality of the solutions it provided was quite good. As a final observation, the heuristic PDTSP solution in Mosheiov [15] has an objective value of 4635 (using our cost matrix; 4634 in [15]), while an optimal PDTSP solution has a value of 4467. See [11] for other computational experiments on the randomly generated instances described in [15] and [9], where the behavior of the overall algorithm is similar.

References

1. R.K. Ahuja, T.L. Magnanti, J.B. Orlin, “Network Flows”, in G.L. Nemhauser, A.H.G. Rinnooy Kan, M.J.
Todd (Editors), “Optimization”, Vol. I, North-Holland, 1989.
2. S. Anily, R. Hassin, “The Swapping Problem”, Networks 22 (1992) 419–433.
3. S. Anily, G. Mosheiov, “The traveling salesman problem with delivery and backhauls”, Operations Research Letters 16 (1994) 11–18.
4. D. Applegate, R. Bixby, V. Chvátal, W. Cook, “Concorde: a code for solving the Traveling Salesman Problem”, http://www.math.princeton.edu/tsp/concorde.html, 1999.
5. P. Augerat, J.M. Belenguer, E. Benavent, A. Corberán, D. Naddef, G. Rinaldi, “Computational Results with a Branch and Cut Code for the Capacitated Vehicle Routing Problem”, Research Report 949-M, Université Joseph Fourier, Grenoble, France, 1995.
6. E. Balas, M.W. Padberg, “Set Partitioning: A Survey”, SIAM Review 18 (1976) 710–760.
7. J.F. Benders, “Partitioning Procedures for Solving Mixed Variables Programming Problems”, Numerische Mathematik 4 (1962) 238–252.
8. M.R. Garey, D.S. Johnson, “Computers and Intractability: A Guide to the Theory of NP-Completeness”, W.H. Freeman and Co., 1979.
9. M. Gendreau, G. Laporte, D. Vigo, “Heuristics for the traveling salesman problem with pickup and delivery”, Computers & Operations Research 26 (1999) 699–714.
10. B.L. Golden, W.R. Stewart, “Empirical analysis of heuristics”, in E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, D.B. Shmoys (Editors), The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley, 1985.
11. H. Hernández, J.J. Salazar, “A Branch-and-Cut Algorithm for the Pickup-and-Delivery Travelling Salesman Problem”, technical report, University of La Laguna, 2001. Available at http://webpages.ull.es/users/jjsalaza/pdtsp.htm.
12. M. Jünger, G. Reinelt, G. Rinaldi, “The Travelling Salesman Problem”, in M.O. Ball, T.L. Magnanti, C.L. Monma, G.L. Nemhauser (Editors), Handbooks in Operations Research and Management Science: Network Models, Elsevier, 1995.
13. L.G. Khachian, “A Polynomial Algorithm in Linear Programming”, Soviet Mathematics Doklady 20 (1979) 191–194.
14. L.
Gouveia, “A result on projection for the Vehicle Routing Problem”, European Journal of Operational Research 83 (1997) 610–624.
15. G. Mosheiov, “The Travelling Salesman Problem with pick-up and delivery”, European Journal of Operational Research 79 (1994) 299–310.
16. D. Naddef, “Polyhedral theory and Branch-and-Cut algorithms for the Symmetric TSP”, Chapter 4 in G. Gutin, A. Punnen (Editors), The Traveling Salesman Problem and its Variants, Kluwer, 2001.
17. M.W. Padberg, G. Rinaldi, “A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems”, SIAM Review 33 (1991) 60–100.
18. M.W.P. Savelsbergh, M. Sol, “The General Pickup and Delivery Problem”, Transportation Science 29 (1995) 17–29.
19. P. Toth, D. Vigo (Editors), “The Vehicle Routing Problem”, SIAM Monographs on Discrete Mathematics and Applications, 2001.

Reconstructing a Simple Polytope from Its Graph

Volker Kaibel⋆

TU Berlin, MA 6–2, Straße des 17. Juni 136, 10623 Berlin, Germany
kaibel@math.tu-berlin.de
http://www.math.tu-berlin.de/~kaibel

Abstract. Blind and Mani [2] proved that the entire combinatorial structure (the vertex-facet incidences) of a simple convex polytope is determined by its abstract graph. Their proof is not constructive. Kalai [15] found a short, elegant, and algorithmic proof of that result. However, his algorithm always has exponential running time. We show that the problem of reconstructing the vertex-facet incidences of a simple polytope P from its graph can be formulated as a combinatorial optimization problem that is strongly dual to the problem of finding an abstract objective function on P (i.e., a shelling order of the facets of the dual polytope of P). Thereby, we derive polynomial certificates for both the vertex-facet incidences and the abstract objective functions in terms of the graph of P. The paper is a variation on joint work with Michael Joswig and Friederike Körner [12].
1 Introduction

The face lattice LP of a (convex) polytope P is any lattice that is isomorphic to the lattice formed by the set of all faces of P (including ∅ and P itself), ordered by inclusion. It is well known to be determined by the vertex-facet incidences of P, i.e., by any graph that is isomorphic to the bipartite graph whose nodes are the vertices and the facets of P, where the edges are defined by the pairs {v, f} of vertices v and facets f with v ∈ f. In lattice theoretic terms, LP is a ranked, atomic, and coatomic lattice, and thus the sub-poset formed by its atoms and coatoms already determines the whole lattice. Actually, one can compute LP from the vertex-facet incidences of P in O(η · α · λ) time, where η is the minimum of the number of vertices and the number of facets, α is the number of vertex-facet incidences, and λ is the total number of faces of P [14]. The graph GP = (VP, EP) of a polytope P is any graph that is isomorphic to the graph whose nodes are the vertices of P, where two nodes are adjacent if and only if the convex hull of the corresponding two vertices is a one-dimensional face of P. Phrased differently, GP is the graph defined on the rank one elements of LP, where two rank one elements are adjacent if and only if they are below a common rank two element.

⋆ Supported by the Deutsche Forschungsgemeinschaft, FOR 413/1–1 (Zi 475/3–1).

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 105–118, 2003. © Springer-Verlag Berlin Heidelberg 2003

While the vertex-facet incidences completely determine the face lattice of any polytope, the graph of a polytope in general does not encode the entire combinatorial structure. This can be seen, e.g., from the examples of the cut polytope associated with the complete graph on n nodes and the ⌊(n−1)/2⌋-dimensional cyclic polytope with n vertices, which both have complete graphs. Another example is the four-dimensional polytope shown in Fig.
1, whose graph is isomorphic to the graph of the five-dimensional cube.

Fig. 1. A Schlegel diagram (projection onto one facet) of a four-dimensional polytope with the graph of a five-dimensional cube, found by Joswig & Ziegler [13].

Actually, in all dimensions less than four such ambiguities cannot occur. For one- or two-dimensional polytopes this is obvious, and for three-dimensional polytopes it follows from Whitney’s theorem [18], which says that every 3-connected planar graph has a unique (up to reflection) plane embedding. A d-dimensional polytope P is simple if every vertex of P is contained in precisely d facets, which is equivalent to GP being d-regular (the polytope is nondegenerate in terms of Linear Programming). Every face of a simple polytope is simple as well. None of the examples showing that it is, in general, impossible to reconstruct the face lattice of a polytope from its graph is simple. In fact, Blind and Mani [2] proved in 1987 that the face lattice of a simple polytope is determined by its graph. Their proof (which we sketch in Sect. 2) is not constructive and crucially relies on the topological concept of homology. In 1988, Kalai [15] found a short and elegant proof (reviewed in Sect. 3) that uses only elementary geometric and combinatorial reasoning, with the main advantage of being algorithmic. However, the running time of the method that can be devised from it is exponential in the size of the graph. Perles conjectured in the 1970’s (see [15]) that for a d-dimensional simple polytope P every subset F ⊂ VP that induces a (d − 1)-regular, connected, and non-separating subgraph of GP corresponds to the vertex set of a facet of P. A proof of this conjecture would have led immediately to a polynomial time algorithm that, given the graph GP = (VP, EP) of a simple polytope P, decides for a set of subsets of VP if it corresponds to the set of vertex sets of facets of P.
However, Haase and Ziegler [10] recently disproved Perles’ conjecture. They found a four-dimensional simple polytope whose graph has a 3-regular, non-separating, and even 3-connected induced subgraph that does not correspond to any facet. Refining ideas from Kalai’s proof (Sect. 4), we show that the problem of reconstructing the vertex-facet incidences of a simple polytope P from its graph GP can be formulated as a combinatorial optimization problem that has a well-stated strongly dual problem (Sect. 5). The optimal solutions to this dual problem are certain orientations of GP (induced by “abstract objective functions”) that are important also in different contexts. In particular, we provide short certificates for both the vertex-facet incidences of a simple polytope and the abstract objective functions in terms of GP. We conclude in Sect. 6 with some remarks on the complexity status of the problem of deciding whether a claimed solution to the reconstruction problem indeed gives the vertex-facet incidences of the respective polytope. The material presented here has evolved from joint work with Michael Joswig and Friederike Körner [12]. The basic ideas and results are the same in both papers. However, the concept of a “facoidal system of walks” is newly introduced here. It differs from the corresponding notion of a “2-system” (introduced in [12]) with the main effect that one knows how to efficiently compute some facoidal system of walks from the graph GP of a simple polytope P (see Proposition 3), while it is unclear how to find a 2-system from GP efficiently. Furthermore, the proof of Theorem 3 we give here is different from the corresponding proof in [12]. Finally, the complexity theoretic statement of Corollary 5 does not appear in [12]. For all notions from the theory of polytopes that we use without (sufficient) explanation, we refer to Ziegler’s book [19].
We use the terms d-polytope and k-face for d-dimensional polytopes and k-dimensional faces, respectively. Often we will identify a face F of a simple polytope P with the subset of nodes of GP that corresponds to the vertex set of F. Whenever we talk about “polynomial time” or “efficient”, this refers to the size of the graph GP of the respective (simple) polytope P.

2 The Theorem of Blind and Mani

Blind and Mani [2] proved their theorem in the dual setting, i.e., for simplicial rather than for simple polytopes. Nevertheless, we sketch parts of their proof in terms of simple polytopes here. The starting point is the observation that, while a priori it is by no means clear if the graph GP = (VP, EP) of a simple polytope P determines the face lattice of P, it is easy to see that the 2-faces of P (as subsets of VP) carry the entire information on the combinatorial structure of P. Let P be a simple polytope. For a node v ∈ VP of GP denote by Γ(v) ⊂ VP the subset of nodes that are adjacent to v (the neighbors of v). For any k-element subset S ⊂ Γ(v) there is a k-face of P that contains the vertices that correspond to S ∪ {v} (and no vertex that corresponds to a node in Γ(v) \ S). We call the subset F(S ∪ {v}) ⊆ VP of nodes corresponding to the vertices of that face the k-face spanned by S ∪ {v}. For an edge e = {v, w} ∈ EP let Ψ(v,w) : Γ(v) \ {w} −→ Γ(w) \ {v} be the map that assigns to each subset S ⊂ Γ(v) \ {w} the subset T ⊂ Γ(w) \ {v} with F(S ∪ {v, w}) = F(T ∪ {w, v}). The maps Ψ(v,w) are cardinality preserving bijections, where Ψ(w,v) is the inverse of Ψ(v,w).

Proposition 1. Let P be a simple polytope. For each e = {v, w} ∈ EP and S ⊆ Γ(v) \ {w} we have $\Psi_{(v,w)}(\overline{S}) = \overline{\Psi_{(v,w)}(S)}$ (where $\overline{U}$ denotes the complement of the set U in the respective ground set).

Proof. This follows from the fact that we have Ψ(v,w)(S1 ∩ S2) = Ψ(v,w)(S1) ∩ Ψ(v,w)(S2) for all S1, S2 ⊆ Γ(v) \ {w}.
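As a concrete illustration of the interplay between vertex-facet incidences and the graph (our sketch, not part of the paper): in a simple d-polytope two vertices are adjacent exactly when their facet sets share d − 1 elements, so GP can be read off the incidences directly. The sketch below does this for the 3-cube, whose facets are the six sets {x : x_i = b}.

```python
from itertools import combinations, product

def graph_from_incidences(vertex_facets, d):
    """Adjacency in a simple d-polytope: two vertices are adjacent iff
    they lie on exactly d-1 common facets (illustrative sketch)."""
    return {(u, v) for u, v in combinations(sorted(vertex_facets), 2)
            if len(vertex_facets[u] & vertex_facets[v]) == d - 1}

# vertex-facet incidences of the 3-cube: facet (i, b) is {x : x[i] == b}
cube = {v: {(i, v[i]) for i in range(3)} for v in product((0, 1), repeat=3)}
edges = graph_from_incidences(cube, 3)  # recovers the graph of the 3-cube
```

The recovered graph is 3-regular on 8 nodes with 12 edges, matching the fact that the graph of a simple d-polytope is d-regular.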
With the notations of Proposition 1, denote by Ψ^k_(v,w) the restriction of the map Ψ(v,w) to the (k − 1)-element subsets of Γ(v) \ {w}. There are quite obvious algorithms that compute from the maps Ψ^k_(v,w), {v, w} ∈ EP, the k-faces of P (as subsets of VP), and vice versa, in time polynomial in the number fk(P) of k-faces of the simple d-polytope P. Since both f2(P) and fd−1(P) are bounded polynomially in the size of GP, the following result follows.

Corollary 1. There are polynomial time algorithms that, given the graph GP of a simple d-polytope P, compute the set of facets of P from the set of 2-faces of P (both viewed as sets of subsets of VP), and vice versa.

For the rest of this section let P1 and P2 be two simple polytopes, and let g : VP1 −→ VP2 be an isomorphism of the graphs GP1 = (VP1, EP1) and GP2 = (VP2, EP2) of P1 and P2, respectively (i.e., g is a bijection that preserves edges in both directions). The core of Blind and Mani’s paper [2] is the following result.

Proposition 2. The graph isomorphism g maps every cycle in GP1 that corresponds to a 2-face of P1 to a cycle in GP2 that corresponds to a 2-face of P2.

Blind and Mani’s proof proceeds in the dual setting, i.e., in terms of the boundary complexes ∂P1⋆ and ∂P2⋆ of the simplicial dual polytopes P1⋆ and P2⋆ of P1 and P2, respectively. The strategy is to show that, if some cycle in GP1 corresponding to a 2-face of P1 were mapped to some cycle in GP2 that does not correspond to any 2-face of P2, then a certain sub-complex of ∂P2⋆ would have a certain non-vanishing (reduced) homology group. They complete their proof of Proposition 2 by showing that the respective homology group, however, is zero. The key ingredient they use to prove this is the following.
For each face F of P1⋆ there is a shelling order of the facets of P1⋆ (i.e., an ordering satisfying certain convenient topological properties which, however, can be expressed completely combinatorially, see Sect. 3) in which the facets containing F come first. From Proposition 2 one can deduce that the graph isomorphism g actually induces a bijection between the cycles in GP1 that correspond to 2-faces of P1 and the cycles in GP2 that correspond to 2-faces of P2. Once this is established, Proposition 1 yields the following result.

Theorem 1 (Blind & Mani [2]). Every isomorphism between the graphs GP1 and GP2 of two simple polytopes P1 and P2, respectively, induces an isomorphism between the vertex-facet incidences of P1 and P2. In particular, the graph of a simple polytope determines its entire face lattice.

3 Kalai’s Constructive Proof

Kalai realized that the existence of shelling orders as exploited by Blind and Mani can be used directly to devise a simple proof which does not rely on any topological notions like homology [15]. He formulated his proof in the original setting, i.e., for simple polytopes, where the notion corresponding to “shelling” is called “abstract objective function.” From now on, let P be a simple d-polytope with n vertices. For simplicity of notation, we will identify each face of P not only with the corresponding subset of VP, but also with the corresponding induced subgraph of GP. Furthermore, by saying that w ∈ W ⊂ VP is a sink of W we mean that w is a sink of the orientation induced on the subgraph of GP that is induced by W.

Definition 1. Every bijection ϕ : VP −→ {1, . . . , n} induces an acyclic orientation Oϕ of the graph GP of P, where an edge is directed from its larger end-node to its smaller end-node (with respect to ϕ). The map ϕ is called an abstract objective function (AOF) if Oϕ has a unique sink in every non-empty face of P (including P itself).
Such an orientation of GP is called an AOF-orientation. The inverse orientation of an AOF-orientation is an AOF-orientation as well (this follows, e.g., from Theorem 3). Thus, every AOF-orientation also has a unique source in every non-empty face. From the fact that the simplex algorithm works correctly (on every face) one easily derives that every linear function that assigns pairwise different values to the vertices of P induces an AOF-orientation (this is a consequence of the convexity of the faces). This observation yields the following fact (which is dual to the existence of the shelling orders required in Blind and Mani’s proof).

Lemma 1. Let W ⊂ VP be any face of P. There is an AOF-orientation of GP for which W is terminal, i.e., no edge in the cut defined by W is directed from W to VP \ W.

In a sense, this statement can be reversed.

Lemma 2. Let W ⊂ VP be a set of nodes inducing a k-regular connected subgraph of GP, and let O be an AOF-orientation for which W is terminal. Then W is a k-face of P.

Proof. Since O is acyclic, it has a source s in W. Let w1, . . . , wk ∈ W be the neighbors of s in W, and let F := F({s, w1, . . . , wk}) ⊂ VP be the k-face of P that is spanned by s, w1, . . . , wk. Since O has unique sources on non-empty faces, s ∈ W ∩ F must be the unique source of F. By the acyclicity of O there hence is a monotone path from s to every node in F. Since W is terminal, this implies F ⊆ W. Because both F and W induce k-regular connected subgraphs of GP, F = W follows.

Lemma 1 and Lemma 2 imply that one can compute the vertex-facet incidences of P, provided that one knows all AOF-orientations of GP. Kalai’s crucial discovery is that one can compute the AOF-orientations just from GP (i.e., without explicitly knowing the faces of P).

Definition 2. For an orientation O of GP let hk(O) be the number of nodes with in-degree k. The number

H(O) := ∑_{k=0}^{d} h_k(O) · 2^k

is called the H-sum of O.
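To make Definition 2 concrete, the following sketch (ours, not Kalai's code) computes the H-sum of the orientation induced by a generic linear function on the graph of the 3-cube. Since such a function is an abstract objective function, Lemma 3 implies that its H-sum equals the number of non-empty faces of the 3-cube, namely 3³ = 27.

```python
from itertools import combinations, product

def h_sum(vertices, edges, phi):
    """H(O) = sum over k of h_k(O) * 2^k, where O directs each edge from
    its larger to its smaller end-node with respect to phi."""
    indeg = {v: 0 for v in vertices}
    for u, v in edges:
        indeg[min(u, v, key=phi.get)] += 1  # smaller phi-value receives the edge
    return sum(2 ** k for k in indeg.values())

# graph of the 3-cube and a generic linear function phi
verts = list(product((0, 1), repeat=3))
edges = [(u, v) for u, v in combinations(verts, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]
phi = {v: 4 * v[0] + 2 * v[1] + v[2] for v in verts}
```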
Since every subset of the neighbors of a vertex v of P together with v spans a face of P containing no other neighbors of v, one finds (by double-counting) that

H(O) = ∑_{F face of P} #{sinks of O in F}   (1)

is the total number of sinks induced by O on faces of P. Consequently, since every acyclic orientation has at least one sink in every non-empty face, we have the following characterization.

Lemma 3. An orientation O of GP is an AOF-orientation if and only if it is acyclic and has minimal H-sum among all acyclic orientations of GP (which then equals the number of non-empty faces of P).

Thus, by enumerating all 2^{d·n/2} = √2^{d·n} orientations of GP one can find all AOF-orientations of GP.

Theorem 2 (Kalai [15]). There is an algorithm that computes the vertex-facet incidences of a simple d-polytope with n vertices from its graph in O(√2^{d·n}) steps.

4 Walks and Orientations

In this section, we refine the ideas of Kalai’s proof and combine them with the observation (exploited by Blind and Mani) that it suffices to identify the 2-faces from the graph of a simple polytope, even with respect to the question of polynomial time reconstruction algorithms (see Corollary 1). Let us start with a result that emphasizes the importance of the 2-faces even more. The result was known for cubes [11]; for three-dimensional simple polytopes it was independently proved by Develin [4]. For general simple polytopes it seems that it was assumed to be false (see [19, Ex. 8.12 (iv)]).

Theorem 3. An acyclic orientation O of the graph GP of a simple polytope P is an AOF-orientation if and only if it has a unique sink on every 2-face of P.

Proof. The “only if” part is clear by definition. For the “if” part, let ϕ : VP −→ {1, . . . , n} be a bijection inducing an acyclic orientation O = Oϕ that has a unique sink in every 2-face of P. Suppose there is a face F of P in which O has two sinks t1, t2 ∈ F (t1 ≠ t2).
We may assume F = P (because F itself is a simple polytope with every 2-face of F being a 2-face of P). Since GP is connected, there is a path in GP connecting t1 and t2. Let Π ≠ ∅ be the set of all such paths. For every π ∈ Π we denote by µ(π) the maximal ϕ-value of any node in π. Let πmin ∈ Π be a path with minimal µ-value among all paths in Π, and let v ∈ VP be the node in πmin with ϕ(v) = µ(πmin) (see Fig. 2).

Fig. 2. Illustration of the proof of Theorem 3. The fat grey path is the one yielding the contradiction.

Obviously, v is a source in the path πmin (in particular, v ∉ {t1, t2}). Let C be the 2-face spanned by v and its two neighbors v1 and v2 in πmin. Since v is the unique source of O in C (on a cycle the numbers of sinks and sources coincide, so O also has a unique source on every 2-face), v has the largest ϕ-value among all nodes in the union U of C and πmin. But U \ {v} induces a connected subgraph of GP containing both t1 and t2, which contradicts the minimality of µ(πmin).

From now on let, again, P be a simple polytope. The ultimate goal is to find the system of cycles in the graph GP that corresponds to the set of 2-faces of P. However, we do not even know how to prove or disprove efficiently that a given system of cycles actually is the one we are searching for. We now define more general systems having the property that one can at least generate one of them in polynomial time (which in general, of course, will not be the desired one), and among which the one corresponding to the set of 2-faces of P can be characterized using AOF-orientations.

Definition 3. (i) A sequence W = (w0, . . . , wl−1) (with l ≥ 3) of nodes in GP is called a closed smooth walk in GP if {wi, wi+1} is an edge of GP and wi−1 ≠ wi+1 for all i (where, as in the following, all indices are taken modulo l). Note that the wi need not be pairwise distinct. We will identify two closed smooth walks if they differ only by a cyclic shift and/or a “reflection” of their node sequences.
(ii) A set W of closed smooth walks in GP is a facoidal system of walks if for every triple v, v1, v2 ∈ VP (v1 ≠ v2) such that both v1 and v2 are neighbors of v there exist a unique closed smooth walk (w0, . . . , wl−1) ∈ W and a unique index i with (wi−1, wi, wi+1) ∈ {(v1, v, v2), (v2, v, v1)}.

The system of 2-faces of P yields a uniquely determined (recall the identifications mentioned in part (i) of the definition) facoidal system of walks in GP, which is denoted by CP. In general, there are many other facoidal systems of walks (see Fig. 3).

Fig. 3. A facoidal system of four walks in the graph of the three-dimensional cube.

For each path λ in GP of length two denote by v(λ) the inner node of λ. Let G⋆P be the graph defined on the paths of length two in GP, where two paths λ1 and λ2 are adjacent if and only if they share a common edge and v(λ1) ≠ v(λ2) holds (see Fig. 4). A 2-factor in a graph G is a set of (not self-intersecting) cycles in G such that every node is contained in a unique cycle. Checking whether a graph has a 2-factor and finding one (if it exists) can be reduced (by a procedure due to Tutte [17]) to searching for a perfect matching in a related graph (which can be performed in polynomial time by Edmonds’ algorithm [6]).

Fig. 4. The left constellation gives rise to an edge of G⋆P, while the right one does not.

Proposition 3. For simple polytopes P, (i) there is a (polynomial time computable) bijection between the facoidal systems of walks in GP and the 2-factors of G⋆P, (ii) checking whether a given set of node-sequences in GP is a facoidal system of walks can be done in polynomial time, and (iii) one can find a facoidal system of walks in GP in polynomial time.

Proof. Part (ii) is obvious, part (iii) follows from part (i) by Tutte’s reduction [17] and Edmonds’ algorithm [6], and part (i) is readily obtained from the definitions.
Proposition 3 shows that facoidal systems of walks have quite convenient algorithmic properties. However, they become useful only due to the fact that the system CP corresponding to the 2-faces of P can be well-characterized among them, as we will demonstrate next.

Definition 4. Let O be any orientation of GP. (i) The H2-sum of O is defined as

H2(O) := ∑_{k=0}^{d} h_k(O) · k(k−1)/2.

(ii) A closed smooth walk (w0, . . . , wl−1) in GP has a sink (source, respectively) at position i (with respect to the orientation O) if the edges {wi, wi−1} and {wi, wi+1} are both directed towards (away from, respectively) wi.

The following lemma is immediate from the definitions.

Lemma 4. For every orientation O of GP the sum H2(O) equals the total number of sinks (with respect to O) in every facoidal system of walks in GP.

Now we can formulate and prove the main result of this section (where f2(P) denotes the number of 2-faces of P).

Theorem 4. Let P be a simple polytope, W a facoidal system of walks in GP, and O an acyclic orientation of GP. Then

#W ≤ f2(P) ≤ H2(O)   (2)

holds. (i) The first inequality holds with equality if and only if W = CP (i.e., W “is” the set of 2-faces of P). (ii) The second inequality holds with equality if and only if O is an AOF-orientation of GP.

Proof. Since O is acyclic, every closed smooth walk in GP must have at least one sink with respect to O. Thus, Lemma 4 implies

#W ≤ H2(O),   (3)

yielding

f2(P) = #CP ≤ H2(O),   (4)

where by Theorem 3 equality holds in (4) if and only if O is an AOF-orientation. Because GP has an AOF-orientation O0 (see Lemma 1), inequality (3) gives #W ≤ H2(O0) = f2(P). Hence, it remains to prove that #W = #CP implies W = CP. Suppose that #W = #CP holds. It thus suffices to show CP ⊆ W (since we already know #CP ≥ #W). Let C ∈ CP be any closed smooth walk corresponding to a 2-face of P. By Lemma 2 there is an AOF-orientation OC of GP such that C is terminal with respect to OC.
Let w1 ∈ VP be the unique source in C (with respect to OC), and let w0 and w2 be the two neighbors of w1 in C. By definition, there is a (unique) W = (w0, w1, w2, . . . , wl−1) ∈ W. Because of #W = #CP = H2(OC), the closed smooth walk W has a unique sink at some position j and its unique source at position 1. Thus, the two paths (w1, w2, . . . , wj) and (w1, w0, . . . , wj) are both monotone. Since C is terminal, this implies that these two paths are contained in C. Therefore we have C = W ∈ W.

5 Good Characterizations

Theorem 4 immediately yields characterizations of sets of 2-faces and of AOF-orientations that are similar to Kalai’s characterization of AOF-orientations (see Lemma 3).

Corollary 2. Let P be a simple polytope. (i) A facoidal system of walks in GP is the system CP of 2-faces of P if and only if it has maximal cardinality among all facoidal systems of walks in GP. (ii) An acyclic orientation of GP is an AOF-orientation if and only if it has minimal H2-sum among all acyclic orientations of GP.

Fig. 5. Illustration of the proof of Theorem 4. If W ≠ C then C cannot be terminal.

Unfortunately, for arbitrary graphs the problem of finding a 2-factor with as many cycles as possible is NP-hard. This follows from the fact that the question whether a graph can be partitioned into triangles is NP-complete [7, Prob. GT11]. With respect to algorithmic questions, the following good characterizations (in the sense of Edmonds [5,6]) of the set of 2-faces (and thus, by Proposition 1, of the vertex-facet incidences) as well as of AOF-orientations may be more valuable than those in Corollary 2.

Corollary 3. Let P be a simple polytope. (i) Let W be a facoidal system of walks in GP. Either there is an acyclic orientation of GP having a unique sink in every walk of W, or there is a facoidal system of walks in GP of larger cardinality than #W.
In the first case, W = CP “is” the set of 2-faces of P; in the second, it is not.
(ii) Let O be an acyclic orientation of GP. Either there is a facoidal system W of walks in GP such that O has a unique sink in every walk in W, or there is an acyclic orientation of GP with smaller H2-sum than H2(O). In the first case, O is an AOF-orientation; in the second, it is not.

For graphs G of simple polytopes let us define Problem (A) as

  max #W subject to W a facoidal system of walks in G

and Problem (B) as

  min H2(O) subject to O an acyclic orientation of G .

A third consequence of Theorem 4 is the following result.

Corollary 4. Problems (A) and (B) form a pair of strongly dual combinatorial optimization problems. The optimal solution of Problem (A) yields the 2-faces of the respective polytope (and thus its vertex-facet incidences, see Proposition 1). Every optimal solution to Problem (B) is an AOF-orientation of the graph.

Thus, the answer to Perles’ original question of whether the vertex-facet incidences of a simple polytope are at all determined by its graph is not only “yes” (as proved by Blind and Mani), or “yes, and they can be computed” (as shown by Kalai), but at least “yes, and they can be computed by solving a combinatorial optimization problem that has a well-stated strongly dual problem.”

6 Remarks

Corollary 4 suggests designing a primal-dual algorithm for the problem of reconstructing (the vertex-facet incidences of) a simple polytope from its graph. Such an algorithm would start by computing an arbitrary facoidal system W of walks in the given graph (see Proposition 3) and any acyclic orientation O. Then it would check whether #W = H2(O). If equality holds, then by Theorem 4 one is done. Otherwise, the algorithm would try to improve either W or O by exploiting the reasons for #W < H2(O). For a concise treatment of different classical and recent applications of the primal-dual method in Combinatorial Optimization see [8].
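By Lemma 4, the optimality test in such a primal-dual scheme reduces to checking the certificate of Corollary 3(i): the acyclic orientation must have a unique sink in every walk of the facoidal system. A small sketch of that check, under the assumption that each closed walk is given as a cyclic vertex list and the orientation as a set of directed edges (representation and names are ours):

```python
def sinks_in_walk(walk, arcs):
    """Positions i of a closed walk that are sinks w.r.t. the orientation,
    i.e. both walk edges incident to walk[i] are directed towards it."""
    l = len(walk)
    return [i for i in range(l)
            if (walk[i - 1], walk[i]) in arcs
            and (walk[(i + 1) % l], walk[i]) in arcs]

def certifies_two_face_system(walks, arcs):
    """Corollary 3(i) certificate: a unique sink in every walk."""
    return all(len(sinks_in_walk(w, arcs)) == 1 for w in walks)

# Toy example: a square (graph of a 2-dimensional simple polytope), whose
# only facoidal system is the single walk around the 4-cycle.
square = [0, 1, 2, 3]
good = {(0, 1), (1, 2), (0, 3), (3, 2)}   # acyclic, unique sink at vertex 2
bad = {(0, 1), (2, 1), (0, 3), (2, 3)}    # acyclic, but sinks at 1 and 3
```

With the `good` orientation the certificate succeeds; with `bad` it fails, signalling that the orientation (not the walk system, in this toy case) must be improved.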
Such a (polynomial time) primal-dual algorithm would in particular yield polynomial time algorithms for the problem of determining an (arbitrary) AOF-orientation from the graph GP of a simple polytope P and the set of 2-faces of P, as well as for the problem of determining the 2-faces of P from GP and an AOF-orientation. As for the first of these two problems, it is worth mentioning that no polynomial time method is known that would find any AOF-orientation even if the input is the entire face lattice of P. For the second problem, no polynomial time algorithm is known either.

Let (C) be the problem of deciding, for the graph GP of a simple polytope P and a set C of subsets of nodes of GP, whether C is the set of the subsets of nodes of GP that correspond to the 2-faces of P. Let (D) be the problem of deciding, for the graph GP of a simple polytope P and an orientation O of GP, whether O is an AOF-orientation. The good characterizations in Corollary 3 may tempt one to conjecture that these two problems can be solved in polynomial time. Unfortunately, from the complexity theoretic point of view, Corollary 3 does not provide us with any evidence for that. In particular, it does not imply that problems (C) and (D) are contained in NP ∩ coNP. The reason is that the problem (G) of deciding, for a given graph, whether there is any simple polytope P such that G is isomorphic to the graph of P is neither known to be in NP nor known to be in coNP. Corollary 3 only shows that both problems (C) and (D) are in NP ∩ coNP if one restricts them to any class of graphs for which problem (G) is in NP ∩ coNP.

The problem (S) of deciding, for a given lattice L, whether there is a simple polytope P such that the face lattice LP of P is isomorphic to L is known as the Steinitz problem for simple polytopes.

Corollary 5. If problem (G) is contained in NP or coNP, then problem (S) is contained in NP or coNP, respectively.

Proof. The face lattice LP of a polytope P is ranked.
The graph G having the rank one elements of LP as its nodes, where two rank one elements are adjacent if and only if they are below a common rank two element, is isomorphic to the graph of P. It can be computed from LP in polynomial time. Corollary 1 shows that one can compute the vertex-facet incidences (and thus the entire face lattice LP, see the first paragraph of the introduction) of a simple polytope P in polynomial time (in the size of LP) from the poset that is induced by the elements of rank one (corresponding to the vertices), rank two (corresponding to the 1-faces), and rank three (corresponding to the 2-faces). Together with the first part of Corollary 3 this proves the claim.

Extending results of Mnëv [16] by using techniques described in [3], one finds (see [1, Cor. 9.5.11]) that there is a polynomial (Karp-)reduction of the problem of deciding whether a system of linear inequalities has an integral solution to problem (S). Thus, problem (S) (and therefore, by Corollary 5, problem (G)) is not contained in coNP, unless NP = coNP. Furthermore, there are rational simple polytopes P with the property that every rational simple polytope Q whose graph is isomorphic to the graph of P has vertices with super-polynomial coding lengths in the size of the graphs (this follows from Theorem B in [9]). Thus, it also seems unlikely that problem (G) is contained in NP.

The results presented in Sect. 4 hence lead neither to efficient algorithms nor to new examples of problems in NP ∩ coNP not (yet) known to be solvable in polynomial time. Nevertheless, they show that the problem of reconstructing a simple polytope from its graph can be modeled as a combinatorial optimization problem with a strongly dual problem. We hope that this is an appearance of Combinatorial Optimization Jack Edmonds is pleased to see in this volume dedicated to him.

Acknowledgements. I thank Günter M. Ziegler for valuable comments on an earlier version of the manuscript.

References

1. A. Björner, M. Las Vergnas, B. Sturmfels, N. White, and G. M. Ziegler. Oriented Matroids (2nd ed.), volume 46 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, 1999.
2. R. Blind and P. Mani-Levitska. Puzzles and polytope isomorphisms. Aequationes Math., 34:287–297, 1987.
3. J. Bokowski and B. Sturmfels. Computational Synthetic Geometry, volume 1355 of Lecture Notes in Mathematics. Springer, Heidelberg, 1989.
4. M. Develin. E-mail conversation, Nov 2000. develin@bantha.org.
5. J. Edmonds. Maximum matching and a polyhedron with 0,1-vertices. J. Res. Natl. Bur. Stand. – B (Math. and Math. Phys.), 69B:125–130, 1965.
6. J. Edmonds. Paths, trees, and flowers. Can. J. Math., 17:449–467, 1965.
7. M. R. Garey and D. S. Johnson. Computers and Intractability. A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, 1979.
8. M. X. Goemans and D. P. Williamson. The primal-dual method for approximation algorithms and its application to network design problems. In D. Hochbaum, editor, Approximation Algorithms, chapter 4. PWS Publishing Company, 1997.
9. J. E. Goodman, R. Pollack, and B. Sturmfels. The intrinsic spread of a configuration in R^d. J. Am. Math. Soc., 3(3):639–651, 1990.
10. C. Haase and G. M. Ziegler. Examples and counterexamples for Perles’ conjecture. Technical report, TU Berlin, 2001. To appear in: Discrete Comput. Geometry.
11. P. L. Hammer, B. Simeone, T. M. Liebling, and D. de Werra. From linear separability to unimodality: A hierarchy of pseudo-boolean functions. SIAM J. Discrete Math., 1:174–184, 1988.
12. M. Joswig, V. Kaibel, and F. Körner. On the k-systems of a simple polytope. Technical report, TU Berlin, 2001. arXiv: math.CO/0012204, to appear in: Israel J. Math.
13. M. Joswig and G. M. Ziegler. Neighborly cubical polytopes. Discrete Comput. Geometry, 24:325–344, 2000.
14. V. Kaibel and M. Pfetsch. Computing the face lattice of a polytope from its vertex-facet incidences. Technical report, TU Berlin, 2001. arXiv:math.MG/01060043, submitted.
15. G. Kalai. A simple way to tell a simple polytope from its graph. J. Comb. Theory, Ser. A, 49(2):381–383, 1988.
16. N. E. Mnëv. The universality theorems on the classification problem of configuration varieties and convex polytopes varieties. In O. Ya. Viro, editor, Topology and Geometry – Rohlin Seminar, volume 1346 of Lecture Notes in Mathematics, pages 527–543. Springer, Heidelberg, 1988.
17. W. T. Tutte. A short proof of the factor theorem for finite graphs. Can. J. Math., 6:347–352, 1954.
18. H. Whitney. Congruent graphs and the connectivity of graphs. Am. J. Math., 54:150–168, 1932.
19. G. M. Ziegler. Lectures on Polytopes. Springer-Verlag, New York, 1995. Revised edition 1998.

An Augment-and-Branch-and-Cut Framework for Mixed 0-1 Programming

Adam N. Letchford¹ and Andrea Lodi²

¹ Department of Management Science, Lancaster University, Lancaster LA1 4YW, England. A.N.Letchford@lancaster.ac.uk
² DEIS, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy. alodi@deis.unibo.it

Abstract. In recent years the branch-and-cut method, a synthesis of the classical branch-and-bound and cutting plane methods, has proven to be a highly successful approach to solving large-scale integer programs to optimality. This is especially true for mixed 0-1 and pure 0-1 problems. However, other approaches to integer programming are possible. One alternative is provided by so-called augmentation algorithms, in which a feasible integer solution is iteratively improved (augmented) until no further improvement is possible. Recently, Weismantel suggested that these two approaches could be combined in some way, to yield an augment-and-branch-and-cut (ABC) algorithm for integer programming. In this paper we describe a possible implementation of such a finite ABC algorithm for mixed 0-1 and pure 0-1 programs. The algorithm differs from standard branch-and-cut in several important ways.
In particular, the terms separation, branching, and fathoming take on new meanings in the primal context.

1 Introduction

One of the most successful methods for solving Integer and Mixed-Integer Linear Programs (ILPs and MILPs, respectively) is the branch-and-cut approach, in which strong cutting planes are used to strengthen the linear programming relaxations at each node of a branch-and-bound tree (see Padberg & Rinaldi [22]; Caprara & Fischetti [5]). Branch-and-cut is currently very popular because it appears to be much more robust than the use of either cutting planes or branching in isolation. However, although branch-and-cut is popular, research has been and still is being conducted into other approaches to integer programming — based on Lagrangian, surrogate or group relaxation, lattice basis reduction, test sets, and so on (see, e.g., Nemhauser & Wolsey [19]).

Of particular interest in the present paper are so-called augmentation algorithms, in which a feasible solution is iteratively improved (augmented) until no further improvement is possible (and it can be proved that this is the case). Some recent results on augmentation algorithms can be found in Firla et al. [8], Haus et al. [14], Schulz et al. [23], Thomas [24] and Urbaniak et al. [25].

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 119–133, 2003. © Springer-Verlag Berlin Heidelberg 2003

120 A.N. Letchford and A. Lodi

Recently, Weismantel [26] suggested the possibility of somehow combining elements of augmentation and branch-and-cut algorithms, to yield an augment-and-branch-and-cut algorithm for integer programming — with the convenient acronym ABC. In this paper we describe such an ABC algorithm for mixed 0-1 programs. It is based on a new primal cutting plane algorithm which we presented in [17], combined with some branching rules which are specifically tailored to the primal context. The remainder of the paper is structured as follows.
In Section 2 we briefly review the literature on branch-and-cut methods. In Section 3 we review the literature on primal cutting plane methods, including our new algorithm. In Section 4 we examine the issue of branching in the primal context. In Section 5 we show how the various components of the algorithm are integrated to give the general-purpose ABC method for mixed 0-1 programs. Preliminary computational results are reported in Section 6, while conclusions are given in Section 7.

2 Basic Concepts of Branch-and-Cut

In this section we briefly review the literature on standard branch-and-cut algorithms. Although much of this material is widely known, it is necessary to include it here so that in subsequent sections we can show how our primal ABC approach departs from the standard approach.

Suppose that a MILP has n integer-constrained variables, p continuous variables and m linear constraints. The vector of integer-constrained variables will be denoted by x and the vector of continuous variables by y. Let us suppose that the MILP takes the form:

  max { c^T x + d^T y : Ax + Gy ≤ b, x ∈ Z^n_+, y ∈ R^p_+ },

where c and d are objective coefficient vectors, A and G are matrices of appropriate dimension (m × n and m × p, respectively) and b is an m-vector of right hand sides. The feasible region of the linear programming (LP) relaxation of the MILP is the polyhedron

  P := { (x, y) ∈ R^{n+p}_+ : Ax + Gy ≤ b },

and the convex hull of feasible MILP solutions is the polyhedron

  PI := conv{ x ∈ Z^n_+, y ∈ R^p_+ : Ax + Gy ≤ b }.

We have PI ⊆ P, and in this paper we assume that the containment is strict. For simplicity we also assume throughout the paper that the MILP is a maximization problem.

Suppose that the vector (x∗, y∗) is an optimal solution to the LP relaxation. If x∗ is integral, then (x∗, y∗) is an optimal solution to the MILP and we are done.
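In this notation, testing whether a given pair (x, y) is feasible for the MILP amounts to checking nonnegativity, integrality of x, and the m rows of Ax + Gy ≤ b. A small self-contained sketch (the helper name and tolerance are ours, not from the paper):

```python
def is_feasible(A, G, b, x, y, tol=1e-9):
    """Check (x, y) against Ax + Gy <= b with x integral and (x, y) >= 0."""
    if any(abs(xi - round(xi)) > tol for xi in x):   # x must be integral
        return False
    if any(xi < -tol for xi in x) or any(yi < -tol for yi in y):
        return False
    for i in range(len(b)):                          # the m linear rows
        lhs = sum(A[i][j] * x[j] for j in range(len(x))) \
            + sum(G[i][j] * y[j] for j in range(len(y)))
        if lhs > b[i] + tol:
            return False
    return True
```

For instance, with A = [[1, 1]], G = [[1]], b = [2], the pair x = (1, 0), y = (0.5,) is feasible, while x = (0.5, 0) fails the integrality test.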
If not, then the value of the objective function provides an upper bound on the value of the optimum, but further work will be needed to solve the MILP to optimality. We can either cut (add extra inequalities) or branch (divide the problem into subproblems).

2.1 Cutting

A cutting plane (or cut) is an extra linear inequality which (x∗, y∗) does not satisfy, but which is satisfied by all solutions to the MILP. That is, the inequality must be valid for PI but not for P. If we can find a cut, then we can add it to the LP relaxation and re-solve (typically via the dual simplex method), to obtain a new (x∗, y∗). If the new x∗ is integral, we are done; otherwise the procedure of adding cuts continues until x∗ becomes integral. As cuts are added, we obtain a non-increasing sequence of upper bounds.

In order to generate a cutting plane, one faces the following problem:

The Separation Problem: Given some (x∗, y∗) ∈ P, find an inequality which is valid for PI and violated by (x∗, y∗), or prove that none exists.

A famous theorem of Grötschel, Lovász & Schrijver [13] states that, under certain technical assumptions, the separation problem is NP-hard if and only if the original MILP is. However, if (x∗, y∗) is an extreme point of the current LP relaxation, which will be the case if the simplex method is being used, then cuts can be generated fairly easily. (For example, one can use the cuts of Gomory [11], [12], or the disjunctive cuts of Balas, Ceria & Cornuéjols [1].) In general, cutting plane algorithms based on ‘general-purpose’ cuts such as Gomory or disjunctive cuts exhibit slow convergence. This can be alleviated somewhat by adding several cuts in one go before reoptimizing by dual simplex; see for example Balas, Ceria & Cornuéjols [1] and Balas, Ceria, Cornuéjols & Natraj [3].
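For a tableau row in which all variables are integer-constrained, the classical Gomory fractional cut is obtained by taking fractional parts: from a row Σ_j a_j x_j = b with b fractional one derives Σ_j frac(a_j) x_j ≥ frac(b). A sketch using exact rational arithmetic (the function name is ours; this is the pure-integer case only, and the mixed-integer cut mentioned above uses a different formula):

```python
from fractions import Fraction

def gomory_fractional_cut(row, rhs):
    """From a tableau row  sum_j a_j x_j = b  with fractional b, return the
    Gomory fractional cut  sum_j frac(a_j) x_j >= frac(b),
    where frac(t) = t - floor(t)."""
    frac = lambda t: t - (t.numerator // t.denominator)  # t - floor(t)
    return [frac(a) for a in row], frac(rhs)
```

For the row x1 + (3/2) x2 − (1/4) x3 = 7/2 this yields the cut (1/2) x2 + (3/4) x3 ≥ 1/2.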
In general, however, it is preferable to use inequalities which take problem structure into account, especially inequalities which are ‘deep’ in the sense of inducing facets (or faces of high dimension) of the polyhedron PI (see Padberg & Grötschel [20]; Nemhauser & Wolsey [19]). It is often the case that several classes of deep inequalities are known for a given problem, and frequently each class contains an exponential number of members. To use a particular class of inequalities in practice, one needs to solve the following modified separation problem:

The Separation Problem for a Class of Inequalities: Given some class F of inequalities which are valid for PI and some (x∗, y∗) ∈ P, find a member of F which is violated by (x∗, y∗), or prove that none exists.

It frequently happens that this modified separation problem is polynomially solvable for some classes of inequalities, and NP-hard for others. Yet, even in the latter case it is frequently possible to devise heuristics for separation which perform reasonably well in practice. Cutting plane algorithms based on deep inequalities typically converge much more quickly than algorithms based on general-purpose cuts. However, there is one disadvantage: due to the heuristic nature of the separation algorithms, or due to the lack of a complete description of PI, it may happen that no deep cuts can be found even though x∗ is still fractional. (This never happens with Gomory or disjunctive cuts.) If we want to solve the MILP to optimality, and if we want to avoid using general-purpose cuts, then we will need to branch as described in the next subsection (or use some other solution technique). Note that the cutting plane procedure yields a (typically good) upper bound on the optimum.

2.2 Branching

Instead of adding cuts to remove an invalid vector (x∗, y∗), one can branch, i.e., divide the original problem into subproblems.
The most common method of branching is to choose an index i such that x∗i is fractional and to create two subproblems. In one, the constraint xi ≤ ⌊x∗i⌋ is added; in the other, the constraint xi ≥ ⌈x∗i⌉ is added. (Here, ⌊·⌋ and ⌈·⌉ denote rounding down and rounding up to the nearest integer, respectively.) In this way, x∗ is excluded from each of the two branches created. Each of these subproblems can be quickly solved using the dual simplex method.

In the standard branch-and-bound method, often attributed to Land & Doig [15], recursive branching leads to a tree-like structure of subproblems. (In the case of mixed 0-1 programs, for example, a given node of the branch-and-bound tree will correspond to fixing a specified subset of the variables to zero and another specified subset of the variables to one.) The tree is then explored, e.g., by breadth-first or depth-first search. Along the way, branches are ‘pruned’ (removed from consideration) when their associated LP upper bound is lower than the current lower bound (corresponding to the best integer solution found so far). The algorithm terminates when the only remaining node is the root node of the tree.

Note that in general the set of indices i for which x∗i is fractional may be large, and therefore some kind of heuristic rule is needed for choosing the branching variable. A common method is to choose the variable whose fractional part is closest to one-half, but other rules can perform better, particularly if they take specific problem structure into account.

2.3 The Overall Scheme

The two methods of cutting and branching are in a sense complementary. Cutting planes, especially deep ones, can lead to very tight upper bounds, but may not yield a feasible solution for a long time. Branching, on the other hand, allows one to find feasible solutions relatively quickly, but the upper bounds may be too weak to limit the size of the search tree.
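The most-fractional selection rule and the floor/ceiling subproblems of Subsection 2.2 can be sketched in a few lines (function names and the tuple encoding of the two subproblems are ours):

```python
import math

def select_branching_variable(x_star, tol=1e-6):
    """Most-fractional rule: pick the index whose fractional part is
    closest to 1/2; return None if x_star is integral (within tol)."""
    best, best_gap = None, None
    for i, v in enumerate(x_star):
        f = v - math.floor(v)
        if tol < f < 1 - tol:               # genuinely fractional
            gap = abs(f - 0.5)
            if best is None or gap < best_gap:
                best, best_gap = i, gap
    return best

def branch(x_star, i):
    """The two standard subproblems: x_i <= floor(x*_i), x_i >= ceil(x*_i)."""
    return ("down", i, math.floor(x_star[i])), ("up", i, math.ceil(x_star[i]))
```

For x∗ = (0.9, 0.45, 0.0) the rule selects index 1, and branching on it yields the bounds x1 ≤ 0 and x1 ≥ 1.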
The natural solution is to integrate cutting and branching within a single algorithm. This yields the branch-and-cut technique, in which cutting planes are used at each node of the branch-and-bound tree to strengthen the LP relaxations (see Padberg & Rinaldi [22]; Balas, Ceria & Cornuéjols [2]; Balas, Ceria, Cornuéjols & Natraj [3]; Caprara & Fischetti [5]).

In branch-and-cut it is normal to use inequalities which are valid for PI as cutting planes. Inequalities of this kind are valid globally, i.e., at every node of the branch-and-cut tree. Thus, any violated inequality generated at any node may be used to strengthen the upper bound at any other node of the tree (assuming of course that it is violated there). This would not be possible with cuts which are only valid at a particular node of the tree.

There are a few more ingredients which are needed to form a sophisticated branch-and-cut algorithm. We have already mentioned rules for selecting the branching variable, but there are other key components such as pricing, variable fixing, cut pools, and so on (Padberg & Rinaldi [22]). In Section 5 we briefly review these and consider how to adapt them to the primal context.

3 Primal Cutting Plane Algorithms

3.1 The Basic Concept

The goal of a primal cutting plane algorithm is to iteratively move from one corner of PI to a better one until no further improvement is possible. To begin the method, one must know of a feasible solution to the MILP, preferably a good one, which we will denote by (x̄, ȳ). Moreover, it must be a solution with a very specific property: it must be an extreme point of both P and PI. Equivalently, it must be a basic feasible solution to the LP relaxation of the problem. If no such (x̄, ȳ) is known, then artificial variables must be used to find one, much as in the ordinary simplex method. However, in practice it is often easy to find one.
Indeed, for mixed 0-1 problems, all we need to do is find any feasible solution to the MILP: it can be converted into a basic feasible solution by fixing the x variables at their given values and solving an LP to determine the y variables.

Given a suitable (x̄, ȳ), one then constructs the associated simplex tableau. If (x̄, ȳ) is dual feasible, it is optimal and the method stops. If not, a primal simplex pivot is made, leading to a new vector (or the same one in the case of degeneracy), which we will denote by (x∗, y∗). If x∗ is integral, then we have found an improved MILP solution, and (x∗, y∗) becomes the new (x̄, ȳ). If on the other hand it is fractional, then a cutting plane is generated which cuts off (x∗, y∗). Then, another attempt is made to pivot from (x̄, ȳ), leading to a different (x∗, y∗), and so on. The method terminates when (x̄, ȳ) has been proved to be dual feasible.

In fact, it is possible to compute (x∗, y∗) from the information in the (x̄, ȳ) tableau without actually performing a pivot. Therefore it is not actually necessary to pivot to (x∗, y∗) in order to determine whether it is a feasible MILP solution or not. For this reason, some primal cutting plane algorithms only perform the pivot explicitly when it leads to an augmentation.

3.2 Algorithms from the 1960s

Several primal cutting plane algorithms, for pure ILPs only, appeared in the 1960s (Ben-Israel and Charnes [4], Young [27], [28], Glover [10]). The algorithms in [4] and [27] are extremely complicated and, as shown in [28] and [10], they can be simplified considerably without affecting convergence. The simplest of these algorithms, and in our view the best one, is that of Young [28]. Detailed descriptions of the Young algorithm can be found in Garfinkel & Nemhauser [9], in our paper [17], and also in Firla et al. [8].
The basic idea behind Young’s algorithm is to begin with an all-integer tableau and to perform primal simplex pivots whenever the pivot element is equal to 1. In this way one guarantees that the subsequent tableau (and the associated x∗) is also all-integer. If the pivot element is not equal to 1, then a cutting plane is added to the tableau so that the pivot element in the enlarged tableau is 1. Young proved that, with an appropriate lexicographic variable selection rule, his algorithm is finitely convergent. However, the performance of the algorithm in practical computation has been disappointing. The main problem is that one often encounters extremely long sequences of degenerate pivots, without either augmenting or proving dual feasibility.

3.3 Padberg and Hong’s Algorithm

The key to obtaining a viable primal cutting plane algorithm is the use of strong (preferably facet-inducing) cutting planes. To our knowledge the first authors to use strong cutting planes in a primal context were Padberg & Hong [21]. (A detailed description of this paper appears in Padberg & Grötschel [20] and in our paper [17].) Padberg and Hong implemented a primal cutting plane algorithm for the Travelling Salesman Problem (TSP), based on facet-defining cuts such as subtour elimination constraints (SECs) and 2-matching, comb and chain inequalities.

The algorithms used by Padberg and Hong to generate violated inequalities — which they called constraint identification algorithms — are very similar to what are now known as separation algorithms. However, there was a subtle difference: as well as requiring the inequality to be violated by x∗, they also required that it be tight (satisfied as an equation) at x̄. The reason for this is that, in the case of 0-1 problems like the TSP, only inequalities which are tight at x̄ can help in either augmenting or proving dual feasibility of x̄.
Therefore the problem solved by Padberg and Hong’s algorithms is as follows (see also [17], [16], [7]):

The Primal Separation Problem for a Class of Inequalities: Given some class F of inequalities which are valid for PI, some (x∗, y∗) ∈ P and some (x̄, ȳ) which is an extreme point of both P and PI, find a member of F which is violated by (x∗, y∗) and tight for (x̄, ȳ), or prove that none exists.

It is easy to show (see Padberg & Grötschel [20]) that a given primal separation problem can be transformed into the equivalent standard (dual) version. However, the reverse does not hold in general, and in the papers Letchford & Lodi [16] and Eisenbrand, Rinaldi & Ventura [7] it is shown that for many classes of inequalities, primal separation is substantially easier than standard separation. This gives some encouragement for pursuing the primal approach, especially for (mixed) 0-1 problems.

3.4 Our Algorithm

In our recent paper [17] we argued that primal cutting plane algorithms were worthy of more attention, and we proposed a generic algorithm for the case of mixed 0-1 problems based on primal separation algorithms. For the sake of brevity we do not describe this in detail here, but the basic structure of the algorithm is as follows:

– Step 1: Find a ‘good’ initial basic feasible solution (x̄, ȳ) and construct an appropriate tableau.
– Step 2: If (x̄, ȳ) is dual feasible, stop.
– Step 3: Perform a primal simplex pivot. If it is degenerate, return to step 2.
– Step 4: Let (x∗, y∗) be the new vector obtained. If (x∗, y∗) is a feasible solution to the MILP, set (x̄, ȳ) := (x∗, y∗) and return to step 2.
– Step 5: Call primal separation for known strong inequalities. If any are found, pivot back to (x̄, ȳ), add one or more cuts to the LP, and return to step 3.
– Step 6: If x∗ is integral but (x∗, y∗) is infeasible, call the ‘special’ cut generating procedure (described below) and return to step 3.
– Step 7: Generate one or more general-purpose cuts (such as Gomory fractional or mixed-integer cuts), add them to the LP, pivot back to (x̄, ȳ) and return to step 3.

The reason that step 6 is necessary is that we do not require that the entire constraint system Ax + Gy ≤ b be present in the initial LP. (For some problems, the constraint system is too large to be handled all at once.) Therefore it is theoretically possible to obtain a vector (x∗, y∗) which has an integral x component, but which is not a feasible MILP solution because it violates a constraint in the system Ax + Gy ≤ b which is not tight at (x̄, ȳ). The ‘special’ cut generating procedure to handle this exceptional case is as follows:

– 6.1. Find an inequality in the system Ax + Gy ≤ b which is violated by (x∗, y∗) yet not tight at (x̄, ȳ), and add it to the LP.
– 6.2. Perform a dual simplex pivot to arrive at a new point (x̂, ŷ) which is a convex combination of (x̄, ȳ) and (x∗, y∗). Set (x∗, y∗) := (x̂, ŷ).
– 6.3. Generate a Gomory mixed-integer cut from a row of the tableau which corresponds to a fractional structural variable and add it to the LP. (We show in [17] that it is guaranteed to be tight at (x̄, ȳ).)
– 6.4. Perform a dual simplex pivot to return to (x̄, ȳ) and discard the inequality generated in step 6.1 (which is no longer tight).

In a sense, the mixed-integer cut generated in step 6.3 is a ‘rotated’ version of the inequality generated in step 6.1 — rotated in such a way as to provide a solution to the primal separation problem. Computational results given in [17] clearly demonstrate that this algorithm is significantly better than that of Young. Nevertheless, it seems desirable to branch in step 7 instead of adding general-purpose cuts. That is our goal in the next section.
4 Branching in a Primal Context

Branching is desirable in ordinary branch-and-cut when (x∗, y∗) has a fractional x component, but the separation algorithms fail to find any violated cuts. In our primal approach, branching is desirable when (x̄, ȳ) is not dual feasible, the adjacent point (x∗, y∗) has a fractional x component, and the primal separation algorithms fail.

However, branching is problematic in the primal context. Suppose one tried to branch in the standard way, by picking a variable index i such that x∗i is fractional, and imposing either xi = 0 or xi = 1. Then the current best MILP solution (x̄, ȳ) would be excluded from one of the two subproblems, and we would lose our starting basis on one of the two branches. Therefore, just as a non-standard separation problem appears in the primal context, a non-standard form of branching is also needed. One wants to branch in such a way that (x∗, y∗) is removed, but (x̄, ȳ) remains intact on both branches. We have developed suitable branching rules for 0-1 MILPs, which we now describe.

First, let us assume for simplicity of notation that the 0-1 variables have been complemented so that x̄ is a vector of n zeroes, which we denote here by 0. Moreover, let us also assume that, to save on memory, our LP contains only constraints which are tight for (x̄, ȳ), together with upper bounds of 1 on the x variables. (Our cutting plane algorithm described in Subsection 3.4 is designed to run in this way.) We begin with a fairly naive branching rule:

Naive Branching Rule: Suppose that there is a pair of variable indices i and j such that 0 < x∗i < x∗j ≤ 1. Create two branches, one with the constraint xi = 0 added; the other with the constraint xi ≥ xj added.

With this rule, no feasible solution is lost, (x̄, ȳ) is feasible for both branches, and (x∗, y∗) is removed in both branches. One drawback, however, is that any feasible solution (x, y) which satisfies xi = xj = 0 will be valid on both branches.
It would be more desirable to branch in such a way that no feasible solution, apart from (x̄, ȳ) itself, appeared on both branches. Let us consider the issue in more detail. The goal of branching is to force a variable xi to be integral, even though x∗i is currently fractional. An ideal branching rule would therefore be (xi = 0) ∨ (xi = 1), but, as we have seen, this would make (x̄, ȳ) infeasible on the ‘up’-branch. So, suppose that the current LP feasible region (including any cutting planes added) is of the form:

  { x ∈ [0, 1]^n, y ∈ R^p_+ : Āx + Ḡy ≤ b̄ },

where all of the inequalities in the system Āx + Ḡy ≤ b̄ are currently tight for (x̄, ȳ). If we had imposed xi = 1, then the resulting feasible region would be

  P1 := { x ∈ [0, 1]^n, y ∈ R^p_+ : Āx + Ḡy ≤ b̄, xi = 1 }.

Given that we do not want to remove (x̄, ȳ), we consider instead the smallest polyhedron containing both (x̄, ȳ) and P1. It is not difficult to show that, when x̄ = 0, this polyhedron is:

  P2 := conv({(x̄, ȳ)} ∪ P1) = { x ∈ [0, 1]^n, y ∈ R^p_+ : Āx + Ḡy ≤ b̄, xj ≤ xi ∀ j ≠ i }.

By definition, this new polyhedron has no extreme points in which xi is fractional. Moreover, the next time we attempt to pivot from (x̄, ȳ), we will arrive at a point (x∗, y∗) with x∗i = 1. Therefore our main branching rule for 0-1 MILPs is as follows:

Main Branching Rule: Suppose that 0 < x∗i < 1. Create two branches, one with the constraint xi = 0 added; the other with the constraints xj ≤ xi ∀ j ≠ i added. Note that (x̄, ȳ) remains basic after the change.

With this branching rule, just as with the original one, no feasible solution is lost, (x̄, ȳ) is feasible for both branches, and (x∗, y∗) is removed in both branches. However, in addition, the only vectors which can appear on both branches are those with x component equal to 0. This is therefore a more powerful partition, which we would expect to lead to quicker convergence of the algorithm.
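As a data structure, the main rule only ever adds a single fixing or a fan of n − 1 ordering constraints. A toy sketch that emits them symbolically (the tuple encoding and function name are ours, purely for illustration):

```python
def main_branching_rule(n, i):
    """Constraints added by the main branching rule on variable x_i
    (after complementation, so x_bar = 0): the 'down' branch fixes
    x_i = 0, the 'up' branch adds x_j <= x_i for every j != i."""
    down = [("eq0", i)]                              # x_i = 0
    up = [("le", j, i) for j in range(n) if j != i]  # x_j <= x_i
    return down, up
```

For n = 4 and branching variable x2, the 'up' branch receives the three constraints x0 ≤ x2, x1 ≤ x2, x3 ≤ x2.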
Let us consider what happens when several branchings have occurred. Suppose that we wish to 'fix' variables xi for i ∈ N0 to zero, and variables xi for i ∈ N1 to one. The polyhedron of interest is now the convex hull of (x̄, ȳ) and the set

{x ∈ [0, 1]n, y ∈ IRp+ : Āx + Ḡy ≤ b̄, xi = 0 (i ∈ N0), xi = 1 (i ∈ N1)}.

By a similar argument to that given above it can be shown that, when x̄ = 0, the polyhedron in question is

P3 := {x ∈ [0, 1]n, y ∈ IRp+ : Āx + Ḡy ≤ b̄, xj = 0 ∀j ∈ N0, xj = xi ∀j ∈ N1 \ {i}, xj ≤ xi ∀j ∉ (N0 ∪ N1)},   (1)

where i is an arbitrary index in N1. That is, in order to perform further branching in an 'upward' direction it is merely necessary to add equations of the form xj = xi. Thus the system of inequalities can be easily modified as branching progresses.

Note that the very compact and clean form of the polyhedron P3 is possible because all the constraints in the LP before the branching operation are tight at (x̄, ȳ). If it is desired to include non-tight constraints in the LP, then it is still possible to perform the above branching, but the non-tight inequalities require some additional 'work'. Note also that the same kind of argument does not apply to general MILPs (i.e., MILPs in which the integer variables are not restricted to be binary). The branching rule implicitly relies on the fact that any 0-1 vector x̄ is an extreme point of the unit hypercube. It is this property which enables us to complement variables in order to get x̄ = 0. There is no analogous complementation procedure in the case of general MILPs.

To close this section, we must mention one potential problem. If only constraints which are tight are included in the LP (along with the upper bounds on the x variables), then there is a (small) risk that the LP will be unbounded. If this happens, however, it will be because the profit can be increased without limit by changing ȳ while leaving x̄ unchanged.
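The bookkeeping behind P3 can be sketched as follows: the first 'up'-branch installs the inequalities xj ≤ xi, and each later 'up'-branch merely tightens one of them into an equation. This is an illustrative data structure (all names invented), not the authors' code:

```python
# Track the branching constraints of the polyhedron P^3 of equation (1).
# 'down' on j puts j into N0 (x_j = 0); the first 'up' on i becomes the
# anchor index of (1); each later 'up' on j turns x_j <= x_i into x_j = x_i.

class PrimalBranchState:
    def __init__(self, n):
        self.n, self.N0, self.N1 = n, set(), set()
        self.anchor = None                       # the index i of (1)

    def branch_down(self, j):
        self.N0.add(j)

    def branch_up(self, j):
        self.N1.add(j)
        if self.anchor is None:
            self.anchor = j                      # first up-branch

    def constraints(self):
        i = self.anchor
        free = set(range(self.n)) - self.N0 - self.N1
        return ([f"x{j} = 0"      for j in sorted(self.N0)] +
                [f"x{j} = x{i}"   for j in sorted(self.N1 - {i})] +
                [f"x{j} <= x{i}"  for j in sorted(free)])

s = PrimalBranchState(4)
s.branch_up(1); s.branch_down(0); s.branch_up(3)
print(s.constraints())     # ['x0 = 0', 'x3 = x1', 'x2 <= x1']
```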
The solution to this is to solve a (typically small) LP to see if it is possible to augment by changing only the y component. If so, then one should augment and continue from there.

5 The Overall ABC Algorithm

At this point we have the main ingredients for the ABC algorithm: the primal separation component and the branching rules. However, some more details are necessary in order to specify how the overall algorithm works.

Fathoming of Nodes: Obviously we need some way of pruning the branching tree and, in particular, of fathoming a node. It is not difficult to see, from the properties of the primal simplex method, that in ABC a node can be fathomed when (x̄, ȳ) is dual feasible.

Cut Pool: In order to keep the size of the basis small, it is normal in standard branch-and-cut to delete inequalities from the LP whenever their slack exceeds some small positive value. However, to avoid wasting time by separating the same inequality more than once, it is common to store these constraints in a so-called cut pool (e.g., Padberg & Rinaldi [22]). The natural primal analogue of this is as follows: tight cuts are kept in the LP and non-tight cuts are kept in the pool. Whenever (x̄, ȳ) is augmented, constraints which are no longer tight can be put into the pool and any constraints in the pool which have become tight can be put into the LP.

Handling Augmentations: At first sight it would appear that every time an improved feasible solution (x̄, ȳ) is found, it will be necessary to discard the branch-and-cut tree and begin branching and cutting from scratch. In fact, this is not necessary. It is possible to work with a single tree. When a node is fathomed, it means that no feasible solution exists with objective value greater than that of (x̄, ȳ) when the associated variables are fixed. Given that the new (x̄, ȳ) has a greater objective value than the old one, this remains true after the augmentation.
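The primal cut-pool policy just described (tight cuts in the LP, non-tight cuts in the pool, exchanged at each augmentation) can be sketched as follows, with cuts stored as hypothetical (a, b) pairs meaning a·x ≤ b:

```python
# Cut-pool maintenance sketch: after an augmentation to x_new, cuts that
# are tight at x_new go to the LP and slack cuts go to the pool.

def is_tight(cut, x, eps=1e-9):
    a, b = cut
    return abs(sum(ai * xi for ai, xi in zip(a, x)) - b) <= eps

def refresh(lp_cuts, pool, x_new):
    lp2   = [c for c in lp_cuts + pool if is_tight(c, x_new)]
    pool2 = [c for c in lp_cuts + pool if not is_tight(c, x_new)]
    return lp2, pool2

cut1 = ([1, 1], 1)                 # x1 + x2 <= 1
cut2 = ([1, 0], 1)                 # x1 <= 1
lp, pool = refresh([cut1], [cut2], x_new=[1, 0])
print(len(lp), len(pool))          # 2 0: both cuts tight at the new incumbent
```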
Hence, it is necessary only to construct a new basis at the root node, which can be done using the cuts which are now tight at the new (x̄, ȳ).

Pricing: When n is very large, it is normal practice in standard branch-and-cut to begin with only a subset of the variables and to include other variables only when needed. This is done by pricing, i.e., computing the LP reduced costs of the remaining variables and adding the variables whose reduced costs are positive (Padberg & Rinaldi [22]). This can be done in the primal approach as well.

Handling Problems with m Huge: As mentioned, for many important problems, the number of constraints m needed to define P is exponential in the problem input, but optimization over P is still possible because the (standard) separation problem for these constraints is solvable in polynomial time. These problems can be dealt with in the primal context also, because (as explained in Section 4) we only keep tight constraints in the LP.

Finally, the reader will have noticed that up to now there has been no mention of upper bounds in the ABC context. This is because, strictly speaking, they are not needed: to fathom a node of the tree, it is only necessary to prove dual feasibility. Nevertheless, there are reasons for thinking that some kind of upper bounding mechanism might be desirable. The main one is this: if for some reason the ABC algorithm has to be interrupted before optimality has been achieved, then an upper bound can be used to assess the quality of the final (x̄, ȳ). The simplest way to produce an upper bound, based on the idea of Padberg and Hong [21], is to solve the final LP at the root node to optimality. This LP can be solved in a relatively small number of primal simplex pivots, because (x̄, ȳ) can be used as a starting basis.
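The pricing step admits a simple sketch. The candidate data and the tolerance are invented for illustration, and we assume a maximization problem, so that variables with positive reduced cost are worth adding:

```python
# Pricing sketch: a variable j outside the LP is added when its reduced
# cost  c_j - y . A_j  is positive for the current dual vector y.

def price(candidates, duals, eps=1e-9):
    """candidates: {name: (c_j, column A_j)}; return names worth adding."""
    added = []
    for name, (c, col) in candidates.items():
        reduced = c - sum(y * a for y, a in zip(duals, col))
        if reduced > eps:
            added.append(name)
    return sorted(added)

cands = {"x7": (5.0, [1.0, 2.0]),   # reduced cost 5 - (1 + 2) =  2 > 0
         "x8": (2.0, [2.0, 1.0])}   # reduced cost 2 - (2 + 1) = -1 <= 0
print(price(cands, duals=[1.0, 1.0]))   # ['x7']
```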
However, note that the resulting upper bound is unlikely to be better than the upper bound which would be obtained from a dual approach (assuming that similar inequalities and separation algorithms are used).

Another reason for wanting an upper bounding mechanism is to somehow eliminate variables from the problem entirely. In standard branch-and-cut, this is done as follows. Any variable with a reduced cost greater than the difference between the current upper and lower bounds may be eliminated from the problem (i.e., fixed at zero). This is called reduced cost fixing. In the ABC context we can do something similar, at least at the root node, by using the reduced costs from the optimal solution to the LP relaxation. However, again our feeling is that this might lead to less powerful fixing than is achievable in the dual context.

6 Preliminary Computational Results

We implemented a first version of an ABC algorithm as described in the previous sections. For the LP solution, we used the CPLEX 7.0 callable library of ILOG. We tested the algorithm on the same set of 50 multi-dimensional 0-1 knapsack instances we already considered in [17] and [18]. Specifically, the problems are of the form max{c^T x : Ax ≤ b, x ∈ {0, 1}^n}, where c ∈ Z^n_+, A ∈ Z^{m×n}_+ and b ∈ Z^m_+, and they were randomly generated as follows. For any pair (n, m) with n ∈ {5, 10, 15, 20, 25} and m ∈ {5, 10}, we constructed 5 random instances whose objective function coefficients are integers generated uniformly between 1 and 10. Moreover, for the instances with m = 5, the left-hand side coefficients are also integers generated uniformly between 1 and 10, while for the instances with m = 10, the left-hand side coefficients have a 50% chance of being an integer generated uniformly between 1 and 10, but also have a 50% chance of being zero. That is, these instances are sparse.
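Reduced cost fixing as described can be sketched as follows (maximization convention; the reduced costs and bounds below are invented for illustration):

```python
# Reduced cost fixing sketch (maximization): a nonbasic variable at zero
# whose reduced cost is more negative than -(UB - LB) cannot appear in
# any solution better than the incumbent, so it may be fixed at zero.

def fixable(reduced_costs, ub, lb):
    gap = ub - lb
    return sorted(j for j, rc in reduced_costs.items() if rc < -gap)

rc = {3: -7.5, 4: -1.0, 5: -3.2}
print(fixable(rc, ub=10.0, lb=7.0))   # [3, 5]: |rc| exceeds the gap of 3
```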
In all cases the right-hand side of each constraint was set to half the sum of the left-hand side coefficients¹.

Cutting. For the cutting part of the algorithm we used the same policy developed in [17]: we generate primally violated lifted cover inequalities, heuristically separated as described in [17], since their separation has been shown to be NP-hard (see [16] for details), and when we are not able to find any of them we resort to generating a round of Gomory fractional cuts strengthened as in Letchford & Lodi [18]. After 25 consecutive rounds of Gomory fractional cuts, in order to avoid numerical problems, we branch.

Branching. The branching tree is explored depth-first. In contrast to what we described in Section 4, we do not complement the current x̄; thus we distinguish between a left-branch, which fixes a variable to the value it takes in the incumbent solution, and a right-branch, which implicitly imposes the other value through the addition of a set of inequalities as described in Section 4. There are two interesting things to point out. First, the choice of exploring the tree depth-first implies that just two sets must be maintained during the search: we call N_left (resp. N_right) the set of the variables which are fixed according to (resp. contrary to) their value in the incumbent solution. These sets correspond to the sets N0 and N1 of (1), and an augmentation is simply handled by moving the variables which are currently contained in N_right to N_left (and, obviously, by manipulating some of the constraints added in the right-branches). Second, as alluded to in Section 4, only in the case of the first right-branch must a set of constraints be added, specifically n − |N_left| − 1 constraints. In subsequent right-branches, it is enough to change an inequality (previously introduced in the first right-branch) into an equality. A straightforward example of this behavior is the following.

Example.
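The instance generation scheme of this section can be reproduced approximately as follows. The rounding of the right-hand side to an integer half-sum is our assumption, as the text does not specify how odd sums are handled:

```python
import random

# Random multi-dimensional 0-1 knapsack generator following the scheme
# described above: integer data uniform in {1,...,10}; for m = 10 each
# left-hand side entry is zero with probability one half; every
# right-hand side is half the row sum (floored, by assumption).

def make_instance(n, m, seed=0):
    rng = random.Random(seed)
    c = [rng.randint(1, 10) for _ in range(n)]
    if m == 5:                                  # dense instances
        A = [[rng.randint(1, 10) for _ in range(n)] for _ in range(m)]
    else:                                       # m == 10: sparse instances
        A = [[rng.randint(1, 10) if rng.random() < 0.5 else 0
              for _ in range(n)] for _ in range(m)]
    b = [sum(row) // 2 for row in A]
    return c, A, b

c, A, b = make_instance(n=10, m=5, seed=42)
print(len(c), len(A), all(b[i] == sum(A[i]) // 2 for i in range(5)))
```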
Assume that the incumbent solution is such that x̄i = 1 and x̄j = 0. Since the first right-branch, at node h, was performed on variable xi, a constraint xj + xi ≤ 1 was added at node h. If at node k (a descendant of node h) we want to explore the right-branch associated with variable xj, it is enough to transform the previously added constraint into xj + xi = 1.

¹ This is well known to lead to non-trivial instances of the multi-dimensional 0-1 knapsack problem.

Other implementation details and further advances will be discussed in following studies. The results on the multi-dimensional knapsack instances are reported in Table 1.

Table 1. ABC algorithm. Preliminary results on multi-dimensional knapsack instances.

 m   n | Aug. | nodes to Opt. |  nodes | primal LCIs | overall cuts
 5   5 |  2.0 |         1.0   |    1.0 |       4.4   |      5.2
 5  10 |  4.8 |         1.0   |    1.0 |       7.8   |     16.8
 5  15 |  8.6 |        26.4   |   48.6 |      25.2   |    736.0
 5  20 | 10.0 |         2.4   |  189.4 |      42.6   |   2503.2
 5  25 | 13.8 |       682.0   |  765.0 |      69.0   |   2196.6
10   5 |  0.8 |         0.6   |    1.0 |       5.0   |      5.0
10  10 |  4.6 |         1.0   |    1.0 |       9.0   |     10.0
10  15 |  7.0 |         1.8   |    8.6 |      13.0   |    172.6
10  20 | 10.0 |         8.6   |   97.4 |      34.0   |    669.6
10  25 | 13.2 |       597.8   | 1521.0 |      77.0   |   8815.0

Table 1 reports, for each pair (m, n), the average results over 5 instances: the number of augmentations performed by the algorithm (Aug.), starting from the trivial solution with all the variables set to 0; the number of branching nodes needed to find the optimal solution (nodes to Opt.); and the number of branching nodes needed to prove optimality (nodes). The last two columns of the table refer to cuts, reporting the average number of primal lifted cover inequalities separated (primal LCIs) and the average number of cuts added overall (overall cuts). With respect to the algorithm outlined in Subsection 3.4, we resort to generating rounds of primally violated Gomory fractional cuts as soon as primal separation fails, and not only when an integer infeasible point is encountered. Each round of Gomory cuts contains at most 25 cuts tight at x̄. Moreover, step 1.
of the algorithm above is disregarded, in the sense that we start with the trivial 0-solution, and we do not apply during the search any heuristic to improve the current solution.

The results obtained show that an augment-and-branch-and-cut algorithm is a viable way of solving 0-1 ILPs (and MILPs), provided that all the sophisticated techniques developed for standard branch-and-cut algorithms are also implemented in this context. Indeed, a comparison with a general-purpose branch-and-cut framework like CPLEX 7.0 is totally unfair at the moment, due to its much larger arsenal of cuts and to the great level of software engineering behind the current version of CPLEX. Just to give an idea, however: by disabling presolve, the primal heuristic and cut generation in CPLEX, but including cover and Gomory fractional inequalities and performing a depth-first search, the average number of nodes required for the 5 instances with m = 5 and n = 25 is 80.2, compared with the 765.0 reported in Table 1, i.e., almost ten times fewer.

Finally, we also carried out preliminary tests of our ABC implementation on three of the very famous instances proposed by Crowder, Johnson & Padberg [6], namely p0033, p0040 and p0201. For these instances we start from the first integer solution found by CPLEX. The algorithm works quite well on the two smallest instances: it is able to prove optimality of the starting solution of p0040 without any branching and with just 2 primal cuts, while the initial solution of p0033 is augmented twice and the instance solved with 367 nodes. On p0201, instead, degeneracy becomes a severe problem. More than 95% of the time is spent performing degenerate pivots, and in this situation we resort to branching, so that the number of nodes becomes huge. This suggests that the method could be improved by some form of anti-stalling device, or by periodically 'purging' the LP of unnecessary non-binding constraints.
Further progress on this issue will be discussed in future studies.

7 Conclusion

We have examined how to perform separation and branching within the primal context and we have seen that, just as Weismantel suggested, it is possible to integrate augmentation, branching and cutting within a single framework, at least for (mixed) 0-1 problems. We have also shown that most of the components of a sophisticated branch-and-cut algorithm have a primal counterpart. Moreover, we have implemented and computationally tested the first version of an ABC algorithm, which is a completely new approach to integer programming. The effectiveness of such an approach clearly needs to be proved on harder instances, and future (actually, current) work will be devoted to obtaining sophisticated ABC algorithms and ad hoc implementations for specific classes of problems (e.g., for the TSP).

References

1. E. Balas, S. Ceria & G. Cornuéjols (1993) A lift-and-project cutting plane algorithm for mixed 0-1 programs. Math. Program. 58, 295–324.
2. E. Balas, S. Ceria & G. Cornuéjols (1996) Mixed 0-1 programming by lift-and-project in a branch-and-cut framework. Mgt. Sci. 42, 1229–1246.
3. E. Balas, S. Ceria, G. Cornuéjols & N. Natraj (1996) Gomory cuts revisited. Oper. Res. Lett. 19, 1–9.
4. A. Ben-Israel & A. Charnes (1962) On some problems of diophantine programming. Cahiers du Centre d'Études de Recherche Opérationelle 4, 215–280.
5. A. Caprara & M. Fischetti (1997) Branch-and-cut algorithms. In M. Dell'Amico, F. Maffioli & S. Martello (eds.) Annotated Bibliographies in Combinatorial Optimization, pp. 45–64. New York: Wiley.
6. H. Crowder, E.L. Johnson & M.W. Padberg (1983) Solving large-scale zero-one linear programming problems. Oper. Res. 31, 803–834.
7. F. Eisenbrand, G. Rinaldi & P. Ventura (2001) 0/1 primal separation and 0/1 optimization are equivalent. Working paper, IASI, Rome.
8. R.T. Firla, U.-U. Haus, M. Köppe, B. Spille & R. Weismantel (2001) Integer pivoting revisited.
Working paper, Institute of Mathematical Optimization, University of Magdeburg.
9. R.S. Garfinkel & G.L. Nemhauser (1972) Integer Programming. New York: Wiley.
10. F. Glover (1968) A new foundation for a simplified primal integer programming algorithm. Oper. Res. 16, 727–740.
11. R.E. Gomory (1958) Outline of an algorithm for integer solutions to linear programs. Bulletin of the AMS 64, 275–278.
12. R.E. Gomory (1960) An algorithm for the mixed-integer problem. Report RM-2597, RAND Corporation (never published).
13. M. Grötschel, L. Lovász & A. Schrijver (1988) Geometric Algorithms and Combinatorial Optimization. New York: Wiley.
14. U.-U. Haus, M. Köppe & R. Weismantel (2000) The integral basis method for integer programming. Math. Meth. of Oper. Res. 53, 353–361.
15. A.H. Land & A.G. Doig (1960) An automatic method for solving discrete programming problems. Econometrica 28, 497–520.
16. A.N. Letchford & A. Lodi (2001) Primal separation algorithms. Technical Report OR/01/5, DEIS, University of Bologna.
17. A.N. Letchford & A. Lodi (2002) Primal cutting plane algorithms revisited. Math. Methods of Oper. Res., to appear.
18. A.N. Letchford & A. Lodi (2002) Strengthening Chvátal-Gomory cuts and Gomory fractional cuts. Oper. Res. Letters, to appear.
19. G.L. Nemhauser & L.A. Wolsey (1988) Integer and Combinatorial Optimization. New York: Wiley.
20. M.W. Padberg & M. Grötschel (1985) Polyhedral computations. In E. Lawler, J. Lenstra, A. Rinnooy Kan & D. Shmoys (eds.) The Traveling Salesman Problem, pp. 307–360. Chichester: Wiley.
21. M.W. Padberg & S. Hong (1980) On the symmetric travelling salesman problem: a computational study. Math. Program. Study 12, 78–107.
22. M.W. Padberg & G. Rinaldi (1991) A branch-and-cut algorithm for the resolution of large-scale symmetric travelling salesman problems. SIAM Rev. 33, 60–100.
23. A. Schulz, R. Weismantel & G.
Ziegler (1995) 0-1 integer programming: optimization and augmentation are equivalent. In: Lecture Notes in Computer Science, vol. 979. Springer.
24. R. Thomas (1995) A geometric Buchberger algorithm for integer programming. Math. Oper. Res. 20, 864–884.
25. R. Urbaniak, R. Weismantel & G. Ziegler (1997) A variant of Buchberger's algorithm for integer programming. SIAM J. on Discr. Math. 1, 96–108.
26. R. Weismantel (1999) Private communication.
27. R.D. Young (1965) A primal (all-integer) integer programming algorithm. J. of Res. of the National Bureau of Standards 69B, 213–250.
28. R.D. Young (1968) A simplified primal (all-integer) integer programming algorithm. Oper. Res. 16, 750–782.

A Procedure of Facet Composition for the Symmetric Traveling Salesman Polytope

Jean François Maurras and Viet Hung Nguyen⋆

LIM, Université de la Mediterranée, 163 Avenue de Luminy, 13288 Marseille, France

Abstract. We propose a new procedure of facet composition for the Symmetric Traveling Salesman Polytope (STSP). Applying this procedure to the well-known comb inequalities, we obtain completely or partially known classes of inequalities, such as the clique-tree, star, hyperstar and ladder inequalities for STSP. This provides a proof that a large subset of hyperstar inequalities, which until now were only known to be valid, are indeed facet defining inequalities of STSP, and it also generalizes the ladder inequalities to a larger class. Finally, we describe some new facet defining inequalities obtained by applying the procedure.

1 Introduction

The Symmetric Traveling Salesman Polytope STSPn is the convex hull of the incidence vectors of all the Hamiltonian cycles of a complete undirected graph with n nodes. This polytope is associated with the well-known traveling salesman problem, which is one of the most basic NP-hard combinatorial optimization problems.
Thus, a considerable amount of research work has been devoted to characterizing or describing classes of facet defining inequalities for STSPn. Due to the very complex structure of STSPn it is very difficult to describe all inequalities which define facets of this polytope. A technique used to simplify the description is to define some operations on the inequalities that allow the derivation of new inequalities from others that have already been characterized. One of these operations is the composition of inequalities, which produces new facet defining inequalities by merging two or more inequalities, known to be facet defining, which satisfy some conditions. These inequalities are called the blocks of the composition. Naddef and Rinaldi [9] described a procedure of facet composition for STSP called 2-sum composition. This procedure helped to derive a large class of facet defining inequalities called regular parity path-tree inequalities. Some other facet composition procedures have been given by Queyranne and Wang [13], [12]. The block inequalities of these procedures need only satisfy some simple conditions that can be easily verified. Let us recall some known facts about STSPn. Let Kn = (Vn, En) be the complete graph on n vertices; it is known that the affine hull of the incidence vectors of the hamiltonian cycles of Kn is given by the n degree equations corresponding to the n vertices of Kn. These equations denote the degree constraints, which say that any hamiltonian cycle contains exactly two edges of ω(v), the set of edges incident with v, for all v ∈ Vn. Thus, the dimension d of STSPn is equal to n(n − 1)/2 − n. Given a valid inequality I of STSPn, a hamiltonian cycle whose incidence vector satisfies I at equality is called a tight hamiltonian cycle with respect to I.

⋆ Current address: LIP6, 4 place Jussieu, 75005 Paris

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 134–146, 2003. © Springer-Verlag Berlin Heidelberg 2003
A set of tight hamiltonian cycles with respect to I that contains d hamiltonian cycles whose incidence vectors are affinely independent is called a kernel of I. We have the following: the inequality I defines a facet of STSPn if and only if there exists a kernel of I. If I is a facet defining inequality, in general the total number of tight hamiltonian cycles with respect to I is significantly greater than d and there are many different kernels of I. Intuitively, kernels whose cardinality is close to d express the specific structure of the facet defined by I better than kernels whose cardinality is much bigger than d. More precisely, the hamiltonian cycles of a kernel whose cardinality is close to d usually share more common properties than the hamiltonian cycles of a kernel with greater cardinality. In this paper, we develop this intuition into a procedure of facet composition for STSPn. The procedure can be summarized as follows: we are given two or more facet defining inequalities and corresponding kernels whose hamiltonian cycles share some property. Composing these inequalities then amounts to composing the corresponding kernels to obtain a kernel of the new inequality. Thus, this new inequality defines a facet of STSP provided that it is valid. Our procedure therefore aims to exploit the specific structure of inequalities in order to extend them. To illustrate the procedure, we apply it to the well-known comb inequalities [2], [5]. This allows us to obtain completely or partially known classes of inequalities such as the clique-tree [6], star [4], hyperstar [3] and ladder [1] inequalities for STSP. The reader can find descriptions of these inequalities in [10] and [7]. To our knowledge, until now there has been no proof that the hyperstar inequalities are facet defining. By our procedure of composition, we provide such a proof for a large subset of the hyperstar inequalities. We also give a generalization of the ladder inequalities and some other new facet defining inequalities.
The paper is organized as follows. First, we introduce some notation and notions. We then describe the procedure of composition. Finally, we apply the procedure to the comb inequalities. Because of the space limit, we only present an extended abstract; a more complete version can be found in [11].

2 Definitions and Notations

Let G = (V, E) be an undirected graph. The edge between two vertices u and v in G will be denoted by uv. For X ⊂ V, E(X) denotes the set of all edges uv ∈ E such that both u, v ∈ X, and ω(X) denotes the set of all edges uv ∈ E such that u ∈ X and v ∈ V \ X. For X, X′ ⊂ V with X ∩ X′ = ∅, (X : X′) denotes the set of edges for which one endnode belongs to X and the other belongs to X′. For Y ⊂ E, V(Y) denotes the set of all vertices u such that at least one edge of Y is incident with u. A cycle C of G will be considered as a set of edges. Let α be a vector of R^E; for Y ⊂ E, let α(Y) = Σ_{e∈Y} α_e. Let Kn = (Vn, En) be a complete undirected graph with n vertices. Let I ≡ γ^T x ≤ γ0 be a facet defining inequality of STSPn.

Definition 1 (Subclique). A subset X ⊂ Vn is a subclique if γe = γe′ for all e, e′ ∈ E(X). This common coefficient is called γX.

Definition 2 (Critical set). A subset D ⊂ En is a critical set with respect to I if there is a kernel of I such that every hamiltonian cycle in the kernel contains at least one edge of D. Indeed, it is conceivable that for some edges of D, there is no hamiltonian cycle in the kernel which contains them.

Definition 3 (δ-critical set). A subset D ⊂ En is a δ-critical set with respect to I if there is a kernel of I such that every hamiltonian cycle in the kernel contains exactly δ mutually non-adjacent edges of D, i.e. a matching of cardinality δ of D.

Definition 4 (Co-critical sets). Let D1, D2, . . . , Dk be a collection of edge sets such that Di is a δi-critical set for all i = 1, 2, . . . , k.
These sets are co-critical if there is a kernel of I each member of which contains a matching of cardinality δi of Di for all i = 1, 2, . . . , k, and the union of these matchings forms a matching of cardinality δ1 + δ2 + . . . + δk of the set ∪_{i=1}^{k} Di.

Let G = (V, E) be an undirected graph. Let STSP(G) be the symmetric traveling salesman polytope defined on G. We assume that the dimension dG of STSP(G) is equal to |E| − |V|.

Definition 5 (3-cycle forest). A 3-cycle forest F ⊂ E of G is a spanning subgraph of G whose connected components contain exactly one cycle of length 3.

An inequality β^T x ≤ β0 or an equality β^T x = β0 is said to be F-canonic if βe = 0 for all e ∈ F, where F is a 3-cycle forest of G. We have the following lemma.

Lemma 1. Let F be a 3-cycle forest of G and let I ≡ γ^T x ≤ γ0 be a valid inequality of STSP(G). Let β^T x = β0 be an F-canonic equality.
(i) If all the tight hamiltonian cycles with respect to I also satisfy the equality β^T x = β0, and fixing k coefficients βe1, βe2, . . . , βek at 0, where ei ∈ E \ F, implies β = 0 (the zero vector), then I defines a (dG − k)-face of STSP(G).
(ii) Conversely, if I defines a (dG − k)-face of STSP(G) and there are (dG − k + 1) affinely independent tight hamiltonian cycles with respect to I which also satisfy the equality β^T x = β0, then by fixing k coefficients βe1, βe2, . . . , βek at 0, where ei ∈ E \ F, one can show that β = 0.

The proof of this lemma is based on the fact that F is a column base of the incidence matrix of G. Therefore, there is a unique F-canonic inequality (up to a positive multiple) that is equivalent to I.

3 Facet Composition by Means of δ-Critical Sets

Let us consider two complete undirected graphs Kn1 = (Vn1, En1) and Kn2 = (Vn2, En2). Let I1 ≡ γ^T x ≤ γ0 be a facet defining inequality of STSPn1 and I2 ≡ α^T x ≤ α0 be a facet defining inequality of STSPn2.
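As an aside, the matching condition in Definitions 3 and 4 can be checked mechanically on a given hamiltonian cycle: the cycle edges lying in D form disjoint paths (or the whole cycle), so a greedy scan started at a non-D edge returns a maximum matching. The following is a hypothetical sketch, not part of the paper:

```python
# Maximum matching among the edges of a hamiltonian cycle that lie in D.
# Vertices are 0..n-1; D is a set of frozenset edges.

def cycle_matching_in_D(cycle, D):
    n = len(cycle)
    edges = [frozenset((cycle[k], cycle[(k + 1) % n])) for k in range(n)]
    # rotate so the scan does not start in the middle of a run of D-edges
    start = next((k for k, e in enumerate(edges) if e not in D), 0)
    used, size = set(), 0
    for k in range(n):
        e = edges[(start + k) % n]
        if e in D and not (e & used):      # e disjoint from chosen edges
            used |= e
            size += 1
    return size

C = [0, 1, 2, 3, 4, 5]                          # a 6-cycle
D = {frozenset(e) for e in [(0, 1), (1, 2), (3, 4)]}
print(cycle_matching_in_D(C, D))                # 2: e.g. edges 01 and 34
```

A cycle C then satisfies the condition of Definition 3 for a δ-critical set D exactly when this count equals δ.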
Suppose that there are a subclique X1 of Kn1 and a subclique X2 of Kn2 such that:

H1 |X1| = |X2| = 2δ + 1 and γX1 = γX2 = ρ.
H2 E(X1) is a δ-critical set with respect to I1 and E(X2) is a δ-critical set with respect to I2.
H3 Let U1 ⊂ E(X1) (respectively U2 ⊂ E(X2)) be any set of δ + 1 edges consisting of a matching of cardinality δ plus an edge such that V(U1) = X1 (respectively V(U2) = X2). There is a tight hamiltonian cycle of Kn1 (respectively Kn2) with respect to I1 (respectively I2) that contains the edges of U1 (respectively U2).

By uniting X1 and X2 into a unique set X, we obtain a new graph G = (V, E) from Kn1 and Kn2 such that V = (Vn1 \ X1) ∪ (Vn2 \ X2) ∪ X and E = (En1 \ E(X1)) ∪ (En2 \ E(X2)) ∪ E(X). Set d1 = n1(n1 − 1)/2 and d2 = n2(n2 − 1)/2. We have |E| = d1 + d2 − |E(X)|. Let η ∈ R^E be such that:
– ηe = γe for all e ∈ En1.
– ηe = αe for all e ∈ En2.
Let us call dG = |E| − n1 − n2 + |X| = |E| − n1 − n2 + 2δ + 1 the dimension of STSP(G).

Theorem 1. The inequality I ≡ η^T x ≤ γ0 + α0 − (2δ + 1)ρ defines a (dG − 2)-face of STSP(G).

Proof. We shall show the validity of I and then explain briefly the outline of the proof that I defines a (dG − 2)-face. Let C be any hamiltonian cycle of G. We inspect this cycle by going through it in a given direction D. We meet the vertices of G, which belong to either Vn1 \ X or Vn2 \ X or X. In this inspection, since the vertices of X are like bridges between the vertices of Vn1 \ X and the vertices of Vn2 \ X, we go through, alternately, a maximal directed path contained in En1 \ E(X) and a maximal directed path contained in En2. Thus we obtain a set P of maximal disjoint directed paths which are contained either in En1 \ E(X) or in En2. The paths in En1 \ E(X) begin and end with a vertex of Vn1 \ X. The paths in En2 begin and end with a vertex of X. The total number of these paths is obviously positive and even. Suppose that there are 2p such paths. Let U ⊂ X, U = {u1, u2, . . .
, u2p}, be the set of the end vertices of all maximal directed paths in En2 of P. For i odd and 1 ≤ i ≤ 2p − 1, assume that, following the direction D, between ui and ui+1 there is a maximal directed path in En2. We derive that for j even and 2 ≤ j ≤ 2p − 2, there is a maximal directed path in En1 \ E(X) between uj and uj+1, and there is a maximal directed path in En1 \ E(X) between u2p and u1.

Fig. 1. A hamiltonian cycle C of the graph G and its partition into C1 and C2.

Replacing the maximal directed paths in En2 by the edges u1u2, u3u4, . . . , u2p−1u2p, we obtain a cycle C1 (not necessarily hamiltonian) of Kn1. Similarly, replacing the maximal directed paths in En1 by the edges u2u3, u4u5, . . . , u2pu1, we obtain a cycle C2 (not necessarily hamiltonian) of Kn2.

Remark 1. The edges of C1 ∩ E(U) (respectively C2 ∩ E(U)) form a matching of E(U). The edges of (C1 ∪ C2) ∩ E(U) form a hamiltonian cycle of the subgraph induced by U. In the case where |U| = 2, this cycle reduces to an edge counted twice (a loop if we give a direction to the two edges).

The cycle C1 is called a complement of C2 with respect to U, and vice versa. Thus, all hamiltonian cycles of G are obtained from a pair of complementary cycles C1 and C2. We now calculate the value of η(C). We obviously have η(C) = η(C1) + η(C2) − 2pρ. Let us call W1 (respectively W2) the set of vertices of C1 (respectively C2) which belong to X. The sets W1, W2 and U form a partition of X. Thus, we have |W1| + |W2| = 2δ + 1 − 2p. Note that we can complete the cycle C1 to a hamiltonian cycle C1′ of Kn1 by replacing a particular edge in C1 ∩ E(U), for example u1u2, by a path with all the vertices of W2 as interior vertices and u1, u2 as the two ends. We obtain η(C1′) = η(C1) + |W2|ρ. Similarly, for the cycle C2 we obtain a hamiltonian cycle C2′ of Kn2 with η(C2′) = η(C2) + |W1|ρ.
Since these two cycles are respectively hamiltonian cycles of Kn1 and Kn2, we also have η(C1′) = γ(C1′) ≤ γ0 and η(C2′) = α(C2′) ≤ α0. The above equations allow us to derive that η(C) ≤ γ0 + α0 − (2δ + 1)ρ. Hence, inequality I is valid.

We now characterize the tight hamiltonian cycles of G with respect to I. Let K1 be a set of tight hamiltonian cycles of Kn1 with respect to I1 which is a kernel corresponding to the critical set E(X1) = E(X). Similarly, let K2 be a set of tight hamiltonian cycles of Kn2 with respect to I2 which is a kernel corresponding to the critical set E(X2) = E(X). Let U = {u1, u2, . . . , u2δ} be a subset of cardinality 2δ of X and let C1′ ∈ K1. The δ non-adjacent edges of C1′ in E(X) are denoted by uiui+1 for all i = 1, . . . , 2δ − 1 and odd. Let w be the only vertex of X that does not belong to U, and let C2′ be a tight hamiltonian cycle of Kn2 that contains the edges u2w, wu4 and u2δu1 and the δ − 2 non-adjacent edges ujuj+1 for all j = 4, . . . , 2δ − 2 and even. Replacing the edges u2w and wu4 by the edge u2u4 in C2′, we obtain an (n2 − 1)-cycle C2 of Kn2. Note that C2 is the complement of C1′ with respect to U, and the hamiltonian cycle C of G obtained from C1′ and C2 is tight with respect to I, since η(C) = η(C1′) + η(C2) − 2δρ = γ0 + (α0 − ρ) − 2δρ = γ0 + α0 − (2δ + 1)ρ. The cycle C2 is called a maximal complement of C1′. Symmetrically, we can obtain a tight hamiltonian cycle of G from a tight hamiltonian cycle in Kn2 and one of its maximal complements. Let K be the set of all tight hamiltonian cycles with respect to I that are built as above. We hope that K is a kernel of I; in fact, we will show that K is nearly a kernel of I, i.e. K contains dG − 1 affinely independent tight hamiltonian cycles. We give the outline of the proof. Let X = {x1, x2, . . . , x2δ+1} and let F be a 3-cycle forest of G such that the path x1x2 . . . xixi+1 . . . x2δ+1 and the edge x1x3 form a connected component of F.
Let βT x = β0 be an F-canonic equality. By definition, we have βe = 0 for all e ∈ F. Suppose that all tight hamiltonian cycles of G with respect to I also satisfy βT x = β0. By using the tight hamiltonian cycles in K, we show that βe = 0 for all e ∈ E(X). We also show that the tight hamiltonian cycles of Kn1 in K1 satisfy the equality Σ_{e∈En1} βe xe = β1, and the tight hamiltonian cycles of Kn2 in K2 satisfy the equality Σ_{e∈En2} βe xe = β2, where β1 + β2 = β0.

Since F ∩ En1 is a 3-cycle forest of Kn1 and K1 is a kernel of I1, according to the second part of Lemma 1, by fixing a coefficient βe1 (e1 ∈ En1 \ F) at 0 we can derive that βe = 0 for all e ∈ En1. The same holds for a coefficient βe2 (e2 ∈ En2 \ F) and βe for all e ∈ En2. Thus, fixing two coefficients βe1 and βe2 at 0 implies that βe = 0 for all e ∈ E. According to the first part of Lemma 1, we conclude that I defines a (dG − 2)-face of STSP(G).

We now discuss how to transform this (dG − 2)-face into a facet, i.e. a (dG − 1)-face of STSP(G). Let us call crossing edges the edges in (Vn1 \ X : Vn2 \ X), which do not belong to E. We give several sufficient conditions on the facet-defining inequalities I1 and I2 under which we can add a crossing edge e to the graph G and give a value to the coefficient ηe such that the new inequality I ≡ ηT x ≤ η0, defined now on the new graph G, is valid and there are two affinely independent tight hamiltonian cycles containing the edge e. Then it is easy to see that these two hamiltonian cycles are affinely independent of the hamiltonian cycles in K. Thus I defines a facet of STSP(G).

Suppose that there exist in G two vertices v1 ∈ Vn1 \ X and v2 ∈ Vn2 \ X such that

– For all i = 2, 3, . . . , 2δ + 1, we have ηv1x1 > ηv1xi and ηv2x1 > ηv2xi. For all i, j = 2, 3, . . . , 2δ + 1 with i ≠ j, ηv1xi = ηv1xj and ηv2xi = ηv2xj. In addition, ηv1x1 + ηv2xi = ηv2x1 + ηv1xi.
– For an edge v1xi (resp. v2xi) where i = 2, 3, . . .
, 2δ + 1:
• any tight hamiltonian cycle with respect to I1 (resp. I2) containing v1xi (resp. v2xi) contains a path between v1 (resp. v2) and x1 which does not contain any vertex of X other than x1;
• any non-tight hamiltonian cycle C1 (resp. C2) with respect to I1 (resp. I2) containing v1xi (resp. v2xi) satisfies η(C1) = γ0 − ηv1x1 (resp. η(C2) = α0 − ηv2x1).
– For any δ non-adjacent edges in E(X), there exist tight hamiltonian cycles with respect to I1 (resp. I2) of Kn1 (resp. Kn2) which contain these edges and the edge v1x1 (resp. v2x1).

Proposition 1. Add the edge v1v2 to G and set ηv1v2 := ηv1x1 + ηv2xi − ρ (= ηv2x1 + ηv1xi − ρ), where xi ∈ X \ {x1}. The new inequality I ≡ ηT x ≤ η0 defines a facet of STSP(G).

Proof. Owing to the space limit, we omit the proof of the validity of I. We now specify two affinely independent tight hamiltonian cycles containing the edge v1v2. Let X1 = X ∪ {v1} and X2 = X ∪ {v2}; these sets are of even cardinality. Let C1 be a tight hamiltonian cycle with respect to I1 containing an edge v1xj where j = 2, . . . , 2δ + 1. By definition, C1 contains a path between v1 and x1, and thus we can find a cycle C2 which is complementary to C1 with respect to X1 such that the cycle C2∗ obtained by replacing in C2 the edges v1v2 and v1x1 by the edge v2x1 is a tight hamiltonian cycle with respect to I2. Let M = (C1 ∪ C2) ∩ E(X1 \ {v1}); we have V(M) = X and |M| = 2δ. We can see that the edge set C1 ∪ C2∗ ∪ {v1v2} \ M \ {v2x1, v1xj} forms a hamiltonian cycle C of G containing v1v2. We have

η(C) = η(C1) + η(C2∗) + ηv1v2 − η(M) − ηv2x1 − ηv1xj
= γ0 + α0 + ηv2x1 + ηv1xj − ρ − 2δρ − ηv2x1 − ηv1xj = γ0 + α0 − (2δ + 1)ρ.

Thus C is tight with respect to I. Symmetrically, from a tight hamiltonian cycle C2′ with respect to I2 containing v2xj (j = 2, . . . , 2δ + 1) and a cycle C1′, complementary to C2′ with respect to X2, that yields a tight hamiltonian cycle with respect to I1 containing v1x1, we can derive a tight hamiltonian cycle C′ of G with respect to I. It is easy to see that C and C′ are affinely independent.

Consider the complete graph Kn weighted by a vector γ.

Definition 6 (Perfect subclique). A subclique X ⊂ Vn is a perfect subclique if for all v ∈ Vn \ X, all components of γ corresponding to the edges in (v : X) are equal.

We can generalize Theorem 1 by replacing vertices in X1 and X2 by perfect subcliques with respect to I.

Definition 7 (Super-set). Let X = {S1, . . . , S2δ+1}, where the sets Si ∈ X are disjoint perfect subcliques. Let EX = ∪_{1≤i<j≤2δ+1} (Si : Sj). The set X is a super-set if all components of γ corresponding to the edges in EX are equal. Let us call γEX this common coefficient.

Definition 8 (Super-matching). An edge set D ⊂ En is a super-matching if D ⊂ EX and for all i = 1, . . . , 2δ + 1, |D ∩ ω(Si)| ≤ 1.

Definition 9 (Super δ-critical set). The set EX is a super δ-critical set with respect to a facet-defining inequality I of STSP^n if there exists a kernel of I each of whose members contains a super-matching of cardinality δ of X.

Definition 10 (Super co-critical set). Let D1, . . . , Dk be a collection of edge sets such that for all i = 1, . . . , k, Di is super δi-critical with respect to I. These sets are super co-critical if there exists a kernel of I such that every hamiltonian cycle in the kernel contains a super-matching of cardinality δi of Di for all i = 1, . . . , k, and the union of these super-matchings forms a super-matching of cardinality δ1 + . . . + δk of the set ∪_{i=1}^{k} Di.

Definition 11 (k-vertex-critical). [9] A vertex u ∈ Vn is k-vertex-critical if for all hamiltonian cycles C of Kn − u of maximum weight with respect to γ, we have γ(C) = γ0 − k.

Let I1 ≡ γT x ≤ γ0 and I2 ≡ αT x ≤ α0 be facet-defining inequalities of STSP^{n1} and STSP^{n2}, respectively.
Suppose that there exist a super-set X1 = {S1, . . . , S2δ+1} in Kn1 and a super-set X2 = {T1, . . . , T2δ+1} in Kn2 such that

(i) For all i = 1, . . . , 2δ + 1, we have |Si| = |Ti| and γSi = αTi; we then set ρi = γSi = αTi. For all u ∈ Si, u is ρi-vertex-critical with respect to I1, and for all v ∈ Ti, v is ρi-vertex-critical with respect to I2.
(ii) EX1 is super δ-critical with respect to I1 and EX2 is super δ-critical with respect to I2. In addition, γEX1 = αEX2 = ρ.
(iii) For every U1 ⊂ EX1 such that |U1| = δ + 1 and U1 ∩ ω(Si) ≠ ∅ for all i = 1, . . . , 2δ + 1, there is a tight hamiltonian cycle with respect to I1 which contains U1. Similarly, for every U2 ⊂ EX2 such that |U2| = δ + 1 and U2 ∩ ω(Ti) ≠ ∅ for all i = 1, . . . , 2δ + 1, there is a tight hamiltonian cycle with respect to I2 which contains U2.

By uniting X1 and X2 into a unique set X (this is done by uniting successively Si and Ti, for all i = 1, . . . , 2δ + 1, into a unique set Ri), we obtain the graph G. We have X = {R1, . . . , R2δ+1} where Ri = (Si ≡ Ti), and

|E(G)| = d1 + d2 − Σ_{i=1}^{2δ+1} |E(Si)| − |EX|.

Let η ∈ R^{E(G)} be such that
– ηe := γe for all e ∈ En1,
– ηe := αe for all e ∈ En2.

Let dG be the dimension of STSP(G); we can see that dG = |E(G)| − n1 − n2 + Σ_{i=1}^{2δ+1} |Ri|.

Theorem 2. The inequality

I ≡ ηT x ≤ γ0 + α0 − (2δ + 1)ρ − Σ_{i=1}^{2δ+1} (|Si| − 1)ρi

defines a (dG − 2)-face of STSP(G).

The sufficient condition for I to define a facet of STSP(G) becomes much simpler than in the case of Proposition 1: there must be at least one subclique of more than one vertex in X.

Proposition 2. If there exists at least one set Ri such that |Ri| ≥ 2 and ρi = ρ, then the inequality I defines a facet of STSP(G).

4 Applications to Comb Inequalities

Let us consider a complete undirected graph Kn = (Vn, En). Let H, T1, T2, . . . , T2m+1 be subsets of Vn such that
– m ≥ 1,
– Ti ∩ Tj = ∅ with 1 ≤ i < j ≤ 2m + 1,
– Ti ∩ H ≠ ∅ and Ti \ H ≠ ∅ with 1 ≤ i ≤ 2m + 1.
The inequality

I ≡ x(E(H)) + Σ_{i=1}^{2m+1} x(E(Ti)) ≤ |H| + Σ_{i=1}^{2m+1} (|Ti| − 1) − m − 1

defines a comb inequality of STSP^n. The set H is called the handle and the sets Ti are called the teeth. If |Ti ∩ H| = 1 for all i = 1, 2, . . . , 2m + 1, these inequalities are called Chvátal combs, since they were introduced by Chvátal [2]. The general comb inequalities have been studied by Grötschel and Padberg [5].

Let T = ∪_{i=1}^{2m+1} Ti and H̄ = Vn \ H. For each tooth Ti, let Zi = Ti ∩ H and Z̄i = Ti ∩ H̄. Let U = {Z1, Z2, . . . , Z2m+1} and Ū = {Z̄1, Z̄2, . . . , Z̄2m+1}. Let R be the family of sets composed of the sets Zi and all the singletons {y} where y ∈ H \ T. Let R̄ be the family of sets composed of the sets Z̄i and all the singletons {ȳ} where ȳ ∈ H̄ \ T.

For a comb inequality I, we will consider the following two types of super-sets:
– those whose elements belong to R; let P be a collection of these sets;
– those whose elements belong to R̄; let Q be a collection of these sets.

Suppose that P = {P1, P2, . . . , Pp} and Q = {Q1, Q2, . . . , Qq} and that the following conditions are satisfied by the super-sets in P and Q:

(i) For all i = 1, . . . , p, |Pi| = 2pi + 1 where pi ≥ 1. For all j = 1, . . . , q, |Qj| = 2qj + 1 where qj ≥ 1.
(ii) For all i = 1, . . . , p, Pi ∩ U ≠ ∅ and
– if Pi ⊊ U, then for all Pj with j ≠ i, Pi ∩ Pj = ∅;
– otherwise, i.e. if either Pi = U or Pi \ U ≠ ∅, then for all j ≠ i, |Pj ∩ Pi| ≤ 1.
Similarly, we have the same conditions for Ū and the super-sets Qj ∈ Q.
(iii) If p ≥ 2 and q ≥ 2, for all sets Pi ∈ P and Qj ∈ Q such that |Pi| = |Qj|, Pi ⊊ U and Qj ⊊ Ū, the number of pairs Zk, Z̄k such that Zk ∈ Pi and Z̄k ∈ Qj is less than or equal to |Pi| − 2.

Theorem 3. The subsets EPi and EQj are respectively super pi-critical and super qj-critical with respect to I, for all i = 1, 2, . . . , p and j = 1, 2, . . . , q. In addition, these sets are all super co-critical.
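The validity of a comb inequality can be checked exhaustively on a small instance. The following sketch (the instance, a Chvátal comb in K7 with handle {0, 1, 2} and three two-element teeth, is our own choice and not from the paper) enumerates all hamiltonian cycles and evaluates both sides:

```python
from itertools import permutations

# Hypothetical instance (our choice): a Chvatal comb in K7 with m = 1.
# Right-hand side: |H| + sum(|T_i| - 1) - m - 1 = 3 + 3 - 1 - 1 = 4.
n = 7
H = {0, 1, 2}
teeth = [{0, 3}, {1, 4}, {2, 5}]
rhs = len(H) + sum(len(t) - 1 for t in teeth) - 1 - 1

def lhs(tour):
    """x(E(H)) + sum_i x(E(T_i)) for the incidence vector of a tour."""
    edges = {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}
    return sum(1 for S in [H] + teeth for e in edges if e <= S)

# All tours of K7, with vertex 0 fixed to factor out rotations.
values = [lhs((0,) + p) for p in permutations(range(1, n))]
print(max(values), rhs)  # both 4: the inequality is valid and tight
```

A tour attaining the bound is 3–0–1–2–5–6–4: it uses two handle edges and one edge inside each of two teeth, plus the tooth edge {0, 3}.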
Fig. 2. An example of collections P and Q for a comb inequality.

Now we can apply the procedure of facet composition to comb inequalities. By uniting two super critical sets of R̄ of two comb inequalities, we can obtain a clique tree inequality.

Fig. 3. A clique tree inequality obtained by composing two comb inequalities.

Fig. 4. A. A star inequality obtained by composing two comb inequalities. B. A hyperstar inequality obtained by composing a star inequality and a comb inequality.

Fig. 5. A. A ladder inequality obtained by composing two comb inequalities. B. A generalized ladder inequality having nested handles.

We briefly describe the combinatorial structure of star, hyperstar and ladder inequalities. Star inequalities are like comb inequalities, but the handle H becomes a nested set. An application of our method to comb inequalities (by uniting the handles) gives a nice subset of star inequalities called multiple-handled combs. These inequalities are special cases of path inequalities, which have been proved facet-inducing for STSP^n by Naddef and Rinaldi [8]. Hyperstar inequalities admit multiple handles like clique-tree inequalities, but the handles can be nested. An application of our method to comb and multiple-handled comb inequalities (by uniting (subsets of) handles or (subsets of) teeth) gives hyperstar inequalities in which the coefficients of the edges in the handles are 1 and in which a tooth that intersects a handle also intersects every handle containing it (we call such a tooth non-degenerate; the teeth that do not have this property will be called degenerate teeth). Figure 4 illustrates an example of the application.
Ladder inequalities, defined in [1], have only two handles H1 and H2, with some teeth intersecting both of them and two other teeth T1 and T2 intersecting respectively H1 and H2. There are pendant edges between T1 ∩ H1 and T2 ∩ H2. Among the teeth intersecting both handles, there can also be degenerate teeth, which have no vertices outside the handles. An application of our method to comb inequalities gives ladder inequalities having no degenerate teeth, and generalizes them to have more than two handles. An example is illustrated in Figure 5.

Consider a tooth Ti of a comb inequality I such that |Ti \ H| ≥ 2. Let P be a maximal super-set of odd cardinality composed of the set Zi and singletons of Ti \ H. Let |P| = 2δ + 1; we have the following theorem.

Theorem 4. EP is a super δ-critical set with respect to I.

We can apply our procedure of composition to these super critical sets, and by this operation we obtain new facet-defining inequalities of STSP^n which allow an even number of teeth intersecting a handle. Figure 6 gives an example of such an inequality.

Fig. 6. A new facet-defining inequality of STSP^n.

Fig. 7. A composition of two comb inequalities giving a star inequality with a degenerate tooth.

5 Remark

To generate star, hyperstar and ladder inequalities having degenerate teeth, we need to extend the composition method so as to allow uniting one vertex belonging to a handle with one vertex outside the handles. An example is given in Figure 7.

References

1. S. Boyd, W. Cunningham, M. Queyranne, and Y. Wang. Ladders for travelling salesmen. SIAM Journal on Optimization, 5:408–420, 1995.
2. V. Chvátal. Edmonds polytopes and weakly hamiltonian graphs. Mathematical Programming, 5:29–40, 1973.
3. B. Fleischmann. Cutting planes for the symmetric traveling salesman problem. Technical report, Universität Hamburg, 1987.
4. B. Fleischmann. A new class of cutting planes for the symmetric travelling salesman problem. Mathematical Programming, 40:225–246, 1988.
5. M. Grötschel and M. Padberg. On the symmetric traveling salesman problem I: inequalities. Mathematical Programming, 16:265–280, 1979.
6. M. Grötschel and W. Pulleyblank. Clique tree inequalities and the symmetric traveling salesman problem. Mathematics of Operations Research, 11:537–569, 1986.
7. M. Jünger, G. Reinelt, and G. Rinaldi. The traveling salesman problem. In M. Ball, T. Magnanti, C. Monma, and G. Nemhauser, editors, Handbooks in Operations Research and Management Science, pages 225–330. North Holland, 1995.
8. D. Naddef and G. Rinaldi. The symmetric traveling salesman polytope: new facets from the graphical relaxation. Technical Report 248, Istituto di Analisi dei Sistemi ed Informatica, 1988.
9. D. Naddef and G. Rinaldi. The graphical relaxation: a new framework for the symmetric traveling salesman polytope. Mathematical Programming, 58:53–88, 1993.
10. D. Naddef. Handles and teeth in the symmetric traveling salesman polytope. In W. Cook and P. Seymour, editors, Polyhedral Combinatorics, volume 1 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 61–74. AMS-ACM, 1990.
11. V. H. Nguyen. Polyèdres de cycles : Description, Composition et Lifting de Facettes. PhD thesis, Université de la Méditerranée, Marseille, 2000.
12. M. Queyranne and Y. Wang. Facet-tree composition for symmetric travelling salesman polytopes. Technical Report 90-MSC-001, Faculty of Commerce and Business Administration, University of British Columbia, 1990.
13. M. Queyranne and Y. Wang. Composing facets of symmetric travelling salesman polytopes. Technical report, Faculty of Commerce and Business Administration, University of British Columbia, 1991.
Constructing New Facets of the Consecutive Ones Polytope

Marcus Oswald and Gerhard Reinelt

Institut für Informatik, Universität Heidelberg, Im Neuenheimer Feld 368, D-69120 Heidelberg, Germany, {Marcus.Oswald,Gerhard.Reinelt}@Informatik.Uni-Heidelberg.De

Abstract. In this paper we relate the consecutive ones problem to the betweenness problem by pointing out connections between their associated polytopes. We will prove some results about the facet structure of the betweenness polytope and show how facets of this polytope can be used to generate facets of the consecutive ones polytope. Furthermore, these relations will enable us to conclude that the number of facets of the consecutive ones polytope grows only polynomially if the number of columns is fixed. This gives another proof of the fact that the consecutive ones problem is solvable in polynomial time in this case.

1 Introduction

A 0/1-matrix A has the consecutive ones property for rows if its columns can be ordered in such a way that in every row the ones occur consecutively. For convenience we just say that A is C1P. Whereas it is easy to check if a matrix is C1P, it is NP-hard to compute for a given 0/1-matrix the minimum number of entries to be switched to obtain the consecutive ones property. This is the so-called consecutive ones problem. If there are individual penalties for switching an entry and we want to minimize the total cost of converting the matrix to be C1P, we speak of the weighted consecutive ones problem (WC1P).

The input of the betweenness problem consists of a set of n objects 1, 2, . . . , n and a set B of betweenness conditions. Every element of B is a triple (i, j, k) requesting that object j should be placed between objects i and k. The task is to find an ordering of the objects such that as few betweenness conditions as possible are violated.
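As a tiny illustration of the betweenness problem just defined, one can enumerate all orderings for a hand-made instance (the conditions below are our own example, not from the paper):

```python
from itertools import permutations

def violations(order, conds):
    """Number of betweenness triples (i, j, k) in conds not respected by
    order: triple (i, j, k) asks that j be placed between i and k."""
    pos = {v: p for p, v in enumerate(order)}
    return sum(1 for i, j, k in conds
               if not min(pos[i], pos[k]) < pos[j] < max(pos[i], pos[k]))

# Hypothetical instance on objects 0..3: these three conditions cannot
# all be satisfied simultaneously, so one violation is unavoidable.
B = [(0, 1, 2), (1, 2, 3), (0, 3, 1)]
best = min(permutations(range(4)), key=lambda o: violations(o, B))
print(best, violations(best, B))
```

Exhaustive enumeration like this is only feasible for very small n; the point of the polyhedral approach reviewed below is to handle realistic instances by branch-and-cut.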
If violations are penalized by weights, we call the problem of finding an ordering which minimizes the sum of penalties the weighted betweenness problem (WBWP). Note that there can be non-betweenness conditions as well, requiring that a certain object should not be placed between two objects. These conditions can be dealt with easily, so we do not discuss them here. Both the WC1P and a variant of the WBWP occur as models in computational biology. In [2] and [3] first branch-and-cut approaches for these two problems are presented.

We review some definitions for the consecutive ones problem. For a 0/1-matrix A with m rows and n columns let χA = (a11, . . . , a1n, . . . , am1, . . . , amn) be its characteristic vector. We define the consecutive ones polytope as

P^{m,n}_C1 = conv{χA | A is an (m, n)-matrix with C1P}.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 147–157, 2003. © Springer-Verlag Berlin Heidelberg 2003

It is easy to see that P^{m,n}_C1 has full dimension m · n. We do not want to discuss P^{m,n}_C1 in detail here, but mention only that trivial lifting is possible for P^{m,n}_C1 and that the trivial inequalities xij ≥ 0 and xij ≤ 1 are facet-defining for all these polytopes. Proofs of these theorems and an integer programming formulation of the WC1P that consists of facet-defining inequalities only are given in [3].

In section 2 we discuss some aspects of the betweenness polytope. In particular, we prove a trivial lifting theorem for this polytope. In section 3 we define a master polytope which will allow us to point out some relations between the two polytopes. We show how facets of the betweenness polytope induce facets of the consecutive ones polytope. Based on this observation we then show in section 4 that the consecutive ones polytope only has a polynomial number of facets if the number of columns is fixed.
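Whether a small matrix is C1P can be decided by brute force over all column orders; a sketch (helper names are ours, and this is exponential in n, so only for illustration):

```python
from itertools import permutations

def consecutive(row, perm):
    """True if the ones of `row` are consecutive under column order perm."""
    ones = [p for p, col in enumerate(perm) if row[col] == 1]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

def is_c1p(matrix):
    """Brute-force test of the consecutive ones property for rows."""
    n = len(matrix[0])
    return any(all(consecutive(row, perm) for row in matrix)
               for perm in permutations(range(n)))

A = [[1, 1, 0, 1],   # C1P: e.g. column order 0, 3, 1, 2 works
     [0, 1, 1, 0]]
B = [[1, 0, 1, 0],   # not C1P: each row forces a pair of columns to be
     [0, 1, 0, 1],   # adjacent, but a linear order of 4 columns has
     [1, 1, 0, 0],   # only 3 adjacent pairs, and 5 are required here
     [0, 0, 1, 1],
     [1, 0, 0, 1]]
print(is_c1p(A), is_c1p(B))
```

The comment on the second matrix is a standard adjacency-counting argument; the well-known linear-time C1P recognition algorithms (e.g. via PQ-trees) are of course preferable to this enumeration.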
2 The Betweenness Polytope

In the following we use indices i(j)k (betweenness triples) for pairwise different objects i, j and k, indicating that we consider whether object j is between objects i and k or not. Since the indices i(j)k and k(j)i are equivalent, we only use i(j)k such that i < k. In vectors, triples are ordered lexicographically, i.e., we use the order 1(2)3, 1(2)4, . . . , n−1(n−2)n.

For each permutation π of n ≥ 3 elements 1, . . . , n and each betweenness triple i(j)k we define an indicator χπ_{i(j)k} which is 1 if and only if the element j lies between the elements i and k in the permutation π, and 0 otherwise. The 3·(n choose 3)-dimensional characteristic betweenness vector associated with a permutation π is χπ = (χπ_{1(2)3}, χπ_{1(2)4}, . . . , χπ_{n−1(n−2)n}). The betweenness polytope P^n_BW, n ≥ 3, is the convex hull of all betweenness vectors, i.e.,

P^n_BW = conv{χπ | π is a permutation of {1, . . . , n}}.

It is easy to show that the following is true.

Lemma 1. For an arbitrary point x = (x1(2)3, . . . , xn−1(n−2)n) ∈ P^n_BW and three pairwise different i, j, k, 1 ≤ i, j, k ≤ n, the betweenness equation x_{i(j)k} + x_{i(k)j} + x_{j(i)k} = 1 holds.

This lemma characterizes exactly (up to linear combinations) all equations that are valid for P^n_BW.

Theorem 1. P^n_BW has dimension 2·(n choose 3).

A proof of this theorem is given in [4].

Let aT x ≤ a0 be valid for P^n_BW and n′ > n. We say that the inequality āT x ≤ a0 for P^{n′}_BW is obtained from aT x ≤ a0 by trivial lifting if

ā_{i(j)k} = a_{i(j)k} if 1 ≤ i, j, k ≤ n, and ā_{i(j)k} = 0 otherwise.

Trivial lifting means that larger polytopes inherit all nontrivial facets of smaller polytopes. To prove this we need the following two lemmata.

Lemma 2. Let aT x ≤ a0 be facet-defining for P^n_BW, n ≥ 4. For each pair i, j ∈ {1, . . . , n} there is at least one triple e(f)g with |{i, j} ∩ {e, f, g}| ≤ 1 and a_{e(f)g} ≠ 0.

Proof.
Assume that there is a pair i, j such that for all triples e(f)g with |{i, j} ∩ {e, f, g}| ≤ 1 we have a_{e(f)g} = 0. Then the inequality can be written as

Σ_{k ∉ {i,j}} (a_{i(j)k} x_{i(j)k} + a_{i(k)j} x_{i(k)j} + a_{j(i)k} x_{j(i)k}) ≤ a0.

It is easy to find a permutation π such that for every k,

a_{i(j)k} χπ_{i(j)k} + a_{i(k)j} χπ_{i(k)j} + a_{j(i)k} χπ_{j(i)k} = βk,

where βk = max{a_{i(j)k}, a_{i(k)j}, a_{j(i)k}}. Therefore aT x ≤ a0 is already implied by the n − 2 trivial inequalities βk (x_{i(j)k} + x_{i(k)j} + x_{j(i)k}) ≤ βk (which are in fact equations).

Lemma 3. Let aT x ≤ a0 be facet-defining for P^n_BW, n ≥ 4. For each pair i, j ∈ {1, . . . , n} there is a vector x∗ with

x∗_{e(f)g} = 1/2 if {i, j} = {e, f} or {i, j} = {f, g},
x∗_{e(f)g} = 0 if {i, j} = {e, g},
x∗_{e(f)g} = ⋆ otherwise,

such that x∗ can be written as an affine combination of betweenness vectors χπ which satisfy aT χπ = a0.

Proof. Because of Lemma 2 there is at least one triple e(f)g with a_{e(f)g} ≠ 0 and x∗_{e(f)g} = ⋆. After setting all the other “⋆”-entries in an arbitrary way, only fulfilling all the betweenness equations, we can always choose x∗_{e(f)g}, x∗_{f(e)g} and x∗_{e(g)f} in such a way that x∗_{e(f)g} + x∗_{f(e)g} + x∗_{e(g)f} = 1 and aT x∗ = a0 holds. Note that these entries do not necessarily have to lie between 0 and 1. Since aT x ≤ a0 defines a facet of P^n_BW, there must be an affine combination of the vertices of the facet which represents x∗.

We are now ready to prove the lifting theorem.

Theorem 2. Let aT x ≤ a0 be facet-defining for P^n_BW, n ≥ 4, and let n′ > n. If aT x ≤ a0 is trivially lifted, then the resulting inequality defines a facet of P^{n′}_BW.

Proof. It is sufficient to show the theorem for n′ = n + 1. Let S(n) denote the set of permutations of {1, 2, . . . , n}. Since all permutations π′ ∈ S(n + 1) contain a permutation π ∈ S(n), the inequality aT x ≤ a0 remains valid for P^{n+1}_BW. We only have to show that all equations bT x = b0 that hold for all x ∈ P^{n+1}_BW which satisfy aT x ≤ a0 with equality are multiples of aT x = a0. In our case it is even sufficient to show that b_{e(f)g} = b_{f(e)g} = b_{e(g)f} for all triples e, f, g with n + 1 ∈ {e, f, g}, because in this case we can reduce these coefficients to 0 by adding suitable multiples of betweenness equations. Afterwards the equation can be trivially downlifted, and since aT x ≤ a0 is facet-defining for P^n_BW, bT x = b0 must be a multiple of aT x = a0.

We proceed as follows. We show that for each pair i, j ∈ {1, . . . , n} the equation b_{i(j)n+1} = b_{i(n+1)j} = b_{j(i)n+1} holds. We construct the vector x∗ according to Lemma 3. There are l permutations π1, . . . , πl ∈ S(n) satisfying aT χπh = a0 and

x∗ = Σ_{h=1}^{l} dh χπh with Σ_{h=1}^{l} dh = 1.

W.l.o.g. we can choose the permutations in such a way that the element i occurs before j (otherwise we can reverse the order without changing the betweenness vectors). Now we construct l permutations π′1a, . . . , π′la ∈ S(n + 1) from the permutations π1, . . . , πl by inserting the element n + 1 directly before i, and another l permutations π′1b, . . . , π′lb ∈ S(n + 1) by inserting the element n + 1 directly after j. So we have

π′ha = (. . . , n + 1, i, . . . , j, . . . ) and π′hb = (. . . , i, . . . , j, n + 1, . . . ).

Due to this construction, bT χπ′ha = bT χπ′hb = b0 holds for the betweenness vectors of all of these permutations. Summing up all the equations and inserting the values of x∗, we obtain after some calculations that

0 = Σ_{h=1}^{l} (dh bT χπ′ha − dh bT χπ′hb) = Σ_{h=1}^{l} dh bT (χπ′ha − χπ′hb) = . . . = b_{n+1(i)j} − b_{i(j)n+1}.

Take for example the contribution of the triples i(k)n+1, k ≠ j, to this sum.
Since χπ′ha_{i(k)n+1} = 0 and χπ′hb_{i(k)n+1} = χπ′hb_{i(k)j}, we get

Σ_{h=1}^{l} Σ_{k≠j} (dh b_{i(k)n+1} χπ′ha_{i(k)n+1} − dh b_{i(k)n+1} χπ′hb_{i(k)n+1})
= − Σ_{k≠j} b_{i(k)n+1} Σ_{h=1}^{l} dh χπh_{i(k)j}
= − Σ_{k≠j} b_{i(k)n+1} x∗_{i(k)j}
= − Σ_{k≠j} b_{i(k)n+1} · 0 = 0.

The calculations for the other triples work similarly. To get the second relation we construct permutations π′hc and π′hd, where

π′hc = (. . . , i, n + 1, . . . , j, . . . ) and π′hd = (. . . , i, . . . , n + 1, j, . . . ).

Here we obtain

0 = Σ_{h=1}^{l} dh bT (χπ′ha − χπ′hc − χπ′hd + χπ′hb) = . . . = b_{n+1(i)j} − 2b_{i(n+1)j} + b_{i(j)n+1}.

From both relations we get the desired result b_{i(j)n+1} = b_{i(n+1)j} = b_{j(i)n+1}.

In contrast to the consecutive ones polytope, trivial inequalities only define facets in the smallest case. In [4] it is shown that the complete linear description of P^3_BW is given by the betweenness equation x1(2)3 + x1(3)2 + x2(1)3 = 1 and the three trivial inequalities x1(2)3 ≥ 0, x1(3)2 ≥ 0 and x2(1)3 ≥ 0, and that for n ≥ 4 none of the trivial inequalities x_{i(j)k} ≥ 0 or x_{i(j)k} ≤ 1 is facet-defining.

Since we have equations, the same facet-defining inequality can be stated in various ways. Therefore we define a normal form with the property that two facet-defining inequalities define the same facet if and only if their normal forms coincide.

Definition 1. A facet-defining inequality is in normal form if it has the following properties.
i) The inequality is written as Σ a_{i(j)k} x_{i(j)k} ≥ a0.
ii) All coefficients a_{i(j)k} are nonnegative coprime integers.
iii) At least one of the three coefficients a_{i(j)k}, a_{i(k)j} and a_{j(i)k} is zero for pairwise different elements i, j and k, 1 ≤ i, j, k ≤ n.

It is easily seen that the normal form of a facet-defining inequality is unique and that it is easy to convert an inequality to normal form.

3 A Common Polytope

Both the feasible solutions of the WBWP and of the WC1P are based on the permutations of n elements.
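Since both problems are built on permutations, the betweenness indicators of section 2 and the equations of Lemma 1 are easy to check exhaustively for small n (a sketch; function names are ours):

```python
from itertools import combinations, permutations

def chi(pi, i, j, k):
    """Betweenness indicator: 1 iff element j lies between elements i and
    k in the permutation pi (given as a tuple of objects in order)."""
    pos = {v: p for p, v in enumerate(pi)}
    return 1 if min(pos[i], pos[k]) < pos[j] < max(pos[i], pos[k]) else 0

# Lemma 1: x_{i(j)k} + x_{i(k)j} + x_{j(i)k} = 1 for every permutation,
# since exactly one of the three objects lies between the other two.
n = 5
for pi in permutations(range(n)):
    for i, j, k in combinations(range(n), 3):
        assert chi(pi, i, j, k) + chi(pi, i, k, j) + chi(pi, j, i, k) == 1
print("betweenness equations hold for all permutations of", n, "objects")
```

This also makes the counting in Theorem 1 plausible: the 3·(n choose 3) coordinates satisfy (n choose 3) independent equations, leaving dimension 2·(n choose 3).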
For examining relations between the two problems we define a master problem combining their constraints. Here we seek a permutation satisfying betweenness conditions as well as having an associated matrix in which the ones appear consecutively.

We say that a permutation π of n elements establishes the consecutive ones conditions of a 0/1-matrix A ∈ {0,1}^{m×n} (π establishes C1C of A) if in the permuted matrix A′ ∈ {0,1}^{m×n} with a′_{ri} = a_{rπ−1(i)} the ones occur consecutively in every row. The common betweenness and consecutive ones polytope P^{m,n}_BWC1 is defined as

P^{m,n}_BWC1 = conv{(χπ, χA) | A ∈ {0,1}^{m×n} and π establishes C1C of A}.

The single polytopes can simply be obtained from P^{m,n}_BWC1: the projection of P^{m,n}_BWC1 onto the betweenness variables x_{i(j)k} is the betweenness polytope P^n_BW, and the projection onto the consecutive ones variables x_{ri} is the consecutive ones polytope P^{m,n}_C1. Of course, all valid inequalities for P^{m,n}_C1 or P^n_BW remain valid for P^{m,n}_BWC1.

One can easily construct valid inequalities for P^{m,n}_BWC1 that are formulated both on betweenness and on consecutive ones variables. Let A = (a_{ri}) be an (m, n)-matrix, n ≥ 3, with C1P. Then for all rows r of A, all betweenness triples i(j)k of columns i, j, k of A and all permutations π that establish C1C of A we have

χπ_{i(j)k} ≤ 2 − a_{ri} + a_{rj} − a_{rk},

since if columns i and k both carry a one in row r and column j lies between them in π, then column j must carry a one as well. Based on this observation we can define the so-called linking constraints

x_{i(j)k} ≤ 2 − x_{ri} + x_{rj} − x_{rk},

which are valid for P^{m,n}_BWC1 for all r ∈ {1, . . . , m} and all betweenness triples i(j)k.

Intuitively there is a close relationship between the consecutive ones and the betweenness problem. We now establish a connection between the facets of the two polytopes by making use of the linking constraints. These constraints are used to eliminate betweenness variables and replace them by consecutive ones variables.
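The validity of the linking constraints can likewise be verified exhaustively for small n: for every permutation and every 0/1-row whose ones are consecutive in that order, the constraint holds (a sketch with our own helper names):

```python
from itertools import combinations, permutations, product

def between(pi, i, j, k):
    """1 iff column j lies between columns i and k in the order pi."""
    pos = {c: p for p, c in enumerate(pi)}
    return 1 if min(pos[i], pos[k]) < pos[j] < max(pos[i], pos[k]) else 0

n = 4
bad = 0
for pi in permutations(range(n)):
    pos = {c: p for p, c in enumerate(pi)}
    for row in product((0, 1), repeat=n):
        ones = sorted(pos[c] for c in range(n) if row[c])
        if ones and ones[-1] - ones[0] + 1 != len(ones):
            continue  # pi does not establish C1C of this row
        for i, k in combinations(range(n), 2):
            for j in range(n):
                if j in (i, k):
                    continue
                if between(pi, i, j, k) > 2 - row[i] + row[j] - row[k]:
                    bad += 1
print(bad)  # 0: no linking constraint is ever violated
```

The only binding case is row[i] = row[k] = 1 and row[j] = 0, where the right-hand side drops to 0 and forces the betweenness indicator to 0, exactly as the consecutiveness of the ones demands.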
Since in the consecutive ones problem we actually deal with matrices, we denote a valid inequality for $P_{C1}^{m,n}$ by $B \circ x \le b_0$, where $B$ is the coefficient matrix, $x$ is the matrix of the variables, and $B \circ x = \sum_{i=1}^{m}\sum_{j=1}^{n} b_{ij}x_{ij}$. Further we write $B_{i.}^T x_{i.}$ instead of $\sum_{j=1}^{n} b_{ij}x_{ij}$.

Theorem 3. Let $a^T x \ge a_0$ be a facet-defining inequality for $P_{BW}^n$, $n \ge 4$, in normal form. Further let $m$ be the number of nonzero coefficients of $a$. We assign pairwise different numbers $r_{i(j)k} \in \{1, \dots, m\}$ to the betweenness triples $i(j)k$ with $a_{i(j)k} > 0$. Let the inequality $B \circ x \le b_0$ be obtained by summing up $-a^T x \le -a_0$ and all (scaled) linking constraints $a_{i(j)k}\, x_{i(j)k} \le a_{i(j)k}\,(2 - x_{r_{i(j)k}\,i} + x_{r_{i(j)k}\,j} - x_{r_{i(j)k}\,k})$. Then $B \circ x \le b_0$ is facet-defining for $P_{C1}^{m,n}$.

Constructing New Facets of the Consecutive Ones Polytope 153

Proof. We have to show that all equations $C \circ x = c_0$ that hold for all $x \in P_{C1}^{m,n}$ satisfying $B \circ x \le b_0$ with equality are multiples of $B \circ x = b_0$. Since $B \circ x \le b_0$ is a conical sum of $-a^T x \le -a_0$ and some linking constraints, all these inequalities must be fulfilled with equality for all C1P-matrices $A$ that satisfy $B \circ \chi^A = b_0$ and associated permutations $\pi$ that establish C1C of $A$ with betweenness vector $\chi^\pi$.

Now we compute the entries of row $r_{i(j)k}$ of $C$. Since $a^T x \ge a_0$ is a facet of $P_{BW}^n$, $n \ge 4$, there must be at least one vector $(\chi^A, \chi^\pi)$ with $\chi^\pi_{i(j)k} = 1$ (otherwise all vertices of the facet would fulfill $x_{i(j)k} = 0$). Because equality must hold for the linking constraint $x_{i(j)k} \le 2 - x_{r_{i(j)k}\,i} + x_{r_{i(j)k}\,j} - x_{r_{i(j)k}\,k}$, there are three possible combinations
$$(\chi^A_{r_{i(j)k}\,i},\ \chi^A_{r_{i(j)k}\,j},\ \chi^A_{r_{i(j)k}\,k}) \in \{(1,0,0),\ (0,0,1),\ (1,1,1)\}.$$
This means that row $r_{i(j)k}$ of $\chi^A$, written in the order of $\pi$, consists of a single (possibly empty) consecutive block of ones that either contains column $i$ but neither $j$ nor $k$, contains column $k$ but neither $i$ nor $j$, or contains all three columns.

By substituting differences of suitable rows and the remainders of the corresponding matrices into the equation $C \circ x = c_0$, one easily concludes that $C_{r_{i(j)k}\,l} = 0$ for $l \notin \{i, j, k\}$ and $C_{r_{i(j)k}\,i} = -C_{r_{i(j)k}\,j} = C_{r_{i(j)k}\,k} =: c_{i(j)k}$. This holds for all triples $i(j)k$ with $a_{i(j)k} > 0$. And since all linking constraints must be satisfied with equality, we can construct the equation $\sum_{i(j)k} c_{i(j)k}\, x_{i(j)k} =: c^T x = \tilde c_0$, which is satisfied by all considered betweenness vectors $\chi^\pi$. But $a^T x \ge a_0$ is facet-defining for $P_{BW}^n$, and therefore $c^T x = \tilde c_0$ must be a multiple of $a^T x = a_0$, which also means that $C \circ x = c_0$ is a multiple of $B \circ x = b_0$.

Consider for example the inequality $x_{1(2)3} + x_{1(3)2} + x_{2(1)4} + x_{3(1)4} \ge 1$, which is facet-defining for $P_{BW}^4$ and in normal form. The four linking constraints
$$-x_{1(2)3} \ge -2 + x_{11} - x_{12} + x_{13}$$
$$-x_{1(3)2} \ge -2 + x_{21} - x_{23} + x_{22}$$
$$-x_{2(1)4} \ge -2 + x_{32} - x_{31} + x_{34}$$
$$-x_{3(1)4} \ge -2 + x_{43} - x_{41} + x_{44}$$
are valid for $P_{BWC1}^{4,4}$. Summing up these five inequalities (thus eliminating the betweenness variables) and multiplying by $-1$ yields the inequality
$$\begin{pmatrix} 1 & -1 & 1 & 0 \\ 1 & 1 & -1 & 0 \\ -1 & 1 & 0 & 1 \\ -1 & 0 & 1 & 1 \end{pmatrix} \circ x \le 7.$$
As already shown in [3], this inequality is facet-defining for $P_{C1}^{4,4}$.

This relation between facets of the two polytopes can be used for a new separation procedure for the WC1P. Assume we are given an LP solution $x^* = (x^*_{ij})$ of the WC1P. First we compute a virtual LP solution $y^* = (y^*_{i(j)k})$ for the WBWP by setting
$$y^*_{i(j)k} = \min_r \{\, 2 - x^*_{ri} + x^*_{rj} - x^*_{rk} \,\}.$$
Now we can use any separation procedure for the WBWP to find betweenness facets violated by $y^*$. From any facet found in this way one can construct a facet for the WC1P that is violated by $x^*$.

4 The Consecutive Ones Problem for a Fixed Number of Columns

If we start with a facet $a^T x \ge a_0$ of $P_{BW}^n$ in normal form, clearly the support of the constructed facet of $P_{C1}^{m,n}$ has at most $n$ columns. But what about the number of rows $m$? According to the construction, $m$ is the number of nonzero coefficients of $a$. Since at least one of three coefficients is zero, we have $m \le 2\binom{n}{3}$. Taking into account that these facets can be trivially lifted to facets of $P_{C1}^{m',n}$ with $m' > m$, the total number of constructed facets of $P_{C1}^{m,n}$ for fixed $n$ and arbitrary $m$ is $O(m^{2\binom{n}{3}})$ and thus polynomial in $m$.

Unfortunately, not every facet can be constructed in this way. Now we want to generalize this method to show the surprising result that for fixed $n$ the total number of facets of $P_{C1}^{m,n}$ grows polynomially in $m$, with the consequence that the WC1P is polynomially solvable for fixed $n$.

Lemma 4. Let $a^T x \ge a_0$ be a facet-defining inequality for $P_{BW}^n$ in normal form and $B \circ x \le b_0$ a derived facet of $P_{C1}^{m,n}$. Further let $Q = \{\pi \mid a^T \chi^\pi = a_0\}$ and $Q' = \{\pi \mid \text{there exists } C \in \{0,1\}^{m \times n} \text{ with } B \circ C = b_0 \text{ such that } \pi \text{ establishes C1C of } C\}$ be the sets of permutations which fulfill the respective inequalities with equality.
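The projection step of this separation procedure is mechanical. The following Python sketch is our own illustration, not the authors' implementation; triples are 0-based and the symmetric triple $k(j)i$ is identified with $i(j)k$:

```python
from itertools import permutations

def virtual_betweenness_solution(x_star):
    """Given a fractional WC1P solution x* (a list of m rows of n values),
    compute the virtual WBWP solution
        y*_{i(j)k} = min_r (2 - x*_{ri} + x*_{rj} - x*_{rk})."""
    n = len(x_star[0])
    y_star = {}
    for i, j, k in permutations(range(n), 3):
        if i < k:  # the triples i(j)k and k(j)i coincide
            y_star[(i, j, k)] = min(2 - row[i] + row[j] - row[k]
                                    for row in x_star)
    return y_star
```

Any WBWP separation routine can then be run on the dictionary `y_star`; a violated betweenness facet translates back into a violated WC1P facet as described above.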
Then $Q = Q'$, and for every $\pi \in Q$, every 0/1-matrix $C$ with the property that $\pi$ establishes C1C of $C$, and every betweenness triple $i(j)k$ with nonzero coefficient $a_{i(j)k}$ and corresponding row $r = r_{i(j)k}$, the condition
$$B_{r.}^T C_{r.} = a_{i(j)k}\,(2 - \chi^\pi_{i(j)k})$$
holds.

Proof. According to the construction, $B_{r.}$ has only 3 nonzero entries: $b_{ri} = b_{rk} = a_{i(j)k}$ and $b_{rj} = -a_{i(j)k}$. It follows that $B_{r.}^T C_{r.} = a_{i(j)k}(c_{ri} - c_{rj} + c_{rk})$, and thus the condition is equivalent to a linking constraint. And all these linking constraints have to be fulfilled with equality, since $B \circ x \le b_0$, as a conical combination of $-a^T x \le -a_0$ and the linking constraints, is fulfilled with equality.

A consequence of this observation is that for any C1P matrix $C$ which satisfies a facet-defining inequality $B \circ x \le b_0$ with equality, the value of $B_{r.}^T C_{r.}$ depends only on the row $r$ of $B$ and on a permutation $\pi$ that establishes C1C of $C$, but not on the remainder of the matrix $B$ and not on the matrix $C$ itself. A generalization of this observation is used to prove the following result.

Theorem 4. The number of facets of $P_{C1}^{m,n}$ for fixed $n$ is $O(m^{n!/2})$.

Proof. Let $B \circ x \le b_0$ be any nontrivial facet-defining inequality for $P_{C1}^{m,n}$. Since the zero matrix and all matrices consisting of zeroes except for one single entry are feasible solutions, $b_0 > 0$ follows. Our goal is to show that the support of $B$ has at most $n!/2$ rows.

Let $\pi$ be an arbitrary permutation with the property that there exists a matrix $C \in \{0,1\}^{m \times n}$ with $B \circ C = b_0$ such that $\pi$ establishes C1C of $C$. For each of these permutations $\pi$ and every row $r$ of $B$ we define
$$m_r^\pi(B) = \max\{\, B_{r.}^T v \mid v \in \{0,1\}^{1 \times n} \text{ and } \pi \text{ establishes C1C of } v \,\}$$
as the maximum possible contribution of row $r$ to the left-hand side of the facet. Note that if we denote by $\pi'$ the permutation obtained when reading $\pi$ in reverse order (i.e., $\pi'(i) = n + 1 - \pi(i)$ for $1 \le i \le n$), then $m_r^\pi(B) = m_r^{\pi'}(B)$ holds, since the consecutive ones conditions are not affected by reversing the order.
Furthermore, for all C1P matrices $C$ with $B \circ C = b_0$ and establishing permutation $\pi$, the relation
$$B_{r.}^T C_{r.} = m_r^\pi(B)$$
holds for every row $r$. Here $B_{r.}^T C_{r.} \le m_r^\pi(B)$ is clear from the definition of $m_r^\pi(B)$. If we assume that $B_{r.}^T C_{r.} < m_r^\pi(B)$, then we can construct a new C1P matrix $C'$ by replacing row $r$ of $C$ by the maximum row $v$ from the above definition. But then $B \circ C' > b_0$, contradicting the validity of $B \circ x \le b_0$.

Now let $s$ be the number of rows of $B$ and $t$ the number of considered permutations, in an arbitrary order $\pi_1, \dots, \pi_t$. We create the $s \times t$ matrix $M(B) = (m_{ij}(B)) = (m_i^{\pi_j}(B))$. Since $t \le n!$ and since for every $\pi_j$ there is a $\pi_{j'}$ with $m_{ij}(B) = m_{ij'}(B)$ for all rows $i$, the rank of $M(B)$ is at most $n!/2$, independently of the number $s$ of rows.

Now assume that a facet-defining inequality $B \circ x \le b_0$ is given where the number of rows $s$ of $B$ is greater than $n!/2$. Because of $\operatorname{rank} M(B) \le n!/2$, at least one row $r$ of $M(B)$ can be written as a linear combination $M(B)_{r.} = \sum_{i \ne r} d_i M(B)_{i.}$. And since $B_{r.}^T C_{r.} = m_{rj}(B)$ holds for any C1P matrix $C$ with $B \circ C = b_0$ and an establishing permutation $\pi_j$, we get
$$B_{r.}^T C_{r.} = \sum_{i \ne r} d_i\, B_{i.}^T C_{i.}.$$
This equation holds for every vertex $C$ of $\{x \in P_{C1}^{m,n} \mid B \circ x = b_0\}$. And since it contains no constant coefficient, it cannot be obtained by scaling the equation $B \circ x = b_0$ with $b_0 > 0$. Therefore the inequality $B \circ x \le b_0$ cannot be facet-defining.

Thus the support of every facet-defining inequality for $P_{C1}^{m,n}$ has at most $n!/2$ rows, and therefore each of these facets can be obtained by trivial lifting from a facet of $P_{C1}^{n!/2,\,n}$. Since the number of facets of $P_{C1}^{n!/2,\,n}$ is constant in $m$ and the number of lifting possibilities for one facet is at most $\binom{m}{n!/2}$, the total number of facet-defining inequalities for $P_{C1}^{m,n}$ is $O(m^{n!/2})$.

As a consequence we obtain that the WC1P is solvable in polynomial time for fixed $n$.

Corollary 1. WC1P is solvable in polynomial time for fixed $n$.

Proof.
According to Theorem 4, all facets of $P_{C1}^{m,n}$ can be obtained by trivial lifting from facets of $P_{C1}^{n!/2,\,n}$. Calculating these facets takes constant time in $m$. And for each of these facets there are at most $\binom{m}{n!/2}$ possibilities for trivial lifting. Thus we need time $O(m^{n!/2})$ to create a complete listing of all facets of $P_{C1}^{m,n}$, and therefore the WC1P is solvable in polynomial time for fixed $n$.

However, with a fairly simple algorithm we can even solve the problem in linear time (in $m$). Namely, let the WC1P be formulated as $\max\{B \circ x \mid x \in \{0,1\}^{m \times n} \text{ is C1P}\}$. Now for each column permutation $\pi$ and each row $r$ of $B$ we calculate $m_r^\pi(B)$. One such calculation can be done in $O(n)$ with a scan line algorithm. Since
$$\max\{B \circ x \mid x \in \{0,1\}^{m \times n} \text{ is C1P}\} = \max\Big\{ \sum_r m_r^\pi(B) \;\Big|\; \pi \text{ is a permutation of } \{1, \dots, n\} \Big\}$$
holds, we are done. The total running time of this algorithm is $O(n!\,mn)$ and thus linear in $m$.

There are some remaining conjectures and open questions on the relations between the two polytopes. We believe that the linking constraints are facet-defining for $P_{BWC1}^{m,n}$. We verified this for $m = 1$ and $3 \le n \le 5$ but could not yet prove the general case.

A further interesting question is whether there is a reverse construction from C1P facets to BWP facets. However, the inequality
$$\begin{pmatrix} 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix} \circ x \le 5$$
is facet-defining for $P_{C1}^{3,4}$ but cannot be derived from a facet of $P_{BW}^4$ by our construction.

The constructive proof of Theorem 4 leads to the idea of generalizing the betweenness variables $\chi^\pi_{i(j)k}$ by defining new variables
$$\chi^\pi_w = \max\{\, w^T v \mid v \in \{0,1\}^n \text{ and } \pi \text{ establishes C1C of } v \,\}$$
for suitable vectors $w \in \mathbb{Z}^n$. For $n = 4$ we found 10 variables (8 betweenness variables and 2 additional ones) which turn out to be a good choice. At least all facets of $P_{C1}^{m,4}$ we have investigated so far can be derived in this way.
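The permutation-enumeration algorithm just described is short enough to state in full. The following Python sketch is illustrative only (our own naming); it realizes the $O(n)$ scan line computation of $m_r^\pi(B)$ as a Kadane-style maximum over contiguous blocks of the permuted row, with the empty block contributing 0:

```python
from itertools import permutations

def solve_wc1p_fixed_n(B):
    """Brute-force max{B∘x : x in {0,1}^{m x n} is C1P} for small fixed n.
    For each column permutation pi, m_r^pi(B) is the maximum sum of one
    consecutive block of entries of row r reordered by pi (possibly empty)."""
    n = len(B[0])
    best = None
    for pi in permutations(range(n)):
        total = 0
        for row in B:
            reordered = [row[c] for c in pi]
            # max over contiguous (possibly empty) intervals in O(n)
            best_here = cur = 0
            for v in reordered:
                cur = max(0, cur + v)
                best_here = max(best_here, cur)
            total += best_here
        best = total if best is None else max(best, total)
    return best
```

The running time is $O(n!\,mn)$ as stated above, so this is practical only for small $n$, which is exactly the fixed-$n$ regime of this section.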
Moreover, it is interesting to study separation for the betweenness polytope, because separation routines for this polytope can be employed for the consecutive ones polytope as well.

References

1. T. Christof, A. Loebel (1998) PORTA – A Polyhedron Representation Algorithm. www.informatik.uni-heidelberg.de/groups/comopt/software/PORTA
2. T. Christof, M. Oswald, and G. Reinelt (1998) Consecutive Ones and a Betweenness Problem in Computational Biology. Proceedings of the 6th IPCO Conference, Houston, 213–228
3. M. Oswald and G. Reinelt (2000) Polyhedral Aspects of the Consecutive Ones Problem. Proceedings of the 5th Conference on Computing and Combinatorics, Sydney, 373–382
4. M. Oswald and G. Reinelt (2001) Some Relations Between Consecutive Ones and Betweenness Polytopes. To appear in: Proceedings of OR2001, Selected Papers, Duisburg

A Simplex-Based Algorithm for 0-1 Mixed Integer Programming⋆

Jean-Philippe P. Richard (1), Ismael R. de Farias (2), and George L. Nemhauser (1)

(1) School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta GA 30332-0205, USA
(2) Center for Operations Research and Econometrics, 34 Voie du Roman Pays, 1348 Louvain-La-Neuve, Belgium

Abstract. We present a finitely convergent cutting plane algorithm for 0-1 mixed integer programming. The algorithm is a hybrid between a strong cutting plane and a Gomory-type algorithm that generates violated facet-defining inequalities of a relaxation of the simplex tableau and uses them as cuts for the original problem. We show that the cuts can be computed in polynomial time and can be embedded in a finitely convergent algorithm.

1 Introduction

In the 1950s, Gomory [7] pioneered the idea of using cutting planes to solve integer programs. In his approach, valid inequalities are generated from rows of the current fractional simplex tableau.
An advantage of this method is that a violated inequality can always be found quickly, and the resulting algorithm can be proven to converge in a finite number of iterations. Although conceptually appealing, this algorithm is not effective in practice.

Branch-and-cut schemes that use strong cutting planes have proven to be much more effective in solving 0-1 mixed integer programs. They proceed by relaxing the constraint set into a polytope whose structure has been studied and for which families of facet-defining inequalities (facets for short) are known. A separation procedure, usually heuristic, is called to generate a violated facet of the relaxed polytope, which is a cut for the initial problem. This idea was used by Crowder, Johnson and Padberg [5], with the theoretical foundation coming from the polyhedral studies of the 0-1 knapsack polytope by Balas [1], Balas and Zemel [3], Hammer, Johnson and Peled [11], and Wolsey [20]. Subsequently there have been many applications of this approach, which imbeds strong cuts into a branch-and-bound algorithm; see the surveys by Johnson, Nemhauser and Savelsbergh [12] and Marchand, Martin, Weismantel and Wolsey [13]. The main disadvantage of this approach is that the separation procedure may not return any cut, at which point partial enumeration is required.

In this paper we introduce an algorithm that is a hybrid between these two approaches. We generate cuts from the simplex tableau and therefore are able to produce a violated inequality at every step of the algorithm.

⋆ This research was supported by NSF grants DMI-0100020 and DMI-0121495.

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 158–170, 2003. © Springer-Verlag Berlin Heidelberg 2003

Moreover, our cuts are always facets of relaxations of the polytopes defined by the rows of the tableau and yield a finite simplex algorithm for 0-1 mixed integer programming.
The cuts are generally less dense than Gomory cuts and can be computed in polynomial time, although Gomory cuts are less expensive to obtain. Our cuts are derived in Richard, de Farias and Nemhauser [18,19], and in order to understand their validity and facet-defining properties, it is necessary to refer to these papers.

We now introduce our basic relaxation. Let $M = \{1, \dots, m\}$ and $N = \{1, \dots, n\}$. Given the sets of positive integers $\{a_1, \dots, a_m\}$ and $\{b_1, \dots, b_n\}$ together with the positive integer $d$, let
$$S = \Big\{ (x, y) \in \{0,1\}^m \times [0,1]^n \;\Big|\; \sum_{j \in M} a_j x_j + \sum_{j \in N} b_j y_j \le d \Big\}.$$
The mixed 0-1 knapsack polytope is $PS = \operatorname{conv}(S)$. Note that, since $m$ and $n$ are positive integers, the knapsack inequality contains both continuous and integer variables. As long as the continuous variables are bounded, it is not restrictive to choose the bounds to be 0 and 1. Also, it is not restrictive to require the coefficients $a_j$ and $b_j$ to be positive, since we can always complement variables. Finally, we may assume that the coefficients $a_j$ and $b_j$ are smaller than $d$; otherwise $x_j$ can be fixed to 0 and $y_j$ can be rescaled. Given these assumptions, we have

Theorem 1. $PS$ is full-dimensional.

Although we are not aware of any previous study of the mixed 0-1 knapsack polytope other than our current work in [18,19], valid and facet-defining inequalities for related polytopes have been known for quite some time. For example, there are the mixed integer cuts introduced by Gomory [8], the MIR inequalities introduced by Nemhauser and Wolsey [16], and the mixed disjunctive cuts introduced by Balas, Ceria and Cornuéjols [2]. More closely related to our study is the "0-1 knapsack polytope with a single continuous variable" introduced by Marchand and Wolsey [14]. There are significant differences between the polyhedron of Marchand and Wolsey and $PS$, which are described in [18].

In Section 2 we propose a conceptual framework to generate cuts for 0-1 mixed integer problems, using some knowledge of $PS$.
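As a small concrete illustration of the set $S$, a membership test is immediate (an assumed helper of our own, not from the paper):

```python
def in_S(x, y, a, b, d):
    """Membership test for the mixed 0-1 knapsack set
    S = {(x, y) in {0,1}^m x [0,1]^n : sum a_j x_j + sum b_j y_j <= d}."""
    if any(v not in (0, 1) for v in x):       # x must be binary
        return False
    if any(not (0 <= v <= 1) for v in y):     # y must lie in [0,1]
        return False
    lhs = sum(aj * xj for aj, xj in zip(a, x)) + \
          sum(bj * yj for bj, yj in zip(b, y))
    return lhs <= d
```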
This technique requires an extensive use of lifting, including the lifting of continuous variables as studied in [18,19]. In Section 3 we present a family of facets for the mixed 0-1 knapsack polytope that can be used as tableau cuts for general 0-1 mixed integer problems and can be obtained in polynomial time. In Section 4 we sketch how these cuts are used in a finitely convergent algorithm.

2 Generating Cuts from the Simplex Tableau

In this section we present a formal framework to obtain cuts from a relaxation of the simplex tableau. This procedure requires extensive use of lifting techniques and motivates the introduction of the following notation.

Let $M_0, M_1$ be two disjoint subsets of $M$, and $N_0, N_1$ two disjoint subsets of $N$. Define
$$S(M_0, M_1, N_0, N_1) = \{ (x, y) \in S \mid x_j = 0 \ \forall j \in M_0,\ x_j = 1 \ \forall j \in M_1,\ y_j = 0 \ \forall j \in N_0,\ y_j = 1 \ \forall j \in N_1 \}$$
and $PS(M_0, M_1, N_0, N_1) = \operatorname{conv}(S(M_0, M_1, N_0, N_1))$.

Consider the general 0-1 mixed integer program
$$\max \sum_{j \in M} c_j x_j + \sum_{j \in N} c_{m+j} y_j \quad (1)$$
$$\text{s.t.} \quad \sum_{j \in M} a_{ij} x_j + \sum_{j \in N} b_{ij} y_j \le d_i \quad \forall i \in H \quad (2)$$
$$x_j \in \{0,1\} \quad \forall j \in M \quad (3)$$
$$y_j \in [0,1] \quad \forall j \in N \quad (4)$$
where $H = \{1, \dots, h\}$. Note that each row of (2) together with (3) and (4) has the form of $S$. We let $Q$ be the set of solutions to (2)-(4) and $PQ = \operatorname{conv}(Q)$.

Because we work with simplex tableaux, we introduce slacks and assume they are continuous. Since lower and upper bounds on the slack variables are easily obtained, they can be rescaled so that their domain is the interval $[0,1]$, or substituted out if the bounds are equal. We can therefore replace (2) by
$$\sum_{j \in M} a_{ij} x_j + \sum_{j \in N} b_{ij} y_j + u_i y_{n+i} = d_i \quad \forall i \in H \quad (5)$$
and (4) by
$$y_j \in [0,1] \quad \forall j \in \tilde N \quad (6)$$
where $\tilde N = N \cup \{n+1, \dots, n+h\}$.

Now consider a solution to the LP relaxation of this problem. If none of the 0-1 variables is fractional, then the current solution is optimal. So we may assume that at least one of them is fractional.
Note that nonbasic variables are either at their lower or upper bound, so no integrality violations can occur for them. Therefore any 0-1 variable that is fractional in the current LP relaxation has to be basic. Assume the $i$th variable of the basis is 0-1 and fractional. The $i$th row of the simplex tableau can be written as
$$x_{B(i)} + \sum_{j \in M_0 \cup M_1} \tilde a_{ij} x_j + \sum_{j \in N_0 \cup N_1} \tilde b_{ij} y_j = \tilde d_i = f_i + \sum_{j \in M_1} \tilde a_{ij} + \sum_{j \in N_1} \tilde b_{ij} \quad (7)$$
where $f_i \in (0,1)$, $M_0$ and $M_1$ represent the sets of 0-1 variables that are nonbasic at lower and upper bound, respectively, in the current tableau, $N_0$ and $N_1$ represent the sets of continuous variables that are nonbasic at lower and upper bound in the current tableau, and $B(i)$ represents the index of the $i$th basic variable. We would like to generate a strong cut from (7).

In (7) we assume that the coefficients $\tilde a_{ij}$ and $\tilde b_{ij}$ are positive. If not, we complement the corresponding variables (they then switch from $M_0$ to $M_1$, $N_0$ to $N_1$, and vice versa). Moreover, if we relax (7) to an inequality, we obtain the knapsack constraint in standard form
$$x_{B(i)} + \sum_{j \in M_0 \cup M_1} \tilde a_{ij} x_j + \sum_{j \in N_0 \cup N_1} \tilde b_{ij} y_j \le \tilde d_i \quad (8)$$
which defines a mixed 0-1 knapsack polytope that we call $PS_i$. We can also relax the equality in the other direction and still obtain a mixed 0-1 knapsack polytope in standard form by complementing all the variables.

Note that we can fix to 0 all the 0-1 variables, including $x_{B(i)}$, whose coefficients in $PS_i$ are larger than $\tilde d_i$. By doing so, we detect that $PQ$ is not full-dimensional and obtain some members of its equality set. Among the inequalities that can possibly be generated in this way, the only one that cuts off the current solution of the LP relaxation is $x_{B(i)} \le 0$. So we will assume throughout this paper that variables to which the previous discussion applies have been substituted out of (8), that $x_{B(i)}$ is still present in (8), and that $PS_i$ is full-dimensional.
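Complementing the negative-coefficient variables to reach the standard form (8) is a purely syntactic transformation; a minimal sketch (our own helper, with 0-based variable indices):

```python
def standardize_row(coeffs, rhs):
    """Complement variables with negative coefficients so that a relaxed
    tableau row  sum c_j v_j <= rhs  has only nonnegative coefficients.
    Substituting v_j = 1 - v̄_j turns -c*v_j into +c*v̄_j and adds -c
    (i.e. |c|) to the right-hand side."""
    new_coeffs, complemented = [], []
    new_rhs = rhs
    for j, c in enumerate(coeffs):
        if c < 0:
            new_coeffs.append(-c)
            new_rhs += -c
            complemented.append(j)   # remember who was flipped
        else:
            new_coeffs.append(c)
    return new_coeffs, new_rhs, complemented
```

After a cut is generated in the standard form, the variables recorded in `complemented` are complemented back, which is exactly the bookkeeping used in the proof of Proposition 1 below.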
In $PS_i(M_0, M_1, N_0, N_1)$, the relaxed tableau row in standard form reads
$$x_{B(i)} \le f_i \quad (9)$$
with $0 < f_i < 1$. Since $x_{B(i)}$ is an integer variable, the inequality
$$x_{B(i)} \le 0 \quad (10)$$
is valid for $PS_i(M_0, M_1, N_0, N_1)$ and is clearly violated by the current solution. Note that (10) can be extended to a facet of $PS_i$ by sequentially lifting all the nonbasic variables. Although all lifting orders provide such an inequality, some orders may lead to complex lifting schemes that will usually be expensive to carry out. So we will settle for simple schemes.

Note that the mixed integer solutions of $Q$ satisfy the constraints defining $PS_i$, so the inequalities we obtain are valid for $PQ$. Also, since all the nonbasic variables are fixed at their upper or lower bounds, the inequality obtained at the end of the lifting process will be violated, and its absolute violation will be $f_i$.

In order to implement this scheme, we need to lift both continuous and 0-1 variables. The lifting of 0-1 variables from a knapsack inequality is well known; see for example Gu, Nemhauser and Savelsbergh [10]. For the lifting of continuous variables, we will use the results developed in [18,19]. We only need to consider the lifting of continuous variables fixed at 0 (lifting from 0) and fixed at 1 (lifting from 1), since all the continuous variables are nonbasic.

3 A Family of Tableau Cuts

In this section we show that we can choose the lifting order in a way that makes the whole lifting sequence polynomial. Consider $PS_i$, the mixed 0-1 knapsack polytope in standard form associated with a row of the tableau whose basic variable $x_{B(i)}$ is fractional. First define $M_0^< = \{l \in M_0 \mid \tilde a_{il} \le f_i + \sum_{j \in M_1} \tilde a_{ij}\}$ and $M_0^> = \{l \in M_0 \mid f_i + \sum_{j \in M_1} \tilde a_{ij} < \tilde a_{il} \le \tilde d_i\}$. Note that $M_0$ is completely covered by $M_0^<$ and $M_0^>$, since $PS_i$ is full-dimensional. We will lift the variables in the order $M_1$, $M_0^<$, $N_1$, $N_0$ and $M_0^>$.
We discuss the lifting problems next and illustrate them on the following example.

Example 1. Consider
$$\max\ x_1 + 2x_2 + 3x_3 + 4x_4 + 10y_1$$
$$\text{s.t.}\ 10x_1 + 7x_2 + 6x_3 + 2x_4 + 2y_1 \le 17$$
$$6x_1 + 12x_2 + 13x_3 + 7x_4 + 13y_1 \le 32.$$
We introduce the nonnegative continuous slacks $\tilde y_2$ and $\tilde y_3$. They are bounded above by 17 and 32, respectively, and below by 0. We use these bounds to rescale $\tilde y_2$ and $\tilde y_3$ so that their domains correspond to the interval $[0,1]$. We call the scaled variables $y_2$ and $y_3$. We solve the linear relaxation of this problem using the simplex method. The optimal tableau is given by
$$\max\ \tfrac{96}{13} - \tfrac{5}{13}x_1 - \tfrac{10}{13}x_2 + \tfrac{31}{13}x_4 + \tfrac{91}{13}y_1 - \tfrac{96}{13}y_3$$
$$\text{s.t.}\ x_3 + \tfrac{6}{13}x_1 + \tfrac{12}{13}x_2 + \tfrac{7}{13}x_4 + y_1 + \tfrac{32}{13}y_3 = \tfrac{32}{13}$$
$$y_2 + \tfrac{94}{221}x_1 + \tfrac{19}{221}x_2 - \tfrac{16}{221}x_4 - \tfrac{52}{221}y_1 - \tfrac{192}{221}y_3 = \tfrac{29}{221}$$
where the variables $x_3$ and $y_2$ are basic, the variables $x_1$, $x_2$ and $y_3$ are nonbasic at lower bound, and the variables $x_4$ and $y_1$ are nonbasic at upper bound. The current objective value is $\tfrac{218}{13}$. For ease of exposition, we present the tableau in fractional form, although our analysis does not require it.

We derive a cut from the first row of the tableau, whose basic variable is $x_3 = \tfrac{12}{13}$. All the coefficients of this row are nonnegative, so this row is already in standard form. The polytope from which we generate our cuts is defined by the inequality
$$\tfrac{6}{13}x_1 + \tfrac{12}{13}x_2 + x_3 + \tfrac{7}{13}x_4 + y_1 + \tfrac{32}{13}y_3 \le \tfrac{12}{13} + \tfrac{7}{13} + 1.$$
Moreover, we have $M_0^< = \{1, 2\}$, $M_0^> = \emptyset$, $M_1 = \{4\}$, $N_0 = \{3\}$ and $N_1 = \{1\}$.

3.1 Lifting the Members of $M_1$

First we lift the variables of $M_1$ in (10). The following theorem is a corollary of well-known results on lifting 0-1 variables.

Theorem 2. Assume $M_1 = \{1, \dots, s\}$ and let $\mu_0 = 1 - f_i$. For $j = 1, \dots, s$, let $\alpha_j = 0$ and $\mu_j = \mu_{j-1} - \tilde a_{ij}$ when $\tilde a_{ij} < \mu_{j-1}$, and let $\alpha_j = 1$ and $\mu_j = \mu_{j-1}$ otherwise. Then
$$x_{B(i)} + \sum_{j=1}^{s} \alpha_j x_j \le \sum_{j=1}^{s} \alpha_j \quad (11)$$
is a facet of $PS_i(M_0, \emptyset, N_0, N_1)$.
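The recursion of Theorem 2 is a single linear pass over the coefficients of $M_1$; a sketch (hypothetical helper of our own, with the coefficients passed in the chosen lifting order):

```python
def lift_m1_cover(f_i, a_tilde):
    """Sequential lifting of the M1 variables (sketch of Theorem 2).
    f_i: fractional part of the basic variable, 0 < f_i < 1.
    a_tilde: tableau coefficients a~_ij for j in M1, in lifting order.
    Returns the 0/1 lifting coefficients alpha_j."""
    mu = 1.0 - f_i
    alphas = []
    for a in a_tilde:
        if a < mu:          # coefficient too small: alpha_j = 0
            alphas.append(0)
            mu -= a
        else:               # lifted at value 1, mu unchanged
            alphas.append(1)
    return alphas
```

On Example 1 this reproduces the computation below: $\mu = 1 - \tfrac{12}{13} = \tfrac{1}{13}$ and $\tilde a_{14} = \tfrac{7}{13} \ge \tfrac{1}{13}$, so $\alpha_4 = 1$.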
Inequality (11) is obtained by sequentially lifting the variables of $M_1$ in (10), and it is a cover inequality since $\alpha_j \in \{0,1\}$. Moreover, Theorem 2 yields an $O(n)$ algorithm to obtain (11). We will assume from now on that at least one variable of $M_1$ is lifted at value 1. The case in which no variable from $M_1$ yields $\alpha = 1$ is discussed in Section 3.6.

Example 1 (continued). We lift $x_4$ from 1 by computing $\mu = \tfrac{1}{13}$. Since $\tilde a_{14} = \tfrac{7}{13} \ge \tfrac{1}{13}$, we have $\alpha_4 = 1$. So $x_3 + x_4 \le 1$ is a facet of $PS_i(M_0, \emptyset, N_0, N_1)$.

3.2 Lifting Members of $M_0^<$

The minimal cover obtained in Section 3.1 is next lifted with respect to the variables of $M_0^<$. There is a polynomial algorithm to perform this task; see Nemhauser and Wolsey [15] and Zemel [21]. We will not present it here, but we note that all the lifting coefficients of the members of $M_0^<$ can be computed in time $O(n^2)$, and we denote the resulting facet-defining inequality of $PS_i(M_0^>, \emptyset, N_0, N_1)$ by
$$\sum_{j \in M \setminus M_0^>} \alpha_j x_j \le \delta. \quad (12)$$

Example 1 (continued). We lift the variables $x_1$ and $x_2$ in this order and determine that both of these lifting coefficients are 0. Therefore $x_3 + x_4 \le 1$ is a facet of $PS_i(M_0^>, \emptyset, N_0, N_1) = PS_i(\emptyset, \emptyset, N_0, N_1)$.

3.3 Lifting Members of $N_1$

For the lifting from 1 of continuous variables in a 0-1 inequality, we have given a pseudo-polynomial lifting scheme [18] that is based on the function $\Lambda$ defined, from (12), as
$$\Lambda(w) = \min \sum_{j \in M \setminus M_0^>} \tilde a_{ij} x_j - \Big( f_i + \sum_{j \in M_1} \tilde a_{ij} \Big)$$
$$\text{s.t.} \sum_{j \in M \setminus M_0^>} \alpha_j x_j = \delta + w$$
$$x_j \in \{0,1\} \quad \forall j \in M \setminus M_0^>.$$
If, for some $w$, the problem defining $\Lambda(w)$ is infeasible, we define $\Lambda(w) = \infty$. We say that the function $\Lambda$ is superlinear if $w^* \Lambda(w) \ge w \Lambda(w^*)$ for all $w \ge w^*$, where $w^* = \max \operatorname{argmin}\{\Lambda(w) \mid w > 0\}$. When the function $\Lambda$ is superlinear, the lifting algorithm proposed in [18] can be significantly simplified. The following theorem establishes that the lifted covers obtained in Section 3.2 are superlinear with $w^* = 1$.

Theorem 3 ([19]).
For (12) we have $\Lambda(w) \ge w\Lambda(1)$ for all $w > 0$.

This property leads to the following lifting theorem.

Theorem 4 ([19]). Assume $N_1 = \{1, \dots, s\}$. Then
$$\sum_{j \in M \setminus M_0^>} \alpha_j x_j + \theta \sum_{j=p+1}^{s} \tilde b_{ij} y_j \le \delta + \theta \sum_{j=p+1}^{s} \tilde b_{ij} \quad (13)$$
is a facet of $PS_i(M_0^>, \emptyset, N_0, \emptyset)$, where $p = \max\{k \in \{1, \dots, s\} \mid \sum_{j=1}^{k} \tilde b_{ij} < \Lambda(1)\}$, $\theta = \frac{1}{\Lambda(1) - b}$ and $b = \sum_{j=1}^{p} \tilde b_{ij}$.

This refined version of the lifting algorithm, proven in [19], is valid for lifted covers and requires only the knowledge of $\Lambda(1)$. This value can be computed in time $O(n^2)$. So, once $\Lambda(1)$ is known, Theorem 4 yields a way to compute all the lifting coefficients of variables in $N_1$ in time $O(n)$, independent of the lifting order. Observe that it is possible that $p = s$, in which case no continuous variable appears in (13).

Example 1 (continued). We lift from 1 the continuous variable $y_1$. Note that $1 > \Lambda(1) = \tfrac{1}{13}$. Therefore the inequality $x_3 + x_4 + 13y_1 \le 14$ is a facet of $PS_i(\emptyset, \emptyset, N_0, \emptyset)$.

3.4 Lifting Members of $N_0$

At this stage, we lift the members of $N_0$. We have shown in [18] that the lifting coefficient is 0 almost always. More generally,

Theorem 5 ([18]). Assume that (13) is a valid inequality of $PS(M_0^>, \emptyset, N_0, \emptyset)$ and that $k \in N_0$. Assume there exists $x^* \in S(M_0^>, \emptyset, N_0, \emptyset)$ that satisfies (13) at equality and the defining inequality of $PS_i(M_0^>, \emptyset, N_0, \emptyset)$ strictly. Then, in lifting $y_k$ from 0, the lifting coefficient is 0.

For the inequality obtained in Theorem 4, the point $x^*_{B(i)} = 0$, $x^*_j = 0$ for all $j \in M_0^<$ and $x^*_j = 1$ for all $j \in M_1$ satisfies the assumption of Theorem 5. Therefore the lifting coefficients for all members of $N_0$ are 0. It follows that inequality (13) is a facet of $PS(M_0^>, \emptyset, \emptyset, \emptyset)$.

Example 1 (continued). We lift the continuous variable $y_3$. Theorem 5 implies that $x_3 + x_4 + 13y_1 \le 14$ is a facet of $PS_i = PS_i(\emptyset, \emptyset, \emptyset, \emptyset)$. It is also a cut for our initial problem.
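Given $\Lambda(1)$, the quantities $p$, $b$ and $\theta$ of Theorem 4 follow from one prefix-sum pass; a sketch (our own helper names, coefficients given in the chosen lifting order):

```python
def lift_n1_coefficients(b_tilde, lam1):
    """Compute p, b and theta of Theorem 4 (sketch).
    b_tilde: coefficients b~_ij for j in N1, all positive, in lifting order.
    lam1: the value Lambda(1).
    p is the largest k with b~_i1 + ... + b~_ik < Lambda(1);
    b is that prefix sum; theta = 1 / (Lambda(1) - b)."""
    prefix, p, b = 0.0, 0, 0.0
    for k, bk in enumerate(b_tilde, start=1):
        prefix += bk
        if prefix < lam1:   # prefix sums are increasing, so p advances
            p, b = k, prefix
    theta = 1.0 / (lam1 - b)
    return p, b, theta
```

On Example 1, with $\tilde b_{i1} = 1$ and $\Lambda(1) = \tfrac{1}{13}$, this gives $p = 0$, $b = 0$ and $\theta = 13$, matching the facet $x_3 + x_4 + 13y_1 \le 14$.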
Moreover, it can be verified that it is a facet of $PQ$, because the points $(x_1, x_2, x_3, x_4, y_1) = (0,0,0,1,1)$, $(0,0,1,0,1)$, $(0,1,0,1,1)$, $(1,0,0,1,1)$ and $(0,0,1,1,\tfrac{12}{13})$ belong to $Q$, make the inequality tight, and are affinely independent. If we add this cut and reoptimize, the solution we obtain is $(0,0,1,1,\tfrac{12}{13})$, which is optimal and has an objective value of $\tfrac{211}{13}$.

3.5 Lifting Members of $M_0^>$

Finally we lift the members of $M_0^>$, i.e. the variables with large coefficients. First suppose the inequality we lift is a 0-1 lifted cover, i.e. $p = s$. Again, the dynamic programming algorithm presented in [15,21] can be used, and all the lifting coefficients for the members of $M_0^>$ can be computed in time $O(n^2)$.

Now suppose that $p < s$. There is a closed-form expression for the lifting of members of $M_0^>$. This closed-form expression, developed in [19] for general superlinear inequalities, is described in the next theorem. For $a \in \mathbb{R}$, we define $(a)^+ = \max\{a, 0\}$.

Theorem 6 ([19]). Assume $p < s$ and (13) is a facet of $PS_i(M_0^>, \emptyset, \emptyset, \emptyset)$. Then
$$\sum_{j \in M \setminus M_0^>} \alpha_j x_j + \theta \sum_{j=p+1}^{s} \tilde b_{ij} y_j + \sum_{j \in M_0^>} G(\tilde a_{ij})\, x_j \le \delta + \theta \sum_{j=p+1}^{s} \tilde b_{ij} \quad (14)$$
where $G(a) = \delta + \theta (a - d_i^* - b)^+$ and $d_i^* = \tilde d_i - \sum_{j \in N_1} \tilde b_{ij}$, is a facet of $PS_i$.

Theorem 6 leads to a linear-time algorithm for the lifting of members of $M_0^>$ when $p < s$. Thus, the cut we add to the current LP relaxation of the 0-1 mixed integer program is either a 0-1 lifted cover or (14). The previous discussion shows that, in either case, this cut can be derived in time $O(n^2)$.

3.6 Final Remarks on Lifting

In the previous discussion we omitted the case where all the lifting coefficients of the variables of $M_1$ are 0. In this case, all the lifting coefficients of members of $M_0^<$ are 0 too, because the inequalities $x_j \le 0$ for $j \in M_0^<$ would be valid for $PS_i(M_0^>, \emptyset, N_0, N_1)$, which contradicts the definition of $M_0^<$.
At least one member of $N_1$ is lifted with a positive coefficient, since otherwise $x_{B(i)} \le 0$ would be valid for $PS_i$, which contradicts the full-dimensionality of $PS_i$. The inequality we obtain is a facet of $PS_i(M_0^>, \emptyset, N_0, \emptyset)$ that can be turned into a facet of $PS_i$ using Theorems 5 and 6.

So our cut generation procedure returns either a facet of $PS_i$, if it is full-dimensional, or, as discussed in Section 2, a member of its equality set, if it is not full-dimensional. All the cuts are generated in time $O(n^2)$ and are of the standard form
$$x_{B(i)} + \sum_{j \in M_0 \cup M_1} \alpha_j x_j + \sum_{j \in N_0 \cup N_1} \beta_j y_j \le \sum_{j \in M_1} \alpha_j + \sum_{j \in N_1} \beta_j \quad (15)$$
with $\alpha_j \ge 0$ for $j \in M_0 \cup M_1$ and $\beta_j \ge 0$ for $j \in N_0 \cup N_1$. This simple observation leads to the following proposition, which will be used to establish finite convergence of our algorithm. For convenience, we will now incorporate the upper bounds on variables into the set of constraints before using the simplex method. We refer to this variant of the simplex method as being without upper bounds.

Proposition 1. If we apply the simplex method without upper bounds, the cut generated from the simplex tableau row (7) with $0 < f_i < 1$ is of the form
$$x_{B(i)} + \sum_{j \in M_0} \alpha_j x_j + \sum_{j \in N_0} \beta_j y_j \le 0 \quad (16)$$
where $\alpha_j \le 0$ if $\tilde a_{ij} < 0$, $\alpha_j \ge 0$ if $\tilde a_{ij} > 0$, $\beta_j \le 0$ if $\tilde b_{ij} < 0$, and $\beta_j \ge 0$ if $\tilde b_{ij} > 0$.

Proof. The tableau row (7) from which we generate the cut contains only $M_0$ and $N_0$. After all the variables with negative coefficients in (7) are complemented to fit in the standard format, we generate (15), which has only nonnegative coefficients. Since all the members of $M_1$ and $N_1$ are complemented variables, they need to be complemented back, yielding (16).

4 A Finitely Convergent Algorithm for 0-1 Mixed Integer Programming

The ability to generate a cut from every row of the simplex tableau in which a basic integer variable is fractional is reminiscent of Gomory cuts. Now, as done by Gomory, we prove a finite convergence result.
The approach we take is similar to the one described by Nourie and Venta [17]. Consider the mixed integer problem (1), (3), (5) and (6). For ease of notation, in this section we denote all the variables in the 0-1 mixed integer problem by $x$, even if they are continuous, i.e. we define $x_{m+i} = y_i$ for $i = 1, \dots, n+h$. We assume that every extreme point $x^q$ of $PQ$ is such that $cx^q \in \mathbb{Z}$. This condition can always be met by adequately scaling the objective function.

We say that $u \in \mathbb{R}^t$ is lexicographically larger than 0 ($u \succ 0$) if there exists $k \in \{1, \dots, t\}$ such that $u_1 = \dots = u_{k-1} = 0$ and $u_k > 0$. We say also that, for $u, v \in \mathbb{R}^t$, $u$ is lexicographically larger than $v$ ($u \succ v$) if $u - v \succ 0$. If the two vectors we compare are of different lengths, we just drop the last components of the longer one and perform the comparison on vectors of the same size.

We modify the fractional cutting plane algorithm (FCPA) presented in [15], pp. 368-369, to handle our cuts instead of Gomory's, and prove its convergence using the arguments presented in [15], pp. 370-373. We recall some of the assumptions under which convergence is established.

(i) We use the simplex method without upper bounds.
(ii) The objective function is restricted to be integer. This is valid since we know that $cx^q$ is integer for every extreme point $x^q$ of $PQ$. Therefore, an equality of the form $x_0 = cx$, where $x_0$ is an integer variable, is introduced into the set of constraints, and the objective function is replaced by $x_0$.
(iii) We solve the linear relaxations in such a way that the solution we obtain is lexicographically maximal among the set of optimal solutions, and we include rows of the form $x_j - x_j = 0$ in the tableaux for nonbasic variables.
(iv) We generate a single cut at every step of the algorithm, and we generate it from the simplex tableau row whose basic integer variable is fractional and has the smallest index.
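The lexicographic comparison used throughout the convergence argument can be written directly (an illustrative helper; equal common prefixes count as "not larger", matching the truncation rule above):

```python
def lex_larger(u, v):
    """u ≻ v: u - v is lexicographically larger than 0.
    Vectors of different length are compared on their common prefix
    (the longer one is truncated)."""
    m = min(len(u), len(v))
    for a, b in zip(u[:m], v[:m]):
        if a != b:
            return a > b     # first differing component decides
    return False             # u - v == 0 on the common prefix
```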
(v) After we add a cut, the problem must still be of the initial form. In our case, it suffices to introduce the slack and rescale it to be in the interval $[0,1]$.

The fact that the variable $x_0$ is a general integer variable is not a problem in the lifting steps of our algorithm, since we can always keep $x_0$ basic. Note that when $x_0$ is fractional, say $x_0 = q$, the cut $x_0 \le \lfloor q \rfloor$ is valid. From the previous observation and Proposition 1 we conclude that the cut we generate from the $k$th simplex tableau row with basic variable $x_k$,
$$x_k + \sum_{j \in NB} \tilde a_{kj} x_j = \tilde d_k, \quad (17)$$
is of the form
$$\alpha_k x_k + \sum_{j \in NB} \alpha_j x_j \le \delta_k \quad (18)$$
where $\alpha_k = 1$, $\delta_k = \lfloor \tilde d_k \rfloor$ and $NB$ is the set of nonbasic variables in the current simplex tableau.

Having just presented a scheme in which we can embed our cuts, we now prove that these cuts are strong enough to yield an optimal solution in a finite number of iterations. This property is not achieved by every family of violated inequalities. For example, as shown by Gomory and Hoffman [9], the family of cuts that require the sum of the nonbasic integer variables to be at least one (see Dantzig [6]) does not necessarily lead to a convergent algorithm. These cuts need to be improved, as described by Bowman and Nemhauser [4], to yield a convergent algorithm. A sufficient condition on the strength of cuts needed to obtain finite convergence is described in the next proposition.

Proposition 2. Assume that all the cuts generated from simplex tableau rows (17) are of the form (18). Let $f_k$ be the fractional part of $\tilde d_k$ and assume that for every $l \in NB$ such that $\alpha_l - \alpha_k \tilde a_{kl} < 0$ and $\tilde a_{kl} \ge 0$, we have
$$f_k \alpha_l + \tilde a_{kl} (\alpha_k \lfloor \tilde d_k \rfloor - \delta_k) \ge 0. \quad (19)$$
Then the FCPA converges in a finite number of iterations.

Proof. We extend the proof of Proposition 3.7 from [15]. Assume that we have already added $t$ cuts. We work with a simplex tableau that has $v(t) = m + n + h + t$ rows of the form (17).
J.-P.P. Richard, I.R. de Farias, and G.L. Nemhauser

Assume that x^t is an optimal solution of the current relaxation and that k is the smallest index among all 0-1 variables that are currently fractional. We define S^t = (x^t_0, ..., x^t_{k−1}, ⌊x^t_k⌋, u_{k+1}, ..., u_m), where u_{k+1}, ..., u_m are upper bounds on the integer variables, i.e. 1 in our case. We have that x^t ≻ S^t. We need to prove that x^{t+1} ⪯ S^t. Assume that, from row k of the tableau, we generate the cut

α_k x_k + Σ_{j∈NB} α_j x_j + u x_{v(t+1)} = δ_k,

where NB is the set of nonbasic variables, and add it to the current formulation. Note that we introduce u (which can always be chosen to be positive) as a way to rescale the slack, since its domain has to be the interval [0, 1]. After adding the cut, we make the slack basic and therefore obtain the basic, primal infeasible, dual optimal tableau

x_p + Σ_{j∈NB} ã_{pj} x_j = d̃_p   for all p ∈ {1, ..., v(t)},
x_{v(t+1)} + Σ_{j∈NB} ((α_j − α_k ã_{kj})/u) x_j = (δ_k − α_k d̃_k)/u,

in which the column a_j associated with x_j satisfies a_j ≻ 0 for all j ∈ NB. Let (x̂^t_0, x̂^t) be the basic solution obtained after a single dual simplex pivot and let x_l be the variable that becomes basic. Clearly β_l = (α_l − α_k ã_{kl})/u < 0 and (δ_k − α_k d̃_k)/u < 0, and we have

(x̂^t_0, x̂^t) = (x^t_0, x^t) − ((δ_k − α_k d̃_k)/(α_l − α_k ã_{kl})) a_l.

We have that a_l ≻ 0 and (δ_k − α_k d̃_k)/(α_l − α_k ã_{kl}) > 0. Now let r be the minimum index for which ã_{rl} > 0. We distinguish two cases. First, r ≤ k − 1. In that case x̂^t_j = x^t_j for all j < r and x̂^t_r < x^t_r. Therefore x̂^t ≺ S^t, and so x^{t+1} ≺ S^t. Now assume r ≥ k. We have x̂^t_j = x^t_j for all j < k and

x̂^t_k = ⌊d̃_k⌋ + (f_k α_l + ã_{kl} (α_k ⌊d̃_k⌋ − δ_k)) / (α_l − α_k ã_{kl}).

Now, since a_l ≻ 0 and ã_{jl} = 0 for j < k, we have that ã_{kl} ≥ 0. Using (19), we conclude that the numerator of the fraction is nonnegative and, since its denominator is negative, that x̂^t_k ≤ ⌊d̃_k⌋. It follows that x^{t+1} ⪯ x̂^t ⪯ S^t. □

We use Proposition 2 to prove that our algorithm is finite.

Theorem 7.
There exists a pure cutting plane algorithm, based on the cuts presented in Section 3, that solves problem (1), (3), (5) and (6) in a finite number of iterations.

Proof. According to (18), we have α_k = 1 and δ_k = ⌊d̃_k⌋. Condition (19) becomes f_k α_l ≥ 0 for all l ∈ NB such that α_l − ã_{kl} < 0 and ã_{kl} ≥ 0. Since ã_{kl} ≥ 0, we know from Proposition 1 that α_l ≥ 0, and so we conclude that condition (19) is satisfied since f_k > 0. □

Proposition 2 can also be used to show that Gomory cuts for integer programs yield a finitely convergent algorithm. Starting from the simplex tableau row (17), Gomory cuts are of the form

Σ_{j∈NB} f_j x_j ≥ g_k,

where f_j and g_k are the fractional parts of ã_{kj} and d̃_k, respectively. Therefore we have α_k = 0, δ_k = −g_k and α_l = −f_l. Condition (19) is then −g_k f_l + ã_{kl} g_k = g_k (ã_{kl} − f_l) ≥ 0, which is satisfied when ã_{kl} ≥ 0 and f_l > 0 because g_k is positive.

5 Conclusions

The cuts we have developed are not strictly comparable to Gomory cuts. We first relax a simplex tableau row into a 0-1 mixed integer knapsack and then find a facet of this knapsack relaxation. Thus our cuts are strong, since they are facets of a good relaxation. But it is interesting that they are also robust, in that they can be implemented to yield a finite pure cutting plane algorithm. Nevertheless, the practical use of these cuts is likely to come from embedding them in a branch-and-cut algorithm, which we are currently developing.

References

1. E. Balas. Facets of the knapsack polytope. Mathematical Programming, 8:146–164, 1975.
2. E. Balas, S. Ceria, and G. Cornuéjols. A lift-and-project cutting plane algorithm for mixed 0-1 programs. Mathematical Programming, 58:295–324, 1993.
3. E. Balas and E. Zemel. Facets of the knapsack polytope from minimal covers. SIAM Journal on Applied Mathematics, 34:119–148, 1978.
4. V.J. Bowman and G.L. Nemhauser. A finiteness proof for modified Dantzig cuts in integer programming.
Naval Research Logistics Quarterly, 17:309–313, 1970.
5. H.P. Crowder, E.L. Johnson, and M.W. Padberg. Solving large-scale zero-one linear programming problems. Operations Research, 31:803–834, 1983.
6. G.B. Dantzig. Note on solving linear programs in integers. Naval Research Logistics Quarterly, 6:75–76, 1959.
7. R.E. Gomory. Outline of an algorithm for integer solutions to linear programs. Bulletin of the American Mathematical Society, 64:275–278, 1958.
8. R.E. Gomory. An algorithm for the mixed integer problem. Technical Report RM-2597, RAND Corporation, 1960.
9. R.E. Gomory and A.J. Hoffman. On the convergence of an integer programming process. Naval Research Logistics Quarterly, 10:121–124, 1963.
10. Z. Gu, G.L. Nemhauser, and M.W.P. Savelsbergh. Lifted cover inequalities for 0-1 integer programs: Complexity. INFORMS Journal on Computing, 11:117–123, 1999.
11. P.L. Hammer, E.L. Johnson, and U.N. Peled. Facets of regular 0-1 polytopes. Mathematical Programming, 8:179–206, 1975.
12. E.L. Johnson, G.L. Nemhauser, and M.W.P. Savelsbergh. Progress in linear programming based branch-and-bound algorithms: an exposition. INFORMS Journal on Computing, 12:2–23, 2000.
13. H. Marchand, A. Martin, R. Weismantel, and L. Wolsey. Cutting planes in integer and mixed integer programming. Technical Report 9953, Université Catholique de Louvain, 1999.
14. H. Marchand and L.A. Wolsey. The 0-1 knapsack problem with a single continuous variable. Mathematical Programming, 85:15–33, 1999.
15. G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimization. Wiley, New York, 1988.
16. G.L. Nemhauser and L.A. Wolsey. A recursive procedure for generating all cuts for 0-1 mixed integer programs. Mathematical Programming, 46:379–390, 1990.
17. F.J. Nourie and E.R. Venta. An upper bound on the number of cuts needed in Gomory's method of integer forms. Operations Research Letters, 1:129–133, 1982.
18. J.-P. P. Richard, I.R.
de Farias, and G.L. Nemhauser. Lifted inequalities for 0-1 mixed integer programming: Basic theory and algorithms. Technical Report 02-05, Georgia Institute of Technology, 2002.
19. J.-P. P. Richard, I.R. de Farias, and G.L. Nemhauser. Lifted inequalities for 0-1 mixed integer programming: Superlinear lifting. Technical report, Georgia Institute of Technology, 2002. (in preparation).
20. L.A. Wolsey. Faces for a linear inequality in 0-1 variables. Mathematical Programming, 8:165–178, 1975.
21. E. Zemel. Easily computable facets of the knapsack polytope. Mathematics of Operations Research, 14:760–764, 1989.

Mixed-Integer Value Functions in Stochastic Programming

Rüdiger Schultz
Institute of Mathematics, Gerhard-Mercator University Duisburg
Lotharstr. 65, D-47048 Duisburg, Germany
schultz@math.uni-duisburg.de

Abstract. We discuss the role of mixed-integer value functions in the theoretical analysis of stochastic integer programs. It is shown how the interaction of value function properties with basic results from probability theory leads to structural statements in stochastic integer programming.

1 Stochastic Integer Programs

Stochastic programming models are deterministic equivalents to random optimization problems. In the present paper we confine ourselves to linear two-stage models involving integer requirements. The random optimization problem behind these models reads as follows:

min_{x,y,y'} { c^T x + q^T y + q'^T y' : Tx + Wy + W'y' = h(ω), x ∈ X, y ∈ Z_+^m̄, y' ∈ R_+^{m'} }.   (1)

We assume that all ingredients above have conformal dimensions, that W, W' are rational matrices, and that X ⊆ R^m is a nonempty closed polyhedron, possibly involving integrality constraints on components of the vector x. Together with (1) we have a scheme of alternating decision and observation: the decision on x has to be made prior to observing the outcome of the random vector h(ω), and the vector (y, y') is selected only after having decided on x and observed h(ω).
This setting corresponds to a variety of practical optimization problems under uncertainty. It readily extends to the multi-stage situation where finitely (or even infinitely) many of the above alternations occur; see [6,12,17] for further details on stochastic programming modelling. As a mathematical object, problem (1) is ill-posed, since at the moment of decision on x it is not clear which vectors x are feasible, let alone optimal. As a remedy, let us proceed as follows. Rewrite (1) by separating the optimizations in x and (y, y'):

min_x { c^T x + min_{y,y'} { q^T y + q'^T y' : Wy + W'y' = h(ω) − Tx, y ∈ Z_+^m̄, y' ∈ R_+^{m'} } : x ∈ X }.   (2)

M. Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 171–184, 2003. © Springer-Verlag Berlin Heidelberg 2003

This is where the mixed-integer value function enters the scene. Indeed, in (2) we have an inner optimization problem with right-hand side parameter h(ω) − Tx. Introducing the mixed-integer value function

Φ(t) := min { q^T y + q'^T y' : Wy + W'y' = t, y ∈ Z_+^m̄, y' ∈ R_+^{m'} },   (3)

(2) turns into

min_x { c^T x + Φ(h(ω) − Tx) : x ∈ X }.   (4)

In this way, we obtain the family f(x, ω) := c^T x + Φ(h(ω) − Tx), x ∈ X, of real-valued random variables. The problem is still ill-posed, since in (4) the meaning of "min_x" remains unclear, i.e., it is still open how to select a "best" random variable f(x, ·). In stochastic programming, scalar parameters of f(x, ω), x ∈ X, provide criteria for making the "best" selection. The most widely used scalar parameter in this respect is the expectation. Assuming that h(ω) ∈ R^s is a random vector on some probability space (Ω, A, IP), we obtain the well-posed optimization problem

min { ∫_Ω (c^T x + Φ(h(ω) − Tx)) IP(dω) : x ∈ X }.   (5)

In terms of the random optimization problem (1), the model (5) suggests to select, before observing h(ω), i.e., in a "here-and-now" manner, a decision x such that the expected value of the random costs c^T x + Φ(h(ω) − Tx) becomes minimal. When addressing risk aversion, other scalar parameters are useful. In the context of stochastic programming some first proposals have been made in [15,16,23]. Following the probability-based approach in [23], we introduce some threshold level φ_o ∈ R and consider minimization of the probability that the random costs c^T x + Φ(h(ω) − Tx) exceed φ_o. This leads to the optimization model

min { IP({ω ∈ Ω : c^T x + Φ(h(ω) − Tx) > φ_o}) : x ∈ X }.   (6)

As mathematical objects, (5) and (6) are optimization problems in x whose objectives we denote by Q_IE(x) and Q_IP(x), respectively. It is evident that the mixed-integer value function Φ essentially determines the structure of the functions Q_IE and Q_IP. As we will see, there is a fruitful interaction of properties of Φ with basic statements from probability theory. The article is organized as follows. In Section 2 we report on what is known about the mixed-integer value function Φ. Section 3 aims at putting together value function properties with basic probability theory. Proceeding step by step, we draw conclusions from various convergence results of probability theory. The final section is an outlook towards related issues beyond the scope of the present paper.

2 Mixed-Integer Value Functions

Studying the value function Φ is part of what is usually referred to as stability analysis of optimization problems or parametric optimization. In this area of research the accent is on properties of optimal values and optimal solution sets seen as (multi-)functions of parameters arising in the underlying optimization problems.
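To make the objectives Q_IE and Q_IP from Section 1 concrete before analyzing Φ, here is a minimal numerical sketch. All data below (costs, scenarios, threshold) are invented for illustration, and the inner mixed-integer problem (3) is brute-forced over its single integer variable; this is not the authors' computational setup, only a toy instance of the definitions.

```python
import math

# Toy second stage (all data invented):
#   Phi(t) = min{ 0.3*y + 0.5*y1 + 2.0*y2 : y + y1 - y2 = t,
#                 y in Z_+, y1, y2 >= 0 }  -- a one-dimensional instance of (3)
def phi(t, y_max=50):
    best = math.inf
    for y in range(y_max + 1):           # brute-force the integer variable
        s = t - y                        # continuous part must realize y1 - y2 = s
        cont = 0.5 * s if s >= 0 else 2.0 * (-s)
        best = min(best, 0.3 * y + cont)
    return best

# discrete distribution of h(omega): (value, probability) pairs
scenarios = [(1.2, 0.5), (3.7, 0.3), (6.1, 0.2)]
c, T, phi_o = 1.0, 1.0, 1.5              # first-stage cost, matrix T, threshold

def Q_IE(x):                             # expectation objective of (5)
    return sum(p * (c * x + phi(h - T * x)) for h, p in scenarios)

def Q_IP(x):                             # excess-probability objective of (6)
    return sum(p for h, p in scenarios if c * x + phi(h - T * x) > phi_o)

for x in [0.0, 1.0, 2.0]:
    print(f"x = {x}: Q_IE = {Q_IE(x):.4f}, Q_IP = {Q_IP(x):.2f}")
```

With a continuous distribution for h, the sums above become the integrals in (5) and (6), which is exactly where the structural questions about Φ studied next become relevant.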
A variety of results, starting from linear programs and leading into nonlinear programming and optimal control, is available; see the recent monograph [8] and references therein. In the above stochastic programming context, the value function determines integrands of suitable integrals. Therefore, it is crucial to have global knowledge about the functional dependence of the optimal value on the respective parameter. The typical situation in nonlinear parametric optimization, however, is that properties of optimal values and optimal solutions are available only locally around given parameters. The most comprehensible class for which global results exist are mixed-integer linear programs. This explains why, so far, the models discussed in Section 1 do not go beyond the mixed-integer linear case. The stability of mixed-integer linear programs has been studied in a series of papers by Blair and Jeroslow, out of which we refer to [7], and in the monographs [2,3]. Before discussing the mixed-integer value function Φ from (3), let us have a quick look at its linear programming counterpart:

Φ_lin(t) := min { q'^T y' : W'y' = t, y' ∈ R_+^{m'} }.   (7)

If we assume that W'(R_+^{m'}) is full-dimensional and that {u ∈ R^s : W'^T u ≤ q'} ≠ ∅, then the latter set has vertices d_k, k = 1, ..., K, and it holds by linear programming duality that

Φ_lin(t) = max { t^T u : W'^T u ≤ q' } = max_{k=1,...,K} d_k^T t

for all t ∈ W'(R_+^{m'}). Hence, Φ_lin is convex and piecewise linear on its (conical) domain of definition. Without going into details, we mention that this convexity has far-reaching consequences when setting up the expectation-based stochastic programming model (5) in case integer requirements are missing in the random optimization problem (1); see [6,12,17] for structural and algorithmic results. Let us now turn our attention to the mixed-integer case.
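Before doing so, the dual-vertex description of Φ_lin can be checked on a one-dimensional toy instance. The data are invented, and the closed-form primal optimum used for comparison is specific to this tiny example:

```python
# Toy instance of (7):  Phi_lin(t) = min{ 0.5*y1 + 2.0*y2 : y1 - y2 = t, y >= 0 }.
# The dual feasible set { u : u <= 0.5, -u <= 2.0 } = [-2.0, 0.5] has the two
# vertices d_1 = 0.5 and d_2 = -2.0, so by LP duality
#   Phi_lin(t) = max_k d_k * t  -- convex and piecewise linear in t.
DUAL_VERTICES = [0.5, -2.0]

def phi_lin_dual(t):
    return max(d * t for d in DUAL_VERTICES)

def phi_lin_primal(t):
    # closed-form optimum of this particular instance:
    # use the cheap variable y1 when t >= 0, otherwise pay 2.0 per unit via y2
    return 0.5 * t if t >= 0 else 2.0 * (-t)

for t in [-3.0, -0.5, 0.0, 1.0, 4.2]:
    assert abs(phi_lin_dual(t) - phi_lin_primal(t)) < 1e-12
print("dual vertex maximum matches the primal optimum on all test points")
```

The kink at t = 0 is where the optimal dual vertex switches, which is exactly the piecewise-linear structure the text describes.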
In (3) we impose the basic assumptions that

W(Z_+^m̄) + W'(R_+^{m'}) = R^s  and  {u ∈ R^s : W^T u ≤ q, W'^T u ≤ q'} ≠ ∅.

Then Φ(t) is a well-defined real number for all t ∈ R^s, [14]. Moreover, it holds that

Φ(t) = min { q^T y + q'^T y' : Wy + W'y' = t, y ∈ Z_+^m̄, y' ∈ R_+^{m'} }
     = min_y { q^T y + min_{y'} { q'^T y' : W'y' = t − Wy, y' ∈ R_+^{m'} } : y ∈ Z_+^m̄ }
     = min_y { Φ_y(t) : y ∈ Z_+^m̄ },   (8)

where

Φ_y(t) = q^T y + max_{k=1,...,K} d_k^T (t − Wy)  for all t ∈ Wy + W'(R_+^{m'}).

Here, d_k, k = 1, ..., K, are the vertices of {u ∈ R^s : W'^T u ≤ q'}, and we have applied the argument about Φ_lin from the purely linear case. For t ∉ Wy + W'(R_+^{m'}) the problem min_{y'} { q'^T y' : W'y' = t − Wy, y' ∈ R_+^{m'} } is infeasible, and we put Φ_y(t) = +∞. It is convenient to introduce the notation Y(t) := {y ∈ Z_+^m̄ : Φ_y(t) < +∞}. According to (8), the value function Φ is made up of the pointwise minimum of a family of convex, piecewise linear functions whose domains of definition are polyhedral cones arising as shifts of the cone W'(R_+^{m'}). By our basic assumption W(Z_+^m̄) + W'(R_+^{m'}) = R^s, the cone W'(R_+^{m'}) is full-dimensional. Some first conclusions about the continuity of Φ may be drawn from the above observations:

(i) Suppose that t ∈ R^s does not belong to any boundary of any of the sets Wy + W'(R_+^{m'}), y ∈ Z_+^m̄. Then the same is true for all points τ in some open ball B around t. Hence, Y(τ) = Y(t) for all τ ∈ B. With an enumeration (y_n)_{n∈N} of Y(t) we consider the functions Φ^κ(τ) := min { Φ_{y_n}(τ) : n ≤ κ } for all τ ∈ B. Then lim_{κ→∞} Φ^κ(τ) = Φ(τ) for all τ ∈ B. Since, for any function Φ_y, its "slopes" are determined by the same finitely many vectors d_k, k = 1, ..., K, the functions Φ^κ, κ ∈ N, are all Lipschitz continuous on B with a uniform Lipschitz constant. Thus, the family of functions Φ^κ, κ ∈ N, is equicontinuous on B and has a pointwise limit there.
Consequently, this pointwise limit Φ is continuous on B, in fact Lipschitz continuous with the mentioned uniform constant.

(ii) Any discontinuity point of Φ must be located at the boundary of some set Wy + W'(R_+^{m'}), y ∈ Z_+^m̄. Hence, the set of discontinuity points of Φ is contained in a countable union of hyperplanes. Since W'(R_+^{m'}) has only finitely many facets, this union of hyperplanes subdivides into finitely many classes such that, in each class, the hyperplanes are parallel. By the rationality of the matrices W and W', within each class the pairwise distance of the hyperplanes is uniformly bounded below by some positive number.

(iii) Let t_n → t and y ∈ Z_+^m̄ be such that t_n ∈ Wy + W'(R_+^{m'}) for all sufficiently large n. Since the set Wy + W'(R_+^{m'}) is closed, this yields t ∈ Wy + W'(R_+^{m'}). Therefore, for sufficiently large n, Y(t_n) ⊆ Y(t). This paves the way for showing that lim inf_{t_n→t} Φ(t_n) ≥ Φ(t), which is the lower semicontinuity of Φ at t.

The above analysis has been refined in [2,3,7]. In particular, it is shown in Theorem 3.3 of [7] that, for each t ∈ R^s, the minimization in (8) can be restricted to a finite set (depending on t, in general). Lemma 5.6.1 and Lemma 5.6.2 of [2] provide the representation of the continuity sets of Φ displayed in the subsequent proposition. The global proximity result in part (iv) of the subsequent proposition is derived in Theorem 1 on page 115 of [3] and in Theorem 2.1 of [7]. Altogether, we have the following statement about the mixed-integer value function Φ.

Proposition 1. Let W, W' be matrices with rational entries and assume that W(Z_+^m̄) + W'(R_+^{m'}) = R^s as well as {u ∈ R^s : W^T u ≤ q, W'^T u ≤ q'} ≠ ∅.
Then it holds that:
(i) Φ is real-valued and lower semicontinuous on R^s;
(ii) there exists a countable partition R^s = ∪_{i=1}^∞ T_i such that the restrictions of Φ to T_i are piecewise linear and Lipschitz continuous with a uniform constant L > 0 not depending on i;
(iii) each of the sets T_i has a representation T_i = {t_i + K} \ ∪_{j=1}^N {t_{ij} + K}, where K denotes the polyhedral cone W'(R_+^{m'}) and t_i, t_{ij} are suitable points from R^s; moreover, N does not depend on i;
(iv) there exist positive constants β, γ such that |Φ(t_1) − Φ(t_2)| ≤ β‖t_1 − t_2‖ + γ whenever t_1, t_2 ∈ R^s.

3 Implications of Probability Theory

Essential properties of the objective functions Q_IE and Q_IP in the optimization problems (5) and (6), respectively, have their roots in properties of Φ. We will study these interrelations by employing some basic tools from probability theory, as can be found, for instance, in the textbooks [5,10,18]. Throughout this section we impose the basic assumptions that W, W' are rational matrices, that W(Z_+^m̄) + W'(R_+^{m'}) = R^s, and that the set {u ∈ R^s : W^T u ≤ q, W'^T u ≤ q'} is non-empty. For convenience we denote by μ the image measure IP ∘ h^{−1} on R^s. With this notation, the functions Q_IE and Q_IP read

Q_IE(x) = ∫_{R^s} (c^T x + Φ(h − Tx)) μ(dh),  x ∈ R^m,

and

Q_IP(x) = μ({h ∈ R^s : c^T x + Φ(h − Tx) > φ_o}),  x ∈ R^m.

3.1 Measurability

Since Q_IE(x) is essentially determined by an integral over Φ, and Q_IP(x) involves a level set of Φ, it has to be assured that the integral and the probability are taken over a measurable function and a measurable set, respectively.

Proposition 2. For any x ∈ R^m, f(x, h) := c^T x + Φ(h − Tx) is a measurable function of h, implying in particular that Q_IP(x) is well-defined for all x ∈ R^m.

Proof: Φ being lower semicontinuous on R^s, f is measurable as a superposition of measurable functions.
Then {h ∈ R^s : f(x, h) > φ_o} is a measurable subset of R^s, and Q_IP(x) is well-defined for all x ∈ R^m. q.e.d.

3.2 Integrability

A measurable function into the reals is called integrable in case its positive and negative parts both are. Integrability is often established via an integrable majorant of the absolute value of the function in question. In the present context, integrability is important for assuring that Q_IE(x) is well-defined for all x ∈ R^m.

Proposition 3. If μ has a finite first moment, i.e., if ∫_{R^s} ‖h‖ μ(dh) < ∞, then Q_IE(x) is well-defined for all x ∈ R^m.

Proof: Our basic assumptions imply that Φ(0) = 0. Together with Proposition 1(iv) this provides the following estimate:

∫_{R^s} |Φ(h − Tx)| μ(dh) = ∫_{R^s} |Φ(h − Tx) − Φ(0)| μ(dh)
  ≤ ∫_{R^s} (β‖h − Tx‖ + γ) μ(dh)
  ≤ β ∫_{R^s} ‖h‖ μ(dh) + β‖Tx‖ + γ.

This implies that Q_IE(x) ∈ R for all x ∈ R^m, and the proof is complete. q.e.d.

3.3 Continuity of the Probability Measure

Given a sequence (M_n)_{n∈N} of measurable sets in R^s, the limes inferior lim inf_{n→∞} M_n and the limes superior lim sup_{n→∞} M_n are defined as the sets of all points belonging to all but a finite number of the M_n, and to infinitely many M_n, respectively. If μ is some probability measure on R^s, then it holds that

μ(lim inf_{n→∞} M_n) ≤ lim inf_{n→∞} μ(M_n) ≤ lim sup_{n→∞} μ(M_n) ≤ μ(lim sup_{n→∞} M_n).   (9)

This will be our main tool to deduce (semi-)continuity of the function Q_IP from the properties of Φ. With x ∈ R^m we introduce the notation

M(x) := {h ∈ R^s : c^T x + Φ(h − Tx) > φ_o},
Me(x) := {h ∈ R^s : c^T x + Φ(h − Tx) = φ_o},
Md(x) := {h ∈ R^s : Φ is discontinuous at h − Tx}.

Proposition 4. The function Q_IP is lower semicontinuous on R^m. If, for some x ∈ R^m, it holds that μ(Me(x) ∪ Md(x)) = 0, then Q_IP is continuous at x. The latter assumption is fulfilled for all x ∈ R^m if μ has a density.
Proof: Let us first verify that for all x ∈ R^m

M(x) ⊆ lim inf_{x_n→x} M(x_n) ⊆ lim sup_{x_n→x} M(x_n) ⊆ M(x) ∪ Me(x) ∪ Md(x).   (10)

Let h ∈ M(x). The lower semicontinuity of Φ (Proposition 1(i)) yields

lim inf_{x_n→x} (c^T x_n + Φ(h − Tx_n)) ≥ c^T x + Φ(h − Tx) > φ_o.

Therefore, there exists an n_o ∈ N such that c^T x_n + Φ(h − Tx_n) > φ_o for all n ≥ n_o, implying h ∈ M(x_n) for all n ≥ n_o. Hence, M(x) ⊆ lim inf_{x_n→x} M(x_n). Let h ∈ lim sup_{x_n→x} M(x_n) \ M(x). Then there exists an infinite subset Ñ of N such that

c^T x_n + Φ(h − Tx_n) > φ_o  for all n ∈ Ñ,  and  c^T x + Φ(h − Tx) ≤ φ_o.

Now two cases are possible. First, Φ is continuous at h − Tx. Passing to the limit in the first inequality then yields that c^T x + Φ(h − Tx) ≥ φ_o, and h ∈ Me(x). Secondly, Φ is discontinuous at h − Tx. In other words, h ∈ Md(x), and (10) is established. By (9) we have for all x ∈ R^m

Q_IP(x) = μ(M(x)) ≤ μ(lim inf_{x_n→x} M(x_n)) ≤ lim inf_{x_n→x} μ(M(x_n)) = lim inf_{x_n→x} Q_IP(x_n),

verifying the asserted lower semicontinuity. In case μ(Me(x) ∪ Md(x)) = 0 this argument extends:

Q_IP(x) = μ(M(x)) = μ(M(x) ∪ Me(x) ∪ Md(x)) ≥ μ(lim sup_{x_n→x} M(x_n)) ≥ lim sup_{x_n→x} μ(M(x_n)) = lim sup_{x_n→x} Q_IP(x_n),

and Q_IP is continuous at x. According to our discussion preceding Proposition 1, the set Md(x) is contained in a countable union of hyperplanes. In view of (8), the same is true for Me(x). Thus Me(x) ∪ Md(x) is contained in a set of Lebesgue measure zero, and μ(Me(x) ∪ Md(x)) = 0 by the absolute continuity of μ. q.e.d.

3.4 Fatou's Lemma

For a sequence (g_n)_{n∈N} of measurable functions from R^s to R with an integrable minorant g ≤ g_n for all n ∈ N, Fatou's Lemma asserts that

∫_{R^s} lim inf_{n→∞} g_n(h) μ(dh) ≤ lim inf_{n→∞} ∫_{R^s} g_n(h) μ(dh).

Together with the lower semicontinuity of Φ, this will provide the lower semicontinuity of Q_IE.

Proposition 5. The function Q_IE is lower semicontinuous on R^m provided that ∫_{R^s} ‖h‖ μ(dh) < ∞.

Proof: Let x ∈ R^m and x_n → x.
We will apply Fatou's Lemma, essentially to the functions g_n(h) := Φ(h − Tx_n). Denote r := max_{n∈N} ‖x_n‖. Proposition 1(iv) and Φ(0) = 0 then imply

Φ(h − Tx_n) ≥ Φ(0) − |Φ(h − Tx_n) − Φ(0)| ≥ −β‖h − Tx_n‖ − γ ≥ −β‖h‖ − βr‖T‖ − γ,

yielding an integrable minorant g for the family of functions g_n. By Fatou's Lemma and the lower semicontinuity of Φ we have

Q_IE(x) = ∫_{R^s} (c^T x + Φ(h − Tx)) μ(dh)
  ≤ ∫_{R^s} lim inf_{n→∞} (c^T x_n + Φ(h − Tx_n)) μ(dh)
  ≤ lim inf_{n→∞} ∫_{R^s} (c^T x_n + Φ(h − Tx_n)) μ(dh)
  = lim inf_{n→∞} Q_IE(x_n).   q.e.d.

3.5 Lebesgue's Dominated Convergence Theorem

Let g_n, g (n ∈ N) be measurable functions from R^s to R fulfilling lim_{n→∞} g_n = g, μ-almost surely. If there exists an integrable function ḡ ≥ |g_n| for all n ∈ N, μ-almost surely, then Lebesgue's Dominated Convergence Theorem asserts that g_n, g (n ∈ N) are integrable and that

lim_{n→∞} ∫_{R^s} g_n(h) μ(dh) = ∫_{R^s} g(h) μ(dh).

This theorem will lead us to the continuity of Q_IE.

Proposition 6. If ∫_{R^s} ‖h‖ μ(dh) < ∞ and μ(Md(x)) = 0, then Q_IE is continuous at x. The latter assumption is fulfilled for all x ∈ R^m if μ has a density.

Proof: Let x ∈ R^m, x_n → x, and r := max_{n∈N} ‖x_n‖. Again we employ Proposition 1(iv) and Φ(0) = 0, and we obtain

|Φ(h − Tx_n)| = |Φ(h − Tx_n) − Φ(0)| ≤ β‖h‖ + βr‖T‖ + γ,

providing us with an integrable majorant. By μ(Md(x)) = 0, we have

lim_{n→∞} (c^T x_n + Φ(h − Tx_n)) = c^T x + Φ(h − Tx)

for μ-almost all h ∈ R^s. Now Lebesgue's Dominated Convergence Theorem completes the proof:

lim_{n→∞} Q_IE(x_n) = lim_{n→∞} ∫_{R^s} (c^T x_n + Φ(h − Tx_n)) μ(dh) = ∫_{R^s} (c^T x + Φ(h − Tx)) μ(dh) = Q_IE(x).   q.e.d.

3.6 Convergence of Probability Measures – Rubin's Theorem

The dependence of the optimization problems (5) and (6) on the underlying probability measure, although seemingly a theoretical issue, has practical relevance in various respects.
When building models like (5) and (6), the probability measure often enters in a subjective way or results from an approximation based on statistical data. Moreover, the integrals in (5) and (6) are typically multivariate, and their integrands are given only implicitly. This poses insurmountable numerical difficulties if the probability distribution is continuous. A possible remedy is approximation via discrete distributions. All this motivates considerations on whether "small" perturbations of the underlying probability measure result in "small" perturbations of optimal values and optimal solutions to (5) and (6). The mathematical machinery for addressing these issues is provided by the stability analysis of stochastic programs (for surveys see [11,22]). A first and crucial step towards stability analysis is to study Q_IE and Q_IP as functions jointly in x and μ. Again the value function Φ will provide some valuable insights. Let us consider Q_IE and Q_IP as functions on R^m × P(R^s), where P(R^s) denotes the set of all (Borel) probability measures on R^s. As an essential prerequisite, some convergence notion is needed on P(R^s). Here, weak convergence of probability measures has proven both sufficiently general to cover relevant applications and sufficiently specific to enable substantial statements. A sequence (μ_n)_{n∈N} in P(R^s) is said to converge weakly to μ ∈ P(R^s), written μ_n →_w μ, if for any bounded continuous function g : R^s → R we have

∫_{R^s} g(ξ) μ_n(dξ) → ∫_{R^s} g(ξ) μ(dξ)  as n → ∞.   (11)

A basic reference for weak convergence of probability measures is Billingsley's book [4]. We are heading for sufficient conditions for the continuity of Q_IE and Q_IP jointly in x and μ. Beside properties of the value function, a theorem on weak convergence of image measures, attributed in [4] to Rubin, will turn out most useful. This theorem says: Let g_n, g (n ∈ N) be measurable functions from R^s to R and denote E := {h ∈ R^s : ∃ h_n → h such that g_n(h_n) ↛ g(h)}.
If μ_n →_w μ and μ(E) = 0, then μ_n ∘ g_n^{−1} →_w μ ∘ g^{−1}.

Proposition 7. Let μ ∈ P(R^s) be such that μ(Me(x) ∪ Md(x)) = 0. Then the function Q_IP : R^m × P(R^s) → R is continuous at (x, μ).

Proof: Let x_n → x (in R^m) and μ_n →_w μ (in P(R^s)) be arbitrary sequences. By χ_n, χ : R^s → {0, 1} we denote the indicator functions of the sets M(x_n), M(x), n ∈ N. With these functions we consider the exceptional set E from above:

E := {h ∈ R^s : ∃ h_n → h such that χ_n(h_n) ↛ χ(h)}.

To see that E ⊆ Me(x) ∪ Md(x), assume that h ∈ (Me(x) ∪ Md(x))^c = Me(x)^c ∩ Md(x)^c, where the superscript c denotes the set-theoretic complement. Then Φ is continuous at h − Tx, and either c^T x + Φ(h − Tx) > φ_o or c^T x + Φ(h − Tx) < φ_o. Thus, for any sequence h_n → h there is an n_o ∈ N such that, for all n ≥ n_o, either c^T x_n + Φ(h_n − Tx_n) > φ_o or c^T x_n + Φ(h_n − Tx_n) < φ_o. Hence, χ_n(h_n) → χ(h) as h_n → h, implying h ∈ E^c. In view of E ⊆ Me(x) ∪ Md(x) and μ(Me(x) ∪ Md(x)) = 0 we obtain that μ(E) = 0. Rubin's Theorem now yields μ_n ∘ χ_n^{−1} →_w μ ∘ χ^{−1}. Since μ_n ∘ χ_n^{−1}, μ ∘ χ^{−1}, n ∈ N, are probability measures on {0, 1}, their weak convergence particularly implies that

μ_n ∘ χ_n^{−1}({1}) → μ ∘ χ^{−1}({1}).

This is nothing but μ_n(M(x_n)) → μ(M(x)), or Q_IP(x_n, μ_n) → Q_IP(x, μ). q.e.d.

Proposition 8. Fix arbitrary p > 1 and K > 0, and denote Δ_{p,K}(R^s) := {ν ∈ P(R^s) : ∫_{R^s} ‖h‖^p ν(dh) ≤ K}. Let μ ∈ Δ_{p,K}(R^s) be such that μ(Md(x)) = 0. Then the function Q_IE : R^m × Δ_{p,K}(R^s) → R is continuous at (x, μ).

Proof: Let x_n → x in R^m and μ_n →_w μ in Δ_{p,K}(R^s). Introduce measurable functions g_n, n ∈ N, and g by g_n(h) := Φ(h − Tx_n) and g(h) := Φ(h − Tx). For the corresponding exceptional set E a simple continuity argument provides Md(x)^c ⊆ E^c or, equivalently, E ⊆ Md(x). Hence, μ(E) = 0, and Rubin's Theorem yields

μ_n ∘ g_n^{−1} →_w μ ∘ g^{−1}.   (12)

To prove the assertion it is sufficient to show that

lim_{n→∞} ∫_{R^s} g_n(h) μ_n(dh) = ∫_{R^s} g(h) μ(dh).

Changing variables yields the equivalent statement

lim_{n→∞} ∫_R t μ_n ∘ g_n^{−1}(dt) = ∫_R t μ ∘ g^{−1}(dt).

For fixed a ∈ R_+, consider the truncation κ_a : R → R with

κ_a(t) := t if |t| < a,  κ_a(t) := 0 if |t| ≥ a.

Now

|∫_R t μ_n ∘ g_n^{−1}(dt) − ∫_R t μ ∘ g^{−1}(dt)|
  ≤ |∫_R (t − κ_a(t)) μ_n ∘ g_n^{−1}(dt)|
  + |∫_R κ_a(t) μ_n ∘ g_n^{−1}(dt) − ∫_R κ_a(t) μ ∘ g^{−1}(dt)|
  + |∫_R (κ_a(t) − t) μ ∘ g^{−1}(dt)|.   (13)

The proof is completed by showing that, for a given ε > 0, each of the three expressions on the right becomes less than ε/3 provided that n and a are sufficiently large. For the first expression we obtain

|∫_R (t − κ_a(t)) μ_n ∘ g_n^{−1}(dt)| ≤ ∫_{{t : |t| ≥ a}} |t| μ_n ∘ g_n^{−1}(dt) = ∫_{{h : |g_n(h)| ≥ a}} |g_n(h)| μ_n(dh).   (14)

Since p > 1,

∫_{R^s} |g_n(h)|^p μ_n(dh) ≥ ∫_{{h : |g_n(h)| ≥ a}} |g_n(h)| · |g_n(h)|^{p−1} μ_n(dh) ≥ a^{p−1} ∫_{{h : |g_n(h)| ≥ a}} |g_n(h)| μ_n(dh).   (15)

Therefore, the estimate in (14) can be continued by

≤ a^{1−p} ∫_{R^s} |g_n(h)|^p μ_n(dh).   (16)

Proposition 1(iv) and Φ(0) = 0 imply |g_n(h)|^p ≤ (β‖h‖ + β‖x_n‖·‖T‖ + γ)^p. Since (x_n)_{n∈N} is bounded and all μ_n belong to Δ_{p,K}(R^s), there exists a positive constant c such that

∫_{R^s} |g_n(h)|^p μ_n(dh) ≤ c  for all n ∈ N.

Hence, (16) can be estimated above by c/a^{p−1}, which becomes less than ε/3 if a is sufficiently large. We now turn to the second expression in (13). Since every probability measure on the real line has at most countably many atoms, we obtain that μ ∘ g^{−1}({t : |t| = a}) = 0 for (Lebesgue-)almost all a ∈ R. Therefore, κ_a is a measurable function whose set of discontinuity points D_{κ_a} has μ ∘ g^{−1}-measure zero for almost all a ∈ R. We apply Rubin's Theorem to the weakly convergent sequence μ_n ∘ g_n^{−1} →_w μ ∘ g^{−1}, cf. (12), and the identical sequence of functions κ_a. The role of the exceptional set is then taken by D_{κ_a}, and Rubin's Theorem is working due to μ ∘ g^{−1}(D_{κ_a}) = 0.
This yields the conclusion

μ_n ∘ g_n^{−1} ∘ κ_a^{−1} →_w μ ∘ g^{−1} ∘ κ_a^{−1}  for almost all a ∈ R.   (17)

Consider the bounded continuous function η : R → R given by

η(t') := −a if t' ≤ −a;  η(t') := t' if −a ≤ t' ≤ a;  η(t') := a if t' ≥ a.

By the weak convergence in (17), we obtain

∫_R η(t') μ_n ∘ g_n^{−1} ∘ κ_a^{−1}(dt') → ∫_R η(t') μ ∘ g^{−1} ∘ κ_a^{−1}(dt')  as n → ∞.   (18)

Changing variables provides

∫_R η(t') μ_n ∘ g_n^{−1} ∘ κ_a^{−1}(dt') = ∫_{κ_a^{−1}(R)} η(κ_a(t)) μ_n ∘ g_n^{−1}(dt) = ∫_R κ_a(t) μ_n ∘ g_n^{−1}(dt).

Analogously,

∫_R η(t') μ ∘ g^{−1} ∘ κ_a^{−1}(dt') = ∫_R κ_a(t) μ ∘ g^{−1}(dt).

The above identities together with (18) confirm that the second expression on the right-hand side of (13) becomes arbitrarily small for sufficiently large n and almost all sufficiently large a. Let us finally turn to the third expression in (13). Analogously to (14), (15), and (16) we obtain

|∫_R (κ_a(t) − t) μ ∘ g^{−1}(dt)| ≤ a^{1−p} ∫_{R^s} |g(h)|^p μ(dh).

The integral ∫_{R^s} |g(h)|^p μ(dh) is finite due to Proposition 1(iv) and the fact that ∫_{R^s} ‖h‖^p μ(dh) ≤ K. Hence, the third expression in (13) becomes less than ε/3 if a is large enough. q.e.d.

4 Outlook

The previous sections have shown how the mixed-integer value function serves as a point of departure for understanding the basic structure of stochastic integer programs. Let us finally have a look at some further developments whose detailed coverage is beyond the scope of the present paper.

Quantitative Statements. The continuity results of Section 3 are all qualitative by nature. Lipschitz continuity of Q_IE and Q_IP as functions in x has been studied in [19,20,24]. To quantify the continuity of Q_IE and Q_IP as functions of the underlying probability measure, a proper metric on the space of probability measures has to be identified.
Here “proper” means that the metric should permit estimates of distances between function values in the first place, and that it should metrize important modes of convergence, such as weak convergence of probability measures. For the function QIE a first proposal along these lines was made in [21]. The quantitative studies mentioned above require as input refined statements about Φ, such as parts (ii) and (iii) of Proposition 1.

Stability. As already mentioned in Subsection 3.6, perturbation and approximation of the underlying probability measure arise quite naturally in stochastic programming. The stability analysis of stochastic programs then provides justification for replacing unknown probability measures by statistical estimates, or for turning numerically intractable multivariate integrals into manageable finite sums by approximating continuous distributions by discrete ones. Typical stability results assert that optimal values and optimal solutions are (semi-)continuous (multi-)functions of the underlying probability measure [1,20,21]. These results are obtained by putting general techniques from parametric optimization into perspective with stochastic programming [22]. This leads to studying the joint dependence of relevant integral functionals on both the decision variable and the probability measure. For the latter, Propositions 7 and 8 provide paradigmatic examples.

Algorithms. Methods for solving the optimization problems (5) and (6) rest, almost exclusively, on the assumption that the probability measures underlying the models are discrete. This is not a serious restriction since, on the one hand, in many practical situations the uncertain data is available only via discrete observations. On the other hand, the above-mentioned stability results justify approximation by discrete measures should the precise model involve a continuous probability distribution.
With discrete probability measures, the problems (5) and (6) can be rewritten as large-scale, block-structured, mixed-integer linear programs. Decomposition then becomes the algorithmic method of choice, but the presence of integer variables poses a number of open problems. Some first attempts were made in [9,24]; see also the survey [13].

References

1. Artstein, Z.; Wets, R.J.-B.: Stability results for stochastic programs and sensors, allowing for discontinuous objective functions, SIAM Journal on Optimization 4 (1994), 537–550.
2. Bank, B.; Guddat, J.; Klatte, D.; Kummer, B.; Tammer, K.: Non-linear Parametric Optimization, Akademie-Verlag, Berlin, 1982.
3. Bank, B.; Mandel, R.: Parametric Integer Optimization, Akademie-Verlag, Berlin, 1988.
4. Billingsley, P.: Convergence of Probability Measures, Wiley, New York, 1968.
5. Billingsley, P.: Probability and Measure, Wiley, New York, 1986.
6. Birge, J.R.; Louveaux, F.: Introduction to Stochastic Programming, Springer, New York, 1997.
7. Blair, C.E.; Jeroslow, R.G.: The value function of a mixed integer program: I, Discrete Mathematics 19 (1977), 121–138.
8. Bonnans, J.F.; Shapiro, A.: Perturbation Analysis of Optimization Problems, Springer-Verlag, New York, 2000.
9. Carøe, C.C.; Schultz, R.: Dual decomposition in stochastic integer programming, Operations Research Letters 24 (1999), 37–45.
10. Dudley, R.M.: Real Analysis and Probability, Wadsworth & Brooks/Cole, Pacific Grove, California, 1989.
11. Dupačová, J.: Stochastic programming with incomplete information: a survey of results on postoptimization and sensitivity analysis, Optimization 18 (1987), 507–532.
12. Kall, P.; Wallace, S.W.: Stochastic Programming, Wiley, Chichester, 1994.
13. Klein Haneveld, W.K.; van der Vlerk, M.H.: Stochastic integer programming: General models and algorithms, Annals of Operations Research 85 (1999), 39–57.
14. Nemhauser, G.L.; Wolsey, L.A.: Integer and Combinatorial Optimization, Wiley, New York, 1988.
15.
Ogryczak, W.; Ruszczyński, A.: From stochastic dominance to mean-risk models: Semideviations as risk measures, European Journal of Operational Research 116 (1999), 33–50.
16. Ogryczak, W.; Ruszczyński, A.: Dual stochastic dominance and related mean-risk models, Rutcor Research Report 10–2001, Rutgers Center for Operations Research, Piscataway, 2001.
17. Prékopa, A.: Stochastic Programming, Kluwer, Dordrecht, 1995.
18. Rudin, W.: Real and Complex Analysis, McGraw-Hill, New York, 1974.
19. Schultz, R.: Continuity properties of expectation functions in stochastic integer programming, Mathematics of Operations Research 18 (1993), 578–589.
20. Schultz, R.: On structure and stability in stochastic programs with random technology matrix and complete integer recourse, Mathematical Programming 70 (1995), 73–89.
21. Schultz, R.: Rates of convergence in stochastic programs with complete integer recourse, SIAM Journal on Optimization 6 (1996), 1138–1152.
22. Schultz, R.: Some aspects of stability in stochastic programming, Annals of Operations Research 100 (2000), 55–84.
23. Schultz, R.: Probability objectives in stochastic programs with recourse, Preprint 506–2001, Institute of Mathematics, Gerhard-Mercator University Duisburg, 2001.
24. Tiedemann, S.: Probability Functionals and Risk Aversion in Stochastic Integer Programming, Diploma Thesis, Department of Mathematics, Gerhard-Mercator University Duisburg, 2001.

Exact Algorithms for NP-Hard Problems: A Survey

Gerhard J. Woeginger
Department of Mathematics, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands

Abstract. We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area.
The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.

1 Introduction

Every NP-complete problem can be solved by exhaustive search. Unfortunately, as the size of the instances grows, the running time of exhaustive search soon becomes forbiddingly large, even for instances of fairly small size. For some problems it is possible to design algorithms that are significantly faster than exhaustive search, though still not polynomial time. This survey deals with such fast, super-polynomial time algorithms that solve NP-complete problems to optimality. In recent years there has been growing interest in the design and analysis of such super-polynomial time algorithms. This interest has many causes.

– It is now commonly believed that P ≠ NP, and that super-polynomial time algorithms are the best we can hope for when we are dealing with an NP-complete problem. There is a handful of isolated results scattered across the literature, but we are far from developing a general theory. In fact, we have not even started a systematic investigation of the worst case behavior of such super-polynomial time algorithms.

– Some NP-complete problems have better and faster exact algorithms than others. There is a wide variation in the worst case complexities of known exact (super-polynomial time) algorithms. Classical complexity theory cannot explain these differences. Do there exist any relationships among the worst case behaviors of various problems? Is progress on the different problems connected? Can we somehow classify NP-complete problems to see how close we are to the best possible algorithms?

– With the increased speed of modern computers, large instances of NP-complete problems can be solved effectively. For example, it is nowadays routine to solve travelling salesman (TSP) instances with up to 2000 cities. M.
Jünger et al. (Eds.): Combinatorial Optimization (Edmonds Festschrift), LNCS 2570, pp. 185–207, 2003. © Springer-Verlag Berlin Heidelberg 2003
And if the data is nicely structured, then instances with up to 13000 cities can be handled in practice (Applegate, Bixby, Chvátal & Cook [2]). There is a huge gap between the empirical results from testing implementations and the known theoretical results on exact algorithms.

– Fast algorithms with exponential running times may actually lead to practical algorithms, at least for moderate instance sizes. For small instances, an algorithm with an exponential time complexity of $O(1.01^n)$ should usually run much faster than an algorithm with a polynomial time complexity of $O(n^4)$.

In this article we survey known results and approaches to the worst case analysis of exact algorithms for NP-hard problems, and we provide pointers to the literature. Throughout the survey, we will also formulate many exercises and open problems. Open problems refer to unsolved research problems, while exercises pose smaller questions and puzzles that should be fairly easy to solve.

Organization of this survey. Section 2 collects some technical preliminaries and basic definitions that will be used in this article. Sections 3–6 introduce and explain the four main techniques for designing fast exact algorithms: Section 3 deals with dynamic programming across the subsets, Section 4 discusses pruning of search trees, Section 5 illustrates the power of preprocessing the data, and Section 6 considers approaches based on local search. Section 7 discusses methods for proving negative results on the worst case behavior of exact algorithms. Section 8 gives some concluding remarks.

2 Technical Preliminaries

How do we measure the quality of an exact algorithm for an NP-hard problem? Exact algorithms for NP-complete problems are sometimes hard to compare, since their analysis is done in terms of different parameters.
For instance, for an optimization problem on graphs the analysis could be done in terms of the number n of vertices, or possibly in terms of the number m of edges. Since the standard reductions between NP-complete problems may increase the instance sizes, many questions in computational complexity theory depend delicately on the choice of parameters. The right approach seems to be to include an explicit complexity parameter in the problem specification (Impagliazzo, Paturi & Zane [21]). Recall that the decision version of every problem in NP can be formulated in the following way: Given x, decide whether there exists y such that |y| ≤ m(x) and R(x, y). Here x is an instance of the problem; y is a short YES-certificate for this instance; R(x, y) is a polynomial time decidable relation that verifies certificate y for instance x; and m(x) is a polynomial time computable and polynomially bounded complexity parameter that bounds the length of the certificate y. A trivial exact algorithm for solving x would be to enumerate all possible strings of length at most m(x), and to check whether any of them yields a YES-certificate. Up to polynomial factors that depend on the evaluation time of R(x, y), this yields a running time of $2^{m(x)}$. The first goal in exact algorithms is always to break this triviality barrier, and to improve on the time complexity of the trivial enumerative algorithm. Throughout this survey, we will measure the running times of algorithms only with respect to the complexity parameter m(x). We will use a modified big-Oh notation that suppresses all other (polynomially bounded) terms that depend on the instance x and the relation R(x, y). We write $O^*(T(m(x)))$ for a time complexity of the form $O(T(m(x))\cdot \mathrm{poly}(|x|))$. This modification may be justified by the exponential growth of T(m(x)).
Note that, for instance, for simple graphs with m(x) = n vertices and m edges, the running time $1.7344^n \cdot n^2 m^5$ is sandwiched between the running times $1.7344^n$ and $1.7345^n$. We stress, however, that the complexity parameter m(x) in general is not unique, and that it heavily depends on the representation of the input. For an input in the form of an undirected graph, for instance, the complexity parameter might be the number n of vertices or the number m of edges.

Time complexities and complexity classes. Consider a problem in NP as defined above, with instances x and with complexity parameter m(x). An algorithm for this problem has sub-exponential time complexity if the running time depends polynomially on |x| and if the logarithm of the running time depends sub-linearly on m(x). For instance, a running time of $|x|^5 \cdot 2^{\sqrt{m(x)}}$ would be sub-exponential. A problem in NP is contained in the complexity class SUBEXP (the class of SUB-EXPonentially solvable problems) if for every fixed ε > 0 it can be solved in $\mathrm{poly}(|x|)\cdot 2^{\varepsilon\, m(x)}$ time. The complexity class SNP (the class Strict NP) was introduced by Papadimitriou & Yannakakis [32] for studying the approximability of optimization problems. SNP constitutes a subclass of NP, and it contains all problems that can be formulated in a certain way by a logical formula that starts with a series of second-order existential quantifiers, followed by a series of first-order universal quantifiers, followed by a first-order quantifier-free formula (a Boolean combination of input and quantifier relations applied to the quantified element variables). In this survey, the class SNP will only show up in Section 7. As far as this survey is concerned, all we need to know about SNP is that it is a fairly broad complexity class that contains many of the natural combinatorial optimization problems. Downey & Fellows [7] introduced parameterized complexity theory for investigating the complexity of problems that involve a parameter.
This parameter may for instance be the treewidth or the genus of an underlying graph, or an upper bound on the objective value, or in our case the complexity parameter m(x). A whole theory has evolved around such parameterizations, and this has led to the so-called W-hierarchy, an infinite hierarchy of complexity classes: $$FPT \subseteq W[1] \subseteq W[2] \subseteq \cdots \subseteq W[k] \subseteq \cdots \subseteq W[P].$$ We refer the reader to Downey & Fellows [8] for the exact definitions of all these classes. It is commonly believed that all W-classes are pairwise distinct, and hence that all displayed inclusions are strict.

Some classes of optimization problems. Let us briefly discuss some basic classes of optimization problems that contain many classical problems: the class of subset problems, the class of permutation problems, and the class of partition problems. In a subset problem, every feasible solution can be specified as a subset of an underlying ground set. For instance, fixing a truth assignment in the satisfiability problem corresponds to selecting a subset of TRUE variables. In the independent set problem, every subset of the vertex set is a solution candidate. In a permutation problem, every feasible solution can be specified as a total ordering of an underlying ground set. For instance, in the TSP every tour corresponds to a permutation of the cities. In single machine scheduling problems, feasible schedules are often specified as permutations of the jobs. In a partition problem, every feasible solution can be specified as a partition of an underlying ground set. For instance, a graph coloring is a partition of the vertex set into color classes. In parallel machine scheduling problems, feasible schedules are often specified by partitioning the job set and assigning every part to another machine. As we observed above, all NP-complete problems possess trivial algorithms that simply enumerate and check all feasible solutions.
For a ground set with cardinality n, subset problems can be trivially solved in $O^*(2^n)$ time, permutation problems in $O^*(n!)$ time, and partition problems in $O^*(c^{n\log n})$ time; here c > 1 denotes a constant that does not depend on the instance. These time complexities form the triviality barriers for the corresponding classes of optimization problems.

More technical remarks. All optimization problems considered in this survey are known to be NP-complete. We refer the reader to the book [14] by Garey & Johnson for (references to) the NP-completeness proofs. We denote the base-two logarithm of a real number z by log(z).

3 Technique: Dynamic Programming across the Subsets

A standard approach for getting fast exact algorithms for NP-complete problems is to do dynamic programming across the subsets. For every ‘interesting’ subset of the ground set, there is a polynomial number of corresponding states in the state space of the dynamic program. In the cases where all these corresponding states can be computed in reasonable time, this approach usually yields a time complexity of $O^*(2^n)$. We will illustrate these benefits of dynamic programming by developing algorithms for the travelling salesman problem and for total completion time scheduling on a single machine under precedence constraints. Sometimes the number of ‘interesting’ subsets is fairly small, and then an even better time complexity might be possible. This will be illustrated by discussing the graph 3-colorability problem.

The travelling salesman problem (TSP). A travelling salesman has to visit the cities 1 to n. He starts in city 1, runs through the remaining n − 1 cities in arbitrary order, and in the very end returns to his starting point in city 1. The distance from city i to city j is denoted by d(i, j). The goal is to minimize the total travel length of the salesman.
A trivial algorithm for the TSP checks all O(n!) permutations. We now sketch the exact TSP algorithm of Held & Karp [16] that is based on dynamic programming across the subsets. For every non-empty subset S ⊆ {2, . . . , n} and for every city i ∈ S, we denote by Opt[S; i] the length of the shortest path that starts in city 1, then visits all cities in S − {i} in arbitrary order, and finally stops in city i. Clearly, Opt[{i}; i] = d(1, i) and $$\mathrm{Opt}[S; i] \;=\; \min\,\{\mathrm{Opt}[S - \{i\};\, j] + d(j, i) \;:\; j \in S - \{i\}\}.$$ By working through the subsets S in order of increasing cardinality, we can compute each value Opt[S; i] in time proportional to |S|. The optimal travel length is given as the minimum value of Opt[{2, . . . , n}; j] + d(j, 1) over all j with 2 ≤ j ≤ n. This yields an overall time complexity of $O(n^2 2^n)$ and hence $O^*(2^n)$. This result was published in 1962, and from today's point of view almost looks trivial. Still, it yields the best time complexity that is known today.

Open problem 3.1 Construct an exact algorithm for the travelling salesman problem with time complexity $O^*(c^n)$ for some c < 2. In fact, it would even be interesting to reach such a time complexity $O^*(c^n)$ with c < 2 for the closely related, but slightly simpler, Hamiltonian cycle problem (given a graph G on n vertices, does it contain a spanning cycle?).

Hwang, Chang & Lee [19] describe a sub-exponential time $O(c^{\sqrt{n\log n}})$ exact algorithm with some constant c > 1 for the Euclidean TSP. The Euclidean TSP is a special case of the TSP where the cities are points in the Euclidean plane and where the distance between two cities is the Euclidean distance. The approach in [19] is heavily based on planar separator structures, and it cannot be carried over to the general TSP. The approach can be used to get similar time bounds for various NP-complete geometric optimization problems, such as the Euclidean p-center problem and the Euclidean p-median problem.
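The Held–Karp recurrence fits into a few lines of code. The following Python sketch (the function name and the 0-indexed cities are our own illustration, not from [16]) stores Opt[S; i] for subsets S encoded as frozensets:

```python
from itertools import combinations

def held_karp(d):
    """Held-Karp dynamic program across the subsets for the TSP.

    d[i][j] is the distance from city i to city j; cities are 0..n-1
    and the tour starts and ends in city 0.  Runs in O(n^2 * 2^n).
    """
    n = len(d)
    # opt[(S, i)]: length of a shortest path from city 0 through all
    # cities of the frozenset S, ending in city i (i in S, 0 not in S).
    opt = {}
    for i in range(1, n):
        opt[(frozenset([i]), i)] = d[0][i]
    for size in range(2, n):
        for S in map(frozenset, combinations(range(1, n), size)):
            for i in S:
                # some j in S - {i} is visited immediately before i
                opt[(S, i)] = min(opt[(S - {i}, j)] + d[j][i]
                                  for j in S - {i})
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(opt[(full, j)] + d[j][0] for j in range(1, n))
```

Working through the subsets by increasing cardinality is implicit in the `size` loop: every `opt[(S - {i}, j)]` was filled in an earlier iteration.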
Total completion time scheduling under precedence constraints. There is a single machine, and there are n jobs 1, . . . , n that are specified by their length pj and their weight wj (j = 1, . . . , n). Precedence constraints are given by a partial order on the jobs; if job i precedes job j in the partial order (denoted by i → j), then i must be processed to completion before j can begin its processing. All jobs are available at time 0. We only consider non-preemptive schedules, in which all pj time units of job j must be scheduled consecutively. The goal is to schedule the jobs on the single machine such that all precedence constraints are obeyed and such that the total completion time $\sum_{j=1}^{n} w_j C_j$ is minimized; here Cj is the time at which job j is completed in the given schedule. A trivial algorithm checks all O(n!) permutations of the jobs.

Dynamic programming across the subsets yields a time complexity of $O^*(2^n)$. A subset S ⊆ {1, . . . , n} of the jobs is called an ideal if j ∈ S and i → j always implies i ∈ S. In other words, for every job j ∈ S the ideal S also contains all jobs that have to be processed before j. For an ideal S, we denote by first(S) all jobs in S without predecessors, by last(S) all jobs in S without successors, and by $p(S) = \sum_{i\in S} p_i$ the total processing time of the jobs in S. For an ideal S, we denote by Opt[S] the smallest possible total completion time for the jobs in S. Clearly, for any j ∈ first({1, . . . , n}) we have Opt[{j}] = wj pj. Moreover, for |S| ≥ 2 we have $$\mathrm{Opt}[S] \;=\; \min\,\{\mathrm{Opt}[S - \{j\}] + w_j\, p(S) \;:\; j \in \mathrm{last}(S)\}.$$ This DP recurrence is justified by the observation that some job j ∈ last(S) has to be processed last, and thus is completed at time p(S). By working through the ideals S in order of increasing cardinality, we can compute each value Opt[S] in time proportional to |S|. The optimal objective value can be read from Opt[{1, . . . , n}]. This yields an overall time complexity of $O^*(2^n)$.
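The DP across ideals can be sketched as follows. In this minimal Python illustration (the bitmask encoding of job sets and all names are our own, not from the survey), non-ideals are simply skipped, which keeps the overall $O^*(2^n)$ bound:

```python
def min_total_weighted_completion(p, w, prec):
    """DP across ideals for minimizing sum of w_j * C_j under precedences.

    p[j], w[j] are the length and weight of jobs 0..n-1; prec is a list
    of pairs (i, j) meaning job i must precede job j.  Job subsets are
    encoded as bitmasks; runs in O*(2^n).
    """
    n = len(p)
    pred = [0] * n                    # pred[j]: bitmask of predecessors of j
    for i, j in prec:
        pred[j] |= 1 << i
    INF = float('inf')
    opt = [INF] * (1 << n)
    opt[0] = 0
    psum = [0] * (1 << n)             # p(S): total processing time of S
    for S in range(1, 1 << n):
        low = (S & -S).bit_length() - 1
        psum[S] = psum[S & (S - 1)] + p[low]
        # S must be an ideal: every job in S has all its predecessors in S
        if any((S >> j) & 1 and (pred[j] & ~S) for j in range(n)):
            continue
        # some job j in last(S) is processed last and completes at p(S)
        for j in range(n):
            if (S >> j) & 1 and not any((S >> k) & 1 and (pred[k] >> j) & 1
                                        for k in range(n)):
                opt[S] = min(opt[S], opt[S ^ (1 << j)] + w[j] * psum[S])
    return opt[(1 << n) - 1]
```

Note that removing a job of last(S) from an ideal S again yields an ideal, so every `opt[S ^ (1 << j)]` used above is already finite when S is reached.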
Similar approaches yield $O^*(c^n)$ time exact algorithms for many other single machine scheduling problems.

Exercise 3.2 Use dynamic programming across the subsets to get exact algorithms with time complexity $O^*(2^n)$ for the following two scheduling problems. (a) Minimizing the weighted number of late jobs. There are n jobs that are specified by a length pj, a penalty wj, and a due date dj. If a job j is completed after its due date dj, one has to pay the penalty wj. The goal is to sequence the jobs on a single machine such that the total penalty for the late jobs is minimized. (b) Minimizing the total tardiness. There are n jobs that are specified by a length pj and a due date dj. If a job j is completed at time Cj in some fixed schedule, then its tardiness is Tj = max{0, Cj − dj}. The goal is to sequence the jobs on a single machine such that the total tardiness of the jobs is minimized.

Exercise 3.3 Total completion time scheduling under precedence constraints and job release dates. This is the problem that we solved above, but with the additional restriction that a job j cannot be processed before its release date rj. As a consequence, there might be gaps in the middle of the schedule where the machine is idle. Use dynamic programming across the subsets to get an exact algorithm with time complexity $O^*(3^n)$ for this problem.

Graph coloring. Given a graph G = (V, E) with n vertices, color the vertices with the smallest possible number of colors such that adjacent vertices never receive the same color. This smallest possible number is the chromatic number χ(G) of the graph. Every color class is a vertex set without induced edges; such a vertex set is called an independent set. An independent set is maximal if none of its proper supersets is also independent. For any graph G, there exists a feasible coloring with χ(G) colors in which at least one color class is a maximal independent set.
Moon & Moser [29] have shown that a graph with n vertices contains at most $3^{n/3} \approx 1.4422^n$ maximal independent sets. By considering a collection of n/3 independent triangles, we see that this bound is best possible. Paull & Unger [36] designed a procedure that generates all maximal independent sets in a graph in $O(n^2)$ time per generated set. Based on the ideas introduced by Lawler [26], we present a dynamic program across the subsets with a time complexity of $O^*(2.4422^n)$. For a subset S ⊆ V of the vertices, we denote by G[S] the subgraph of G that is induced by the vertices in S, and we denote by Opt[S] the chromatic number of G[S]. If S is empty, then clearly Opt[S] = 0. Moreover, for S ≠ ∅ we have $$\mathrm{Opt}[S] \;=\; 1 + \min\,\{\mathrm{Opt}[S - T] \;:\; T \text{ maximal independent set in } G[S]\}.$$ We work through the sets S in order of increasing cardinality, so that when we are handling S, all its subsets have already been handled. Then the time needed to compute the value Opt[S] is dominated by the time needed to generate all maximal independent subsets T of G[S]. By the above discussion, this can be done in $k^2\, 3^{k/3}$ time, where k is the number of vertices in G[S]. This leads to an overall time complexity of $$\sum_{k=0}^{n} \binom{n}{k}\, k^2\, 3^{k/3} \;\le\; n^2 \sum_{k=0}^{n} \binom{n}{k}\, 3^{k/3} \;=\; n^2\,(1 + 3^{1/3})^n.$$ Since $1 + 3^{1/3} \approx 2.4422$, this yields the claimed time complexity $O^*(2.4422^n)$. Very recently, Eppstein [11] managed to improve this time complexity to $O^*(2.4150^n)$, where $2.4150 \approx 4/3 + 3^{4/3}/4$. His improvement is based on carefully counting the small maximal independent sets in a graph. Finally, we turn to the (much easier) special case of deciding whether χ(G) = 3. Lawler [26] gives a simple $O^*(1.4422^n)$ algorithm: Generate all maximal independent sets S, and check whether the complement graph G[V − S] is bipartite. Schiermeyer [42] describes a rather complicated modification of this idea that improves the time complexity to $O^*(1.415^n)$.
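Lawler's 3-colorability test can be sketched in Python as follows. For brevity this illustration (all names are ours) finds the maximal independent sets by filtering all subsets, which costs $O^*(2^n)$; the stated $O^*(1.4422^n)$ bound would require an output-sensitive generator in the spirit of Paull & Unger [36]:

```python
from itertools import combinations

def is_three_colorable(n, edges):
    """Lawler's test for chi(G) <= 3: some optimal coloring has a
    maximal independent set S as a color class, so it suffices to
    check whether G[V - S] is bipartite for every maximal
    independent set S.  Vertices are 0..n-1."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)

    def independent(S):
        return all(v not in adj[u] for u, v in combinations(S, 2))

    def bipartite(rest):
        # 2-color the induced subgraph G[rest] by depth-first search
        color = {}
        for s in rest:
            if s in color:
                continue
            color[s] = 0
            stack = [s]
            while stack:
                u = stack.pop()
                for v in adj[u] & rest:
                    if v not in color:
                        color[v] = 1 - color[u]
                        stack.append(v)
                    elif color[v] == color[u]:
                        return False
        return True

    for r in range(n + 1):
        for S in map(set, combinations(range(n), r)):
            if independent(S) and \
               all(not independent(S | {v}) for v in set(range(n)) - S):
                if bipartite(set(range(n)) - S):
                    return True
    return False
```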
The first major progress is due to Beigel & Eppstein [4], who get a running time of $O^*(1.3446^n)$ by applying the technique of pruning the search tree; see Section 4 of this survey. The current champion algorithm has a time complexity of $O^*(1.3289^n)$ and is due to Eppstein [10]. This algorithm combines pruning of the search tree with several tricks based on network flows and matching.

Exercise 3.4 (Nielsen [30]) Find an $O^*(1.7851^n)$ exact algorithm that decides for a graph on n vertices whether χ(G) = 4. Hint: Generate all maximal independent sets of cardinality at least n/4 (why?), and use the algorithm from [10] to check their complement graphs. Eppstein [10] also shows that for n/4 ≤ k ≤ n/3, a graph on n vertices contains at most $O(3^{4k-n}\,4^{n-3k})$ maximal independent sets. Apply this result to improve the time complexity for 4-coloring further to $O^*(1.7504^n)$.

Open problem 3.5 Design fast algorithms for k-colorability where k is small, say for k ≤ 6. Design faster exact algorithms for the general graph coloring problem. Can we reach running times around $O^*(2^n)$?

4 Technique: Pruning the Search Tree

Every NP-complete problem can be solved by enumerating and checking all feasible solutions. An organized way of doing this is to (1) concentrate on some piece of the feasible solution, (2) determine all the possible values this piece can take, and (3) branch into several subcases according to these possible values. This naturally defines a search tree: every branching in (3) corresponds to a branching of the search tree into subtrees. Sometimes it can be argued that certain values for a certain piece can never lead to an optimal solution. In these cases we may simply ignore all these values, kill the corresponding subtrees, and speed up the search procedure. Every Branch-and-Bound algorithm is based on this idea, and we can also get exact algorithms with good worst case behavior out of this idea.
However, to get the worst case analysis through, we need a good mathematical understanding of the evolution of the search tree, and we need good estimates on the sizes of the killed subtrees and on the number and sizes of the surviving cases. We will illustrate the technique of pruning the search tree by developing algorithms for the satisfiability problem, for the independent set problem in graphs, and for the bandwidth problem in graphs.

The satisfiability problem. Let X = {x1, x2, . . . , xn} be a set of logical variables. A variable or a negated variable from X is called a literal. A clause over X is the disjunction of literals from X. A Boolean formula is in conjunctive normal form (CNF) if it is the conjunction of clauses over X. A formula in CNF is in k-CNF if all clauses contain at most k literals. A formula is satisfiable if there is a truth assignment from X to {0, 1} which assigns to each variable a Boolean value (0 = false, 1 = true) such that the entire formula evaluates to true. The k-satisfiability problem is the problem of deciding whether a formula F in k-CNF is satisfiable. It is well known that 2-satisfiability is polynomially solvable, whereas k-satisfiability with k ≥ 3 is NP-complete. A trivial algorithm checks all possible truth assignments in $O^*(2^n)$ time. We will now describe an exact $O^*(1.8393^n)$ algorithm for 3-satisfiability that is based on the technique of pruning the search tree. Let F be a Boolean formula in 3-CNF with m clauses ($m \le n^3$). The idea is to branch on one of the clauses c with three literals ℓ1, ℓ2, ℓ3. Every satisfying truth assignment for F must fall into one of the following three classes: (a) literal ℓ1 is true; (b) literal ℓ1 is false, and literal ℓ2 is true; (c) literals ℓ1 and ℓ2 are false, and literal ℓ3 is true.
We fix the values of the corresponding one, two, or three variables appropriately, and we branch into three subtrees according to these cases (a), (b), and (c) with n − 1, n − 2, and n − 3 unfixed variables, respectively. By doing this, we cut away the subtree in which the literals ℓ1, ℓ2, ℓ3 are all false. The formulas in the three subtrees are handled recursively. The stopping criterion is when we reach a formula in 2-CNF, which can be resolved in polynomial time. Denote by T(n) the worst case time that this algorithm needs on a 3-CNF formula with n variables. Then $$T(n) \;\le\; T(n-1) + T(n-2) + T(n-3) + O(n + m).$$ Here the terms T(n − 1), T(n − 2), and T(n − 3) measure the time for solving the subcases with n − 1, n − 2, and n − 3 unfixed variables, respectively. Standard calculations yield that T(n) is within a polynomial factor of $\alpha^n$, where α is the largest real root of $\alpha^3 = \alpha^2 + \alpha + 1$. Since α ≈ 1.8393, this gives a time complexity of $O^*(1.8393^n)$. In a milestone paper in this area, Monien & Speckenmeyer [28] improve the branching step of the above approach. They either detect a clause that can be handled without any branching, or they detect a clause for which the branching only creates formulas that contain one clause with at most k − 1 literals. A careful analysis yields a time complexity of $O^*(\beta^n)$ for k-satisfiability, where β is the largest real root of $\beta = 2 - 1/\beta^{k-1}$. For 3-satisfiability, this time complexity is $O^*(1.6181^n)$. Schiermeyer [41] refines these ideas for 3-satisfiability even further, and performs a quantitative analysis of the number of 2-clauses in the resulting subtrees. This yields a time complexity of $O^*(1.5783^n)$. Kullmann [24,25] writes half a book on the analysis of this approach, and gets time complexities of $O^*(1.5045^n)$ and $O^*(1.4963^n)$ for 3-satisfiability.
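The three-way clause branching can be sketched in a short recursive routine. In this Python illustration (names and clause encoding are our own), 2-clauses are, for brevity, handled by the same branching with two cases instead of a polynomial 2-SAT routine; this does not hurt correctness, and branching on a 2-clause even satisfies the better recurrence T(n) ≤ T(n − 1) + T(n − 2):

```python
def satisfiable(clauses):
    """Clause branching for CNF satisfiability.

    A clause is a tuple of nonzero ints; literal v means variable v is
    true, -v means it is false.  Branching on a clause (l1, l2, l3)
    tries (a) l1 true, (b) l1 false and l2 true, (c) l1, l2 false and
    l3 true, cutting away the all-false subtree.  With 3-clauses this
    gives T(n) <= T(n-1) + T(n-2) + T(n-3), i.e. O*(1.8393^n).
    """
    def assign(cls, lit):
        # make literal lit true: drop satisfied clauses, shrink the rest
        out = []
        for c in cls:
            if lit in c:
                continue
            c = tuple(l for l in c if l != -lit)
            if not c:                 # empty clause: contradiction
                return None
            out.append(c)
        return out

    if not clauses:
        return True                   # no clauses left: satisfied
    clause = min(clauses, key=len)    # branch on a shortest clause
    rest = clauses
    for lit in clause:
        # earlier literals of the clause are already false in `rest`
        branch = assign(rest, lit)
        if branch is not None and satisfiable(branch):
            return True
        rest = assign(rest, -lit)     # make lit false for later branches
        if rest is None:
            return False
    return False
```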
The current champion algorithms for satisfiability are, however, not based on pruning the search tree, but on local search ideas; see Section 6 of this survey.

Exercise 4.1 For a formula F in CNF, consider the following bipartite graph GF: For every logical variable in X, there is a corresponding variable-vertex in GF, and for every clause in F, there is a corresponding clause-vertex in GF. There is an edge from a variable-vertex to a clause-vertex if and only if the corresponding variable is contained (in negated or un-negated form) in the corresponding clause. The planar satisfiability problem is the special case of the satisfiability problem that contains all instances with formulas F in CNF for which the graph GF is planar. Design a sub-exponential time exact algorithm for the planar 3-satisfiability problem! Hint: Use the planar separator theorem of Lipton & Tarjan [27] to break the formula F into two smaller, independent pieces. Running times of roughly $O^*(c^{\sqrt{n}})$ are possible.

The independent set problem. Given a graph G = (V, E) with n vertices, the goal is to find an independent set of maximum cardinality. An independent set S ⊆ V is a set of vertices that does not induce any edges. Moon & Moser [29] have shown that a graph contains at most $3^{n/3} \approx 1.4422^n$ maximal (with respect to inclusion) independent sets. Hence the first goal is to beat the time complexity $O^*(1.4422^n)$. We describe an exact $O^*(1.3803^n)$ algorithm for independent set that is based on the technique of pruning the search tree. Let G be a graph with m edges. The idea is to branch on a high-degree vertex: If all vertices have degree at most two, then the graph is a collection of cycles and paths, and it is straightforward to determine a maximum independent set in such a graph. Otherwise, G contains a vertex v of degree d ≥ 3; let v1, . . . , vd be the neighbors of v in G.
Every independent set I for G must fall into one of the following two classes: (a) I does not contain v. (b) I does contain v; then I cannot contain any neighbor of v. We dive into two subtrees: the first subtree deals with the graph that results from removing vertex v from G, and the second subtree deals with the graph that results from removing v together with v1, . . . , vd from G. We recursively compute a maximum independent set in both subtrees, and update it to a solution for the original graph G. Denote by T(n) the worst case time that this algorithm needs on a graph with n vertices. Then

T(n) ≤ T(n−1) + T(n−4) + O(n + m).

Standard calculations yield that T(n) is within a polynomial factor of γ^n, where γ ≈ 1.3803 is the largest real root of γ^4 = γ^3 + 1. This yields the time complexity O*(1.3803^n).

The first published paper that deals with exact algorithms for maximum independent set is by Tarjan & Trojanowski [46]. They give an algorithm with running time O*(1.2599^n). This algorithm follows essentially the above approach, but performs a smarter (and pretty tedious) structural case analysis of the neighborhood around the high-degree vertex v. The algorithm of Jian [22] has a time complexity of O*(1.2346^n). Robson [38] further refines the approach; a combinatorial argument about connected regular graphs helps to get the running time down to O*(1.2108^n). Robson's algorithm uses exponential space. Beigel [3] presents another algorithm with a weaker time complexity of O*(1.2227^n), but polynomial space complexity. Robson [39] is currently working on a new algorithm that is supposed to run in time O*(1.1844^n). This new algorithm is based on a detailed computer-generated subcase analysis where the number of subcases is in the tens of thousands.

Open problem 42 (a) Construct an exact algorithm for the maximum independent set problem with time complexity O*(c^n) for some c ≤ 1.1. If this really is doable, it will be very tedious to do.
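The two-way branching on a vertex v can be sketched directly in Python. This is our own illustrative sketch: for brevity it simply branches on every vertex, instead of handling the paths-and-cycles base case of maximum degree two separately, so it is correct but does not attain the O*(1.3803^n) bound.

```python
def _delete(adj, drop):
    """Remove the vertices in `drop` from the adjacency structure."""
    return {u: nb - drop for u, nb in adj.items() if u not in drop}

def max_independent_set(adj):
    """Size of a maximum independent set; adj maps vertex -> set of neighbors."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))  # vertex of maximum degree
    if not adj[v]:
        # isolated vertex: it can always join the independent set
        return 1 + max_independent_set(_delete(adj, {v}))
    # case (a): v is not in the independent set
    # case (b): v is in the set, so none of its neighbors v1, ..., vd are
    return max(max_independent_set(_delete(adj, {v})),
               1 + max_independent_set(_delete(adj, {v} | adj[v])))
```

When v has degree d ≥ 3, branch (a) removes one vertex and branch (b) removes at least four, which is exactly the recurrence T(n) ≤ T(n−1) + T(n−4) analyzed above.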
(b) Prove a lower bound on the time complexity of any exact algorithm for maximum independent set that is based on the technique of pruning the search tree and that makes its branching decision by solely considering the subgraph around a fixed chosen vertex.

Exercise 43 (a) Design an algorithm with time complexity O*(1.1602^n) for the restriction of the maximum independent set problem to graphs with maximum degree three! Warning: this is not an easy exercise. See Chen, Kanj & Jia [5] for a solution. (b) Design a sub-exponential time exact algorithm for the restriction of the maximum independent set problem to planar graphs! Hint: Use the planar separator theorem of Lipton & Tarjan [27].

Open problem 44 An input to the Max-Cut problem consists of a graph G = (V, E) on n vertices. The goal is to find a partition of V into two sets V1 and V2 that maximizes the number of edges between V1 and V2 in E. (a) Design an exact algorithm for the Max-Cut problem with time complexity O*(c^n) for some c < 2. (b) Design an exact algorithm for the restriction of the Max-Cut problem to graphs with maximum degree three that has a time complexity O*(c^n) for some c < 1.5. Gramm & Niedermeier [15] state an algorithm with time complexity O*(1.5160^n).

The bandwidth problem. Given a graph G = (V, E) with n vertices, a linear arrangement is a bijective numbering f : V → {1, . . . , n} of the vertices from 1 to n (which can be viewed as a layout of the graph vertices on a line). In some fixed linear arrangement, the stretch of an edge [u, v] ∈ E is the distance |f(u) − f(v)| of its endpoints, and the bandwidth of the linear arrangement is the maximum stretch over all edges. In the bandwidth problem, the goal is to find a linear arrangement of minimum bandwidth for G. A trivial algorithm checks all possible linear arrangements in O*(n!) time.
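The trivial O*(n!) enumeration just mentioned is easy to write down. A minimal sketch under our own conventions (vertices are 0, …, n−1 and edges are given as pairs):

```python
from itertools import permutations

def bandwidth(n, edges):
    """Minimum bandwidth of a graph on vertices 0..n-1, by checking all n! layouts."""
    best = n  # the stretch of an edge can never exceed n - 1
    for layout in permutations(range(n)):
        pos = {v: i for i, v in enumerate(layout)}  # the numbering f
        best = min(best, max(abs(pos[u] - pos[v]) for u, v in edges))
    return best
```

A path laid out in order has bandwidth 1, while a 4-cycle needs bandwidth 2, since no ordering can make all four edges consecutive.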
We will sketch an exact O*(20^n) algorithm for the bandwidth problem that is based on the technique of pruning the search tree. This beautiful algorithm is due to Feige & Kilian [13]. The algorithm checks for every integer b with 1 ≤ b ≤ n in O*(20^n) time whether the bandwidth of the input graph G is less than or equal to b. To simplify the presentation, we assume that both n and b are powers of two (otherwise, analogous but messier arguments go through). Moreover, we assume that G is connected. The algorithm proceeds in two phases. In the first phase, it generates an initial piece of the search tree that branches into up to 5^n subtrees. In the second phase, each of these subtrees is handled in O*(4^n) time per subtree.

The goal of the first phase is to break the set of 'reasonable' linear arrangements into up to n · 5^{n−1} subsets; in each of these subsets, the approximate position of every single vertex is known. More precisely, we partition the interval [1, n] into 2n/b segments of length b/2, and we will assign every vertex to one of these segments. We start with an arbitrary vertex v ∈ V, and we check all 2n/b possibilities for assigning v to some segment. Then we iteratively select a yet unassigned vertex x that has a neighbor y that has already been assigned to some segment. In any linear arrangement with bandwidth b, vertex x cannot be placed more than two segments away from vertex y; hence, vertex x can only be assigned to five possible segments. There are n − 1 vertices to assign, and we end up with O*(5^n) assignments.

In the second phase, we check which of these O*(5^n) assignments can be extended to a linear arrangement of bandwidth at most b. All assignments are handled in the same way: if an assignment stretches some edge from a segment to another segment with at least two other segments in between, then this assignment can never lead to a linear arrangement with bandwidth b; therefore, we may kill such an assignment right away.
If an edge goes from a segment to the same segment or to an adjacent segment, then it will have stretch at most b regardless of the exact positions of the vertices within the segments; therefore, such an edge may be removed. Hence, in the end we are only left with edges that either connect consecutive even numbered segments or consecutive odd numbered segments. The problem decomposes into two independent subproblems, one within the even numbered segments and one within the odd numbered segments. All these subproblems are now solved recursively: we break every segment into two subsegments of equal length, and we try all possibilities for assigning every vertex from every segment to the corresponding left or right subsegment. Some of these refined assignments can be killed right away since they overstretch some edge; other edges are automatically short, and hence can be removed. In any case, we end up with two independent subproblems (one within the right subsegments and one within the left subsegments) that can both be solved recursively. Denote by T(k) the time needed for solving a subproblem with k vertices. Then

T(k) ≤ 2^k · (T(k/2) + T(k/2)).

Standard calculations yield that T(k) is in O*(4^k). Therefore, in the second phase we check O*(5^n) assignments in O*(4^n) time per assignment. This yields an overall time complexity of O*(20^n). Feige & Kilian [13] do a more careful analysis and improve the time complexity below O*(10^n).

Open problem 45 (Feige & Kilian [13]) Does the bandwidth problem have considerably faster exact algorithms? For instance, can it be solved in O*(2^n) time?

Exercise 46 In the minimum sum linear arrangement problem, the input is a graph G = (V, E) with n vertices. The goal is to find a linear arrangement of G that minimizes the sum of the stretches of all edges. Design an exact algorithm with time complexity O*(2^n) for this problem. Hint: Do not use the technique of pruning the search tree.
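The first phase of the bandwidth algorithm can be sketched as follows. This is our own illustrative Python: it enumerates the candidate segment assignments for a connected graph, with every vertex after the first restricted to the at most five segments within distance two of an already assigned neighbor.

```python
def phase1_assignments(adj, num_segments):
    """Yield dicts vertex -> segment index, as in phase 1 of Feige & Kilian.
    adj maps each vertex of a connected graph to its set of neighbors."""
    verts = list(adj)

    def extend(assigned):
        if len(assigned) == len(verts):
            yield dict(assigned)
            return
        # pick an unassigned vertex x that has an already assigned neighbor y
        x, y = next((u, w) for u in verts if u not in assigned
                    for w in adj[u] if w in assigned)
        lo, hi = max(0, assigned[y] - 2), min(num_segments, assigned[y] + 3)
        for seg in range(lo, hi):        # at most five candidate segments
            assigned[x] = seg
            yield from extend(assigned)
            del assigned[x]

    for seg in range(num_segments):      # 2n/b choices for the first vertex
        yield from extend({verts[0]: seg})
```

The total number of generated assignments is at most (number of segments) · 5^{n−1}, matching the counting argument in the text.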
5 Technique: Preprocessing the Data

Preprocessing is an initial phase of computation in which one analyzes and restructures the given data, such that later on certain queries to the data can be answered quickly. By preprocessing an exponentially large data set, or part of this data, in an appropriate way, we may sometimes gain an exponentially large factor in the running time. In this section we will use the technique of preprocessing the data to get fast algorithms for the subset sum problem and for the binary knapsack problem.

We start this section by discussing two very simple, polynomially solvable toy problems where preprocessing helps a lot. In the first toy problem, we are given two integer sequences x1, . . . , xk and y1, . . . , yk and an integer S. We want to decide whether there exist an xi and a yj that sum up to S. A trivial approach would be to check all possible pairs in O(k^2) overall time. A better approach is to first preprocess the data and sort the xi in O(k log k) time. After that, we may repeatedly use bisection search in this sorted array, and search for the k values S − yj in O(log k) time per value. The overall time complexity becomes O(k log k), and we save a factor of k/log k. By applying the same preprocessing, we can also decide in O(k log k) time whether the sequences xi and yj are disjoint, or whether every value xi also occurs in the sequence yj.

In the second toy problem, we are given k points (xi, yi) in two-dimensional space, together with k numbers z1, . . . , zk and a number W. The goal is to determine for every zj the largest value yi subject to the condition that xi + zj ≤ W. The trivial solution needs O(k^2) time, and by applying preprocessing this can be brought down to O(k log k): if there are two points (xi, yi) and (xj, yj) with xi ≤ xj and yi ≥ yj, then the point (xj, yj) may be disregarded, since it is always dominated by (xi, yi).
The subset of non-dominated points can be computed in O(k log k) time by standard methods from computational geometry. We sort the non-dominated points by increasing x-coordinates and store this sequence in an array. This completes the preprocessing. To handle a value zj, we simply search in O(log k) time through the sorted array for the largest value xi less than or equal to W − zj.

In both toy problems, preprocessing improved the time complexity from O(k^2) to O(k log k). Of course, when dealing with exponential time algorithms, an improvement by a factor of k/log k is not impressive at all. The right intuition is to think of k as roughly 2^{n/2}. Then preprocessing the data yields a speedup from k^2 = 2^n to k log k = n·2^{n/2}, and such a speedup of 2^{n/2} indeed is impressive!

The subset sum problem. In this problem, the input consists of positive integers a1, . . . , an and S. The question is whether there exists a subset of the ai that sums up to S. The subset sum problem belongs to the class of subset problems, and can be solved (trivially) in O*(2^n) time. By splitting the problem into two halves and by preprocessing the first half, the time complexity can be brought down to O*(√2^n) ≈ O*(1.4145^n). Let X denote the set of all integers of the form Σ_{i∈I} ai with I ⊆ {1, . . . , ⌊n/2⌋}, and let Y denote the set of all integers of the form Σ_{i∈I} ai with I ⊆ {⌊n/2⌋ + 1, . . . , n}. Note that 0 ∈ X and 0 ∈ Y. It is straightforward to compute X and Y in O*(2^{n/2}) time by complete enumeration. The subset sum instance has a solution if and only if there exist an xi ∈ X and a yj ∈ Y with xi + yj = S. But now we are back at the first toy problem that we discussed at the beginning of this section! By preprocessing X and searching for all the values S − yj in the sorted structure, we can solve this problem in O(n·2^{n/2}) time. This yields an overall time of O*(2^{n/2}).
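Putting the split and the first toy problem together gives the following sketch, our own illustration of the O*(2^{n/2}) method: enumerate all subset sums of each half, sort one half, and use bisection search for the complements.

```python
from bisect import bisect_left

def all_subset_sums(nums):
    """All sums achievable by subsets of nums (the empty subset gives 0)."""
    sums = {0}
    for a in nums:
        sums |= {s + a for s in sums}
    return sums

def subset_sum(a, S):
    """Decide whether some subset of a sums to S, in O*(2^(n/2)) time."""
    half = len(a) // 2
    X = all_subset_sums(a[:half])
    Y = sorted(all_subset_sums(a[half:]))  # preprocessing: sort one half
    for x in X:
        i = bisect_left(Y, S - x)          # bisection search for S - x
        if i < len(Y) and Y[i] == S - x:
            return True
    return False
```

Each half contributes at most 2^{n/2} sums, so sorting and searching cost O(n · 2^{n/2}) overall, in line with the analysis above.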
Exercise 51 An input to the Exact-Hitting-Set problem consists of a ground set X with n elements, and a collection S of subsets over X. The question is whether there exists a subset Y ⊆ X such that |Y ∩ T| = 1 for all T ∈ S. Use the technique of preprocessing the data to get an exact algorithm with time complexity O*(2^{n/2}) ≈ O*(1.4145^n). Drori & Peleg [9] use the technique of pruning the search tree to get a time complexity of O*(1.2494^n) for the Exact-Hitting-Set problem.

Exercise 52 (Van Vliet [47]) In the Three-Partition problem, the input consists of 3n positive integers a1, . . . , an, b1, . . . , bn, and c1, . . . , cn, together with an integer D. The question is to determine whether there exist three permutations π, ψ, φ of {1, . . . , n} such that aπ(i) + bψ(i) + cφ(i) = D holds for all i = 1, . . . , n. By checking all possible triples (π, ψ, φ) of permutations, this problem can be solved trivially in O*((n!)^3) time. Use the technique of preprocessing the data to improve the time complexity to O*(n!).

The binary knapsack problem. Here the input consists of n items that are specified by a positive integer value ai and a positive integer weight wi (i = 1, . . . , n), together with a bound W. The goal is to find a subset of the items with maximum total value subject to the condition that the total weight does not exceed W. The binary knapsack problem is closely related to the subset sum problem, and it can be solved (trivially) in O*(2^n) time. In 1974, Horowitz & Sahni [18] used a preprocessing trick to improve the time complexity to O*(2^{n/2}). For every I ⊆ {1, . . . , ⌊n/2⌋} we create a compound item xI with value aI = Σ_{i∈I} ai and weight wI = Σ_{i∈I} wi, and we put this item into the set X. For every J ⊆ {⌊n/2⌋ + 1, . . . , n} we put a corresponding compound item yJ into the set Y. The sets X and Y can be determined in O*(2^{n/2}) time.
The solution of the knapsack instance now reduces to the following: find a compound item xI in X and a compound item yJ in Y such that wI + wJ ≤ W and such that aI + aJ becomes maximum. But this can be handled by preprocessing as in our second toy problem, and we end up with an overall time complexity and an overall space complexity of O*(2^{n/2}). In 1981, Schroeppel & Shamir [45] improved the space complexity of this approach to O*(2^{n/4}), while leaving its time complexity unchanged. The main trick is to split the instance into four pieces with n/4 items each, instead of two pieces with n/2 items. Apart from this, there has been no progress on exact algorithms for the knapsack problem since 1974.

Open problem 53 Construct an exact algorithm for the subset sum problem or the knapsack problem with time complexity O*(c^n) for some c < √2, or prove that no such algorithm can exist under some reasonable complexity assumptions.

6 Technique: Local Search

The idea of using local search methods in designing exact exponential time algorithms is relatively new. A local search algorithm is a search algorithm that wanders through the space of feasible solutions. At each step, this search algorithm moves from one feasible solution to another one nearby. In order to express the word 'nearby' mathematically, we need some notion of distance or neighborhood on the space of feasible solutions. For instance, in the satisfiability problem the feasible solutions are the truth assignments from the set X of logical variables to {0, 1}. A natural distance between truth assignments is the Hamming distance, that is, the number of bits where two truth assignments differ. In this section we will concentrate on the 3-satisfiability problem, where the input is a Boolean formula F in 3-CNF over the n logical variables in X = {x1, x2, . . . , xn}; see Section 4 for definitions and notations for this problem.
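The Horowitz & Sahni scheme, combined with the dominance filtering of the second toy problem, can be sketched as follows (our own Python illustration; items are given as (value, weight) pairs and the function returns the optimal total value):

```python
from bisect import bisect_right

def compound_items(items):
    """All (weight, value) pairs obtainable from subsets of the given items."""
    pairs = [(0, 0)]
    for value, weight in items:
        pairs += [(w + weight, v + value) for w, v in pairs]
    return pairs

def knapsack(items, W):
    """items: list of (value, weight) pairs; W: weight bound. O*(2^(n/2))."""
    half = len(items) // 2
    X = compound_items(items[:half])
    # preprocessing: keep only non-dominated pairs of the second half,
    # sorted by weight, so that values increase along the array
    Y = []
    for w, v in sorted(compound_items(items[half:])):
        if not Y or v > Y[-1][1]:
            Y.append((w, v))
    weights = [w for w, _ in Y]
    best = 0
    for w, v in X:
        if w <= W:
            i = bisect_right(weights, W - w) - 1  # heaviest feasible partner
            best = max(best, v + Y[i][1])
    return best
```

Since the pair (0, 0) is never dominated, every compound item of the first half that fits finds a valid partner, and the bisection search implements the second toy problem exactly.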
We will describe three exact algorithms for 3-satisfiability that are all based on local search ideas. All three algorithms are centered around the Hamming neighborhood of truth assignments: for a truth assignment t and a non-negative integer d, we denote by H(t, d) the set of all truth assignments that have Hamming distance at most d from assignment t. It is easy to see that H(t, d) contains exactly Σ_{k=0}^{d} (n choose k) elements.

Exercise 61 For a given truth assignment t and a given non-negative integer d, use the technique of pruning the search tree to check in O*(3^d) time whether the Hamming neighborhood H(t, d) contains a satisfying truth assignment for the 3-CNF formula F.

In other words, the Hamming neighborhood H(t, d) can be searched quickly for the 3-satisfiability problem. For the k-satisfiability problem, the corresponding time complexity would be O*(k^d).

First local search approach to 3-satisfiability. We denote by 0^n (respectively, 1^n) the truth assignment that sets all variables to 0 (respectively, to 1). Any truth assignment is in H(0^n, n/2) or in H(1^n, n/2). Therefore, by applying the search algorithm from Exercise 61 twice, we get an exact algorithm with running time O*(√3^n) ≈ O*(1.7321^n) for 3-satisfiability. It is debatable whether this algorithm should be classified under pruning the search tree or under local search. In any case, it is due to Schöning [44].

Second local search approach to 3-satisfiability. In the first approach, we essentially covered the whole solution space by two balls of radius d = n/2 centered at 0^n and 1^n. The second approach works with balls of radius d = n/4. The crucial idea is to randomly choose the center of a ball, and to search this ball with the algorithm from Exercise 61. If we only do this once, then we ignore most of the solution space, and the probability of answering correctly is pretty small.
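The ball search of Exercise 61 admits a compact solution, sketched here in Python under our own conventions (clauses as tuples of signed integers, assignments as dicts): pick any violated clause; a satisfying assignment within Hamming distance d of t must flip at least one of its at most three literals, which gives at most three branches with radius d − 1.

```python
def ball_search(clauses, t, d):
    """Is some truth assignment within Hamming distance d of t satisfying?
    clauses: tuples of signed ints; t: dict mapping variable -> bool."""
    violated = next((c for c in clauses
                     if not any((lit > 0) == t[abs(lit)] for lit in c)), None)
    if violated is None:
        return True   # t itself is satisfying
    if d == 0:
        return False  # no flips left
    for lit in violated:              # at most 3 branches, hence O*(3^d)
        flipped = dict(t)
        flipped[abs(lit)] = not flipped[abs(lit)]
        if ball_search(clauses, flipped, d - 1):
            return True
    return False
```

Running it once around 0^n and once around 1^n with d = n/2 gives exactly the first local search approach above.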
But by repeating this procedure a huge number α of times, we can boost the probability arbitrarily close to 1. A good choice for α is α = 100 · 2^n / Σ_{k=0}^{n/4} (n choose k). The algorithm now works as follows: choose α times a truth assignment t uniformly at random, and search for a satisfying truth assignment in H(t, n/4). If in the end no satisfying truth assignment has been found, then answer that the formula F is not satisfiable.

We will now discuss the running time and the error probability of this algorithm. By Exercise 61, the running time can be bounded by roughly α · 3^{n/4}. By applying Stirling's approximation, one can show that, up to a polynomial factor, the expression Σ_{k=0}^{n/4} (n choose k) behaves asymptotically like (256/27)^{n/4}. Therefore, the upper bound α · 3^{n/4} on the running time is in O*((3/2)^n) = O*(1.5^n).

Now let us analyze the error probability of the algorithm. The only possible error occurs if the formula F is satisfiable, whereas the algorithm does not manage to find a good ball H(t, n/4) that contains some satisfying truth assignment for F. For a single ball, the probability of containing a satisfying truth assignment equals Σ_{k=0}^{n/4} (n choose k) / 2^n, that is, the number of elements in H(t, n/4) divided by the overall number of possible truth assignments. This probability equals 100/α. Therefore, the probability of selecting a ball that does not contain any satisfying truth assignment is 1 − 100/α. The probability of α times not selecting such a ball equals (1 − 100/α)^α, which is bounded by the negligible value e^{−100}.

In fact, the whole algorithm can be derandomized without substantially increasing the running time. Dantsin, Goerdt, Hirsch, Kannan, Kleinberg, Papadimitriou, Raghavan & Schöning [6] do not choose the centers of the balls at random, but take all centers from a so-called covering code, so that the resulting balls cover the whole solution space.
They show that such covering codes can be computed within reasonable amounts of time. The approach in [6] yields deterministic exact algorithms for k-satisfiability with running time O*((2 − 2/(k+1))^n). For 3-satisfiability, [6] improve the time complexity further down to O*(1.4802^n) by using a smart idea for an underlying branching step. This is currently the fastest known deterministic algorithm for 3-satisfiability.

Third local search approach to 3-satisfiability. The first approach was based on selecting the center of a ball deterministically, and then searching through the whole ball. The second approach was based on selecting the center of a ball randomly, and then searching through the whole ball. The third approach now is based on selecting the center of a ball randomly, and then doing a short random walk within the ball. More precisely, the algorithm repeats the following procedure roughly 200 · (4/3)^n times: choose a truth assignment t uniformly at random, and perform 2n steps of a random walk starting in t. In each step, first select a violated clause at random, then select a literal in the selected clause at random, and finally flip the truth value of the corresponding variable. If in the very end no satisfying truth assignment has been found, then answer that the formula F is not satisfiable.

The intuition behind this algorithm is as follows. If we start far away from a satisfying truth assignment, then the random walk has little chance of stumbling towards a satisfying truth assignment; hence it is a good idea to terminate it quite early, after 2n steps, without wasting time. But if the starting point is very close to a satisfying truth assignment, then the probability is high that the random walk will be dragged closer and closer towards this satisfying truth assignment.
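The random walk procedure just described can be sketched as follows (our own Python illustration; in the analysis above, `restarts` would be set to roughly 200 · (4/3)^n):

```python
import random

def random_walk_sat(clauses, n, restarts):
    """Random-restart random walk for 3-SAT; returns a satisfying assignment
    (dict variable -> bool) or None if none was found."""
    for _ in range(restarts):
        t = {v: random.random() < 0.5 for v in range(1, n + 1)}  # random center
        for _ in range(2 * n + 1):  # a walk of 2n steps
            violated = [c for c in clauses
                        if not any((lit > 0) == t[abs(lit)] for lit in c)]
            if not violated:
                return t
            lit = random.choice(random.choice(violated))  # random clause, literal
            t[abs(lit)] = not t[abs(lit)]                 # flip that variable
    return None
```

The algorithm can only err by answering 'not satisfiable' for a satisfiable formula; any assignment it returns is checked and hence correct.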
And if the random walk indeed is dragged into a satisfying truth assignment, then with high probability this happens within the first 2n steps of the random walk. The underlying mathematical structure is a Markov chain that can be analyzed by standard methods. Clearly, the error probability can be made negligibly small by restarting the random walk sufficiently often. And up to a polynomial factor, the running time of the algorithm is proportional to the number of performed random walks. This implies that the time complexity is O*((4/3)^n) ≈ O*(1.3334^n). This algorithm and its analysis are due to Schöning [43]. Some of the underlying ideas go back to Papadimitriou [31], who showed that 2-SAT can be solved in polynomial time by a randomized local search procedure. The algorithm easily generalizes to the k-satisfiability problem, and yields a randomized exact algorithm with time complexity O*((2(k − 1)/k)^n). The fastest known randomized exact algorithm for 3-satisfiability is due to Hofmeister, Schöning, Schuler & Watanabe [17], and has a running time of O*(1.3302^n). It is based on a refinement of the above random walk algorithm.

Open problem 62 Design better deterministic and/or randomized algorithms for the k-satisfiability problem.

More results on exact algorithms for k-satisfiability and related problems can be found in the work of Paturi, Pudlak & Zane [34], Paturi, Pudlak, Saks & Zane [35], Pudlak [37], Rodošek [40], and Williams [48].

7 How Can We Prove That a Problem Has No Sub-exponential Time Exact Algorithm?

All the problems discussed in this paper are NP-complete, and almost all of the developed algorithms use exponential time. Of course, we cannot expect to find polynomial time algorithms for NP-complete problems, but maybe there exist better, sub-exponential, super-polynomial algorithms? How can we settle such questions?
Since our understanding of the landscape around the complexity classes P and NP is still fairly poor, the only available way of proving negative results on exact algorithms is by arguing relative to some widely believed conjectures. For instance, an NP-hardness proof establishes that some problem does not have a polynomial time algorithm, given that the widely believed conjecture P ≠ NP holds true. The right conjecture for disproving the existence of sub-exponential time exact algorithms seems to be the following.

Widely believed conjecture 71 SNP ⊄ SUBEXP.

We already mentioned in Section 2 that the class SNP is a broad complexity class that contains many important combinatorial optimization problems. Therefore, if the widely believed Conjecture 71 is false, then quite unexpectedly all these important problems would possess relatively fast, sub-exponential time algorithms. However, the exact relationship between the P versus NP question and Conjecture 71 is unclear.

Open problem 72 Does SNP ⊆ SUBEXP imply P = NP?

Impagliazzo, Paturi & Zane [21] introduce the concept of SERF-reduction (Sub-Exponential Reduction Family) that preserves sub-exponential time complexities. Consider two problems A1 and A2 in NP with complexity parameters m1 and m2, respectively. A SERF-reduction from A1 to A2 is a family Tε of Turing-reductions from A1 to A2 over all ε > 0 with the following two properties:

– The reduction Tε(x) can be done in time poly(|x|) · 2^{ε·m1(x)}.
– If the reduction Tε(x) queries A2 with input x′, then m2(x′) is linearly bounded in m1(x), and the length of x′ is polynomially bounded in the length of x.

SERF-reducibility is transitive. Moreover, if problem A1 is SERF-reducible to problem A2, and if problem A2 has a sub-exponential time algorithm, then problem A1 also has a sub-exponential time algorithm. Consider some problem A that is hard for the complexity class SNP under SERF-reductions.
If problem A had a sub-exponential time algorithm, then all the problems in SNP would have sub-exponential time algorithms, and this would contradict the widely believed Conjecture 71 that SNP ⊄ SUBEXP.

The k-satisfiability problem plays a central role for sub-exponential time algorithms, the same central role that it plays everywhere else in computational complexity theory. There are two natural complexity parameters for k-satisfiability: the number of logical variables and the number of clauses. Impagliazzo, Paturi & Zane [21] prove that the two variants of k-satisfiability with these two complexity parameters are SERF-reducible to each other, and hence are equivalent under SERF-reductions. This indicates that for k-satisfiability the exact parameterization is not very important, and that all natural parameterizations of k-satisfiability should be SERF-reducible to each other. Most important, the paper [21] shows that for any fixed k ≥ 3 the k-satisfiability problem is SNP-complete under SERF-reductions. As we discussed above, this implies that for any fixed k ≥ 3 the k-satisfiability problem cannot have a sub-exponential time algorithm, unless SNP ⊆ SUBEXP. Therefore, the widely believed Conjecture 71 could also be formulated in the following way.

Widely believed conjecture 73 (Exponential Time Hypothesis, ETH) For any fixed k ≥ 3, k-satisfiability does not have a sub-exponential time algorithm.

Now let sk denote the infimum of all real numbers δ with the property that there exists an O*(2^{δn}) exact algorithm for solving the k-satisfiability problem. Observe that sk ≤ sk+1 and 0 ≤ sk ≤ 1 hold trivially for all k ≥ 3. The exponential time hypothesis conjectures that sk > 0 for all k ≥ 3, and that the numbers sk converge to some limit s∞ > 0. Impagliazzo & Paturi [20] prove that under ETH, sk ≤ (1 − α/k) · s∞ holds, where α is some small positive constant.
Consequently, under ETH we can never have sk = s∞, and the time complexities for k-satisfiability must increase more and more as k increases.

Open problem 74 (Impagliazzo & Paturi [20]) Assuming the exponential time hypothesis for k-satisfiability, obtain evidence for the hypothesis that s∞ = 1.

Now let us discuss the behavior of some other problems in NP. Impagliazzo, Paturi & Zane [21] show that for any fixed k ≥ 3 the k-colorability problem is SNP-complete under SERF-reductions. Hence 3-colorability cannot be solved in sub-exponential time, unless SNP ⊆ SUBEXP. The paper [21] also shows that the Hamiltonian cycle problem and the independent set problem (both with the number of vertices as complexity parameter) cannot be solved in sub-exponential time, unless SNP ⊆ SUBEXP. Johnson & Szegedy [23] strengthen the result on the independent set problem by showing that the independent set problem in arbitrary graphs is as difficult as in graphs with maximum degree three: either both of these problems have a sub-exponential time algorithm, or neither of them does. Feige & Kilian [13] prove that the bandwidth problem, too, cannot be solved in sub-exponential time, unless SNP ⊆ SUBEXP. For all results listed in this paragraph, the proofs are done by translating classical NP-hardness proofs from the 1970s into SERF-reductions. The main technical problem is to keep the complexity parameters m(x) under control.

In another line of research, Feige & Kilian [12] show that if independent sets of size O(log n) can be found in polynomial time in graphs with n vertices, then the 3-satisfiability problem can be solved in sub-exponential time. This result probably does not speak against the ETH, but indicates that finding small independent sets is difficult.

The W-hierarchy gives rise to yet another widely believed conjecture that can be used for disproving the existence of sub-exponential time exact algorithms.
As we already mentioned in Section 2, the general belief is that all the W-classes are pairwise distinct. The following (cautious) conjecture only states that the W-hierarchy does not collapse completely.

Widely believed conjecture 75 FPT ≠ W[P].

Abrahamson, Downey & Fellows [1] proved that Conjecture 75 is false if and only if the satisfiability problem for Boolean circuits can be solved in sub-exponential time poly(|x|) · 2^{o(n)}. Here |x| denotes the size of the Boolean circuit that is given as input, n denotes the number of input variables of the circuit, and o(n) denotes some sub-linear function in n. Since the k-satisfiability problem is a special case of the Boolean circuit satisfiability problem, the exponential time hypothesis ETH implies Conjecture 75. It is not known whether the reverse implication also holds.

Most optimization problems that are mentioned in this survey possess exact algorithms with time complexity O*(c^{m(x)}), i.e., exponential time where the exponent grows linearly in the complexity parameter m(x). The quadratic assignment problem is a candidate for a natural problem that does not possess such an exact algorithm.

Open problem 76 In the quadratic assignment problem (QAP), the input consists of two n × n matrices A = (aij) and B = (bij) (1 ≤ i, j ≤ n) with real entries. The objective is to find a permutation π that minimizes the cost function Σ_{i=1}^{n} Σ_{j=1}^{n} aπ(i)π(j) bij. The QAP can be solved in O*(n!) time. The QAP is a notoriously hard problem, and no essentially faster algorithms are known (Pardalos, Rendl & Wolkowicz [33]). Prove that (under some reasonable complexity assumptions) the QAP cannot be solved in O*(c^n) time, for any fixed value c.

8 Concluding Remarks

Currently, when we are dealing with an optimization problem, we are used to looking at its computational complexity, its approximability behavior, its online behavior (with respect to competitive analysis), and its polyhedral structure.
Exact algorithms with good worst case behavior should probably become another standard item on this list, and we feel that the known techniques and results described in Sections 3–6 deserve to be taught in our introductory algorithms courses. There remain many open problems and challenging questions around the worst case analysis of exact algorithms for NP-hard problems. This seems to be a rich and promising area. We only have a handful of techniques available, and there is ample space for improvements and for new results.

Acknowledgement. I thank David Eppstein, Jesper Makholm Nielsen, and Ryan Williams for several helpful comments on preliminary versions of this paper, and for providing some pointers to the literature. Furthermore, I thank an unknown referee for many suggestions on how to improve the structure, the English, the style, and the contents of this paper.

References

1. K.A. Abrahamson, R.G. Downey, and M.R. Fellows [1995]. Fixed-parameter tractability and completeness IV: On completeness for W[P] and PSPACE analogues. Annals of Pure and Applied Logic 73, 235–276.
2. D. Applegate, R. Bixby, V. Chvátal, and W. Cook [1998]. On the solution of travelling salesman problems. Documenta Mathematica 3, 645–656.
3. R. Beigel [1999]. Finding maximum independent sets in sparse and general graphs. Proceedings of the 10th ACM-SIAM Symposium on Discrete Algorithms (SODA'1999), 856–857.
4. R. Beigel and D. Eppstein [1995]. 3-Coloring in time O(1.3446^n): A no-MIS algorithm. Proceedings of the 36th Annual Symposium on Foundations of Computer Science (FOCS'1995), 444–453.
5. J. Chen, I.A. Kanj, and W. Jia [1999]. Vertex cover: Further observations and further improvements. Proceedings of the 25th Workshop on Graph Theoretic Concepts in Computer Science (WG'1999), Springer, LNCS 1665, 313–324.
6. E. Dantsin, A. Goerdt, E.A. Hirsch, R. Kannan, J. Kleinberg, C.H. Papadimitriou, P. Raghavan, and U. Schöning [2001].
A deterministic (2 − 2/(k+1))^n algorithm for k-SAT based on local search. To appear in Theoretical Computer Science.
7. R.G. Downey and M.R. Fellows [1992]. Fixed parameter intractability. Proceedings of the 7th Annual IEEE Conference on Structure in Complexity Theory (SCT'1992), 36–49.
8. R.G. Downey and M.R. Fellows [1999]. Parameterized Complexity. Springer Monographs in Computer Science.
9. L. Drori and D. Peleg [1999]. Faster exact solutions for some NP-hard problems. Proceedings of the 7th European Symposium on Algorithms (ESA'1999), Springer, LNCS 1643, 450–461.
10. D. Eppstein [2001]. Improved algorithms for 3-coloring, 3-edge-coloring, and constraint satisfaction. Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms (SODA'2001), 329–337.
11. D. Eppstein [2001]. Small maximal independent sets and faster exact graph coloring. Proceedings of the 7th Workshop on Algorithms and Data Structures (WADS'2001), Springer, LNCS 2125, 462–470.
12. U. Feige and J. Kilian [1997]. On limited versus polynomial nondeterminism. Chicago Journal of Theoretical Computer Science (http://cjtcs.cs.uchicago.edu/).
13. U. Feige and J. Kilian [2000]. Exponential time algorithms for computing the bandwidth of a graph. Manuscript.
14. M.R. Garey and D.S. Johnson [1979]. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco.
15. J. Gramm and R. Niedermeier [2000]. Faster exact solutions for Max2Sat. Proceedings of the 4th Italian Conference on Algorithms and Complexity (CIAC'2000), Springer, LNCS 1767, 174–186.
16. M. Held and R.M. Karp [1962]. A dynamic programming approach to sequencing problems. Journal of SIAM 10, 196–210.
17. T. Hofmeister, U. Schöning, R. Schuler, and O. Watanabe [2001]. A probabilistic 3-SAT algorithm further improved. Manuscript.
18. E. Horowitz and S. Sahni [1974]. Computing partitions with applications to the knapsack problem. Journal of the ACM 21, 277–292.
19. R.Z. Hwang, R.C. Chang, and R.C.T. Lee [1993].
The searching over separators strategy to solve some NP-hard problems in subexponential time. Algorithmica 9, 398–423.
20. R. Impagliazzo and R. Paturi [2001]. Complexity of k-SAT. Journal of Computer and System Sciences 62, 367–375.
21. R. Impagliazzo, R. Paturi, and F. Zane [1998]. Which problems have strongly exponential complexity? Proceedings of the 39th Annual Symposium on Foundations of Computer Science (FOCS'1998), 653–663.
22. T. Jian [1986]. An O(2^{0.304n}) algorithm for solving maximum independent set problem. IEEE Transactions on Computers 35, 847–851.
23. D.S. Johnson and M. Szegedy [1999]. What are the least tractable instances of max independent set? Proceedings of the 10th ACM-SIAM Symposium on Discrete Algorithms (SODA'1999), 927–928.
24. O. Kullmann [1997]. Worst-case analysis, 3-SAT decisions, and lower bounds: Approaches for improved SAT algorithms. In: The Satisfiability Problem: Theory and Applications, D. Du, J. Gu, P.M. Pardalos (eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science 35, 261–313.
25. O. Kullmann [1999]. New methods for 3-SAT decision and worst case analysis. Theoretical Computer Science 223, 1–72.
26. E.L. Lawler [1976]. A note on the complexity of the chromatic number problem. Information Processing Letters 5, 66–67.
27. R.J. Lipton and R.E. Tarjan [1979]. A separator theorem for planar graphs. SIAM Journal on Applied Mathematics 36, 177–189.
28. B. Monien and E. Speckenmeyer [1985]. Solving satisfiability in less than 2^n steps. Discrete Applied Mathematics 10, 287–295.
29. J.W. Moon and L. Moser [1965]. On cliques in graphs. Israel Journal of Mathematics 3, 23–28.
30. J.M. Nielsen [2001]. Personal communication.
31. C.H. Papadimitriou [1991]. On selecting a satisfying truth assignment. Proceedings of the 32nd Annual Symposium on Foundations of Computer Science (FOCS'1991), 163–169.
32. C.H. Papadimitriou and M. Yannakakis [1991]. Optimization, approximation, and complexity classes.
Journal of Computer and System Sciences 43, 425–440.
33. P. Pardalos, F. Rendl, and H. Wolkowicz [1994]. The quadratic assignment problem: A survey and recent developments. In: Proceedings of the DIMACS Workshop on Quadratic Assignment Problems, P. Pardalos and H. Wolkowicz (eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science 16, 1–42.
34. R. Paturi, P. Pudlak, and F. Zane [1997]. Satisfiability coding lemma. Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS'1997), 566–574.
35. R. Paturi, P. Pudlak, M.E. Saks, and F. Zane [1998]. An improved exponential time algorithm for k-SAT. Proceedings of the 39th Annual Symposium on Foundations of Computer Science (FOCS'1998), 628–637.
36. M. Paull and S. Unger [1959]. Minimizing the number of states in incompletely specified sequential switching functions. IRE Transactions on Electronic Computers 8, 356–367.
37. P. Pudlak [1998]. Satisfiability – algorithms and logic. Proceedings of the 23rd International Symposium on Mathematical Foundations of Computer Science (MFCS'1998), Springer, LNCS 1450, 129–141.
38. J.M. Robson [1986]. Algorithms for maximum independent sets. Journal of Algorithms 7, 425–440.
39. J.M. Robson [2001]. Finding a maximum independent set in time O(2^{n/4})? Manuscript.
40. R. Rodošek [1996]. A new approach on solving 3-satisfiability. Proceedings of the 3rd International Conference on Artificial Intelligence and Symbolic Mathematical Computation, Springer, LNCS 1138, 197–212.
41. I. Schiermeyer [1992]. Solving 3-satisfiability in less than O(1.579^n) steps. Selected papers from Computer Science Logic (CSL'1992), Springer, LNCS 702, 379–394.
42. I. Schiermeyer [1993]. Deciding 3-colorability in less than O(1.415^n) steps. Proceedings of the 19th Workshop on Graph Theoretic Concepts in Computer Science (WG'1993), Springer, LNCS 790, 177–182.
43. U. Schöning [1999].
A probabilistic algorithm for k-SAT and constraint satisfaction problems. Proceedings of the 40th Annual Symposium on Foundations of Computer Science (FOCS'1999), 410–414.
44. U. Schöning [2001]. New algorithms for k-SAT based on the local search principle. Proceedings of the 26th International Symposium on Mathematical Foundations of Computer Science (MFCS'2001), Springer, LNCS 2136, 87–95.
45. R. Schroeppel and A. Shamir [1981]. A T = O(2^{n/2}), S = O(2^{n/4}) algorithm for certain NP-complete problems. SIAM Journal on Computing 10, 456–464.
46. R.E. Tarjan and A.E. Trojanowski [1977]. Finding a maximum independent set. SIAM Journal on Computing 6, 537–546.
47. A. van Vliet [1995]. Personal communication.
48. R. Williams [2002]. Algorithms for quantified Boolean formulas. Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms (SODA'2002).