
Operations Research: Linear Programming Introduction


These lecture notes introduce the basic concepts of linear programming in operations research.

Introduction: Why Linear Programming?
- Important tool by itself.
- Theoretical basis for later developments (integer programming, network, graph, nonlinear, scheduling, sets, coding, game theory, ...).
- Formulation plus a software package is not enough for advanced applications and for interpreting the results.

Linear Programming (LP)
- A model consisting of linear relationships representing a firm's objective and resource constraints.
- LP is a mathematical modeling technique used to determine a level of operational activity in order to achieve an objective, subject to restrictions called constraints.

Linear Programming Modeling Process
1. Recognition and definition of the real-world problem
2. Formulation and construction of the mathematical model
3. Solution of the model
4. Validation and sensitivity analysis of the model
5. Interpretation of the results
6. Implementation

Linear Programming Mathematical Model
- decision variables
- linear objective function (maximization or minimization)
- linear constraints: equations (=) or inequalities (≤ or ≥)
- nonnegativity constraints

Mathematical Programming Problem
  min/max f(x)
  subject to g_i(x) ≤ 0, i = 1, ..., m,
             (h_j(x) = 0, j = 1, ..., k,)
             (x ∈ X ⊆ R^n),
where f, g_i, h_j : R^n → R.
- If f, g_i, h_j are linear (affine) functions: linear programming problem.
- If f, g_i, h_j (or some of them) are nonlinear functions: nonlinear programming problem.
- If the solution set is restricted to integer points: integer programming problem.

Linear programming: the problem of optimizing (maximizing or minimizing) a linear (objective) function subject to linear inequality constraints.
General form:
  {max, min} c'x
  subject to a_i'x ≥ b_i, i ∈ M1
             a_i'x ≤ b_i, i ∈ M2
             a_i'x = b_i, i ∈ M3
             x_j ≥ 0, j ∈ N1
             x_j ≤ 0, j ∈ N2,
with c, a_i, x ∈ R^n. (There may also be variables unrestricted in sign.)
Inner product of two column vectors x, y ∈ R^n: x'y = Σ_{i=1}^n x_i y_i. If x'y = 0 and x, y ≠ 0, then x and y are said to be orthogonal; in 3-D, the angle between the two vectors is 90 degrees. (Vectors are column vectors unless specified otherwise.)

- The big difference from systems of linear equations is the existence of an objective function and of linear inequalities (instead of equalities). This gives much deeper theoretical results and wider applicability than systems of linear equations.
- Terminology: x_1, x_2, ..., x_n are the (decision) variables; b_i is the right-hand side; a_i'x {≥, ≤, =} b_i is the i-th constraint; x_j {≥, ≤} 0 is a nonnegativity (nonpositivity) constraint; c'x is the objective function. Other terminology: feasible solution, feasible set (region), free (unrestricted) variable, optimal (feasible) solution, optimal cost, unbounded.

Important submatrix multiplications (interpretation of constraints)
Write the m×n matrix A in terms of its rows a_1', ..., a_m' and its columns A_1, ..., A_n. Then
  Ax = Σ_{j=1}^n A_j x_j = Σ_{i=1}^m (a_i'x) e_i, where e_i is the i-th unit vector, and
  y'A = Σ_{i=1}^m y_i a_i' = Σ_{j=1}^n (y'A_j) e_j'.
We denote the constraints as Ax {≥, ≤, =} b.

Any LP can be expressed as min c'x, Ax ≥ b:
- max c'x is equivalent to min (-c'x); take the negative of the optimal cost.
- a_i'x ≤ b_i is equivalent to -a_i'x ≥ -b_i.
- a_i'x = b_i is equivalent to a_i'x ≥ b_i and -a_i'x ≥ -b_i.
- Nonnegativity (nonpositivity) constraints are special cases of inequalities, but they will be handled separately in the algorithms.
The feasible solution set of an LP can therefore always be expressed as Ax ≥ b (or Ax ≤ b); such a set is called a polyhedron, a set which can be described as the solution set of finitely many linear inequalities. We may sometimes use the form max c'x, Ax ≤ b (especially when we study polyhedra).
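The reduction above can be checked numerically. The following is a minimal sketch, not part of the original notes, using SciPy's linprog on a small made-up LP in mixed form; the data and the example itself are invented for illustration only.

```python
# Check that a mixed-form LP and its reduction to "min c'x, Ax >= b"
# give the same optimum (after negating the cost for the max problem):
#   max 2 x1 + 3 x2   s.t.  x1 + x2 <= 4,  x1 - x2 = 1,  x1, x2 >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])

# 1) Solve the original mixed form directly (linprog minimizes, so pass -c).
direct = linprog(-c, A_ub=[[1.0, 1.0]], b_ub=[4.0],
                 A_eq=[[1.0, -1.0]], b_eq=[1.0], method="highs")

# 2) Reduction to the single form  min (-c)'x,  Ax >= b:
#    <= rows are negated, the = row becomes two >= rows, and the
#    nonnegativity constraints are written as ordinary >= rows.
A = np.array([[-1.0, -1.0],    # -(x1 + x2) >= -4
              [ 1.0, -1.0],    #   x1 - x2  >=  1
              [-1.0,  1.0],    # -(x1 - x2) >= -1
              [ 1.0,  0.0],    #   x1       >=  0
              [ 0.0,  1.0]])   #   x2       >=  0
b = np.array([-4.0, 1.0, -1.0, 0.0, 0.0])
reduced = linprog(-c, A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * 2, method="highs")

# Both report the same maximum after negating the optimal cost.
print(-direct.fun, -reduced.fun)   # 9.5  9.5
```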
Brief History of LP (and Optimization)
- Gauss: Gaussian elimination to solve systems of equations.
- Fourier (early 19th century) and Motzkin (20th century): solving systems of linear inequalities.
- Farkas, Minkowski, Weyl, Carathéodory, ... (19th-20th century): mathematical structures related to LP (polyhedra, systems of alternatives, polarity).
- Kantorovich (1930s): efficient allocation of resources (Nobel prize in 1975, shared with Koopmans).
- Dantzig (1947): simplex method.
- 1950s: emergence of network theory, integer and combinatorial optimization; development of the computer.
- 1960s: further developments.
- 1970s: disappointment, NP-completeness, more realistic expectations.
- Khachiyan (1979): ellipsoid method for LP.
- 1980s: personal computers, easy access to data, willingness to use models.
- Karmarkar (1984): interior point method.
- 1990s: improved theory and software, powerful computers, software add-ins to spreadsheets, modeling languages, large-scale optimization, more intermixing of O.R. and A.I.
- Markowitz (1990): Nobel prize for portfolio selection (quadratic programming).
- Nash (1994): Nobel prize for game theory.
- 21st century (?): lots of opportunities: more accurate and timely data, more theoretical developments, better software and computers, the need for more automated decision making for complex systems, and the need for coordination for efficient use of resources (e.g. supply chain management, traditional engineering problems, bio, ...).

Application Areas of Optimization
Operations management, production planning, scheduling (production, personnel, ...), transportation planning and logistics, energy, military, finance, marketing, e-business, telecommunications, games, engineering optimization (mechanical, electrical, bioinformatics, ...), system design, ...

Standard Form Problems
- Standard form: min c'x, Ax = b, x ≥ 0, where Ax = Σ_{j=1}^n A_j x_j = Σ_{i=1}^m (a_i'x) e_i.
- Interpretation: find optimal (nonnegative) weights on the columns of A so that their nonnegative linear combination equals the vector b; equivalently, find an optimal solution that satisfies the linear equations and nonnegativity.
- Reduction to standard form:
  - Free (unrestricted) variable: x_j = x_j+ - x_j-, with x_j+, x_j- ≥ 0.
  - Σ_j a_ij x_j ≤ b_i becomes Σ_j a_ij x_j + s_i = b_i, s_i ≥ 0 (slack variable).
  - Σ_j a_ij x_j ≥ b_i becomes Σ_j a_ij x_j - s_i = b_i, s_i ≥ 0 (surplus variable).
- In practice, algorithms solve the LP in equality form only (apart from nonnegativity). A modified form of the simplex method can handle free variables directly (without splitting them into the difference of two variables), which gives a more sensible interpretation of the behavior of the algorithm.
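The reduction to standard form can also be checked numerically. The sketch below is not part of the original notes; it uses SciPy's linprog (with the 'highs' solver, available in recent SciPy versions) on a small made-up LP with free variables, and verifies that the inequality form and its standard-form reduction report the same optimal cost.

```python
# Reduction to standard form, checked numerically:
#   min  x1 + x2   s.t.  x1 + x2 >= 2,  x1 - x2 <= 3,  -x1 + x2 <= 3,  x1, x2 free
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, -1.0],   # x1 + x2 >= 2 written as -x1 - x2 <= -2
                 [ 1.0, -1.0],
                 [-1.0,  1.0]])
b_ub = np.array([-2.0, 3.0, 3.0])

# 1) Solve the inequality form directly, with free variables.
direct = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(None, None)] * 2, method="highs")

# 2) Standard form min c'z, A_eq z = b, z >= 0:
#    split each free variable as x = x+ - x-, and add one slack per row.
m, n = A_ub.shape
A_eq = np.hstack([A_ub, -A_ub, np.eye(m)])   # columns: x+, x-, slacks
c_eq = np.concatenate([c, -c, np.zeros(m)])
standard = linprog(c_eq, A_eq=A_eq, b_eq=b_ub, method="highs")

print(direct.fun, standard.fun)   # both report the same optimal cost (2.0)
```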
Formulation Examples

Minimum cost network flow problem
Given a directed network G = (N, A) with |N| = n, arc capacities u_ij for (i, j) ∈ A, unit flow costs c_ij for (i, j) ∈ A, and b_i the net supply at node i (b_i > 0: supply node, b_i < 0: demand node, with Σ_i b_i = 0), find a minimum cost transportation plan that satisfies the supply and demand at each node and the arc capacities.

  minimize   Σ_{(i,j)∈A} c_ij x_ij
  subject to Σ_{j:(i,j)∈A} x_ij - Σ_{j:(j,i)∈A} x_ji = b_i, i = 1, ..., n
             (out-flow - in-flow = net flow at node i; some authors use in-flow - out-flow = net flow)
             x_ij ≤ u_ij, (i, j) ∈ A
             x_ij ≥ 0, (i, j) ∈ A

Choosing paths in a communication network (the (fractional) multicommodity flow problem)
- Multicommodity flow problem: several commodities share the network. For each commodity it is a minimum cost network flow problem, but the commodities must share the capacities of the arcs. It is a generalization of the minimum cost network flow problem, with many applications in communication and distribution/transportation systems.
- "Several commodities" here really means one kind of traffic, but multiple origin-destination pairs of nodes (telecom, logistics, ...).
- Given a directed telecommunication network with arc set A, arc capacity u_ij bits/sec for (i, j) ∈ A, unit flow cost c_ij per bit for (i, j) ∈ A, and demand b^kl bits/sec for traffic from node k to node l. Data can be sent over more than one path. Find paths that route the demands at minimum cost.

Flow formulation
Decision variables: x_ij^kl = amount of data with origin k and destination l that traverses link (i, j) ∈ A.
Let b_i^kl = b^kl if i = k, -b^kl if i = l, and 0 otherwise.

  minimize   Σ_{(i,j)∈A} Σ_k Σ_l c_ij x_ij^kl
  subject to Σ_{j:(i,j)∈A} x_ij^kl - Σ_{j:(j,i)∈A} x_ji^kl = b_i^kl, i, k, l = 1, ..., n
             (out-flow - in-flow = net flow at node i for the commodity from node k to node l)
             Σ_k Σ_l x_ij^kl ≤ u_ij, (i, j) ∈ A
             (the sum over all commodities must not exceed the capacity of link (i, j))
             x_ij^kl ≥ 0, (i, j) ∈ A, k, l = 1, ..., n
(A small two-commodity numerical sketch of this formulation is given at the end of this part.)

Alternative formulation (path formulation)
Let K be the set of origin-destination pairs (commodities), P(k) the set of all possible paths for sending commodity k ∈ K, P(k; e) the set of paths in P(k) that traverse arc e ∈ A, and E(p) the set of links contained in path p.
Decision variables: y_p^k = fraction of commodity k sent on path p.

  minimize   Σ_{k∈K} Σ_{p∈P(k)} w_p^k y_p^k
  subject to Σ_{p∈P(k)} y_p^k = 1, for all k ∈ K
             Σ_{k∈K} Σ_{p∈P(k;e)} b^k y_p^k ≤ u_e, for all e ∈ A
             0 ≤ y_p^k ≤ 1, for all p ∈ P(k), k ∈ K,
where w_p^k = b^k Σ_{e∈E(p)} c_e.
If y_p^k ∈ {0, 1}, this becomes a single-path routing problem (path selection, integer multicommodity flow).

- The path formulation has a smaller number of constraints but an enormous number of variables; it can be solved efficiently by the column generation technique. The integer version is more difficult to solve.
- Extensions: network design, where we also determine the number and type of facilities to be installed on the links (and/or nodes) together with the routing of the traffic.
- Variations: integer flows; splitting (bifurcation) of traffic may not be allowed; determining capacities and routing while accounting for rerouting of traffic in case of network failure; robust network design (data uncertainty); ...
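The following is a minimal sketch, not from the original notes, of the flow formulation above for a tiny made-up instance: two commodities (8 units from node 1 to node 3, and 6 units from node 2 to node 3) share three arcs. The network, costs, and capacities are invented for illustration; SciPy's linprog is used as the solver.

```python
# Two-commodity minimum cost flow sharing arc capacities.
import numpy as np
from scipy.optimize import linprog

nodes = [1, 2, 3]
arcs  = [(1, 2), (2, 3), (1, 3)]               # shared by both commodities
cost  = np.array([1.0, 1.0, 3.0])              # c_ij per unit of flow
cap   = np.array([10.0, 10.0, 5.0])            # u_ij
# Net supplies b_i^k: commodity 1 ships 8 units 1 -> 3, commodity 2 ships 6 units 2 -> 3.
supply = [{1: 8, 2: 0, 3: -8},
          {1: 0, 2: 6, 3: -6}]

nA, nK = len(arcs), len(supply)
inc = np.zeros((len(nodes), nA))               # node-arc incidence (out - in)
for col, (i, j) in enumerate(arcs):
    inc[nodes.index(i), col] += 1.0            # arc leaves node i
    inc[nodes.index(j), col] -= 1.0            # arc enters node j

# Variables: flows of each commodity on each arc, stacked commodity by commodity.
c = np.tile(cost, nK)
A_eq = np.kron(np.eye(nK), inc)                # flow conservation per commodity
b_eq = np.concatenate([[s[i] for i in nodes] for s in supply])
A_ub = np.tile(np.eye(nA), (1, nK))            # total flow on each arc <= capacity
b_ub = cap

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * nA * nK, method="highs")
print(res.fun)                 # minimum total cost (26.0 for this data)
print(res.x.reshape(nK, nA))   # per-commodity arc flows
```

Dropping the second commodity reduces this to the single-commodity minimum cost network flow LP stated at the start of this part.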
Pattern classification (linear classifier)
Given m objects with feature vectors a_i ∈ R^n, i = 1, ..., m, each belonging to one of two classes, and we know the class of each sample object. We want a criterion that determines the class of a new object from its feature vector: find a vector (x, x_{n+1}) ∈ R^{n+1}, with x ∈ R^n, such that if i ∈ S then a_i'x ≥ x_{n+1}, and if i ∉ S then a_i'x < x_{n+1} (if this is possible).

So we need a feasible solution (x, x_{n+1}) satisfying
  a_i'x ≥ x_{n+1}, i ∈ S,
  a_i'x < x_{n+1}, i ∉ S,
for all sample objects i. Is this a linear programming problem? (There is no objective function, and there are strict inequalities in the constraints.)

Is strict inequality allowed in LP? Consider min x subject to x > 0: there is no minimum point, so strict inequalities are not allowed.
However, if the system has a feasible solution (x, x_{n+1}), we can make the difference between the two sides as large as we like by scaling the solution to M(x, x_{n+1}) for M > 0 large. Hence, if the system has a solution at all, it has a solution for which the difference is at least 1.
Remedy: use
  a_i'x ≥ x_{n+1}, i ∈ S,
  a_i'x ≤ x_{n+1} - 1, i ∉ S.
This is an important problem in data mining, with applications in target marketing, bankruptcy prediction, medical diagnosis, process monitoring, ...

Piecewise linear convex objective functions
Some problems involving nonlinear functions can still be modeled as LPs.
Definition: a function f : R^n → R is convex if for all x, y ∈ R^n and all λ ∈ [0, 1], f(λx + (1 - λ)y) ≤ λf(x) + (1 - λ)f(y) (the domain may be restricted). f is called concave if -f is convex. (Figure: the line segment joining (x, f(x)) and (y, f(y)) in R^{n+1} does not lie below the graph of f.)

Definition: for x, y ∈ R^n and λ_1, λ_2 ≥ 0 with λ_1 + λ_2 = 1, the point λ_1 x + λ_2 y is a convex combination of x and y. More generally, Σ_{i=1}^k λ_i x_i, where Σ_{i=1}^k λ_i = 1 and λ_i ≥ 0, i = 1, ..., k, is a convex combination of the points x_1, ..., x_k.
Definition: a set S ⊆ R^n is convex if for any x, y ∈ S we have λx + (1 - λ)y ∈ S for all λ ∈ [0, 1]. Note that λ_1 x + λ_2 y = λ_1 x + (1 - λ_1)y = y + λ_1(x - y), so the condition says that the line segment joining x and y lies in S. (Figure: the segment from y (λ_1 = 0) to x (λ_1 = 1) along the direction x - y.)

Relation between convex functions and convex sets
Definition: for f : R^n → R, define the epigraph of f as epi(f) = { (x, α) ∈ R^{n+1} : α ≥ f(x) }. The previous definition of a convex function is equivalent to epi(f) being a convex set. When dealing with convex functions, we frequently consider epi(f) to exploit the properties of convex sets, and we consider operations on functions that preserve convexity and operations on sets that preserve convexity.

Example: f(x) = max_{i=1,...,m} (c_i'x + d_i), with c_i ∈ R^n, d_i ∈ R, is a maximum of affine functions and is called a piecewise linear convex function. (Figure: the graph of f is the upper envelope of the lines c_i'x + d_i.)

Theorem: let f_1, ..., f_m : R^n → R be convex functions. Then f(x) = max_{i=1,...,m} f_i(x) is also convex.
Proof: f(λx + (1 - λ)y) = max_i f_i(λx + (1 - λ)y) ≤ max_i (λf_i(x) + (1 - λ)f_i(y)) ≤ max_i λf_i(x) + max_i (1 - λ)f_i(y) = λf(x) + (1 - λ)f(y).

Minimizing a piecewise linear convex function:
  minimize max_{i=1,...,m} (c_i'x + d_i)  subject to Ax ≥ b
is equivalent to the LP
  minimize z  subject to z ≥ c_i'x + d_i, i = 1, ..., m, Ax ≥ b.
(A numerical sketch of this reformulation appears at the end of this part.)

Q: What can we do about finding the maximum of a piecewise linear convex function? What about the maximum of a piecewise linear concave function (which can be written as the minimum of affine functions)? The minimum of a piecewise linear concave function?

A convex function has the nice property that a local minimum point is a global minimum point (when the domain is R^n or a convex set). Hence finding the minimum of a convex function over a convex set is usually easy, but finding the maximum of a convex function is difficult: basically, we need to examine all local maximum points. Similarly, finding the maximum of a concave function is easy, but finding the minimum of a concave function is difficult.

In constraints, f(x) ≤ h with f(x) = max_{i=1,...,m} (f_i'x + g_i) a piecewise linear convex function is equivalent to f_i'x + g_i ≤ h, i = 1, ..., m.
Q: What about a constraint f(x) ≥ h? Can it be modeled as an LP?
Definition: for a convex function f : R^n → R and α ∈ R, the set C = { x : f(x) ≤ α } is called a level set of f. A level set of a convex function is a convex set, and the solution set of an LP is convex (easy to check), so a non-convex solution set cannot be modeled as an LP; this is why a constraint f(x) ≥ h generally cannot be handled.
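The following is a minimal sketch, not part of the original notes, of the epigraph reformulation min z, z ≥ c_i'x + d_i, Ax ≥ b, solved with SciPy's linprog. The three affine pieces and the single constraint are made up for illustration.

```python
# Minimize max_i (c_i'x + d_i) subject to Ax >= b via an auxiliary variable z.
import numpy as np
from scipy.optimize import linprog

C = np.array([[ 1.0,  0.0],    # rows are the c_i'
              [-1.0,  1.0],
              [ 0.0, -2.0]])
d = np.array([0.0, 2.0, 3.0])
A = np.array([[1.0, 1.0]])     # constraint x1 + x2 >= 1
b = np.array([1.0])

m, n = C.shape
# Variables are (x, z).  All constraints written as "<=" for linprog:
#   c_i'x - z <= -d_i   and   -Ax <= -b.
A_ub = np.vstack([np.hstack([C, -np.ones((m, 1))]),
                  np.hstack([-A, np.zeros((A.shape[0], 1))])])
b_ub = np.concatenate([-d, -b])
obj = np.zeros(n + 1)
obj[-1] = 1.0                  # minimize z

res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 1), method="highs")
x, z = res.x[:n], res.x[-1]
print(x, z, max(C @ x + d))    # z equals the piecewise linear objective at x (1.4)
```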
Problems involving absolute values
Consider
  minimize Σ_{i=1}^n c_i |x_i|  subject to Ax ≥ b  (assume c_i ≥ 0).
Formulations more direct than going through a general piecewise linear convex function are possible:
(1) minimize Σ_i c_i z_i
    subject to Ax ≥ b, x_i ≤ z_i and -x_i ≤ z_i, i = 1, ..., n.
(2) minimize Σ_i c_i (x_i+ + x_i-)
    subject to Ax+ - Ax- ≥ b, x+, x- ≥ 0.
(In (2) we want x_i+ = x_i if x_i ≥ 0 and x_i- = -x_i if x_i < 0, with x_i+ x_i- = 0, i.e., at most one of x_i+, x_i- positive in an optimal solution; c_i ≥ 0 guarantees that.)

Data Fitting
Regression analysis using the absolute value function: given m data points (a_i, b_i), i = 1, ..., m, with a_i ∈ R^n and b_i ∈ R, we want to find x ∈ R^n that predicts the result b from a via the function b = a'x. If we want the x that minimizes the largest prediction error max_i |b_i - a_i'x|, we solve
  minimize z
  subject to b_i - a_i'x ≤ z, i = 1, ..., m,
             -b_i + a_i'x ≤ z, i = 1, ..., m.
An alternative criterion is to minimize Σ_{i=1,...,m} |b_i - a_i'x|, which becomes
  minimize z_1 + ... + z_m
  subject to b_i - a_i'x ≤ z_i, i = 1, ..., m,
             -b_i + a_i'x ≤ z_i, i = 1, ..., m.
A quadratic error function cannot be modeled as an LP, but it can be handled by calculus (least squares has a closed-form solution).
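Here is a minimal sketch, not part of the original notes, of the second criterion (least absolute deviations) solved as the LP above with SciPy's linprog; the data points are randomly generated for illustration.

```python
# Least absolute deviations fit: minimize sum_i |b_i - a_i'x|.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 30, 2
A = rng.normal(size=(m, n))
b = A @ np.array([1.5, -0.5]) + 0.1 * rng.normal(size=m)   # noisy synthetic data

# Variables are (x, z_1, ..., z_m); minimize sum z_i subject to
#   b_i - a_i'x <= z_i   and   -b_i + a_i'x <= z_i.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.vstack([np.hstack([-A, -np.eye(m)]),
                  np.hstack([ A, -np.eye(m)])])
b_ub = np.concatenate([-b, b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m, method="highs")
x_fit = res.x[:n]
print(x_fit, res.fun)   # fitted coefficients and the total absolute error
```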
Separable piecewise linear objective functions
A special case of a piecewise linear objective function is the separable piecewise linear objective function: f : R^n → R is called separable if f(x) = f_1(x_1) + f_2(x_2) + ... + f_n(x_n).
(Figure: a piecewise linear f_i(x_i) with breakpoints a_1 < a_2 < a_3 and increasing slopes c_1 < c_2 < c_3 < c_4 on the successive segments.)
Express x_i in the constraints as x_i = x_i1 + x_i2 + x_i3 + x_i4, where 0 ≤ x_i1 ≤ a_1, 0 ≤ x_i2 ≤ a_2 - a_1, 0 ≤ x_i3 ≤ a_3 - a_2, 0 ≤ x_i4, and in the objective function use min c_1 x_i1 + c_2 x_i2 + c_3 x_i3 + c_4 x_i4. Since we solve a minimization problem and the slopes are increasing, it is guaranteed that x_ik > 0 in an optimal solution implies that the variables x_ij with j < k are at their upper bounds.

Graphical representation and solution
Let a ∈ R^n, b ∈ R. Geometric intuition for the solution sets of
  { x : a'x = 0 }, { x : a'x ≥ 0 }, { x : a'x ≤ 0 },
  { x : a'x = b }, { x : a'x ≥ b }, { x : a'x ≤ b }.
(Figure, geometry in 2-D: the line { x : a'x = 0 } through the origin, with the half-space { x : a'x ≥ 0 } on the side of the vector a and { x : a'x ≤ 0 } on the other side.)
Let z be a (any) point satisfying a'x = b. Then
  { x : a'x = b } = { x : a'x = a'z } = { x : a'(x - z) = 0 },
hence x - z = y, where y is any solution of a'y = 0, i.e., x = y + z. Similarly for { x : a'x ≥ b } and { x : a'x ≤ b }. (Figure: { x : a'x = b } is the translate of { x : a'x = 0 } through the point z.)
Example: min c_1 x_1 + c_2 x_2 subject to -x_1 + x_2 ≤ 1, x_1 ≥ 0, x_2 ≥ 0. (Figure: the feasible region in the (x_1, x_2) plane with objective directions c = (1, 0), c = (1, 1), c = (0, 1), c = (-1, -1) and the level sets { x : c'x = z }; depending on c, the optimum is a unique vertex, an entire edge, or the problem is unbounded.)
Representing a more complex solution set in 2-D: with n variables, m equations (whose coefficient vectors are linearly independent), nonnegativity constraints, and n - m = 2, the feasible set can be drawn in the plane. (Figure: the feasible region drawn with the lines x_1 = 0, x_2 = 0, x_3 = 0.)

LP Model Formulation
- Decision variables: mathematical symbols representing levels of activity of an operation.
- Objective function: a linear relationship reflecting the objective of an operation; the most frequent objective of business firms is to maximize profit, while the most frequent objective of individual operational units (such as a production or packaging department) is to minimize cost.
- Constraint: a linear relationship representing a restriction on decision making.

General form:
  max/min z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
  subject to:
    a_11 x_1 + a_12 x_2 + ... + a_1n x_n (≤, =, ≥) b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n (≤, =, ≥) b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n (≤, =, ≥) b_m
where x_j are the decision variables, b_i the constraint levels, c_j the objective function coefficients, and a_ij the constraint coefficients.

LP Model: Example
Resource requirements per unit of product:

  Product   Labor (hr/unit)   Clay (lb/unit)   Revenue ($/unit)
  Bowl            1                 4                 40
  Mug             2                 3                 50

There are 40 hours of labor and 120 pounds of clay available each day.
Decision variables: x_1 = number of bowls to produce, x_2 = number of mugs to produce.

LP formulation:
  Maximize Z = 40 x_1 + 50 x_2
  subject to x_1 + 2 x_2 ≤ 40 hr (labor constraint)
             4 x_1 + 3 x_2 ≤ 120 lb (clay constraint)
             x_1, x_2 ≥ 0
The solution is x_1 = 24 bowls and x_2 = 8 mugs, with revenue Z = $1,360.

Graphical Solution Method
1. Plot the model constraints on a set of coordinates in a plane.
2. Identify the feasible solution space on the graph, where all constraints are satisfied simultaneously.
3. Plot the objective function to find the point on the boundary of this space that maximizes (or minimizes) the value of the objective function.
(Figure: the constraints x_1 + 2 x_2 ≤ 40 and 4 x_1 + 3 x_2 ≤ 120 plotted in the (x_1, x_2) plane; the feasible region is the area common to both constraints.)

Computing optimal values: the optimal point is at the intersection of the two constraint lines x_1 + 2 x_2 = 40 and 4 x_1 + 3 x_2 = 120. Multiplying the first equation by 4 gives 4 x_1 + 8 x_2 = 160; subtracting the second gives 5 x_2 = 40, so x_2 = 8 and x_1 = 40 - 2(8) = 24. Then Z = $40(24) + $50(8) = $1,360.

Extreme corner points:
  A: x_1 = 0 bowls, x_2 = 20 mugs, Z = $1,000
  B: x_1 = 24 bowls, x_2 = 8 mugs, Z = $1,360
  C: x_1 = 30 bowls, x_2 = 0 mugs, Z = $1,200
If the objective function is changed to Z = 70 x_1 + 20 x_2, the optimal point moves to C: x_1 = 30 bowls, x_2 = 0 mugs, Z = $2,100.

Minimization Problem
Chemical contribution per bag:

  Brand       Nitrogen (lb/bag)   Phosphate (lb/bag)
  Gro-plus          2                    4
  Crop-fast         4                    3

  Minimize Z = 6 x_1 + 3 x_2
  subject to 2 x_1 + 4 x_2 ≥ 16 lb of nitrogen
             4 x_1 + 3 x_2 ≥ 24 lb of phosphate
             x_1, x_2 ≥ 0
(Figure, graphical solution: the optimal corner point is x_1 = 0 bags of Gro-plus, x_2 = 8 bags of Crop-fast, Z = $24.)

Simplex Method
- A mathematical procedure for solving linear programming problems according to a set of steps.
- Slack variables are added to ≤ constraints to represent unused resources:
    x_1 + 2 x_2 + s_1 = 40 hours of labor
    4 x_1 + 3 x_2 + s_2 = 120 lb of clay
- Surplus variables are subtracted from ≥ constraints to represent the excess above a resource requirement; for example, 2 x_1 + 4 x_2 ≥ 16 is transformed into 2 x_1 + 4 x_2 - s_1 = 16.
- Slack/surplus variables have a zero coefficient in the objective function: Z = 40 x_1 + 50 x_2 + 0 s_1 + 0 s_2.
(Figures: solution points with slack variables; solution points with surplus variables.)

Solving LP Problems with Excel
- Click on "Tools" to invoke "Solver." The objective function and the constraint left-hand sides are entered as cell formulas (e.g. =C6*B10+D6*B11 and =C7*B10+D7*B11), with the decision variables in cells B10 (bowls, x_1) and B11 (mugs, x_2).
- After all parameters and constraints have been input (click "Add" to insert the constraints C6*B10+D6*B11 ≤ 40 and C7*B10+D7*B11 ≤ 120), click on "Solve."
(Figures: Excel Solver screenshots of the model setup and its solution.)
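For readers without Excel, the same bowls-and-mugs LP can be solved with SciPy's linprog; this sketch is not part of the original notes. linprog minimizes, so the objective is negated and the reported optimal value negated back.

```python
# The bowls-and-mugs production LP solved with SciPy instead of Excel Solver.
import numpy as np
from scipy.optimize import linprog

c = np.array([40.0, 50.0])            # revenue per bowl, per mug
A_ub = np.array([[1.0, 2.0],          # labor hours
                 [4.0, 3.0]])         # pounds of clay
b_ub = np.array([40.0, 120.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
print(res.x)      # [24.  8.]  -> 24 bowls, 8 mugs
print(-res.fun)   # 1360.0     -> maximum revenue Z = $1,360
```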
Sensitivity Analysis
(Figures: Solver sensitivity reports, including the sensitivity range for labor hours and the sensitivity range for bowls.)

Assignment Problem: Example
A law firm maintains a large staff of young attorneys who hold the title of junior partner. The firm, concerned with the effective utilization of this personnel resource, seeks some objective means of making lawyer-to-client assignments. On March 1, four new clients seeking legal assistance came to the firm. The current staff is overloaded, and the firm identifies four junior partners who, although busy, could possibly be assigned to the cases. Each young lawyer can handle at most one new client, and each lawyer differs in skills and specialty interests. Seeking to maximize the overall effectiveness of the new client assignments, the firm draws up the following table, in which it rates the estimated effectiveness (on a scale of 1 to 9) of each lawyer on each new case.

  Lawyer    Divorce   Corporate merger   Embezzlement   Exhibitionism
  Adam         6              2                8               5
  Brook        9              3                5               8
  Carter       4              8                3               4
  Darwin       6              7                6               4

Solution. Decision variables:
  X_ij = 1 if attorney i is assigned to case j, and 0 otherwise,
where i = 1, 2, 3, 4 stands for Adam, Brook, Carter, and Darwin, and j = 1, 2, 3, 4 stands for divorce, merger, embezzlement, and exhibitionism, respectively. The LP formulation is:

  Max Z = 6 X11 + 2 X12 + 8 X13 + 5 X14
        + 9 X21 + 3 X22 + 5 X23 + 8 X24
        + 4 X31 + 8 X32 + 3 X33 + 4 X34
        + 6 X41 + 7 X42 + 6 X43 + 4 X44
  subject to
    X11 + X21 + X31 + X41 = 1  (divorce case)
    X12 + X22 + X32 + X42 = 1  (merger)
    X13 + X23 + X33 + X43 = 1  (embezzlement)
    X14 + X24 + X34 + X44 = 1  (exhibitionism)
    X11 + X12 + X13 + X14 = 1  (Adam)
    X21 + X22 + X23 + X24 = 1  (Brook)
    X31 + X32 + X33 + X34 = 1  (Carter)
    X41 + X42 + X43 + X44 = 1  (Darwin)
The optimal solution is X13 = X24 = X32 = X41 = 1, with all other variables equal to zero: Adam takes the embezzlement case, Brook the exhibitionism case, Carter the merger, and Darwin the divorce case.
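The assignment LP above can be solved directly with SciPy's linprog; the sketch below is not part of the original notes. Because the constraint matrix of the assignment problem is totally unimodular, the LP optimum is already 0-1 and matches the stated solution.

```python
# The lawyer-to-client assignment LP solved with linprog.
import numpy as np
from scipy.optimize import linprog

# Effectiveness ratings: rows = Adam, Brook, Carter, Darwin;
# columns = divorce, merger, embezzlement, exhibitionism.
E = np.array([[6, 2, 8, 5],
              [9, 3, 5, 8],
              [4, 8, 3, 4],
              [6, 7, 6, 4]], dtype=float)

n = 4
# Variable X_ij is flattened row by row into a length-16 vector.
row_sums = np.kron(np.eye(n), np.ones((1, n)))   # each lawyer handles exactly one case
col_sums = np.kron(np.ones((1, n)), np.eye(n))   # each case gets exactly one lawyer
A_eq = np.vstack([row_sums, col_sums])
b_eq = np.ones(2 * n)

res = linprog(-E.flatten(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n * n, method="highs")
print(res.x.reshape(n, n).round())   # assignment matrix: X13 = X24 = X32 = X41 = 1
print(-res.fun)                      # total effectiveness = 30.0
```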
Transportation Problem: Example
The Top Speed Bicycle Co. manufactures and markets a line of 10-speed bicycles nationwide. The firm has final assembly plants in two cities in which labor costs are low, New Orleans and Omaha. Its three major warehouses are located near the larger market areas of New York, Chicago, and Los Angeles. The sales requirements for next year are 10,000 bicycles at the New York warehouse, 8,000 bicycles at the Chicago warehouse, and 15,000 bicycles at the Los Angeles warehouse. The factory capacity at each location is limited: New Orleans can assemble and ship 20,000 bicycles, and the Omaha plant can produce 15,000 bicycles per year. The cost of shipping one bicycle from each factory to each warehouse differs; the unit shipping costs are:

                New York   Chicago   Los Angeles
  New Orleans      $2         $3         $5
  Omaha            $3         $1         $4

The company wishes to develop a shipping schedule that minimizes its total annual transportation cost.

Solution. To formulate this problem as an LP, we again employ double-subscripted variables: the first subscript represents the origin (factory) and the second the destination (warehouse), so X_ij is the number of bicycles shipped from origin i to destination j. We therefore have six decision variables:
  X11 = number of bicycles shipped from New Orleans to New York
  X12 = number of bicycles shipped from New Orleans to Chicago
  X13 = number of bicycles shipped from New Orleans to Los Angeles
  X21 = number of bicycles shipped from Omaha to New York
  X22 = number of bicycles shipped from Omaha to Chicago
  X23 = number of bicycles shipped from Omaha to Los Angeles

  Min Z = 2 X11 + 3 X12 + 5 X13 + 3 X21 + X22 + 4 X23
  subject to
    X11 + X21 = 10000        (New York demand)
    X12 + X22 = 8000         (Chicago demand)
    X13 + X23 = 15000        (Los Angeles demand)
    X11 + X12 + X13 ≤ 20000  (New Orleans supply)
    X21 + X22 + X23 ≤ 15000  (Omaha supply)
    X_ij ≥ 0 for i = 1, 2 and j = 1, 2, 3
The optimal solution is X11 = 10000, X12 = 0, X13 = 8000, X21 = 0, X22 = 8000, X23 = 7000, with Z = $96,000.

Integer Optimization
- Many decisions and variables in real cases are inherently integer. In our example, the number of products to be manufactured cannot be fractional; it has to be an integer.
- Many real problems have "choosing an option" in their structure, for example:
  - which stock to invest in,
  - which route to travel,
  - which arc of a graph to choose.

Integer Variables
- The only structural difference between LP problems and IP problems is the definition of the decision variables. However, this change fundamentally alters the characteristics of the problem.
- LP problems have a convex set as their feasible region, while IP problems have a set of integer vectors as their feasible region, which is non-convex.
- Dealing with non-convex optimization problems is much harder than dealing with convex optimization problems.

How to Solve IP Problems
- LP relaxation: the LP relaxation of an IP problem is obtained by allowing the variables to take on fractional values.
- The idea for solving IP problems is to solve a series of specific LP problems that guide us towards an integer feasible solution. This method is called branch-and-bound: we branch on fractional values and force them down to the lower or up to the upper integer value.
- It is important to note that rounding the LP solution will not necessarily give the optimal, or even a feasible, solution (a small numerical illustration is given at the end of these notes).

Example: recall that we got 210 as our objective function value by rounding down the LP solution; if we solve the same problem as an IP, the optimal solution is different. (The figure with the IP solution is not reproduced here.)

Important Characteristics of IP Problems
- IP problems are generally much more complicated than LP problems.
- Many classes of IP are known to be NP-hard, so it can take computers a very long time to find the optimal solution (years, for problems of realistic size).
- There has been extensive research on ways to tackle these problems:
  - cutting planes,
  - branch-and-cut,
  - column generation techniques,
  - branch-and-price,
  - meta-heuristics.
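The "210" example referred to above is not reproduced in these notes, so the following sketch uses a different, made-up two-variable problem to illustrate the same point: rounding the LP relaxation down is not necessarily optimal. It assumes SciPy 1.9 or later, where linprog accepts an integrality argument.

```python
# A tiny IP showing that rounding the LP relaxation can be far from optimal:
#   max x1 + 5 x2   s.t.  x1 + 10 x2 <= 20,  x1 <= 2,  x1, x2 >= 0 integer
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 5.0])
A_ub = np.array([[1.0, 10.0],
                 [1.0,  0.0]])
b_ub = np.array([20.0, 2.0])
bounds = [(0, None), (0, None)]

relax = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(relax.x, -relax.fun)                        # LP relaxation: x = (2, 1.8), value 11
print(np.floor(relax.x), c @ np.floor(relax.x))   # rounded down: (2, 1), value 7

ip = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
             integrality=np.ones(2), method="highs")
print(ip.x, -ip.fun)                              # IP optimum: x = (0, 2), value 10 > 7
```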