
42101 - Introduction to Operations Research

Solutions
Christian Keilstrup Ingwersen
Morten Bondorf Gerdes
Richard Lusby
Stefan Røpke
Updated: November 22, 2022

Danmarks Tekniske Universitet


Contents

1 Introduction to Linear Programming
  1.1 Exercise 1
  1.2 Exercise 2
  1.3 Exercise 3
  1.4 Exercise 4

2 Linear Programming: Simplex method
  2.1 Exercise 1
  2.2 Exercise 2
  2.3 Exercise 3
  2.4 Exercise 4
  2.5 Exercise 5
  2.6 Exercise 6

3 Linear Programming: Two phase method, basic graph theory
  3.1 Exercise 1
  3.2 Exercise 2
  3.3 Exercise 3
  3.4 Exercise 4
  3.5 Exercise 5

4 Linear Programming: Simplex on matrix form
  4.1 Exercise 1
    4.1.1 First solution to exercise 1
    4.1.2 Second solution to exercise 1
  4.2 Exercise 2
  4.3 Exercise 3
  4.4 Exercise 4

5 Linear Programming: Duality and Sensitivity analysis
  5.1 Exercise 1
  5.2 Exercise 2
  5.3 Exercise 3
  5.4 Exercise 4
  5.5 Exercise 5

6 Week 6: Modeling with integer variables

7 Week 7: Modeling with Integer variables II
  7.1 Exercise 1
  7.2 Exercise 2
  7.3 Exercise 3
  7.4 Exercise 4
  7.5 Exercise 5
  7.6 Exercise 6

8 Week 8: Greedy heuristics
  8.1 Exercise 1
  8.2 Exercise 2
  8.3 Exercise 3

9 Week 9: Improvement heuristics
  9.1 Exercise 1
  9.2 Exercise 2
  9.3 Exercise 3

10 Week 10, Totally Unimodular Matrices
  10.1 Exercise 1
  10.2 Exercise 2
  10.3 Exercise 3

11 Week 11, Branch & Bound
  11.1 Exercise 1
  11.2 Exercise 2
  11.3 Exercise 3
  11.4 Exercise 4

12 Network simplex
  12.1 Week 12, Exercise 1
  12.2 Exercise 2
  12.3 Exercise 3
  12.4 Exercise 4

13 Linear Programming: Dual Simplex
  13.1 Exercise 5
  13.2 Exercise 6

1 Introduction to Linear Programming


1.1 Exercise 1
Let x1, x2, x3 represent how many liters of each kind of hot chocolate should be produced. The profit from selling the first two products is 200 kr per liter and the profit from selling the last product is 700 kr per liter. This gives us the objective function Z = 200x1 + 200x2 + 700x3, which we want to maximize.

Since Peter only has 22 bars of chocolate and premium uses 2 bars, superb 1 bar and elite 3 bars, we have the
constraint: 2x1 + x2 + 3x3 ≤ 22.

In order to produce the hot chocolate Peter also needs milk powder. Peter has 20 bags of milk powder, which
gives the constraint: x1 + 2x2 + 4x3 ≤ 20.

All the constraints related to the resource limitations are now defined, but since Peter is only able to carry 10 liters of hot chocolate we need to add a constraint limiting the total production, i.e. x1 + x2 + x3 ≤ 10.

Combining the constraints and the objective function, we have the following LP
max Z = 200x1 + 200x2 + 700x3 (1)
2x1 + x2 + 3x3 ≤ 22 (2)
x1 + 2x2 + 4x3 ≤ 20 (3)
x1 + x2 + x3 ≤ 10 (4)
x1 , x2 , x3 ≥ 0 (5)

Solving the problem in Julia, we find that the optimal solution is Z = 3640 kr and Peter should produce 5.6
liters of premium and 3.6 liters of elite hot chocolate.

To solve the problem in Julia, the following code is used


using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0 )
@variable(m, x2 >= 0 )
@variable(m, x3 >= 0 )
@objective(m, Max, 200x1 +200x2 + 700x3 )
@constraint(m, 2x1 + x2 + 3x3 <= 22 )
@constraint(m, x1 + 2x2 + 4x3 <= 20 )
@constraint(m, x1 + x2 + x3 <= 10 )
optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1))
println("x2 = ", JuMP.value(x2))
println("x2 = ", JuMP.value(x3))

1.2 Exercise 2
From the given text the use of frame parts and electrical components are seen to be the following for the two
products.

Product   Frame parts   Electrical components   Profit
   1           1                   2               1
   2           3                   2               2*

Table 1: *Any excess over 60 units of product 2 brings no profit


We now define the following decision variables:


x1 = production of product 1 (units)
x2 = production of product 2 (units)

With a profit of $1 for product 1 and $2 for product 2, the total profit is Z = x1 + 2x2, which is also our objective function.
The resources are limited, which gives us a couple of constraints. Since there are only 200 units of frame parts and product 1 uses 1 frame part per unit, while product 2 uses 3 frame parts per unit, we have the following constraint:
x1 + 3x2 ≤ 200.

Likewise, since there are only 300 units of electrical components and products 1 and 2 each use 2 components per unit, we have the following constraint: 2x1 + 2x2 ≤ 300.

Since there is no profit if more than 60 units of x2 are produced, we introduce the constraint x2 ≤ 60.

Combined this gives us the following LP

max Z = x1 + 2x2 (6)


x1 + 3x2 ≤ 200 (7)
2x1 + 2x2 ≤ 300 (8)
x2 ≤ 60 (9)
x1 , x2 ≥ 0 (10)

When using the graphical method, the optimal solution is seen to lie at the intersection of the first two constraints, resulting in an objective value of Z∗ = $175, where 125 units of product 1 and 25 units of product 2 are produced.
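The graphical result can be cross-checked with a small JuMP model in the same style as the code for Exercise 1. This is a minimal sketch, assuming JuMP and GLPK are installed.

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0)
@variable(m, x2 >= 0)
@objective(m, Max, x1 + 2x2)
@constraint(m, x1 + 3x2 <= 200)   # frame parts
@constraint(m, 2x1 + 2x2 <= 300)  # electrical components
@constraint(m, x2 <= 60)          # no profit beyond 60 units of product 2
optimize!(m)
# Should report Z = 175 with x1 = 125 and x2 = 25, matching the graphical solution
println("Objective value: ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1), "  x2 = ", JuMP.value(x2))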

1.3 Exercise 3
The money available for investment is 6000$ and the time available is 600 hours. The description of the situation
can be seen in Table 2.

Table 2: Description of investment problem

Project 1 Project 2 Limitations


Profit 4500 4500
Time 400 500 600
Money 5000 4000 6000


Based on the exercise and Table 2 a linear programming model is formulated.


Let x1 be the investment share in project 1 and let x2 be the investment share in project 2. The profit of investing fully in project 1 is 4500$, and the same holds for project 2, so the total profit is Z = 4500x1 + 4500x2.
The money needed for investing fully in project 1 is 5000$ and for project 2 it is 4000$. It is not possible to invest more money than you have, and therefore you are bounded by 6000$. This can be formulated as the constraint 5000x1 + 4000x2 ≤ 6000.

Since we only have 600 hours available, we also need to constrain the time spent on the projects, i.e. 400x1 + 500x2 ≤ 600.
Since both x1 and x2 are fractions between 0 and 1 it holds that x1 ∈ [0, 1] and x2 ∈ [0, 1].
The final model is given by

max Z = 4500x1 + 4500x2 (11)


s.t. 400x1 + 500x2 ≤ 600 (12)
5000x1 + 4000x2 ≤ 6000 (13)
x1 , x2 ∈ [0, 1] (14)

The following plot shows the objective function as the blue line, and the constraints as the red and green line.

Figure 1: Graphical solution of investment problem

It can be seen that the optimal value is located in the intersection between the two constraints. The optimal
value is found by solving two equations with two unknowns.

400x1 + 500x2 = 600 and 5000x1 + 4000x2 = 6000 ⇐⇒ (15)


x1 = x2 = 2/3 (16)

Putting the optimal values of the decision variables into the objective function gives the optimal value Z = 4500 · 2/3 + 4500 · 2/3 = 6000.
This means that in order to maximize your profit you should invest 2/3 of the total investment in project 1 and 2/3 of the total investment in project 2, which will result in a profit of 6000$.
Solving the problem with Julia requires the following piece of code

using JuMP, GLPK


m = Model(GLPK.Optimizer)


@variable(m,0<=x1<=1)
@variable(m,0<=x2<=1)
@objective(m, Max, 4500*x1 + 4500*x2)
@constraint(m,400*x1 + 500*x2 <= 600)
@constraint(m,5000*x1 + 4000*x2 <= 6000)

optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1))
println("x2 = ", JuMP.value(x2))

1.4 Exercise 4
Based on the table in the description a linear program is formulated. The objective is to minimize the cost of
the nutrition plan. Let x1 be the number of portions of steak per day and let x2 be the number of portions
of potatoes per day. The model can be formulated directly from the table.

min Z = 4x1 + 2x2 (17)


s.t. 5x1 + 15x2 ≥ 50 (18)
20x1 + 5x2 ≥ 40 (19)
15x1 + 2x2 ≤ 60 (20)
x1 ≥ 0, x2 ≥ 0 (21)

The following plot shows the objective function as the orange line, and the constraints as the blue, red and
green line.

Figure 2: Graphical solution to nutrition problem


It can be seen that the optimal solution lies at the intersection between the red line and the green line. The green line represents constraint number 1 and the red line represents constraint number 2. The best solution is found by solving two equations with two unknowns.

5x1 + 15x2 = 50 and 20x1 + 5x2 = 40 ⇐⇒ (22)


x1 = 14/11, x2 = 32/11 (23)

The solution gives the minimum cost Z∗ = 120/11. This means that Ralph has to eat 14/11 portions of steak a day and 32/11 portions of potatoes a day, which will cost him 120/11 $.
Solving the problem with Julia requires the following piece of code

using JuMP, GLPK


m = Model(GLPK.Optimizer)

@variable(m,x1>=0)
@variable(m,x2>=0)
@objective(m, Min, 4*x1 + 2*x2)
@constraint(m,5*x1 + 15*x2 >= 50)
@constraint(m,20*x1 + 5*x2 >= 40)
@constraint(m,15*x1 + 2*x2 <= 60)

optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1))
println("x2 = ", JuMP.value(x2))


2 Linear Programming: Simplex method


2.1 Exercise 1
The problem is rewritten in augmented form by introducing slack variables

Max Z = 2x1 + x2 (24)


s.t. x1 + x2 + x3 = 40 (25)
4x1 + x2 + x4 = 100 (26)
x1 , x2 , x3 , x4 ≥ 0 (27)

The first table of the simplex method is then established

Z x1 x2 x3 x4 RHS
Z 1 -2 -1 0 0 0
x3 0 1 1 1 0 40
x4 0 4 1 0 1 100

The pivot column is the column with the most negative coefficient in the objective row. In this case it is x1 .
The minimum ratio test shows that x4 is the leaving variable.

Z x1 x2 x3 x4 RHS ratio
Z 1 -2 -1 0 0 0
x3 0 1 1 1 0 40 40
x4 0 4 1 0 1 100 25

To make the new tableau legal, row 2 (R2) is divided by 4. Row 2 is then added two times to row 0 (R0), and subtracted one time from row 1 (R1).

   Z  x1    x2   x3    x4   RHS  ratio
Z  1   0  -1/2    0   1/2    50
x3 0   0   3/4    1  -1/4    15     20
x1 0   1   1/4    0   1/4    25    100

x3 is the leaving variable while x2 is the new basis variable.


Operations to make the new tableau legal: R1new = R1/(3/4), R0new = R0 + (1/2) · R1new, R2new = R2 − (1/4) · R1new

   Z  x1   x2    x3    x4   RHS
Z  1   0    0   2/3   1/3    60
x2 0   0    1   4/3  -1/3    20
x1 0   1    0  -1/3   1/3    20

Since there are no negative coefficients in the objective row (R0), the optimal solution is found. The optimal
solution is x1 = x2 = 20, corresponding to Z ∗ = 60
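As a quick sanity check of the hand iterations, the same LP can be passed to a solver. This is a minimal sketch, assuming JuMP and GLPK are available.

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0)
@variable(m, x2 >= 0)
@objective(m, Max, 2x1 + x2)
@constraint(m, x1 + x2 <= 40)
@constraint(m, 4x1 + x2 <= 100)
optimize!(m)
# Expect Z = 60 with x1 = x2 = 20, as found by the tableau iterations above
println(JuMP.objective_value(m), "  ", JuMP.value(x1), "  ", JuMP.value(x2))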


The feasible solution space is indicated on the figure below. Simplex starts in the point (x1, x2) = (0, 0) (A on the figure). In the first iteration it moves to the point (x1, x2) = (25, 0) (B on the figure). In the second iteration we reach the point (x1, x2) = (20, 20), which is optimal (C on the figure).

2.2 Exercise 2
x2 should leave the basis. We only have to consider x2, x3, x4 as these are the ones with a coefficient greater than zero in the column of x1. Of these, x2 has ratio 0/3 = 0, which is minimal.

2.3 Exercise 3
The figure below shows the constraint boundaries and the feasible solution space (the darkest blue area). Every intersection between two constraint boundaries is a corner-point solution. If a corner-point solution is inside the feasible area then it is also a corner-point feasible solution. On the figure below the corner-point feasible solutions are indicated with white points and the corner-point solutions are black. There are 4 corner-point feasible solutions and 10 corner-point solutions (remember that a corner-point feasible solution is also a corner-point solution).


2.4 Exercise 4
The problem is rewritten with only non-negative variables by introducing x1' = x1 + 7:

max Z = x1' + 3x2 − 7 (28)

s.t. x1' + 2x2 ≤ 7 (29)
x1', x2 ≥ 0 (30)

We write this LP in augmented form and insert it into a tableau. We pivot x2 into the basis and then the optimality test shows that we are done.

-----------------------------------------------
b.v. | eq. | x1’ x2 x3 | RHS
-----------------------------------------------
| 0 | -1.00 -3.00 0.00 | -7.00
x3 | 1 | 1.00 2.00 1.00 | 7.00
-----------------------------------------------
b.v. | eq. | x1’ x2 x3 | RHS
-----------------------------------------------
| 0 | 0.50 0.00 1.50 | 3.50
x2 | 1 | 0.50 1.00 0.50 | 3.50
This gives us the solution x1' = 0, x2 = 3.5 and Z = 3.5, but remember that this is the solution to the rewritten problem. The solution to the original problem is:

x1 = x1' − 7 = 0 − 7 = −7, x2 = 3.5, Z = 3.5
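The substitution can also be checked numerically by solving the rewritten LP with JuMP and mapping the solution back. This is a minimal sketch; the variable name x1p stands for x1' and is not from the original text.

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1p >= 0)   # x1p represents x1' = x1 + 7
@variable(m, x2 >= 0)
@objective(m, Max, x1p + 3x2 - 7)
@constraint(m, x1p + 2x2 <= 7)
optimize!(m)
# Recover the original variable: x1 = x1' - 7; expect x1 = -7, x2 = 3.5, Z = 3.5
println("Z = ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1p) - 7, "  x2 = ", JuMP.value(x2))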


2.5 Exercise 5
Since x2 is unrestricted we write it as x2 = x2+ − x2- with x2+ ≥ 0 and x2- ≥ 0 and substitute this into the LP:

min Z = x1 + 2(x2+ − x2-)
subject to
−x1 − (x2+ − x2-) ≤ 5
2x1 + (x2+ − x2-) ≤ 6
x1, x2+, x2- ≥ 0

We have to remember to change minimization to maximization and get:

max −Z = −x1 − 2x2+ + 2x2-
subject to
−x1 − x2+ + x2- ≤ 5
2x1 + x2+ − x2- ≤ 6
x1, x2+, x2- ≥ 0

We then solve this LP using the simplex algorithm:


-----------------------------------------------------------------
b.v. | eq. | x1 x2+ x2- x3 x4 | RHS
-----------------------------------------------------------------
| 0 | 1.00 2.00 -2.00 0.00 0.00 | 0.00
x3 | 1 | -1.00 -1.00 1.00 1.00 0.00 | 5.00
x4 | 2 | 2.00 1.00 -1.00 0.00 1.00 | 6.00
-----------------------------------------------------------------
b.v. | eq. | x1 x2+ x2- x3 x4 | RHS
-----------------------------------------------------------------
| 0 | -1.00 0.00 0.00 2.00 0.00 | 10.00
x2- | 1 | -1.00 -1.00 1.00 1.00 0.00 | 5.00
x4 | 2 | 1.00 0.00 0.00 1.00 1.00 | 11.00
-----------------------------------------------------------------
b.v. | eq. | x1 x2+ x2- x3 x4 | RHS
-----------------------------------------------------------------
| 0 | 0.00 0.00 0.00 3.00 1.00 | 21.00
x2- | 1 | 0.00 -1.00 1.00 2.00 1.00 | 16.00
x1 | 2 | 1.00 0.00 0.00 1.00 1.00 | 11.00
We get the solution x1 = 11, x2- = 16 and −Z∗ = 21 ⇒ Z∗ = −21. We rewrite to the original variables and get the solution x1 = 11, x2 = x2+ − x2- = −16, Z∗ = −21.
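The result can also be checked by giving the free variable directly to JuMP instead of splitting it by hand. A minimal sketch, assuming the original LP of the exercise is min x1 + 2x2 subject to −x1 − x2 ≤ 5, 2x1 + x2 ≤ 6, x1 ≥ 0 and x2 free (this is the LP the substituted model above corresponds to):

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0)
@variable(m, x2)          # free variable, no need to split into x2+ and x2- here
@objective(m, Min, x1 + 2x2)
@constraint(m, -x1 - x2 <= 5)
@constraint(m, 2x1 + x2 <= 6)
optimize!(m)
# Expect Z = -21 with x1 = 11 and x2 = -16, matching the simplex computation above
println(JuMP.objective_value(m), "  ", JuMP.value(x1), "  ", JuMP.value(x2))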
c) The drawing is shown in Figure 3. Simplex starts in point A, in iteration 1 we move to point B and in iteration 2 we move to point C, which is optimal. In the figure we have drawn the objective function for Z = −21 (dotted line). Z decreases as we move the objective line further down. This shows that the solution found by Simplex matches the figure.
d) Now we have to solve
min x1 + 2x2
subject to
−x1 − x2 ≤ 5
x1 + 2x2 ≤ 6
x1 ≥ 0
x2 ∈ R


Figure 3: Feasible solution space for the LP in task 5.a


In the same way as before we rewrite this to

min Z = x1 + 2(x2+ − x2-)
subject to
−x1 − (x2+ − x2-) ≤ 5
x1 + 2(x2+ − x2-) ≤ 6
x1, x2+, x2- ≥ 0

changing to maximization:

max −Z = −x1 − 2x2+ + 2x2-
subject to
−x1 − x2+ + x2- ≤ 5
x1 + 2x2+ − 2x2- ≤ 6
x1, x2+, x2- ≥ 0

We solve this using the simplex algorithm:

-----------------------------------------------------------------
b.v. | eq. | x1 x2+ x2- x3 x4 | RHS
-----------------------------------------------------------------
| 0 | 1.00 2.00 -2.00 0.00 0.00 | 0.00
x3 | 1 | -1.00 -1.00 1.00 1.00 0.00 | 5.00
x4 | 2 | 1.00 2.00 -2.00 0.00 1.00 | 6.00
-----------------------------------------------------------------
b.v. | eq. | x1 x2+ x2- x3 x4 | RHS
-----------------------------------------------------------------
| 0 | -1.00 0.00 0.00 2.00 0.00 | 10.00
x2- | 1 | -1.00 -1.00 1.00 1.00 0.00 | 5.00
x4 | 2 | -1.00 0.00 0.00 2.00 1.00 | 16.00

We notice that x1 should enter the basis in the last tableau. However, since we only have negative elements in its column, increasing x1 increases −Z as well as the two basic variables x2- and x4, so no basic variable ever reaches zero and stops the increase. This implies that the problem is unbounded, meaning that Z goes to −∞.

e) On the figure below we see that the solution space is unbounded. We have inserted three lines corresponding to Z = 0, Z = −6 and Z = −12. It is clear that we can make Z as negative as we want by continuing to move downwards and to the right, i.e. the objective can be improved infinitely.


2.6 Exercise 6
Before starting the simplex method, the problem is first rewritten to augmented form by introducing three slack
variables x4 , x5 and x6 . Rewriting the problem we end up with

max Z = x1 − 7x2 + 3x3 (31)


s.t. 2x1 + x2 − x3 + x4 = 4 (32)
4x1 − 3x2 + x5 = 2 (33)
− 3x1 + 2x2 + x3 + x6 = 3 (34)
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0 (35)

This is used to establish the initial table of the simplex method as seen below.

Bv Z x1 x2 x3 x4 x5 x6 Rhs Ratio
Z 1 -1 7 -3 0 0 0 0
x4 0 2 1 -1 1 0 0 4
x5 0 4 -3 0 0 1 0 2
x6 0 -3 2 1 0 0 1 3 3

Here x3 enters the basis as it is the variable with the most negative coefficient in the objective row (R0). Since the only coefficient greater than zero in the x3 column is in the fourth row, x6 is the leaving variable. The next step is to ensure the pivot element is equal to one and the other values in the pivot column are equal to zero. The pivot element is already one, so only the following operations are needed:
R1new = R1 + R3 and R0new = R0 + 3 · R3, resulting in the following table

Bv Z x1 x2 x3 x4 x5 x6 Rhs Ratio
Z 1 -10 13 0 0 0 3 9
x4 0 -1 3 0 1 0 1 7
x5 0 4 -3 0 0 1 0 2 0.5
x3 0 -3 2 1 0 0 1 3

Next, x1 enters the basis and x5 leaves the basis. To get to the next iteration, the following operations are performed:
R2new = R2/4 → R1new = R1 + R2new → R0new = R0 + 10 · R2new → R3new = R3 + 3 · R2new

Bv Z x1 x2 x3 x4 x5 x6 Rhs Ratio
Z 1 0 5.5 0 0 2.5 3 14
x4 0 0 2.25 0 1 0.25 1 7.5
x1 0 1 -0.75 0 0 0.25 0 0.5
x3 0 0 -0.25 1 0 0.75 1 4.5

There are no more negative values in the objective row (R0), which means we have found an optimal solution
Z ∗ = 14, when (x1 , x2 , x3 ) = (0.5, 0, 4.5).


3 Linear Programming: Two phase method, basic graph theory


3.1 Exercise 1
The problem is rewritten in augmented form by introducing artificial variables and surplus variables

max − Z = −2x1 − x2 − 3x3 (36)


s.t. 5x1 + 2x2 + 7x3 + x4 = 420 (37)
3x1 + 2x2 + 5x3 − x5 + x6 = 280 (38)
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0 (39)

Since the problem has artificial variables, the two-phase method is used. The first step is to minimize the sum of the artificial variables: min Z = x4 + x6 ⇐⇒ max −Z = −x4 − x6.
The constraints remain the same as in the original problem.
The first tableau is (rewriting −Z = −x4 − x6 to −Z + x4 + x6 = 0 before inserting it into the tableau)

Z x1 x2 x3 x4 x5 x6 RHS
Z -1 0 0 0 1 0 1 0
x4 0 5 2 7 1 0 0 420
x6 0 3 2 5 0 -1 1 280

The tableau is not legal, since the coefficient in R0 is 1 for both x4 and x6. Subtracting R1 and R2 from R0 gives the following tableau

Z x1 x2 x3 x4 x5 x6 RHS Ratio
Z -1 -8 -4 -12 0 1 0 -700
x4 0 5 2 7 1 0 0 420 60
x6 0 3 2 5 0 -1 1 280 56

x6 is the leaving variable and x3 is the entering variable.


Row operations: R2new = R2/5, R0new = R0 + 12 · R2new, R1new = R1 − 7 · R2new

   Z    x1    x2   x3   x4    x5    x6   RHS  Ratio
Z  -1  -4/5   4/5   0    0  -7/5  12/5   -28
x4  0   4/5  -4/5   0    1   7/5  -7/5    28     20
x3  0   3/5   2/5   1    0  -1/5   1/5    56

x4 is the leaving variable and x5 is the entering variable.


Row operations: R1new = R1/(7/5), R0new = R0 + (7/5) · R1new, R2new = R2 + (1/5) · R1new

   Z    x1    x2   x3   x4   x5   x6   RHS
Z  -1    0     0    0    1    0    1     0
x5  0   4/7  -4/7   0   5/7   1   -1    20
x3  0   5/7   2/7   1   1/7   0    0    60

Phase 1 (minimizing the artificial variables) is now done, since there are no negative coefficients in the objective row. We check that both artificial variables are zero, which holds since they are both non-basic. This means that we have found a feasible solution to the original problem and we can go on to phase 2.
The original objective function is inserted into the last tableau from phase 1, and the columns of the artificial variables are deleted.

   Z    x1    x2   x3   x5   RHS
Z  -1    2     1    3    0     0
x5  0   4/7  -4/7   0    1    20
x3  0   5/7   2/7   1    0    60

First of all, the tableau is made legal by subtracting R2 from R0 three times.

   Z    x1    x2   x3   x5   RHS  Ratio
Z  -1  -1/7   1/7   0    0  -180
x5  0   4/7  -4/7   0    1    20     35
x3  0   5/7   2/7   1    0    60     84

x5 is the leaving variable and x1 is the entering variable.


Row operations: R1new = R1/(4/7), R0new = R0 + (1/7) · R1new, R2new = R2 − (5/7) · R1new

   Z   x1   x2   x3    x5    RHS
Z  -1   0    0    0   1/4   -175
x1  0   1   -1    0   7/4     35
x3  0   0    1    1  -5/4     35

There are no negative coefficients in the objective row, so the optimal solution is found: x1 = x3 = 35 and Z = 175.
Since the coefficient of x2 in the objective row is 0 and x2 is not a basic variable, there exists more than one optimal solution. Forcing x2 into the basis and doing one more iteration of the simplex method would result in a solution with the same objective value Z = 175 but with x1 = 70 and x2 = 35. Any convex combination of the two solutions is also optimal: X∗ = w1 · (35, 0, 35) + w2 · (70, 35, 0), w1 + w2 = 1 and w1, w2 ≥ 0.
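The claim about alternative optima can be checked numerically. The small plain-Julia sketch below (no solver needed) evaluates the objective and the left hand sides of the two constraints for both corner solutions and for a 50/50 combination of them; the data are taken from the augmented form above.

# Objective and constraint data: min 2x1 + x2 + 3x3
c = [2, 1, 3]
A = [5 2 7;    # 5x1 + 2x2 + 7x3 = 420
     3 2 5]    # 3x1 + 2x2 + 5x3 >= 280
sol1 = [35, 0, 35]
sol2 = [70, 35, 0]
mix  = 0.5 .* sol1 .+ 0.5 .* sol2   # a convex combination of the two optima
for x in (sol1, sol2, mix)
    println("x = ", x, "  Z = ", c' * x, "  A*x = ", A * x)
end
# All three give Z = 175 and satisfy both constraints with equality (420 and 280)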

3.2 Exercise 2
We notice that the second constraint has a negative right hand side so we first multiply this constraint by -1 to
get:

max Z = x1 + 2x2
subject to

x1 + x2 ≤ 4
x1 − x2 ≥ 6
2x1 + x2 = 6
x1 , x2 ≥ 0

Then we write on augmented form, adding a slack variable to the first constraint, an artificial variable and a
surplus variable to the second constraint and an artificial variable to the third constraint.

max Z = x1 + 2x2
subject to

x1 + x2 + x3 = 4
x1 − x2 + x̄4 − x5 = 6
2x1 + x2 + x̄6 = 6
x1 , x2 , x3 , x̄4 , x5 , x̄6 ≥ 0


In phase 1 we have to minimize the sum of artificial variables:

Phase 1: min Z = x̄4 + x̄6

which is equivalent to
max −Z = −x̄4 − x̄6
which again is equivalent to

max −Z
−Z + x̄4 + x̄6 = 0

We are now ready to enter the information into a tableau and start solving phase 1 (red variables are the
artificial variables):

We first put the tableau on proper form by doing a row operation on row 0. After that we see that x1 enters
the basis and x6 leaves.
When we restore proper form we see that we have reached the optimal solution to phase 1. We check if all
artificial variables are zero in the optimal solution. This is NOT the case. This tells us that the LP is infeasible
and we cannot go on with phase 2.


b) We have illustrated the solution space on the figure below. The red line corresponds to the equality constraint, the green line corresponds to the first less-than-or-equal-to constraint and the blue line corresponds to the second less-than-or-equal-to constraint. We see that the solutions that satisfy all three constraints have x2 < 0 and therefore such solutions are not feasible (since x2 ≥ 0 was part of the model). The LP is therefore infeasible, just like we found out using phase 1 of the 2-phase method. By looking at the two tableaus from task a) we see that phase 1 starts in (x1, x2) = (0, 0). It then moves to (x1, x2) = (3, 0). This improves the objective function from Z = 12 to Z = 3 (i.e. moving closer to the feasible solutions). We also see that x6 = 0 in this solution, indicating that the equality constraint is now satisfied (since the point is on the red line). We cannot move to a point that has a lower Z value and phase 1 is therefore done at this point.
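If the same LP is handed to a solver, the infeasibility can be read from the termination status. A minimal JuMP sketch, assuming GLPK is available:

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x1 >= 0)
@variable(m, x2 >= 0)
@objective(m, Max, x1 + 2x2)
@constraint(m, x1 + x2 <= 4)
@constraint(m, x1 - x2 >= 6)
@constraint(m, 2x1 + x2 == 6)
optimize!(m)
# The solver should report an infeasible status, matching the phase 1 conclusion
println(termination_status(m))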


3.3 Exercise 3
a) The abstract LP model is:
min ∑_{j=1}^{n} cj xj (40)

subject to

∑_{j=1}^{n} aij xj ≥ bi   ∀i = 1, . . . , m (41)

xj ≥ 0   ∀j = 1, . . . , n (42)

Let’s start by looking at constraints (41). We have a constraint of this type for all i = 1, . . . , m, that is for
every time interval that we have defined. Each such constraint ensures that we have enough agents on duty
for the particular time interval (remember that aij is one if shift j covers time interval i and zero otherwise).
Constraints (42) are non-negativity requirements. We could add that all variables should take integer values,
but if we do so, our model is no longer a LP.
The objective function (40) minimizes the cost of assigning agents to shifts. Each term cj xj in the summation
computes the cost of assigning xj agents to shift j.
b,c)

using JuMP, GLPK


model = Model(GLPK.Optimizer)

# This is the data defined in task (c)


m = 3
n = 5
b = [2,4,3]
c = [1500, 2000, 800, 1800, 1300]
a = [1 1 0 0 0;
0 1 1 1 0;
0 0 0 1 1]

# This is the Julia implementation of the abstract model


@variable(model, x[1:n] >= 0 )
@objective(model, Min, sum(c[j]*x[j] for j=1:n) )
@constraint(model, [i=1:m], sum(a[i,j]*x[j] for j=1:n) >= b[i])

optimize!(model)
println("Objective value: ", JuMP.objective_value(model))
println("x = ", JuMP.value.(x))

Solving the model results in the solution x = [1.0, 1.0, 0.0, 3.0, 0.0] and an objective value of 8900.

d) When using the data from (week2-ex3-dat.txt) one gets an objective value of 21250 and a solution
x = [2, 0, 0, 0, 0, 0, 1, 1, 4, 3, 0, 1, 2, 2, 1, 1, 3, 0, 0, 0, 0, 0, 0, 0]


3.4 Exercise 4
Julia model
module test
using JuMP, GLPK
model = Model(GLPK.Optimizer)

mutable struct Arc


from::Int64
to::Int64
cost::Int64
UB::Int64
end

m = 7
arcs = [
Arc(1,3,1,0),Arc(1,7,9,0),Arc(2,1,4,0),
Arc(2,5,6,30),Arc(2,6,5,0),Arc(3,6,7,50),
Arc(4,5,8,30),Arc(6,5,12,0),Arc(7,2,2,0),Arc(7,4,3,0)]

demands = [100,0,20,-10,-50,-40,-20]

@variable(model, x[arcs] >= 0 )


@objective(model, Min, sum(a.cost*x[a] for a in arcs) )
@constraint(model, [i=1:m], sum(x[a] for a in arcs if a.from==i)
- sum(x[a] for a in arcs if a.to==i) == demands[i] )

for a in arcs
if a.UB > 0
@constraint(model, x[a] <= a.UB )
end
end

print(model)
optimize!(model)
println("Objective value: ", JuMP.objective_value(model))
for a in arcs
println("Flow on arc (",a.from,",",a.to,") is ", JuMP.value.(x[a]))
end
end

Solution:
Objective value: 1510.0
Flow on arc (1,3) is 30.0
Flow on arc (1,7) is 70.0
Flow on arc (2,1) is 0.0
Flow on arc (2,5) is 30.0
Flow on arc (2,6) is 0.0
Flow on arc (3,6) is 50.0
Flow on arc (4,5) is 10.0
Flow on arc (6,5) is 10.0
Flow on arc (7,2) is 30.0
Flow on arc (7,4) is 20.0


Drawing
Green numbers are the assigned flow (found by LP), blue numbers are surplus / deficit. We see that the flow
respects upper bounds and that the flow from/to each node match with the surplus / deficit.

Modified model. We need to send at least as many units through arc (6,5) as through arc (2,5). We can express this as
x65 ≥ x25
or
x65 − x25 ≥ 0
We can write this in Julia as

@constraint(model, sum(x[a] for a in arcs if a.from==6 && a.to==5) -


sum(x[a] for a in arcs if a.from==2 && a.to==5) >= 0)

or

@constraint(model, x[arcs[8]] - x[arcs[4]] >= 0)

In the second approach we used that arc (6,5) is in the eighth position in the list of arcs and arc (2,5) is in the fourth position in the list. With this change the objective value increases to 1570 and the optimal solution becomes:
becomes:


We see that the added constraint has changed the solution and that the new solution respects the new constraint.

3.5 Exercise 5
The network representation of the problem is: red numbers are cost per unit of flow and blue numbers are
surplus / deficit

We will denote nodes as follows:


Node #
Factory 1 1
Factory 2 2
Distribution Center 3
Warehouse 1 4
Warehouse 2 5
With this the LP formulation becomes


min 3x13 + 7x14 + 4x23 + 9x25 + 2x34 + 4x35


subject to

x13 + x14 = 80
x23 + x25 = 70
−x13 − x23 + x34 + x35 = 0
−x14 − x34 = −60
−x25 − x35 = −90
x14 , x25 ≥ 0
0 ≤ x13 , x23 , x34 , x35 ≤ 50

We can solve this in Julia by typing in the LP as it is written above, or we can use our generic minimum cost
flow model. We use the latter.

module test
using JuMP, GLPK
model = Model(GLPK.Optimizer)

mutable struct Arc


from::Int64
to::Int64
cost::Int64
UB::Int64
end

m = 5
arcs = [
Arc(1,3,3,50),Arc(1,4,7,0),Arc(2,3,4,50),
Arc(2,5,9,0),Arc(3,4,2,50),Arc(3,5,4,50)]

demands = [80,70,0,-60,-90]

@variable(model, x[arcs] >= 0 )


@objective(model, Min, sum(a.cost*x[a] for a in arcs) )
@constraint(model, [i=1:m], sum(x[a] for a in arcs if a.from==i)
- sum(x[a] for a in arcs if a.to==i) == demands[i] )

for a in arcs
if a.UB > 0
@constraint(model, x[a] <= a.UB )
end
end

print(model)
optimize!(model)
println("Objective value: ", JuMP.objective_value(model))
for a in arcs
println("Flow on arc (",a.from,",",a.to,") is ", JuMP.value.(x[a]))
end
end

When solving this we get an objective of 1100. The flow is shown on the figure below:


4 Linear Programming: Simplex on matrix form


4.1 Exercise 1
We will show two solutions to this exercise. In the first version we write up the simplex tableau in each iteration and use matrix computations to compute the new tableau in each iteration. This is similar to what we did in class. The second solution is a bit more "streamlined" since it avoids the tableaus and looks directly at the matrix computations to make the necessary decisions (i.e. optimality test, entering and leaving variable and so on).

4.1.1 First solution to exercise 1


We start by entering the initial information into a tableau. We see that x2 enters the basis and x7 leaves the
basis. Our new basic variables are xB = [x6 , x2 ]

Preparation, all based on the information found in the LP:

AI=[
2 3 3 2 2 1 0;
3 5 4 2 4 0 1]
A=[ 2 3 3 2 2 ;
3 5 4 2 4 ]
b=[20; 30]
c=[5 8 7 4 6]
c0=[c 0 0]

Basis: xB = [x6 , x2 ], we extract the right columns from AI and c0

# This is an easy way of extracting column 6 and 2 from the AI matrix


B=AI[:,[6, 2]]
# and we can do something similar to get cB:
cB = c0[[6 2]]

Now we can do the computations to find the next tableau. We can either use

[ cB·B⁻¹·A − c | cB·B⁻¹ | cB·B⁻¹·b ]
[    B⁻¹·A    |   B⁻¹  |   B⁻¹·b  ]
from the slide on fundamental insight:

[ cB*inv(B)*A-c cB*inv(B) cB*inv(B)*b;


inv(B)*A inv(B) inv(B)*b]

Or we can use

[ 1  cB·B⁻¹ ]     [ 1  −c  0  0 ]
[ 0    B⁻¹  ]  ·  [ 0   A  I  b ]


M=[ 1 cB*inv(B) ; [0;0] inv(B)]


firstTab = [1 -c0 0; [0;0] AI b]
M*firstTab

In both cases the result is

1.0 -0.2 0.0 -0.6 -0.8 0.4 0.0 1.6 48.0


0.0 0.2 0.0 0.6 0.8 -0.4 1.0 -0.6 2.0
0.0 0.6 1.0 0.8 0.4 0.8 0.0 0.2 6.0

(actually, in my output I had a “-4.44089e-16”, but I converted that to a zero). We insert this into the tableau.

We see that x4 enters the basis and x6 leaves it. Our new basis is xB = [x4, x2]

# Extracting column 4 and 2 from the AI matrix


B=AI[:,[4, 2]]
# extract 4th and 2nd coefficient from c0:
cB = c0[[4 2]]

We can again do either

[ cB*inv(B)*A-c cB*inv(B) cB*inv(B)*b;


inv(B)*A inv(B) inv(B)*b]

or

M=[ 1 cB*inv(B) ; [0;0] inv(B)]


M*firstTab

In both cases the result is:

1.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 50.0


0.0 0.25 0.0 0.75 1.0 -0.5 1.25 -0.75 2.5
0.0 0.5 1.0 0.5 0.0 1.0 -0.5 0.5 5.0

We can already see that we have reached the optimal tableau (no negative numbers in row zero), but let’s insert
the numbers in the tableau:


So the solution is x2 = 5, x4 = 2.5 (all other x variables are zero) and Z = 50.

4.1.2 Second solution to exercise 1


From the initial problem, we can construct c, A, B, b and cB:

c = [5 8 7 4 6], B = [1 0; 0 1], b = [20; 30], A = [2 3 3 2 2; 3 5 4 2 4], cB = [0 0]

Furthermore we want to construct a basis vector for keeping track of the current variables in the basis, xB = [x6; x7].
When using the simplex method on matrix form, it is important to notice that after each iteration only B and cB are updated, while c, b and A remain constant.

First step is the optimality test: cB B⁻¹A − c = [−5 −8 −7 −4 −6]
Since −8 is the most negative number, x2 is entering the basis.

Next step is to find the variable leaving the basis, which is done by the minimum ratio test. The right hand side is computed by

B⁻¹b = [20; 30]

and the column under the entering variable is found by

B⁻¹A2 = [3; 5], where the subscript denotes the second column of the A matrix.

Since 30/5 < 20/3, x7 is leaving the basis.

Before beginning the next iteration, B, cB and xb are updated.


" #
x6
xb =
x2
Since x7 is the leaving variable, the column corresponding to x7 in the B matrix is updated to the column in
the A matrix corresponding to the in going variable x2 .
" #
1 3
B=
0 5


The value corresponding to x7 in cB is also replaced by the value corresponding to x2 in the c vector:

cB = [0 8]

Now we are ready for the second iteration, which is exactly like the first iteration. First the optimality test:

cB B⁻¹A − c = [−1/5 0 −3/5 −4/5 2/5]

In this iteration x4 is seen to enter basis. To find out which variable is leaving the basis, we are using the min
ratio test. The right hand side is now
" # " #
x6 2
−1
xb = =B b=
x2 6
And the column under the entering variable is found by
" #
4/5
−1
B A4 =
2/5
The ratios are 2/(4/5) = 2.5 and 6/(2/5) = 15, so x6 is leaving the basis. Again, before beginning the next iteration, B, cB and xB are updated.
" #
x4
xb =
x2
Since x6 is the leaving variable, the column corresponding to x6 in the B matrix is updated to the column in
the A matrix corresponding to the in going variable x4 .
" #
2 3
B=
2 5
The value corresponding to x6 in cB is also replaced by the value corresponding to x4 in the c vector.
 
cB = 4 8

We are now ready for the third iteration, starting with the optimality test:

cB B⁻¹A − c = [0 0 0 0 0]

Since there are no negative values, the optimal solution is found. The right hand side in the optimal solution is

[x4; x2] = B⁻¹b = [5/2; 5]

and the optimal objective value is

Z∗ = cB B⁻¹b = 50
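The three iterations above can also be scripted. The sketch below runs the same matrix computations (optimality test, ratio test, basis update) in a loop for this particular instance; it only mirrors the hand calculations and is not a general solver.

A  = [2 3 3 2 2; 3 5 4 2 4]
AI = [A [1 0; 0 1]]          # all seven columns (x1..x5 plus slacks x6, x7)
b  = [20, 30]
c0 = [5, 8, 7, 4, 6, 0, 0]
basis = [6, 7]               # start from the slack basis

while true
    Binv = inv(AI[:, basis])
    y    = c0[basis]' * Binv            # simplex multipliers cB * B^-1
    red  = vec(y * AI) .- c0            # reduced costs (row 0 of the tableau)
    if all(red .>= -1e-9)               # optimality test
        println("Optimal: Z = ", c0[basis]' * (Binv * b))
        println("Basic variables ", basis, " = ", Binv * b)
        break
    end
    enter  = argmin(red)                # most negative reduced cost enters
    d      = Binv * AI[:, enter]
    rhs    = Binv * b
    ratios = [d[i] > 1e-9 ? rhs[i] / d[i] : Inf for i in eachindex(d)]
    leave  = argmin(ratios)             # minimum ratio test
    basis[leave] = enter                # replace the leaving variable
end

Running it reproduces the hand computation: the basis ends as [4, 2] with values [2.5, 5.0] and Z = 50.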

4.2 Exercise 2
From the initial problem we know that

c = [6 1 2], b = [2; 3; 1] and A = [2 2 1/2; −4 −2 −3/2; 1 2 1/2]

And from the final optimal table we can see that y∗ = [2 0 2] and S∗ = [1 1 2; −2 0 4; 1 0 −1]
So in order to get the full last tableau, we are missing Z∗, b∗, A∗ and z∗ − c.

From our formulas we know that

Z∗ = cB B⁻¹b = y∗b = 6

and

b∗ = B⁻¹b = S∗b = [7; 0; 1]

Similarly for A∗:

A∗ = B⁻¹A = S∗A = [0 4 0; 0 4 1; 1 0 0]

and at last

z∗ − c = cB B⁻¹A − c = y∗A − c = [0 7 0]

By using the formulas on the slides, all the missing values are found.
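These matrix products are quick to reproduce in Julia. A minimal sketch using the values read off above:

# Data read from the initial problem and the final tableau
c  = [6 1 2]
b  = [2, 3, 1]
A  = [2 2 1/2; -4 -2 -3/2; 1 2 1/2]
y  = [2 0 2]                    # y* = cB * inv(B)
S  = [1 1 2; -2 0 4; 1 0 -1]    # S* = inv(B)

println("Z*     = ", y * b)       # expected 6
println("b*     = ", S * b)       # expected [7, 0, 1]
println("A*     = ", S * A)       # expected [0 4 0; 0 4 1; 1 0 0]
println("z* - c = ", y * A .- c)  # expected [0 7 0]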

4.3 Exercise 3
From the initial problem, A is determined:

A = [1 2 1; 2 1 3]

From the final simplex tableau, cB B⁻¹ can be read:

cB B⁻¹ = [3/5 4/5]

From fundamental insight we know that:

cB B⁻¹A − c = [7/10 0 0] ⇔

c = cB B⁻¹A − [7/10 0 0] = [3/5 4/5]·[1 2 1; 2 1 3] − [7/10 0 0] = [11/5 2 3] − [7/10 0 0] = [3/2 2 3]

This means that c1 = 3/2, c2 = 2 and c3 = 3.


From fundamental insight we know that the right hand side in every iteration is given by B⁻¹b.
In this example, for the final tableau it holds that

B⁻¹b = [3/5 −1/5; −1/5 2/5]·[b; 2b] = [1; 3] ⇔

[b; 2b] = [3/5 −1/5; −1/5 2/5]⁻¹·[1; 3] = [5; 10]

This means that b = 5.


The optimal objective value is calculated in two different ways.


Using the coefficients: Z = c1x1 + c2x2 + c3x3 = 3/2 · 0 + 2 · 1 + 3 · 3 = 11
Using fundamental insight and the value of b: Z = cB B⁻¹b = [3/5 4/5]·[5; 10] = 11
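The two small computations above can also be checked numerically. A minimal plain-Julia sketch using the quantities read from the tableaus:

A    = [1 2 1; 2 1 3]
cBBi = [3/5 4/5]                  # cB * inv(B), read from the final tableau
row0 = [7/10 0 0]                 # cB*inv(B)*A - c, read from the final tableau
c    = cBBi * A .- row0           # recover the objective coefficients
println("c = ", c)                # expected [3/2 2 3]

Binv = [3/5 -1/5; -1/5 2/5]       # inv(B), read under the slack columns
rhs  = [1, 3]                     # final right hand side
bvec = Binv \ rhs                 # solves inv(B)*[b, 2b] = rhs, i.e. [b, 2b] = B*rhs
println("[b, 2b] = ", bvec)       # expected [5, 10], so b = 5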

4.4 Exercise 4
1. True: If the solution is optimal but not a CPF solution, there exist two optimal CPF solutions. On the line segment between the two optimal CPF solutions there are infinitely many optimal solutions.
2. True: We need to show that

I. any point on the line segment connecting x∗ and x∗∗ are feasible and
II. that any point on the line segment connecting x∗ and x∗∗ has the same objective value as x∗ or x∗∗
I. follows since the feasible region is a convex set. It is also possible to prove it in detail using arguments similar to those below.
II. We know that cx∗ = cx∗∗ for two different solutions x∗ and x∗∗ . Any point on the line between x∗ and
x∗∗ can be written as αx∗ + (1 − α)x∗∗ for an α ∈ [0; 1]. We compute the objective at such a point:

Z = c(αx∗ + (1 − α)x∗∗)        (a)
  = cαx∗ + c(1 − α)x∗∗         (b)
  = αcx∗ + (1 − α)cx∗∗         (c)
  = αcx∗ + (1 − α)cx∗          (d)
  = (α + 1 − α)cx∗ = cx∗

(a) follows from the fact that the matrix product is distributive over matrix addition, (b) follows from the commutative property, (c) follows from the assumption that cx∗ = cx∗∗ and (d) follows from the distributive law.
3. False: It is not guaranteed that the system of equations has a solution. Furthermore the solutions to n
constraints might not fulfill all of the constraints:

In the example in the figure above there are two variables and three constraints. The solutions to two
constraints (green and black line) is not in the feasible region and therefore the statement is false.


5 Linear Programming: Duality and Sensitivity analysis


5.1 Exercise 1
The dual problem is a minimization problem with variables y1 and y2 which are associated with each of the
constraints in the primal problem. The right hand side of the primal problem becomes the coefficients in the
objective function in the dual problem. Similarly, the coefficients in the objective function in the primal problem become the right hand sides of the constraints in the dual problem. The coefficient matrix of the primal problem is transposed to construct the dual problem. All of the constraints in the dual problem are "≥"-constraints since the variables in the primal problem are ≥ 0. Similarly, the variables in the dual problem are ≥ 0 since the constraints in the primal problem are "≤"-constraints.
min Z = 12y1 + y2 (43)
s.t. y1 + y2 ≥ −1 (44)
y1 + y2 ≥ −2 (45)
2y1 − y2 ≥ −1 (46)
y1 ≥ 0, y2 ≥ 0 (47)

5.2 Exercise 2
(a) We should determine the allowable range for the right hand side b1 and b2 .
For the final tableau to stay feasible and optimal we need to make sure that the right hand sides stay greater than or equal to 0. When changing the right hand side we add δ to b1 and b2, respectively. Determining the allowable range for b1 is done using fundamental insight: S∗b + S∗[δ; 0] ≥ 0.

From the final simplex tableau we know S∗ = [1 −2; 1 −1] and b = [30; 10].

Calculating the allowable range for b1:

S∗b + S∗[δ; 0] = [1 −2; 1 −1]·[30; 10] + [1 −2; 1 −1]·[δ; 0] = [10; 20] + [δ; δ] = [10 + δ; 20 + δ] ≥ 0
The interval for δ:
10 + δ ≥ 0 ⇒ δ ≥ −10
20 + δ ≥ 0 ⇒ δ ≥ −20
This means that the basis will remain feasible and optimal as long as δ ≥ −10. The objective function is

Z = y∗b + y∗[δ; 0] = 40 + y1∗δ = 40 + δ
as long as δ ≥ −10. Note that we read that the dual variable y1∗ has value 1 in the tableau in the exercise text
(row zero under the first slack variable).
The same procedure is made for b2:

S∗b + S∗[0; δ] = [1 −2; 1 −1]·[30; 10] + [1 −2; 1 −1]·[0; δ] = [10; 20] + [−2δ; −δ] = [10 − 2δ; 20 − δ] ≥ 0
The interval for δ:
10 − 2δ ≥ 0 ⇒ δ ≤ 5
20 − δ ≥ 0 ⇒ δ ≤ 20
This means that the basis will remain feasible and optimal as long as δ ≤ 5. The objective function is

Z = y∗b + y∗[0; δ] = 40 + y2∗δ = 40 + δ

as long as δ ≤ 5. Note that we read that the dual variable y2∗ has value 1 in the tableau in the exercise text (row zero under the second slack variable).
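The same δ-intervals can be computed mechanically from S∗ and b. A small plain-Julia sketch:

S = [1 -2; 1 -1]       # S* = inv(B) from the final tableau
b = [30, 10]
rhs = S * b            # current right hand side of the final tableau, [10, 20]

# Perturbing b_k by delta changes the right hand side by delta * S[:, k];
# feasibility requires rhs + delta * S[:, k] >= 0 componentwise.
for k in 1:2
    lo, hi = -Inf, Inf
    for i in 1:2
        if S[i, k] > 0
            lo = max(lo, -rhs[i] / S[i, k])
        elseif S[i, k] < 0
            hi = min(hi, -rhs[i] / S[i, k])
        end
    end
    println("b$k: delta in [", lo, ", ", hi, "]")
end
# Expected: delta >= -10 for b1 and delta <= 5 for b2, as derived above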


(b) Using sensitivity analysis we want to determine the allowable ranges for c1 and c2. In other words we need to know the values of c1 and c2 such that the last basis remains unchanged.

In the objective function we change c1 to c1 + δ and introduce a δ-row (the part of the objective row containing the coefficients of δ) in the final simplex tableau.

Z x1 x2 x3 x4 RHS
Z 1 0 0 1 1 40
δ 0 -1 0 0 0 0
x2 0 0 1 1 -2 10
x1 0 1 0 1 -1 20

The tableau is made legal by adding R3 to R1

Z x1 x2 x3 x4 RHS
Z 1 0 0 1 1 40
δ 0 0 0 1 -1 20
x2 0 0 1 1 -2 10
x1 0 1 0 1 -1 20

For the tableau to stay optimal, the objective row has to stay non-negative: y∗A − (c + δ) ≥ 0. This means that the coefficients of x3 and x4 need to be greater than or equal to 0.

1 + 1δ ≥ 0 ⇒ δ ≥ −1
1 − 1δ ≥ 0 ⇒ δ ≤ 1

This means that δ ∈ [−1; 1] in order for the tableau to stay optimal. In the tableau above we read that the
objective value is Z = 40 + 20δ as long as δ ∈ [−1; 1]
The exact same procedure is made for c2

Z x1 x2 x3 x4 RHS
Z 1 0 0 1 1 40
δ 0 0 -1 0 0 0
x2 0 0 1 1 -2 10
x1 0 1 0 1 -1 20

The tableau is made legal by adding R2 to R1

Z x1 x2 x3 x4 RHS
Z 1 0 0 1 1 40
δ 0 0 0 1 -2 10
x2 0 0 1 1 -2 10
x1 0 1 0 1 -1 20

The coefficients of x3 and x4 need to be greater than or equal to 0 for the tableau to stay optimal.

1 + 1δ ≥ 0 ⇒ δ ≥ −1
1 − 2δ ≥ 0 ⇒ δ ≤ 1/2

This means that δ ∈ [−1; 1/2] in order for the tableau to stay optimal. In the tableau above we read that the objective value is Z = 40 + 10δ as long as δ ∈ [−1; 1/2].


5.3 Exercise 3
From the optimal table we can see that y∗ = [1 1]. By knowing the production costs of the new product we can construct the corresponding column in the A matrix, i.e. Ae = [2; 3]. From earlier we know that the optimality test is given by y∗Ae − ce, and if it is less than zero the current solution is not optimal.

So if we want to know what the profit should be before the new product enters the basis, i.e. becomes part of the optimal solution, we need to solve the inequality y∗Ae − ce < 0, where ce is the profit.

y∗Ae − ce = [1 1]·[2; 3] − ce < 0 ⇔ 5 − ce < 0 ⇔ ce > 5
The profit of the new product should be greater than 5.

5.4 Exercise 4
Given the problem

max Z = 5x1 + 4x2


s.t. 2x1 + 3x2 ≥ 10
x1 + 2x2 = 20
x1 ∈ R, x2 ≥ 0

a) Find the dual problem using the SOB method


The first step in the SOB method is to construct the objective function of the dual problem. The coefficients of the dual problem's objective function are the right hand side coefficients of the primal problem.
The objective function in the dual problem is min 10y1 + 20y2.

Next step is to construct the constraints in the dual problem. Every variable in the primal problem gives a
constraint in the dual problem, so we’ll end up with 2 constraints in the dual problem.
• The coefficients of the constraints in the dual problem are found as the columns of the A matrix of the primal problem
• The right hand sides of the constraints are the coefficients of the objective function in the primal problem
• The type of constraint (≤, =, ≥) is determined by the domain of the variable in the primal problem corresponding to the constraint. See the slides for the table.
This gives us the 2 constraints:

2y1 + y2 = 5
3y1 + 2y2 ≥ 4

The third and last step in the SOB method is to determine the domain of the dual variables. The domain
depends on the constraints type in the primal problem. A table of the conversions can be found in the slides.
In our case the domain of the dual variables are y1 ≤ 0 and y2 ∈ R
Combining all the steps we get the final dual problem

min W = 10y1 + 20y2


s.t. 2y1 + y2 = 5
3y1 + 2y2 ≥ 4
y1 ≤ 0, y2 ∈ R


b) Use Table 6.12 to convert the primal problem to our standard form given in the beginning of
Sec. 6.1, and construct the corresponding dual problem. Then show that this dual problem is
equivalent to the one obtained in part (a)

From Table 6.12 we see that because x1 is unconstrained, we need to introduce x1 = x1+ − x1-:

max Z = 5(x1+ − x1-) + 4x2
s.t. 2(x1+ − x1-) + 3x2 ≥ 10
     (x1+ − x1-) + 2x2 = 20
     x1+ ≥ 0, x1- ≥ 0, x2 ≥ 0

Next we need to convert the = constraint to ≤ constraints. From Table 6.12 this is done by introducing two new constraints: (x1+ − x1-) + 2x2 ≤ 20 and −(x1+ − x1-) − 2x2 ≤ −20. So the primal problem on standard form is

max Z = 5(x1+ − x1-) + 4x2
s.t. 2(x1+ − x1-) + 3x2 ≥ 10
     (x1+ − x1-) + 2x2 ≤ 20
     −(x1+ − x1-) − 2x2 ≤ −20
     x1+ ≥ 0, x1- ≥ 0, x2 ≥ 0

At last we need to change the ≥ constraint to a ≤ constraint, which is simply done by multiplying by −1:

max Z = 5(x1+ − x1-) + 4x2
s.t. −2(x1+ − x1-) − 3x2 ≤ −10
     (x1+ − x1-) + 2x2 ≤ 20
     −(x1+ − x1-) − 2x2 ≤ −20
     x1+ ≥ 0, x1- ≥ 0, x2 ≥ 0

Now that the problem is on standard form we can easily construct the dual problem:

min W = −10y1 + 20y2 − 20y3
s.t. −2y1 + y2 − y3 ≥ 5
     2y1 − y2 + y3 ≥ −5
     −3y1 + 2y2 − 2y3 ≥ 4
     y1 ≥ 0, y2 ≥ 0, y3 ≥ 0

The first two constraints together force −2y1 + y2 − y3 = 5. If we now introduce y1' = −y1 and y2' = y2 − y3 we get

min W = 10y1' + 20y2'
s.t. 2y1' + y2' = 5
     3y1' + 2y2' ≥ 4
     y1' ≤ 0, y2' ∈ R
which is exactly the same we got by using the SOB method.

5.5 Exercise 5
Replacing b with b̄ only changes the objective function of the dual problem, so the dual feasible region remains the same. This means that y∗ is still a feasible solution to the dual problem. If x̄ denotes an optimal solution to the new primal problem, the weak duality theorem gives cx̄ ≤ y∗b̄.


6 Week 6: Modeling with integer variables


Exercise 1
Let xi , for i = 1, . . . , 8, be a binary variable that is 1 if we invest in possibility i and 0 otherwise.
1. x1 + x2 + · · · + x8 ≤ 6
2. x1 + x2 + · · · + x8 ≥ 1
3. x1 + x8 = 2y, where y is a binary variable.
4. x1 + x7 ≤ 1
5. x4 ≤ x2 + x3 + x5
6. 2x1 + x2 + · · · + x8 = 4
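The six conditions can be collected in a small JuMP model to check that they are mutually consistent. The sketch below encodes the constraints exactly as listed; the objective (maximising the number of investments) is a placeholder assumption made purely for illustration, since the exercise only asks for the constraints.

using JuMP, GLPK
m = Model(GLPK.Optimizer)
@variable(m, x[1:8], Bin)
@variable(m, y, Bin)                        # helper variable for condition 3

@objective(m, Max, sum(x))                  # placeholder objective, not part of the exercise
@constraint(m, sum(x) <= 6)                 # 1
@constraint(m, sum(x) >= 1)                 # 2
@constraint(m, x[1] + x[8] == 2y)           # 3: invest in both 1 and 8, or in neither
@constraint(m, x[1] + x[7] <= 1)            # 4
@constraint(m, x[4] <= x[2] + x[3] + x[5])  # 5
@constraint(m, 2x[1] + sum(x[2:8]) == 4)    # 6
optimize!(m)
println(termination_status(m), "  ", value.(x))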

Exercise 2
Introduce a binary variable xi that is equal to 1 if one invests in project i and 0 if one does not invest in project i. The IP model is then
max Z = x1 + 1.8x2 + 1.6x3 + 0.8x4 + 1.4x5 (48)
s.t. 6x1 + 12x2 + 10x3 + 4x4 + 8x5 ≤ 20 (49)
xi ∈ {0, 1} i = 1, . . . , 5 (50)
The following piece of Julia code solves the problem
using JuMP, GLPK
m = Model(GLPK.Optimizer)

@variable(m, x[1:5], Bin )


@objective(m, Max, x[1] + 1.8*x[2] + 1.6*x[3] + 0.8*x[4] + 1.4*x[5] )
@constraint(m, 6*x[1] + 12*x[2] + 10*x[3] + 4*x[4] + 8*x[5] <= 20 )

optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x = ", JuMP.value.(x))
The optimal solution to the problem is:
• Z = 3.4
• x1 = x3 = x4 = 1, x2 = x5 = 0

Exercise 3
Let xi be the production level of product i. Introduce a binary variable yi that indicates whether a product is
being produced or not. We have to ensure that xi only has a value when yi is equal to 1. This can be done
with the following constraint:
xi ≤ Mi yi ∀i = 1, 2, 3, 4 (51)
where Mi is a large number greater than or equal to the maximum number of units that can be produced. From
the third constraint it can be seen that no more than 6000 products can be produced. Furthermore it can be
seen that no more than 6000/3 = 2000 products can be produced as the smallest coefficient on the left hand
side is 3. The constraint, ensuring that xi only has a value when yi is equal to 1, can be formulated as
xi ≤ 2000yi ∀i = 1, 2, 3, 4 (52)
Now we have to model the three described constraints


1. No more than two of the products can be produced: y1 + y2 + y3 + y4 ≤ 2


2. First we model that product 3 only can be produced if product 1 or product 2 (or both) are produced:

y1 + y2 ≥ y3 (53)

then we model that product 4 only can be produced if product 1 or product 2 (or both) are produced:

y1 + y2 ≥ y4 (54)

3. How to model this kind of constraint is described in the lectures in the example "Activate / deactivate
constraints " (in the Danish slides: "Aktivere / deaktivere begrænsninger").
We introduce a binary variable u.

5x1 + 3x2 + 6x3 + 4x4 ≤ 6000 + M u (55)


4x1 + 6x2 + 3x3 + 5x4 ≤ 6000 + M (1 − u) (56)
(57)

For M it should hold that M + 6000 is greater than or equal to the largest possible left hand side. We already know that xi ≤ 2000, which means that the left hand side in both constraints is at most (5 + 3 + 6 + 4) · 2000 = 36000. This means that we can choose M = 30000.
The objective function becomes

max 70x1 + 60x2 + 90x3 + 80x4 − 50000y1 − 40000y2 − 70000y3 − 60000y4

The expression 70x1 + 60x2 + 90x3 + 80x4 in the objective function expresses the revenue from the production
(numbers are from the row "Marginal revenue" in the table in the exercise text). The expression −50000y1 −
40000y2 − 70000y3 − 60000y4 expresses the startup cost associated with the chosen production.
The problem can be solved with Julia as:

using JuMP, GLPK


m = Model(GLPK.Optimizer)

@variable(m,x[1:4]>=0)
@variable(m,y[1:4],Bin)
@variable(m,u,Bin)

M=30000

@objective(m, Max, 70*x[1]+60*x[2]+90*x[3]+80*x[4]


-50000*y[1]-40000*y[2]-70000*y[3]-60000*y[4])
@constraint(m, [i=1:4] ,x[i]<=2000*y[i])
@constraint(m,y[1]+y[2]+y[3]+y[4]<=2)
@constraint(m,y[1]+y[2]>=y[3])
@constraint(m,y[1]+y[2]>=y[4])
@constraint(m,5*x[1]+3*x[2]+6*x[3]+4*x[4]<=6000+M*u)
@constraint(m,4*x[1]+6*x[2]+3*x[3]+5*x[4]<=6000+M*(1-u))

optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x = ", JuMP.value.(x))
println("y = ", JuMP.value.(y))
println("u = ", JuMP.value(u))

The optimal objective value is Z=80000, x2 = 2000, y2 = 1 and u = 0. All other variables are equal to 0


Exercise 4
We introduce the following integer variables:
• x11 , x21 ≥ 0: Pieces of toy one and two produced at factory one.
• x12 , x22 ≥ 0: Pieces of toy one and two produced at factory two.
In order to help us define the objective function and constraints we introduce the following binary variables:
• y1 ∈ {0, 1} : 1 if factory one is used else 0
• y2 ∈ {0, 1} : 1 if factory two is used else 0
• t1 ∈ {0, 1} : 1 if toy one is produced else 0
• t2 ∈ {0, 1} : 1 if toy two is produced else 0
With the introduced variables the objective function is:

max 10 (x11 + x12 ) + 15 (x21 + x22 ) − 50000t1 − 80000t2

Just one factory would be used corresponds to the constraint:

y1 + y2 ≤ 1

Toy 1 can be produced at the rate of 50 per hour in factory 1 and 40 per hour in factory 2. Toy 2 can be
produced at the rate of 40 per hour in factory 1 and 25 per hour in factory 2. Factories 1 and 2, respectively,
have 500 hours and 700 hours of production time available before Christmas that could be used to produce
these toys. This can be formulated as the following two constraints:
(1/50)x11 + (1/40)x21 ≤ 500y1
(1/40)x12 + (1/25)x22 ≤ 700y2
To ensure that toy one is only produced if we decide to produce toy one and likewise with toy two we need to
add the following two constraints:
x11 + x12 ≤ M1 t1
x21 + x22 ≤ M2 t2
At last we need to find some good values for M1 and M2 . Some good values could be M1 = max{500·50, 700·40}
and M2 = max{500·40, 700·25} as these are the maximum production capacity of toy 1 and toy 2. The problem
can be solved with Julia like this:

using JuMP, GLPK


m = Model(GLPK.Optimizer)

@variable(m, x[1:2,1:2] >= 0)


@variable(m, y[1:2], Bin)
@variable(m, t[1:2], Bin)

@objective(m, Max, 10*(x[1,1]+x[1,2])+15*(x[2,1]+x[2,2])-50000*t[1]-80000*t[2])


@constraint(m, y[1] + y[2] <= 1)
@constraint(m, (1/50)*x[1,1] + (1/40)*x[2,1] <= 500*y[1])
@constraint(m, (1/40)*x[1,2] + (1/25)*x[2,2] <= 700*y[2])
@constraint(m, x[1,1] + x[1,2] <= 28000*t[1])
@constraint(m, x[2,1] + x[2,2] <= 20000*t[2])


optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x = ", JuMP.value.(x))
println("y = ", JuMP.value.(y))
println("t = ", JuMP.value.(t))

The optimal objective value is Z = 230000, x12 = 28000, y2 = 1 and t1 = 1. All other variables are equal to 0.
Expressed in words this means that the optimal profit is $230000, toy factory two should be used and only toy
one should be produced. We should produce 28000 units of toy 1.

Exercise 5
There are several ways to formulate the MIP. One of them is the following

max x1 + 2x2 + 3x3


subject to

x1 − x2 = 3y (58)
x1 + 3x2 + 5x3 ≤ 10.7 (59)
−2 ≤ y ≤ 2 (60)
x1 , x2 , x3 ≥ 0 (61)
y ∈Z (62)

To see that this is equivalent to the model in the exercise, let’s first consider the non-linear constraint

|x1 − x2 |= 0 or 3 or 6

from the model in the exercise. This constraint is identical to the constraint

x1 − x2 = −6 or − 3 or 0 or 3 or 6

In the solution we have that y should be an integer between −2 and 2. This means that 3y can take one of the
five values −6, −3, 0, 3 or 6 and that means that constraint (58) together with the definition of y does exactly
the same as the non-linear constraint from the exercise.
The model can be implemented in Julia as shown below. The optimal solution is x1 = 7.175, x2 = 1.175, x3 = 0
and y = 2. The optimal objective value is Z = 9.525.

using JuMP, GLPK


m = Model(GLPK.Optimizer)

@variable(m, x1 >= 0)
@variable(m, x2 >= 0)
@variable(m, x3 >= 0)
@variable(m, -2 <= y <= 2, Int)

@objective(m, Max, x1 + 2x2 + 3x3 )


@constraint(m, x1 - x2 -3y == 0 )
@constraint(m, x1 + 3x2 + 5x3 <= 10.7 )

optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
println("x1 = ", JuMP.value(x1))
println("x2 = ", JuMP.value(x2))
println("x3 = ", JuMP.value(x3))
println("y = ", JuMP.value(y))


Exercise 6
We define six binary variables y1j and y2j for j = 1, 2, 3 with yij having the following interpretation
yij = 1 if xi = j, and 0 otherwise        (63)

We just need three binary variables per x variable since each x-variable only can take the values 0, 1, 2 and 3
because of the constraint in the original model. The value xi = 0 is obtained by setting yij = 0 for j = 1, 2, 3
so we do not need a yi0 variable.
With these binary variables we can rewrite the model using only yij . Everywhere we have xi we can substitute
with 1yi1 + 2yi2 + 3yi3 , everywhere we have xi² we can substitute with 1²yi1 + 2²yi2 + 3²yi3 , everywhere we have xi³
we can substitute with 1³yi1 + 2³yi2 + 3³yi3 and everywhere we have xi⁴ we can substitute with 1⁴yi1 + 2⁴yi2 + 3⁴yi3 .
With this we get the objective and constraint (64) of the model below. Constraints (65) and (66) are added to
ensure that at most one of the variables yi1 , yi2 and yi3 take a non-zero value, i = 1, 2 (see definition (63)).

max Z = 4(1²y11 + 2²y12 + 3²y13 ) − (1³y11 + 2³y12 + 3³y13 )
      + 10(1²y21 + 2²y22 + 3²y23 ) − (1⁴y21 + 2⁴y22 + 3⁴y23 )
subject to

1y11 + 2y12 + 3y13 + 1y21 + 2y22 + 3y23 ≤ 3 (64)


y11 + y12 + y13 ≤ 1 (65)
y21 + y22 + y23 ≤ 1 (66)
y11 , y12 , y13 , y21 , y22 , y23 ∈ {0, 1} (67)

One way of implementing the model in Julia is shown below. The model makes heavy use of the sum(... for ...)
function. We could also have written the model in a more explicit way (typing in all the coefficients directly).
The sum notation makes it easier to extend and/or change the model.

using JuMP, GLPK


m = Model(GLPK.Optimizer)

@variable(m, y[1:2,1:3], Bin)

@objective(m, Max,
4(sum(j^2 * y[1,j] for j=1:3))
- sum(j^3 * y[1,j] for j=1:3)
+ 10(sum(j^2 * y[2,j] for j=1:3))
- sum(j^4 * y[2,j] for j=1:3))

@constraint(m, sum(j * y[1,j] for j=1:3) + sum(j * y[2,j] for j=1:3) <= 3 )
@constraint(m, [i=1:2], sum(y[i,j] for j=1:3) <= 1)

print(m)
optimize!(m)
println("Objective value: ", JuMP.objective_value(m))
yVal = JuMP.value.(y)
println("y = ", yVal)
println("x1 (computed) = ", sum(j*yVal[1,j] for j=1:3))
println("x2 (computed) = ", sum(j*yVal[2,j] for j=1:3))

The optimal objective value is Z = 27, y11 = 1, y22 = 1 All other variables are equal to 0. This translates to the
solution x1 = 1 and x2 = 2 for the original problem.


7 Week 7: Modeling with Integer variables II


7.1 Exercise 1
As the leader of an oil-exploration drilling venture, you must determine the least-cost selection of five out
of ten possible sites. Label the sites S1 , S2 , S3 , . . . , S10 and the exploration costs associated with each as
C1 , C2 , . . . , C10 . Regional development restrictions are such that:
• Evaluating sites S1 , and S7 will prevent you from exploring site S8
• Evaluating site S3 or S4 prevents you from assessing site S5

• Of the group S3 , S6 , S7 , and S8 , only two sites may be assessed


Formulate an integer program to determine the minimum-cost exploration scheme that satisfies these restrictions

xj = 1 if site Sj is selected (j = 1, 2, . . . , 10), and 0 otherwise

Minimize: Σ_{j=1}^{10} Cj · xj

s.t. x1 + x7 + x8 ≤ 2
x3 + x5 ≤ 1
x4 + x5 ≤ 1
x3 + x6 + x7 + x8 ≤ 2
Σ_{j=1}^{10} xj = 5

xj ∈ {0, 1} for j = 1, 2, . . . , 10
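A minimal JuMP sketch of this model is given below. The cost vector C is hypothetical, since the exercise leaves the values C1 , . . . , C10 unspecified.

using JuMP, GLPK

m = Model(GLPK.Optimizer)

# hypothetical exploration costs C_1,...,C_10 (the exercise does not specify them)
C = [5.0 4.0 6.0 7.0 3.0 8.0 5.5 6.5 4.5 7.5]

@variable(m, x[1:10], Bin)

@objective(m, Min, sum(C[j]*x[j] for j=1:10))

@constraint(m, x[1]+x[7]+x[8] <= 2)        # S1 and S7 together exclude S8
@constraint(m, x[3]+x[5] <= 1)             # S3 prevents S5
@constraint(m, x[4]+x[5] <= 1)             # S4 prevents S5
@constraint(m, x[3]+x[6]+x[7]+x[8] <= 2)   # at most two of S3, S6, S7, S8
@constraint(m, sum(x[j] for j=1:10) == 5)  # select exactly five sites

optimize!(m)
println("Objective Value: ", objective_value(m))
println("x = ", value.(x))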

7.2 Exercise 2
Suppose that a mathematical model fits linear programming except for the restrictions that

• At least one of the following two inequalities holds

3x1 − x2 − x3 + x4 ≤ 12
x1 + x2 + x3 + x4 ≤ 15

• At least two of the following three inequalities holds

2x1 + 5x2 − x3 + x4 ≤ 30
−x1 + 3x2 + 5x3 + x4 ≤ 40
3x1 − x2 + 3x3 − x4 ≤ 60

Show how to reformulate these restrictions to fit an MIP model.

• At least one of the following two inequalities holds. Introduce one binary variable y which provides the
possibility to activate/deactivate the constraints

3x1 − x2 − x3 + x4 ≤ 12 + M · (1 − y)
x1 + x2 + x3 + x4 ≤ 15 + M · y


• At least two of the following three inequalities holds. Introduce three binary variables y1 , y2 , and y3 which
provide the possibility to deactivate the constraints.

2x1 + 5x2 − x3 + x4 ≤ 30 + M · y1
−x1 + 3x2 + 5x3 + x4 ≤ 40 + M · y2
3x1 − x2 + 3x3 − x4 ≤ 60 + M · y3
y1 + y2 + y3 ≤ 1
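A small JuMP sketch of these reformulated restrictions is shown below. The objective, the variable bounds and the value of M are placeholders (the exercise does not specify them); the bounds are only included so that the chosen M is guaranteed to be large enough.

using JuMP, GLPK

m = Model(GLPK.Optimizer)

M = 1000   # placeholder big-M; large enough given the hypothetical bounds below

@variable(m, 0 <= x[1:4] <= 50)   # hypothetical bounds on the original variables
@variable(m, y, Bin)              # selects which of the first two inequalities must hold
@variable(m, z[1:3], Bin)         # z[i] = 1 relaxes inequality i in the second group

# placeholder objective - the exercise only asks for the constraint reformulation
@objective(m, Max, sum(x[i] for i=1:4))

# at least one of the two inequalities must hold
@constraint(m, 3*x[1] - x[2] - x[3] + x[4] <= 12 + M*(1-y))
@constraint(m,   x[1] + x[2] + x[3] + x[4] <= 15 + M*y)

# at least two of the three inequalities must hold (at most one may be relaxed)
@constraint(m, 2*x[1] + 5*x[2] -   x[3] + x[4] <= 30 + M*z[1])
@constraint(m,  -x[1] + 3*x[2] + 5*x[3] + x[4] <= 40 + M*z[2])
@constraint(m, 3*x[1] -   x[2] + 3*x[3] - x[4] <= 60 + M*z[3])
@constraint(m, sum(z[i] for i=1:3) <= 1)

optimize!(m)
println("Objective Value: ", objective_value(m))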

7.3 Exercise 3
Consider the following integer program:

Maximize Z = x1 + 5x2
x1 + 10x2 ≤ 20
x1 ≤ 2
x1 , x2 ∈ N

1. Use a binary representation of the variables to reformulate this model as a Binary Integer Program
We can deduce from the formulation that the upper bounds of variables x1 and x2 are both 2. This
implies that N equals 1 in both cases. To be able to represent all integer values for both x1 and x2 we
therefore need two binary variables for each. We introduce y0 and y1 for x1 and y2 and y3 for x2 .

x1 = y0 + 2y1
x2 = y2 + 2y3

Maximize: y0 + 2y1 + 5y2 + 10y3


s.t y0 + 2y1 + 10y2 + 20y3 ≤ 20
y0 + 2y1 ≤ 2
y0 + 2y1 ≥ 1
y2 + 2y3 ≥ 1
yi ∈ {0, 1} for i = 0, 1, 2, 3

2. Solve the model using a computer and use the optimal solution to identify an optimal solution to the
original problem

y1 = y2 = 1, with optimal objective value: 7. The solution implies x1 = 2 and x2 = 1

Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

@variable(m, y[1:4], Bin)


@objective(m, Max, y[1]+2*y[2]+5*y[3]+10*y[4])

@constraint(m, y[1]+2*y[2]+10*y[3]+20*y[4]<=20)
@constraint(m, y[1]+2*y[2]<=2)
@constraint(m, y[1]+2*y[2]>=1)
@constraint(m, y[3]+2*y[4]>=1)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("Variable values: ", value.(y))

7.4 Exercise 4
The Fly-Right Airplane Company builds small jet planes to sell to corporations for the use of the executives.
To meet the needs of these executives, the company’s customers sometimes order a custom design of the air
planes being purchased. When this occurs, a substantial start up cost is incurred to initiate production of these
airplanes.

Fly-Right has recently received purchase requests from three customers with short deadlines. However, as the
company’s production facilities already are almost completely tied up filling previous orders, it will not be able
to accept all three orders. Therefore a decision now needs to be made on the number of airplanes the company
will agree to produce (if any) for each of the three customers.

The table below gives the relevant information on each of the three customers. Each customer has a fixed
start-up cost, paid just once irrespective of how many planes are produced for that customer. The marginal
net revenue is received per plane produced. The third row gives the percentage of available capacity needed
for each aircraft. The last row states the number of airplanes ordered by each customer (less will obviously be
accepted).

Customer
1 2 3
Start-up cost $3 million $ 2 million 0
Marginal net revenue $ 2 million $ 3 million $ 0.8 million
Capacity used per plane 20 % 40 % 20%
Maximum order 3 planes 2 planes 5 planes

1. Formulate an integer linear programming model that can be used to maximize Fly-Right’s total profit
(net revenue - start up costs)

yi = 1 if customer i is selected (i = 1, 2), and 0 otherwise

xi = Number of planes produced for customer i = 1, 2, 3


Maximize: 2x1 + 3x2 + 0.8x3 − 3y1 − 2y2


s.t 0.2x1 + 0.4x2 + 0.2x3 ≤ 1.0
x1 ≤ 3 · y1
x2 ≤ 2 · y2
x3 ≤ 5
xi ≥ 0 for i = 1, 2, 3 and integer
yi ∈ {0, 1} for i = 1, 2

2. Solve your model using a computer and state the number of airplanes produced for each customer
The optimal solution is x1 = 0, x2 = 2, x3 = 1, y1 = 0 and y2 = 1, and the optimal objective value is 4.8
million dollars. The solutions states that two airplanes should be produced for Customer 2 and one for
Customer 3.

Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

@variable(m, x[1:3] >= 0, Int)


@variable(m, y[1:2], Bin)

@objective(m, Max, 2*x[1]+3*x[2]+0.8*x[3]-3*y[1]-2*y[2])

@constraint(m, x[1]<=3*y[1])
@constraint(m, x[2]<=2*y[2])
@constraint(m, x[3]<=5)
@constraint(m,0.2*x[1]+0.4*x[2]+0.2*x[3]<=1)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-Variable values: ", value.(x))
println("y-Variable values: ", value.(y))

7.5 Exercise 5
The following map shows six intersections at which automatic traffic monitoring stations might be installed. A
station at any particular node can monitor all the road links meeting that intersection. Numbers next to nodes
reflect the monthly cost (in thousands of dollars) of operating a station at that location.

1. Formulate the problem of providing full coverage at minimum total cost as a set covering problem

xi = 1 if a station is installed at node i (i = 1, 2, . . . , 6), and 0 otherwise


[Figure: road network with six intersections; the monthly operating costs at nodes 1–6 are 40, 65, 43, 48, 72 and 36 thousand dollars, and the road links are the node pairs appearing in the constraints below]

Minimize: 40x1 + 65x2 + 43x3 + 48x4 + 72x5 + 36x6


s.t x1 + x2 ≥ 1
x1 + x4 ≥ 1
x2 + x5 ≥ 1
x4 + x5 ≥ 1
x2 + x3 ≥ 1
x3 + x5 ≥ 1
x3 + x6 ≥ 1
x5 + x6 ≥ 1
xi ∈ {0, 1} for i = 1, 2, 3, 4, 5, 6

2. Find the optimal solution to your model using a computer


The optimal solution is x1 = x3 = x5 = 1, x2 = x4 = x6 = 0, and the optimal objective value is 155
thousand dollars. The solution states that stations should be opened at nodes one, three, and five.

Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

cost = [40 65 43 48 72 36]


struct Arc
from::Int64
to::Int64
end

arcs =[Arc(1,4),Arc(2,5),Arc(4,5),Arc(2,3),Arc(1,2), Arc(3,5), Arc(3,6), Arc(5,6)]

@variable(m, x[1:6], Bin)


@objective(m, Min, sum(cost[i]*x[i] for i=1:6))

for i=1:length(arcs)
arc=arcs[i]
@constraint(m, x[arc.from]+x[arc.to]>=1)
end


optimize!(m)

println("Objective Value: ", objective_value(m))


println("Variable values ", value.(x))

3. Revise your formulation in Part 1) to obtain a binary integer programming that minimizes the number of
uncovered road links while using at most two stations

xi = 1 if a station is installed at node i (i = 1, 2, . . . , 6), and 0 otherwise
yj = 1 if road link j is left uncovered (j = 1, 2, . . . , 8), and 0 otherwise

Minimize: y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8
s.t x1 + x2 + y1 ≥ 1
x1 + x4 + y2 ≥ 1
x2 + x5 + y3 ≥ 1
x4 + x5 + y4 ≥ 1
x2 + x3 + y5 ≥ 1
x3 + x5 + y6 ≥ 1
x3 + x6 + y7 ≥ 1
x5 + x6 + y8 ≥ 1
Σ_{i=1}^{6} xi ≤ 2
xi ∈ {0, 1} for i = 1, 2, 3, 4, 5, 6
yj ∈ {0, 1} for j = 1, 2, 3, 4, 5, 6, 7, 8

4. Find the optimal solution to the revised problem using a computer


The optimal solution is x1 = x5 = 1, x2 = x3 = x4 = x6 = 0, and the optimal objective value is two
uncovered roads. The solution states that stations should be opened at nodes one and five.

Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

cost = [40 65 43 48 72 36]


struct Arc
from::Int64
to::Int64
end

arcs =[Arc(1,4),Arc(2,5),Arc(4,5),Arc(2,3),Arc(1,2),Arc(3,5),Arc(3,6),Arc(5,6)]


@variable(m, x[1:6], Bin)


@variable(m, y[1:8], Bin)

@objective(m, Min, sum(y[i] for i=1:8))

for i=1:length(arcs)
arc=arcs[i]
@constraint(m, x[arc.from]+x[arc.to]+y[i]>=1)
end

@constraint(m, sum(x[i] for i=1:6)<=2)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("Variable values ", value.(x))
println("Variable values ", value.(y))

7.6 Exercise 6
Air Anton is a small commuter airline running six flights per day from New York City to surrounding resort
areas. Flight crews are all based in New York, working flights to various locations and then returning on the next
flight home. Taking into account complex work rules and pay incentives Air Anton schedulers have constructed
the 8 possible work patterns detailed in the following table. Each row of the table indicates the flights that are
covered in a particular pattern (×) and the daily cost per crew (in thousands of dollars).

The company want to choose the minimum total cost collection of work patterns that cover all flights exactly
once.

Flight
Pattern 1 2 3 4 5 6 Cost
1 - × - × - - 1.40
2 × - - - - × 0.96
3 - × - × × - 1.52
4 - × - - × × 1.60
5 × - × - - × 1.32
6 - - × - × - 1.12
7 - - - × - × 0.84
8 × - × × - - 1.54

1. Formulate the problem of providing full coverage at minimum total cost as a set partitioning problem

xi = 1 if pattern i is assigned to a flight crew (i = 1, 2, . . . , 8), and 0 otherwise


Minimize: Σ_{i=1}^{8} ci xi

s.t. Σ_{i=1}^{8} afi xi = 1   ∀f = 1, 2, . . . , 6
xi ∈ {0, 1} for i = 1, 2, . . . , 8,

where ci states the cost of work pattern i and afi is a binary parameter indicating whether or not flight
f = 1, 2, . . . , 6 is contained in work pattern i = 1, 2, . . . , 8.

Written in full this model looks like

Minimize: 1.4x1 + 0.96x2 + 1.52x3 + 1.60x4 + 1.32x5 + 1.12x6 + 0.84x7 + 1.54x8


s.t x2 + x5 + x8 = 1
x1 + x3 + x4 = 1
x5 + x6 + x8 = 1
x1 + x3 + x7 + x8 = 1
x3 + x4 + x6 = 1
x2 + x4 + x5 + x7 = 1
xi ∈ {0, 1} for i = 1, 2, . . . , 8.

2. Find the optimal solution to your model using a computer


The optimal solution is x3 = x5 = 1, x1 = x2 = x4 = x6 = x7 = x8 = 0, and the optimal objective value
is $2840. The solution states that work patterns three and five cover the six flights at minimum cost.

Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

n=8

cost = [1.4 0.96 1.52 1.6 1.32 1.12 0.84 1.54]

A = [0 1 0 0 1 0 0 1;
1 0 1 1 0 0 0 0;
0 0 0 0 1 1 0 1;
1 0 1 0 0 0 1 1;
0 0 1 1 0 1 0 0;
0 1 0 1 1 0 1 0]

@variable(m, x[1:n], Bin)

@objective(m, Min, sum(cost[i]*x[i] for i=1:8))


@constraint(m, [j=1:size(A)[1]], sum(A[j,i]*x[i] for i=1:n) == 1)


optimize!(m)

println("Objective Value: ", objective_value(m))


println("Variable values: ", value.(x))


8 Week 8: Greedy heuristics


8.1 Exercise 1
You are the marketing manager of a large company and are looking at a number of possible advertising campaigns
in order to attract more customers. Six campaigns are possible, and they are detailed in the table below. Each
campaign requires a certain investment (in millions of dollars) and will yield a certain number of new customers
(in the thousands). At most 5 million dollars is available for the campaigns.
Campaign Investment Return
1 Superbowl half-time Adv. 3M 80
2 Radio Adv. Campaign 800K 20
3 Television (Non peak hour) 500K 22
4 City Newspaper 2M 75
5 Viral Marketing Campaign 50K 4
6 Web advertising 600K 10
1. Assuming it is possible to invest in fractions of a campaign, but not more than one of each, formulate a
Linear Program that can be used to maximize the number of new customers and solve it.

xi = Fraction of campaign i = 1, 2, . . . , 6 chosen

Maximize Z = Σ_{i=1}^{6} pi xi

s.t. Σ_{i=1}^{6} wi xi ≤ 5
0 ≤ xi ≤ 1 for i = 1, 2, . . . , 6,

where wi and pi denote the investment required and the return from campaign i, respectively.
Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)
n=6

profit = [80 20 22 75 4 10]


weight = [3 0.8 0.5 2 0.05 0.6]

@variable(m, 0<=x[1:n]<=1)
@objective(m, Max, sum(profit[i]*x[i] for i=1:n))
@constraint(m, sum(weight[i]*x[i] for i=1:n)<=5)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-values = ", value.(x))

The optimal solution is x1 = 0.816667, x3 = x4 = x5 = 1, x2 = x6 = 0, and the optimal objective value is


166.33 thousand customers. The solution states that campaigns three, four, and five will be selected
completely, while campaign one will also be chosen, but with value 0.816667.


2. Solve the problem again, this time using a greedy algorithm in which at each iteration you increase, as
much as possible, the value of the variable xi that maximizes the ratio pi /wi , where pi denotes the profit of
investment campaign i and wi refers to the investment required for campaign i. What do you observe?
Can you prove that greedy is an optimal strategy in this case?
Ordering the campaigns in decreasing order of this ratio yields

Campaigns    5     3     4      1       2     6
p/w          80    44    37.5   26.67   25    16.67

Table 3: Ratio Order

(a) Campaign 5 has the largest ratio, investment required is less than available budget, therefore we first
set x5 = 1, remaining budget = 5.0-0.05 = 4.95
(b) Campaign 3 has the 2nd best ratio, investment required is less than available budget, therefore we
then set x3 = 1, remaining budget = 4.95-0.50 = 4.45
(c) Campaign 4 has the 3rd best ratio, investment required is less than available budget, therefore we
then set x4 = 1, remaining budget = 4.45-2.0 = 2.45
(d) Campaign 1 has the 4th best ratio, the investment required exceeds our available budget, so we select as
much of x1 as we can, i.e. x1 = 2.45/3 = 0.816667

We have exhausted the available budget, this solution is identical to that found in Part 1).
Proof
Assume we are given a set I = {1, 2, . . . , n} of items. Suppose further that item j gives the best return for
the amount invested (i.e. pj /wj ≥ pi /wi for all i ∈ I with i ≠ j). Let us assume that the optimal solution does not
contain item j. Then, there must exist an item k such that pk /wk < pj /wj . If we replace an ε amount of item
k with an ε amount of item j, the objective function improves by ε · (pj /wj − pk /wk ). Therefore, the optimal
solution must contain item j at its greatest value. This is precisely what the greedy algorithm ensures.
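A small Julia sketch of this ratio-greedy procedure for the fractional problem is given below; greedy_fractional is an illustrative helper (not part of the course code), and the data are taken from the table in part 1. It reproduces the solution found above.

# greedy for the fractional knapsack: fill items in decreasing profit/weight order
function greedy_fractional(profit, weight, budget)
    order = sortperm(profit ./ weight, rev=true)   # indices sorted by decreasing ratio
    x = zeros(length(profit))
    remaining = budget
    for i in order
        remaining <= 0 && break
        take = min(1.0, remaining / weight[i])     # take as much of item i as fits
        x[i] = take
        remaining -= take * weight[i]
    end
    return x
end

profit = [80.0, 20.0, 22.0, 75.0, 4.0, 10.0]
weight = [3.0, 0.8, 0.5, 2.0, 0.05, 0.6]
x = greedy_fractional(profit, weight, 5.0)
println("x = ", x)
println("objective = ", sum(profit .* x))   # approximately 166.33, as in part 1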
3. Assume now that you must invest in a campaign in its entirety or not at all. Update your model from
part 1) to reflect this.

xi = 1 if campaign i (i = 1, 2, . . . , 6) is chosen, and 0 otherwise

Maximize: Z = Σ_{i=1}^{6} pi xi

s.t. Σ_{i=1}^{6} wi xi ≤ 5
xi ∈ {0, 1} for i = 1, 2, . . . , 6,

where wi and pi denote the investment required and the return from campaign i, respectively.
4. Solve your model with a computer.
Julia Code:


using JuMP
using GLPK

m = Model(GLPK.Optimizer)
n=6

profit = [80 20 22 75 4 10]


weight = [3 0.8 0.5 2 0.05 0.6]

@variable(m, x[1:n], Bin)


@objective(m, Max, sum(profit[i]*x[i] for i=1:n))
@constraint(m, sum(weight[i]*x[i] for i=1:n)<=5)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-values = ", value.(x))

The optimal solution is x1 = x4 = 1, x2 = x3 = x5 = x6 = 0, and the optimal objective value is 155
thousand customers. The solution states that only campaigns one and four are chosen.
5. Solve this binary integer program again using the same greedy algorithm above. What do you observe?

(a) Campaign 5 has the largest ratio, investment required is less than available budget, therefore we first
set x5 = 1, remaining budget = 5.0 − 0.05 = 4.95
(b) Campaign 3 has the 2nd best ratio, investment required is less than available budget, therefore we
then set x3 = 1, remaining budget = 4.95 − 0.50 = 4.45
(c) Campaign 4 has the 3rd best ratio, investment required is less than available budget, therefore we
then set x4 = 1, remaining budget = 4.45 − 2.0 = 2.45
(d) Campaign 1 has the 4th best ratio, the investment required exceeds our available budget, set x1 = 0,
remaining budget = 2.45
(e) Campaign 2 has the 5th best ratio; investment required is less than available budget, therefore we
then set x2 = 1, remaining budget = 2.45 − 0.8 = 1.65
(f) Campaign 6 has the worst ratio; investment required is less than available budget, therefore we then
set x6 = 1, remaining budget = 1.65 − 0.6 = 1.05

This solution states that we should run all campaigns except the first, and yields 131 thousand new
customers. The greedy approach failed to find the optimal solution.
6. You have been given a revised analysis of the marketing campaigns with the following information

Campaign Investment Return


Superbowl half-time Adv. 1M 80
Radio Adv. Campaign 1.8M 20
Television (Non peak hour) 1.5M 22
City Newspaper 1.1M 75
Viral Marketing Campaign 2.2M 4
Web advertising 2M 10

Resolve your binary integer program with the computer and using your greedy algorithm. What do you
observe?
Re-ordering the campaigns in decreasing order of the p/w ratio yields the table below.
Applying the greedy algorithm then gives:


Campaigns    1     4       3       2       6     5
p/w          80    68.18   14.67   11.11   5.0   1.818

Table 4: Ratio Order

(a) Campaign 1 has the largest ratio, investment required is less than available budget, therefore we first
set x1 = 1, remaining budget = 5.0 − 1.0 = 4.0
(b) Campaign 4 has the 2nd best ratio, investment required is less than available budget, therefore we
then set x4 = 1, remaining budget = 4.0 − 1.1 = 2.9
(c) Campaign 3 has the 3rd best ratio, investment required is less than available budget, therefore we
then set x3 = 1, remaining budget = 2.9 − 1.5 = 1.4
(d) All other campaigns now require an investment that exceeds the available budget, therefore we set
x2 = x6 = x5 = 0.

This solution states that we should run investment campaigns one, three, and four, giving 177 thousand
new customers.
Solving the mathematical model from Part 4 above with the updated parameters yields the same solution.
Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)
n=6

profit = [80 20 22 75 4 10]


weight = [1 1.8 1.5 1.1 2.2 2]

@variable(m, x[1:n], Bin)


@objective(m, Max, sum(profit[i]*x[i] for i=1:n))
@constraint(m, sum(weight[i]*x[i] for i=1:n)<=5)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-values = ", value.(x))

7. You suspect the greedy approach might be optimal for 0-1 knapsack problems with this structure. Identify
the structure, and then prove that the greedy approach will always yield the optimal solution.
Ordering the items by decreasing return yields the same sequence as ordering the items by increasing
investment cost.
Proof
Similar exchange argument to Part 4.


8.2 Exercise 2
Consider the following binary integer program (notice it is a set covering problem).

Minimize Z = 15x1 + 18x2 +6x3 + 20x4


s.t. x1 + x4 ≥ 1
x1 + x2 + x4 ≥ 1
x2 + x3 + x4 ≥ 1
xi ∈ {0, 1} for i = 1, 2, 3, 4
You would like to solve this problem using a greedy algorithm.
1. Explain why it seems reasonable to choose a free xj to fix at the value one by picking the variable with
the least ratio

rj = (cost coefficient of variable j) / (number of uncovered constraints that variable j covers)
The proposed ratio explicitly seeks minimum cost by including the objective coefficient in its numerator.
At the same time it considers feasibility by dividing by the number of still uncovered rows that each free
variable j could resolve. The effect is to seek the most efficient next choice of xj to fix at 1, i.e. the best
choice in the myopic sense.
2. Determine the solution obtained by the greedy algorithm
Initial ratios are given in Table 5.

Variables
1 2 3 4
r 7.50 9.00 6.00 6.67

Table 5: Initial Ratios

(a) Variable three has the best ratio, set x3 = 1, updated ratios: r1 = 7.5, r2 = 18, r4 = 10
(b) Variable one has the best ratio, set x1 = 1, all rows now covered. Stop. Solution has a value of 21.
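A small Julia sketch of this ratio-based greedy on the instance above is shown below; greedy_cover is an illustrative helper (not part of the course code), and the cover matrix encodes which of the three constraints each variable appears in.

# greedy set covering: repeatedly fix the free variable with the smallest
# ratio  cost / (number of still uncovered constraints it covers)
cost  = [15.0, 18.0, 6.0, 20.0]
cover = Bool[1 0 0 1;           # cover[r, j] = 1 if variable j appears in constraint r
             1 1 0 1;
             0 1 1 1]

function greedy_cover(cost, cover)
    nrows, ncols = size(cover)
    uncovered = trues(nrows)
    x = zeros(Int, ncols)
    while any(uncovered)
        best, bestratio = 0, Inf
        for j in 1:ncols
            x[j] == 1 && continue
            covers = count(cover[:, j] .& uncovered)   # uncovered rows that j would cover
            covers == 0 && continue
            if cost[j] / covers < bestratio
                best, bestratio = j, cost[j] / covers
            end
        end
        x[best] = 1
        uncovered .&= .!cover[:, best]
    end
    return x
end

x = greedy_cover(cost, cover)
println("x = ", x, "  cost = ", sum(cost .* x))   # x = [1, 0, 1, 0] with cost 21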
3. Solve the problem using a computer and comment on the result
By inspection we can see that the optimal solution has a value of 20, with x4 = 1.
Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

@variable(m, x[1:4], Bin)


@objective(m, Min, 15*x[1]+18*x[2]+6*x[3]+20*x[4])
@constraint(m, x[1]+x[4]>=1)
@constraint(m, x[1]+x[2]+x[4]>=1)
@constraint(m, x[2]+x[3]+x[4]>=1)

optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-values = ", value.(x))


8.3 Exercise 3
Suppose that you are responsible for scheduling times for lectures in a university. You want to make sure that
any two lectures with a common student occur at different times to avoid a conflict. We could put the various
lectures on a chart and mark with an × any pair that has students in common:

Lecture       A C G H I L M P S
(A)stronomy     × × ×     ×
(C)hemistry   ×               ×
(G)reek       ×     ×   × × ×
(H)istory     ×   ×     ×     ×
(I)talian               × ×   ×
(L)atin           × × ×   × × ×
(M)usic       ×   ×   × ×
(P)hilosophy      ×     ×
(S)panish       ×   × × ×

A more convenient representation of this information is a graph with one vertex for each lecture and in which
two vertices are joined if there is a conflict between them

[Figure: conflict graph with one vertex per lecture; two vertices are joined when the corresponding lectures share a student]

Now, we cannot schedule two lectures at the same time if there is a conflict, but we would like to use as few
separate times as possible, subject to this constraint. How many different times are necessary? We can code
each time with a colour, for example 11:00-12:00 might be given the colour green, and those lectures that meet
at this time will be coloured green. The no-conflict rule then means that we need to colour the vertices of our
graph in such away that no two adjacent vertices (representing courses which conflict with each other) have the
same colour.

One can apply the following greedy algorithm to colour the graph

1. colour a vertex with colour 1


2. Pick an uncoloured vertex v. Colour it with the lowest-numbered colour that has not been used on any
previously coloured vertex adjacent to v. If all previously-used colours appear on vertices adjacent to v,
then we must introduce a new colour and number it.
3. Repeat the previous step until all vertices are coloured.
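A hedged Julia sketch of this greedy colouring procedure is given below; greedy_colouring is an illustrative helper (not part of the course code). The edge list is the conflict graph above, using the node numbering S=1, H=2, P=3, C=4, A=5, G=6, L=7, I=8, M=9 that is also used later in part 3.

# greedy graph colouring: colour vertices in the given order with the lowest
# feasible colour number
edges = [(1,2),(1,4),(1,7),(1,8),(2,5),(2,6),(2,7),(3,6),(3,7),
         (4,5),(5,6),(5,9),(6,7),(6,9),(7,8),(7,9),(8,9)]

function greedy_colouring(n, edges, order)
    neighbours = [Int[] for _ in 1:n]
    for (u, v) in edges
        push!(neighbours[u], v)
        push!(neighbours[v], u)
    end
    colour = zeros(Int, n)                 # 0 means "not coloured yet"
    for v in order
        used = Set(colour[w] for w in neighbours[v] if colour[w] != 0)
        c = 1
        while c in used                    # lowest-numbered colour not used by a neighbour
            c += 1
        end
        colour[v] = c
    end
    return colour
end

# order G, L, H, P, M, A, I, S, C from part 1; uses four colours
println(greedy_colouring(9, edges, [6, 7, 2, 3, 9, 5, 8, 1, 4]))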


Now, answer the following questions:

1. Using the set of colours {Green=1, Red=2, Blue=3, Yellow=4, and Cyan = 5}, colour the vertices in the
order G, L, H, P, M, A, I, S, C using the greedy algorithm above. How many colours do you need?
Four colours are needed, see the graph

[Figure 4: Question one colouring with four colours]

2. Using the set of colours {Green=1, Red=2, Blue=3, Yellow=4, and Cyan = 5}, colour the vertices in
the order A, I, P, M, S, C, H, L,G using the greedy algorithm above. How many colours do you need?
Comment on the results to parts 1) and 2).
Five colours are needed, see the graph

[Figure 5: Question two colouring with five colours]

The greedy approach is clearly not optimal. It provides solutions of different quality depending on which
node is used to initialize the algorithm.
3. Formulate the graph colouring problem as an integer programming problem
We number the nodes as follows S=1, H=2, P=3, C=4, A=5, G=6, L=7, I=8, and M=9.

xij = 1 if node i receives colour j (i = 1, 2, . . . , 9; j = 1, 2, 3, 4), and 0 otherwise
yj = 1 if colour j is used (j = 1, 2, 3, 4), and 0 otherwise


Minimize: Σ_{j=1}^{4} yj

s.t. Σ_{j=1}^{4} xij = 1          for i = 1, 2, . . . , 9
Σ_{i=1}^{9} xij ≤ 9 · yj          for j = 1, 2, 3, 4
xij + xi′j ≤ 1                    for all edges (i, i′), j = 1, 2, 3, 4
xij ∈ {0, 1}                      i = 1, 2, . . . , 9, j = 1, 2, 3, 4
yj ∈ {0, 1}                       j = 1, 2, 3, 4

4. Solve your model using a computer


Four colours are needed in the optimal solution.
The optimal solution is x14 = x21 = x31 = x41 = x53 = x62 = x73 = x81 = x94 = 1 and y1 = y2 = y3 =
y4 = 1. All other variables have a value of zero. The found colouring is visualized below.
Julia Code:

using JuMP
using GLPK

m = Model(GLPK.Optimizer)

#number of colors
nc=4;
#number of vertices
nv=9

@variable(m, x[1:nv, 1:nc], Bin)


@variable(m, y[1:nc], Bin)

struct Edge
node1::Int32
node2::Int32
end

Edges = [Edge(1,2), Edge(1,7), Edge(1,8), Edge(1,4), Edge(2,5), Edge(2,6), Edge(2,7),


Edge(3,6), Edge(3,7), Edge(4,5), Edge(5,6), Edge(5,9), Edge(6,7), Edge(6,9),
Edge(7,8), Edge(7,9), Edge(8,9)]

@objective(m, Min, sum(y[i] for i=1:nc))


@constraint(m, [i=1:nv], sum(x[i,j] for j=1:nc)==1)
@constraint(m, [j=1:nc], sum(x[i,j] for i=1:nv)<=nv*y[j])

for i=1:length(Edges)
for j=1:nc
edge = Edges[i]
@constraint(m, x[edge.node1,j]+x[edge.node2,j]<=1)
end
end


optimize!(m)

println("Objective Value: ", objective_value(m))


println("x-values = ", value.(x))
println("y-values = ", value.(y))

[Figure 6: Colouring obtained by solving the mathematical model]


9 Week 9: Improvement heuristics


9.1 Exercise 1
Recall the marketing campaign problem from Week 8, Part 4), in which you had to decide how many advertising
campaigns (from a set of six possible) you would run, given a maximum budget of 5 million dollars, in order to
maximize the number of new customers. Each campaign required a certain investment (in millions of dollars)
and would yield a certain number of new customers (in the thousands). You were given the following data.

Campaign Investment Return


Superbowl half-time Adv. 3 80
Radio Adv. Campaign 0.80 20
Television (Non peak hour) 0.50 22
City Newspaper 2 75
Viral Marketing Campaign 0.05 4
Web advertising 0.6 10

You have applied a greedy heuristic and obtained a solution in which you will invest in all campaigns except the
Superbowl half-time advertisement. This strategy would lead to 131,000 new customers. Based on the model
formulation given in the answers to the exercises for Week 8, Q1.1, the solution is x1 = 0 and x2 , x3 , x4 , x5 , x6 = 1

1. You decide to try and improve this solution by considering the neighbourhood of solutions obtained by
removing one used campaign and inserting an unused campaign. How many feasible neighbouring solutions
are there?
One way is to note that the sum of all weights in the current solution is 3.95. Item 1 is the most expensive
one so the difference cannot be more than 1.05 otherwise we will go over the budget. In that case only
item 4 is a feasible change in the neighbourhood.

(a) x1 = 1, x2 = 0 ⇒ infeasible
(b) x1 = 1, x3 = 0 ⇒ infeasible
(c) x1 = 1, x4 = 0 ⇒ feasible
(d) x1 = 1, x5 = 0 ⇒ infeasible
(e) x1 = 1, x6 = 0 ⇒ infeasible

There is only one feasible solution. We can swap the values of x1 and x4 .
2. Given this neighbourhood, is there an improving move?
Swapping the values of x1 and x4 leads to an improvement of 5,000 customers.
3. Update your solution. Is it locally optimal? Explain your answer.
The new solution is x4 = 0 and x1 , x2 , x3 , x5 , x6 = 1. This is locally optimal given the specified
neighbourhood. No neighbouring solution has a better objective.
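A small Julia sketch that enumerates this swap neighbourhood (remove one used campaign, insert the unused one) and checks feasibility and objective value is given below; the data are from the table above.

# enumerate the "swap campaign 1 in, one used campaign out" neighbourhood
profit = [80.0, 20.0, 22.0, 75.0, 4.0, 10.0]
weight = [3.0, 0.8, 0.5, 2.0, 0.05, 0.6]
budget = 5.0
x = [0, 1, 1, 1, 1, 1]                       # greedy solution from Week 8

for out in findall(==(1), x), inc in findall(==(0), x)
    y = copy(x); y[out] = 0; y[inc] = 1      # candidate neighbouring solution
    w = sum(weight .* y)
    if w <= budget
        println("swap out ", out, ", in ", inc, ": objective = ", sum(profit .* y))
    end
end
# only the swap removing campaign 4 is feasible; it gives objective 136 (an improvement of 5)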

9.2 Exercise 2
Consider the Travelling Salesman Problem shown below, where City 1 is considered to be the home city.


[Figure: TSP instance with five cities; the pairwise distances are those of the cost matrix used in the Julia code in part 5]

1. How many possible tours are there (excluding tours that are simply the reverse of others)?
For a symmetric TSP with n cities and a specified starting city there are (n − 1)!/2 tours. Therefore there are 4!/2 = 12.
Notice that the formula devised during the lecture said n · (n − 1)!/2. Here the n is because the starting city is
not fixed, and it can be chosen in n different ways. In this question the starting city is fixed to city 1 and
therefore the n is removed from the formula.
2. Starting at City 1, apply the nearest neighbour heuristic to obtain a feasible solution. What is its objective
value?

The tour will start in city 1. From city 1 we search for the closest city, which is city 3 with a distance of
only 4. Then from city 3 we find the city closest to city 3, although it cannot be city 1 as that
would end the tour before it contains all the cities. In this way we move forward, in each iteration adding
a city. When there are no cities left that have not been visited, the tour returns to city 1 and is complete.
The resulting tour is 1 − 3 − 4 − 2 − 5 − 1. This has an objective value of 28.

[Figure 7: Nearest neighbour solution]
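A hedged Julia sketch of this nearest neighbour construction is given below; nearest_neighbour is an illustrative helper (not part of the course code), and the cost matrix is the same one used in the JuMP model in part 5.

# nearest neighbour heuristic for the 5-city TSP, starting from city 1
cost = [0 8 4 8 11; 8 0 5 6 3; 4 5 0 4 7; 8 6 4 0 7; 11 3 7 7 0]

function nearest_neighbour(cost, start)
    n = size(cost, 1)
    tour = [start]
    visited = falses(n); visited[start] = true
    while length(tour) < n
        last = tour[end]
        # pick the closest not-yet-visited city
        next = argmin([visited[j] ? typemax(Int) : cost[last, j] for j in 1:n])
        push!(tour, next); visited[next] = true
    end
    len = sum(cost[tour[i], tour[i+1]] for i in 1:n-1) + cost[tour[end], start]
    return tour, len
end

println(nearest_neighbour(cost, 1))   # ([1, 3, 4, 2, 5], 28), as found above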

3. Consider all possible 2-opt edge swaps. How many lead to different feasible tours?
Let us first get an idea of how many there can be. In a 2-opt edge swap two edges are removed from the
tour and two new ones are added. The two new ones that are added are uniquely determined from the
ones taken out. So if I take out (1, 3) and (2, 4) then it has to be (2, 3) and (1, 4) that needs to be inserted.
In this case (2, 3) and (1, 4) does actually exist, as the graph is complete.


In total there are n different edges to choose among for picking the first edge. Then afterwards I can pick
one of the remaining but not a neighbouring edge to the edge chosen as the first one. That means we have
n − 3 possible edges to choose from. For our instance we therefore get 5 options for choosing the first one
and 2 for choosing the second. In this calculation each swap will appear twice so we have to divide by
two to get unique swaps.
This problem is small so we can write up all the potential swaps just to see that the formula actually
works. The table below contains the 10 alternatives.

First edge Second edge possibilities


(1, 3) (2, 4) (2, 5)
(3, 4) (1, 5) (2, 5)
(2, 4) (1, 3) (1, 5)
(2, 5) (1, 3) (3, 4)
(1, 5) (2, 4) (3, 4)

As you can see from the table each swap appears twice, e.g. the swap based on edges (1, 5) and (2, 4) also
appears as (2, 4) and (1, 5). Therefore there are five edge pair swaps that lead to feasible tours.
4. Given the solution 1 − 3 − 4 − 2 − 5 − 1, perform a 2-opt swap on edges (2, 4) and (1, 5). What is the new
tour, and what is its objective value?

The resulting tour is 1 − 3 − 4 − 5 − 2 − 1. This has an objective of 26.

[Figure 8: Updated solution]
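A small Julia sketch of how such a 2-opt move can be carried out on the tour representation is given below; tourlength and two_opt are illustrative helpers (not part of the course code). Removing edges (2, 4) and (1, 5) from the tour 1 − 3 − 4 − 2 − 5 − 1 corresponds to reversing the segment between them.

# 2-opt move: reverse the tour segment between positions i+1 and j, which removes
# edges (t[i],t[i+1]) and (t[j],t[j+1]) and reconnects the tour
cost = [0 8 4 8 11; 8 0 5 6 3; 4 5 0 4 7; 8 6 4 0 7; 11 3 7 7 0]

tourlength(t) = sum(cost[t[k], t[k % length(t) + 1]] for k in 1:length(t))

two_opt(t, i, j) = vcat(t[1:i], reverse(t[i+1:j]), t[j+1:end])

t = [1, 3, 4, 2, 5]                   # tour 1-3-4-2-5-1 from part 2
println(tourlength(t))                # 28
t2 = two_opt(t, 3, 5)                 # removes edges (4,2) and (5,1)
println(t2, " ", tourlength(t2))      # [1, 3, 4, 5, 2] with length 26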

5. Find the optimal solution using Julia. You might like to make use of the Julia code given in Lecture 9.

The optimal solution is x12 = x25 = x31 = x43 = x54 = 1 and has an objective value of 26.
Julia Code
using JuMP
using GLPK
m = Model(GLPK.Optimizer)
n=5
cost = [0 8 4 8 11; 8 0 5 6 3; 4 5 0 4 7; 8 6 4 0 7; 11 3 7 7 0]

#variable definition
#declare two sets of variables - the arc selection variables x_ij come first,


#followed by the position variables u_j. The position variables simply state
#the visit number of each node.
@variable(m, x[1:n,1:n], Bin)
@variable(m, u[1:n])
#objective function
@objective(m, Min, sum(cost[i, j] * x[i,j] for i = 1:n for j = 1:n))
# constraints - flow out of node
@constraint(m, [i=1:n], sum(x[i,j] for j=1:n) == 1)
# constraints - flow into node
@constraint(m, [j=1:n], sum(x[i,j] for i=1:n) == 1)
# constraints - node order
for i=1:n, j=1:n
if i != 1 && j!=1
@constraint(m, u[i]-u[j]+(n-1)*x[i,j] <= (n-2))
end
end
# constraints - bounds on u
@constraint(m, u[1]==1)
@constraint(m, [i=2:n], u[i]>=2)
@constraint(m, [i=2:n], u[i]<=5)
optimize!(m)
println("Objective Value: ", JuMP.objective_value(m))
for i=1:n, j=1:n
if JuMP.value(x[i,j]) > 1-1e-6
println("Edge ", i, "-", j, " ", JuMP.value(x[i,j]))
end
end

9.3 Exercise 3
Consider the Traveling Salesman Problem shown below, where City 1 is the home city.

[Figure: TSP instance with six cities; the graph is not complete, and the distances correspond to the non-penalized entries of the cost matrix used in the Julia code in part 4]

1. Starting at City 1, apply the nearest neighbour heuristic to obtain a feasible solution. What is its objective
value?

The resulting solution is 1 − 5 − 2 − 4 − 3 − 6 − 1. This has an objective value of 43. The solution is shown
in Figure 9.
2. Depending on where you start, why might the nearest neighbour heuristic fail to generate a feasible solu-
tion in this case?


[Figure 9: Nearest neighbour solution]

The graph is not complete. It is not possible to reach certain cities from others. For example, try running
the nearest neighbour heuristic from City 2.
3. Consider a 3-opt exchange on edges (2, 4), (1, 6) and (3, 6). Based on your solution from question 1 in this
exercise, how many feasible new tours does this generate, and what are their respective objective values?
There is only one feasible new tour, and this has an objective of 40.

[Figure 10: Edges removed]

[Figure 11: The only other possibility]

4. Find the optimal solution using Julia. You might like to make use of the Julia code given in Lecture 9.


(Hint: introduce highly penalized edges to generate a complete graph). How does this compare to your
best found tour?
The optimal solution is x15 = x23 = x34 = x46 = x52 = x61 = 1. This has an objective value of 40, and
corresponds to the tour 1 − 5 − 2 − 3 − 4 − 6 − 1. This is the same as the tour found by the 3-opt exchange.

[Figure 12: The optimal solution]

Julia Code
using JuMP
using GLPK

m = Model(GLPK.Optimizer)
n=6
cost = [0 11 50 50 6 9; 11 0 10 7 4 10; 50 10 0 5 50 12;
50 7 5 0 6 6; 6 4 50 6 0 5; 9 10 12 6 5 0]
#variable definition
@variable(m, x[1:n,1:n], Bin)
@variable(m, u[1:n])
#objective function
@objective(m, Min, sum(cost[i, j] * x[i,j] for i = 1:n for j = 1:n))
# constraints - flow out of node
@constraint(m, [i=1:n], sum(x[i,j] for j=1:n) == 1)
# constraints - flow into node
@constraint(m, [j=1:n], sum(x[i,j] for i=1:n) == 1)
# constraints - node order
for i=1:n, j=1:n
if i != 1 && j!=1
@constraint(m, u[i]-u[j]+(n-1)*x[i,j] <= (n-2))
end
end
# constraints - bounds on u
@constraint(m, u[1]==1)
@constraint(m, [i=2:n], u[i]>=2)
@constraint(m, [i=2:n], u[i]<=6)
optimize!(m)
println("Objective Value: ", JuMP.objective_value(m))
for i=1:n, j=1:n
if JuMP.value(x[i,j]) > 1-1e-6


println("Edge ", i, "-", j, " ", JuMP.value(x[i,j]))


end
end


10 Week 10, Totally Unimodular Matrices


10.1 Exercise 1
Consider the integer program:

Min Z = 2.5x1 + 7.9x2


s.t. x1 ≤ 5 (1)
x2 ≤ 4 (2)
x2 ≥ 2 (3)
x1 + x2 ≥ 6 (4)
x1 , x2 integer (5)
(a) Does the feasible region have integer corner points? Hint: draw it on a graph.
Yes it does

[Figure: the feasible region drawn on a graph; all corner points are integer]

(b) Is the constraint coefficient matrix totally unimodular? Hint: recall that for a 2 × 2 matrix
[a b; c d], the determinant is ad − bc.

The constraint matrix is

[ 1 0 ]
[ 0 1 ]
[ 0 1 ]
[ 1 1 ]

and all the 2 × 2 square submatrices and their determinants are:

[1 0; 0 1] −→ det = 1 · 1 − 0 · 0 = 1    (rows 1 and 2)
[1 0; 0 1] −→ det = 1 · 1 − 0 · 0 = 1    (rows 1 and 3)
[1 0; 1 1] −→ det = 1 · 1 − 0 · 1 = 1    (rows 1 and 4)
[0 1; 0 1] −→ det = 0 · 1 − 1 · 0 = 0    (rows 2 and 3)
[0 1; 1 1] −→ det = 0 · 1 − 1 · 1 = −1   (rows 2 and 4)
[0 1; 1 1] −→ det = 0 · 1 − 1 · 1 = −1   (rows 3 and 4)

They are all either 0, 1 or −1, so the matrix is totally unimodular.
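A brute-force Julia sketch that checks total unimodularity by enumerating every square submatrix and its determinant is shown below (is_totally_unimodular is an illustrative helper, and the approach is only practical for small matrices like this one).

using LinearAlgebra

# brute-force check: every square submatrix must have determinant -1, 0 or 1
function is_totally_unimodular(A)
    m, n = size(A)
    for rmask in 1:(2^m - 1), cmask in 1:(2^n - 1)
        rows = [i for i in 1:m if (rmask >> (i-1)) & 1 == 1]
        cols = [j for j in 1:n if (cmask >> (j-1)) & 1 == 1]
        length(rows) == length(cols) || continue     # only square submatrices
        d = round(det(A[rows, cols]))
        abs(d) > 1 && return false
    end
    return true
end

println(is_totally_unimodular([1 0; 0 1; 0 1; 1 1]))   # true  (matrix from part b)
println(is_totally_unimodular([1 0; 0 1; 0 1; 2 1]))   # false (matrix from part d)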


(c) Repeat questions (a) and (b), but with the right-hand-side of constraint (2) changed to 4.5.

[Figure: feasible region with the right-hand side of constraint (2) changed to 4.5]

Now there are some corner points that are not integer, but nothing changed in the coefficient matrix, so it
is still totally unimodular. This demonstrates that total unimodularity alone is not enough: an integer
right-hand side is also needed to guarantee integral extreme points.
(d) Repeat questions (a) and (b), but with the coefficient of x1 in constraint (4) changed to 2.

[Figure: feasible region with the coefficient of x1 in constraint (4) changed to 2; all corner points are still integer]

All the extreme points are still integer with this change. However, the constraint matrix is now

[ 1 0 ]
[ 0 1 ]
[ 0 1 ]
[ 2 1 ]

and the square submatrix

[ 0 1 ]
[ 2 1 ]

has a determinant of −2, so it is not totally unimodular. This shows that total unimodularity is only a
sufficient condition, and not at all necessary for the feasible region to have integral extreme points.
(e) What if the objective function coefficients are changed?
The definition of total unimodularity is completely independent of the objective function coefficients, so
it does not matter if they change, or even if they are not integer.


10.2 Exercise 2
Assume we have an assignment problem with three workers and three tasks. Each worker has to be assigned
to exactly one task, and each task needs to be done by exactly one worker (any worker can be assigned to any
task). Use the binary variable xij to denote whether or not worker i is assigned to task j, and write down the
constraints. The constraints are
x11 + x12 + x13 =1
x21 + x22 + x23 =1
x31 + x32 + x33 =1
x11 + x21 + x31 =1
x12 + x22 + x32 =1
x13 + x23 + x33 =1

(a) Is the constraint coefficient matrix totally unimodular? Hint: use the "sufficient condition" propo-
sition from the slides.
The constraint coefficient matrix is

[ 1 1 1 0 0 0 0 0 0 ]
[ 0 0 0 1 1 1 0 0 0 ]   M1 (workers)
[ 0 0 0 0 0 0 1 1 1 ]

[ 1 0 0 1 0 0 1 0 0 ]
[ 0 1 0 0 1 0 0 1 0 ]   M2 (tasks)
[ 0 0 1 0 0 1 0 0 1 ]

and the rows have been partitioned into two sets. It is easy to see for each column j that

Σ_{i∈M1} aij − Σ_{i∈M2} aij = 0.

(b) Would the optimal solution to the LP relaxation be integral?


Yes, the A matrix is totally unimodular. This implies that the extreme points of the feasible region are
integral (our right hand side values are all 1).

(c) Repeat questions (a) and (b), but now assume that some workers can do multiple tasks and some tasks
require multiple workers.
This only changes the right-hand side, and not the coefficients. So the matrix stays totally unimodular.
Once again, as long as the right-hand side values are integral, the total unimodularity of the matrix
implies that the extreme points of the polyhedron are integral, and thus the LP relaxation solution will
be integral.

10.3 Exercise 3
Are the following matrices totally unimodular or not:

1. A1 = [ 1 0 1 0 1 ]
        [ 0 1 1 1 0 ]
        [ 0 0 0 1 1 ]
        [ 1 1 0 0 0 ]

No. A1 contains the following square submatrix (rows 1–3, columns 3–5), which has a determinant of 2:

[ 1 0 1 ]
[ 1 1 0 ]
[ 0 1 1 ]


2. A2 = [ −1 0 1 0 −1  0 0 ]
        [  0 1 0 1  1  0 1 ]
        [ −1 1 0 0  0  0 0 ]
        [  0 0 1 0  0  1 1 ]
        [  0 0 0 1  0 −1 0 ]

Yes, we can partition the rows into two sets M1 and M2 , where the first two rows belong to set M1 and
the last three to M2 .


11 Week 11, Branch & Bound


11.1 Exercise 1
The correct answers are:

• True. For linear programming problems we have a number of different methods. In this course we only
describe and work with the simplex method. But all the methods have a practical performance that lets
us solve huge problems (tens of thousands of variables and thousands of constraints) relatively fast (seconds
or minutes). For integer programming our Branch-and-Bound approach in the worst case equals an
exhaustive search, and even when it doesn't, we might end up with huge branching trees where we in each
of the nodes have to solve an LP problem.
• True. As a rule of thumb, the more variables you have, the more possibilities there are for branching and
therefore your branching tree will potentially become larger. On the other hand, the more constraints
you have the more constrained the problem becomes and therefore it is potentially easier to solve the
problem.
• False. You can come up with an integer programming problem where all "integer cornerpoints" of the
optimal LP relaxation are not feasible to the original problem. Example 12.1 in Section 12.5 shows an
example where two out of four cornerpoints are infeasible. If the grey triangle (defining the feasible
area) had been more "pointy" then we could have a situation where four out of four cornerpoints
are infeasible (from an integer perspective).
An explanation of each of these points can be found in the first three pages of Section 12.5.

11.2 Exercise 2
The Binary Integer Program is formulated as:
max Z = 2x1 − x2 + 5x3 − 3x4 + 4x5
s.t
3x1 − 2x2 + 7x3 − 5x4 + 4x5 ≤ 6
x1 − x2 + 2x3 − 4x4 + 2x5 ≤ 0
x1 , x2 , x3 , x4 , x5 ∈ {0, 1}
By solving the LP relaxation with Julia we find the solution
 
(x1 , x2 , x3 , x4 , x5 ) = (2/3, 1, 1, 1, 1)
with the objective value zLP = 6.333. Current best BIP solution z ∗ = −∞.

Branch x1 = 0: We add the constraint x1 = 0 and solve the LP relaxation with Julia resulting in the solution
(x1 , x2 , x3 , x4 , x5 ) = (0, 0, 1, 1, 1)
with the objective value zLP = 6. Since it is a integer solution we update the current best BIP solution to
z ∗ = 6.

Branch x1 = 1: We add the constraint x1 = 1 and solve the LP relaxation with Julia resulting in the solution
(x1 , x2 , x3 , x4 , x5 ) = (1, 1, 0.8571, 1, 1)


with the objective value zLP = 6.286. Normally we should go further down this branch since the solution is
not an integer solution, but because ⌊zLP⌋ ≤ z ∗ we know that we won't find a better solution than the current
solution z ∗ = 6.

The optimal solution to the BIP is x3 = x4 = x5 = 1 with an objective value at z ∗ = 6.
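A hedged Julia sketch of the LP relaxations used above is given below; solve_relaxation is an illustrative helper (not part of the course code), and the branching decision on x1 is imposed by an extra constraint.

using JuMP, GLPK

# LP relaxation of the BIP; fix1 fixes x1 to 0 or 1 (pass nothing for the root node)
function solve_relaxation(fix1)
    m = Model(GLPK.Optimizer)
    @variable(m, 0 <= x[1:5] <= 1)
    @objective(m, Max, 2*x[1] - x[2] + 5*x[3] - 3*x[4] + 4*x[5])
    @constraint(m, 3*x[1] - 2*x[2] + 7*x[3] - 5*x[4] + 4*x[5] <= 6)
    @constraint(m, x[1] - x[2] + 2*x[3] - 4*x[4] + 2*x[5] <= 0)
    if fix1 !== nothing
        @constraint(m, x[1] == fix1)
    end
    optimize!(m)
    return objective_value(m), value.(x)
end

println(solve_relaxation(nothing))   # root node: zLP = 6.333
println(solve_relaxation(0))         # branch x1 = 0: integer solution with z = 6
println(solve_relaxation(1))         # branch x1 = 1: zLP = 6.286, bound <= 6, fathom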

11.3 Exercise 3
Solution to parts a) and b)
The MIP problem is:

max −3x1 + 5x2


s.t.
5x1 − 7x2 ≥3
x1 ≤3
x2 ≤3
x1 , x2 ≥0
Where x1 and x2 are integers.

This can be illustrated graphically as

Here it can be easily seen that the optimal LP solution is (x1 , x2 ) = (3, 12/7), which corresponds to an objective
value of zLP = −3/7. Current best MIP solution z ∗ = −∞.

• Branch: x2 ≤ 1
Now the problem can be illustrated graphically as


It can be easily seen that the optimal LP solution is (x1 , x2 ) = (2, 1) which corresponds to an objective
value at zLP = −1. Current best MIP solution z ∗ = −1.
• Branch: x2 ≥ 2
No solutions

The optimal solution to the MIP is (x1 , x2 ) = (2, 1) with an objective value at z ∗ = −1.
Solution to parts c) and d)
Let x1 = y1 + 2y2 and let x2 = z1 + 2z2 (both have upper bounds of 3, so in the binary representation we only
need two new variables for each to model the value possibilities of 0, 1, 2, and 3 for each variable).

Substituting this into the formulation given yields:

Maximize: − 3y1 − 6y2 + 5z1 + 10z2


s.t. 5y1 + 10y2 − 7z1 − 14z2 ≥ 3
yi ∈ {0, 1} for i = 1, 2
zi ∈ {0, 1} for i = 1, 2

Solving the LP relaxation of this yields the solution (y1 , y2 , z1 , z2 ) = (1, 1, 0, 6/7) with objective −3/7.
Branch on z2 to create two new subproblems: adding z2 = 0 yields the solution (0, 1, 1, 0) with objective value −1.
Adding z2 = 1 results in an infeasible solution. We have therefore found the same solution as that found in
parts a) and b). The solution (0, 1, 1, 0) is equivalent to x1 = 2 and x2 = 1.

11.4 Exercise 4
River Power has four generators currently available for production and wishes to decide which to put on line to
meet the expected 700 megawatt peak demand over the next several hours. The following table shows the cost
to operate each generator (in thousands of dollars per hour) and their outputs in megawatts.

Generator
1 2 3 4
Operating Cost ($000s) 7 12 5 14
Output Power (Megawatts) 300 600 500 1600

Table 6: Exercise 4 Data

(a) Formulate this problem as a BIP.



1 if generator i is switched on, for i = 1, 2, 3, 4
xi =
0 otherwise

Minimize: 7x1 + 12x2 + 5x3 + 14x4 (6)


s.t. 300x1 + 600x2 + 500x3 + 1600x4 ≥ 700 (7)
xi ∈ {0, 1} (8)

(b) Use the Branch-&-bound algorithm to find an optimal solution to this problem. Hint: Use an LP solver
to solve all subproblems that arise.


Solve the LP relaxation: This yields the solution (x1 , x2 , x3 , x4 ) = (0, 0, 0, 0.44), with objective value 6.12.
Set the upper bound to ∞

Branch on x4 .
Create two new subproblems, one with the new constraint x4 = 0 (S1) and one with x4 = 1 (S2)
Solve S1: This yields the solution (x1 , x2 , x3 , x4 ) = (0, 0.33, 1, 0), with objective value 9
Solve S2: This yields the solution (x1 , x2 , x3 , x4 ) = (0, 0, 0, 1), with objective value 14. This is integer
feasible; update the upper bound to 14, and fathom.

Look at S1 and branch on x2 . Create two new subproblems, one with the new constraint x2 = 0 (S3) and
one with x2 = 1 (S4)
Solve S3: This yields the solution (x1 , x2 , x3 , x4 ) = (0.67, 0, 1, 0), with objective value 9.67
Solve S4: This yields the solution (x1 , x2 , x3 , x4 ) = (0, 1, 0.2, 0), with objective value 13

Continuing in this way will ultimately lead to the solution (x∗1 , x∗2 , x∗3 , x∗4 ) = (1, 0, 1, 0) with objective value
12.


12 Week 12: Network simplex
12.1 Exercise 1
(a) When solving a relatively simple transportation problem it is always a good idea to visualize the problem.
The right image shows the given but not optimal solution.

The dual variables are found by following the solution's edges around the problem. If you move forward
through an edge you add the price of the edge to the dual variable and if you move backwards you subtract
the cost of the edge. The dual variable of the starting point is always equal to zero, so we know that
u1 = 0. The cost to go from A1 to B1 is 3 so the dual variable v1 becomes v1 = 0 + 3 = 3. The price to
go from B1 to A2 is 6 so the dual variable is u2 = 3 − 6 = −3. Continuing in the same manner gives us
the dual variables shown on the figure below which corresponds to answer B.

(b) Moving on to the next part of the question we have to find out which edge should be added in the next
iteration of the network simplex. To do so the reduced cost for all of the unused edges are computed:


red cost (A1,B3): cA1,B3 + u1 − v3 =8−0−0=8


red cost (A2,B2): cA2,B2 + u2 − v2 =5−3−1=1
red cost (A3,B2): cA3,B2 + u3 − v2 = 2 − 2 − 1 = −1
red cost (A3,B3): cA3,B3 + u3 − v3 =8−2−0=6

(c) Only the edge (A3,B2) has a negative reduced cost so that is the edge we want to add to the solution.
(d) Now we have to determine the flow. Adding the edge we get the circle (A3,B2),(A1,B2),(A1,B1),(A3,B1).
In order to determine the flow we have to look at the backwards flow in the circle i.e. the flow from (A1,B2)
and (A3,B1). The flow in both of the edges is 12 so the flow in the new edge (A3,B2) should be 12 in the
new basis.

12.2 Exercise 2
The greedy algorithm gives the following solution. Notice that an edge with zero flow (A1,B3) is added in order
to make the graph connected. A rule of thumb is that there always should be #nodes - 1 edges. The cost of
the above initial solution is 40 · 3 + 5 · 4 + 0 · 4 + 15 · 8 + 10 · 0 = 260. In the graph below the flow and cost is
written as (x,c), where x is the flow and c is the cost.

To check if the above graph is optimal we need to compute the reduced costs for all of the unused edges. First
the dual variables are found by following the solutions edges around the problem. If you move forward through
an edge you add the price of the edge to the dual variable and if you move backwards you subtract the cost of
the edge. The dual variable of the starting point is always equal to zero, so we know that u1 = 0. The cost to
go from A1 to B2 is 4 so the dual variable v2 becomes v2 = 0 + 4 = 4. The price to go from B2 to A2 is 8 so
the dual variable is u2 = 4 − 8 = −4. Continuing in the same manner gives us the dual variables shown on the
figure below.


The reduced costs are calculated


red cost (A2,B1): cA2,B1 + u2 − v1 = 5 − 4 − 3 = −2
red cost (A2,B3): cA2,B3 + u2 − v3 =8−4−4=0
red cost (A3,B1): cA3,B1 + u3 − v1 =0+4−3=1
red cost (A3,B2): cA3,B2 + u3 − v2 =M +4−4=M

It is seen that it will improve the solution to add the edge (A2,B1). Now we have to determine the flow of the
new edge. When adding the edge we get the circle (A1,B2), (A2,B2), (A2,B1), (A1,B1). The backward flow in
this circle is (A2,B2) and (A1,B1) with a backwards flow at 15 and 40. 15 is the smallest flow so we’ll remove
the edge (A2,B2) and the new edge (A2,B1) will get a flow at 15. We can now compute the dual variables in
the same manner as before for our new graph.

Again to check if the solution is optimal we compute the reduced costs for the unused edges.

red cost (A2,B2): cA2,B2 + u2 − v2 = 8 − 2 − 4 = 2


red cost (A2,B3): cA2,B3 + u2 − v3 = 8 − 2 − 4 = 2


red cost (A3,B1): cA3,B1 + u3 − v1 = 0 + 4 − 3 = 1
red cost (A3,B2): cA3,B2 + u3 − v2 = M + 4 − 4 = M

Since all of the reduced costs are greater than zero the found solution is optimal. The cost of the optimal
solution is 25 · 3 + 20 · 4 + 0 · 4 + 15 · 5 + 10 · 0 = 230.

12.3 Exercise 3
To minimize the total shipping cost it can be shown that we only have to minimize the miles. By changing the
unit from miles to hecto-miles we get the following parameter table:

Distribution center
1 2 3 4 Supply
1 8 13 4 7 12
Plant 2 11 14 6 10 17
3 6 12 8 9 11
Demand 10 10 10 10

This can be drawn as the following network representation:

In order to obtain the optimal solution we start out with the greedy solution shown below and compute all of
the dual variables as shown in earlier exercises.


To check if the solution is optimal we compute the reduced costs for the unused edges.

red cost (A1,B1): cA1,B1 + u1 − v1 =8+0−8=0


red cost (A1,B2): cA1,B2 + u1 − v2 = 13 + 0 − 14 = −1
red cost (A2,B2): cA2,B2 + u2 − v2 = 14 − 3 − 14 = −3
red cost (A2,B3): cA2,B3 + u2 − v3 = 6 − 3 − 4 = −1
red cost (A3,B3): cA3,B3 + u3 − v3 =8+2−4=6
red cost (A3,B4): cA3,B4 + u3 − v4 =9+2−7=4

The most negative reduced cost is from (A2,B2). When adding the edge (A2,B2) we get the circle (A2,B2),
(A3,B2), (A3,B1), (A2,B1). The backward flow in this circle is (A3,B2) and (A2,B1) with a backwards flow at
10 and 9. 9 is the smallest flow so we’ll remove the edge (A2,B1) and the new edge (A2,B2) will get a flow at
9. We can now compute the dual variables in the same manner as before for our new graph.


Again the reduced costs are computed:

red cost (A1,B1): cA1,B1 + u1 − v1 =8+0−5=3


red cost (A1,B2): cA1,B2 + u1 − v2 = 13 + 0 − 11 = 2
red cost (A2,B1): cA2,B1 + u2 − v1 = 11 − 3 − 5 = 3
red cost (A2,B3): cA2,B3 + u2 − v3 = 6 − 3 − 4 = −1
red cost (A3,B3): cA3,B3 + u3 − v3 =8−1−4=3
red cost (A3,B4): cA3,B4 + u3 − v4 =9−1−7=1

Since the reduced cost from A2 to B3 is negative the solution is still not optimal. By adding the edge (A2,B3)
we get the circle (A2,B3), (A1,B3), (A1,B4), (A2, B4). The backward flow in this circle is (A1,B3) and (A2,B4)
with a backwards flow at 10 and 8. 8 is the smallest flow so we’ll remove the edge (A2,B4) and the new edge
(A2,B3) will get a flow at 8. We can now compute the dual variables in the same manner as before for our new
graph.


To see if the found solution is optimal we again compute the reduced costs for the unused edges.

red cost (A1,B1): cA1,B1 + u1 − v1 =8+0−6=2


red cost (A1,B2): cA1,B2 + u1 − v2 = 13 + 0 − 12 = 1
red cost (A2,B1): cA2,B1 + u2 − v1 = 11 − 2 − 6 = 3
red cost (A2,B4): cA2,B4 + u2 − v4 = 10 − 2 − 7 = 1
red cost (A3,B3): cA3,B3 + u3 − v3 =8+0−4=4
red cost (A3,B4): cA3,B4 + u3 − v4 =9+0−7=2

Since all of the reduced costs are greater than zero the found solution is optimal. The cost of the optimal
solution is 2 · 4 + 10 · 7 + 9 · 14 + 8 · 6 + 10 · 6 + 1 · 12 = 324.
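A hedged JuMP sketch that double-checks this result by solving the transportation LP directly is given below; it is not part of the original hand computation, but it should return the same minimum of 324 hecto-miles.

using JuMP, GLPK

cost   = [8 13 4 7; 11 14 6 10; 6 12 8 9]   # hecto-miles from plant i to centre j
supply = [12, 17, 11]
demand = [10, 10, 10, 10]

m = Model(GLPK.Optimizer)
@variable(m, x[1:3, 1:4] >= 0)
@objective(m, Min, sum(cost[i,j]*x[i,j] for i=1:3, j=1:4))
@constraint(m, [i=1:3], sum(x[i,j] for j=1:4) == supply[i])   # ship everything produced
@constraint(m, [j=1:4], sum(x[i,j] for i=1:3) == demand[j])   # satisfy every centre

optimize!(m)
println("Objective value: ", objective_value(m))   # 324
println(value.(x))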

12.4 Exercise 4
From the information given in the text we know that there are three products and five plants. Plants one, two
and three can produce all of the three products, but plants four and five can only produce products one and two.
In order to formulate this as a transportation problem we assume that plants four and five can produce all
of the three products, but the cost of producing product three at these plants is M, i.e. a really big number.
With the information given we can construct the following parameter table:

                   Product
            1     2     3   Supply
Plant 1    41    55    48      40
Plant 2    39    51    45      60
Plant 3    42    56    50      40
Plant 4    38    52     M      60
Plant 5    39    53     M     100
Demand     70   100    90            260/300

It is seen that supply and demand are not equal, so in order to solve the problem with network simplex we need
to introduce a dummy product with demand 40 that makes total demand equal to total supply. By doing this
we get the following parameter table.

                   Product
            1     2     3   Dummy   Supply
Plant 1    41    55    48       0      40
Plant 2    39    51    45       0      60
Plant 3    42    56    50       0      40
Plant 4    38    52     M       0      60
Plant 5    39    53     M       0     100
Demand     70   100    90      40            300/300

Note: You are NOT asked to solve the problem, only to formulate the above parameter table.
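
Purely as an illustration, the balanced formulation can be written down in a few lines of Python; M is just a
prohibitively large cost and the dummy column absorbs the 40 units of unused capacity at zero cost.

# The balanced parameter table from above (the problem is not solved here).
M = 10**6
cost = [                # columns: product 1, product 2, product 3, dummy
    [41, 55, 48, 0],    # plant 1
    [39, 51, 45, 0],    # plant 2
    [42, 56, 50, 0],    # plant 3
    [38, 52,  M, 0],    # plant 4
    [39, 53,  M, 0],    # plant 5
]
supply = [40, 60, 40, 60, 100]
demand = [70, 100, 90, 40]
assert sum(supply) == sum(demand) == 300    # supply and demand are now balanced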

13 Linear Programming: Dual Simplex


13.1 Exercise 5
Normally, to solve a problem like this we would need the two-phase method; however, the dual simplex method
allows us to solve it directly. In the dual simplex method all constraints must still be on ≤-form, but negative
right-hand sides are allowed, so the first step is to multiply the two constraints by −1, resulting in the new
problem.

min Z = 5x1 + 2x2 + 4x3

s.t. −3x1 − x2 − 2x3 ≤ −4
     −6x1 − 3x2 − 5x3 ≤ −10
     x1 ≥ 0, x2 ≥ 0, x3 ≥ 0

In tabular form this is

Bv Z x1 x2 x3 x4 x5 RHS
Z -1 5 2 4 0 0 0
x4 0 -3 -1 -2 1 0 -4
x5 0 -6 -3 -5 0 1 -10

In the dual simplex method the leaving variable is found first, as the basic variable with the most negative
right-hand side; in the above case x5 is the leaving variable. The entering variable is, as in the ordinary simplex
method, found by a minimum ratio test, but only the columns with a negative coefficient in the pivot row are
considered. Here the ratios are 5/6, 2/3 and 4/5. Since 2/3 is the smallest, x2 is the entering variable.

The next step is, as in the ordinary simplex method, to get a 1 in the pivot element and zeros in the rest of the
pivot column. This is done by the following row operations:
R2 = R2/(−3) → R0 = R0 − 2R2 → R1 = R1 + R2, resulting in the next table

Bv Z x1 x2 x3 x4 x5 RHS
Z -1 1 0 2/3 0 2/3 -20/3
x4 0 -1 0 -1/3 1 -1/3 -2/3
x2 0 2 1 5/3 0 -1/3 10/3

Here x4 is the leaving variable and the entering variable is again found with a minimum ratio test. The ratios
are 1/1 = 1, (2/3)/(1/3) = 2 and (2/3)/(1/3) = 2. The minimum ratio is 1, so x1 is the entering variable.

To get to the next iteration the following row operations have been done
R1 = R1 · (−1) → R0 = R0 − R1 → R2 = R2 − 2R1, resulting in the final table

Bv Z x1 x2 x3 x4 x5 RHS
Z -1 0 0 1/3 1 1/3 -22/3
x1 0 1 0 1/3 -1 1/3 2/3
x2 0 0 1 1 2 -1 2

The optimal solution to the problem is found to be Z* = 22/3, when (x1, x2) = (2/3, 2).
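
The two pivots above can also be reproduced programmatically. The sketch below is a minimal dual simplex
loop written for this particular tableau; it assumes the starting basis is dual feasible, i.e. that all objective-row
coefficients are non-negative, which is the case here.

# Minimal dual simplex sketch reproducing the two pivots carried out above.
import numpy as np

# columns: x1, x2, x3, x4, x5 | RHS   (the slacks x4 and x5 start as the basis)
T = np.array([[-3., -1., -2., 1., 0.,  -4.],
              [-6., -3., -5., 0., 1., -10.]])
obj = np.array([5., 2., 4., 0., 0., 0.])   # objective row (reduced costs | -Z)
basis = [3, 4]                             # indices of the basic variables x4, x5

while T[:, -1].min() < -1e-9:
    r = int(np.argmin(T[:, -1]))                       # leaving row: most negative RHS
    cols = [j for j in range(T.shape[1] - 1) if T[r, j] < -1e-9]
    s = min(cols, key=lambda j: obj[j] / -T[r, j])     # dual ratio test
    T[r] /= T[r, s]                                    # scale the pivot row
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]                     # eliminate column s elsewhere
    obj -= obj[s] * T[r]                               # ... and in the objective row
    basis[r] = s

print("Z* =", -obj[-1])                                        # 22/3
print({f"x{j + 1}": T[i, -1] for i, j in enumerate(basis)})    # x1 = 2/3, x2 = 2

Running it prints Z* = 22/3 with x1 = 2/3 and x2 = 2, matching the final tableau above.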

13.2 Exercise 6
First of all the problem is written in standard form by multiplying the second constraint by −1:

max Z + x1 + x2 = 0 (9)
s.t. x1 + x2 ≤ 8 (10)
−x2 ≤ −3 (11)
−x1 + x2 ≤ 2 (12)

The dual simplex method is applied to solve the problem since one of the right-hand sides is negative.

Z x1 x2 x3 x4 x5 RHS
Z 1 1 1 0 0 0 0
x3 0 1 1 1 0 0 8
x4 0 0 -1 0 1 0 -3
x5 0 -1 1 0 0 1 2

x4 is the leaving basic variable and x2 is the entering basic variable, since it has the only negative coefficient in
the pivot row. The tableau is then restored to proper form with x2 as a basic variable.

Z x1 x2 x3 x4 x5 RHS
Z 1 1 0 0 1 0 -3
x3 0 1 0 1 1 0 5
x2 0 0 1 0 -1 0 3
x5 0 -1 0 0 1 1 -1

One right-hand side entry is still negative, so another iteration of the dual simplex method is performed. x5 is
the leaving basic variable while x1 is the entering basic variable.

Z x1 x2 x3 x4 x5 RHS
Z 1 0 0 0 2 1 -4
x3 0 0 0 1 2 1 4
x2 0 0 1 0 -1 0 3
x1 0 1 0 0 -1 -1 1

The optimal solution is x1 = 1, x2 = 3 and Z = −4

The graphical solution can be seen below. The darkest blue triangle is the feasible solution space and the green
line is the objective function. The dual simplex method starts in point A(0, 0). After the first iteration it reaches
point B(0, 3), and in the final iteration it reaches the optimal point C(1, 3). The difference between the dual
simplex method and the standard simplex method is that the dual simplex method may visit infeasible solutions
before it reaches the optimal feasible CP solution, whereas the standard simplex method visits only feasible
CP solutions on its way to the optimal one.
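
The corner points mentioned above can be checked numerically: the sketch below enumerates the intersections
of the constraint boundaries, keeps the feasible ones and evaluates Z = −x1 − x2 at each. The best feasible
corner point is C(1, 3) with Z = −4.

# Numerical version of the graphical argument: enumerate all intersections of
# constraint boundaries, keep the feasible corner points and evaluate Z there.
import itertools
import numpy as np

A = np.array([[ 1.,  1.],     # x1 + x2 <= 8
              [ 0., -1.],     # x2 >= 3  (written as -x2 <= -3)
              [-1.,  1.],     # -x1 + x2 <= 2
              [-1.,  0.],     # x1 >= 0
              [ 0., -1.]])    # x2 >= 0
b = np.array([8., -3., 2., 0., 0.])

for i, j in itertools.combinations(range(len(b)), 2):
    try:
        x = np.linalg.solve(A[[i, j]], b[[i, j]])  # intersection of two boundaries
    except np.linalg.LinAlgError:
        continue                                   # parallel boundaries, no corner
    if np.all(A @ x <= b + 1e-9):                  # keep only feasible corner points
        print(x, "Z =", -x[0] - x[1])              # best: [1. 3.] with Z = -4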
