Operational Research - Linear Programming Part I
MODULE:OPERATIONAL RESEARCH
YEAR III: COMPUTER SCIENCE
CSC 407: OPERATIONAL RESEARCH
By:
RUBYAGIZA Eric
MSc in Data Science
ericrubyagiza@gmail.com
0788588447
TABLE OF CONTENTS
1. INTRODUCTION
2. LINEAR PROGRAMMING
3. OPTIMIZATION TECHNIQUES
I: INTRODUCTION
In this session:
• What is Operational Research (OR)?
• Origin of OR
• When OR is useful
• Scope of OR
• Limitations of OR
I.1:What is Operational Research?
• Operational Research (OR) is the application of scientific methods to tackle real-life
problems associated with decision-making in businesses and other enterprises.
• OR combines a way of thinking & communicating with analytical techniques to provide the means for making more informed and better decisions.
Operations Research can be defined simply as a combination of two words, operation and research, where operation means some action applied in any area of interest and research implies some organized process of getting and analysing information about the problem environment.
In defence operations: since the Second World War, operational research has been used for defence operations with the aim of obtaining maximum gains with minimum effort.
• Assumptions need to be made about the nature and importance of some factors in order to construct an Operational Research model.
RECAP 1
1. What do you understand by Operational Research?
2. Explain 5 scenarios in which Operational Research is very useful.
3. Describe 4 limitations of Operational Research.
II: LINEAR PROGRAMMING
In this session:
• What is Linear Programming?
• Mathematical formulation of the Linear Programming model
• Prototype examples
• Matrix operations
• Solving Linear Programming problems
• Graphical Method
• Simplex Method
• Algebra of the simplex method
• Simplex method in tabular form
• Adapting to other model forms
• Post-optimality analysis
II.1: What is Linear programming
[Figure: the Linear Programming modelling cycle, covering solution and validation of the model, sensitivity analysis, interpretation, and implementation.]
II.1: What is Linear programming Cont'd
• Linear programming determines the way to achieve the best outcome (such as maximum profit or lowest cost) in a given mathematical model, given some list of requirements represented as linear relationships (linear equations and inequalities).
II.2: Mathematical Formulation of Linear Programming model
A general LP model with n decision variables and m constraints has the form:
Maximize (or Minimize) Z = c1x1 + c2x2 + ... + cnxn
Subject to:
ai1x1 + ai2x2 + ... + ainxn ≤ (=, ≥) bi, i = 1, ..., m
Then add the non-negativity restrictions:
x1, x2, ..., xn ≥ 0
II.3: Prototype Example
An enterprise makes 3 different products: P1, P2 and P3. It needs 3 hours to produce P1, 2 hours to produce P2 and 3 hours to produce P3, and the enterprise can pay for only 20 hours.
The enterprise needs 2 units of volume to stock P1, 4 units of volume to stock P2 and 3 units of volume to stock P3, but it possesses only 35 units of volume in its stock.
The prices for the different products are: 10 F for P1, 15 F for P2 and 25 F for P3.
The owner's problem is to determine how many units of each product to make so as to obtain the maximum profit.
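The example above formulates as: Maximize Z = 10x1 + 15x2 + 25x3 subject to 3x1 + 2x2 + 3x3 ≤ 20 (hours), 2x1 + 4x2 + 3x3 ≤ 35 (stock volume), x1, x2, x3 ≥ 0. As a quick check before the solution methods are introduced, a brute-force Python sketch (not part of the notes) can search over whole units of each product; note that the LP relaxation would also allow fractional units.

```python
# Brute-force search over whole units of P1, P2, P3.
# Ranges come from the hours constraint: 3x1 <= 20, 2x2 <= 20, 3x3 <= 20.
best = max(
    (10 * x1 + 15 * x2 + 25 * x3, x1, x2, x3)
    for x1 in range(7) for x2 in range(11) for x3 in range(7)
    if 3 * x1 + 2 * x2 + 3 * x3 <= 20      # production hours
    and 2 * x1 + 4 * x2 + 3 * x3 <= 35     # stock volume
)
print(best)  # (165, 0, 1, 6): 1 unit of P2 and 6 of P3 give a profit of 165 F
```

The search confirms that with whole units the best plan uses all 20 hours while leaving stock capacity slack.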
II.3: Prototype Example (continued)
A manufacturer of electronic instruments would like to know how many TVs, amplifiers and receivers it must manufacture, taking into account the stock of components it has available, so as to maximize its profit. The data concerning the components needed per apparatus are included in the table hereafter.
Formulate the mathematical model.
Questions
Recap 2
II.4: Matrix Operations
The numbers in a rectangular array are called the elements of the matrix.
II.4.1: Addition
To add two matrices A and B of the same size, simply add the corresponding elements, so that A + B = [aij + bij].
Since only matrices of the same size can be added, only the sum F + H is defined (G cannot be added to either F or H).
Since addition of real numbers is commutative, it follows that addition of matrices (when it is defined) is also commutative; that is, for any matrices A and B of the same size, A + B always equals B + A.
If any matrix A is added to the zero matrix O of the same size, the result is clearly A.
II.4.1: Addition Cont…
Properties of Matrix Addition: If A, B and C are matrices of the same order, then
(a) Commutative Law: A + B = B + A
(b) Associative Law: (A + B) + C = A + (B + C)
(c) Identity: A + O = O + A = A, where O is the zero matrix, the additive identity for matrix addition.
(d) Additive Inverse: A + (−A) = O = (−A) + A, where −A, obtained by changing the sign of every element of A, is the additive inverse of A.
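The addition properties above can be illustrated with a short Python sketch (not part of the notes; matrices are plain nested lists, no libraries assumed):

```python
def mat_add(A, B):
    """Add two matrices of the same order element-wise: (A + B)ij = aij + bij."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
O = [[0, 0], [0, 0]]  # zero matrix: the additive identity

print(mat_add(A, B))                    # [[6, 8], [10, 12]]
print(mat_add(A, B) == mat_add(B, A))   # True: commutative law
print(mat_add(A, O) == A)               # True: A + O = A
```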
II.4.2: Subtraction of Matrices
Consider two matrices A and B of the same order m × n. We subtract the matrices by subtracting each element of B from the corresponding element of A, i.e. A − B = [aij − bij]m×n.
II.4.2: Subtraction of Matrices Cont'd
Properties of Matrix Subtraction:
• The order of the matrices must be the same
• Subtract corresponding elements
• Matrix subtraction is not commutative (neither is subtraction of real numbers)
• Matrix subtraction is not associative (neither is subtraction of real numbers)
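The non-commutativity of subtraction is easy to verify with a small Python sketch (not part of the notes; plain nested lists):

```python
def mat_sub(A, B):
    """Subtract corresponding elements: (A - B)ij = aij - bij."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[4, 7], [2, 6]]
B = [[1, 3], [5, 2]]
print(mat_sub(A, B))                    # [[3, 4], [-3, 4]]
print(mat_sub(A, B) == mat_sub(B, A))   # False: subtraction is not commutative
```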
II.4.3: Scalar Multiplication
Given a matrix A and a scalar k, the scalar multiple kA is obtained by multiplying every entry of A by k; for example, 2A multiplies every entry of A by 2.
II.4.4: Matrix Multiplication
Properties of matrix multiplication: Am×n × Bn×p = Cm×p
• The number of columns in the first matrix must be equal to the number of rows in the second matrix.
• The order of the product is the number of rows in the first matrix by the number of columns in the second matrix.
• Element (i, j) of the product is the sum of the products of the elements of row i of the first matrix with the corresponding elements of column j of the second matrix.
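The row-by-column rule can be written directly as a Python sketch (not part of the notes; plain nested lists):

```python
def mat_mul(A, B):
    """Multiply A (m x n) by B (n x p): element (i, j) is the sum of products
    of row i of A with column j of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    # zip(*B) iterates over the columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [4, 5, 6]]        # order 2 x 3
B = [[7, 8], [9, 10], [11, 12]]   # order 3 x 2
print(mat_mul(A, B))              # [[58, 64], [139, 154]], order 2 x 2
```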
Questions
Recap 3
1. Differentiate between infeasible, feasible and optimal solutions
2. Solve the LP problem below using the graphical method
Maximize Z = 30X1 + 40X2
Subject to
3X1 + 2X2 ≤ 600
3X1 + 5X2 ≤ 800
5X1 + 6X2 ≤ 1100
and X1 ≥ 0, X2 ≥ 0
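The graphical method evaluates Z at the corner points of the feasible region. As a check on the Recap 2 problem (not part of the notes), a short Python sketch can enumerate those corners directly with exact arithmetic:

```python
from fractions import Fraction as F
from itertools import combinations

# Each constraint as a1*x1 + a2*x2 <= b; the last two rows encode x1 >= 0, x2 >= 0.
cons = [(3, 2, 600), (3, 5, 800), (5, 6, 1100), (-1, 0, 0), (0, -1, 0)]

def intersect(r1, r2):
    """Intersection of the two boundary lines, or None if they are parallel."""
    a, b, e = r1; c, d, f = r2
    det = a * d - b * c
    if det == 0:
        return None
    return (F(d * e - b * f, det), F(a * f - c * e, det))

corners = []
for r1, r2 in combinations(cons, 2):
    p = intersect(r1, r2)
    if p is not None and all(a * p[0] + b * p[1] <= c for a, b, c in cons):
        corners.append(p)

best = max(corners, key=lambda p: 30 * p[0] + 40 * p[1])
print([int(v) for v in best], 30 * best[0] + 40 * best[1])  # [100, 100] 7000
```

The optimum is X1 = 100, X2 = 100 with Z = 7000, at the corner where the second and third constraints intersect.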
II.5.2: Simplex Method
Solving a linear programming problem by the graphical method is of limited use when the number of variables is substantially large. If the linear programming problem has a larger number of variables, the suitable method for solving it is the Simplex Method.
The concept behind the simplex method is to start at one extreme feasible point and compute f(x). The method then moves to an adjacent extreme feasible point along the boundary of the convex set of feasible points such that a larger value of f(x) is obtained.
This process continues until f(x) can no longer be increased and a solution point results, or until it is seen that no bounded solution exists.
II.5.2: Simplex Method
In order to apply the simplex method the form of the general linear programming
problem is modified by the introduction of slack variables
t1;t2;...;tm
These slack variables are added to the left-hand side of the "≤" inequalities in order to produce equalities.
Thus, a constraint of the form
ai1x1 + ai2x2 + ... + ainxn ≤ bi
is rewritten as:
ai1x1 + ai2x2 + ... + ainxn + ti = bi.
Like the original variables xi, i = 1, ..., n, the slack variables are also non-negative. If we have a constraint of the form
ai1x1 + ai2x2 + ... + ainxn ≥ bi
then we introduce a surplus variable instead of a slack variable, and we get
ai1x1 + ai2x2 + ... + ainxn − ti = bi.
II.5.2: Simplex Method
The simplex method process consists of two steps:
1. Find a basic feasible solution (or determine that none exists).
2. Improve the current basic feasible solution repeatedly, moving to an adjacent one with a better objective value, until no further improvement is possible.
Example: Maximize Z = 12X1 + 16X2 subject to 10X1 + 20X2 ≤ 120, 8X1 + 8X2 ≤ 80, X1, X2 ≥ 0.
Solution (after adding slack variables S1 and S2):
Maximize: 12X1 + 16X2 + 0S1 + 0S2
10X1 + 20X2 + S1 = 120
8X1 + 8X2 + S2 = 80
X1, X2, S1, S2 ≥ 0
II.5.2: Simplex Method
Initial tableau (the basis starts with the slack variables; Zj for each column is the sum of CB times the column entries, e.g. Zj(X1) = 0(10) + 0(8) = 0):

CB   BV  |  X1   X2   S1   S2 | Solution | Ratio
     Cj  |  12   16    0    0 |          |
 0   S1  |  10   20    1    0 |   120    | 120/20 = 6
 0   S2  |   8    8    0    1 |    80    |  80/8  = 10
     Zj  |   0    0    0    0 |     0    |
  Cj−Zj  |  12   16    0    0 |          |
II.5.2: Simplex Method
Optimality Condition:
• For Max: all Cj − Zj ≤ 0
• For Min: all Cj − Zj ≥ 0
Checking optimality, our solution is not yet optimal since some Cj − Zj > 0.
Select the maximum value of Cj − Zj to find the key column, find the least non-negative ratio to determine the key row, and their intersection is the key element.
Iteration I (X2 enters the basis with Cj − Zj = 16; S1 leaves with the least ratio 6; key element = 20):

CB   BV  |  X1                X2   S1     S2 | Solution | Ratio
16   X2  |  10/20 = 1/2        1   1/20    0 |     6    | 6/(1/2) = 12
 0   S2  |  8 − 8(10)/20 = 4   0  −2/5     1 |    32    | 32/4    = 8
     Zj  |  16(1/2)+0(4) = 8  16   4/5     0 |    96    |
  Cj−Zj  |  4                  0  −4/5     0 |          |

Since Cj − Zj = 4 > 0 for X1, one more iteration is needed: X1 enters and S2 leaves (least ratio 8; key element = 4).

Iteration II:

CB   BV  |  X1   X2    S1     S2  | Solution
16   X2  |   0    1    1/10  −1/8 |     2
12   X1  |   1    0   −1/10   1/4 |     8
     Zj  |  12   16    2/5     1  |   128
  Cj−Zj  |   0    0   −2/5    −1  |

Checking optimality: for Max, all Cj − Zj ≤ 0, so with the second iteration we reach optimality.
X1 = 8, X2 = 2 and Z(opt) = 128.
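The tableau steps above can be mechanized. The following Python sketch (not part of the notes) implements the tabular simplex method for maximization problems with "≤" constraints and non-negative right-hand sides, using exact fractions so the pivot values match the hand computation:

```python
from fractions import Fraction as F

def simplex(c, A, b):
    """Tabular simplex for: maximize c.x subject to A x <= b, x >= 0 (b >= 0).
    Returns (x, Z) with exact Fraction arithmetic."""
    m, n = len(A), len(c)
    # Initial tableau: slack variables S1..Sm form the starting basis.
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    cj_zj = [F(v) for v in c] + [F(0)] * (m + 1)  # Cj - Zj row (CB = 0 at start)
    basis = list(range(n, n + m))
    while True:
        # Key column: most positive Cj - Zj; stop when all <= 0 (optimality).
        j = max(range(n + m), key=lambda k: cj_zj[k])
        if cj_zj[j] <= 0:
            break
        # Key row: least non-negative ratio Solution / key-column entry.
        r = min((i for i in range(m) if T[i][j] > 0),
                key=lambda i: T[i][-1] / T[i][j])
        piv = T[r][j]                      # key element
        T[r] = [v / piv for v in T[r]]     # normalize the key row
        for i in range(m):                 # eliminate the key column elsewhere
            if i != r:
                f = T[i][j]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        f = cj_zj[j]
        cj_zj = [a - f * p for a, p in zip(cj_zj, T[r])]
        basis[r] = j
    x = [F(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, sum(F(ci) * xi for ci, xi in zip(c, x))

x, z = simplex([12, 16], [[10, 20], [8, 8]], [120, 80])
print([int(v) for v in x], int(z))  # [8, 2] 128
```

Running it on the worked example reproduces the two iterations above: X2 enters first (key element 20), then X1 (key element 4), giving X1 = 8, X2 = 2, Z = 128.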
Questions
Recap 4
Solve the LPP below using the Simplex Method
II.5.3: Dual Simplex Method
For an LP model in which the number of variables is considerably smaller than the number of constraints, computational savings may be realized by solving the dual.
Suppose a "basic solution" satisfies the optimality conditions but is not feasible; then we apply the dual simplex algorithm.
In the regular Simplex method, we start with a basic feasible solution (which is not optimal) and move towards optimality, always retaining feasibility. In the dual simplex method, the exact opposite occurs: we start with an "optimal" solution (which is not feasible) and move towards feasibility, always retaining the optimality conditions.
The algorithm ends once we obtain feasibility.
II.5.3: Example: Dual Simplex Method
Solve the following LPP by Dual simplex method
Min Z = 5x1 + 6x2
Subject to:
x1 + x2 ≥ 2
4x1 + x2 ≥ 4
x1, x2 ≥ 0
II.5.3: Example: Dual Simplex Method
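Since the worked tableaus are not reproduced here, the final answer to this example can be verified with a short Python sketch (not part of the notes). It enumerates the corner points of the feasible region rather than running the dual simplex algorithm itself; the "≥" constraints are rewritten as "≤" rows by negating both sides:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints as a1*x1 + a2*x2 <= b; the ">=" rows are multiplied by -1,
# and the last two rows encode x1 >= 0, x2 >= 0.
cons = [(-1, -1, -2), (-4, -1, -4), (-1, 0, 0), (0, -1, 0)]

def intersect(r1, r2):
    """Intersection of the two boundary lines, or None if they are parallel."""
    a, b, e = r1; c, d, f = r2
    det = a * d - b * c
    if det == 0:
        return None
    return (F(d * e - b * f, det), F(a * f - c * e, det))

corners = []
for r1, r2 in combinations(cons, 2):
    p = intersect(r1, r2)
    if p is not None and all(a * p[0] + b * p[1] <= c for a, b, c in cons):
        corners.append(p)

best = min(corners, key=lambda p: 5 * p[0] + 6 * p[1])
print([int(v) for v in best], 5 * best[0] + 6 * best[1])  # [2, 0] 10
```

The minimum is Z = 10 at x1 = 2, x2 = 0, which is what the dual simplex iterations should reach once feasibility is attained.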
Text Books:
1. Frederick S. Hillier and Gerald J. Lieberman: Introduction to Operations Research,
8th Edition, Tata McGraw Hill, 2005. (Chapters: 1, 2, 3.1 to 3.4, 4.1 to
4.8, 5, 6.1 to 6.7, 7.1 to 7.3, 8, 13, 14, 15.1 to 15.4)
Reference Books:
1. Wayne L. Winston: Operations Research Applications and Algorithms, 4th
Edition, Thomson Course Technology, 2003.
2. Hamdy A. Taha: Operations Research: An Introduction, 8th Edition, Prentice Hall India, 2007.