ECOM 6302: Engineering Optimization Chapter Four: Linear Programming Instructor: Associate Prof. Mohammed Alhanjouri
The term linear programming defines a particular class of optimization problems in which the constraints of the system can be expressed as linear equations or inequalities and the objective function is a linear function of the design variables. Linear programming (LP) techniques are widely used to solve military, economic, industrial, and societal problems. The primary reasons for their wide use are the availability of commercial software that can solve very large problems and the ease with which data variation (sensitivity analysis) can be handled through LP models.
As a first step in solving this problem, we want to identify all values of x1 and x2 that are nonnegative and satisfy the constraints. For example, the point x1 = 8, x2 = 10 is nonnegative and satisfies all the constraints. Such a point is called a feasible solution. The set of all feasible solutions is called the feasible region. The best feasible solution is called an optimal solution to the LP problem, and the value of the objective function corresponding to an optimal solution is called the optimal value of the linear program. To represent the feasible region in a graph, every constraint is plotted, and all values of x1, x2 that satisfy these constraints are identified.
Handling Inequalities
A "less than or equal to" constraint is converted to an equality by adding a nonnegative slack variable, and a "greater than or equal to" constraint by subtracting a nonnegative surplus variable:

2x1 + 3x2 + x3 + 4x4 ≤ 13  →  2x1 + 3x2 + x3 + 4x4 + S1 = 13   (S1: slack variable)
x1 + 0.5x2 + 0.1x3 ≥ 9  →  x1 + 0.5x2 + 0.1x3 - S2 = 9   (S2: surplus variable)
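The slack/surplus conversion can be sketched mechanically. The helper below is a minimal illustration (the function name and string output are assumptions, not part of any standard library), applied to the two constraints above:

```python
# Convert one linear inequality constraint to equality form by
# introducing an auxiliary variable: a "<=" row gains a slack
# variable, a ">=" row loses a surplus variable.

def to_equality(coeffs, sense, rhs, aux_name):
    """Return the equality form of one constraint as a string."""
    lhs = " + ".join(f"{c}*x{i + 1}" for i, c in enumerate(coeffs))
    if sense == "<=":
        return f"{lhs} + {aux_name} = {rhs}"   # add slack variable
    if sense == ">=":
        return f"{lhs} - {aux_name} = {rhs}"   # subtract surplus variable
    return f"{lhs} = {rhs}"                    # already an equality

print(to_equality([2, 3, 1, 4], "<=", 13, "S1"))
# 2*x1 + 3*x2 + 1*x3 + 4*x4 + S1 = 13
print(to_equality([1, 0.5, 0.1], ">=", 9, "S2"))
# 1*x1 + 0.5*x2 + 0.1*x3 - S2 = 9
```

Both auxiliary variables are required to be nonnegative, so the converted system has the same feasible set as the original inequalities.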
A classical method of generating the solutions to the constraint equations is the Gauss-Jordan elimination scheme (see Appendix C for a review).
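As a reminder of how the scheme operates, here is a minimal Gauss-Jordan sketch, without the pivoting safeguards (row swaps for near-zero pivots) a production solver would need:

```python
# Gauss-Jordan elimination on an augmented matrix [A | b]: reduce A
# to the identity so the solution can be read off the last column.

def gauss_jordan(aug):
    """Reduce an n x (n+1) augmented matrix in place; return the solution."""
    n = len(aug)
    for col in range(n):
        # Normalize the pivot row so the pivot entry becomes 1.
        pivot = aug[col][col]
        aug[col] = [v / pivot for v in aug[col]]
        # Eliminate this column from every other row.
        for row in range(n):
            if row != col:
                factor = aug[row][col]
                aug[row] = [rv - factor * pv
                            for rv, pv in zip(aug[row], aug[col])]
    return [aug[i][n] for i in range(n)]

# Solve: x1 + x2 = 3,  2*x1 - x2 = 0   ->   x1 = 1, x2 = 2
print(gauss_jordan([[1.0, 1.0, 3.0],
                    [2.0, -1.0, 0.0]]))   # [1.0, 2.0]
```

In the simplex method the same elementary row operations are applied to the constraint rows at every pivot step, which is why the scheme is worth reviewing here.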
4.4.1 Minimization Problems

There are two approaches to solving a minimization problem: (1) convert it to an equivalent maximization problem by multiplying the objective function by -1 and solve with the standard simplex method, or (2) modify the simplex optimality condition and entering-variable rule so that the algorithm works directly on the minimization objective.
A basic feasible solution in which one or more of the basic variables are zero is called a degenerate basic feasible solution. In contrast, a basic feasible solution in which all the basic variables are positive is said to be nondegenerate.
Degeneracy may be present in the initial formulation of the problem itself (when one or more right-hand side constants are zero) or it may be introduced during simplex calculations whenever two or more rows tie for the minimum ratio. But a more important question is whether it is possible for the simplex method to go on indefinitely without improving the objective function. In fact, examples have been constructed to show that such a thing is theoretically possible.
In such situations the simplex method may get into an infinite loop and will fail to reach the optimal solution. This phenomenon is called classical cycling or simply cycling in the simplex algorithm. Fortunately, such cycling does not happen in practical problems in spite of the fact that many practical problems have degenerate solutions. However, in a small number of cases, practical problems have been observed to fail to converge on a computer. This is mainly due to certain characteristics of the computer software and the way in which the simplex method is programmed. Gass [1] has called this computer cycling. Such problems invariably converge when solved on a different computer system.
The computational efficiency of the simplex method depends on (1) the number of iterations (basic feasible solutions) required to reach the optimal solution, and (2) the total computer time needed to solve the problem. Much effort has been spent on studying how computational efficiency varies with the number of constraints and decision variables in the problem.
Empirical experience with thousands of practical problems shows that the number of iterations of a standard linear program with m constraints and n variables varies between m and 3m, the average being 2m. A practical upper bound for the number of iterations is 2(m + n), although occasionally problems have violated this bound. If every decision variable had a nonzero coefficient in every constraint, then the computational time would increase approximately in proportion to the cube of the number of constraints, m³.
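The empirical rules above are easy to apply numerically. The sketch below (function name and the sample sizes m = 50, n = 120 are illustrative assumptions) computes the iteration estimates and shows the m³ scaling effect of doubling the constraint count:

```python
# Iteration-count estimates from the empirical rules quoted above:
# typical ~2m iterations, range m to 3m, practical bound 2(m + n);
# dense problems scale roughly with m**3.

def iteration_estimates(m, n):
    """Return (typical, low, high, practical_bound) iteration counts."""
    return 2 * m, m, 3 * m, 2 * (m + n)

typical, low, high, bound = iteration_estimates(50, 120)
print(typical, low, high, bound)   # 100 50 150 340

# Under the m**3 rule, doubling the constraint count of a dense
# problem multiplies the computing time roughly by 2**3 = 8.
print((2 * 50) ** 3 // 50 ** 3)    # 8
```

The cubic growth explains why, for large dense problems, the number of constraints matters far more than the number of variables.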