

ISLAMIC UNIVERSITY OF GAZA
Faculty of Engineering (Graduate Studies)
ECOM 6302: Engineering Optimization
Chapter Four: Linear Programming
Instructor: Associate Prof. Mohammed Alhanjouri

The term linear programming defines a particular class of optimization problems in which the constraints of the system can be expressed as linear equations or inequalities and the objective function is a linear function of the design variables. Linear programming (LP) techniques are widely used to solve a number of military, economic, industrial, and societal problems. The primary reasons for its wide use are the availability of commercial software to solve very large problems and the ease with which data variation (sensitivity analysis) can be handled through LP models.
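As an illustration of this structure (the numbers here are hypothetical, not taken from the text), a linear program in two design variables might read:

    maximize   Z = 3 x1 + 5 x2
    subject to x1 + 2 x2 <= 40
               3 x1 + x2 <= 40
               x1 >= 0, x2 >= 0

Both the objective Z and every constraint are linear in the design variables x1 and x2, which is what places the problem in the LP class.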

Another example is shown on page 151.

As a first step in solving this problem, we want to identify all possible values of x1 and x2 that are nonnegative and satisfy the constraints. For example, the point x1 = 8, x2 = 10 is nonnegative and satisfies all the constraints. Such a point is called a feasible solution, and the set of all feasible solutions is called the feasible region.

The best feasible solution is called an optimal solution to the LP problem, and the value of the objective function corresponding to an optimal solution is called the optimal value of the linear program. To represent the feasible region in a graph, every constraint is plotted, and all values of x1, x2 that satisfy these constraints are identified.
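These definitions can also be checked numerically. The sketch below reuses the hypothetical LP given earlier (its data are assumptions, not the problem from the text): it tests whether a trial point such as x1 = 8, x2 = 10 is feasible and then asks scipy for an optimal solution and the optimal value.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical LP from the illustration above:
    # maximize 3*x1 + 5*x2  subject to  x1 + 2*x2 <= 40,  3*x1 + x2 <= 40,  x1, x2 >= 0
    A = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
    b = np.array([40.0, 40.0])

    def is_feasible(x):
        # Feasible means nonnegative and satisfying every constraint
        x = np.asarray(x, float)
        return bool(np.all(x >= 0) and np.all(A @ x <= b))

    print(is_feasible([8.0, 10.0]))   # True: (8, 10) is a feasible solution

    # linprog minimizes, so maximize by minimizing the negated objective
    res = linprog(c=[-3.0, -5.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)            # an optimal solution and the optimal value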

Handling Inequalities
An inequality constraint is converted to an equation by introducing a new nonnegative variable:

2 x1 + 3 x2 + x3 + 4 x4 ≥ 13   becomes   2 x1 + 3 x2 + x3 + 4 x4 - S1 = 13   (S1 is a surplus variable)
x1 + 0.5 x2 + 0.1 x3 ≤ 9       becomes   x1 + 0.5 x2 + 0.1 x3 + S2 = 9       (S2 is a slack variable)
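A short sketch of this conversion is given below; the inequality directions follow the reconstruction above and should be read as assumptions, not as the book's original example. Each inequality row gets one new nonnegative column: -1 for a surplus variable, +1 for a slack variable.

    import numpy as np

    A = np.array([[2.0, 3.0, 1.0, 4.0],     # 2x1 + 3x2 + x3 + 4x4 >= 13
                  [1.0, 0.5, 0.1, 0.0]])    #  x1 + 0.5x2 + 0.1x3 <= 9
    b = np.array([13.0, 9.0])
    senses = [">=", "<="]

    extra = np.zeros((len(senses), len(senses)))
    for i, sense in enumerate(senses):
        extra[i, i] = -1.0 if sense == ">=" else 1.0   # surplus: -1, slack: +1
    A_eq = np.hstack([A, extra])   # rows now read A_eq @ [x1..x4, S1, S2] = b
    print(A_eq, b)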

Handling Unrestricted Variables


If s1 is an unrestricted variable (free in sign), then the following transformation of variables is used for s1: s1 = s1(+) - s1(-), where s1(+) ≥ 0 and s1(-) ≥ 0 are two new nonnegative variables.
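For instance (an illustrative term, not one from the text), a term 2 s1 appearing in any constraint or in the objective function is rewritten as 2 s1(+) - 2 s1(-); every variable in the resulting model is then nonnegative, as the standard form requires.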

A classical method of generating the solutions to the constraint equations is the Gauss-Jordan elimination scheme (see Appendix C for a review).
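A compact sketch of this idea is shown below (illustrative only; Appendix C gives the full treatment). It reduces the augmented matrix [A | b] so that the chosen basic columns become unit columns, which exposes one basic solution of the constraint equations.

    import numpy as np

    def gauss_jordan(A, b, basic_cols):
        # Reduce [A | b] so that each chosen basic column becomes a unit column.
        # Assumes the pivots encountered are nonzero (no row exchanges in this sketch).
        T = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
        for row, col in enumerate(basic_cols):
            T[row] = T[row] / T[row, col]               # make the pivot equal to 1
            for r in range(T.shape[0]):
                if r != row:
                    T[r] = T[r] - T[r, col] * T[row]    # eliminate the column in other rows
        return T

    # Hypothetical 2-equation, 4-variable system; choose x1 and x2 as the basic variables
    A = [[2.0, 3.0, 1.0, 4.0],
         [1.0, 0.5, 0.1, 0.0]]
    b = [13.0, 9.0]
    print(gauss_jordan(A, b, basic_cols=[0, 1]))
    # The last column gives the basic solution with the nonbasic variables x3 = x4 = 0.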

[Figure: optimal solution]

4.4.1 Minimization Problems

There are two approaches to solving a minimization problem:

Approach 1. Convert the minimization problem to an equivalent maximization problem by multiplying the objective function by (-1), and then use the simplex method as outlined for a maximization problem.

Approach 2. Apply the simplex method to the minimization problem directly; all seven steps of the simplex method can be used, with a minor modification in Step 4.

Modified Step 4. If all the coefficients in the c row are positive or zero, the current basic feasible solution is optimal. Otherwise, select the nonbasic variable with the lowest (most negative) value in the c row to enter the basis.
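A minimal sketch of the two approaches is given below, assuming the c-row coefficients of the current tableau are already available as a plain vector; the function names are illustrative, not from the text.

    import numpy as np

    # Approach 1: minimize c.x by maximizing (-c).x; negate the optimal value at the end.
    def negate_objective(c):
        return [-cj for cj in c]

    # Approach 2, Modified Step 4: for a minimization tableau the current basic feasible
    # solution is optimal when every c-row coefficient is zero or positive; otherwise the
    # nonbasic variable with the most negative c-row value enters the basis.
    def modified_step_4(c_row):
        c_row = np.asarray(c_row, float)
        if np.all(c_row >= 0):
            return None                       # optimal, no entering variable
        return int(np.argmin(c_row))          # column index of the entering variable

    print(negate_objective([4.0, 1.0]))       # [-4.0, -1.0]
    print(modified_step_4([2.0, 0.0, -3.0]))  # 2: the variable with coefficient -3 enters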

Degeneracy and Cycling

A basic feasible solution in which one or more of the basic variables are zero is called a degenerate basic feasible solution. In contrast, a basic feasible solution in which all the basic variables are positive is said to be nondegenerate.
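For instance (an illustrative system, not one from the text), if the constraints x1 + x2 + s1 = 4 and x1 + s2 = 4 are solved with x1 and x2 as the basic variables, the basic solution is x1 = 4, x2 = 0 (with s1 = s2 = 0 nonbasic); it is feasible but degenerate because the basic variable x2 is zero.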
Degeneracy may be present in the initial formulation of the problem itself (when one or more right-hand side constants are zero) or it may be introduced during simplex calculations whenever two or more rows tie for the minimum ratio. But a more important question is whether it is possible for the simplex method to go on indefinitely without improving the objective function. In fact, examples have been constructed to show that such a thing is theoretically possible.

In such situations the simplex method may get into an infinite loop and will fail to reach the optimal solution. This phenomenon is called classical cycling or simply cycling in the simplex algorithm. Fortunately, such cycling does not happen in practical problems in spite of the fact that many practical problems have degenerate solutions. However, in a small number of cases, practical problems have been observed to fail to converge on a computer. This is mainly due to certain characteristics of the computer software and the way in which the simplex method is programmed. Gass [1] has called this computer cycling. Such problems invariably converge when solved on a different computer system.

4.4.4 Use of Artificial Variables


First, the LP problem is converted to standard form such that all the variables are nonnegative, the constraints are equations, and all the right-hand side constants are nonnegative.
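A brief sketch of what typically follows is given below; the data and the rule used are assumptions for illustration, not the book's example. Rows that end up without a ready-made +1 unit column (for example, ≥ rows converted with a surplus variable) receive a nonnegative artificial variable, so that an identity starting basis is available for the simplex method.

    import numpy as np

    # Standard-form data (hypothetical), continuing the earlier example:
    # row 1 came from a >= constraint (surplus column -1), row 2 from a <= constraint (slack +1)
    A = np.array([[2.0, 3.0, 1.0, 4.0, -1.0, 0.0],
                  [1.0, 0.5, 0.1, 0.0,  0.0, 1.0]])
    b = np.array([13.0, 9.0])

    needs_artificial = [True, False]    # row 1 has no +1 unit column; row 2 has the slack S2
    art = np.zeros((A.shape[0], sum(needs_artificial)))
    k = 0
    for i, need in enumerate(needs_artificial):
        if need:
            art[i, k] = 1.0             # add artificial variable R1 to row 1
            k += 1
    A_start = np.hstack([A, art])
    print(A_start)                      # the S2 and R1 columns give the identity starting basis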

4.5.2 Computational Efficiency of the Simplex Method

The computational efficiency of the simplex method depends on (1) the number of iterations (basic feasible solutions) required to reach the optimal solution, and (2) the total computer time needed to solve the problem. Much effort has been spent on studying computational efficiency with regard to the number of constraints and the decision variables in the problem.

Empirical experience with thousands of practical problems shows that the number of iterations of a standard linear program with m constraints and n variables varies between m and 3m, the average being 2m. A practical upper bound for the number of iterations is 2(m + n), although occasionally problems have violated this bound. If every decision variable had a nonzero coefficient in every constraint, then the computational time would increase approximately in relation to the cube of the number of constraints, m^3.
