
LINEAR PROGRAMMING

1. Meaning of linear programming

A linear programming problem may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Here is a simple example.

Find numbers x1 and x2 that maximize the sum x1 + x2 subject to the constraints x1 ≥ 0, x2 ≥ 0, and

x1 + 2x2 ≤ 4
4x1 + 2x2 ≤ 12
−x1 + x2 ≤ 1

In this problem there are two unknowns and five constraints. All the constraints are inequalities, and they are all linear in the sense that each involves an inequality in some linear function of the variables. The first two constraints, x1 ≥ 0 and x2 ≥ 0, are special.

These are called nonnegativity constraints and are often found in linear programming problems. The other constraints are then called the main constraints. The function to be maximized (or minimized) is called the objective function. Here, the objective function is

x1 + x2

Since there are only two variables, we can solve this problem by graphing the set of points in the plane that satisfy all the constraints (called the constraint set) and then finding which point of this set maximizes the value of the objective function. Each inequality constraint is satisfied by a half-plane of points, and the constraint set is the intersection of all the half-planes. In the present example, the constraint set is the five-sided figure shaded in Figure 1.
We seek the point (x1, x2) that achieves the maximum of x1 + x2 as (x1, x2) ranges over this constraint set. The function x1 + x2 is constant on lines with slope −1, for example the line x1 + x2 = 1, and as we move this line further from the origin up and to the right, the value of x1 + x2 increases. Therefore, we seek the line of slope −1 that is farthest from the origin and still touches the constraint set. This occurs at the intersection of the lines x1 + 2x2 = 4 and 4x1 + 2x2 = 12, namely, (x1, x2) = (8/3, 2/3). The value of the objective function there is (8/3) + (2/3) = 10/3.
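The arithmetic at this optimum can be checked directly. A short Python snippet (a sketch, not part of the original text) solves the 2×2 system by elimination and verifies that the resulting point satisfies all five constraints:

```python
# Solve the 2x2 system at the optimum:
#   x1 + 2*x2  = 4
#   4*x1 + 2*x2 = 12
# Subtracting the first equation from the second gives 3*x1 = 8.
x1 = 8 / 3
x2 = (4 - x1) / 2          # back-substitute into x1 + 2*x2 = 4

# All five constraints hold at this point:
assert x1 >= 0 and x2 >= 0
assert x1 + 2 * x2 <= 4 + 1e-12
assert 4 * x1 + 2 * x2 <= 12 + 1e-12
assert -x1 + x2 <= 1

print(x1 + x2)             # objective value 10/3 ≈ 3.333...
```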

2. Methods of solving linear programming problems


2.1 The simplex tableau and pivot rules may be used to find the solution to the general problem, as follows.

To solve a standard maximization problem using the simplex method, we take the
following steps:

Step 1. Convert to a system of equations by introducing slack variables to turn the constraints into equations, and rewriting the objective function in standard form.

Step 2. Write down the initial tableau.

Step 3. Select the pivot column: Choose the negative number with the largest
magnitude in the bottom row (excluding the rightmost entry). Its column is the pivot
column. (If there are two candidates, choose either one.) If all the numbers in the
bottom row are zero or positive (excluding the rightmost entry), then you are done:
the basic solution maximizes the objective function (see below for the basic solution).

Step 4. Select the pivot in the pivot column: The pivot must always be a
positive number. For each positive entry b in the pivot column, compute the ratio a/b,
where a is the number in the Answer column in that row. Of these test ratios, choose
the smallest one. The corresponding number b is the pivot.

Step 5. Use the pivot to clear the column in the normal manner (taking care to follow the exact prescription for formulating the row operations described in the tutorial on the Gauss-Jordan method), and then relabel the pivot row with the label from the pivot column. The variable originally labelling the pivot row is the departing or exiting variable, and the variable labelling the column is the entering variable.
Step 6. Go to Step 3.

In summary, the general simplex method consists of three stages. In the first
stage, all equality constraints and unconstrained variables are pivoted (and removed
if desired). In the second stage, one uses the simplex pivoting rules to obtain a
feasible solution for the problem or its dual. In the third stage, one improves the
feasible solution according to the simplex pivoting rules until the optimal solution is
found.
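Steps 1 to 6 above can be sketched in Python for the standard maximization case. This is a minimal illustration, assuming all constraints have the form a·x ≤ b with b ≥ 0 (so the slack variables give an initial basic feasible solution) and omitting the first-stage handling of equality constraints and unconstrained variables described in the summary:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b_i >= 0),
    following the tableau pivot rules of Steps 1-6."""
    m, n = len(A), len(c)
    # Steps 1-2: initial tableau. Columns: x_1..x_n, one slack per
    # row, and the rightmost "Answer" column.
    tab = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
           for i in range(m)]
    # Bottom row: objective in standard form (-c | 0 ... 0 | 0).
    tab.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))      # slack variables start in the basis
    while True:
        # Step 3: pivot column = most negative entry in the bottom row.
        bottom = tab[-1][:-1]
        col = min(range(len(bottom)), key=lambda j: bottom[j])
        if bottom[col] >= -1e-9:
            break                      # all entries nonnegative: optimal
        # Step 4: pivot row = smallest ratio Answer / positive entry.
        ratios = [(tab[i][-1] / tab[i][col], i)
                  for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        # Step 5: clear the pivot column (Gauss-Jordan row operations)
        # and relabel the pivot row with the entering variable.
        p = tab[row][col]
        tab[row] = [v / p for v in tab[row]]
        for i in range(m + 1):
            if i != row and abs(tab[i][col]) > 1e-12:
                f = tab[i][col]
                tab[i] = [v - f * pv for v, pv in zip(tab[i], tab[row])]
        basis[row] = col               # Step 6: repeat from Step 3
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tab[i][-1]
    return x, tab[-1][-1]              # basic solution and optimal value

# The example from Section 1: maximize x1 + x2.
x, value = simplex([1, 1], [[1, 2], [4, 2], [-1, 1]], [4, 12, 1])
print(x, value)                        # (8/3, 2/3) with objective 10/3
```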

2.2 Solving a linear programming problem using the graphical method

The graphical method for solving linear programming problems in two unknowns is as follows.

1. Graph the feasible region.
2. Compute the coordinates of the corner points.
3. Substitute the coordinates of the corner points into the objective function to
see which gives the optimal value. This point gives the solution to the
linear programming problem.
4. If the feasible region is not bounded, this method can be misleading:
optimal solutions always exist when the feasible region is bounded, but
may or may not exist when the feasible region is unbounded.
5. If the feasible region is unbounded, the objective function is being minimized, and its coefficients are non-negative, then a solution exists, and this method yields it.

To determine if a solution exists in the general unbounded case:

1. Bound the feasible region by adding a vertical line to the right of the rightmost corner point, and a horizontal line above the highest corner point.
2. Calculate the coordinates of the new corner points you obtain.
3. Find the corner point that gives the optimal value of the objective function.
4. If this optimal value occurs at a point of the original (unbounded) region, then
the LP problem has a solution at that point. If not, then the LP problem has no
optimal solution.
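For bounded two-variable problems, the corner-point recipe above can be sketched in Python. The helper below (a hypothetical `corner_points`, not from the original text) intersects every pair of constraint boundary lines, keeps the feasible intersections, and evaluates the objective at each, reproducing the solution of the example from Section 1:

```python
from itertools import combinations

def corner_points(A, b):
    """Corner points of {x in R^2 : A x <= b}: intersect every pair of
    boundary lines and keep the intersections that are feasible."""
    pts = []
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                       # parallel boundaries
        x = (b1 * a2[1] - b2 * a1[1]) / det   # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            pts.append((x, y))
    return pts

# All five constraints of the running example, nonnegativity included,
# written as a_i . x <= b_i:
A = [[-1, 0], [0, -1], [1, 2], [4, 2], [-1, 1]]
b = [0, 0, 4, 12, 1]

# Step 3 of the method: pick the corner with the best objective value.
best = max(corner_points(A, b), key=lambda p: p[0] + p[1])
print(best, best[0] + best[1])             # (8/3, 2/3) and 10/3, as before
```

The five feasible intersections found here are exactly the vertices of the five-sided figure described in Section 1.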

3.0 Application of the linear programming model in solving problems encountered in daily life

In everyday life people are interested in knowing the most efficient way of carrying out a task or achieving a goal. For example, a farmer might want to know how much of each crop to plant during a season in order to maximise yield, or a stockbroker might want to know how much to invest in stocks in order to maximise profit. These are examples of optimisation problems, where by optimising we mean finding the maxima or minima of a function.

More formally, linear programming is a technique for the optimization of a linear objective function subject to linear equality and linear inequality constraints. Given a polytope and a real-valued affine function defined on this polytope, a linear programming method will find a point on the polytope where this function has the smallest (or largest) value, if such a point exists, by searching through the polytope's vertices.

Besides that, linear programming may be applied to various fields of study. It is used most extensively in business and economics, but can also be utilized for some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proved useful in modelling diverse types of problems in planning, routing, scheduling, assignment, and design. Linear programming is a considerable field for optimization for several reasons: many practical problems in operations research can be expressed as linear programming problems.

Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming is heavily used in microeconomics and company management, for planning, production, transportation, technology, and other issues.

Though modern management issues are ever changing, most companies would like to maximize profits or minimize costs with limited resources. Therefore, many issues may boil down to linear programming problems.
History of Linear Programming

Linear Programming is one of the youngest areas of mathematics. In 1939, the Russian mathematician Leonid Vitalyevich Kantorovich did some work on optimization problems. However, the modern concept of Linear Programming seems to have been formulated in 1947, when George Bernard Dantzig and his colleagues documented a noticeable pattern in a series of military problems.

Dantzig was working for the US Department of the Air Force on a program called
Project SCOOP (Scientific Computation of Optimum Programs), after World War II.
The military needed to organize and expedite supplies to troops. “Programming” was a military term that, at that time, referred to plans or schedules for training, logistical supply, or deployment of men. Dantzig mechanized the planning process by introducing “programming in a linear structure”, where “programming” has the military meaning explained above.

Dantzig devised what is commonly known as the “simplex method” for solving
linear programming problems. This method was created, in part, to employ the use
of computers (which were in their beginning stages) to do numerous calculations
quickly and accurately. Dantzig (and colleagues) spent the better part of a year
assessing thousands of situations taken from their wartime experience and deciding
whether this model could be used to formulate realistic scheduling problems. They
reviewed each of the “ground rules” for planning and scheduling individually, and showed that nearly every one could be converted into linear programming format (with the exception of those involving discreteness and non-convexity).

It was actually in the summer of 1947 that Dantzig set out to create an algorithm
for solving real planning problems (under pressure from the Air Force). The first
thing he observed was that these problems could be represented graphically,
specifically as a polyhedral set. He noticed that the solutions improved as he moved
along the edges of the convex body from one vertex to the next, eventually reaching
the “optimal” solution. At that time, he wasn’t using an objective function and soon
felt that this method was extremely inefficient, so rejected the idea in search of
something better.

In June of this same year, Dantzig got in touch with T.C. Koopmans of the Cowles Foundation in Chicago. At first Koopmans seemed unresponsive to the presentation, but suddenly became very interested in this linear programming model, as if he suddenly saw its significance to economic theory. In fact, Koopmans was eventually awarded the Nobel Prize in 1975, the culmination of heading a group of economists who developed the theory of allocation of resources and its correlation to linear programming.

Leonid Hurwicz, a student of Koopmans, helped Dantzig with what they called “climbing up the beanpole.” It was the predecessor to the simplex algorithm. It was enhanced by eliminating the convexity constraint and assuming the variables summed to unity. However, Dantzig still felt that, though this model was more efficient, it was probably very impractical.

Dantzig consulted with John von Neumann about solution procedures the following fall. Surprisingly (having searched in vain for any literature on the subject), he received a lecture on the theory of linear programs. It turned out that von Neumann (and Oskar Morgenstern) had just completed a text on game theory, and von Neumann suspected that the theory behind Dantzig’s model was an analogue to the one they had developed for games.

Meanwhile, Dantzig’s group at the Pentagon was testing his simplex method and finding that it worked very well. According to Dantzig, this was unforeseen; he hadn’t trusted his intuition in higher dimensions. He thought “that the procedure would require too many steps wandering from one adjacent vertex to the next.” In fact, most of the time, his technique solved problems with m equations in 2m or 3m steps: “truly amazing,” thought Dantzig.

Dantzig wrote a book on his work (over a nine-year period), entitled Linear Programming and Extensions, published in 1963. He also received many awards for his work.
Linear Programming has a wide range of practical applications including
business, economics, scheduling, agriculture, medicine, natural science, social
science, transportation, and even nutrition.
