Linear Programming

This document introduces linear programming problems and their formulation. The goal is to maximize a linear objective function subject to linear constraints. This can be represented geometrically as finding the optimal point within a polytope feasible region defined by intersecting hyperplanes. The fundamental theorem states the optimal solution must be a vertex of the polytope. Computational methods are needed to efficiently solve large linear programs by exploiting this geometric structure.
Next: The Simplex Algorithm Up: Optimization Previous: Simulated annealing

Linear Programming
The basic problem in linear programming (LP) is to maximize a linear objective function

$f(x_1, \dots, x_n) = c_1 x_1 + \cdots + c_n x_n$

of $n$ variables (or minimize $-f$), under a set of $m$ linear constraints.

For example, the objective function could be the total profit, and the constraints could be some limited resources. Let $x_j$ $(j = 1, \dots, n)$ represent the quantities of $n$ different types of products, $c_j$ represent their unit prices, and $a_{ij}$ represent the consumption of the $i$th resource, of available quantity $b_i$, by the $j$th product. The goal is to maximize the profit

$f = \sum_{j=1}^n c_j x_j$

subject to the constraints imposed by the limited resources:

$\sum_{j=1}^n a_{ij} x_j \le b_i \quad (i = 1, \dots, m), \qquad x_j \ge 0 \quad (j = 1, \dots, n).$

The formulation of the LP problem above can also be more concisely represented in matrix form. We define

$\mathbf{x} = [x_1, \dots, x_n]^T, \quad \mathbf{c} = [c_1, \dots, c_n]^T, \quad \mathbf{b} = [b_1, \dots, b_m]^T, \quad \mathbf{A} = [a_{ij}]_{m \times n},$

then the optimization problem can be expressed as

$\max_{\mathbf{x}} \; f = \mathbf{c}^T \mathbf{x}, \quad \text{subject to} \quad \mathbf{A}\mathbf{x} \le \mathbf{b}, \;\; \mathbf{x} \ge \mathbf{0}.$

Here the linear inequality constraints and the non-negativity constraints are represented in matrix form as $\mathbf{A}\mathbf{x} \le \mathbf{b}$ and $\mathbf{x} \ge \mathbf{0}$, respectively. Note that the less (greater) than or equal to signs are to be understood as an element-wise relationship applied to the corresponding elements of the two vectors involved.
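The matrix form above can be sketched in Python with NumPy. The numbers here are purely illustrative (they match the small worked example later in these notes), and the helper name `is_feasible` is ours:

```python
import numpy as np

# Illustrative instance of the matrix form:
#   maximize f = c^T x  subject to  A x <= b,  x >= 0.
c = np.array([2.0, 3.0])                   # objective coefficients c_j
A = np.array([[2.0, 1.0],
              [6.0, 5.0],
              [2.0, 5.0]])                 # constraint coefficients a_ij
b = np.array([18.0, 60.0, 40.0])           # resource limits b_i

def is_feasible(x, A, b, tol=1e-9):
    """Check A x <= b and x >= 0, both element-wise."""
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol))

x = np.array([5.0, 6.0])
print(is_feasible(x, A, b), c @ x)         # feasibility flag and objective value
```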
Geometrically, the objective function $f = \mathbf{c}^T \mathbf{x}$, as the inner product of the variable vector $\mathbf{x}$ and the coefficient vector $\mathbf{c}$, is proportional to the projection of $\mathbf{x}$ onto the direction defined by the coefficient vector $\mathbf{c}$. If $\|\mathbf{c}\| = 1$, i.e., $\mathbf{c}$ is normalized, the objective function is exactly the projection of $\mathbf{x}$ onto $\mathbf{c}$. Without loss of generality, we can always assume $\mathbf{c}$ is normalized, as this does not change the maximizer of the objective function.

Each of the $m$ inequality constraints in $\mathbf{A}\mathbf{x} \le \mathbf{b}$ corresponds to a hyperplane in the n-D vector space perpendicular to its normal direction $\mathbf{a}_i$, the $i$th row of $\mathbf{A}$:

$\mathbf{a}_i^T \mathbf{x} = b_i \quad (i = 1, \dots, m).$

Also, each of the $n$ non-negativity conditions in $\mathbf{x} \ge \mathbf{0}$ corresponds to a hyperplane perpendicular to the $j$th standard basis vector $\mathbf{e}_j$ (all elements zero except the $j$th one being 1):

$\mathbf{e}_j^T \mathbf{x} = x_j = 0 \quad (j = 1, \dots, n).$

These $m + n$ hyperplanes define a polytope in the n-D space, called the feasible region $R$, in which the optimal solution must lie. In the most general case (none of the hyperplanes is parallel to any of the others, and no more than $n$ hyperplanes intersect at one point), every $n$-combination of these hyperplanes intersects at a point, and they form in total

$\binom{m+n}{n} = \frac{(m+n)!}{m! \, n!}$

intersections in the n-D space. The vertices of the polytope feasible region are the subset of these intersections at which all constraints in $\mathbf{A}\mathbf{x} \le \mathbf{b}$ and $\mathbf{x} \ge \mathbf{0}$ are satisfied.
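The count of candidate intersection points can be computed directly; for instance, with $m = 3$ constraints in $n = 2$ dimensions (the setting of the small example later in these notes):

```python
from math import comb

# Number of n-combinations of the m + n hyperplanes, i.e., the number of
# candidate intersection points C(m+n, n) in general position.
m, n = 3, 2
print(comb(m + n, n))   # C(5, 2) = 10
```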

Fundamental theorem of linear programming

The optimal solution of a linear programming problem formulated above is either a vertex of the polytope feasible region, or lies on a face (a hypersurface of the polytope) composed entirely of optimal solutions.
Proof:

Assume the optimal solution $\mathbf{x}^*$ is interior to the polytope feasible region $R$; then there must exist some $\epsilon > 0$ such that the hypersphere of radius $\epsilon$ centered at $\mathbf{x}^*$ is inside $R$. Evaluating the objective function at the point $\mathbf{x} = \mathbf{x}^* + \epsilon \mathbf{c}$ on the hypersphere (with $\mathbf{c}$ normalized), we get

$f(\mathbf{x}) = \mathbf{c}^T (\mathbf{x}^* + \epsilon \mathbf{c}) = \mathbf{c}^T \mathbf{x}^* + \epsilon \|\mathbf{c}\|^2 = f(\mathbf{x}^*) + \epsilon > f(\mathbf{x}^*),$

i.e., $\mathbf{x}^*$ cannot be the optimal solution as assumed. This contradiction indicates that an optimal solution must be either at a vertex or on the boundary surface of $R$. Q.E.D.
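The key step of the proof, that moving a small step $\epsilon$ along the normalized $\mathbf{c}$ raises the objective by exactly $\epsilon$, can be verified numerically (the vectors below are arbitrary illustrative choices):

```python
import numpy as np

# For normalized c, f(x0 + eps*c) - f(x0) = eps * ||c||^2 = eps,
# so the objective always increases along c from any interior point.
c = np.array([2.0, 3.0])
c = c / np.linalg.norm(c)          # normalize so ||c|| = 1
x0 = np.array([1.0, 1.0])          # any interior point (illustrative)
eps = 1e-3
gain = c @ (x0 + eps * c) - c @ x0
print(gain)                        # approximately eps
```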

Based on this theorem, a linear programming problem could be solved by exhaustively checking each of the $\binom{m+n}{n}$ intersections formed by $n$ of the $m + n$ hyperplanes to find (a) whether it is feasible (satisfying all $m + n$ constraints), and, if so, (b) whether the objective function is maximized at the point. Specifically, we choose $n$ of the $m + n$ equations for the constraints and solve this $n$-equation, $n$-unknown linear system to get the intersection point formed by the corresponding $n$ hyperplanes in the n-D space. This brute-force method is the most straightforward, but its computational complexity is high when $m$ and $n$, and therefore $\binom{m+n}{n}$, become large.
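The brute-force search just described can be sketched in Python with NumPy. The function name `brute_force_lp` is ours; the sample data is chosen to reproduce the worked example's table (maximum $f = 28$ at $(5, 6)$):

```python
import numpy as np
from itertools import combinations

def brute_force_lp(c, A, b, tol=1e-9):
    """Enumerate all C(m+n, n) intersections of n of the m+n hyperplanes
    (the m constraint hyperplanes a_i^T x = b_i plus the n coordinate
    hyperplanes x_j = 0), keep the feasible ones (the polytope vertices),
    and return the one maximizing c^T x."""
    m, n = A.shape
    G = np.vstack([A, np.eye(n)])            # all m+n hyperplanes: G x = h
    h = np.concatenate([b, np.zeros(n)])
    best_x, best_f = None, -np.inf
    for idx in combinations(range(m + n), n):
        Gs, hs = G[list(idx)], h[list(idx)]
        if abs(np.linalg.det(Gs)) < tol:     # no unique intersection point
            continue
        x = np.linalg.solve(Gs, hs)          # solve the n-by-n linear system
        if np.all(A @ x <= b + tol) and np.all(x >= -tol):   # feasible?
            f = c @ x
            if f > best_f:
                best_x, best_f = x, f
    return best_x, best_f

# Data matching the worked example in these notes:
A = np.array([[2.0, 1.0], [6.0, 5.0], [2.0, 5.0]])
b = np.array([18.0, 60.0, 40.0])
x_opt, f_opt = brute_force_lp(np.array([2.0, 3.0]), A, b)
print(x_opt, f_opt)
```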

Example:

$\max \; f(x, y) = 2x + 3y, \quad \text{subject to} \quad 2x + y \le 18, \;\; 6x + 5y \le 60, \;\; 2x + 5y \le 40, \;\; x \ge 0, \;\; y \ge 0.$

This problem has $n = 2$ variables with the same number of non-negativity constraints, and $m = 3$ linear inequality constraints. The straight lines corresponding to these $m + n = 5$ constraints form $\binom{5}{2} = 10$ intersections, out of which 5 are the vertices of the polygonal feasible region. The value of the objective function $f = 2x + 3y$ is proportional to the projection of the 2-dimensional variable vector $[x, y]^T$ onto the coefficient vector $\mathbf{c} = [2, 3]^T$. The goal here is to find a point in the polygonal feasible region with maximum projection onto the vector $\mathbf{c}$.

The 10 intersections are listed in the table below together with their feasibility (whether a vertex of the polygon or not) and the corresponding objective function value (with the projection $f / \|\mathbf{c}\| = f / \sqrt{13}$ in parentheses). We see that out of the 5 feasible solutions at the vertices of the polygon, the one at $(x, y) = (5, 6)$ is optimal, with maximum objective function value $f = 28$ (proportional to the projection $28 / \sqrt{13} \approx 7.77$).

 #    (x, y)           feasibility   f = 2x + 3y  (projection f/√13)

 1    (5.00, 6.00)     feasible      28  (7.77)
 2    (6.25, 5.50)     infeasible    29  (8.04)
 3    (7.50, 3.00)     feasible      24  (6.66)
 4    (20.00, 0.00)    infeasible    40  (11.1)
 5    (10.00, 0.00)    infeasible    20  (5.55)
 6    (9.00, 0.00)     feasible      18  (4.99)
 7    (0.00, 8.00)     feasible      24  (6.66)
 8    (0.00, 12.00)    infeasible    36  (9.98)
 9    (0.00, 18.00)    infeasible    54  (14.98)
10    (0.00, 0.00)     feasible       0  (0.00)

The goal of this linear programming problem can be intuitively understood as pushing the hyperplane $\mathbf{c}^T \mathbf{x} = f$, here a line in 2-D space, along its normal direction $\mathbf{c}$, from the origin to as far away as possible while still intersecting the feasible region, to eventually arrive at the vertex of the polytope feasible region that is farthest from the origin along $\mathbf{c}$.
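As a cross-check, the example can be handed to an off-the-shelf LP solver such as SciPy's `linprog` (which minimizes, so we negate the objective to maximize $f = 2x + 3y$):

```python
import numpy as np
from scipy.optimize import linprog

# The worked example: maximize 2x + 3y subject to the three inequality
# constraints and x, y >= 0. linprog minimizes, so pass -c.
c = np.array([2.0, 3.0])
A_ub = np.array([[2.0, 1.0], [6.0, 5.0], [2.0, 5.0]])
b_ub = np.array([18.0, 60.0, 40.0])
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal point and maximum objective value
```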

Homework:

Develop the code to find all intersections formed by the $m + n$ given hyperplanes in an n-D space in terms of their equations $\mathbf{A}\mathbf{x} \le \mathbf{b}$ and $\mathbf{x} \ge \mathbf{0}$, identify which of them are vertices of the polytope surrounded by these hyperplanes, and find the vertex corresponding to the optimal solution that maximizes $f = \mathbf{c}^T \mathbf{x}$ for a given $\mathbf{c}$.

Ruye Wang 2015-02-12
