Computing: Dynamic Programming Algorithms For The Zero-One Knapsack Problem
© by Springer-Verlag 1980
Abstract
Dynamic Programming Algorithms for the Zero-One Knapsack Problem. New dynamic programming
algorithms for the solution of the Zero-One Knapsack Problem are developed. Original recursive
procedures for the computation of the Knapsack Function are presented and the utilization of
bounds to eliminate states not leading to optimal solutions is analyzed. The proposed algorithms,
according to the nature of the problem to be solved, automatically determine the most suitable
procedure to be employed. Extensive computational results showing the efficiency of the new
and the most commonly utilized algorithms are given. The results indicate that, for difficult
problems, the algorithms proposed are superior to the best branch and bound and dynamic
programming methods.
1. Introduction
The Unidimensional Zero-One Knapsack Problem is defined by:

maximize P = Σ_{i=1}^{n} p_i x_i                (1)

subject to:

Σ_{i=1}^{n} w_i x_i ≤ W,                        (2)

x_i ∈ {0, 1},  i = 1, ..., n.                   (3)
* An earlier version of this paper was presented at the TIMS/ORSA Joint National Meeting in
San Francisco, May 1977.
30 P. Toth:
Without loss of generality we assume that W, all the profits p_i and all the weights w_i are positive integers. In addition, the following assumptions can be stated:

Σ_{i=1}^{n} w_i > W,                            (4)

w_i ≤ W,  i = 1, ..., n.                        (5)
The Zero-One Knapsack Problem is a well known problem and several efficient
algorithms have been proposed for its solution. These algorithms can be
subdivided into two classes: dynamic programming procedures (Horowitz and
Sahni [4], Ahrens and Finke [1]) and branch and bound methods (Kolesar [6],
Greenberg and Hegerich [3], Horowitz and Sahni [4], Nauss [11], Barr and
Ross [2], Zoltners [13], Martello and Toth [8]).
The computational performance of the branch and bound algorithms depends
largely on the type of data sets considered. Martello and Toth have shown
in [9] that the data sets where the values of p_i and w_i are independent are
much easier than those where a strong correlation between p_i and w_i exists. The
dynamic programming algorithms are less affected by the kind of data sets
and are generally more efficient than the enumeration methods for "hard"
problems, that is, for problems having a strong correlation between w_i and p_i.
Unfortunately, for this kind of problem, the storage requirements of dynamic
programming procedures grow steeply with the size of W, so the only hard
problems which can be solved in a reasonable amount of time are those having
moderate values of W.
The number of variables of the problem defined by (1), (2) and (3) can be
decreased by utilizing a reduction procedure developed by Ingargiola and
Korsh [5] and improved by Toth [12]. In many cases, the dynamic programming
algorithms benefit greatly from the application of such a reduction
procedure, because not only n but also the value of W is decreased. However,
this procedure gives only a small contribution to the solution of the hard problems
because, as shown in [12], for such problems only a small reduction in the
number of the variables and in the size of the knapsack can be obtained.
In this paper several dynamic programming procedures for the solution of the
Zero-One Knapsack Problem are presented. In addition, the utilization of
upper bounds, inserted in the procedures in order to eliminate the states not
leading to optimal solutions, is analyzed. The proposed algorithms, according to
the nature of the problem to be solved, automatically determine the most suitable
procedure to be employed.
An extensive computational analysis is performed in order to evaluate the
efficiency of the new algorithms and that of the most commonly utilized branch
and bound and dynamic programming techniques.
v = min { Σ_{i=1}^{m} w_i, W };
b = 2^{m-1};
5. Set z = v - 1.
6. If z < w_m, return.
7. Set y = z - w_m and f = F_y + p_m. If F_z < f, set F_z = f and X_z = X_y + b; in any case, set z = z - 1 and go to 6.
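The steps of Procedure P1 preceding step 5 do not survive in this copy, so the following Python sketch of the stage recursion is partly hypothetical: it assumes the usual initialization F_y = 0, X_y = 0 for all y, and it scans from z = v downward (rather than from v - 1) so that the heaviest reachable state is also updated; the function name and the dense-array representation are illustrative, not the paper's.

```python
def knapsack_function(p, w, W):
    """Stage-by-stage computation of the knapsack function in the spirit
    of Procedure P1.  After stage m, F[z] holds the best profit over
    items 1..m with total weight at most z (for z up to the weight
    reachable at that stage), and X[z] encodes the corresponding partial
    solution as a bit string (bit m-1 set when item m is taken)."""
    n = len(p)
    F = [0] * (W + 1)                     # knapsack function values
    X = [0] * (W + 1)                     # partial solutions as bit strings
    total_w = 0
    for m in range(1, n + 1):             # stage m introduces item m
        w_m, p_m, b = w[m - 1], p[m - 1], 1 << (m - 1)
        total_w += w_m
        v = min(total_w, W)
        for z in range(v, w_m - 1, -1):   # downward scan: item m used at most once
            y = z - w_m
            f = F[y] + p_m
            if F[z] < f:
                F[z], X[z] = f, X[y] + b
    return F, X

# the instance of the paper's example (n = 5, W = 191)
F, X = knapsack_function([61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191)
```

On the example this reproduces the maximum profit F[191] = 139.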
In many cases it is possible to reduce the number of states considered at a
given stage by eliminating all the states (F_z, X_z) for which there exists at least one
state (F_y, X_y) having F_y ≥ F_z and y < z. This technique has been utilized by
Horowitz and Sahni [4] and Ahrens and Finke [1]. For its application, it is
necessary to develop a new procedure for the computation of all the undominated
states at the m-th stage. The following variables are assumed to be known
before execution of the procedure at the m-th stage:
s_{m-1} = number of states at stage (m-1);
b = 2^{m-1};
L1_j = total weight of the j-th state, for j = 1, ..., s_{m-1};
F1_j = total profit of the j-th state, for j = 1, ..., s_{m-1};
X1_j = {x_{m-1}, x_{m-2}, ..., x_1}, for j = 1, ..., s_{m-1};
where the values x_i represent the partial solution of the j-th state, that is

L1_j = Σ_{i=1}^{m-1} w_i x_i  and  F1_j = Σ_{i=1}^{m-1} p_i x_i.

After execution of the procedure, the number of states, the total weights, the
total profits and the sets of partial solutions relative to the m-th stage are
represented, respectively, by s_m, (L2_k), (F2_k) and (X2_k). The sets (X1_j) and
(X2_k) are expressed as bit strings. The vectors (L1_j), (L2_k), (F1_j) and (F2_k) are
ordered according to ascending values.
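Since the body of Procedure P2 is not reproduced above, the dominance step it describes can be sketched as follows (a hypothetical Python rendering, not the paper's procedure): at each stage the old state list is merged with a copy of itself shifted by (w_m, p_m), and a state is kept only if its profit strictly exceeds that of every lighter state.

```python
def undominated_states(p, w, W):
    """Computes, for each stage m, the list of undominated states
    (L, F, X) = (total weight, total profit, bit-string solution),
    sorted by ascending weight: a state is dominated when some state of
    no greater weight has at least equal profit."""
    states = [(0, 0, 0)]                       # the empty solution
    per_stage = []
    for m, (w_m, p_m) in enumerate(zip(w, p)):
        shifted = [(L + w_m, F + p_m, X | (1 << m))
                   for (L, F, X) in states if L + w_m <= W]
        kept = []
        for L, F, X in sorted(states + shifted):
            if not kept or F > kept[-1][1]:    # strictly better than all lighter states
                kept.append((L, F, X))
        states = kept
        per_stage.append(kept)
    return per_stage

stages = undominated_states([61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191)
best_profit = max(F for _, F, _ in stages[-1])
```

On the paper's example the stages contain 1, 3, 7, 8 and 13 states beyond the empty one, i.e. the 32 undominated states of Fig. 1, and the best final state has profit 139.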
PROCEDURE P2
Example:
Let n = 5, W = 191,
(p_i) = (61, 45, 33, 61, 13),
(w_i) = (80, 60, 44, 87, 21).
Fig. 1 gives the total weights (L_j) and the total profits (F_j) of the undominated
states at each stage m. The optimal solution of the problem is (x_i) = (1, 1, 1, 0, 0)
with a maximum profit P = 139. The total number of the undominated states
is 32; applying procedure P1 the number of states would be 793.
Fig. 1
the disadvantage that the branch and bound procedure is always executed, even
if the storage requirements are not excessive and therefore its execution could be
avoided.
Example:
Let us consider the previous example in order to illustrate the Horowitz and
Sahni algorithm. Fig. 2 gives the undominated states at each stage of the two
subproblems (q = 2). The total number of states is 15.
Fig. 2
a) If a state, defined at the m-th stage, has a total weight L such that

L < A_m = W - Σ_{i=m+1}^{n} w_i,

this state will never be utilized in the stages following the m-th one.

b) If a state, defined at the m-th stage, has a total weight L such that

B_m = W - min_{m<i≤n} {w_i} < L < W,

the state will never be utilized in the stages following the m-th one.
The following changes can be made to Procedure P1:

Replace Steps 2. and 5. with:
Set z = min {v - 1, B_m}.

Replace Step 6. with:
If z < max {w_m, A_m}, return.
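As a small illustration (assuming A_m = max{0, W - Σ_{i>m} w_i}, consistent with criterion a) and with the behaviour described in section 8, and taking B_n = W by convention since no item remains after the last stage), the two thresholds can be computed as:

```python
def elimination_bounds(w, W):
    """Thresholds for discarding states that will never be utilized:
    at stage m, states of weight L < A[m] are too light ever to fill
    the knapsack, and states with B[m] < L < W are too heavy for any
    remaining item to fit.  A_m = max(0, W - sum_{i>m} w_i) and
    B_m = W - min_{i>m} w_i; B_n = W is an assumed convention."""
    n = len(w)
    A, B = [], []
    for m in range(1, n + 1):
        tail = w[m:]                   # weights of items m+1 .. n
        A.append(max(0, W - sum(tail)))
        B.append(W - min(tail) if tail else W)
    return A, B

A, B = elimination_bounds([80, 60, 44, 87, 21], 191)
```

For the paper's example this gives A = (0, 39, 83, 170, 191) and B = (170, 170, 170, 170, 191): in the last stages only heavy states survive, as observed in section 8.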
Example:
For the example previously considered, Fig. 3 gives the results obtained by
elimination of the unutilized states. The total number of states is 12.

Fig. 3
5. A Fathoming Criterion
Theorem 1: Assume

p_1/w_1 ≥ p_2/w_2 ≥ ... ≥ p_n/w_n

and let

l = largest integer for which Σ_{i=1}^{l} w_i ≤ W;

c̄ = W - Σ_{i=1}^{l} w_i;

B1 = Σ_{i=1}^{l} p_i + ⌊c̄ p_{l+2}/w_{l+2}⌋;

B2 = Σ_{i=1}^{l} p_i + ⌊p_{l+1} - (w_{l+1} - c̄) p_l/w_l⌋.

Then

UB = max {B1, B2}

is an upper bound of the solution to the problem given by (1), (2) and (3).
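Assuming B1 and B2 are the two bounds of Martello and Toth [8] (B1 leaves item l+1 out and fills the residual capacity at item (l+2)'s profit ratio; B2 forces item l+1 in and removes the excess weight at item l's ratio), UB can be sketched in Python as:

```python
import math

def upper_bound(p, w, W):
    """UB = max(B1, B2) for items sorted by non-increasing p_i/w_i,
    with l = largest index such that w_1 + ... + w_l <= W and
    cap = W - (w_1 + ... + w_l) the residual capacity.  Hypothetical
    rendering based on the bound of Martello and Toth [8]."""
    n = len(p)
    tot_p, cap, l = 0, W, 0
    while l < n and w[l] <= cap:
        cap -= w[l]
        tot_p += p[l]
        l += 1                            # items 1..l fit greedily
    if l == n:
        return tot_p                      # every item fits: the bound is exact
    # B1: leave item l+1 out, fill the residue at item (l+2)'s ratio
    b1 = tot_p + (cap * p[l + 1] // w[l + 1] if l + 1 < n else 0)
    # B2: force item l+1 in, remove the excess weight at item l's ratio
    b2 = tot_p + (p[l] - math.ceil((w[l] - cap) * p[l - 1] / w[l - 1])
                  if l >= 1 else 0)
    return max(b1, b2)

# the paper's example (already ratio-sorted): l = 3, residue 7
ub = upper_bound([61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191)
```

Here B1 = 139 + ⌊7·13/21⌋ = 143 and B2 = 139 + ⌊61 - 80·33/44⌋ = 140, so UB = 143, consistent with the optimum P = 139 ≤ UB.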
Obviously the efficiency of the fathoming criterion tends to increase when high
values of m are considered since, as m grows, the lower bound LB_m gives a better
approximation to the maximum profit while, at the same time, because of the
ordering of the variables according to decreasing values of the ratios p_i/w_i,
the upper bounds tend to decrease. Besides, at the same stage m, the states
having high values of the total weight are fathomed more easily than those
with low total weights, because the sum (total profit) + (upper bound) generally
decreases as the total weight grows. In order to reduce the computing time
required for the evaluation of the upper bounds corresponding to all the
states defined at the m-th stage (1 ≤ m ≤ n), the following procedure can be utilized
after execution of Procedure P2.
PROCEDURE P4
Example:
The application of the fathoming criterion to the example previously considered
gives the results shown in Fig. 4. An asterisk indicates the fathomed states.
The total number of states is 8.
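The body of Procedure P4 is not reproduced above, so the following is only a hypothetical reading of how the fathoming criterion combines with the dominance computation: a state (L, F) at stage m is discarded when F plus a fractional greedy bound over the remaining items (valid because the items are ratio-sorted) cannot reach the best completed profit found so far. The function names and the choice of bound are illustrative; the exact state counts of Fig. 4 are not reproduced.

```python
def fathomed_dp(p, w, W):
    """Dominance DP with a fathoming test: a state whose profit plus an
    upper bound over the remaining items cannot beat the incumbent
    lower bound is dropped.  Items are assumed sorted by
    non-increasing p_i/w_i so the fractional greedy bound is valid."""
    n = len(p)

    def bound(m, cap):
        # fractional greedy bound over the items after stage m
        tot = 0
        for i in range(m, n):
            if w[i] <= cap:
                cap -= w[i]
                tot += p[i]
            else:
                return tot + p[i] * cap // w[i]
        return tot

    states, lb = [(0, 0)], 0
    for m in range(n):
        shifted = [(L + w[m], F + p[m]) for L, F in states if L + w[m] <= W]
        for _, F in shifted:
            lb = max(lb, F)                 # every state is a feasible solution
        kept = []
        for L, F in sorted(states + shifted):
            if kept and F <= kept[-1][1]:
                continue                    # dominated by a lighter state
            if F + bound(m + 1, W - L) < lb:
                continue                    # fathomed: cannot beat the incumbent
            kept.append((L, F))
        states = kept
    return max(F for _, F in states)
```

On the paper's example this returns the optimum 139; an optimal state is never fathomed, since its profit plus any valid upper bound is at least the optimum, which in turn is at least the incumbent lower bound.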
6. A New Dynamic Programming Algorithm
NS1_m = min { Σ_{i=1}^{m} w_i, B_m } - A_m + 1
The solution to the knapsack problem defined by (1), (2), (3), (4) and (5) can
be given by:

P = Σ_{i=1}^{n} p_i - P̄,

where P̄ is the optimal solution of the transformed problem (with x̄_i = 1 - x_i):

minimize P̄ = Σ_{i=1}^{n} p_i x̄_i               (8)

subject to:

Σ_{i=1}^{n} w_i x̄_i ≥ W̄ = Σ_{i=1}^{n} w_i - W.  (9)
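The transformation can be checked directly (a hedged sketch: the DP used here is a plain textbook recursion, not procedure P2, and all names are illustrative):

```python
def max_knapsack(p, w, W):
    """Plain 0-1 knapsack DP over capacities, for checking only."""
    F = [0] * (W + 1)
    for p_i, w_i in zip(p, w):
        for z in range(W, w_i - 1, -1):
            F[z] = max(F[z], F[z - w_i] + p_i)
    return F[W]

def max_via_complement(p, w, W):
    """Section-7 transformation: choose the items to leave out so that
    their total weight is at least W_bar = sum(w) - W and their total
    profit P_bar is minimum; then recover P = sum(p) - P_bar."""
    tot_w = sum(w)
    W_bar = tot_w - W
    INF = float("inf")
    G = [0] + [INF] * tot_w            # G[v] = min profit of a left-out set of weight v
    for p_i, w_i in zip(p, w):
        for v in range(tot_w, w_i - 1, -1):
            if G[v - w_i] + p_i < G[v]:
                G[v] = G[v - w_i] + p_i
    P_bar = min(G[W_bar:])             # any left-out weight >= W_bar is feasible
    return sum(p) - P_bar

# the paper's example: W_bar = 292 - 191 = 101
direct = max_knapsack([61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191)
via = max_via_complement([61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191)
```

Both give P = 139 on the example; since W̄ = 101 < W = 191, the transformed problem generates fewer states (23 against 32).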
Example:
For the example previously considered, Fig. 5 gives the results corresponding
to the application of procedure P2 to the transformed problem (W̄ = 292 - 191 =
= 101 < W = 191). The total number of states is 23.
Fig. 5
8. Computational Results
The performance of the new algorithms proposed in the previous sections
has been compared with that of the most efficient branch and bound and
dynamic programming methods. The following algorithms have been considered:
BBHS: branch and bound algorithm of Horowitz and Sahni [4];
BBMT: branch and bound algorithm of Martello and Toth [8];
DPHS: dynamic programming algorithm of Horowitz and Sahni [4];
DPT1: algorithm of section 6 not utilizing the fathoming criterion (with R = 4);
DPT2: algorithm of section 6 utilizing the fathoming criterion (with R = 6);
DPT3: algorithm DPT2 applied to the problem obtained according to the
transformation rule of section 7.
All the algorithms have been coded in FORTRAN IV and run on a CDC 6600
computer after execution of the reduction procedure presented in [12].
Several uniformly random data sets have been considered to compare the
efficiency of the above-mentioned algorithms. The data sets are described in
Table 1; for each of them, three values of n have been considered (n = 50, 100,
200). The columns of Table 1 give the average times and, in parentheses, the
maximum times relative to the six algorithms; all the times are expressed in
milliseconds and include the times required for the sorting of the variables and
executing the reduction procedure. Each value given in Table 1 has been
computed over 200 problems. Whenever the time-limit assigned to each data
set (260 seconds) was not sufficient to solve all the 600 problems, the average
and maximum times are given only if the number of solved problems is
significant; otherwise, only this number is given.
The results given in Table 1 show that for "not hard" problems (data sets
A, B, C, I, J, K, L), the branch and bound algorithms, and mainly BBMT, are
more efficient than the dynamic programming methods. On the contrary, when
"hard" problems are considered (data sets D, E, F, G, H) the dynamic programming
procedures become much faster than the branch and bound algorithms.
Of the dynamic programming methods, the most efficient are clearly DPT1
and DPT2. The utilization of the fathoming criterion (algorithm DPT2) leads
to an improvement for data sets J and K and above all for data sets G and H;
on the contrary for data sets D and E algorithm DPT1 is better than DPT2.
The excellent performance of DPT2 in solving data sets G and H probably
depends on the fact that the elements having the largest values of w_i are
considered in the first stages, so good values of the lower bounds LB_m are
obtained early; in addition, the last stages, considering the elements having the
smallest values of w_i, have high values of A_m, so only states with high total
weights are present and, consequently, the corresponding upper bounds are
generally low.
The bad performance of the dynamic programming methods for data sets
J and K probably depends on the fact that the reduction procedure does
not work well with such kinds of data sets, as has been shown in [12], and
so problems with a high number of variables are to be solved. The branch
and bound algorithms are not affected by this phenomenon because, as has
been shown in [9], their performance depends much less than that of the
dynamic programming methods on the number of variables remaining after
execution of the reduction procedure.
The application of the transformation rule of section 7 (algorithm DPT3)
gives an improvement for data sets A, B, E, H, J and K where, after execution
of the reduction procedure, the value of W is either generally (data sets E and
H) or occasionally (data sets A, B, J and K) greater than the value of W̄.
References
[1] Ahrens, J. H., Finke, G.: Merging and sorting applied to the zero-one Knapsack problem.
Operations Research 23 (1975).
[2] Barr, R. S., Ross, G. T.: A linked list data structure for a binary Knapsack algorithm. Research
Report CCS 232, Center for Cybernetic Studies, University of Texas (1975).
[3] Greenberg, H., Hegerich, R. L.: A branch search algorithm for the Knapsack problem.
Management Science 16 (1970).
[4] Horowitz, E., Sahni, S.: Computing partitions with applications to the Knapsack problem.
J. ACM 21 (1974).
[5] Ingargiola, G. P., Korsh, J. F.: A reduction algorithm for zero-one single Knapsack problems.
Management Science 20 (1973).
[6] Kolesar, P. J.: A branch and bound algorithm for the Knapsack problem. Management
Science 13 (1967).
[7] Magazine, M., Nemhauser, G., Trotter, L.: When the greedy solution solves a class of Knapsack
problems. Operations Research 23 (1975).
[8] Martello, S., Toth, P.: An upper bound for the zero-one Knapsack problem and a branch
and bound algorithm. European Journal of Operational Research 1 (1977).
[9] Martello, S., Toth, P.: The 0-1 Knapsack problem, in: Combinatorial optimization (Christofides,
N., Mingozzi, A., Sandi, C., Toth, P., eds.). London: J. Wiley 1979.
[10] Morin, T. L., Marsten, R. E.: Branch and bound strategies for dynamic programming. Operations
Research 24 (1976).
[11] Nauss, R. M.: An efficient algorithm for the 0-1 Knapsack problem. Management Science 23
(1976).
[12] Toth, P.: A new reduction algorithm for 0-1 Knapsack problems. Presented at the ORSA/TIMS
Joint National Meeting, Miami (November 1976).
[13] Zoltners, A. A.: A direct descent binary Knapsack algorithm. J. ACM 25 (1978).