A Tight Lower Bound for On-line Monotonic List Labeling

Paul F. Dietz, Joel I. Seiferas, and Ju Zhang

November 16, 1993; revised April 21, 1994

Abstract. Maintaining a monotonic labeling of an ordered list during the insertion of n items requires Ω(n log n) individual relabelings, in the worst case, if the number of usable labels is only polynomial in n. This follows from a lower bound for a new problem, prefix bucketing.

1. Introduction

The on-line list-labeling problem can be viewed as one of linear density control. A sequence of n distinct items from some dense, linearly ordered set, such as the real numbers, is received one at a time, in no predictable order. Using "labels" from some discrete linearly ordered set of adequate but limited cardinality, the problem is to maintain an assignment of labels to the items received so far, so that the labels are ordered in the same way as the items they label. In order to make room for the next item received, it might be necessary to change the labels assigned to some of the items previously received. The cost is the total number of labelings and relabelings performed.

There are practical applications of on-line list labeling to the design of efficient data structures and algorithms. List labeling has been an especially fruitful approach to the order maintenance problem [Di82, Ts84, DS87, DZ90]. This problem involves the insertion and deletion of items in a linear list, and responses to on-line queries about the relative order of items currently in the list. A low-cost on-line list-labeling algorithm provides an efficient solution (or sometimes a component of an even more efficient solution) to the order maintenance problem, provided its computational overhead is also low. For further discussion of this and other specific applications, see the earlier papers by Dietz and his collaborators [Di82, DS87, DZ90].
In addition, it seems likely that our problem and related problems of dynamic density control will prove fundamental to the spatially structured maintenance in bounded media of changing data, such as text and pictures on a computer screen [Zh93].

When the number of labels is at least n^{1+ε} for some ε > 0, it is possible to limit the worst-case cost for on-line labeling of n items to O(n log n) [Di82, Ts84, DS87]. The analyses are subtle; but the best of the algorithms are both simple and fast, and hence practically useful. In this paper we show that the upper bound is tight, and in fact that Ω(n log n) relabelings are required even for an algorithm that is complicated and slow.

Key words and phrases. monotonic list labeling, order maintenance, density/congestion management and exploitation, load balancing, bucketing, on line, lower bound, adversary argument.

This report will appear in the proceedings of SWAT '94, the Fourth Scandinavian Workshop on Algorithm Theory. It is based on a portion of the third author's doctoral dissertation [Zh93]. The first author was supported in part by the National Science Foundation under grant CCR-8909667. We thank Jun Tarui and Ioan Macarie for their corrections and suggestions.

Our proof is a surprising adaptation of a lower-bound approach sketched by Dietz and Zhang [DZ90]. That approach seemed to be a dead end that addressed only strategies that satisfy the following "smoothness" property: the list items relabeled before each insertion form a contiguous neighborhood of the list position specified for the new item, and the new labels are as widely and equally spaced as possible. Although no good nonsmooth algorithms have been proposed or analyzed, it has seemed difficult to rule them out. (This is the usual sort of lower-bound predicament.)
The key to our adaptation is to imagine appropriate dynamic "recalibrations" of the label space, in terms of which the arbitrary strategy does look fairly smooth. To facilitate the elaboration and adaptation of the earlier argument, we formulate and separately attack variants of a previously unstated combinatorial "bucketing problem" that really lies at the heart of the argument.

In the unordered bucketing problem, the challenge is to cheaply insert n items, one at a time, into k buckets. The cost of each insertion is the number of items (including the new one) in the bucket chosen for that insertion. The optimum total cost for the task as described so far is clearly Θ(n²/k), but we allow an additional operation: between insertions, we can redistribute the contents within any subset of the buckets, at a cost equal to the total number of items in those buckets. Now O(n log n) is an upper bound on the required cost, by the well-known Hennie–Stearns strategy [HS66, Zh93], provided k is Ω(log n). On the other hand, we prove in Section 3 that Ω(n log n / log k) is always a lower bound on the required cost, and we conjecture that Ω(n log n) is a lower bound when k is O(log n). We show in Section 2 that either lower bound leads to a similar lower bound for the problem of primary interest, the n-item, polynomial-label labeling problem.

The prefix bucketing problem is like the unordered bucketing problem, except with the constraint that, in terms of some fixed linear order of the buckets, the subset for each redistribution must be a prefix of the bucket list. (The Hennie–Stearns strategy still applies.) Under this constraint, each redistribution may as well move all items involved into the very last bucket of the chosen prefix. Section 2 actually shows that even a lower bound on this bucketing problem leads to a labeling lower bound. In Section 4, we prove the needed lower bound on prefix bucketing.
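To make the cost model concrete, here is a small simulation of one strategy that achieves the O(n log n) upper bound when k is about log n. It is an illustrative binary-counter sketch of our own, in the spirit of the Hennie–Stearns approach rather than the construction from [HS66] itself: each item goes into bucket 1, and after the t-th insertion the prefix of the first j + 1 buckets is merged into bucket j + 1, where 2^j is the largest power of 2 dividing t.

```python
def prefix_bucketing(n, k):
    """Insert n items into k ordered buckets with a binary-counter strategy.
    Cost model from the text: an insertion costs the receiving bucket's new
    population; redistributing a prefix costs the number of items in it.
    Requires k > log2(n)."""
    buckets = [0] * k
    cost = 0
    for t in range(1, n + 1):
        buckets[0] += 1                  # insert into bucket 1
        cost += buckets[0]               # insertion cost
        j = (t & -t).bit_length() - 1    # 2**j = largest power of 2 dividing t
        if j >= 1:
            moved = sum(buckets[:j + 1])
            cost += moved                # redistribution cost
            buckets[:j + 1] = [0] * j + [moved]  # everything into bucket j+1
    return buckets, cost

buckets, cost = prefix_bucketing(1024, 11)
```

After the 1024 insertions the bucket populations mirror the binary representation of the insertion count, and the total cost stays within n·log₂ n, matching the claimed upper bound for k = Ω(log n).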
2. Relation to bucketing

A relabeling algorithm is normalized if the items it relabels on each insertion, along with the newly inserted item, form a contiguous sublist of the list resulting from the insertion. Since noncontiguous relabelings can safely be deferred until later, each labeling algorithm can be replaced at no additional cost by a normalized one. To prove a lower bound for the labeling problem, we show that each normalized algorithm (and hence each algorithm of any kind) performs many relabelings when confronted with some bad-case sequence of insertion requests. That sequence, which will depend on the particular algorithm, will be determined by an "adversary" strategy that interacts with the algorithm: each next insertion will be into a gap that the adversary chooses based on the labeling decisions made by the algorithm in response to earlier insertion requests.

Intuitively, the most promising strategy for the adversary is to insert the next item into a gap between items in a part of the label space that is currently "relatively crowded". It seems difficult, however, to formulate an appropriate notion of crowdedness. Ordinary density, for instance, can vary depending on the size and choice of neighborhood. A more robust notion of a "dense point" is a gap all of whose neighborhoods are currently about as dense as the entire label space. The following lemma shows that such a dense point always does exist.

Dense-point Lemma. Consider any nonnegative, integrable function f on the interval [0, 1]. For each (nontrivial) subinterval I, define

    μ(I) = (1/|I|) ∫_I f(x) dx.

Then there is some point x₀ ∈ [0, 1] such that μ(I) ≥ (1/2)·μ([0, 1]) holds whenever I includes x₀.

Proof. For the sake of argument, suppose not. Then, for each point x, select a spoiling interval that includes x and that is open in [0, 1].
(An interval is "open in [0, 1]" if it is the intersection of [0, 1] itself and an ordinary open interval of real numbers.) The selected open intervals cover the topologically compact set [0, 1], so some finite subfamily must do so. If any point lies in three or more intervals of the subfamily, then keep only the one that reaches farthest left and the one that reaches farthest right. This leaves a finite family 𝓘 that covers each point in [0, 1] either once or twice, but each of whose members I satisfies μ(I) < (1/2)·μ([0, 1]). Therefore,

    ∫_{[0,1]} f(x) dx ≤ Σ_{I∈𝓘} ∫_I f(x) dx = Σ_{I∈𝓘} μ(I)·|I| < (1/2)·μ([0, 1])·Σ_{I∈𝓘} |I| ≤ (1/2)·μ([0, 1])·2 = μ([0, 1]) = ∫_{[0,1]} f(x) dx,

a contradiction. ∎

Corollary 1. In each labeling, there is a label such that every label-space subinterval containing that label is at least half as dense as the entire label space. (The same applies to either of the two gaps that include the distinguished label, since they themselves are qualifying subintervals.)

Proof. If the total number of labels is m, then consider a function f that is constantly 1 or 0 on each subinterval ((i − 1)/m, i/m), depending on whether the i-th label is or is not in use, respectively. ∎

Corollary 2. In each labeling, there is a label in the "middle population third" (where rounding is in favor of that middle third) such that every label-space subinterval containing that label is at least one-sixth as dense as the entire label space.

Proof. Ignore the rounded-down left third and the rounded-down right third of the items, and cite Corollary 1. ∎

Although such a dense point always exists, it does not quite suffice always to insert into just any such gap. For example, the algorithm that always inserts at the midpoint of the requested gap will be able to maintain an essentially perfect spread without ever relabeling even a single item, if the adversary inserts into the sequence of (dense-point) gaps numbered 1, 2, 1, 4, 3, 2, 1, 8, 7, 6, 5, 4, 3, 2, 1, ….
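Corollary 1 is easy to sanity-check by brute force on small instances. The helper below is ours, not from the paper: a labeling is given as a 0/1 occupancy array, and every candidate label is tested against every contiguous interval, using exact rational arithmetic.

```python
from fractions import Fraction

def dense_labels(used):
    """Return all labels x0 such that every contiguous label interval
    containing x0 is at least half as dense as the whole label space
    (Corollary 1).  Brute force, O(m^3); for small sanity checks only."""
    m = len(used)
    half_global = Fraction(sum(used), 2 * m)   # half the global density
    return [
        x0 for x0 in range(m)
        if all(Fraction(sum(used[i:j]), j - i) >= half_global
               for i in range(x0 + 1) for j in range(x0 + 1, m + 1))
    ]

used = [1, 0, 0, 1, 1, 0, 1, 0]
found = dense_labels(used)
```

The lemma guarantees that `found` is nonempty, and any qualifying label is necessarily in use, since its singleton interval must itself be at least half as dense as the whole space.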
The problem is that the adversary is forfeiting an opportunity to use its insertions to selectively increase congestion in a particular locality.

To take advantage of its opportunity to create congestion, our adversary will try to keep the relocation of its insertion point "commensurate with" the relabeling response by the algorithm. That is, unless it has forced the algorithm to move a lot of items away from the insertion point, it will continue to insert into, and add congestion to, the same neighborhood. To this end, it will actually maintain an entire nest of k = O(log n) distinct intervals that converge down to the insertion point. The population of the smallest enclosing one of the intervals will be proportional to the number of relabelings performed. This will "justify" relocation of the insertion point to any appropriate gap in that interval, since the relabeling could have reconcentrated the population arbitrarily within the interval.

The nest of intervals I_1 ⊇ I_2 ⊇ ⋯ ⊇ I_k that our adversary maintains will satisfy the five essential conditions listed below. For each label interval I, we denote the number of currently assigned labels ("population") and the total number of labels by pop(I) and area(I), respectively. If I ⊇ I′, then the difference I − I′ consists of (at most) a left interval and a right interval; we denote their respective populations by leftpop(I − I′) and rightpop(I − I′).

(1) I_1 is the whole label space.
(2) pop(I_k) = O(1).
(3) For every i, area(I_{i+1}) ≤ area(I_i)/2.
(4) For every i, pop(I_{i+1}) = Ω(pop(I_i)).
(5) For every i, leftpop(I_i − I_{i+1}) = Θ(rightpop(I_i − I_{i+1})).

As we promised above, it follows from these conditions that k = O(log n), and that the population of the smallest interval enclosing a batch of relabelings is at most some constant times the number of relabelings in the batch. Therefore, if we consider the successive differences I_i − I_{i+1}, together with the innermost interval I_k, to be the buckets, then the algorithm solves the resulting (prefix) bucketing problem at a total cost that is at most some constant times the number of relabelings it performs. Since the former has to be Ω(n log n / log k) = Ω(n log n / log log n), for example, so does the latter. Finally, the following lemma ensures that we can appropriately restore the invariant after each batch of relabelings by the algorithm.

Restoration Lemma. Each sufficiently long and populous interval I has a subinterval I′ such that area(I′) ≤ area(I)/2, pop(I′) = Ω(pop(I)), and leftpop(I − I′) = Θ(rightpop(I − I′)).

Proof. From a dense point in the middle population third of I (provided by Corollary 2), expand through the population leftward and rightward in proportion to the total populations in those directions (which can differ by at most a factor of 2), until half the area is covered. (If this requires a fraction of an item in either direction, then just stop one label short of that item's label.) ∎

3. Lower bound for unordered bucketing

This section is devoted to the relatively easy proof of the following lower bound, which we conjecture can be tightened to Ω(n log n) when k is O(log n).

Theorem. The cost for unordered bucketing of n items into k buckets is Ω(n log n / log k).

Consider the following measure of a configuration's complexity:

    C = Σ_{i=1}^k n_i log n_i,

where n_i is the number of items in bucket i. (Since lim_{x↓0} x log x = 0, it works well to define 0 log 0 to be 0.) C starts out at 0; and, by the Complexity-range Lemma below, it finally reaches a value no smaller than

    F = n log n − n log k.

We show below, however, that no operation increases C by more than O(log k) times the cost of the operation.
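The potential C is easy to probe numerically. The sketch below is our own code, with log base 2 for concreteness (the footnote's base strictly between 1 and 2 works equally well); it checks the claimed range of C over random ways of splitting n items into k buckets.

```python
import math
import random

def complexity(config):
    """C = sum over buckets of n_i * log2(n_i), with 0*log2(0) taken as 0."""
    return sum(x * math.log2(x) for x in config if x > 0)

# Any split of n items into k buckets satisfies
#   n*log2(n) - n*log2(k)  <=  C  <=  n*log2(n),
# minimized by the uniform split and maximized by full concentration.
random.seed(1)
n, k = 1000, 16
lo, hi = n * math.log2(n) - n * math.log2(k), n * math.log2(n)
for _ in range(100):
    cuts = sorted(random.randint(0, n) for _ in range(k - 1))
    config = [b - a for a, b in zip([0] + cuts, cuts + [n])]
    assert lo - 1e-9 <= complexity(config) <= hi + 1e-9
```

The two extremes are attained exactly: one bucket holding all n items gives C = n log₂ n, and the uniform split gives C = n log₂(n/k).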
Therefore, the total cost will have to be at least F / log k = Ω(n log n / log k).¹

The main operation to consider is the reorganization of k′ ≤ k buckets containing a total of n′ ≤ n items. By definition, the cost of the operation is n′. And, by the Complexity-range Lemma below again, the increase in C is indeed at most

    n′ log k′ ≤ n′ log k = O(n′ log k).

The only other operation is insertion into an n′-item bucket. If n′ = 0, then there is no change in C; so assume n′ ≥ 1. Then the cost is exactly n′ + 1, and the increase in C is exactly

    (n′ + 1) log(n′ + 1) − n′ log n′ = log(n′ + 1) + n′ (log(n′ + 1) − log n′) = log(n′ + 1) + n′ · O(1/n′),

which is certainly O((n′ + 1) log k).

Complexity-range Lemma. If n_1 + ⋯ + n_k = n, where each n_i is nonnegative, then Σ n_i log n_i lies between n log n and n log n − n log k.

Proof. It is easy to argue that the sum is maximized when some n_i equals n, and minimized when every n_i equals n/k. ∎

4. Tight lower bound for prefix bucketing

This section is devoted to a proof of the following tight lower bound:

Theorem. The cost for prefix bucketing of n items into k = O(log n) buckets is Ω(n log n).

It will be convenient to have terminology for the current configuration of (a prefix of) a bucket list. While we are at it, to make Lemmas 2 and 3 possible below, we generalize to allow fractional numbers of items in a bucket. If k is a positive integer and n is a positive real number, then an (n, k)-configuration is a list L = (n_1, …, n_k) of k nonnegative real numbers such that Σ n_i = n, viewing n_i as the (possibly fractional) number of items in bucket i. L is nondecreasing if n_i ≤ n_{i+1} holds for every i < k; and it is exponential, with ratio a and with k₀ initial 0's, if n_i = 0 for every i ≤ k₀ and n_i = a·n_{i+1} for every i ∈ {k₀ + 1, …, k − 1}.
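The two structural properties translate directly into code. The predicates below are illustrative helpers of ours, with a small tolerance standing in for exact real arithmetic.

```python
def is_nondecreasing(L):
    """n_i <= n_{i+1} for every i < k."""
    return all(a <= b for a, b in zip(L, L[1:]))

def is_exponential(L, a, k0, tol=1e-9):
    """Exponential with ratio a and k0 initial 0's:
    n_i = 0 for i <= k0, and n_i = a * n_{i+1} afterwards."""
    return (all(x == 0 for x in L[:k0])
            and all(abs(L[i] - a * L[i + 1]) <= tol
                    for i in range(k0, len(L) - 1)))
```

For example, (0, 1, 2, 4) is a nondecreasing exponential (7, 4)-configuration with ratio 1/2 and one initial 0, while (1, 2, 3) has no ratio that works.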
For each (n, k)-configuration L = (n_1, …, n_k), we define two measures of complexity:

    C(L) = Σ_{i=1}^k n_i log n_i;    M(L) = Σ_{i=1}^k i·n_i.

Because the redistributions we consider move all items to the last bucket involved, we make the following anticipatory definitions:

    C̄(L) = n log n − C(L);    M̄(L) = kn − M(L).

¹For bucketing to be nontrivial, n and k have to be at least 2. In that case, log n and log k are safely positive if we use some logarithmic base strictly between 1 and 2.

Let d be a constant so large that k ≤ d log n. Assuming n is large enough that d log n ≤ n^{1/2}, the complexity C starts out at 0 and finally reaches a value

    C_final ≥ n log n − n log k ≥ (1/2)·n log n.

The measure M starts out at 0, grows monotonically, and finally reaches a value

    M_final ≤ kn ≤ d·n log n.

Overall, therefore, the increase in C is at least 1/(2d) times the increase in M. Consider the steps on which we have

    ΔC < (1/(4d))·ΔM,

where ΔC and ΔM are the respective increases in C and M. Such steps can account for at most half of the overall change in C. Therefore, we can restrict attention to the other steps, on each of which we must have

    ΔM ≤ 4d·ΔC.

We show that, on each such step, regardless of its context, ΔC is at most some constant times the number of items involved, which is the bucketing cost; hence the total bucketing cost for such steps has to be Ω(n log n). We saw in Section 3 that this fact holds for every insertion step; so we restrict further attention to the analysis of redistribution steps, and Corollary 5.1 below will complete the proof.

Actually we directly analyze such a redistribution step only when the (n, k)-configuration before the step is of a special form. The first sequence of lemmas below, culminating in Lemma 3, shows that it is no loss of generality to restrict attention to this form; and the final lemmas provide the needed estimates involving C̄(L) and M̄(L) when L is of this form.
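The barred quantities price a prefix redistribution exactly: merging the first j buckets, holding s items in all, into bucket j increases C by C̄ of that prefix (viewed as an (s, j)-configuration) and increases M by M̄ of it. A quick numerical check, with helper names of our own choosing:

```python
import math

def C(L):
    return sum(x * math.log2(x) for x in L if x > 0)

def M(L):
    return sum(i * x for i, x in enumerate(L, start=1))

def Cbar(L):
    s = sum(L)
    return s * math.log2(s) - C(L)

def Mbar(L):
    return len(L) * sum(L) - M(L)

def merge_prefix(L, j):
    """Redistribute buckets 1..j, moving everything into bucket j."""
    s = sum(L[:j])
    return [0.0] * (j - 1) + [s] + L[j:]

L = [1.0, 2.0, 4.0, 8.0, 16.0]
L2 = merge_prefix(L, 3)   # the 7 items of the prefix all land in bucket 3
```

The increments C(L2) − C(L) and M(L2) − M(L) agree with C̄ and M̄ of the prefix (1, 2, 4), which is why bounding M̄/C̄ on configurations bounds ΔM/ΔC on redistribution steps.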
Lemma 1. For each (n, k)-configuration L that is not nondecreasing, there is a nondecreasing (n, k)-configuration L′ with C(L′) = C(L) and M(L′) > M(L).

Proof. Just reorder the configuration so that it is nondecreasing. ∎

Lemma 2. For each nondecreasing (n, k)-configuration L that is not exponential, there is a nondecreasing (n, k)-configuration L′ with C(L′) < C(L) and M(L′) = M(L).

Proof. First, note that we lose no generality if we assume k = 3: if L = (n_1, …, n_k) is nondecreasing but not exponential, then there has to be some i ≤ k − 2 such that (n_i, n_{i+1}, n_{i+2}) is an (n_i + n_{i+1} + n_{i+2}, 3)-configuration with these same properties. It is clear from the definitions of C and M that the desired conclusion for (n_i, n_{i+1}, n_{i+2}) will yield the conclusion for L, too.

For k = 3, the idea is to take L′ = (n_1 − x, n_2 + 2x, n_3 − x) for some nonzero x. Since L is not exponential and k is only 3, there can be no initial 0's. In the case that n_1 > (n_2/n_3)·n_2, x must satisfy 0 < x < n_1; and, in the case that n_1 < (n_2/n_3)·n_2, it must satisfy −n_2/2 < x < 0. Whatever x is, we will have M(L′) = M(L). It remains only to show that some eligible x will yield C(L′) < C(L). For each prospective x, let C(x) denote the resulting value C(L′). It is enough to show that

    lim_{x↓0} C′(x) < 0    if n_1 > (n_2/n_3)·n_2,

and that

    lim_{x↑0} C′(x) > 0    if n_1 < (n_2/n_3)·n_2.

Expressed more explicitly,

    C(x) = f(n_1 − x) + f(n_2 + 2x) + f(n_3 − x), where f(x) = x log x.

It is straightforward to check that the derivative C′(x) does satisfy both requirements. ∎

Lemma 3. For each (n, k)-configuration L, there is a nondecreasing, exponential (n, k)-configuration L′ with C(L′) ≤ C(L) and M(L′) ≥ M(L).

Proof. If the given configuration is not nondecreasing, then apply Lemma 1 one time. Then, calling the result L, consider the set 𝓛 of nondecreasing (n, k)-configurations L′ that satisfy C(L′) ≤ C(L) and M(L′) = M(L).
Since C is continuous on the topologically compact set 𝓛, there is some L′ in 𝓛 that minimizes C. By Lemma 2, that (n, k)-configuration must be exponential. ∎

Let L_{n,k,a,k′} denote the nondecreasing, exponential (n, k + k′)-configuration with ratio a and with k′ initial 0's. Note that, for every k′, C̄(L_{n,k,a,k′}) equals C̄(L_{n,k,a,0}), and M̄(L_{n,k,a,k′}) equals M̄(L_{n,k,a,0}).

Lemma 4. If a = 1, then

    C̄(L_{n,k,a,0}) = (log k)·n, and M̄(L_{n,k,a,0}) = ((k − 1)/2)·n.

If a < 1, then

    C̄(L_{n,k,a,0}) = (log A − (B/A)·log a)·n, and M̄(L_{n,k,a,0}) = (B/A − 1)·n,

where A = Σ_{i=1}^k a^i and B = Σ_{i=1}^k i·a^i.

Proof. The calculations are exact and easy. ∎

Corollary 4.1. If a < 1, then

    C̄(L_{n,k,a,0}) < (log(1/(1 − a)) − (a/(1 − a))·log a)·n.

Proof. Just use the estimates A < a/(1 − a) and B < A/(1 − a), and note that log a < 0. ∎

Corollary 4.2. For each fixed k,

    lim_{a↑1} C̄(L_{n,k,a,0})/n = C̄(L_{n,k,1,0})/n = log k, and

    lim_{a↑1} M̄(L_{n,k,a,0})/C̄(L_{n,k,a,0}) = M̄(L_{n,k,1,0})/C̄(L_{n,k,1,0}) = (k − 1)/(2 log k).

Proof. If k is fixed, then lim_{a↑1} A = k, lim_{a↑1} B = k(k + 1)/2, and lim_{a↑1} log a = 0, so the results follow immediately from Lemma 4. ∎

Lemma 5. There exists a pair of thresholds, a₀ < 1 and k₀, such that, whenever a₀ < a < 1 and k > k₀,

    M̄(L_{n,k,a,0}) / C̄(L_{n,k,a,0}) > 4d.

Proof. Choose k₀ large, and then choose a₀ < 1 large in terms of that k₀. The proof is by induction on k ≥ k₀. The base case, that M̄(L_{n,k₀,a,0})/C̄(L_{n,k₀,a,0}) exceeds 4d whenever a₀ < a < 1, follows from Corollary 4.2. For the induction step, since the ratio can be written as M̄/C̄ = (B − A)/(A log A − B log a), it is enough to show that the corresponding ratio of increments exceeds 4d whenever a₀ < a < 1 and k ≥ k₀. In terms of A and B (with A′ = A + a^{k+1} and B′ = B + (k + 1)·a^{k+1} the corresponding sums for k + 1), the goal is for the following to exceed 4d:

    E = ([B′ − A′] − [B − A]) / ([A′ log A′ − B′ log a] − [A log A − B log a])
      = k·a^{k+1} / (A·log(1 + a^{k+1}/A) + a^{k+1}·log(A + a^{k+1}) − (k + 1)·a^{k+1}·log a).

Since 0 < a < 1 and k is large, a^{k+1}/A < 1/k is small enough that

    log(1 + a^{k+1}/A) < a^{k+1}/A.

Since A < k and a^{k+1} < 1, we certainly have

    log(A + a^{k+1}) < log(k + 1).

Substituting these estimates, and cancelling a^{k+1}, we get

    E > k / (1 + log(k + 1) − (k + 1)·log a).

Since k and a are large, this estimate finally does clearly exceed 4d. ∎

The following corollary is just what we need to complete the proof.

Corollary 5.1. If an (n, k)-configuration L satisfies

    M̄(L)/C̄(L) ≤ 4d,

then it also satisfies C̄(L) = O(n), where the implicit constant depends on neither n nor k.

Proof. By Lemma 3, since the conditions C(L′) ≤ C(L) and M(L′) ≥ M(L) respectively imply

    C̄(L′) ≥ C̄(L) and M̄(L′) ≤ M̄(L),

it suffices to prove this when L is nondecreasing and exponential, say with ratio a and with no initial 0's.

If a = 1, then the hypothesis gives us an upper bound on

    M̄(L)/C̄(L) = (k − 1)/(2 log k) = f(k).

Since lim_{k→∞} f(k) = ∞, this imposes some upper bound b < ∞ on k, so that we do get

    C̄(L) = (log k)·n ≤ (log b)·n = O(n).

If a < 1, then we deal with three separate subcases: a "small" and k arbitrary, a "large" but k "small", and a and k both "large". First, however, we have to formulate appropriate size notions. Recall the thresholds a₀ and k₀ from Lemma 5. For each k ≤ k₀, cite the first part of Corollary 4.2 to select a threshold a_k < 1 such that C̄(L_{n,k,a,0})/n ≤ 1 + log k holds whenever a_k < a < 1. Take a_max = max_{0≤k≤k₀} a_k. Whenever a ≤ a_max, Corollary 4.1 yields

    C̄(L) ≤ (log(1/(1 − a)) − (a/(1 − a))·log a)·n ≤ (log(1/(1 − a_max)) − (a_max/(1 − a_max))·log a_max)·n = O(n).

Whenever k ≤ k₀ and a > a_max ≥ a_k, we have made sure that

    C̄(L) ≤ (1 + log k)·n ≤ (1 + log k₀)·n.

And, whenever k > k₀ and a > a_max ≥ a₀, Lemma 5 ensures that M̄(L)/C̄(L) exceeds 4d. ∎

5. Further discussion

When the number of usable labels is not at least n^{1+ε} for any ε > 0, the known upper bounds are not as low.
With O(n) labels and with exactly n labels, the respective bounds are O(n log² n) [IKR81] and O(n log³ n) [AL90]. These bounds seem tight, and it can be shown that they are tight for smooth relabeling strategies [DZ90, Zh93, DSZ94]; but we do not yet see how to extend these results to nonsmooth strategies, for which our Ω(n log n) is still the only known lower bound. More generally, we would like a tight bound that is some nice function, say F, of the number of usable labels, with F(n) = Θ(n log³ n), F(cn) = Θ(n log² n) for each particular c > 1, and F(n^{1+ε}) = Θ(n log n) for each particular ε > 0.

When the density of the labels in use grows large, the cost of further labeling becomes more closely related to an alternative natural cost measure: the number of labels spanned (rather than the number of items). Many of the same questions can be asked of this cost measure, and the answers and arguments might be independently interesting and enlightening. It turns out that the strongest version of the Ω(n log² n) lower bound mentioned above (for smooth insertion of n items into a linearly bounded label space) is most natural in this setting, because then it turns out to hold regardless of the size of the label space; the lower bound on the standard cost follows as a corollary [DSZ94].

Even if it turns out that bucketing problems are not as closely related to on-line labeling for smaller numbers of labels, we would like to see tighter and more general analyses of their complexity as well. For k = o(log n) buckets, Jingzhong Zhang has proposed, via personal communication, a prefix-bucketing algorithm of cost O(n^{1+1/k}·(k!)^{1/k}). More careful analysis of his algorithm yields an expression that may be the exact optimum.

References

[AL90] A. Andersson and T. W. Lai, Fast updating of well-balanced trees, Lecture Notes in Computer Science (SWAT 90: 2nd Scandinavian Workshop on Algorithm Theory, Proceedings), vol. 447, Springer-Verlag, Berlin, 1990, pp. 111–121.
[Di82] P. F. Dietz, Maintaining order in a linked list, Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing, Association for Computing Machinery, 1982, pp. 122–127.
[DS87] P. F. Dietz and D. D. Sleator, Two algorithms for maintaining order in a list, Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, Association for Computing Machinery, 1987, pp. 365–372 (revision to appear in Journal of Computer and System Sciences).
[DSZ94] P. F. Dietz, J. I. Seiferas, and J. Zhang, Lower bounds for smooth list labeling (in preparation).
[DZ90] P. F. Dietz and J. Zhang, Lower bounds for monotonic list labeling, Lecture Notes in Computer Science (SWAT 90: 2nd Scandinavian Workshop on Algorithm Theory, Proceedings), vol. 447, Springer-Verlag, Berlin, 1990, pp. 173–180.
[HS66] F. C. Hennie and R. E. Stearns, Two-tape simulation of multitape Turing machines, Journal of the Association for Computing Machinery 13, 4 (October 1966), 533–546.
[IKR81] A. Itai, A. G. Konheim, and M. Rodeh, A sparse table implementation of sorted sets, Research Report RC 9146 (November 24, 1981), IBM Thomas J. Watson Research Center, Yorktown Heights, New York.
[Ts84] A. K. Tsakalidis, Maintaining order in a generalized linked list, Acta Informatica 21, 1 (May 1984), 101–112.
[Zh93] J. Zhang, Density control and on-line labeling problems, Technical Report 481 and Ph.D. Thesis (December 1993), Computer Science Department, University of Rochester, Rochester, New York.

Computer Science Department, University of Rochester, Rochester, New York, U.S.A. 14627-0226
E-mail address: dietz@cs.rochester.edu

Computer Science Department, University of Rochester, Rochester, New York, U.S.A. 14627-0226
E-mail address: joel@cs.rochester.edu

Consumer Asset Management, Chemical Bank, 380 Madison Avenue, 13th Floor, New York, New York, U.S.A. 10017
E-mail address: zhang@cs.rochester.edu