arXiv:0809.2489v1 [cs.DS] 15 Sep 2008

THE FAST INTERSECTION TRANSFORM WITH APPLICATIONS TO COUNTING PATHS

ANDREAS BJÖRKLUND 1, THORE HUSFELDT 2, PETTERI KASKI 3, AND MIKKO KOIVISTO 3

1 Lund University, Department of Computer Science, P.O. Box 118, SE-22100 Lund, Sweden
2 IT University of Copenhagen, Rued Langgaards Vej 7, 2300 København S, Denmark, and Lund University, Department of Computer Science, P.O. Box 118, SE-22100 Lund, Sweden
3 Helsinki Institute for Information Technology HIIT, University of Helsinki, Department of Computer Science, P.O. Box 68, FI-00014 University of Helsinki, Finland

E-mail addresses: andreas.bjorklund@logipard.com, thore.husfeldt@gmail.com, petteri.kaski@cs.helsinki.fi, mikko.koivisto@cs.helsinki.fi

Abstract. We present an algorithm for evaluating a linear “intersection transform” of a function defined on the lattice of subsets of an n-element set. In particular, the algorithm constructs an arithmetic circuit for evaluating the transform in “down-closure time” relative to the support of the function and the evaluation domain. As an application, we develop an algorithm that, given as input a digraph with n vertices and bounded integer weights at the edges, counts paths by weight and given length 0 ≤ ℓ ≤ n − 1 in time O∗(exp(n · H(ℓ/(2n)))), where H(p) = −p log p − (1 − p) log(1 − p), and the notation O∗(·) suppresses a factor polynomial in n.

Key words and phrases: algorithms and data structures, arithmetic circuits, counting, linear transformations, long paths, travelling salesman problem.

This research was supported in part by the Academy of Finland, Grants 117499 (P.K.) and 109101 (M.K.), and by the Swedish Research Council, project “Exact Algorithms” (A.B. and T.H.).

1. Introduction

Efficient algorithms for linear transformations, such as the fast Fourier transform of Cooley and Tukey [10] and Yates’ algorithm [28], are fundamental tools both in computing theory and in practical applications. Therefore it is surprising that some arguably elementary transformations have apparently not been investigated from an algorithmic perspective. This paper contributes by studying an “intersection transform” of functions defined on subsets of a ground set.

In precise terms, let U be a finite set with n elements (the ground set), let R be a ring, and denote by 2^U the set of all subsets of U. The intersection transform maps a function f : 2^U → R to the function f^ι : {0, 1, . . . , n} × 2^U → R, defined for all j = 0, 1, . . . , n and Y ⊆ U by

    f^ι_j(Y) = Σ_{X ⊆ U : |X ∩ Y| = j} f(X).                    (1.1)
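For concreteness, the following brute-force evaluation of the definition (1.1) may serve as a reference point. It is a sketch of ours (the function name and the dict-of-frozensets representation are illustrative, not part of the paper), it works over the integers in place of a general ring R, and it makes no attempt at the efficiency claimed in Theorem 1 below.

```python
from itertools import combinations

def intersection_transform_naive(f, U):
    """Brute-force evaluation of (1.1): for every j = 0, ..., n and every
    Y subset of U, sum f(X) over the sets X in the support of f with
    |X intersect Y| = j.  Here f is a dict mapping frozensets X (subsets
    of U) to integers, which play the role of ring elements."""
    n = len(U)
    out = {}
    for k in range(n + 1):
        for Y in combinations(sorted(U), k):
            Y = frozenset(Y)
            for j in range(n + 1):
                out[(j, Y)] = sum(v for X, v in f.items() if len(X & Y) == j)
    return out

# Example: f supported on {1} and {1, 2} over the ground set U = {1, 2, 3}.
f = {frozenset({1}): 1, frozenset({1, 2}): 1}
ft = intersection_transform_naive(f, {1, 2, 3})
print(ft[(1, frozenset({1, 3}))])   # both support sets meet {1, 3} in one point -> 2
```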
Our interest here is in particular to restrict (or “trim”) the domains of the input f and the output f^ι from 2^U to given subsets of 2^U. For a subset F ⊆ 2^U, denote by ↓F the down-closure of F, that is, the family of sets consisting of all the sets in F and their subsets. The notation O∗(·) in what follows suppresses a factor polynomial in n. The following theorem states our main result.

Theorem 1. There exists an algorithm that, given F ⊆ 2^U and G ⊆ 2^U as input, in time O∗(|↓F| + |↓G|) constructs an R-arithmetic circuit with input gates for f : F → R and output gates that evaluate to f^ι : {0, 1, . . . , n} × G → R.
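Theorem 1 measures its budget by the sizes of the down-closures ↓F and ↓G rather than by 2^n. The following minimal sketch (ours; the name down_closure and the frozenset representation are assumptions of the example) computes a down-closure explicitly, which may help fix intuition for when |↓F| + |↓G| is small.

```python
def down_closure(F):
    """Down-closure of F: all sets in F together with all of their subsets.
    F is an iterable of frozensets; returns a set of frozensets."""
    closure = set()
    stack = [frozenset(X) for X in F]
    while stack:
        X = stack.pop()
        if X in closure:
            continue
        closure.add(X)
        for x in X:                     # every subset is reached by deleting
            stack.append(X - {x})       # one element at a time
    return closure

# |down_closure(F)| can be far smaller than 2^n when F consists of small
# sets, which is exactly the regime exploited in the proof of Theorem 2.
F = [frozenset(S) for S in [(1, 2), (2, 3), (3, 4)]]
print(len(down_closure(F)))            # 8 sets: {}, {1}, ..., {3, 4}
```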
This result supplies yet another tool aimed at the resolution of a long-standing open problem, namely that of improving upon the classical (early 1960s) dynamic programming algorithm for the Travelling Salesman Problem (TSP). With an O∗(2^n) running time for an instance with n cities, the classical algorithm, due to Bellman [3, 4] and, independently, Held and Karp [15], remains the fastest known exact algorithm for the TSP. Moreover, progress has been equally stuck at O∗(2^n) even if one considers the more restricted Hamiltonian Path (HP) and Hamiltonian Cycle (HC) problems.

Armed with Theorem 1, we show that the O∗(2^n) bound can be broken in a counting context, assuming one cares only for long paths or cycles, as opposed to the spanning paths or cycles required by TSP/HP/HC. (See §1.1 for a contrast with earlier work.) Denote by H the binary entropy function

    H(p) = −p log p − (1 − p) log(1 − p),   0 ≤ p ≤ 1.                    (1.2)

Theorem 2. There exists an algorithm that, given as input (i) a directed graph D with n vertices and bounded integer weights at the edges, (ii) two vertices, s and t, and (iii) a length ℓ = 0, 1, . . . , n − 1, counts, by total weight, the number of paths of length ℓ from s to t in D in time

    O∗( exp( H(ℓ/(2n)) · n ) ).                    (1.3)

For example, Theorem 2 implies that we can count in O(1.7548^n) time with length ℓ = 0.5n and in O(1.999999999^n) time with length ℓ = 0.9999n. For length ℓ = n − 1 the bound reduces to the classical bound O∗(2^n).
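As a quick numeric check (ours), the constants quoted above follow from (1.2) and (1.3) when the logarithm in H is taken to base 2 and exp is read as the matching exponential 2^x:

```python
from math import log2

def H(p):
    """Binary entropy (1.2), with the convention H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Per-vertex base of the bound (1.3), i.e. 2^H(l/(2n)), for a few ratios l/n.
for ratio in (0.5, 0.9999, 1.0):
    print(ratio, 2 ** H(ratio / 2))
# 0.5    -> about 1.7548
# 0.9999 -> about 1.999999999
# 1.0    -> 2.0, the classical O*(2^n) bound
```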
We observe that counting implies, by self-reducibility, that we can construct examples of the paths within the same time bound. Similarly, we can count cycles of a given length within the same bound. However, the efficient listing (in the form of vertex supports, weights, and ends s, t) of all the paths for any length ℓ ≫ n/2 appears not to be possible with present tools in O((2 − ε)^n) time for ε > 0 independent of n. Indeed, if it were possible, we would obtain a breakthrough O((2 − ε)^n) algorithm for generic TSP by starting the classical algorithm from the output of the listing algorithm.

We expect Theorem 1 to have applications beyond Theorem 2; for example, in the context of subset query problems discussed by Charikar, Indyk, and Panigrahy [8]. Given F ⊆ 2^U and G ⊆ 2^U as input, we can count in O∗(|↓F| + |↓G|) time for each Y ∈ G the number of X ∈ F that intersect Y in a given number of points; in particular, for each Y we can count the number of disjoint X. By duality of disjointness and set inclusion, we can thus count in O∗(|↓F| + |↑G|) time for each Y ∈ G the number of X ∈ F with X ⊆ Y. Here ↑G denotes the up-closure of G, that is, the family of sets consisting of all the sets in G and their supersets in 2^U.

1.1. Further remarks and earlier work

Theorem 1 has its roots in Yates’ algorithm [28] for evaluating the product of a vector with the nth Kronecker power of a 2×2 matrix. While Yates’ algorithm is essentially optimal, running in O∗(2^n) ring operations given an input vector with 2^n entries, in certain cases the evaluation can be “trimmed”, assuming one requires only sporadic entries of the output vector. In particular, the present authors have observed [6] that the zeta and Moebius transforms on 2^U are amenable to trimming (see Lemma 3 below for a precise statement).

The proof of Theorem 1 relies on a trimmed concatenation of two “dual” zeta transforms, one that depends on supersets of a set (the “up” transform), and one that depends on subsets of a set (the “down” transform). To provide a rough intuition, we first use the up-zeta transform to drive information about f on F “down” to ↓F. Then we use a “ranked” [5] down-zeta transform to assemble information “up” from ↓G to G. Finally, we extract the intersection transform from the information gathered at each Y ∈ G. This essentially amounts to solving a fixed system of R-linear equations at each Y ∈ G.

This proof strategy yet again highlights a basic theme: the use of fast linear transformations to distribute and assemble information across a domain (e.g. time, frequency, subset lattice) so that “local” computations in the domain (e.g. pointwise multiplication, solving local systems of linear equations), alternated with transforms, enable the extraction of a desired result (e.g. convolution, intersection transform). Compared with earlier works such as [5, 6, 19], the present approach establishes the serendipity of the up/down dual transforms and introduces the “linear equation trick” into the toolbox of local computations.

Once Theorem 1 is available, Theorem 2 stems from the observation that a path can be decomposed into two paths, each having half the length of the original path, with exactly one vertex in common. Theorem 1 then enables us to “glue halves” in F and G, where ↓F and ↓G consist of sets of size at most ⌈ℓ/2⌉ + 1. This prompts the observation that Theorem 1 is useful only when the bound O∗(|↓F| + |↓G|) improves upon the trivial bound O∗(|F||G|) obtained by a direct iteration over all pairs (X, Y) ∈ F × G.

We know at least one alternative way of proving Theorem 2, without using Theorem 1. Indeed, assuming knowledge of trimming [6], one can use an algorithm of Kennes [19] to evaluate a sum Σ_{|Z|=j} Σ_{X∩Y=Z} f(X)g(Y) for given f : F → R and g : G → R in O∗(|↓F| + |↓G|) ring operations (take the trimmed up-zeta transform of f and g, take the pointwise product of the transforms, take the trimmed up-Moebius transform, and sum over all j-subsets in ↓F ∪ ↓G). This enables one to evaluate the right-hand side of (3.8) below in time (1.3), thus giving an alternative proof of Theorem 2. To contrast Kennes’ algorithm with Theorem 1, Kennes’ algorithm computes for each Z ⊆ U the sum over pairs (X, Y) ∈ F × G with Z = X ∩ Y, whereas (1.1) computes, for each Y ∈ G, the sum over X ∈ F with |X ∩ Y| = j. Thus, Kennes’ algorithm provides control over the intersection Z but lacks control over the pairs (X, Y), whereas (1.1) provides control over Y but lacks control over the intersection (except for its size).

As regards TSP/HP/HC, earlier work on exact exponential-time algorithms can be divided roughly into three lines of study. (For a broader treatment of TSP/HP/HC and exact exponential-time algorithms, we refer to [2, 14, 23] and [27], respectively.)

One line of study has been to restrict the input graph, whereby a natural restriction is to place an upper bound ∆ on the degrees of the vertices. Eppstein [11] has developed an algorithm that runs in time O∗(2^{n/3}) = O(1.260^n) for ∆ = 3 and in time O(1.890^n) for ∆ = 4. Iwama and Nakashima [16] have improved the ∆ = 3 case to O(1.251^n), and Gebauer [12] the ∆ = 4 case to O(1.733^n). The present authors established [7] an O((2 − ε)^n) bound for all ∆, with ε > 0 depending on ∆ but not on n.

A second line of study has been to ease the space requirements of the algorithms from exponential to polynomial in n. Karp [18] and, independently, Kohn, Gottlieb, and Kohn [20] have shown that TSP with bounded integer weights can be solved in time O∗(2^n) and space polynomial in n. Combined with restrictions on the graph, one can arrive at running times O∗((2 − ε)^n) and polynomial space [7, 11, 16].

A third line of study relaxes the requirement of spanning paths/cycles to “long” paths/cycles. In this setting, a simple backtrack algorithm finds a path of length ℓ in time O∗(n^ℓ). Monien [24] observed that this can be expedited to O∗(ℓ!) time by a dynamic programming approach. Alon, Yuster, and Zwick [1] introduced a seminal colour-coding procedure and improved the running time to O∗((2e)^ℓ) expected and O∗(c^ℓ) deterministic time, c a large constant. Subsequently, combining colour-coding ideas with a divide-and-conquer approach, Chen, Lu, Sze, and Zhang [9], and, independently, Kneis, Mölle, Richter, and Rossmanith [22], developed algorithms with O∗(4^ℓ) expected and O∗(4^{ℓ+o(ℓ)}) deterministic time. A completely different approach was taken by Koutis [21], who presented an O∗(2^{3ℓ/2}) expected time algorithm relying on a randomised technique for detecting whether a given n-variate polynomial, represented as an arithmetic circuit with only sum and product gates, has a square-free monomial of degree ℓ with an odd coefficient. Recently, Williams [26] extended Koutis’ technique and obtained an O∗(2^ℓ) expected time algorithm.

To contrast with Theorem 2: while the O∗(2^ℓ) bound of the Koutis–Williams algorithm [21, 26] is superior to the bound (1.3) in Theorem 2, it is not immediate whether the Koutis–Williams approach extends to counting problems. Furthermore, it appears challenging to derandomise the Koutis–Williams algorithm without increasing the running time (see [26, p. 6]), whereas the algorithm in Theorem 2 is deterministic.

2. The fast intersection transform

2.1. Preliminaries

For a logical proposition P, we use Iverson’s bracket notation [P] to denote a 1 if P is true, and a 0 if P is false. Let F ⊆ 2^U and f : F → R. Define the up-zeta transform fζ↑ for all Y ⊆ U by

    fζ↑(Y) = Σ_{X ∈ F : Y ⊆ X} f(X).                    (2.1)

Define the down-zeta transform fζ↓ for all Y ⊆ U by

    fζ↓(Y) = Σ_{X ∈ F : X ⊆ Y} f(X).                    (2.2)

The following lemma condenses the essential properties of the “trimmed” fast zeta transform [6]; brute-force evaluations of the definitions (2.1) and (2.2) are sketched below the lemma.

Lemma 3. There exist algorithms that construct, given F ⊆ 2^U and G ⊆ 2^U as input, an R-arithmetic circuit with input gates for f : F → R and output gates that evaluate to
    (1) fζ↑ : G → R, with construction time O∗(|F| + |↑G|);
    (2) fζ↑ : G → R, with construction time O∗(|↓F| + |G|);
    (3) fζ↓ : G → R, with construction time O∗(|F| + |↓G|); and
    (4) fζ↓ : G → R, with construction time O∗(|↑F| + |G|).
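As referenced above, the definitions (2.1) and (2.2) admit obvious brute-force evaluations; the sketch below (ours, over the integers) is meant only as a correctness oracle against which a trimmed circuit in the sense of Lemma 3 could be tested, not as the fast construction of [6].

```python
def zeta_up(f, Y):
    """(2.1): sum of f(X) over the sets X in the support of f with Y a
    subset of X.  f is a dict mapping frozensets to integers; Y is a frozenset."""
    return sum(v for X, v in f.items() if Y <= X)

def zeta_down(f, Y):
    """(2.2): sum of f(X) over the sets X in the support of f with X a
    subset of Y."""
    return sum(v for X, v in f.items() if X <= Y)

# With f supported on {1, 2} and {2, 3}:
f = {frozenset({1, 2}): 1, frozenset({2, 3}): 1}
print(zeta_up(f, frozenset({2})))       # 2: both support sets contain {2}
print(zeta_down(f, frozenset({1, 2})))  # 1: only {1, 2} is contained in {1, 2}
```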
2.2. The inverse of truncated Pascal’s triangle

We work with the standard extension of the binomial coefficients to arbitrary integers (see Graham, Knuth, and Patashnik [13]). For integers p and q, we let the binomial coefficient binom(p, q) be given by

    binom(p, q) = ∏_{k=1}^{q} (p + 1 − k)/k   if q > 0,
    binom(p, q) = 1                           if q = 0,
    binom(p, q) = 0                           if q < 0.                    (2.3)

The following lemma is folklore, but we recall a proof here for convenience of exposition.

Lemma 4. The integer matrices A and B with entries

    a_ij = binom(j, i),   b_ij = (−1)^{i+j} binom(j, i),   i, j = 0, 1, . . . , n                    (2.4)

are mutual inverses.

Proof. Let us first consider the (i, j)-entry of AB:

    Σ_{k=0}^{n} a_ik b_kj = Σ_{k=0}^{n} binom(k, i) (−1)^{j+k} binom(j, k)
                          = (−1)^j Σ_{k=i}^{j} binom(k, i) binom(j, k) (−1)^k
                          = (−1)^{i+j} binom(j, i) Σ_{k=i}^{j} (−1)^{k−i} binom(j − i, k − i)
                          = [i = j].

Here the second equality follows by observing that j ≥ 0 implies binom(j, k) = 0 for all k > j; similarly, k ≥ 0 implies binom(k, i) = 0 for all 0 ≤ k < i. The third equality follows from an application of the identity binom(p, q) binom(q, r) = binom(p, r) binom(p − r, q − r), valid for all integers p, q, r (see [13, Equation 5.21]). The last equality follows from an application of the Binomial Theorem.

The analysis for the (i, j)-entry of BA is similar:

    Σ_{k=0}^{n} b_ik a_kj = Σ_{k=0}^{n} (−1)^{i+k} binom(k, i) binom(j, k)
                          = (−1)^i Σ_{k=i}^{j} binom(k, i) binom(j, k) (−1)^k
                          = (−1)^{i+i} binom(j, i) Σ_{k=i}^{j} (−1)^{k−i} binom(j − i, k − i)
                          = [i = j].

It follows from Lemma 4 that the matrices A and B are mutual inverses over an arbitrary ring R, where the entries of the matrices are understood to be embedded into R via the natural ring homomorphism z ↦ z_R = z · 1_R, where 1_R is the multiplicative identity element of R and z is an integer.
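A small numeric sanity check of Lemma 4 (ours): Python’s math.comb agrees with the convention (2.3) for nonnegative arguments, in particular returning 0 when the lower index exceeds the upper one.

```python
from math import comb

def pascal_matrices(n):
    """The matrices A and B of Lemma 4, with entries as in (2.4)."""
    A = [[comb(j, i) for j in range(n + 1)] for i in range(n + 1)]
    B = [[(-1) ** (i + j) * comb(j, i) for j in range(n + 1)]
         for i in range(n + 1)]
    return A, B

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def is_identity(M):
    return all(M[i][j] == (i == j) for i in range(len(M)) for j in range(len(M)))

A, B = pascal_matrices(6)
assert is_identity(matmul(A, B)) and is_identity(matmul(B, A))
```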
2.3. Proof of Theorem 1

We first describe the algorithm and then prove its correctness. All arithmetic in the evaluations, and all derivations in subsequent proofs, are carried out in the ring R. Let F ⊆ 2^U and G ⊆ 2^U be given as input to the algorithm. The circuit is a sequence of three “modules” starting at the input gates for f : F → R.

1. Up-transform. Evaluate the up-zeta transform

    g = fζ↑ on ↓F                    (2.5)

with a circuit of size O∗(|↓F|) using Lemma 3(1). Observe that (2.1) implies that all nonzero values of fζ↑ occur at sets in ↓F.

2. Down-transform by rank. For each i = 0, 1, . . . , n, evaluate g^(i), the component of g with rank i, on ↓F; that is, for all X ∈ ↓F, set

    g^(i)(X) = g(X) if |X| = i, and g^(i)(X) = 0 otherwise.                    (2.6)

Then, for each i = 0, 1, . . . , n, evaluate

    y_i = g^(i)ζ↓ on G                    (2.7)

with a circuit of size O∗(|↓F| + |↓G|) using Lemma 3(3).

3. Recover the intersection transform. Let B_R be the matrix in Lemma 4 with entries embedded into R. Associate with each Y ∈ G the column vector

    y(Y) = (y_0(Y), y_1(Y), . . . , y_n(Y))^T.

For each Y ∈ G, evaluate the column vector x(Y) = (x_0(Y), x_1(Y), . . . , x_n(Y))^T as the matrix–vector product

    x(Y) = B_R y(Y).                    (2.8)

Because the matrix B_R is fixed, this can be implemented with O∗(|G|) fixed R-arithmetic gates.

The circuit thus consists of O∗(|↓F| + |↓G|) R-arithmetic gates. It remains to show that the circuit actually evaluates the intersection transform of f. (An untrimmed reference implementation of the three modules is sketched after the proof of Lemma 5 below.)

Lemma 5. For all Y ∈ G and j = 0, 1, . . . , n it holds that x_j(Y) = f^ι_j(Y).

Proof. Let Y ∈ G and i = 0, 1, . . . , n. Consider the following derivation:

    y_i(Y) = Σ_{Z⊆Y : |Z|=i} Σ_{X∈F : Z⊆X} f(X)
           = Σ_{X∈F} f(X) Σ_{Z⊆X∩Y : |Z|=i} 1_R
           = Σ_{X∈F} f(X) binom(|X∩Y|, i)_R
           = Σ_{j=0}^{n} binom(j, i)_R Σ_{X∈F : |X∩Y|=j} f(X)
           = Σ_{j=0}^{n} (a_ij)_R f^ι_j(Y).                    (2.9)

Here the first equality expands the definitions (2.7), (2.2), (2.6), (2.5), and (2.1). The second equality follows by changing the order of summation and observing that Z ⊆ X ∩ Y if and only if both Z ⊆ X and Z ⊆ Y. The fourth equality follows by collecting the terms with |X ∩ Y| = j together. The last equality follows from (2.4) and (1.1).

Now let j = 0, 1, . . . , n, and observe that (2.8), (2.9), and Lemma 4 imply

    x_j(Y) = Σ_{i=0}^{n} (b_ji)_R y_i(Y)
           = Σ_{i=0}^{n} (b_ji)_R Σ_{k=0}^{n} (a_ik)_R f^ι_k(Y)
           = Σ_{k=0}^{n} Σ_{i=0}^{n} (b_ji a_ik)_R f^ι_k(Y)
           = Σ_{k=0}^{n} ( Σ_{i=0}^{n} b_ji a_ik )_R f^ι_k(Y)
           = Σ_{k=0}^{n} [j = k]_R f^ι_k(Y)
           = f^ι_j(Y).
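The following untrimmed sketch (ours, over the integers, with every intermediate value computed on all of 2^U rather than only on the down-closures) mirrors the three modules and reproduces the conclusion of Lemma 5 on a small instance; it illustrates only correctness, not the O∗(|↓F| + |↓G|) circuit size.

```python
from itertools import combinations
from math import comb

def intersection_transform_modular(f, U, G):
    """Untrimmed sketch of the three modules in the proof of Theorem 1.
    f: dict mapping frozensets (the family F) to integers; U: ground set;
    G: iterable of frozensets.  Returns x[(j, Y)] = f^iota_j(Y) for Y in G."""
    n = len(U)
    all_sets = [frozenset(S) for k in range(n + 1)
                for S in combinations(sorted(U), k)]
    # Module 1: up-zeta transform g = f zeta-up (here evaluated everywhere).
    g = {S: sum(v for X, v in f.items() if S <= X) for S in all_sets}
    # Module 2: down-zeta transform of each rank-i component of g, on G.
    y = {(i, Y): sum(g[S] for S in all_sets if len(S) == i and S <= Y)
         for i in range(n + 1) for Y in G}
    # Module 3: recover the transform by multiplying with the matrix B of
    # Lemma 4, i.e. x_j(Y) = sum_i (-1)^(j+i) * binom(i, j) * y_i(Y).
    return {(j, Y): sum((-1) ** (j + i) * comb(i, j) * y[(i, Y)]
                        for i in range(n + 1))
            for Y in G for j in range(n + 1)}

# Agreement with the definition (1.1) on a small instance:
U = {1, 2, 3}
f = {frozenset({1}): 1, frozenset({1, 2}): 1}
G = [frozenset({1, 3})]
print(intersection_transform_modular(f, U, G)[(1, frozenset({1, 3}))])  # 2
```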
3. Counting paths

3.1. Preliminaries

We require some preliminaries before proceeding with the proof of Theorem 2. For basic graph-theoretic terminology we refer to West [25]. Let D be an n-vertex digraph with vertex set V and edge set E, possibly with loops and parallel edges. (However, to avoid further technicalities in the bound (1.3), we assume that the number of edges in D is bounded from above by a polynomial in n.) Associated with each edge e ∈ E is a weight w(e) ∈ {0, 1, . . .}. For an edge e ∈ E, denote by e^− (respectively, e^+) the start vertex (respectively, the end vertex) of e.

It is convenient to work with the terminology of walks instead of paths. A walk of length ℓ in D is a tuple W = (v_0, e_1, v_1, e_2, v_2, . . . , v_{ℓ−1}, e_ℓ, v_ℓ) such that v_0, v_1, . . . , v_ℓ ∈ V, e_1, e_2, . . . , e_ℓ ∈ E, and, for each i = 1, 2, . . . , ℓ, it holds that e_i^− = v_{i−1} and e_i^+ = v_i. The walk W is said to be from v_0 to v_ℓ. A walk is simple if v_0, v_1, . . . , v_ℓ are distinct vertices. The set of distinct vertices occurring in a walk is the support of the walk. We denote the support of a walk W by supp(W). The weight of a walk W is the sum of the weights of the edges in the walk; a walk with no edges has zero weight. We write w(W) for the weight of W.

For s, t ∈ V and S ⊆ V we denote by W_{s,t}(S) the set of all simple walks from s to t with support S. Observe that W_{s,t}(S) is empty unless both s ∈ S and t ∈ S. Let z be a polynomial indeterminate, and define an associated polynomial generating function by

    f_{s,t}(S) = Σ_{W ∈ W_{s,t}(S)} z^{w(W)}.                    (3.1)

Put otherwise, the coefficient of each monomial z^w of f_{s,t}(S) enumerates the simple walks from s to t with support S and weight w.

For k = 0, 1, . . . , n, denote by binom(V, k) the set of all k-subsets of V. For ℓ = 0, 1, . . . , n − 1, define a polynomial generating function by

    g_{s,t}(ℓ) = Σ_{J ∈ binom(V, ℓ+1)} f_{s,t}(J).                    (3.2)

Put otherwise, the coefficient of each monomial z^w of g_{s,t}(ℓ) enumerates the simple walks from s to t with length ℓ and weight w.

3.2. Proof of Theorem 2

Let B ∈ {0, 1, . . .} be fixed. Let D be a digraph with n vertices and edge weights w(e) ∈ {0, 1, . . . , B} for all e ∈ E. Let s, t ∈ V. Let ℓ = 0, 1, . . . , n − 1. With the objective of eventually applying Theorem 1, let U = V and let R be the univariate polynomial ring over z with integer coefficients. To compute g_{s,t}(ℓ), proceed as follows.

First observe that the generating polynomials (3.1) can be computed by the following recursion on subsets of V. The singleton sets {s} ⊆ V, s ∈ V, form the base case of the recursion:

    f_{s,s}({s}) = 1.                    (3.3)

The recursive step is defined for all s, t ∈ V and S ⊆ V, |S| ≥ 2, by

    f_{s,t}(S) = Σ_{a ∈ S∖{t}} f_{s,a}(S ∖ {t}) Σ_{e ∈ E : e^− = a, e^+ = t} z^{w(e)}.                    (3.4)

(A direct implementation of this recursion, restricted to supports of bounded size, is sketched at the end of this section.) Now, using (3.3) and (3.4), evaluate

    p_{s,a} = f_{s,a} on binom(V, ⌊ℓ/2⌋ + 1)                    (3.5)

for each a ∈ V. Then, using (3.3) and (3.4) again, evaluate

    q_{a,t} = f_{a,t} on binom(V, ⌈ℓ/2⌉ + 1).                    (3.6)

Next, using the algorithm in Theorem 1 with F = binom(V, ⌈ℓ/2⌉ + 1) and G = binom(V, ⌊ℓ/2⌋ + 1), evaluate

    r_{a,t} = (q_{a,t})^ι_1 on binom(V, ⌊ℓ/2⌋ + 1).                    (3.7)

Finally, evaluate the right-hand side of

    g_{s,t}(ℓ) = Σ_{a ∈ V} Σ_{S ∈ binom(V, ⌊ℓ/2⌋+1)} p_{s,a}(S) r_{a,t}(S)                    (3.8)

by direct summation. The entire evaluation can thus be carried out with an R-arithmetic circuit of size

    O∗( |↓binom(V, ⌈ℓ/2⌉+1)| + |↓binom(V, ⌊ℓ/2⌋+1)| )                    (3.9)

that can be constructed in similar time.

To justify the equality in (3.8), consider the following derivation:

    Σ_{a ∈ V} Σ_{S ∈ binom(V, ⌊ℓ/2⌋+1)} p_{s,a}(S) r_{a,t}(S)
      = Σ_{a ∈ V} Σ_{S ∈ binom(V, ⌊ℓ/2⌋+1)} f_{s,a}(S) Σ_{T ∈ binom(V, ⌈ℓ/2⌉+1) : |S∩T|=1} f_{a,t}(T)
      = Σ_{a ∈ V} Σ_{S ∈ binom(V, ⌊ℓ/2⌋+1)} Σ_{T ∈ binom(V, ⌈ℓ/2⌉+1) : |S∩T|=1} Σ_{W_{sa} ∈ W_{s,a}(S)} Σ_{W_{at} ∈ W_{a,t}(T)} z^{w(W_{sa}) + w(W_{at})}
      = Σ_{a ∈ V} Σ_{S ∈ binom(V, ⌊ℓ/2⌋+1)} Σ_{T ∈ binom(V, ⌈ℓ/2⌉+1) : S∩T={a}} Σ_{W_{sa} ∈ W_{s,a}(S)} Σ_{W_{at} ∈ W_{a,t}(T)} z^{w(W_{sa}) + w(W_{at})}
      = Σ_{J ∈ binom(V, ℓ+1)} Σ_{W ∈ W_{s,t}(J)} z^{w(W)}
      = g_{s,t}(ℓ).

Here the first two equalities expand (3.5), (3.7), (1.1), (3.6), and (3.1). The third equality follows by observing that W_{s,a}(S) and W_{a,t}(T) are both nonempty only if a ∈ S and a ∈ T; thus, |S ∩ T| = 1 implies that only terms with S ∩ T = {a} appear in the sum. The fourth equality is justified as follows. First observe that an arbitrary walk W of length ℓ from s to t has the property that there exists a J ∈ binom(V, ℓ+1) with supp(W) = J if and only if the walk is simple. Moreover, a simple walk W of length ℓ from s to t has a bijective decomposition W ↦ (W_{sa}, W_{at}) into two simple subwalks, W_{sa} and W_{at}, with supp(W_{sa}) ∩ supp(W_{at}) = {a} for some a ∈ V. Indeed, W_{sa} is the length-⌊ℓ/2⌋ prefix of W from s to some a ∈ V, and W_{at} is the length-⌈ℓ/2⌉ suffix of W from a to t. Conversely, prepend W_{sa} to W_{at}, deleting one occurrence of a in the process, to get W. The fifth equality follows from (3.2) and (3.1).

It remains to analyse the total running time of constructing and evaluating the circuit in terms of n and ℓ. Because B is fixed, all the ring operations are carried out on polynomials of degree at most Bn = O(n). Moreover, denoting by m the number of edges in D, the coefficients in the polynomials are integers bounded in absolute value by 2^m 2^{5n}, where 2^m is an upper bound for the coefficients in (3.1) and (3.2), and 2^{5n} is an upper bound for the expansion in intermediate values in the transforms. (Both bounds are far from tight.) Recalling that we assume that m is bounded from above by a polynomial in n, we have that the coefficients can be represented using a number of bits that is bounded from above by a polynomial in n. It follows that each ring operation runs in time bounded from above by a polynomial in n.

To conclude that the algorithm runs within the claimed upper bound (1.3), combine (3.9) with the observation that for every 0 < p ≤ 1/2 it holds that

    Σ_{k=0}^{⌊np⌋} binom(n, k) ≤ exp(H(p) · n),                    (3.10)

where H is the binary entropy function (1.2). (For a proof of (3.10), see Jukna [17, p. 283].)
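To make the recursion (3.3)–(3.4) concrete, the sketch below (ours) tabulates the generating polynomials f_{s,t}(S) for all supports S containing s up to a given size, representing a polynomial in z as a dict from weights to coefficients. It corresponds to the evaluations (3.5) and (3.6) only; the trimming and gluing steps (3.7)–(3.8), which require Theorem 1, are not reproduced.

```python
from itertools import combinations

def walk_polys(V, edges, s, max_size):
    """Recursion (3.3)-(3.4): returns a dict f with f[(t, S)] equal to the
    generating polynomial (as {weight: count}) of simple walks from s to t
    with support S, for all S containing s with |S| <= max_size.
    edges is an iterable of (u, v, w) triples with integer weight w."""
    # Sum of z^w(e) over the edges e from a to t, per ordered pair (a, t).
    edge_poly = {}
    for (u, v, w) in edges:
        d = edge_poly.setdefault((u, v), {})
        d[w] = d.get(w, 0) + 1
    f = {(s, frozenset([s])): {0: 1}}          # base case (3.3)
    others = sorted(v for v in V if v != s)
    for size in range(2, max_size + 1):
        for rest in combinations(others, size - 1):
            S = frozenset(rest) | {s}
            for t in rest:                     # recursive step (3.4)
                poly = {}
                for a in S - {t}:
                    fa = f.get((a, S - {t}))
                    ea = edge_poly.get((a, t))
                    if fa and ea:
                        for w1, c1 in fa.items():
                            for w2, c2 in ea.items():
                                poly[w1 + w2] = poly.get(w1 + w2, 0) + c1 * c2
                if poly:
                    f[(t, S)] = poly
    return f

# Directed triangle 1 -> 2 -> 3 -> 1 with unit weights: exactly one simple
# walk of length 2 (weight 2) from 1 to 3, with support {1, 2, 3}.
f = walk_polys({1, 2, 3}, [(1, 2, 1), (2, 3, 1), (3, 1, 1)], 1, 3)
print(f[(3, frozenset({1, 2, 3}))])            # {2: 1}
```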
References

[1] N. Alon, R. Yuster, U. Zwick, Color-coding, J. Assoc. Comput. Mach. 42 (1995), 844–856.
[2] D. L. Applegate, R. E. Bixby, V. Chvátal, W. J. Cook, The Traveling Salesman Problem: A Computational Study, Princeton University Press, 2006.
[3] R. Bellman, Combinatorial processes and dynamic programming, Combinatorial Analysis, Proceedings of Symposia in Applied Mathematics 10, American Mathematical Society, 1960, pp. 217–249.
[4] R. Bellman, Dynamic programming treatment of the travelling salesman problem, J. Assoc. Comput. Mach. 9 (1962), 61–63.
[5] A. Björklund, T. Husfeldt, P. Kaski, M. Koivisto, Fourier meets Möbius: fast subset convolution, 39th Annual ACM Symposium on Theory of Computing (STOC 2007), ACM, 2007, pp. 67–74.
[6] A. Björklund, T. Husfeldt, P. Kaski, M. Koivisto, Trimmed Moebius inversion and graphs of bounded degree, 25th International Symposium on Theoretical Aspects of Computer Science (STACS 2008), Dagstuhl Seminar Proceedings 08001, IBFI Schloss Dagstuhl, 2008, pp. 85–96.
[7] A. Björklund, T. Husfeldt, P. Kaski, M. Koivisto, The travelling salesman problem in bounded degree graphs, 35th International Colloquium on Automata, Languages and Programming (ICALP 2008), Part I, LNCS 5125, Springer, 2008, pp. 198–209.
[8] M. Charikar, P. Indyk, R. Panigrahy, New algorithms for subset query, partial match, orthogonal range searching, and related problems, 29th International Colloquium on Automata, Languages and Programming (ICALP 2002), Part I, LNCS 2380, Springer, 2002, pp. 451–462.
[9] J. Chen, S. Lu, S. Sze, F. Zhang, Improved algorithms for path, matching, and packing problems, 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2007), SIAM, 2007, pp. 298–307.
[10] J. W. Cooley, J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comp. 19 (1965), 297–301.
[11] D. Eppstein, The traveling salesman problem for cubic graphs, J. Graph Algorithms Appl. 11 (2007), 61–81.
[12] H. Gebauer, On the number of Hamilton cycles in bounded degree graphs, 4th Workshop on Analytic Algorithmics and Combinatorics (ANALCO 2008), SIAM, 2008.
[13] R. L. Graham, D. E. Knuth, O. Patashnik, Concrete Mathematics, 2nd ed., Addison–Wesley, 1994.
[14] G. Gutin, A. P. Punnen (Eds.), The Traveling Salesman Problem and its Variations, Kluwer, 2002.
[15] M. Held, R. M. Karp, A dynamic programming approach to sequencing problems, J. Soc. Indust. Appl. Math. 10 (1962), 196–210.
[16] K. Iwama, T. Nakashima, An improved exact algorithm for cubic graph TSP, 13th Annual International Conference on Computing and Combinatorics (COCOON 2007), LNCS 4598, Springer, 2007, pp. 108–117.
[17] S. Jukna, Extremal Combinatorics, Springer, 2001.
[18] R. M. Karp, Dynamic programming meets the principle of inclusion and exclusion, Oper. Res. Lett. 1 (1982), 49–51.
[19] R. Kennes, Computational aspects of the Moebius transform of a graph, IEEE Transactions on Systems, Man, and Cybernetics 22 (1991), 201–223.
[20] S. Kohn, A. Gottlieb, M. Kohn, A generating function approach to the traveling salesman problem, ACM Annual Conference (ACM 1977), ACM Press, 1977, pp. 294–300.
[21] I. Koutis, Faster algebraic algorithms for path and packing problems, 35th International Colloquium on Automata, Languages and Programming (ICALP 2008), Part I, LNCS 5125, Springer, 2008, pp. 575–586.
[22] J. Kneis, D. Mölle, S. Richter, P. Rossmanith, Divide-and-color, 32nd International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2006), LNCS 4271, Springer, 2006, pp. 58–67.
[23] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan, D. B. Shmoys (Eds.), The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley, 1985.
[24] B. Monien, How to find long paths efficiently, Ann. Discrete Math. 25 (1985), 239–254.
[25] D. B. West, Introduction to Graph Theory, 2nd ed., Prentice–Hall, 2001.
[26] R. Williams, Finding paths of length k in O∗(2^k) time, arXiv:0807.3026, July 2008.
[27] G. J. Woeginger, Exact algorithms for NP-hard problems: A survey, Combinatorial Optimization – Eureka, You Shrink!, LNCS 2570, Springer, 2003, pp. 185–207.
[28] F. Yates, The Design and Analysis of Factorial Experiments, Technical Communication 35, Commonwealth Bureau of Soils, Harpenden, U.K., 1937.