Threshold Dominating Sets and
An Improved Characterization of W[2]
Rodney G. Downey
Department of Mathematics
P.O. Box 600
Victoria University
Wellington, New Zealand
downey@maths.vuw.ac.nz
Michael R. Fellows
Department of Computer Science
University of Victoria
Victoria, British Columbia
V8W 3P6, Canada
mfellows@csr.uvic.ca
July 7, 2010
Abstract
The Threshold Dominating Set problem is that of determining for a graph
G = (V, E) whether there is a subset V′ ⊆ V of size k such that for each vertex v ∈ V
there are at least r elements of the closed neighborhood N[v] that belong to V′. We
consider the complexity of the problem parameterized by the pair (k, r). It is trivial to
observe that this is hard for W[2]. It can also easily be shown to belong to a natural
extension W*[2] of W[2]. We prove membership in W[2] and thus W[2]-completeness.
Using this as a starting point, we prove that W*[2] = W[2].
1 Introduction
At issue in the study of parameterized computational complexity is how different parts of
the input to a problem contribute to its difficulty. The following familiar problems are all
concerned with the existence of sets of vertices of size k in a graph G = (V, E) having various
properties: Minimum Dominating Set, Feedback Vertex Set, Vertex Cover, and
Independent Set (definitions can be found in [?]). All of these problems are NP-complete,
yet the parameter k seems to contribute in very different ways to the complexity of these
problems. Vertex Cover and Feedback Vertex Set can be solved in time f(k)|V|^c
for suitable (exponential) functions f, while for Minimum Dominating Set and
Independent Set the best known algorithms have a running time of O(|V|^{ck}).
Both outcomes are “compatible” with NP-hardness, yet in many applied situations (e.g.,
where small parameter values are of interest) we may wish to distinguish these two qualitatively
different kinds of complexity behaviour. This can be viewed as one possible systematic
way of “dealing with NP-completeness”.
The framework of parameterized complexity theory pioneered in [?, ?, ?, ?] allows us to
explore this issue. Both structural results concerning parameterized complexity, and concrete
complexity classifications of particular problems, are typically quite a bit more difficult than
analogous classical theorems. For example, all four of the problems mentioned above can be
shown to be NP-complete by relatively easy combinatorial reductions. In contrast, the proofs
that Dominating Set is W[2]-complete [?] and that Independent Set is W[1]-complete
[?] are quite intricate.
In this paper, we begin by examining the complexity of the following natural problem.
Threshold Dominating Set
Input: A graph G = (V, E) and positive integers k and r.
Parameter: (k, r)
Question: Is there a set of vertices V′ ⊆ V such that: (1) |V′| ≤ k, and (2) ∀v ∈ V:
|N[v] ∩ V′| ≥ r? (Here N[v] denotes the closed neighborhood of v, that is, N[v] = {u : u =
v or uv ∈ E}.)
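For concreteness, the question can be checked by brute force. The sketch below is our own illustration, not taken from the paper; a graph is encoded as a dict mapping each vertex to its neighbor set. It enumerates all candidate sets V′ of size at most k, and its O(|V|^k) running time reflects the behaviour discussed in the introduction.

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    """N[v]: the vertex v together with its neighbors."""
    return {v} | adj[v]

def has_threshold_dominating_set(adj, k, r):
    """Brute-force test: is there V' with |V'| <= k such that every
    vertex v has at least r elements of N[v] in V'?"""
    vertices = list(adj)
    for size in range(k + 1):
        for candidate in combinations(vertices, size):
            chosen = set(candidate)
            if all(len(closed_neighborhood(adj, v) & chosen) >= r
                   for v in adj):
                return True
    return False

# A triangle: every closed neighborhood is the whole vertex set.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(has_threshold_dominating_set(triangle, 2, 2))  # True
print(has_threshold_dominating_set(triangle, 1, 2))  # False
```

With r = 1 this is exactly a Dominating Set test, matching the observation below that the r = 1 case specializes to the usual problem.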
Since this problem (for r = 1) includes the usual Dominating Set problem as a special
case, we can observe that Threshold Dominating Set is hard for W[2] by the results of
[?]. By showing membership in W[2] we establish:
Theorem 1. Threshold Dominating Set is complete for W[2].
The method used to prove Theorem 1 is applicable to the following variant of Satisfiability.
(n, k, n)-Weighted Satisfiability (WSat)
Input: A boolean expression E that is an n-product of k-sums of n-products.
Parameter: k
Question: Is there a satisfying truth assignment for the variables of E that has weight k?
(That is, one that assigns exactly k variables to be true and the rest to be false.)
Note that in the definition of the above problem the parameter k plays two different roles,
in the structure of E, and also in the specification of the weight of the truth assignment. (By
padding, one can easily generalize the definition to allow that the sums have arity bounded
by f (k) for a fixed arbitrary function f .) We prove:
Theorem 2. (n, k, n)-WSat is complete for W[2].
The W[t] degrees as they are defined and characterized in [?] are natural and useful
because it is generally more-or-less straightforward to express a problem in logic, and this
then gives membership information concerning the W hierarchy. The definition of W[t] given
in [?] is that a parameterized language L is in W[t] if it is reducible to the k-Weighted
Circuit Satisfiability problem for a family of circuits C satisfying:
(1) the weft of any circuit C ∈ C is at most t, where the weft of a circuit is the depth counting
only large gates, and small gates have fan-in bounded by a constant c, and
(2) the depth (counting both large and small gates) of any circuit C ∈ C is bounded by a
constant c′.
We can define a natural extension W*[t] by relaxing the above definition with the requirements:
(1′) the weft of any circuit C ∈ C is at most t, but any gate with fan-in bounded by an
arbitrary function h_C(k) is considered small, and
(2′) the depth of any circuit C ∈ C is at most h′_C(k) for an arbitrary function h′_C.
In [?] it is shown that W*[1] = W[1], and this result is used to show that the data
complexity of monotone queries to relational databases is complete for W[1]. This problem
and the two discussed above are easily shown to belong to the “correct” W*[t] degree.
Our main result here is:
Theorem 3. W*[2] = W[2].
In §2 we review pertinent aspects of the framework of parameterized complexity. In
§3 we prove Theorems 1 and 2 concerning Threshold Dominating Set and (n, k, n)-Weighted
Satisfiability. In §4 we prove the main result. §5 concludes with a discussion
of some open problems.
2 Background on Parameterized Complexity
The formal framework of parameterized complexity is sketched as follows. We consider that
the input to a computational problem consists of two parts, one of which is expected to be
relatively small. Thus we consider a parameterized language L to be a set of pairs of strings,
L ⊆ Σ* × Σ*. For notational convenience, we may equivalently consider a parameterized
language L to be a subset of Σ* × N, where for (x, k) ∈ L, k is the parameter.
The fundamental concept of the theory is the notion of fixed-parameter tractability. We
say that a parameterized language L is fixed-parameter tractable (FPT) if it can be determined
in time f(k)n^c whether (x, k) ∈ L, where c is a constant independent of the parameter k
and n = |x| is the size of x.
Following naturally from the concept of fixed-parameter tractability is the appropriate
notion of parameterized problem reduction.
Definition. Let A, B be parameterized problems. We say that A is (uniformly many:1)
reducible to B if there is an algorithm Φ which transforms (x, k) into (x′, g(k)) in time
f(k)|x|^α, where f, g : N → N are arbitrary functions and α is a constant independent of k,
so that (x, k) ∈ A if and only if (x′, g(k)) ∈ B.
It is easy to see that if A reduces to B and B is fixed-parameter tractable then so too is A.
Note that if P = NP then a variety of natural parameterized versions of NP-complete problems
would be fixed-parameter tractable. Thus a completeness program is reasonable for establishing
apparent fixed-parameter intractability.
The classes of parameterized problems that we define below are intuitively based on the
complexity of the circuits required to check a solution. We first define circuits in which some
gates have bounded fan-in and some have unrestricted fan-in. It is assumed that fan-out is
never restricted.
Definition. A Boolean circuit is of mixed type if it consists of gates of the
following kinds.
(1) Small gates: ¬ gates, ∧ gates and ∨ gates with bounded fan-in. We will usually assume
that the bound on fan-in is 2 for ∧ gates and ∨ gates, and 1 for ¬ gates.
(2) Large gates: ∧ gates and ∨ gates with unrestricted fan-in.
Definition. The depth of a circuit C is defined to be the maximum number of gates (small
or large) on an input-output path in C. The weft of a circuit C is the maximum number of
large gates on an input-output path in C.
Definition. We say that a family of decision circuits F has bounded depth if there is a
constant h such that every circuit in the family F has depth at most h. We say that F has
bounded weft if there is a constant t such that every circuit in the family F has weft at most
t. The weight of a boolean vector x is the number of 1's in the vector.
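As a small illustration (ours, not the paper's), depth and weft can be computed for a circuit given as a DAG; the gate names and encoding below are hypothetical, and the small-gate fan-in bound is taken to be 2 as assumed above.

```python
def depth_and_weft(gates, small_bound=2):
    """gates maps each gate name to its list of predecessors; a
    predecessor absent from `gates` is an input line.  A gate is
    'large' when its fan-in exceeds small_bound.  Returns the pair
    (depth, weft): the maximum number of gates, and of large gates,
    on any input-output path."""
    memo = {}

    def walk(g):
        if g not in gates:          # an input line contributes nothing
            return (0, 0)
        if g in memo:
            return memo[g]
        d = w = 0
        for p in gates[g]:
            pd, pw = walk(p)
            d, w = max(d, pd), max(w, pw)
        is_large = len(gates[g]) > small_bound
        memo[g] = (d + 1, w + (1 if is_large else 0))
        return memo[g]

    # The output gate is the one no other gate lists as a predecessor.
    used = {p for preds in gates.values() for p in preds}
    output = next(g for g in gates if g not in used)
    return walk(output)

# Large AND of (large OR of small ANDs): depth 3, weft 2.
circuit = {
    "and1": ["x1", "x2"], "and2": ["x3", "x4"],   # small gates
    "or1": ["and1", "and2", "x5"],                # large OR
    "out": ["or1", "x6", "x7"],                   # large AND (output)
}
print(depth_and_weft(circuit))  # (3, 2)
```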
Definition. Let F be a family of decision circuits. We allow that F may have many different
circuits with a given number of inputs. To F we associate the parameterized circuit problem
L_F = {(C, k) : C accepts an input vector of weight k}.
Definition. A parameterized language L belongs to W[t] if L reduces to the parameterized
circuit problem L_{F(t,h)} for the family F(t, h) of mixed-type decision circuits of weft at most
t and depth at most h, for some constant h. A parameterized language L belongs to W*[t]
if it belongs to W[t] where the definition of a small gate has been revised to allow fan-in
bounded by a fixed arbitrary function of k, and where the depth of a circuit is allowed to be
a function of k as well.
Many well-known problems in various areas of computer science have been shown to be
complete or hard for various levels of the W hierarchy of parameterized complexity classes
[?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?].
3 Threshold Domination and a Related Result
In this section we introduce a fairly simple technique that appears to be quite useful. We
first apply it to the Threshold Dominating Set problem defined in §1.
Theorem 1. Threshold Dominating Set is complete for W[2].
Proof. Hardness for W [2] follows from [?] by taking r = 1, which yields the ordinary
Dominating Set problem as a special case.
To show membership in W[2], let G = (V, E), k and r be given. We look at the defining
property in the following way. We are allowed k choices C_1, ..., C_k of a vertex of G. Let
S(k, r) denote the set of r-element subsets of {1, ..., k}. The question is whether we can
make the k choices so that:

∀u ∈ V, ∃J ∈ S(k, r), ∀j ∈ J, ∃v ∈ N[u] with C_j = v

The two inner quantifications range over index sets of size bounded by functions of k.
This allows a reformulation into ∀∃ form by a blowup exponential in k. The resulting formula
is then entirely of the form ∀∃, corresponding to the conjunctive normal form required for
W[2].
The details are as follows. We describe how to produce a boolean expression E in
product-of-sums form that is satisfiable by a weight-k truth assignment if and only if G has
a k-vertex r-threshold dominating set.
The set of boolean variables for E is:

X = {c[i, u] : 1 ≤ i ≤ k, u ∈ V}

The intended meaning of c[i, u] is: “the ith choice of a vertex of V′ is vertex u”.
Let E′ = E′_1 · E_2 where

E′_1 = ∏_{u∈V} ∑_{J∈S(k,r)} ∏_{j∈J} ∑_{v∈N[u]} c[j, v]

and

E_2 = ∏_{i=1}^{k} ∏_{u≠u′} (¬c[i, u] + ¬c[i, u′])

The role of E_2 is to ensure that, for each i, exactly one boolean variable c[i, u] is assigned
the value true in any weight-k truth assignment that satisfies E′. The expression E is
obtained from E′ by replacing each inner sum-of-products by an equivalent product-of-sums.
We leave it to the reader to check that the reduction works correctly. □
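The final step of the proof, replacing each inner sum-of-products by an equivalent product-of-sums, is plain distribution: pick one literal from each product in every possible way. A minimal sketch (our own encoding: a sum-of-products as a list of lists of literals); the output size is exponential in the number of products, which in the proof is bounded by a function of k.

```python
from itertools import product

def sop_to_pos(sop):
    """Distribute a sum-of-products into the equivalent
    product-of-sums: one sum (clause) for each way of choosing a
    literal from every product."""
    return [list(choice) for choice in product(*sop)]

# (a.b) + (c.d)  ==  (a+c)(a+d)(b+c)(b+d)
print(sop_to_pos([["a", "b"], ["c", "d"]]))
# [['a', 'c'], ['a', 'd'], ['b', 'c'], ['b', 'd']]
```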
We next turn to a natural problem concerning the satisfiability of logical expressions
that is easily shown to belong to W*[2], although membership in W[2] is not obvious. We
make a general definition that will be useful in §4.
Definition. (n, k, n)-Weighted Satisfiability (WSat) is the decision problem that
takes as input a boolean expression E that is an n-product of k-sums of n-products; the
question is whether there is a weight-k truth assignment that satisfies E, and the parameter
is k. Generalize this in the natural way for any string of n's, k's and c's, where c denotes
a constant. Thus (n, k, n, c)-WSat takes as input an n-product of k-sums of n-products of
constant-sized sums, for a fixed constant c. In order to conveniently refer to the expressions
themselves, define a π(n, k, n)-expression to be an n-product of k-sums of n-products, and
similarly define a σ(n, k, n)-expression to be an n-sum of k-products of n-sums, etc., where
products and sums alternate, π and σ indicate what happens first, and the vector indicates
the index set sizes.
Theorem 2. (n, k, n)-WSat is complete for W[2].
Proof. By the antimonotone change of variables technique used in [?], we can assume that
all literals appearing in the (n, k, n)-WSat expression E are negated. That is, we can assume
that E has the following form (assume that V is the set of boolean variables of E):

E = ∏_{i=1}^{n} ∑_{j=1}^{k} F(i, j)

where each subexpression F(i, j) has the form

F(i, j) = ∏_{v∈V(i,j)} ¬v

for some set of boolean variables V(i, j) ⊆ V.
It is easy to see that from E we could obtain an equivalent expression by replacing each
subexpression F(i, j) with an expression that calculates whether k boolean variables have
been set true in the complementary set of variables V′(i, j) = V − V(i, j). Thus a product of
negated variables can be replaced by a “threshold” calculation concerning the (monotone)
set of complementary variables.
The details are as follows.
We may take the set of boolean variables and E_2 to be exactly as in the proof of Theorem
1. The meaning that we here associate to c[i, u] is: “the ith choice of a variable (of V) to be
set true is variable u”.
Let E_1 be the expression that results from replacing each subexpression F(i, j) in E
with:

∏_{h=1}^{k} ∑_{u∈V′(i,j)} c[h, u]
We also need to ensure for this problem (note that for Threshold Dominating Set
the analogous issue is handled implicitly) that the k choices of variables (of V) to be set true
are all distinct. The following subexpression will accomplish this.

E_3 = ∏_{1≤i<i′≤k} ∏_{u∈V} (¬c[i, u] + ¬c[i′, u])
The reader can easily verify that the expression E′ defined over the set of variables X by
E′ = E_1 · E_2 · E_3 has a weight-k truth assignment iff E has a weight-k truth assignment. Note
that E′ is an instance of (n, k, k, n)-WSat. By replacing the “k-sized” sums-of-products with
equivalent products-of-sums as in the proof of Theorem 1, we obtain an equivalent expression
in conjunctive normal form, at the cost of a blowup factor that is exponential in k.
This completes our argument that (n, k, n)-WSat is in W[2], and it remains to show
that it is hard for W[2]. (This is not entirely obvious at first glance, since all of the large
gates are conjunctions.)
However, we may make an argument that again illustrates aspects of this technique. We
reduce from the W[2]-hard problem Dominating Set. Let G = (V, E) denote the graph of
interest. As in the proof of Theorem 1, view the property of having a dominating set of size
k as whether we can make k vertex choices C_1, ..., C_k so that:

∀u ∈ V, ∃j ∈ {1, ..., k}, ∀v ∈ V − N[u], C_j ≠ v

The rest of the argument is by now routine. □
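For concreteness, the quantified condition above (equivalently: each u has some choice C_j inside N[u]) can be evaluated directly. The sketch below is our own illustration; distinctness of the choices, handled in the reduction by the subexpression E_3, is obtained here by simply enumerating vertex subsets.

```python
from itertools import combinations

def choices_dominate(adj, choices):
    """Evaluate the antimonotone condition: for every vertex u and
    every v outside the closed neighborhood N[u], some choice C_j
    avoids all such v -- i.e. some choice lies inside N[u]."""
    def closed(u):
        return {u} | adj[u]
    return all(
        any(all(c != v for v in set(adj) - closed(u)) for c in choices)
        for u in adj
    )

def has_dominating_set(adj, k):
    return any(choices_dominate(adj, c) for c in combinations(adj, k))

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # a path on 4 vertices
print(has_dominating_set(path, 1))  # False
print(has_dominating_set(path, 2))  # True
```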
4 An Improved Characterization of W[2]
In this section we prove the important theorem: W*[2] = W[2]. An overview of the proof is
sketched as follows. In §4.1 we recall a basic lemma on the normalization of W*[t] circuits first
proved in [?]. In §4.2 we define an abstract notion of a change of variables for parameterized
satisfiability problems, and extract several useful results from the previous literature on this
subject. In §4.3 we prove a lemma extending Theorem 2 that will be useful in the main
argument. In §4.4 the proof of the theorem is assembled.
The reason the theorem is significant is that logic is a very good tool for establishing
“upper” (class membership) bounds on parameterized complexity. One would therefore
like the logical characterization of the W[t] classes to provide as much expressive power as
possible. While it is nontrivial to see that Threshold Dominating Set belongs to W[2]
under the definition given heretofore, that it is complete for W[2] is quite easy given the stronger
characterization provided by our theorem. W[2] is one of the most important and natural
parameterized degrees, with quite a few well-known problems being precisely W[2]-complete.
4.1 Circuit Normalization
Our proof of the main theorem starts with an important lemma concerning W*[t] that says
essentially that the circuits can be put in a normal form that corresponds to a certain kind
of Boolean formula. The normal form is defined recursively as follows.
Definition. A Boolean expression E is in 1-alternating normal form with respect to n and
k if E is either π(n, k) or σ(n, k). A Boolean expression E is in t-alternating normal form
with respect to n and k, for t ≥ 2, if E is either of the form:

E = ∏_{i=1}^{n} ∑_{j=1}^{k} E_{ij}

or of the form:

E = ∑_{i=1}^{n} ∏_{j=1}^{k} E_{ij}

where each subexpression E_{ij} is (t − 1)-alternating normal with respect to n and k.
Note that the above definition can be equivalently phrased in terms of circuits in a
natural way. Define a tree circuit C to be in t-alternating normal form with respect to n and
k if it has 2t layers of gates, with the fan-in alternating between n and k starting with fan-in
n for the output gate.
Corresponding to this form we have the following parameterized problem.
t-Alternating Weighted Satisfiability
Input: A Boolean expression E in t-alternating normal form with respect to n and k.
Parameter: k
Question: Is there a weight k truth assignment that satisfies E?
Lemma 1 (Normalization) [?]. t-Alternating Weighted Satisfiability is complete
for W*[t] for all t ≥ 1.
Proof Sketch. Assume that the given circuit C has an output large gate, since an output
small gate subcircuit can be removed by employing additional nondeterminism in the manner
used in [DF95a]. The idea is basically to progressively analyze and “copy” the circuit C into
the required form, working progressively downward from the input level. In the first step,
we locate the topmost large gates. Let g denote such a gate, and let a_1, ..., a_r denote the
inputs to g. Suppose g is a ∧ gate. Each a_i is computed by a subcircuit consisting only of
small gates, and therefore the number of inputs to C on which a_i depends is bounded
by a function of k. Thus, a_i can be re-expressed (at a cost that is bounded by a function of
k) as a product-of-sums of total size bounded by a function of k. Thus g can be replaced by
a product-of-sums “copy-gate” g′, where there are at most f(k)n sums (for some function f)
and each sum has size bounded by some function g(k). Make a copy of g′ for each fan-out
line of g.
For the case where g is a large ∨ gate, we replace each argument a_i with an equivalent
sum-of-products, to obtain (for g′) an f(k)n-sum of g(k)-products.
We next identify the topmost layer of large gates below the inputs to C and the copy
gates created so far. Note that each copy gate has fan-out 1. We repeat the above recipe,
creating two more layers of copy gates (f′(k)n-products of g′(k)-sums and f′(k)n-sums of
g′(k)-products, for new functions f′ and g′). Eventually the entire circuit C is replaced by
the copy gates, and the resulting copy-gate circuit has the required form. □
4.2 Change of Variables
The study of parameterized satisfiability problems for boolean formulae has turned out to
be surprisingly challenging, and to involve a number of intricate tricks and combinatorial
gadgetry. In order to avoid simply repeating these, it seems useful to articulate an abstract
point of view about them, and to extract useful general results from the work that has been
done to date on parameterized satisfiability [?, ?, ?, ?, ?].
The following definition attempts to codify one of the most useful general tools in the
study of parameterized satisfiability.
Definition. A parameterized change of variables for a set of targeted subexpressions E over
a set of boolean variables X is a pair (r, φ), subject to various requirements, where:
(1) r is a replacement function mapping expressions in E to expressions over a set of variables
X′, and
(2) φ is an fpt algorithm that given X, k, and E computes:
(a) The set of new variables X′ = X′(E, X, k).
(b) The replacement subexpressions r(E_s) for each subexpression E_s ∈ E.
(c) A positive integer k′ that is purely a function of k.
(d) An enforcement expression E_0 = E_0(E, X, k).
We require that the following conditions be met:
(1) The old variables form a subset of the new variables: X ⊆ X′.
(2) For every boolean expression E over X, and for every E′ obtained from E by replacing
some number of subexpressions E_s with r(E_s) where E_s ∈ E:
(a) For every weight-k′ truth assignment τ for X′ that satisfies E′ ∧ E_0, the restriction of τ
to X has weight k and satisfies E.
(b) If there exists a weight-k truth assignment to X that satisfies E, then there exists a
weight-k′ truth assignment to X′ that satisfies E′.
Intuitively, what a parameterized change of variables does for us is allow us to work with
an expression E for which we wish to decide weight-k satisfiability in the following way: by
expanding the set of variables, and expanding k (in the parameterized sense), we can replace
those parts of E that belong to E with replacements given by r (that are presumably simpler
or more homogeneous). Yet we still have the original variables available and functioning in
an “undisturbed” fashion, since X ⊆ X′. This means that various changes of variables can
be combined (by identifying the sets X included in the new X′s). It might seem that the
requirements are so stringent that useful changes of variables might be hard to come by. In
fact, this is not the case, and an examination of the proofs in the papers [?, ?, ?, ?] (with
some minor modifications) shows that we have the following changes of variables to work
with, with enforcement expressions that are π(n, n).
Lemma 2 (Three Basic Changes of Variables). For each of the following specifications
there is a parameterized change of variables having an enforcement expression that is a
product-of-sums:
(1) Macro Change of Variables. The set of targeted subexpressions E consists of monotone
products of literals and antimonotone sums of literals. A monotone product can be replaced by
a single positive literal, and an antimonotone sum can be replaced by a single negative
literal.
(2) Monotone Change of Variables. The set of targeted subexpressions E is the set of all
negative literals over the set of variables X. A negative literal can be replaced by a large
product of positive literals over the set of new variables.
(3) Antimonotone Change of Variables. The set of targeted subexpressions E is the set of
all positive literals over X. A positive literal can be replaced by a large product of negative
literals.
Proof. The gadgetry for the three changes of variables can be found in [?, ?]. Some slight
elaboration from those proofs is necessary, but straightforward. We here allow ourselves an
enforcement expression that is an arbitrary product-of-sums. Because of this extra latitude,
it is easy to make the modifications necessary to meet the requirement that the set of old
variables X is a subset of the new variables X′ and that truth assignments to X′ satisfying E′
are conservative with respect to this subset in satisfying E. As the details would substantially
just repeat everything in these earlier papers, we leave them to the reader. □
4.3 An Extension to Theorem 2
Lemma 3. (n, k, n, c)-WSat is in W[2].
Proof. Let E be the expression we start with. By means of the antimonotone change of
variables we can reduce to WSat for a boolean expression E_1 = E_2 ∧ E_3 where E_2 is an
antimonotone (n, k, n, c, n)-expression and E_3 is the enforcement (n, n)-expression.
By distributing the c level upwards (at a blow-up cost of n^c, which is allowable, since
c is a constant that does not depend on k), the expression E_2 can be rearranged into an
equivalent antimonotone (n, k, n, c)-expression.
We can now apply a macro change of variables to complete a reduction of (n, k, n, c)-WSat
to (n, k, n)-WSat. Theorem 2 completes the proof. □
4.4 Proof of the Main Theorem
Theorem 3. W*[2] = W[2].
Proof. The proof proceeds through several steps with the eventual goal being an application
of Lemma 3.
Step 1. (n, k, n, k)-Normalization.
By Lemma 1 and a macro change of variables, we can assume that we are working
with an expression E in the form E = E_1 · E_2 where E_2 is a product-of-sums enforcement
expression, and

E_1 = ∏_{i=1}^{n} ∑_{j=1}^{k} (E(i, j) + F(i, j))

where:
(1) Each subexpression E(i, j) is a sum of products of k literals, only one of which is positive:

E(i, j) = ∑_{r=1}^{n} ( a(i, j, r, 1) · ∏_{s=2}^{k} ¬a(i, j, r, s) )

with a(i, j, r, s) ∈ X for r = 1, ..., n and s = 1, ..., k.
(2) Each subexpression F(i, j) is a product of sums of k literals, only one of which is negative:

F(i, j) = ∏_{r=1}^{n} ( ¬b(i, j, r, 1) + ∑_{s=2}^{k} b(i, j, r, s) )

with b(i, j, r, s) ∈ X for r = 1, ..., n and s = 1, ..., k.
Step 2. Analysis of Consequence Trees.
We begin this step working with the results of the last step, as will be true for each step
of the proof. We have an expression E = E_0 · E_1 where E_0 is a product-of-sums enforcement
expression. Through the remainder of the proof, the enforcement expressions will simply
accumulate and be carried along, and will always be denoted E_0. The part of E on which we
perform the various substitutions in this step is E_1.
To simplify notation we will always consider X to denote the set of variables for the
“current” step, that is, for the expression received at the beginning of a step.
For i ∈ {1, ..., n}, j ∈ {1, ..., k} and v ∈ X define the consequences of v in E(i, j) to be
the family of sets:

T_{i,j}(v) = { {x : ∃t ∈ {1, ..., k}, x = a(i, j, s, t)} : ∃s ∈ {1, ..., n}, v = a(i, j, s, 1) }

and define the consequences of v in F(i, j) to be the family of sets:

U_{i,j}(v) = { {x : ∃t ∈ {1, ..., k}, x = b(i, j, s, t)} : ∃s ∈ {1, ..., n}, v = b(i, j, s, 1) }

Let T denote the union of the sets T_{i,j} and let U denote the union of the sets U_{i,j}.
Let F be a family of sets of size at most k over the base set X, and consider the problem
of finding a set X′ of at most k elements of X that has nonempty intersection with each set
S ∈ F. By creating a k-branching tree of depth at most k, this problem can be completely
analyzed, with each possible solution corresponding to a leaf of the tree. Let L_{i,j}(v) denote
the leaf set for such an analysis of T_{i,j}(v) and let M_{i,j}(v) denote the leaf set for such an
analysis of U_{i,j}(v). Note that these leaf sets have size bounded by a function of k.
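The k-branching tree analysis can be sketched as follows (our own encoding, a family given as a list of Python sets; illustrative only). Each node branches on the at-most-k elements of some set not yet hit, to depth at most k, so there are at most k^k leaves, each a candidate solution, consistent with the bound on the leaf sets just noted.

```python
def leaf_set(family, k, chosen=frozenset()):
    """Enumerate the leaves of the k-branching, depth-<=k search tree:
    all ways of hitting every set in `family` with at most k elements,
    branching on the elements of the first set not yet hit."""
    unhit = [S for S in family if not (S & chosen)]
    if not unhit:
        return [chosen]            # a leaf: every set is hit
    if len(chosen) == k:
        return []                  # budget spent, dead branch
    leaves = []
    for x in sorted(unhit[0]):     # at most k branches per node
        leaves.extend(leaf_set(family, k, chosen | {x}))
    return leaves

family = [{1, 2}, {2, 3}, {3, 4}]
print([sorted(leaf) for leaf in leaf_set(family, 2)])
# [[1, 3], [2, 3], [2, 4]]
```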
In this step, we employ a macro change of variables in conjunction with a set of new
variables that allows us a “separated” representation of the variables set to 1 in a weighted
truth assignment. The set of new variables X′ is:

X′ = X′_1 ∪ X′_2

where X′_1 are the variables introduced by the macro change of variables and

X′_2 = {s[i, j] : 1 ≤ i ≤ k, 1 ≤ j ≤ |X|}

We may assume that the set of new variables X′_1 contains the macro variables

M = {m[S] : S ∈ T ∪ U}

and that the enforcement expression for the change of variables ensures that for any satisfying
truth assignment τ, τ(m[S]) = 1 if and only if ∀v ∈ S : τ(v) = 1 (which is the basic objective
of a macro change of variables). We will also assume that a copy of the old variables X is
included in X′_1 (as required by the definition of a change of variables) and that this copy is
denoted

X = {x[i] : 1 ≤ i ≤ |X|}
The expression E′ produced by this step has the description:

E′ = E′_0 · E′_1

where E′_0 is the accumulating product-of-sums enforcement expression. This includes the
following that has the effect of enforcing the separated representation mechanism:

∏_{i=1}^{k} ∑_{j=1}^{|X|} s[i, j]

and

∏_{1≤i<i′≤k} ∏_{1≤j<j′≤|X|} (¬s[i, j] + ¬s[i, j′])

and

∏_{i=1}^{k} ∏_{j=1}^{|X|} (s[i, j] + ¬x[j])(¬s[i, j] + x[j])

The parameter k′ that accompanies E′ is chosen so that there is a budget of k (weight) for
the variables of X′_2. (Note the compatibility of this with the above enforcements.)
The net effect of the macro change of variables together with the separated representation
mechanism is that a weight-k′ truth assignment can satisfy the enforcement product-of-sums
expression E′_0 only if it meets these conditions:
(1) The assignment restricted to the old variables X ⊆ X′_1 has weight k.
(2) The variables set true in (1) are separately indicated in the k blocks of variables of X′_2.
(3) Various interesting sets of variables set true in (1) are indicated by the macro variables.
(4) The auxiliary representations (2) and (3) of (1) are forced to be consistent with (1).
The following Claims can be taken as motivation for our description of E′_1, and are
important for the proof of correctness for this step. Note that the Claims are well-defined, since
X ⊆ X′ and because of the enforcements (1-4) above.
Claim 1. A truth assignment τ of weight k′ for the set of variables X′ satisfies a subexpression
E(i, j) of E if and only if ∃p ∈ {1, ..., k} such that ∀q ∈ {1, ..., |X|}:

¬s[p, q] + ∏_{S∈L_{i,j}(x[q])} ¬m[S]

evaluates to 1.
Proof. Suppose that E(i, j) is satisfied by τ. This implies that one of the products of size
k having one positive literal (constituting E(i, j)) is satisfied. Suppose the positive literal is
x[q_0]. Let p be the index of the block of X′_2 in which the truth of x[q_0] is represented by
τ(s[p, q_0]) = 1. In this block of variables of X′_2, for q ≠ q_0, ¬s[p, q] evaluates to 1. To
complete the proof of the Claim in this direction, we need only consider the case of q = q_0.
Our argument fails if there is some S ∈ L_{i,j}(x[q_0]) such that τ(m[S]) = 1. But by the
definition of the leaf sets, this implies that every k-product in the sum that is E(i, j) having
x[q_0] as the positive literal evaluates to 0, a contradiction.
Conversely, suppose p_0 satisfies the statement of the Claim. Because of the enforcements,
we know that there is an index q_0 such that s[p_0, q_0] evaluates to 1. Consider the k-product
terms in E(i, j) whose positive literal is x[q_0]. If none of these is satisfied, then by the definition
of the leaf sets, for some set S ⊆ X of size at most k, S has a nonempty intersection with
each set in L_{i,j}(x[q_0]) and every variable in S is assigned 1 by τ, and therefore τ(m[S]) = 1,
a contradiction. □
Claim 2. A truth assignment τ of weight k′ for the set of variables X′ satisfies a subexpression
F(i, j) of E if and only if ∀p ∈ {1, ..., k} ∃q ∈ {1, ..., |X|}:

s[p, q] · ∑_{S∈M_{i,j}(x[q])} m[S]

evaluates to 1.
Proof. The proof is the DeMorgan dual of the proof of Claim 1. □
It follows from Claim 1 that each subexpression E(i, j) can be replaced by:

∑_{p=1}^{k} ∏_{q=1}^{|X|} ( ¬s[p, q] + ∏_{S∈L_{i,j}(x[q])} ¬m[S] )

or, by distributing:

∑_{p=1}^{k} ∏_{q=1}^{|X|} ∏_{S∈L_{i,j}(x[q])} (¬s[p, q] + ¬m[S])
Similarly, it follows from Claim 2 that each subexpression F(i, j) can be replaced by:

∏_{p=1}^{k} ∑_{q=1}^{|X|} ∑_{S∈M_{i,j}(x[q])} (s[p, q] · m[S])
Step 3. Rearrangements.
We begin this step with an expression E = E_0 · E_1 where E_0 is the accumulating
product-of-sums enforcement expression and E_1 has the form:

E_1 = ∏_{i=1}^{n} ∑_{j=1}^{k} (E(i, j) + F(i, j))

where

E(i, j) = ∑_{p=1}^{k} ∏_{q=1}^{n} (y(i, j, p, q) + y′(i, j, p, q))

and

F(i, j) = ∏_{p=1}^{k} ∑_{q=1}^{n} (z(i, j, p, q) · z′(i, j, p, q))

where the y( , , , ) and z( , , , ) are literals over the set of variables X.
E_1 can be rewritten as:

E_1 = ∏_{i=1}^{n} ( ∑_{j=1}^{k} E(i, j) + ∑_{j=1}^{k} F(i, j) )

We have the following possibilities for further rewriting:
(1) The subexpression

∑_{j=1}^{k} E(i, j)

can be rewritten equivalently (with permitted blow-up) as a σ(k², n, 2)-expression by combining
the leading sums.
(2) The subexpression

∑_{j=1}^{k} F(i, j)

can similarly be rewritten as a π(k^k, kn, 2)-expression by replacing the leading k-sum of
k-products by an equivalent k^k-product of k-sums, and then combining the intermediate
sums.
By rewriting E_1 (over the same set of variables) according to (1) and (2), and padding
as necessary (including adjusting the parameter) we can put E_1 into the form:

E_1 = ∏_{i=1}^{n} ( ∑_{j=1}^{k} G(i, j) + ∏_{j=1}^{k} H(i, j) )

where G(i, j) is a π(n, 2)-expression and H(i, j) is a σ(n, 2)-expression.
By distributing between the k-sum and the k-product this can be rewritten as:
E1 = ∏_{i=1}^{n} ∏_{j=1}^{k} ( H(i, j) + ∑_{p=1}^{k} G(i, p) )
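The distribution performed here uses the simpler direction of the distributive law: a sum S added to a k-product equals the k-product of the sums (S + H_j). A minimal Python sketch (hypothetical helper names) verifying this exhaustively for k = 3:

```python
from itertools import product

# Boolean identity used in this step:
#   S + (H_1 * H_2 * ... * H_k)  ==  (S + H_1)(S + H_2)...(S + H_k)
def lhs(s, hs):
    # sum added to the product of all factors
    return s or all(hs)

def rhs(s, hs):
    # product of the factor-wise sums
    return all(s or h for h in hs)

# exhaustive check for k = 3
k = 3
for s in (False, True):
    for hs in product((False, True), repeat=k):
        assert lhs(s, hs) == rhs(s, hs)
```

Unlike the rewriting in (2), this direction incurs no blow-up in the number of distinct subexpressions: the sum S = ∑_p G(i, p) is simply repeated in each of the k factors.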
By combining the leading products and reindexing (adjusting n and k), we have:
E1 = ∏_{i=1}^{n} ( H′(i) + ∑_{j=1}^{k} G′(i, j) )
where for each i, H′(i) is a σ(n, 2)-expression and for each i, j, G′(i, j) is a π(n, 2)-expression.
Note that in some sense we are nearly done, since the above form is a π(n, k, n, c)-expression, except for the subexpressions H′(i). The goal of the next step is to replace the
H′(i) to achieve the form of a π(n, k, n, c)-expression to which Lemma 3 can be applied.
Step 5. The Clock Change of Variables
We begin this step with E = E0 ·E1 where E0 is the accumulating enforcement expression
and E1 has the form described at the end of Step 4. Our objective is to replace each
subexpression H 0 (i) with an equivalent σ(k, n, 2)-expression. Note that this will achieve our
goal of putting E1 in π(n, k, n, c) form. We employ here a new change of variables based on
exactly the same gadgetry and enforcement expression as the monotone change of variables
of Lemma 2, developed in [?]. We simply describe here how a different set of substitutions
can be made.
Consider that the set of old variables X is indexed from 0 to |X| − 1:

X = {x[i] : 0 ≤ i ≤ |X| − 1}
What the new variables X′ provide is (among other things) 2k disjoint sets (“blocks”) of
variables Y_1, ..., Y_{2k}, where for i = 1, ..., k we have:

Y_i = {y[i, j] : 0 ≤ j ≤ |X| − 1}

and for i = k + 1, ..., 2k we have:

Y_i = {z[i, j, k] : 0 ≤ j, k ≤ |X| − 1}
The enforcement expression for the change of variables insures that the following conditions hold in any satisfying truth assignment:
(1) The k sets of variables Y1 , ..., Yk provide a separated representation of a weight k truth
assignment to X, as in Step 2.
(2) The k sets of variables Yk+1 , ..., Y2k provide a representation of the gaps between the k
variables of X set to 1 in a weight k truth assignment, when viewing X as circularly ordered
modulo |X|, as explained in [?]. Thus for a truth assignment τ that satisfies the enforcement
expression, τ (z[i, j, k]) = 1 if and only if: τ (y[i, j − 1]) = 1 (“the ith choice of a variable of
X set to 1 is x[j − 1]”) and τ (y[i + 1, k + 1]) = 1 (“the (i + 1)th choice of a variable of X set
to 1 is x[k + 1]”) and for s = j, ..., k we have τ (x[s]) = 0 (“all of the variables from x[j] to
x[k] (mod |X|) are set to 0”).
Because of the enforcement conditions, it makes sense to say, e.g., that y[2, 3] implies
x[3] or that z[3, 5, 8] implies ¬x[6].
The validity of the substitutions we perform in this step is established by the following
Claim.
Claim 3. Let F be a σ(n, 2)-expression:

F = ∑_{i=1}^{n} ( l(i) · l′(i) )
where for i = 1, ..., n both l(i) and l′(i) are literals over the set of variables X ⊆ X′ according
to our change of variables. For 1 ≤ r, s ≤ 2k, r ≠ s, define the set of witness pairs

W(r, s) = {(u, v) : u ∈ Y_r, v ∈ Y_s, and ∃i such that u implies l(i) and v implies l′(i)}

Modify this definition for r = s by insisting that u = v. If τ is a truth assignment of weight k′
(where k′ is specified by the change of variables) that satisfies the enforcement expressions,
then τ satisfies F if and only if τ satisfies:
F′ = ∑_{r=1}^{2k} ∑_{s=1}^{2k} ∏_{(u,v) ∉ W(r,s)} ( ¬u + ¬v )

where u = v if r = s.
Proof. Suppose τ satisfies F, and let i be an index such that l(i) · l′(i) evaluates to 1. The
enforcement for the change of variables insures that there are indices r, s ∈ {1, ..., 2k} (with
possibly r = s) and variables u ∈ Y_r and v ∈ Y_s such that u implies l(i) and v implies l′(i).
Thus (u, v) ∈ W(r, s). If r ≠ s, then since there is only one variable set to 1 in each block,
(¬u′ + ¬v′) evaluates to 1 for every other pair (u′, v′) ≠ (u, v). Similarly if r = s.
Conversely, suppose τ satisfies F′. Let r and s be indices such that

∏_{(u,v) ∉ W(r,s)} ( ¬u + ¬v )

evaluates to 1. For convenience suppose r ≠ s. Let (u′, v′) be the unique pair of variables,
u′ ∈ Y_r and v′ ∈ Y_s, with τ(u′) = τ(v′) = 1. We reach a contradiction unless (u′, v′) ∈ W(r, s).
By the definition of the witness sets, this implies that F is satisfied. Similarly if r = s. □
After making the replacements justified by Claim 3, we are in a position to complete the
proof of the Theorem by an application of Lemma 3. There is a small complication in that
we are applying Lemma 3 to a product expression E′ = E′_0 · E′_1, where E′_0 is the accumulated
enforcement π(n, n)-expression and E′_1 is the π(n, k, n, 2)-expression that results from the
last change of variables. An examination of the proof of Lemma 3 shows that this is not a
problem; E′_1 can be reduced to a product-of-sums while carrying E′_0 along. □

5 Some Open Problems
The most obvious open problem arising from this work is whether W∗[t] = W[t] for t ≥ 3.
We conjecture that equality holds for all t, but the proof techniques developed in [?] for t = 1,
and here for t = 2, do not seem adequate to deal with layers of fan-in gates that are
“deep” in a circuit (cf. Lemma 1 of §4). At present, for example, we are unable to show
that the special case of (n, k, n, n)-WSat is in W[3]. A place to start might be the following
fairly natural graph problem.
Bounded Degree Simultaneous Domination
Instance: A graph H = (V, E) of maximum degree k, and a family of graphs G_v = (X, E_v),
indexed by v ∈ V, over the same “base set” of vertices X.
Parameter: k
Question: Is there a set of k vertices X′ ⊆ X such that the set V′ of vertices v ∈ V for which
X′ is a dominating set in G_v is a dominating set of H?
Is this problem complete for W [3]?
Acknowledgements. The authors thank Ken Regan for stimulating discussions and motivation to pursue this topic, and the second author thanks the Computer Science Department
at the University of Canterbury in New Zealand for their hospitality during the preparation
of this paper.
References
[ADF95] K.A. Abrahamson, R.G. Downey and M.R. Fellows. Fixed parameter tractability
and completeness IV: on completeness for W[P] and PSPACE analogs. Annals of Pure
and Applied Logic 73 (1995), 235–276.
[BDFHW95] H. Bodlaender, R.G. Downey, M.R. Fellows, M. Hallett and H.T. Wareham.
Parameterized complexity analysis in computational biology. Computer Applications in
the Biosciences 11 (1995), 49–57.
[BDFW95] H. Bodlaender, R.G. Downey, M.R. Fellows and H.T. Wareham. The parameterized complexity of the longest common subsequence problem. Theoretical Computer
Science A 147 (1995), 31–54.
[BF95] H. Bodlaender and M.R. Fellows. On the complexity of k-processor scheduling. Operations Research Letters 18 (1995), 93–98.
[BFH94] H. Bodlaender, M.R. Fellows and M. Hallett. Beyond NP-completeness for problems of bounded width: hardness for the W hierarchy. Proceedings of the ACM Symposium on the Theory of Computing (STOC) (1994), 449–458.
[CCDF94] L. Cai, J. Chen, R.G. Downey and M.R. Fellows. On the parameterized complexity of short computation and factorization. To appear in Arch. for Math. Logic.
[CW95] M. Cesati and H.T. Wareham. Parameterized complexity analysis in robot motion
planning. In: Proc. 25th IEEE Intl. Conf. on Systems, Man and Cybernetics.
[DEF93] R.G. Downey, P.A. Evans and M.R. Fellows. Parameterized learning complexity. Proceedings of the Sixth ACM Workshop on Computational Learning Theory (COLT’93), 51–57.
[DF95a] R.G. Downey and M.R. Fellows. Fixed-parameter tractability and completeness I:
basic theory. SIAM Journal of Computing 24 (1995), 873–921.
[DF95b] R.G. Downey and M.R. Fellows. Fixed-parameter tractability and completeness II:
completeness for W[1]. Theoretical Computer Science A 141 (1995), 109–131.
[DF95c] R.G. Downey and M.R. Fellows. Parameterized computational feasibility. Proceedings of the Second Cornell Workshop on Feasible Mathematics, Feasible Mathematics
II, P. Clote and J. Remmel (eds.), Birkhauser Boston (1995), 219–244.
[DFR96] R.G. Downey, M. Fellows and K. Regan. Parameterized circuit complexity and the
W hierarchy. To appear in Theoretical Computer Science A.
[DFT96] R.G. Downey, M.R. Fellows and U. Taylor. The complexity of relational database
queries and an improved characterization of W [1]. To appear in: Proc. DMTCS 96,
Springer-Verlag, Lecture Notes in Computer Science, 1996.
[FK93] M.R. Fellows and N. Koblitz. Fixed-parameter complexity and cryptography. Proceedings of the Tenth International Symposium on Applied Algebra, Algebraic Algorithms
and Error-Correcting Codes (AAECC’93), Springer-Verlag, Berlin, Lecture Notes in
Computer Science vol. 673 (1993), 121–131.
[GJ79] M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of
NP-completeness. W.H. Freeman, San Francisco, 1979.
[DFHKW94] R.G. Downey, M.R. Fellows, M.T. Hallett, B.M. Kapron, and H.T. Wareham.
The parameterized complexity of some problems in logic and linguistics. Proceedings
Symposium on Logical Foundations of Computer Science (LFCS), Springer-Verlag, Lecture Notes in Computer Science vol. 813 (1994), 89–100.