Sublinear Cuts are the Exception in BDF-GIRGs

Marc Kaufmann #
ETH Zurich, Switzerland

Raghu Raman Ravi #


ETH Zurich, Switzerland

Ulysse Schaller #
ETH Zurich, Switzerland

arXiv:2405.19369v1 [cs.SI] 27 May 2024

Abstract
The introduction of geometry has proven instrumental in the efforts towards more realistic models for
real-world networks. In Geometric Inhomogeneous Random Graphs (GIRGs), Euclidean geometry
induces clustering of the vertices, which is widely observed in networks in the wild. Euclidean
geometry in multiple dimensions, however, restricts proximity of vertices to those cases where vertices
are close in each coordinate. We introduce a large class of GIRG extensions, called BDF-GIRGs,
which capture arbitrary hierarchies of the coordinates within the distance function of the vertex
feature space. These distance functions have the potential to allow more realistic modeling of the
complex formation of social ties in real-world networks, where similarities between people lead to
connections. Here, similarity with respect to certain features, such as familial kinship or a shared
workplace, suffices for the formation of ties. It is known that, while many key properties of GIRGs,
such as log-log average distance and sparsity, are independent of the distance function, the Euclidean
metric induces small separators, i.e. sublinear cuts of the unique giant component in GIRGs, whereas
no such sublinear separators exist under the component-wise minimum distance. Building on work
of Lengler and Todorović, we give a complete classification for the existence of small separators in
BDF-GIRGs. We further show that BDF-GIRGs all fulfill a stochastic triangle inequality and thus
also exhibit clustering.

2012 ACM Subject Classification Theory of computation → Random network models; Theory of
computation → Generating random combinatorial structures; Mathematics of computing → Random
graphs

Keywords and phrases Real-world Networks, Geometric Inhomogeneous Random Graphs, Boolean
Distance Functions, Network Robustness, Small Separators, Sparse Cuts, Clustering

Funding Marc Kaufmann: Swiss National Science Foundation [grant number 200021_192079]
Ulysse Schaller: Swiss National Science Foundation [grant number 200021_192079]

1 Introduction

Bringing generative graph models closer to applications has driven network science since its
inception. This includes the design of models which capture structural properties widespread
among real networks. A prominent such model are Geometric Inhomogeneous Random
Graphs (GIRGs), which are known to be sparse, small worlds, and whose degrees follow a
power-law distribution [2]. Recent research has shown that they are well-suited to model
geometric network features such as the local clustering coefficient as well as closeness and
betweenness centrality of real networks [5]. In GIRGs, each vertex comes equipped with
a set of coordinates in a geometric ground space and a weight, both drawn independently.
Pairs of vertices u, v are then connected independently with a probability which depends on
the product of the vertex weights wu and wv , and decays as their distance in the ground
space increases. The role played by the distance function has been relatively unexplored.
While we know that many graph properties are invariant under this choice [2], the robustness
of a GIRG, that is, how much of the graph’s giant component can be removed before
it falls apart (into chunks of comparable size), crucially depends on it [12].

© Marc Kaufmann, Raghu Raman Ravi, and Ulysse Schaller; licensed under Creative Commons License CC-BY 4.0. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.

Questions of
robustness and fragility such as this one have been widely researched both for random graph
models and real-world networks, in the quest of properties that are universal across networks.
Analytical and numerical evidence suggests that there is a trade-off between robustness and
performance, such that both cannot be simultaneously optimized [15]. A further central
paradigm is the controversial "robust yet fragile nature of the internet" [6, 9]. Robustness
can be examined with respect to vertex or edge removal, and removals can be adversarial or
random. Understanding which removal strategies are most successful has developed into a
research direction of its own [1, 10, 16, 17].
In our present work, we consider robustness with respect to adversarial edge removal.
In GIRGs, choosing the Euclidean metric induces a graph which contains small separators,
that is, edge cuts of sublinear size which split the giant component into two connected
components of linear size. Using instead the so-called minimum-component distance function
(MCD), Lengler and Todorović have demonstrated that in dimensions d ≥ 2 (in dimension
d = 1, the two distance functions coincide) all sublinear separators disappear with
high probability [12]. This coincides with the picture that we encounter in Erdős–Rényi
random graphs at edge probability p = (1 + Ω(1))/n, as shown by Luczak and McDiarmid [13]. In
both cases, the proofs proceed by a two-round exposure of the edges in the graph. After
the first, larger, batch of edges is unveiled, the graph already contains a giant component -
and this component contains only few sparse cuts. In the second round, one can show that
enough edges are sprinkled between the vertices that are only sparsely connected, so that all
sublinear cuts will disappear. Compared to Erdős–Rényi graphs, several complications arise.
It is, in particular, not possible to sample the edges in two independent batches. Instead,
one can produce two "almost independent" batches of edges by first unveiling d − 1 of the
coordinates of the vertices and then unveiling their remaining dth coordinate. It is
unavoidable that the giant component grows substantially, as a constant fraction of the
total number of edges in the graph is unveiled this way, creating many new potential vertex
bipartitions in the giant component which may yield sublinear cuts. This problem can be
addressed by uncovering only coordinate information of bounded-weight vertices within the
giant component for the second batch. In their elegant proof, Lengler and Todorović then
show that edges incident to this vertex set do not increase the size of the giant by too much,
using an Azuma-Hoeffding type estimate.
As we will demonstrate, this procedure can be extended far beyond the case of MCD-
GIRGs. We first generalize GIRGs to accommodate a large class of underlying distance
functions, which we call Boolean Distance Functions (BDF). Intuitively speaking, this family
of distance functions covers arbitrary combinations of minima and maxima of subsets of
coordinates. For example, even if two individuals have exactly the same hobby (in this case,
the minimum component distance is zero), they may still be unlikely to know each other
if they live in different countries. A more refined notion could be, for example, the formula
dist := min{dist_work , max{dist_hobby , dist_residence }}, which encodes that two individuals are
likely to know each other if they either work in closely related fields, or if they have similar
hobbies and locations at the same time. This illustrates the potential of BDFs for building
more realistic models for real-world networks. We remark that the dimension d of the
underlying geometric space is always assumed to be a constant with respect to the number
of vertices n.
In many real-world instances, particularly social networks, connections are formed not
based on an averaged similarity, as the one captured by Euclidean distances which weight all
features in a symmetric way, but rather on some feature set that will naturally dominate.

In a social setting, one may think of an edge as encoding that two people, i.e. vertices, know
each other, and of closeness in a specific coordinate as encoding having a parent in common, or
more generally how many generations one needs to go back to find a common ancestor. Even
if all other features, e.g. education, age, wealth, domicile, interests, differ greatly, sharing
a parent will usually ensure that two people know each other. Sometimes single features
are not dominant enough, but being similar in a specific subset of k features will render the
other d − k features obsolete, i.e. these k features will suffice to guarantee a lower bound
on the connection probability. One such example could be a feature that encodes that two
people live in the same town with a population of around 100,000 (which does not suffice
for a connection) and another that both play table tennis in a club (which on its own is also not
sufficient). But it is plausible that playing in a table tennis club and living in the same town
jointly ensure a constant lower bound on the connection probability. This is in line with
the complex mechanisms underlying the observed formation of social ties in real networks,
where similarities with regard to certain features are dominant but the baseline provided
by similarity in this regard is combined with similarities or differences in other dimensions
[14]. Such more sophisticated situations can be captured with BDFs but are completely
impossible to encode in the Euclidean setting, and, due to the equivalence of norms on
finite-dimensional vector spaces, in any other metric induced by a norm.
The precise definition of Boolean Distance Functions can be found in Definition 4. In this
family of GIRGs, there is only one subfamily, which we call Single-Coordinate Outer-Max
(SCOM) GIRGs, that contains small separators. These collections of GIRGs exhibit a
distance function that can be expressed as the maximum of one singled-out coordinate and
an arbitrary "Boolean" combination of the other coordinates. This is our first main result:

▶ Theorem 1. Let G be a GIRG induced by a SCOM BDF κ acting on the d-dimensional
torus Td . Then, with probability 1 − o(1), the giant component of G has a separator of size
o(n).

We note that the separators can be described explicitly. One can consider two hyperplanes
that are perpendicular to the singled-out coordinate axis, which bisect the ground space
into two halves. It is then possible to show that a sublinear number of edges cuts across
the hyperplanes, connecting two linear-sized components, hence yielding the natural small
separators.
We then proceed to show - leveraging a modified version of the algorithm by Lengler
and Todorović [12] - that for all other BDF-GIRGs, a two-round exposure of the vertex
coordinates generates a graph where all potential sparse cuts in the giant component are
erased in the second round. The two key modifications concern first the partition of the
coordinates into batches. Here, the crucial ingredient is an upper bound on the distance
function by a minimum of two distance functions which involve each only a subset of (disjoint)
coordinates. This then enables an extension of the two-round exposure to arbitrary non-
SCOM BDF-GIRGs, by allowing an (under)estimate of the edges contributed by the second
exposure round. We also need to adjust the criteria for when edges are inserted. With these
additional insights we are able to prove our second main result:

▶ Theorem 2. Let G be a GIRG induced by a non-SCOM BDF κ acting on the d-dimensional
torus Td . Then, with probability 1 − o(1), the giant component of G has no separator of size
o(n).

Together, Theorems 1 and 2 provide a complete characterization of the occurrence of
sublinear separators in BDF-GIRGs. Finally, we show that all BDFs satisfy a stochastic
version of the triangle inequality. From this, it immediately follows that all BDF-GIRGs
exhibit clustering.

▶ Theorem 3. Let G be a GIRG induced by a BDF κ acting on the d-dimensional torus Td .
Then, with probability 1 − o(1), its clustering coefficient is constant, i.e. cc(G) = Θ(1).

In social networks, the clustering coefficient has a natural interpretation: what proportion
of the friends of a node are also friends with each other? In sparse networks, a pair of nodes
u, v having a common friend thus greatly boosts their chances of being directly connected. Recently,
it has further been shown that, in a different setting, namely for GIRGs whose distance
function is induced by an Lp -norm, the clustering coefficient can be used to estimate the
dimension of the underlying ground space [8]. The precise definition of cc(G) and the proof
of Theorem 3 can be found in Appendix A.6. In the rest of the paper, we will say that an
event happens with high probability (w.h.p.) if it happens with probability 1 − o(1).

2 Geometric Inhomogeneous Random Graphs and Underlying Geometric Spaces

In this section, we define Boolean Distance Functions (BDFs) as well as Geometric Inhomo-
geneous Random Graphs (GIRGs). At the end of the section we give some useful properties
of GIRGs and two more general lemmata that will be needed in the proofs.

2.1 Definition of Boolean Distance Functions


We start by introducing the notion of Boolean Distance Functions. They define a symmetric
and translation-invariant distance between any two points in the d-dimensional torus Td :=
Rd /Zd , and hence can be characterized by an even function κ : Td → R≥0 .
The distance dist(x, y) between two points x, y ∈ Td is then given by dist(x, y) := κ(x−y).
For the sake of simplicity, all the properties of the distance dist(·, ·) will be expressed in
terms of κ(·). Note that on the one-dimensional torus, the distance between two points
x, y ∈ [0, 1) is given by

|x − y|T := min(|x − y|, 1 − |x − y|).

A Boolean Distance Function is then defined recursively as follows.

▶ Definition 4. Let d ∈ N be a positive integer, let κ : Td → R≥0 , and let x =
(x1 , x2 , . . . , xd ) ∈ Td be an arbitrary point. Then κ is a Boolean Distance Function (BDF) if:
When d = 1, then κ(x) = |x|T .
When d ≥ 2, then there exists a non-empty proper subset S ⊊ [d] of coordinates such that

κ(x) = max(κ1 ((xi )i∈S ), κ2 ((xi )i∉S )) or κ(x) = min(κ1 ((xi )i∈S ), κ2 ((xi )i∉S )),

where κ1 : T|S| → R≥0 and κ2 : Td−|S| → R≥0 are Boolean Distance Functions.
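Definition 4 lends itself to a small computational sketch. The following Python snippet is our own illustration, not part of the paper: it represents a BDF as a binary expression tree, with integer leaves standing for the base case |x_i|_T and internal "max"/"min" nodes for the two recursive cases; the names `torus_dist` and `bdf_eval` and the example tree are ours.

```python
def torus_dist(a, b):
    # |x|_T on the one-dimensional torus: min(|x|, 1 - |x|) for x = a - b.
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def bdf_eval(tree, x, y):
    # A BDF as a binary expression tree (Definition 4):
    #   an int i       -> the base case |x_i|_T acting on coordinate i,
    #   ("max", l, r)  -> outer-max of two sub-BDFs on disjoint coordinates,
    #   ("min", l, r)  -> outer-min of two sub-BDFs on disjoint coordinates.
    if isinstance(tree, int):
        return torus_dist(x[tree], y[tree])
    op, left, right = tree
    combine = max if op == "max" else min
    return combine(bdf_eval(left, x, y), bdf_eval(right, x, y))

# The introduction's example min{dist_work, max{dist_hobby, dist_residence}},
# with coordinates 0 = work, 1 = hobby, 2 = residence.
example = ("min", 0, ("max", 1, 2))

x = (0.10, 0.20, 0.90)
y = (0.15, 0.25, 0.05)
# Coordinate distances: 0.05 (work), 0.05 (hobby), |0.85|_T = 0.15 (residence),
# so min(0.05, max(0.05, 0.15)) = 0.05.
print(bdf_eval(example, x, y))  # ≈ 0.05
```

Note that the max-norm and the MCD correspond to trees consisting solely of "max" nodes, respectively solely of "min" nodes.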

In the d ≥ 2 case, we call κ1 and κ2 the comprising functions of κ. Moreover, we say
that κ is an outer-max BDF if it is defined recursively as the maximum of two other BDFs,
and we say κ is an outer-min BDF if it is defined recursively as the minimum of two other
BDFs.

Since we only work with the torus geometry in this paper, we will simply write |x| instead of
|x|T .

Observe that the max-norm κ(x) := maxi∈[d] |xi | and the minimum component distance
(MCD) κ(x) := mini∈[d] |xi | are both BDFs.
Given a Boolean Distance Function κ, we write Bκr (x) := {y ∈ Td : κ(x − y) < r} for
the ball centered at point x of radius r with respect to the distance function induced by
κ. Moreover, we write Vκ (r) for the volume (or Lebesgue measure) of that ball. We will
sometimes drop the κ index and just write V (r) for that quantity when the underlying BDF
is clear from context.
We now define the notion of depth of a Boolean Distance Function. As we will see in
Proposition 11, Vκ (r) is characterized (as a function of r) by the depth of κ.

▶ Definition 5. Let κ : Td → R≥0 be a Boolean Distance Function. Then the depth of κ,
written D(κ), is defined recursively as follows:
When d = 1, then D(κ) := 1.
When d ≥ 2, then D(κ) := D(κ1 ) + D(κ2 ) if κ is outer-max, and D(κ) := min(D(κ1 ), D(κ2 ))
if κ is outer-min, where κ1 and κ2 are the comprising functions of κ.
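As a companion sketch (ours, using a hypothetical expression-tree encoding in which integer leaves are single torus coordinates and tuples ("max", ·, ·) / ("min", ·, ·) are the two recursive cases), the depth of Definition 5 is a short recursion:

```python
def bdf_depth(tree):
    # Depth per Definition 5: a single coordinate has depth 1; an outer-max
    # node adds the depths of its comprising functions, an outer-min node
    # takes their minimum.
    if isinstance(tree, int):
        return 1
    op, left, right = tree
    dl, dr = bdf_depth(left), bdf_depth(right)
    return dl + dr if op == "max" else min(dl, dr)

# The max-norm on T^3 has depth 3, the MCD has depth 1, and the introduction's
# example min{dist_work, max{dist_hobby, dist_residence}} has depth
# min(1, 1 + 1) = 1.
print(bdf_depth(("max", 0, ("max", 1, 2))))  # 3
print(bdf_depth(("min", 0, ("min", 1, 2))))  # 1
print(bdf_depth(("min", 0, ("max", 1, 2))))  # 1
```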
We now define the notion of Single-Coordinate Outer-Max (SCOM) Boolean Distance
Function. These are BDFs that can be written as the maximum of a single coordinate and
some other BDF acting on the remaining coordinates. As mentioned in the introduction,
this property characterizes whether BDF-GIRGs have small separators or not.
▶ Definition 6. Let κ : Td → R≥0 be a Boolean Distance Function. We say that κ is
Single-Coordinate Outer-Max (SCOM) if it can be written as

κ(x) = max(|xk |, κ0 ((xi )i̸=k ))

for some coordinate k ∈ [d] and some BDF κ0 : Td−1 → R≥0 . In dimension 1, we also say
that κ(x) = |x| is a SCOM BDF.
Note that this means that the unique BDF acting on d = 1 dimension is SCOM.
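Definition 6 admits a syntactic check, sketched below under our own hypothetical tree encoding (integer leaves for single coordinates, ("max", ·, ·) / ("min", ·, ·) nodes for the recursive cases): since the maximum is associative and commutative, we flatten nested outer-max nodes and look for a bare coordinate among the resulting top-level terms. This check is adequate when the singled-out coordinate, if any, appears among the flattened top-level maxima.

```python
def top_max_terms(tree):
    # Flatten nested outer-max nodes: max(a, max(b, c)) yields terms [a, b, c].
    if isinstance(tree, tuple) and tree[0] == "max":
        return top_max_terms(tree[1]) + top_max_terms(tree[2])
    return [tree]

def is_scom(tree):
    # SCOM per Definition 6: a single coordinate (the d = 1 case), or a BDF
    # that can be written as max(|x_k|, kappa_0(rest)), i.e. some flattened
    # top-level max term is a bare coordinate.
    return any(isinstance(t, int) for t in top_max_terms(tree))

print(is_scom(("max", 0, ("max", 1, 2))))  # True: the max-norm is SCOM
print(is_scom(("min", 0, ("min", 1, 2))))  # False: the MCD (d >= 2) is not
print(is_scom(("min", 0, ("max", 1, 2))))  # False: outer-min, no singled-out coordinate
```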

2.2 Definition of Geometric Inhomogeneous Random Graphs


We now define Geometric Inhomogeneous Random Graphs (GIRGs). This model was initially
introduced for the max-norm in [3]. Here, we use a version of this definition given in [12] that
we generalize to cover any Boolean Distance Function as the underlying distance. Throughout
the whole paper, we will consider undirected graphs with vertex set denoted by V := [n] and
edge set denoted by E. Before we give the definition of GIRGs, we need to introduce the
concept of (deterministically) power-law distributed weights. Intuitively, following a power
law means that the proportion of vertices at a given weight w decays as a polynomial in w.
▶ Definition 7. Let (wv )v∈V be a sequence of weights associated with the vertex set, and let
V≥w := {v ∈ V | wv ≥ w} denote the set of vertices with weight at least w. We say that this
sequence is power-law distributed with exponent β if wv ≥ 1 for all v ∈ V and if the two
following conditions are satisfied:
1. There exists some w̄ = w̄(n) ≥ n^(ω(1/ log log n)) such that for all constant η > 0 there is a
constant c1 > 0 such that for all 1 ≤ w ≤ w̄,

   |V≥w | ≥ c1 · n / w^(β−1+η). (PL1)

2. For all constant η > 0 there is a constant c2 > 0 such that for all w ≥ 1,

   |V≥w | ≤ c2 · n / w^(β−1−η). (PL2)
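One concrete weight sequence satisfying Definition 7 is the deterministic choice w_v = (n/v)^(1/(β−1)) for v = 1, ..., n, a standard instantiation in the GIRG literature rather than something prescribed by the paper. The sketch below, with helper names of our own choosing, checks its tail counts against the (PL1)/(PL2) bounds:

```python
def powerlaw_weights(n, beta):
    # Our illustrative instantiation: w_v = (n / v)^(1/(beta - 1)).
    # Definition 7 only demands the tail bounds (PL1)/(PL2).
    return [(n / v) ** (1.0 / (beta - 1)) for v in range(1, n + 1)]

def tail_count(w, threshold):
    # |V_{>= w}|: number of vertices with weight at least `threshold`.
    return sum(1 for x in w if x >= threshold)

n, beta = 100_000, 2.5
w = powerlaw_weights(n, beta)

# For this sequence, w_v >= t exactly when v <= n / t^(beta - 1), so the
# tail count is n / t^(beta - 1) up to rounding, matching (PL1) and (PL2).
for t in (2.0, 5.0, 10.0):
    print(t, tail_count(w, t), n / t ** (beta - 1))
```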

Now, we are ready to define κ-GIRGs.

▶ Definition 8. Let β ∈ (2, 3), α > 1, d ∈ N, and let κ : Td → R≥0 be a Boolean Distance
Function. Let (wv )v∈V be a sequence of weights that is power-law distributed with exponent
β. A κ-Geometric Inhomogeneous Random Graph (κ-GIRG) is obtained by the following
two-step procedure:
1. Every vertex v ∈ V draws independently and uniformly at random a position xv in the
torus Td .
2. For every two distinct vertices u, v ∈ V, we add an edge between u and v in E independently
with some probability puv satisfying
cL · min{ wu wv / (n · Vκ (κ(xu − xv ))), 1 }^α ≤ puv ≤ cU · min{ wu wv / (n · Vκ (κ(xu − xv ))), 1 }^α (EP)

for some constants cU ≥ cL > 0.
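Definition 8 translates directly into a (quadratic-time) sampler. The sketch below is our own minimal instantiation, not the paper's construction: it fixes κ to the MCD, approximates Vκ(r) by r (the constants are absorbed into c, cf. the Θ(r^D) behavior with D(κ) = 1), and uses the deterministic power-law weights w_v = (n/v)^(1/(β−1)); all function names are ours.

```python
import random

def torus_dist(a, b):
    # |x|_T on the one-dimensional torus.
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def mcd(x, y):
    # Minimum-component distance, a BDF of depth 1.
    return min(torus_dist(a, b) for a, b in zip(x, y))

def sample_girg(n, d, beta, alpha, c=1.0, seed=0):
    # Hedged sketch of Definition 8 with kappa = MCD and V(r) ~ r.
    rng = random.Random(seed)
    w = [(n / v) ** (1.0 / (beta - 1)) for v in range(1, n + 1)]
    pos = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            r = max(mcd(pos[u], pos[v]), 1e-12)  # avoid division by zero
            p = c * min(1.0, w[u] * w[v] / (n * r)) ** alpha
            if rng.random() < p:
                edges.append((u, v))
    return edges

edges = sample_girg(n=300, d=2, beta=2.5, alpha=1.5)
print(len(edges))
```

Swapping `mcd` for any other BDF evaluator, together with the matching volume exponent D(κ), yields the corresponding κ-GIRG sketch.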

2.3 Some useful GIRG properties


We now recall some known properties of GIRGs that are crucially used for the proofs in later
sections. We first remark that [2] uses a very general definition of GIRG, and in particular
our definition of κ-GIRG (Definition 8) fits in their geometric setting. The first property is
the existence and uniqueness of a connected component of linear size (i.e. containing Ω(n)
vertices), which we call the giant component of the graph.

▶ Theorem 9 (Theorems 5.9 and 7.3 in [2]). Let κ : Td → R≥0 be a Boolean Distance
Function and let G = (V, E) be a κ-GIRG. There exists a constant smax > 0 such that w.h.p.
G has a connected component of size at least smax n. Additionally, w.h.p. all other connected
components are at most poly-logarithmic in size, i.e., contain at most logO(1) n vertices.

Another very useful result is the characterization of the degree distribution of GIRGs.
The following lemma tells us that with high probability the degree distribution follows a
power-law with the same exponent as the weight sequence.

▶ Lemma 10 (Theorems 6.3 and 7.3 in [2]). Let β ∈ (2, 3) and let κ : Td → R≥0 be a Boolean
Distance Function. Let G = (V, E) be a κ-GIRG whose weight sequence (wv )v∈V follows a
power-law with exponent β. Then the following holds:
1. For all η > 0, there exists c3 > 0 such that w.h.p. for all 1 ≤ w ≤ w̄ (where w̄ is the same
as in Definition 7),

   |{v ∈ V : deg(v) ≥ w}| ≥ c3 · n / w^(β−1+η).

2. For all η > 0, there exists c4 > 0 such that w.h.p. for all w ≥ 1,

   |{v ∈ V : deg(v) ≥ w}| ≤ c4 · n / w^(β−1−η).

3. There exists C > 0 such that for all v ∈ V we have E[deg(v)] ≤ Cwv , and moreover with
probability 1 − n−ω(1) we have deg(v) ≤ C · (wv + log2 n) for all v ∈ V.

3 Properties of Boolean Distance Functions


In this section, we provide a few propositions about Boolean Distance Functions together with
the main proof ideas. The complete proofs are given in Appendix A.2. These propositions
will be used for deriving our main results, but we believe they are also interesting results on
their own. Remember that Vκ (r) denotes the volume of a ball of radius r, with the distances
measured with respect to κ. We start by analyzing how Vκ (r) behaves as r → 0. When
κ : Td → R≥0 is the max-norm we have Vκ (r) = Θ(r^d), while for the MCD κ(x) = min_{i∈[d]} |xi |
we have Vκ (r) = Θ(r). The following proposition generalizes this to arbitrary BDFs.

▶ Proposition 11. Let κ : Td → R≥0 be a Boolean Distance Function of depth D(κ) (see
Definition 5). Then Vκ (r) = Θ(r^D(κ)) as r → 0.

The proof proceeds by induction on the dimension d. The induction step consists of
writing Vκ (r) as an integral and, using the recursive structure of the BDF κ, writing this
integral as the product of two integrals, splitting into two different cases depending on
whether κ is outer-max or outer-min.
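Proposition 11 can be sanity-checked numerically. The Monte Carlo sketch below (our own, with hypothetical helper names) estimates Vκ(r) for the max-norm and the MCD on T^3 and checks that halving r divides the ball volume by roughly 2^D(κ); for the MCD we use small radii, since the Θ(r^D) behavior is asymptotic as r → 0.

```python
import random

def torus_abs(x):
    # |x|_T for x drawn uniformly from [0, 1).
    return min(x, 1.0 - x)

def volume_estimate(kappa, d, r, samples=200_000, seed=0):
    # Monte Carlo estimate of V_kappa(r): the probability that a uniformly
    # random point of T^d lies within kappa-distance r of the origin.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        coord_dists = [torus_abs(rng.random()) for _ in range(d)]
        if kappa(coord_dists) < r:
            hits += 1
    return hits / samples

maxnorm = lambda c: max(c)  # depth d
mcd = lambda c: min(c)      # depth 1

# Halving r should divide the volume by roughly 2^D(kappa).
v1, v2 = volume_estimate(maxnorm, 3, 0.2), volume_estimate(maxnorm, 3, 0.1)
print(v1 / v2)  # ≈ 8, since D = 3 for the max-norm on T^3
u1, u2 = volume_estimate(mcd, 3, 0.02), volume_estimate(mcd, 3, 0.01)
print(u1 / u2)  # ≈ 2, since D = 1 for the MCD
```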
In the next proposition, we upper-bound a BDF κ by a max-norm over a subset of the
coordinates that has size exactly D(κ).

▶ Proposition 12. Let κ : Td → R≥0 be a Boolean Distance Function. Then there exists a
subset S ⊆ [d] of the coordinates with |S| = D(κ) such that κ(x) ≤ maxi∈S |xi | for all x ∈ Td .

The proof is again by induction on the dimension. For the induction step, we have (by
the induction hypothesis) two subsets S1 , S2 ⊆ [d] such that the max-norm on these subsets
of coordinates is an upper bound for the comprising functions of κ: taking S := S1 ∪ S2 if
κ is outer-max, respectively the set of smallest cardinality among S1 , S2 if κ is outer-min,
completes the induction step.
The next proposition allows us to upper-bound any non-SCOM outer-max BDF by an
outer-min BDF acting on the same set of coordinates.

▶ Proposition 13. Let κ : Td → R≥0 be a non-SCOM outer-max Boolean Distance Function.


Then there exists an outer-min BDF κ′ : Td → R≥0 such that D(κ) = D(κ′ ) and κ(x) ≤ κ′ (x)
for all x ∈ Td .

The proof also proceeds by induction on d, with the key inequality being

max(min(x1 , x2 ), min(x3 , x4 )) ≤ min(max(x1 , x3 ), max(x2 , x4 )).
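This max/min exchange inequality is easy to verify on random inputs; a quick self-contained check (our own sketch):

```python
import random

rng = random.Random(42)
# Key inequality in the proof of Proposition 13:
#   max(min(x1, x2), min(x3, x4)) <= min(max(x1, x3), max(x2, x4)).
# It holds because each of min(x1, x2) and min(x3, x4) is bounded above by
# both max(x1, x3) and max(x2, x4).
for _ in range(10_000):
    x1, x2, x3, x4 = (rng.random() for _ in range(4))
    lhs = max(min(x1, x2), min(x3, x4))
    rhs = min(max(x1, x3), max(x2, x4))
    assert lhs <= rhs
print("inequality holds on 10,000 random samples")
```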

Combining the three previous propositions, we get the following result, which is one of
the main building blocks for the proof of Theorem 2.

▶ Proposition 14. Let κ : Td → R≥0 be a non-SCOM Boolean Distance Function. There exist
disjoint subsets S1 , S2 ⊆ [d] with min(|S1 |, |S2 |) = D(κ) such that, with κ′ ((xi )i∈S1 ∪S2 ) :=
min(maxi∈S1 |xi |, maxi∈S2 |xi |), it holds for all x ∈ Td that κ(x) ≤ κ′ ((xi )i∈S1 ∪S2 ). Moreover,
κ′ is a BDF and there exists a constant c > 0 such that Vκ (r) ≤ cVκ′ (r) for all r ≥ 0.

Proof. If κ is outer-max, let κ′′ be the outer-min BDF given by Proposition 13 with κ ≤ κ′′
and D(κ) = D(κ′′ ). If κ is outer-min, we set κ′′ := κ (and notice that we also have κ ≤ κ′′ and
D(κ) = D(κ′′ ) in that case). Let κ1 , κ2 be the comprising functions of κ′′ . Applying Proposi-
tion 12 to κ1 and κ2 we get disjoint subsets S1 , S2 ⊆ [d] with |Sk | = D(κk ) such that κk (x) ≤
maxi∈Sk |xi | for k = 1, 2. Let κ′ ((xi )i∈S1 ∪S2 ) := min(maxi∈S1 |xi |, maxi∈S2 |xi |). Then
κ(x) ≤ κ′′ (x) = min(κ1 (x), κ2 (x)) ≤ min(maxi∈S1 |xi |, maxi∈S2 |xi |) = κ′ ((xi )i∈S1 ∪S2 ) for
all x ∈ Td . Moreover, since κ′′ is outer-min, we have D(κ) = D(κ′′ ) = min(D(κ1 ), D(κ2 )) =
min(|S1 |, |S2 |) as desired. Finally since D(κ′ ) = min(|S1 |, |S2 |) = D(κ) we have that
Vκ (r), Vκ′ (r) ∈ Θ(r^D(κ)), and hence there exists c > 0 such that Vκ (r) ≤ cVκ′ (r) for
all r ≥ 0. ◀

4 Small Separators in Single-Coordinate-Outer-Max GIRGs


Our next main result states that GIRGs induced by BDFs that are SCOM have natural
sub-linear separators, which, as it turns out, run along the singled-out coordinate axis. The
key proof idea is to partition the ground space along the singled-out coordinate axis into
two half-spaces of equal volume, ensuring that each half-space contains a linear number of
the vertices of the giant. We then upper-bound the number of edges crossing the separating
hyperplanes. Each pair of vertices will contribute an edge intersecting one of the two
hyperplanes if and only if the vertices lie in different half-spaces and are connected by an edge.
The joint probability of this event can be computed using the law of iterated probability.
One can show that the number of crossing edges is o(n) with high probability.
Thus the two subgraphs can be disconnected through the removal of the o(n) crossing
edges, yielding Theorem 15.¹ We note here that the max-norm is also a SCOM BDF, and
hence our results extend the result proved for max-norms in [3].
▶ Theorem 15. Let κ : Td → R≥0 be a SCOM BDF and let G = (V, E) be a κ-GIRG. Then,
w.h.p. there exists a subset of edges S ⊂ E with |S| = o(n) such that G ′ := (V, E \ S) has two
connected components of size Θ(n).

5 Robustness of non-Single-Coordinate-Outer-Max GIRGs


In this section, we consider GIRGs induced by non-SCOM BDFs and show that they are
robust, i.e., they do not contain separators of sub-linear size in the giant component. Our
goal is, more precisely, to prove the following theorem:
▶ Theorem 16. Let κ : Td → R≥0 be a non-SCOM BDF and let G = (V, E) be a κ-GIRG.
Then, w.h.p. for any subset of edges S ⊆ E such that G ′ := (V, E \ S) has two connected
components of size Θ(n), it holds that |S| = Ω(n).
First, we state a lemma that bounds the number of small cuts in connected graphs [13].
This lemma is the inspiration for the proof of robustness of MCD-GIRGs [12] and will also
be used in our proof of Theorem 16.
▶ Lemma 17 (Lemma 7 in [13]). For any ε > 0 there exists η0 (ε) > 0 and n0 such that for
all n ≥ n0 , and for all connected graphs G with n vertices, there are at most (1 + ε)n many
bipartitions of G with at most η0 n cross-edges.
We will also need the following notion of a sparse cut.
▶ Definition 18. For a graph G = (V, E) and constants δ, η > 0, a (δ, η)-cut is a partition of
V into two sets of size at least δ|V| such that there are at most η|V| cross-edges, i.e. edges
that have one endpoint in each of the sets.
The proof strategy we follow is almost the same as in [12], with some small but crucial
modifications.

¹ The full proof can be found in Appendix A.3.

5.1 Edge Insertion Criteria


To extend the two-round edge exposure procedure from MCD-GIRGs to general κ-GIRGs,
we will use the upper bounds in terms of an outer-min distance function for the BDF at
hand that were established in Section 3. Our overarching goal will be to expose at first
only a subset of the coordinates of each vertex, insert some of the edges based on this
partial information, then reveal the remaining coordinates which will lead to the creation
of additional edges. More concretely, our aim will be to construct for each pair of vertices
u, v a pair of independent random variables (Y¹uv , Y²uv ), inserting edges in the first round if
Y¹uv < puv and in the second round if Y²uv < puv . The careful modification of the distribution
of these two random variables will allow us to emulate in two rounds the one-round sampling
procedure where an edge is inserted if Yuv < puv where Yuv would be drawn uniformly at
random from [0, 1].²
Consider therefore a κ0 -GIRG for some non-SCOM BDF κ0 . By Proposition 14, we can
find disjoint subsets S1 , S2 ⊆ [d], with D(κ0 ) = min(|S1 |, |S2 |) such that the outer-min BDF
given by κ((xi )i∈S1 ∪S2 ) := min(maxi∈S1 |xi |, maxi∈S2 |xi |) =: min(∥(x)S1 ∥∞ , ∥(x)S2 ∥∞ ) is
an upper bound for κ0 (x). This implies in particular that there exists a κ-GIRG for some
sufficiently small choice of constants c′U , c′L such that for every u, v ∈ V, pκ (u, v) ≤ pκ0 (u, v),
where the former denote the connection probabilities of vertices u and v in a κ-GIRG
respectively in a κ0 -GIRG. Without loss of generality, we will assume that S1 ⊆ {1, .., d − m}
and S2 = {d − m + 1, .., d}. Additionally, we can assume that m = |S2 | ≤ |S1 |, which also
means that D := D(κ0 ) = D(κ) = m. We will later modify the original algorithm given
in [12] by splitting the sampling process into the first d − m and the last m coordinates
(in the original approach by Lengler and Todorović, the split was between the first d − 1
coordinates and the last coordinate).
Let Yuv be a uniform random variable over the interval [0, 1], and define two i.i.d. random
variables Y¹uv , Y²uv distributed as follows:

P[Y¹uv < c] = P[Y²uv < c] = 1 − √(1 − c).

Then the following also hold:

c/2 ≤ P[Y¹uv < c] = P[Y²uv < c] ≤ c,
P[min(Y¹uv , Y²uv ) < c] = P[Yuv < c] = c.
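The distribution P[Y < c] = 1 − √(1 − c) can be sampled by inverse transform, and a quick Monte Carlo check (our own sketch, with a hypothetical helper name) confirms the stated identities: the analytic bounds c/2 ≤ 1 − √(1 − c) ≤ c, and that the minimum of two such i.i.d. variables is uniform on [0, 1], as required for the two-round exposure to emulate one-round sampling.

```python
import math
import random

def sample_Y(rng):
    # Inverse-transform sampling: if U ~ Uniform[0, 1], then 1 - (1 - U)^2
    # has CDF P[Y < c] = 1 - sqrt(1 - c) on [0, 1].
    return 1 - (1 - rng.random()) ** 2

# The analytic bounds c/2 <= 1 - sqrt(1 - c) <= c hold for all c in [0, 1].
for c in (0.1, 0.3, 0.5, 0.9):
    p = 1 - math.sqrt(1 - c)
    assert c / 2 <= p <= c

# Empirically, min(Y^1, Y^2) behaves like a Uniform[0, 1] variable:
# P[min(Y^1, Y^2) < c] = 1 - (sqrt(1 - c))^2 = c.
rng = random.Random(1)
n, c = 200_000, 0.3
below = sum(min(sample_Y(rng), sample_Y(rng)) < c for _ in range(n))
print(below / n)  # ≈ 0.3
```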
Let

puv (c, x) := c · min{1, (wu wv / (n|x|^D))^α},

and define the Edge Insertion Criterion (EIC) as

Yuv := min(Y¹uv , Y²uv ) < puv (cL , κ0 (xu − xv )). (EIC)

By our choice of c′L , we have the following lower bound:

puv (cL , κ0 (xu − xv )) ≥ puv (c′L , κ(xu − xv )) = max(puv (c′L , ∥(xu − xv )S1 ∥∞ ), puv (c′L , ∥(xu − xv )S2 ∥∞ )).

Thus, we obtain the two following sufficient conditions for edge insertion:

Y¹uv < puv (c′L , ∥(xu − xv )S1 ∥∞ ), (LB1)
Y²uv < puv (c′L , ∥(xu − xv )S2 ∥∞ ). (LB2)

² We refer the reader to [12] for a detailed discussion.

Crucially notice that if we insert the edges according to (LB1) first, we obtain a GIRG
(with respect to the max-norm on T|S1 | ), which means that it satisfies all the properties
mentioned in Section 2.3. In particular, after the insertion of the first batch of edges, our
graph will already contain a (unique) giant component.
We now proceed to describe an algorithm that is central in proving the robustness of a
κ0 -GIRG. It follows closely the algorithm used in [12], but we need to modify it to make the
proof work in our more general setting. The main modifications are as follows. Firstly, we
use the updated (EIC), (LB1) and (LB2). Secondly, instead of first sampling the first d − 1
coordinates and then the last coordinate, we first sample the first d − m coordinates and
then sample the last m coordinates.

5.2 Sampling algorithm


In this section we describe the procedure we use for uncovering the edges of the κ0 -GIRG.
We fix some constant δ ∈ (0, 1). The algorithm can be decomposed into 6 phases.

Phase 1 We start by sampling Y¹uv for all pairs of vertices u, v ∈ V. Additionally, for
every vertex u ∈ V, we sample the first d − m coordinates of its position, (xui )1≤i≤d−m ,
independently and uniformly at random from [0, 1]. This is sufficient to determine the
graph induced by the edge insertion criterion (LB1), which we refer to as G1 . By The-
orem 9, G1 has a unique linear-sized component (the giant) with at least smax n vertices
w.h.p.; we will assume that this holds for the rest of the proof. We denote this giant by K¹max .

Phase 2 From (PL2) we can obtain a constant B′ such that at least half the vertices of K¹max
have a weight less than B′ . This can be done by setting η = 1 and B′ > (2c2 /smax )^(1/(β−2)).
Next, we sub-sample F′ ⊆ V by including every vertex (not just those in the giant) with
weight less than B′ into F′ independently with probability 4f /smax for some constant
0 < f < (smax /12) · min{δ, smax } to be determined later (in Lemma 30).

Phase 3 Now set F := F′ ∩ K¹max . It is straightforward to see that by the choice of
parameters it holds that 2f n ≤ E[|F |] ≤ E[|F′ |] ≤ 4f n/smax . Additionally, by Chernoff’s
bounds (Lemma 23), we have that f n ≤ |F | ≤ |F′ | ≤ 6f n/smax w.h.p.; we will assume that
this holds for the rest of the proof.

The final three phases are split up into n steps in total (one step for each vertex). In
each step, we draw the last m coordinates of some vertex and potentially add some incident
edges according to the edge insertion criterion (EIC). The order in which the vertices are
treated is as follows: first the vertices that are not in K^1_max (Phase 4), then the remaining
vertices that are not in F (Phase 5), and finally the vertices that are in F (Phase 6). Thus,
if we order the vertices as u1, u2, . . . , un in order to have K^1_max = {ui | |V \ K^1_max| < i ≤ n}
and F = {ui | |V \ F| < i ≤ n}, then the kth step can be described as follows:
Draw (x_{ki})_{d−m<i≤d} each independently and uniformly at random from [0, 1]. (Note that
we denote x_{u_k} by x_k.)
For all 1 ≤ j < k, sample Y^2_{jk} independently.
For all 1 ≤ j < k, add an edge between uj and uk if (EIC) is satisfied.

Phase 4 Perform steps 1 to |V \ K^1_max|.

Phase 5 Perform steps |V \ K^1_max| + 1 to |V \ F|.

Phase 6 Perform steps |V \ F| + 1 to n.

We denote the resulting graph after Phase 2 + i for i = 2, 3, 4 by Gi and the corresponding
giant component that contains K^1_max by K^i_max. Our aim in the remainder will be to show
that the last phase destroys all sublinear cuts which were present in the giant component of
G3, while adding only a small number of vertices.
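The phase structure above can be sketched schematically in code. The snippet below is our own simplified illustration, not the paper's algorithm: it takes the giant component and the set F as given, replaces (EIC)/(LB1)/(LB2) by an arbitrary toy edge rule, and only demonstrates the processing order of Phases 4-6 and the step-wise uncovering of the last m coordinates.

```python
import random

def processing_order(n, giant, F):
    """Return the vertex order u_1, ..., u_n used in Phases 4-6:
    vertices outside the giant first (Phase 4), then giant vertices
    outside F (Phase 5), and finally the vertices of F (Phase 6).
    `giant` and `F` are sets of vertex indices with F a subset of giant."""
    phase4 = [v for v in range(n) if v not in giant]
    phase5 = [v for v in range(n) if v in giant and v not in F]
    phase6 = [v for v in range(n) if v in F]
    return phase4 + phase5 + phase6

def uncover_last_coordinates(order, m, edge_rule):
    """Steps of Phases 4-6: in step k, draw the last m coordinates of
    u_k and test each earlier vertex u_j against a toy edge criterion
    (a stand-in for (EIC); `edge_rule` is a hypothetical placeholder)."""
    coords, edges = {}, []
    for k, u in enumerate(order):
        coords[u] = [random.random() for _ in range(m)]  # last m coordinates
        for j in range(k):
            v = order[j]
            if edge_rule(coords[u], coords[v]):
                edges.append((min(u, v), max(u, v)))
    return edges
```

With an always-true edge rule, every pair is tested exactly once, so the sketch produces all n-choose-2 candidate edges in the stated order.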

5.3 Proof of Theorem 16


We are now ready to prove the main result of this section. The proof is divided into two
core steps: first, we show that K^3_max has no small cuts in G4. Then we show that the size
of K^4_max is not much greater than that of K^3_max. Combining these results, we can show
that any small cut in K^4_max would induce a small cut in K^3_max, and hence also exclude the
possibility of small cuts in K^4_max.
First, we observe, as recorded in Lemma 19,³ that the neighbors of a given vertex cannot be
too concentrated in a small region. To help quantify this we define the notion of cells. For
some M > 0, we partition [0, 1] into M sub-intervals of equal length I_j := [j/M, (j + 1)/M]
for 0 ≤ j < M. We define an M-cell (or just cell if M is clear from context) to be a region of
the form T^{d−m} × I_{j_{d−m+1}} × I_{j_{d−m+2}} × . . . × I_{j_d}, where 0 ≤ j_{d−m+1}, j_{d−m+2}, . . . , j_d < M. It
is easy to see that the entire space Td is partitioned into M^m cells. We call such a partition
an M-cell partition. We also remark that the cell which contains a given vertex v can be
completely characterized by the last m coordinates of v.
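Since the cell of a vertex is determined by its last m coordinates, cell membership is a direct computation. The following small helper is our own illustration (function and variable names are ours, not the paper's):

```python
def cell_index(x, d, m, M):
    """Map a point x in [0,1)^d to its M-cell, identified by the tuple of
    sub-interval indices of the last m coordinates; there are M**m cells.
    The min(..., M-1) guards the boundary point x_i = 1.0."""
    return tuple(min(int(x[i] * M), M - 1) for i in range(d - m, d))
```

For example, with d = 3, m = 2 and M = 4, the first coordinate is irrelevant, and enumerating one point per sub-interval pair of the last two coordinates produces exactly M^m = 16 distinct cells.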
Lemma 19 can now be derived by observing that a set S of rn cells has volume rn · M^{−m} ∈
[rl2^{−m}, rl] and thus contains an expected number of vertices in [rnl2^{−m}, rnl]. Using the strong
Chernoff bound (Lemma 24) for a small enough choice of r guarantees that, even after a
union bound over the possible choices of S, with high probability no such set contains at
least δn/2 vertices.
▶ Lemma 19. Let M = ⌈(n/l)^{1/m}⌉ for some constant l ∈ (0, 1], and consider the M-cell
partition of Td. Then, for every δ ∈ (0, 1) and every l, there exists a constant r(δ, l, m) > 0
such that the following property holds: with probability 1 − e^{−Θ(n)}, there is no set S of rn
cells (of the considered partition) such that there are at least δn/2 vertices in the cells of S.
Intuitively, Lemma 19 says that since the positions of vertices are randomly chosen, they
must be more or less spread out in the last m coordinates. Thus, we must expect that when
we uncover vertices later in the ordering, they are close enough to previously uncovered
vertices to have a good chance of edge formation. Indeed, one can show the following useful
corollary, which implies a constant lower bound on edge formation in every step of phases 5
and 6.
▶ Corollary 20. Let δ ∈ (0, 1) be a constant. There is a constant P > 0 such that with high
probability, the following holds for each step k > δn/2 of the algorithm. Let Vk := {ui ∈ V |
1 ≤ i < k}. For each subset A ⊆ Vk of size at least δn/2, the probability that step k produces
an edge from uk to A due to (LB2) is at least P, i.e.

P[∃v ∈ A with u_k v ∈ E] ≥ P.
The remainder of the proof now closely models that of Theorem 3.2 in [12].⁴ We first
prove, using Lemma 17, which restricts the number of sparse bipartitions in a connected
graph, that there are no small cuts in K^3_max. More precisely, there is a constant η > 0 such
that w.h.p. the induced subgraph G4[K^3_max] has no (δ, η)-cut (Lemma 26). Then, we can
show that in the last phase the giant component does not grow too much, by demonstrating
that any "newly added" vertex of K^4_max - which contains K^3_max entirely - would have to be
either in a large non-giant component of G3, or in a non-giant component that contains a
vertex of large weight, or in a small component consisting only of small-weight vertices that
is nonetheless added to K^4_max during this last phase. But each of these categories consists of
at most δn many vertices (Lemmata 27, 28, 30), which implies that |K^4_max| ≤ |K^3_max| + 3δn
(Lemma 31). This is the key insight we need to obtain the next result, from which the main
theorem of this section follows.

³ A full proof of Lemma 19 and the derivation of Corollary 20 from it can be found in Appendix A.4.
⁴ Details can be found in Appendix A.5.
▶ Theorem 21. Let δ ∈ (0, 1). Then there exists a constant η > 0 such that w.h.p. K^4_max
has no (4δ, η)-cuts.

Proof. Assume that there is a (4δ, η)-cut in K^4_max. By Lemma 31, such a cut would induce
a (δ, η)-cut in K^3_max. But by Lemma 26, such a cut cannot exist w.h.p., and hence we get
the desired contradiction. ◀

Since any κ-GIRG w.h.p. has a unique giant component, the above theorem is equivalent
to Theorem 16.

References
1 Michele Bellingeri, Zhe-Ming Lu, Davide Cassi, and Francesco Scotognella. Analyses of the
response of a complex weighted network to nodes removal strategies considering links weight:
The case of the Beijing urban road system. Modern Physics Letters B, 32(05):1850067, 2018.
2 Karl Bringmann, Ralph Keusch, and Johannes Lengler. Average distance in a general class of
scale-free networks with underlying geometry. arXiv preprint arXiv:1602.05712, 2016.
3 Karl Bringmann, Ralph Keusch, and Johannes Lengler. Geometric inhomogeneous random
graphs. Theoretical Computer Science, 760:35–54, 2019.
4 Fan RK Chung and Linyuan Lu. Complex graphs and networks. American Mathematical Soc.,
2006.
5 Benjamin Dayan, Marc Kaufmann, and Ulysse Schaller. Expressivity of Geometric Inhomogeneous
Random Graphs—Metric and Non-metric, pages 85–100. 04 2024. doi:
10.1007/978-3-031-57515-0_7.
6 John Doyle, David Alderson, Lun Li, Steven Low, Matthew Roughan, Stanislav Shalunov,
Reiko Tanaka, and Walter Willinger. The “robust yet fragile” nature of the internet. Proceedings
of the National Academy of Sciences of the United States of America, 102:14497–502, 11 2005.
doi:10.1073/pnas.0501426102.
7 Devdatt Dubhashi and Alessandro Panconesi. Concentration of Measure for the Analysis of
Randomized Algorithms. Cambridge University Press, 2009.
8 Tobias Friedrich, Andreas Göbel, Maximilian Katzmann, and Leon Schiller. A simple statistic
for determining the dimensionality of complex networks, 2023. arXiv:2302.06357.
9 Rouzbeh Hasheminezhad, August Bøgh Rønberg, and Ulrik Brandes. The myth of the robust-
yet-fragile nature of scale-free networks: An empirical analysis. In Megan Dewar, Paweł Prałat,
Przemysław Szufel, François Théberge, and Małgorzata Wrzosek, editors, Algorithms and
Models for the Web Graph, pages 99–111, Cham, 2023. Springer Nature Switzerland.
10 Swami Iyer, Timothy Killingback, Bala Sundaram, and Zhen Wang. Attack robustness and
centrality of complex networks. PloS one, 8(4):e59613, 2013.
11 Ralph Keusch. Geometric inhomogeneous random graphs and graph coloring games. PhD
thesis, ETH Zurich, 2018.

12 Johannes Lengler and Lazar Todorović. Existence of small separators depends on geometry
for geometric inhomogeneous random graphs. arXiv preprint arXiv:1711.03814, 2017.
13 Malwina Luczak and Colin McDiarmid. Bisecting sparse random graphs. Random Struct.
Algorithms, 18:31–38, 01 2001. doi:10.1002/1098-2418(200101)18:13.0.CO;2-1.
14 Miller McPherson, Lynn Smith-Lovin, and James Cook. Birds of a feather: Homophily in
social networks. Annual Review of Sociology, 27:415–444, 08 2001. doi:10.1146/annurev.soc.27.1.415.
15 Fabio Pasqualetti, Shiyu Zhao, Chiara Favaretto, and Sandro Zampieri. Fragility limits perform-
ance in complex networks. Scientific Reports, 10, 02 2020. doi:10.1038/s41598-020-58440-6.
16 Liang Tian, Amir Bashan, Da-Ning Shi, and Yang-Yu Liu. Articulation points in complex
networks. Nature communications, 8(1):14223, 2017.
17 Sebastian Wandelt, Xiaoqian Sun, Daozhong Feng, Massimiliano Zanin, and Shlomo Havlin.
A comparative analysis of approaches to network-dismantling. Scientific reports, 8(1):13513,
2018.

A Appendix

A.1 Tools
Here we state some very useful lemmata. The first one allows us to sum over (sub)sequences
of weights and will be ubiquitous in the proofs.

▶ Lemma 22 (Lemma 6.3 in [3]). Let f : R → R be a continuously differentiable function,
and recall that V≥w = {v ∈ V | wv ≥ w}. Then, for any weights 1 ≤ w0 ≤ w1, we have

∑_{v∈V, w0≤wv≤w1} f(wv) = f(w0) · |V≥w0| − f(w1) · |V≥w1| + ∫_{w0}^{w1} f′(w) |V≥w| dw.
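As a quick sanity check of Lemma 22, the identity can be verified exactly for a concrete finite weight sequence: |V≥w| is a step function that is constant between consecutive weights, so the integral ∫ f′(w)|V≥w| dw evaluates in closed form piecewise. The snippet below is our own illustration; the weights, the function f and the interval are arbitrary choices, with w1 picked so that no weight coincides with it (so the boundary convention at w1 is immaterial).

```python
def summation_identity_sides(weights, f, w0, w1):
    """Return (lhs, rhs) of the summation identity of Lemma 22.
    lhs: sum of f(w_v) over vertices with w0 <= w_v <= w1.
    rhs: f(w0)|V>=w0| - f(w1)|V>=w1| + integral of f'(w)|V>=w| dw,
    evaluated exactly on the intervals where |V>=w| is constant."""
    def N(w):  # |V_{>= w}|
        return sum(1 for x in weights if x >= w)
    lhs = sum(f(w) for w in weights if w0 <= w <= w1)
    # breakpoints of the step function N inside (w0, w1)
    pts = sorted({w0, w1, *(w for w in weights if w0 < w < w1)})
    integral = 0.0
    for a, b in zip(pts, pts[1:]):
        # on (a, b), N is constant, so the integral of f'(w) N(w) there
        # equals (f(b) - f(a)) * N evaluated at the midpoint
        integral += (f(b) - f(a)) * N((a + b) / 2)
    return lhs, f(w0) * N(w0) - f(w1) * N(w1) + integral
```

For instance, with weights [1.0, 1.7, 2.3, 3.0, 4.2], f(w) = w², w0 = 1 and w1 = 3.5, both sides agree (up to floating-point rounding).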

Next we give the classical Chernoff bounds.

▶ Lemma 23 (Theorem 1.1 in [7]). Let X := ∑_{i=1}^{n} Xi be the sum of independent indicator
random variables Xi. Then for any ε ∈ (0, 1) we have

P[X ≥ (1 + ε)E[X]] ≤ exp(−ε²E[X]/3) and P[X ≤ (1 − ε)E[X]] ≤ exp(−ε²E[X]/2).

We will also need a strong version of Chernoff's bounds.

▶ Lemma 24 (Theorem 2.15 in [4]). Let X := ∑_{i=1}^{n} Xi be the sum of independent indicator
random variables Xi. Then for any ε > 0 we have

P[X ≥ (1 + ε)E[X]] ≤ (e^ε / (1 + ε)^{1+ε})^{E[X]} ≤ (e / (1 + ε))^{(1+ε)E[X]}.
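To illustrate the two upper-tail bounds of Lemma 24 concretely, one can compare them against an exact binomial tail for a specific parameter choice. This is our own numerical check (the parameters n = 50, p = 0.3, ε = 0.5 are arbitrary); the exact tail is computed with math.comb.

```python
import math

def binomial_upper_tail(n, p, k):
    """Exact P[X >= k] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def strong_chernoff_bounds(mu, eps):
    """The two bounds of Lemma 24 on P[X >= (1+eps) E[X]], with E[X] = mu."""
    b1 = (math.exp(eps) / (1 + eps) ** (1 + eps)) ** mu
    b2 = (math.e / (1 + eps)) ** ((1 + eps) * mu)
    return b1, b2
```

With E[X] = 15 and ε = 0.5 the threshold is 22.5, so the relevant exact tail is P[X ≥ 23]; it is dominated by the first bound, which in turn is dominated by the second, as the lemma's chain of inequalities states.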

A.2 Proofs about Boolean Distance Functions


Proof of Proposition 11. The proof is by induction on the dimension d. Recall that d is a
constant w.r.t. n. When d = 1, we must have κ(x) = |x| and therefore Vκ(r) = 2r for all
r ∈ [0, 1/2], and in particular Vκ(r) = Θ(r) = Θ(r^{D(κ)}). Assume now that d ≥ 2. Then
by Definition 4 there exists a non-empty proper subset S ⊊ [d] of coordinates such that
κ(x) = max(κ1((xi)_{i∈S}), κ2((xi)_{i∉S})) or κ(x) = min(κ1((xi)_{i∈S}), κ2((xi)_{i∉S})) for some BDFs
κ1 and κ2. Note that by the induction hypothesis we know that Vκ1(r) = Θ(r^{D(κ1)}) and
Vκ2(r) = Θ(r^{D(κ2)}).

Case 1, κ(x) = max(κ1((xi)_{i∈S}), κ2((xi)_{i∉S})). In this case, we have

Vκ(r) = ∫_{x∈Td, κ(x)≤r} dx = ∫_{z∈T^{d−|S|}, κ2(z)≤r} ∫_{y∈T^{|S|}, κ1(y)≤r} dy dz
= (∫_{z∈T^{d−|S|}, κ2(z)≤r} dz) · (∫_{y∈T^{|S|}, κ1(y)≤r} dy) = Vκ2(r) · Vκ1(r)
= Θ(r^{D(κ2)}) · Θ(r^{D(κ1)}) = Θ(r^{D(κ1)+D(κ2)}) = Θ(r^{D(κ)}),

where the first two equalities follow from the definition of the Lebesgue measure resp. the
case distinction, the third equality follows by the Fubini-Tonelli theorem, and the last equality
holds because κ is outer-max, and hence D(κ1) + D(κ2) = D(κ) by Definition 5.
Case 2, κ(x) = min(κ1((xi)_{i∈S}), κ2((xi)_{i∉S})). In this case, we have

Vκ(r) = ∫_{x∈Td, κ(x)≤r} dx = 1 − ∫_{x∈Td, κ(x)>r} dx = 1 − ∫_{z∈T^{d−|S|}, κ2(z)>r} ∫_{y∈T^{|S|}, κ1(y)>r} dy dz
= 1 − (∫_{z∈T^{d−|S|}, κ2(z)>r} dz)(∫_{y∈T^{|S|}, κ1(y)>r} dy) = 1 − (1 − Vκ2(r))(1 − Vκ1(r))
= 1 − (1 − Θ(r^{D(κ2)}))(1 − Θ(r^{D(κ1)}))
= 1 − (1 − Θ(r^{D(κ1)}) − Θ(r^{D(κ2)})) = Θ(r^{min(D(κ1),D(κ2))}) = Θ(r^{D(κ)}),

where the last equality holds because κ is outer-min, and hence min(D(κ1), D(κ2)) = D(κ)
by Definition 5. ◀
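The recursion in the proof can be checked empirically for a small example BDF. Take κ(x) = max(|x1|, min(|x2|, |x3|)) on T³, so D(κ) = 1 + min(1, 1) = 2; by the two cases above, Vκ(r) = 2r · (1 − (1 − 2r)²). The Monte Carlo sketch below is our own check (sample size, radius and seed are arbitrary); |x| here denotes the torus distance min(x, 1 − x).

```python
import random

def torus_abs(x):
    """Distance of x in [0,1) to 0 on the circle T."""
    return min(x, 1.0 - x)

def kappa(x):
    """Example BDF: max(|x1|, min(|x2|, |x3|)); its depth is D = 2."""
    return max(torus_abs(x[0]), min(torus_abs(x[1]), torus_abs(x[2])))

def estimate_volume(r, samples=200_000, seed=0):
    """Monte Carlo estimate of V_kappa(r) = vol{x in T^3 : kappa(x) <= r}."""
    rng = random.Random(seed)
    hits = sum(kappa([rng.random() for _ in range(3)]) <= r for _ in range(samples))
    return hits / samples
```

At r = 0.1 the exact value is 2r(1 − (1 − 2r)²) = 0.2 · 0.36 = 0.072, and the estimate lands within Monte Carlo error; for small r this scales as Θ(r²), matching D(κ) = 2.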

Proof of Proposition 12. The proof is by induction on the dimension d. When d = 1, we
must have κ(x) = |x| and therefore the claim trivially holds with S := {1}. Assume now
that d ≥ 2, so that κ = max(κ1, κ2) or κ = min(κ1, κ2) for two BDFs κ1 and κ2 acting on a
torus of dimension strictly smaller than d. By the induction hypothesis, there exist disjoint
subsets S1, S2 ⊆ [d] with |S1| = D(κ1), |S2| = D(κ2) such that κ1(x^1) ≤ max_{i∈S1} |x^1_i| and
κ2(x^2) ≤ max_{i∈S2} |x^2_i|, where x^1, x^2 stand for x restricted to the coordinates in which κ1, κ2
act respectively.
If κ is outer-max, setting S := S1 ∪ S2 gives us a subset of size |S| = |S1| + |S2| =
D(κ1) + D(κ2) = D(κ). Moreover, we have

κ(x) = max(κ1(x^1), κ2(x^2)) ≤ max(max_{i∈S1} |x^1_i|, max_{i∈S2} |x^2_i|) = max_{i∈S1∪S2} |xi|

as desired.
If κ is outer-min, let us assume without loss of generality that D(κ1) ≤ D(κ2), and set
S := S1. Then |S| = |S1| = D(κ1) = min(D(κ1), D(κ2)) = D(κ) and

κ(x) = min(κ1(x^1), κ2(x^2)) ≤ min(max_{i∈S1} |x^1_i|, max_{i∈S2} |x^2_i|) ≤ max_{i∈S1} |xi|

as desired. ◀

Proof of Proposition 13. The proof is by induction on the dimension d. Notice that since κ is
outer-max and non-SCOM, the base case is d = 4 with κ(x) = max(min(|x1 |, |x2 |), min(|x3 |, |x4 |))

(up to permutations of the coordinates). We have

κ(x) = max(min(|x1 |, |x2 |), min(|x3 |, |x4 |))


= min(max(|x1 |, |x3 |), max(|x1 |, |x4 |), max(|x2 |, |x3 |), max(|x2 |, |x4 |))
≤ min(max(|x1 |, |x3 |), max(|x2 |, |x4 |))
=: κ′ (x),

where the second equality follows by distributing the max operation over the min operation,
and the inequality holds because dropping terms inside the min does not decrease its value.
Note that κ′ is an outer-min BDF, and additionally, D(κ′ ) = 2 = D(κ).
For the induction step, let d ≥ 5 and let κ : Td → R≥0 be a non-SCOM outer-max
Boolean Distance Function with comprising functions κ1 and κ2. Since κ = max(κ1, κ2)
is non-SCOM, we know that κ1 is either outer-min or non-SCOM outer-max (if κ1 were
SCOM, then the coordinate k that can be singled out of κ1 could also be singled out of
κ), and the same holds for κ2. For k ∈ {1, 2}, if κk is non-SCOM outer-max, we know by
the induction hypothesis that there exists an outer-min BDF κ′k with D(κ′k) = D(κk) and
κk ≤ κ′k. Otherwise, if κk is outer-min, we simply set κ′k := κk. Let κ11, κ12 be the comprising
functions of κ′1 and κ21, κ22 be the comprising functions of κ′2. Without loss of generality,
we can assume that D(κ11) ≤ D(κ12) and D(κ21) ≤ D(κ22). Let us denote by x^k the point
x restricted to the coordinates on which κk acts. We have

κ(x) = max(κ1(x^1), κ2(x^2)) ≤ max(κ′1(x^1), κ′2(x^2))
= max(min(κ11(x^{11}), κ12(x^{12})), min(κ21(x^{21}), κ22(x^{22})))
≤ min(max(κ11(x^{11}), κ21(x^{21})), max(κ12(x^{12}), κ22(x^{22})))
=: κ′(x).

Clearly κ′ is an outer-min BDF. It remains to show that D(κ′ ) = D(κ). Indeed

D(κ) = D(κ1 ) + D(κ2 ) = D(κ′1 ) + D(κ′2 ) = D(κ11 ) + D(κ21 )

and

D(κ′ ) = min(D(κ11 ) + D(κ21 ), D(κ12 ) + D(κ22 )) = D(κ11 ) + D(κ21 ),

which concludes the proof. ◀
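The lattice identity used in the base case, that max distributes over min, together with the bound obtained by dropping two of the four cross terms, can be verified exhaustively on a small grid. This is our own sanity check of those two elementary facts:

```python
import itertools

def check_distributivity():
    """Verify max(min(a,b), min(c,d)) = min of the four cross maxes,
    and that dropping two terms of that min only increases the value
    (this dropped min is the kappa' of the base case)."""
    grid = [i / 4 for i in range(5)]  # {0, 0.25, 0.5, 0.75, 1}
    for a, b, c, d in itertools.product(grid, repeat=4):
        lhs = max(min(a, b), min(c, d))
        full = min(max(a, c), max(a, d), max(b, c), max(b, d))
        dropped = min(max(a, c), max(b, d))
        assert lhs == full <= dropped
    return True
```

The equality holds because (R, min, max) is a distributive lattice; the inequality holds because a min over a subset of terms can only be larger.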

A.3 Proof of Theorem 15


In the following, we give the full proof of Theorem 15, which we first restate below for the
convenience of the reader.

▶ Theorem 25. Let κ : Td → R≥0 be a SCOM BDF and let G = (V, E) be a κ-GIRG. Then,
w.h.p. there exists a subset of edges S ⊂ E with |S| = o(n) such that G ′ := (V, E \ S) has two
connected components of size Θ(n).

Proof. Our proof strategy is the same as in [3]. The idea is to partition the ground space
along the singled-out coordinate axis into two half-spaces of equal volume, ensuring that each
half-space contains a linear number of the vertices of the giant. We then upper-bound the
number of edges crossing the separating hyperplanes. Each pair of vertices will contribute
an edge intersecting one of the two hyperplanes if and only if the vertices lie in different

half-spaces and are connected by an edge. The joint probability of this event can be computed
using the law of iterated probability.
Since κ is SCOM, we can assume without loss of generality that it is of the form
κ(x) = max(|x1|, κ′(x2, . . . , xd)) for d > 1, and simply κ(x) = |x1| for d = 1. Let D := D(κ)
denote the depth of κ. Consider the hyperplanes defined by the equations x1 = 0 and
x1 = 1/2. They partition Td into two disjoint regions. For any two vertices u, v, let us denote
by F_{u,v} the event that they lie in different regions of Td, and by G^r_{u,v} := {κ(xu − xv) = r}
the event that they are at a distance r. By Proposition 11, we know that the probability
density ϱ[G^r_{u,v}] of the latter is in Θ(r^{D−1}), so it remains to estimate P[F_{u,v} | G^r_{u,v}].
First, notice that crucially F_{u,v} depends only on the first coordinate of the positions of
the vertices u, v. To exploit this, let us define the event H^{r1}_{u,v} := {|x_{u,1} − x_{v,1}| = r1} for some
0 ≤ r1 ≤ r. The event F_{u,v} is then conditionally independent of the event G^r_{u,v} given H^{r1}_{u,v}.
Consequently, for all 0 ≤ r1 ≤ r ≤ 1/2, we have P[F_{u,v} | G^r_{u,v}, H^{r1}_{u,v}] = P[F_{u,v} | H^{r1}_{u,v}] = 2r1.
Thus, applying the law of iterated expectation over only the first random coordinate yields

P[F_{u,v} | G^r_{u,v}] = E[P[F_{u,v} | G^r_{u,v}, H^{r1}_{u,v}]] = E[P[F_{u,v} | H^{r1}_{u,v}]] = E[2r1] ≤ 2r.

Setting γ_{u,v} := min{(wu wv/n)^{1/D}, 1/2}, we can now express p_{u,v} as

p_{u,v}(r) = Θ(1) if r ≤ γ_{u,v}, and p_{u,v}(r) = Θ((γ_{u,v}/r)^{αD}) if r > γ_{u,v}.

Let ρ_{u,v} be the probability that u and v are connected by an edge that crosses the
separating hyperplanes. Then, following the computations that are done in the proof
of Lemma 6.1 in [3], we can show that ρ_{u,v} ≤ O(γ_{u,v}^{D+1}) + O(γ_{u,v}^{Dα} |log(γ_{u,v})|).⁵ Defining
α̃ := min(α, 1 + 1/D), we get ρ_{u,v} ≤ O(γ_{u,v}^{Dα̃} log(n)) since γ_{u,v} ≥ 1/n. Let S := ∑_{u,v∈V} γ_{u,v}^{Dα̃}
and let S denote the (random) set of edges in G that cross the separating hyperplanes. Then
E[|S|] = O(S log(n)) (we extracted the log term to ease the reading of the computations
to come), and following the computations that are done in the proof of Lemma 6.1 in [3]
yields (for any choice of constant η > 0) S ≤ O(n^{3−β+η}) + O(n^{2−α̃}).⁵ Defining m :=
max{3 − β, 2 − α, 1 − 1/D} + 2η, and choosing η small enough so that m < 1, we get
E[|S|] = O(n^m), and therefore, using Markov's inequality, we deduce that |S| = o(n) w.h.p.
It remains to show that with high probability G′ has two connected components of linear
size. By Chernoff's bounds (Lemma 23), w.h.p. each half-space contains Ω(n) vertices. By
Lemma 3.12 in [11] the subgraphs obtained by restricting the vertex set to one of these
half-spaces are also GIRGs themselves, and hence by Theorem 9 each half-space gives rise to
a connected component of size Θ(n). ◀

A.3.1 Upper-bounding ρu,v


Recall that ρ_{u,v} is the probability that u and v are connected by an edge that crosses the
separating hyperplanes. Recall further that F_{u,v} denotes the event that they lie in different
regions of Td, and that G^r_{u,v} := {κ(xu − xv) = r} is the event that they are at a distance
r. More concretely, these events allow us to express ρ_{u,v} as the marginal probability -
obtained from integrating over all possible distances r - of the following joint probability:

5
For the sake of readability we defer these computations to subsections A.3.1 and A.3.2 right after the
proof.

the probability that the pair of vertices u, v is at some distance r, with the two vertices not
in the same partition and connected by an edge. The possible distances r range from 0 to 1/2
due to the choice of the ground space. Therefore, we have

ρ_{u,v} = ∫_{r=0}^{1/2} p_{u,v}(r) · ϱ[G^r_{u,v}] · P[F_{u,v} | G^r_{u,v}] dr
≤ ∫_{r=0}^{γ_{u,v}} Θ(1) · Θ(r^{D−1}) · 2r dr + ∫_{r=γ_{u,v}}^{1/2} Θ((γ_{u,v}/r)^{αD}) · Θ(r^{D−1}) · 2r dr
= ∫_{r=0}^{γ_{u,v}} Θ(r^D) dr + Θ(γ_{u,v}^{αD}) · ∫_{r=γ_{u,v}}^{1/2} Θ(r^{D−αD}) dr =: I1 + Θ(γ_{u,v}^{αD}) · I2.

The inequality follows from the bounds described in the main body of the proof. We proceed
to compute the integrals I1 and I2. For the first integral we get

I1 = ∫_{r=0}^{γ_{u,v}} Θ(r^D) dr = Θ(γ_{u,v}^{D+1}).

For the second integral, we distinguish two cases. First, we consider the case where D − αD ≠ −1,
in which case we obtain

I2 = ∫_{r=γ_{u,v}}^{1/2} Θ(r^{D−αD}) dr ≤ O(γ_{u,v}^{D+1−αD}) + O(1),

where the first term accounts for the cases where the lower bound of integration dominates,
and the second term accounts for the cases where the upper bound of integration dominates.
When D − αD = −1 we have

I2 = ∫_{r=γ_{u,v}}^{1/2} Θ(r^{−1}) dr = O(|log γ_{u,v}|).

Putting these together, we obtain

ρ_{u,v} ≤ O(γ_{u,v}^{D+1}) + O(γ_{u,v}^{Dα} |log(γ_{u,v})|)

as desired.

A.3.2 Upper-bounding S

Recall that S = ∑_{u,v∈V} γ_{u,v}^{Dα̃} corresponds to the expected number of cross-edges up to a
logarithmic factor. We have

S = ∑_{u,v∈V} γ_{u,v}^{Dα̃} = ∑_{u∈V} ( ∑_{v∈V≤n/wu} (wu wv/n)^{α̃} + ∑_{v∈V≥n/wu} (1/2)^{Dα̃} )
= ∑_{u∈V} (wu/n)^{α̃} ∑_{v∈V≤n/wu} wv^{α̃} + (1/2)^{Dα̃} ∑_{u∈V} |V≥n/wu|
≤ ∑_{u∈V} (wu/n)^{α̃} ∫_1^{n/wu} O(n w^{1−β+η} w^{α̃−1}) dw + ∑_{u∈V} |V≥n/wu| =: S1 + S2,

where the second equality follows from the minimum in the definition of γ_{u,v} and the last
inequality holds by Lemma 22 and equation (PL2). Let us first evaluate S2. It follows
from (PL2) that there are no vertices of weight larger than wmax = Θ(n^{1/(β−1−η)}). Define
w′ := n/wmax = Θ(n^{(β−2−η)/(β−1−η)}). Then

 
S2 = ∑_{u∈V, wu≥w′} |V≥n/wu| = O( ∑_{u∈V, wu≥w′} n · (n/wu)^{1−β+η} ) = O( n^{2−β+η} ∑_{u∈V, wu≥w′} wu^{β−1−η} ),

where the first equality holds since |V≥n/wu| = 0 whenever wu < w′, and the second
follows from (PL2). By Lemma 22 and (PL2), the above becomes, for some η′ ∈ (0, η),

S2 = O( n^{2−β+η} ( (w′)^{β−1−η} · n(w′)^{1−β+η′} + ∫_{w′}^{∞} w^{β−2−η} · n w^{1−β+η′} dw ) )
= O( n^{3−β+η} ( (w′)^{η′−η} + ∫_{w′}^{∞} w^{η′−η−1} dw ) ) = O(n^{3−β+η}).

To evaluate S1, we consider two cases. If α̃ + η ≥ β − 1, then we have

S1 = O( ∑_{u∈V} (wu/n)^{α̃} · n · (n/wu)^{α̃−β+1+η} ) = O( n^{2−β+η} ∑_{u∈V} wu^{β−1−η} ).

By Lemma 22 and (PL2) we get, for some η′ ∈ (0, η),

S1 = O( n^{2−β+η} ( n + ∫_1^{∞} w^{β−2−η} · n w^{1−β+η′} dw ) ) = O( n^{3−β+η} · (1 + ∫_1^{∞} w^{η′−η−1} dw) ) = O(n^{3−β+η}).

On the other hand, if α̃ + η < β − 1, then we have S1 = O( n^{1−α̃} ∑_{u∈V} wu^{α̃} ). By Lemma 22
and (PL2), for some η′ ∈ (0, β − α̃ − 1), we have

S1 = O( n^{1−α̃} ( n + ∫_1^{∞} w^{α̃−1} · n w^{1−β+η′} dw ) ) = O( n^{2−α̃} · ∫_1^{∞} w^{α̃−β+η′} dw ) = O(n^{2−α̃}).

Summing up, we obtain S ≤ O(n^{3−β+η}) + O(n^{2−α̃}) as desired.

A.4 Proof of Lemma 19 and deriving Corollary 20 from it


Proof of Lemma 19. For simplicity, we assume that rn is an integer; it is straightforward
to adapt the argument to the general case. Fix a set S of rn cells. Their total volume is given by
rn · M^{−m}, which lies in the interval [rl2^{−m}, rl]. Let Xv be the indicator random variable
of the event that the vertex v lies in a cell of S, so that P[Xv = 1] = rn · M^{−m}. Let X = ∑_{v∈V} Xv be
the number of vertices in the cells of S. We have

E[X] = ∑_{v∈V} E[Xv] = rn² · M^{−m} ∈ [rnl2^{−m}, rnl],

and therefore δn/2 ≥ (δ/(2rl)) · E[X]. Since X is a sum of independent Bernoulli random variables, we
can use the strong Chernoff bound (Lemma 24) to upper-bound the probability that the cells of S contain
at least δn/2 vertices. Setting 1 + ε = δ/(2rl) yields

P[X ≥ δn/2] ≤ (2erl/δ)^{δ·rnl2^{−m}/(2rl)} = ((2erl/δ)^{δl/2^{2m+1}})^{2^m n/l}.

Thus, we can choose r small enough so that (2erl/δ)^{δl/2^{2m+1}} ≤ 1/3, which yields

P[X ≥ δn/2] ≤ 3^{−2^m n/l}.

Now, there are at most 2^{M^m} ≤ 2^{2^m n/l} choices for S, and taking a union bound over all such
choices, we get that the probability that there exists a choice of S such that at least δn/2
vertices are in it is upper bounded by (2/3)^{2^m n/l} = e^{−Θ(n)}. ◀

Proof of Corollary 20. Choose l = 1, and M = ⌈n^{1/m}⌉ as defined in Lemma 19. Then, for
any two vertices u, v in the same cell, we have that ∥(xu − xv)_{S2}∥^D_∞ ≤ M^{−m} ≤ l/n (remember
that D = m), and this implies that p_{uv}(c′_L, ∥(xu − xv)_{S2}∥_∞) = c′_L since wu, wv ≥ 1. Thus,
we get that

P[Y^2_{uv} < p_{uv}(c′_L, ∥(xu − xv)_{S2}∥_∞) | u, v are in the same cell] ≥ c′_L/2,

and therefore by (LB2) this guarantees that P[uv ∈ E | u, v are in the same cell] ≥ c′_L/2. Fix
some δn/2 < k ≤ n. Applying Lemma 19 to Vk yields that with probability 1 − e^{−Θ(n)}, there
is no set S of rn cells such that there are at least δn/2 vertices v ∈ Vk in them. In particular,
this implies that for every subset A ⊆ Vk of size at least δn/2, there are more than rn cells
that contain at least one vertex of A. Thus, with probability at least rn/M^m ≥ rl/2^m, u_k is
in a cell with at least one vertex v of A, and hence

P[∃v ∈ A with u_k v ∈ E] ≥ (rl/2^m) · (c′_L/2) =: P.

Taking a union bound over all possible values of k concludes the proof. ◀

A.5 Proving that K^3_max and K^4_max have no small cuts in the proof of
Theorem 16

Corollary 20, yielding a constant lower bound on edge formation in Phases 5 and 6, as well
as the cut bound from Lemma 17, enable us to show that K^3_max has no small cuts. Let us fix
some δ ∈ (0, 1) throughout the section.

▶ Lemma 26. There is a constant η > 0 such that w.h.p. the induced subgraph G4[K^3_max]
has no (δ, η)-cut.
Proof. At the end of Phase 5, consider bipartitions of K^3_max. From Lemma 17, we know
that for all ε > 0 there is some η′ > 0 and n0 such that for all n ≥ n0 there are at most
(1 + ε)^n many bipartitions with at most η′n many cross edges. Thus, this also holds for any
bipartition of K^3_max into two sets C1 and C2, each of size at least δn.
Since F ⊆ K^3_max satisfies |F| ≥ fn, without loss of generality we can assume that
|F ∩ C1| ≥ fn/2. Additionally, since |F| ≤ 6fn/smax and f/smax < δ/12, we have that
|C2 \ F| ≥ δn/2. For some v ∈ F ∩ C1, let Xv be the indicator random variable that detects if
there is an edge from v to some vertex in C2 \ F. By Corollary 20, we know that E[Xv] ≥ P.
Notice that (Xv)_{v∈F∩C1} are independent indicator random variables (because we implicitly
condition on the positions of the vertices in C2 \ F). So X := ∑_{v∈F∩C1} Xv is a sum of
independent indicator random variables with E[X] ≥ Pfn/2 =: μ. Thus, by Chernoff's
bound (Lemma 23) we get that X < μ/2 with probability at most e^{−μ/8}.
Finally, choose ε > 0 so that 1 + ε < e^{μ/(9n)}, and let η := min(η′, μ/(2n)). By a union
bound we get that the probability that there is some bipartition of K^3_max with at most
ηn ≤ η′n cross edges at the end of Phase 5 that still has fewer than μ/2 ≥ ηn cross edges after
Phase 6 is upper-bounded by (1 + ε)^n e^{−μ/8} = e^{−Θ(n)}. ◀

Now, the remaining goal is to show that the size of the giant does not grow too much in
Phase 6. To show this, first notice that K^3_max ⊆ K^4_max. We will classify the vertices outside
K^3_max into 3 types and show that each vertex type does not contribute many vertices to
K^4_max \ K^3_max. First, we consider vertices which are in components that are large but not
the giant. Let st := 4/(smax P), where the constants smax and P are taken from Theorem 9
and Corollary 20 respectively.

▶ Lemma 27. W.h.p. G3 has at most δn vertices that are in non-giant components of size
at least st.

Proof. Recall that

{u_k ∈ V | |V \ K^1_max| < k ≤ |V \ F|} = K^1_max \ F.

Let k > |V \ K^1_max|. At the end of step k − 1, let A_k be the set of vertices that are in a
non-giant component of size at least st. Notice that since we are only uncovering vertices in
the giant, A_k is non-increasing in k (for the range of k's we consider). Let Z_k be an indicator
random variable that is 1 if and only if A_k ≠ A_{k+1}.
Now, if |A_{|V\F|+1}| < δn, we are done. Otherwise, we have that |A_k| ≥ δn for all
|V \ K^1_max| < k ≤ |V \ F|. By Corollary 20, we have that P[Z_k = 1 | |A_k| ≥ δn] ≥ P. Let B^P_k
be independent indicator random variables such that each one of them is 1 with probability
exactly P. Notice that

E[ ∑_{k=|V\K^1_max|+1}^{|V\F|} Z_k | |A_k| ≥ δn ] ≥ E[ ∑_{k=|V\K^1_max|+1}^{|V\F|} B^P_k ] > P · smax n/2 =: μ,

where we have used the fact that f/smax < smax/12 and |F| ≤ 6fn/smax. By Chernoff's
bound (Lemma 23) we get that ∑_k B^P_k ≥ μ/2 w.h.p., and hence we also have that ∑_k Z_k ≥
μ/2 w.h.p. But since st = 4/(smax P) this means that we have removed at least st · μ/2 > n
vertices from A_{|V\K^1_max|+1}, which is a contradiction. ◀

Now we look at vertices that are in small components containing a large-weight vertex.

▶ Lemma 28. Let B ′ be the constant defined in Phase 2. There exists a constant B ≥ B ′ > 0
such that G3 has at most δn vertices that are in a component of size at most st containing
at least one vertex of weight at least B.

Proof. Notice that for B := max{B ′ , (c2 st /δ)1/(β−2) } we have that there are at most δn/st
many vertices with weight at least B (using η = 1 in (PL2)). Thus, there are at most δn
vertices that are in a component of size at most st containing such a vertex. ◀

Finally, we take care of the remaining type of vertices by bounding the number of edges
incident to small-weight vertices that are created in Phase 6. To do so, we will need the
following Azuma-Hoeffding bound with two-sided error events.

▶ Lemma 29 (Theorem 3.3 in [2]). Let Z1, . . . , Zm be independent random variables over
Ω1, . . . , Ωm. Let Z = (Z1, . . . , Zm), Ω = ∏_{k=1}^{m} Ωk and let g : Ω → R be measurable with
0 ≤ g(ω) ≤ M for all ω ∈ Ω. Let B ⊆ Ω be such that for some c > 0 and for all ω, ω′ ∈ Ω \ B
that differ in at most 2 components, the following holds:

|g(ω) − g(ω′)| ≤ c.

Then for all t ≥ 2MP[B] we have

P[|g(Z) − E[g(Z)]| ≥ t] ≤ 2e^{−t²/(32mc²)} + (2mM/c + 1)P[B].

We are thus ready to prove the outlined bound.

▶ Lemma 30. There is a choice of 0 < f < (smax /12) · min{δ, smax } such that w.h.p. there
are at most δn/st edges from F to vertices of weight at most B, where B is the constant
from Lemma 28.

Proof. Recall from their definitions that F ⊆ F′, and that F′ is chosen independently from
all the edges and only depends on the weights of the vertices. Let S be the set of vertices in
the graph with weight at most B. Recall that all vertices in F′ have a weight of at most B′.
Since B′ ≤ B, we know that F′ ⊆ S, and moreover we have |F′| ≤ 6fn/smax =: f′n.
We may assume that S = {1, . . . , |S|}, so that we have an ordering of the vertices (it does
not need to be the same ordering as in the sampling algorithm from Section 5.2). Consider
the random variables Yi = (Y_{1i}, Y_{2i}, . . . , Y_{(i−1)i}) for all 2 ≤ i ≤ |S|, where the Y_{ij} are the same
random variables defined in (EIC). Clearly, all the Yi are independent, as each Y_{ij} is
chosen independently from [0, 1]. For a given realization ω ∈ Ω of the κ-GIRG, let g(ω) be
the number of edges between F′ and S. Clearly, g depends only on the random variables
{Yi}_{2≤i≤|S|} and {xi}_{1≤i≤|S|}. Notice that 0 ≤ g(ω) ≤ |F′||S| ≤ f′n² for all ω ∈ Ω.
We define a "bad" event B as

B := {ω ∈ Ω | ∃u ∈ S : deg(u) ≥ 2C log² n},

where C > 0 is the constant from Lemma 10. We have P[B] = n^{−ω(1)} by Lemma 10.
Moreover, for any ω, ω′ ∈ Ω \ B that differ in at most 2 components (among the 2|S| − 1
components given by {Yi}_{2≤i≤|S|} and {xi}_{1≤i≤|S|}), we have

|g(ω) − g(ω′)| ≤ 4C log² n =: c

since the outcome of every xu and Yu affects at most 2C log² n edges if ω, ω′ ∈ Ω \ B.
Furthermore, Lemma 10 also implies that the expected degree of every vertex in F′ is
upper-bounded by B · C. Thus, we can also upper-bound the expectation of g as follows:

E[g(x1, . . . , x|S|, Y2, . . . , Y|S|)] ≤ E[ ∑_{u∈F′} deg(u) ] ≤ BCf′n.

Applying Lemma 29 with t := 2BCf′n − E[g(Z)] ≥ BCf′n thus yields

P[g(Z) − E[g(Z)] ≥ t] ≤ 2e^{−(BCf′n)²/(32·2n·(4C log² n)²)} + (2·(2n·f′n²)/(4C log² n) + 1)·n^{−ω(1)} = n^{−ω(1)}.

For a small enough f such that 2BCf′ < δ/st, we obtain that w.h.p. g(Z) ≤ E[g(Z)] + t =
2BCf′n < δn/st as desired. ◀

Therefore, we conclude that K^4_max is not much larger than K^3_max:

▶ Lemma 31. There is some choice of 0 < f < (smax/12) · min{δ, smax} such that w.h.p.

|K^4_max| ≤ |K^3_max| + 3δn.

Proof. The claim immediately follows from combining Lemmata 27, 28 and 30. ◀

A.6 Clustering coefficient


In the subsequent, we show that the clustering coefficient of any BDF-GIRG is in Ω(1). We
begin with the definition of the clustering coefficient, taken from [12].

▶ Definition 32. For a graph G = (V, E) the clustering coefficient of a vertex v ∈ V is defined
as

cc(v) := #{triangles in G containing v} / (deg(v) choose 2)  if deg(v) ≥ 2,  and cc(v) := 0 otherwise,

and the (mean) clustering coefficient of G is defined as cc(G) := (1/|V|) ∑_{v∈V} cc(v).
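The definition above translates directly into code. The following is a minimal sketch; the helper names `local_cc`, `mean_cc` and the toy adjacency list are ours, not from [12]:

```python
from itertools import combinations

def local_cc(adj, v):
    """cc(v) as in Definition 32: number of triangles containing v,
    divided by the number of neighbor pairs, (deg(v) choose 2)."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    triangles = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return triangles / (k * (k - 1) / 2)

def mean_cc(adj):
    """Mean clustering coefficient cc(G): average of cc(v) over all vertices."""
    return sum(local_cc(adj, v) for v in adj) / len(adj)

# Toy graph: a triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

Here cc(0) = cc(1) = 1, cc(2) = 1/3 (one triangle among three neighbor pairs), cc(3) = 0, so cc(G) = 7/12.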

We further require the definition of a stochastic relaxation of the triangle inequality.

▶ Definition 33 (Definition A.3 in [12]). Let κ : Td → R≥0 be a measurable, translation-
invariant, and symmetric function inducing a surjective volume function V : R≥0 → [0, 1].⁶
We say that κ satisfies a stochastic triangle inequality if there is a constant C > 0 such that
the following two conditions hold.
1. For every ε > 0 let x1 = x1 (ε), x2 = x2 (ε) be chosen independently and uniformly at
random in the ε-ball {x ∈ Td | κ(x) ≤ ε}. Then

lim inf_{ε→0} Pr[κ(x1 − x2) ≤ Cε] > 0.

2. Moreover,

lim inf_{ε→0} V({x ∈ Td | κ(x) ≤ ε}) / V({x ∈ Td | κ(x) ≤ Cε}) > 0.
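For intuition, both conditions can be checked numerically for the component-wise maximum distance κ(x) = max_i |xi| with C = 2. The sketch below works with the cube in R^d rather than on the torus (the two coincide for small ε); sampler and parameter choices are illustrative:

```python
import random

def max_norm(x):
    return max(abs(c) for c in x)

rng = random.Random(0)
d, eps, C = 3, 0.1, 2
trials = 10_000

# Condition 1: x1, x2 uniform in the eps-ball, here the cube [-eps, eps]^d.
# For the max-norm, kappa(x1 - x2) <= 2*eps holds with probability 1, since
# |x1_i - x2_i| <= |x1_i| + |x2_i| <= 2*eps in every coordinate.
hits = 0
for _ in range(trials):
    x1 = [rng.uniform(-eps, eps) for _ in range(d)]
    x2 = [rng.uniform(-eps, eps) for _ in range(d)]
    hits += max_norm([a - b for a, b in zip(x1, x2)]) <= C * eps

# Condition 2: the volume ratio V(eps) / V(C*eps) = (1/C)^d is a constant
# independent of eps, so its lim inf as eps -> 0 is positive.
ratio = (2 * eps) ** d / (2 * C * eps) ** d
```

Both quantities are constants bounded away from zero, exactly as the definition requires.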

Finally, we need the following theorem from [12], which reduces the task of lower-bounding
the clustering coefficient of BDF-GIRGs to showing that they satisfy the stochastic triangle
inequality.

▶ Theorem 34 (Theorem A.4 in [12]). Consider the GIRG model with a distance function
that satisfies the stochastic triangle inequality as described in Definition 33, and let G be a
random instance. Then cc(G) = Ω(1) with high probability.

Hence we need to prove the following.

▶ Lemma 35. Any BDF κ satisfies the stochastic triangle inequality with C = 2.

⁶ This is fulfilled in our case by choosing κ to be a BDF and V to be the volume Vκ(r) induced by κ.

Proof. For the first statement, recall that Proposition 12 gives us a subset S ⊆ [d] of the
coordinates with |S| = D(κ) such that κ(x) ≤ max_{i∈S} |xi| for all x ∈ Td. For ε ≤ 1/4, we
have

κ(x1 − x2) ≤ max_{i∈S} |x1,i − x2,i| ≤ max_{i∈S} (|x1,i| + |x2,i|) ≤ max_{i∈S} |x1,i| + max_{i∈S} |x2,i|.

Consequently, P[κ(x1 − x2) ≤ 2ε] ≥ P[max_{i∈S} |x1,i| ≤ ε] · P[max_{i∈S} |x2,i| ≤ ε]. Now, since
x1 , x2 are chosen uniformly and independently at random from the ε-ball centered at the
origin, and D(κ) = |S|, we have

P[max_{i∈S} |x1,i| ≤ ε] = P[max_{i∈S} |x2,i| ≤ ε] = V∥·∥S(ε) / Vκ(ε) = Θ(ε^{D(κ)}) / Θ(ε^{D(κ)}) = Θ(1),

where the third equality comes from Proposition 11. Therefore P[κ(x1 − x2) ≤ 2ε] =
Θ(1), which implies that the lim inf in the first condition is a positive constant.
For the second statement, we have, using Proposition 11 again,

lim inf_{ε→0} V({x ∈ Td | κ(x) ≤ ε}) / V({x ∈ Td | κ(x) ≤ Cε}) = lim inf_{ε→0} Vκ(ε) / Vκ(Cε) = lim inf_{ε→0} Θ(ε^{D(κ)}) / Θ((Cε)^{D(κ)}) = Θ(1),

and as above we can conclude that the limit is a positive constant. ◀

Combining Theorem 34 and Lemma 35 gives us the desired result.

▶ Theorem 36. Let G be a GIRG induced by a BDF κ acting on the d-dimensional torus
Td . Then, with probability 1 − o(1), its clustering coefficient is constant, i.e. cc(G) = Θ(1).
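As a quick empirical illustration of geometry-induced clustering, consider a plain threshold random geometric graph on the 1-dimensional torus (a toy model with arbitrarily chosen parameters, not a full BDF-GIRG with vertex weights):

```python
import random
from itertools import combinations

def torus_dist(a, b):
    """Distance on the 1-dimensional torus T = [0, 1)."""
    d = abs(a - b)
    return min(d, 1 - d)

def local_cc(adj, v):
    """cc(v) as in Definition 32."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    tri = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return tri / (k * (k - 1) / 2)

rng = random.Random(42)
n, r = 300, 0.05
pts = [rng.random() for _ in range(n)]

# Connect every pair of vertices at torus distance at most r.
adj = {v: set() for v in range(n)}
for u, v in combinations(range(n), 2):
    if torus_dist(pts[u], pts[v]) <= r:
        adj[u].add(v)
        adj[v].add(u)

cc_G = sum(local_cc(adj, v) for v in adj) / n
```

The resulting mean clustering coefficient is comfortably bounded away from zero, in line with cc(G) = Θ(1).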
