
Transfer operators on graphs: spectral clustering and beyond

Stefan Klus and Maia Trower

Published 23 February 2024 © 2024 The Author(s). Published by IOP Publishing Ltd
Citation: Stefan Klus and Maia Trower 2024 J. Phys. Complex. 5 015014, DOI 10.1088/2632-072X/ad28fe


Abstract

Graphs and networks play an important role in modeling and analyzing complex interconnected systems such as transportation networks, integrated circuits, power grids, citation graphs, and biological and artificial neural networks. Graph clustering algorithms can be used to detect groups of strongly connected vertices and to derive coarse-grained models. We define transfer operators such as the Koopman operator and the Perron–Frobenius operator on graphs, study their spectral properties, introduce Galerkin projections of these operators, and illustrate how reduced representations can be estimated from data. In particular, we show that spectral clustering of undirected graphs can be interpreted in terms of eigenfunctions of the Koopman operator and propose novel clustering algorithms for directed graphs based on generalized transfer operators. We demonstrate the efficacy of the resulting algorithms on several benchmark problems and provide different interpretations of clusters.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Transfer operators are frequently used for the analysis of complex dynamical systems since their eigenvalues and eigenfunctions contain important information about global properties of the underlying processes [1–4]. While the eigenvalues are related to inherent relaxation timescales, the corresponding eigenfunctions can be used to identify metastable sets representing, for example, stable conformations of proteins or coherent sets describing, for instance, slowly mixing eddies in the ocean [5–11]. Other applications include model reduction, system identification, change-point detection, forecasting, control, and stability analysis [12–16]. Although transfer operators have been known for a long time, they recently gained renewed attention due to the availability of large data sets and powerful machine learning methods to estimate these operators and their spectral properties from simulation or measurement data.

Spectral clustering for graphs is a well-known and powerful technique for partitioning networks into groups of nodes that are well-connected internally, and poorly connected to other groups of nodes [17]. It was first shown in [18] that graphs can be partitioned based on eigenvectors of the adjacency matrix, but more recently spectral clustering has become increasingly popular in the machine learning community. In particular, the spectral clustering algorithms proposed in [19, 20] are commonly deployed and are capable of identifying near-optimal clusters in undirected static graphs [21]. Many extensions of these standard algorithms exist, including extensions to directed graphs [22] and time-evolving graphs [23, 24]. Recent work includes scalable algorithms that can efficiently cluster large graphs [25], and a regularized method that is able to interpret heterogeneous structures in graphs [26, 27].

The underlying technique can be derived in a number of ways, for example by viewing the problem as a graph partitioning or from a random walk perspective. In [17], it is shown that using a graph partition viewpoint allows the clustering problem to be reformulated as a relaxed mincut problem with constraints to balance the clusters. In particular, balancing the clusters by the number of vertices per cluster gives an objective function known as the RatioCut, and relaxing this problem is equivalent to standard spectral clustering using the unnormalized graph Laplacian. Relaxing the normalized cut objective function, which balances clusters by weight, leads instead to the spectral clustering formulation using the normalized graph Laplacian.

As mentioned above, random walks can also be used to reformulate the graph clustering problem. Intuitively, this corresponds to finding a graph partition such that a random walker will remain within a cluster for long periods of time and will rarely jump between clusters. This random-walk interpretation of spectral clustering methods for undirected graphs is well-known, see, e.g. [17, 28]. Nevertheless, the dynamical systems perspective and relationships with transfer operators and their eigenvalues and eigenfunctions have to our knowledge neither been used to analyze existing methods in detail, nor to systematically generalize spectral clustering to directed or dynamic graphs. In [23], we defined transfer operators on graphs and showed that spectral clustering for undirected graphs is equivalent to detecting metastable sets in reversible stochastic dynamical systems. The eigenvalues of transfer operators associated with non-reversible systems, however, are in general complex-valued and the notion of metastability is not suitable for such problems anymore. This motivates the definition of so-called coherent sets, which can be regarded as time-dependent metastable sets [5, 8, 29]. Coherent sets can be detected by computing eigenvalues and eigenfunctions of generalized transfer operators based on the forward–backward dynamics of the system. By defining these operators on graphs, it is possible to develop spectral clustering techniques for directed and time-evolving graphs, thus establishing and exploiting links between dynamical systems theory and graph theory. Applying methods originally developed for the analysis of complex dynamical systems to graphs allows for a rigorous derivation and, at the same time, also intuitive interpretation of spectral clustering techniques.

In terms of existing methods for clustering directed graphs, several proposed approaches are based on symmetrization techniques [22]. A simple example is to define a new graph with adjacency matrix $ A + A^\top $, where A is the adjacency matrix of the original directed graph. This is essentially equivalent to ignoring the directions of the edges. It is also possible to symmetrize the graph by defining an adjacency matrix in terms of the stationary distribution of a random walk [30]. Both of these proposals have a notable drawback: they fail to recognize similar nodes with common in-links or out-links, and can only cluster based on the existing edges in the graph with directional information removed. Our clustering approach is loosely related to these symmetrization techniques, in that it also leads to real-valued spectra since the resulting operators are self-adjoint (with respect to potentially weighted inner products). However, we do not directly symmetrize the adjacency matrix or other graph-related matrices, but exploit properties of random walk processes and associated transfer operators. A more detailed comparison and numerical results will be presented below.

In this work, we will generalize the definition of transfer operators on graphs, first introduced in [23], and show how eigenfunctions of these operators are related to classical spectral clustering approaches. The main contributions are:

  • We study the spectral properties of graph transfer operators and their matrix representations and show that the eigenfunctions of these operators can also be obtained by solving associated optimization problems.
  • We illustrate how transfer operators can be used to detect clusters or community structures in directed graphs and provide illustrative interpretations of the identified clusters in terms of coherent sets and block matrices.
  • We define Galerkin projections of transfer operators, show how these coarse-grained operators can be estimated from random-walk data, and analyze the convergence of data-driven approximations.
  • Furthermore, we highlight additional applications of the proposed methods such as graph drawing or network inference, apply our clustering approach to different types of benchmark problems, and compare it with other spectral clustering algorithms.

In section 2, we will define transfer operators, analyze their properties, and highlight relationships with spectral clustering algorithms. Section 3 illustrates how transfer operators can be approximated and estimated from data. Additionally, we will point out different interpretations of clustering methods. Numerical results will be presented in section 4 and concluding remarks and open questions in section 5.

2. Graphs, transfer operators, and spectral clustering

In this section, we will introduce the mathematical tools required for the transfer operator-based identification of clusters in graphs.

2.1. Directed and undirected graphs

In what follows, we will mainly consider directed graphs, but the derived methods can, of course, also be applied to undirected graphs. Relationships with well-known clustering techniques for undirected graphs will be discussed below.

Definition 2.1 (directed graph). A directed graph $ \mathscr{G} = (\mathscr{V}, \mathscr{E}) $ is given by a set of vertices $ \mathscr{V} = \{{\mathscr{v}_1}, \dots, {\mathscr{v}_n} \} $ and edges $ \mathscr{E} \subseteq \mathscr{V} \times \mathscr{V} $.

The weighted adjacency matrix $ A = (a_{ij}) \in \mathbb{R}^{n \times n} $ associated with a graph $ \mathscr{G} $ is defined by

$$ a_{ij} = \begin{cases} w({\mathscr{v}_i}, {\mathscr{v}_j}), & ({\mathscr{v}_i}, {\mathscr{v}_j}) \in \mathscr{E}, \\ 0, & \text{otherwise}, \end{cases} $$

where $ w({\mathscr{v}_i}, {\mathscr{v}_j}) \gt 0 $ is the weight of the corresponding edge. If the graph is undirected, then the adjacency matrix is symmetric. Additionally, we introduce the row-stochastic transition probability matrix $ S = (s_{ij}) = D_{\mathscr{o}}^{-1} A \in \mathbb{R}^{n \times n} $, with

$$ D_{\mathscr{o}} = \operatorname{diag}\big(d_{\mathscr{o}}({\mathscr{v}_1}), \dots, d_{\mathscr{o}}({\mathscr{v}_n})\big), \qquad d_{\mathscr{o}}({\mathscr{v}_i}) = \sum_{j = 1}^n a_{ij}. $$

That is, $ s_{ij} $ is the probability that a random walker starting in $ {\mathscr{v}_i} $ will go to $ {\mathscr{v}_j} $ in one step. We refer to $ D_{\mathscr{o}} $ as the matrix of out-degrees, and we will later also make use of the in-degree matrix $ D_{\mathscr{i}} = \operatorname{diag}\big(d_{\mathscr{i}}({\mathscr{v}_1}), \dots, d_{\mathscr{i}}({\mathscr{v}_n})\big) $, which is defined in a similar way, with

$$ d_{\mathscr{i}}({\mathscr{v}_j}) = \sum_{i = 1}^n a_{ij}. $$
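For illustration, the following minimal NumPy sketch assembles these matrices for a small directed graph. The example weights are arbitrary, and the sketch assumes that every vertex has at least one outgoing edge so that $ D_{\mathscr{o}} $ is invertible.

```python
import numpy as np

# Weighted adjacency matrix of a small directed graph (illustrative values);
# a zero entry means that there is no edge between the corresponding vertices.
A = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

d_out = A.sum(axis=1)                 # out-degrees d_o(v_i) = sum_j a_ij
d_in = A.sum(axis=0)                  # in-degrees  d_i(v_j) = sum_i a_ij
D_out = np.diag(d_out)                # out-degree matrix D_o
D_in = np.diag(d_in)                  # in-degree matrix D_i

S = A / d_out[:, None]                # row-stochastic transition matrix S = D_o^{-1} A
assert np.allclose(S.sum(axis=1), 1)  # every row of S sums to one
```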

2.2. Benchmark problems

In order to generate benchmark problems with clearly defined clusters, we will use the so-called directed stochastic block model.

Definition 2.2 (directed stochastic block model). The graph $ \mathscr{G} $ is said to be sampled from the directed stochastic block model (DSBM) $ \mathbf{G}(r_b, n_b, E) $, with $ E \in [0, 1]^{r_b \times r_b} $, if the adjacency matrix

$$ A = \begin{bmatrix} A_{11} & \cdots & A_{1 r_b} \\ \vdots & \ddots & \vdots \\ A_{r_b 1} & \cdots & A_{r_b r_b} \end{bmatrix} $$

is a block matrix whose blocks $ A_{ij} \in \mathbb{R}^{n_b \times n_b} $ contain positive entries with probability $ E_{ij} $. Here, $ r_b $ is the number and $ n_b $ the size of the blocks.

Our definition differs slightly from the DSBM described in [31], in that we allow different probabilities for all the blocks. Undirected graphs can be constructed in a similar way. While the DSBM is a frequently used model to generate benchmark problems, real-world graphs typically exhibit a more complicated structure. An algorithm to generate weighted directed graphs with heterogeneous node-degree distributions and cluster sizes for testing community detection methods is described in [32]. Briefly, the graphs in [32] are constructed by randomly assigning in-degrees to each node from a power law distribution and assigning out-degrees from a delta distribution. A mixing parameter is introduced for each of these degrees that is related to the quality of the graph partition—that is, the lower the mixing parameter the better the partitioning of the graph. Cluster sizes are also drawn from a power law distribution and a subgraph is constructed from each of these clusters. The subgraphs are finally joined randomly, with a rewiring process to ensure that the in-degree and out-degree distributions are preserved. We will use this algorithm to generate a set of graphs with different characteristics.
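A simple way to sample an unweighted graph from the DSBM of definition 2.2 is sketched below. The function name and the added self-loops are our own choices and not part of the definition; the benchmark generator from [32] is considerably more elaborate.

```python
import numpy as np

def sample_dsbm(r_b, n_b, E, seed=None):
    """Sample an adjacency matrix from G(r_b, n_b, E): each entry of block A_ij
    is set to 1 independently with probability E[i, j] (unweighted variant)."""
    rng = np.random.default_rng(seed)
    n = r_b * n_b
    A = np.zeros((n, n))
    for i in range(r_b):
        for j in range(r_b):
            A[i * n_b:(i + 1) * n_b, j * n_b:(j + 1) * n_b] = \
                rng.random((n_b, n_b)) < E[i, j]
    np.fill_diagonal(A, 1)  # self-loops, so that every vertex has an outgoing edge
    return A

# Two clusters: dense diagonal blocks, sparse off-diagonal blocks.
E = np.array([[0.8, 0.1],
              [0.1, 0.8]])
A = sample_dsbm(2, 100, E, seed=0)
```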

2.3. Coherent set illustration

The spectral clustering approach for directed and time-evolving graphs proposed in [23] is based on a generalized Laplacian that contains information about coherent sets. The detection of such coherent sets requires the notion of transfer operators, which describe the evolution of probability densities or observables. Before we introduce these operators, let us first illustrate the definition of coherence—defined in [8, 33] for continuous dynamical systems—for directed graphs.

Definition 2.3 (coherent pair). Let $ S^\tau $ be the flow associated with a dynamical system and τ a fixed lag time. Two sets $ \mathbb{A} $ and $ \mathbb{B} $ form a so-called coherent pair if $ S^\tau(\mathbb{A}) \approx \mathbb{B} $ and $ S^{-\tau}(\mathbb{B}) \approx \mathbb{A} $.

That is, the set $ \mathbb{A} $ is almost invariant under the forward–backward dynamics and we call it a finite-time coherent set. To avoid that $ (S^{-\tau} \circ S^\tau)(\mathbb{A}) = \mathbb{A} $ for all sets $ \mathbb{A} $, a small random perturbation of the dynamics is required for deterministic systems, see [8]. In our setting, the dynamics are given by a random walk process on the graph and thus already non-deterministic.

Example 2.4. Figure 1 illustrates the difference between a coherent set (green) and a set that is dispersed by the random-walk dynamics (red). The example shows that coherent sets form weakly coupled clusters and can be regarded as a natural generalization of clusters in undirected graphs. $\triangle$


Figure 1. (a) Two different sets of random walkers starting in $ \{{\mathscr{v}_1}, \dots, {\mathscr{v}_4} \} $ and $ \{{\mathscr{v}_{7}}, \dots, {\mathscr{v}_{10}} \} $ marked in green and red, respectively. The weight of the solid edges is 1 and the weight of the dashed edges is 0.01. (b) Positions after one step forward. Only one green random walker leaves the set. (c) Positions after one step forward and one step backward. The green set is coherent—only two random walkers escaped—, whereas the red set is clearly less invariant under the forward–backward dynamics.


This notion of coherence can be easily extended to time-evolving networks as shown in [23]. The structure of the clusters may then also change in time.

2.4. Transfer operators

Transfer operators on graphs can be either directly expressed in terms of graph-related matrices (such as the transition probability matrix) or estimated from random walk data. We will start with a formal definition of transfer operators. In what follows, $ \mathbb{U} = \{f \colon \mathscr{V} \to \mathbb{R} \} $ denotes the set of all real-valued functions defined on the set of vertices $ \mathscr{V} $ of a graph $ \mathscr{G} $.

Definition 2.5 (Perron–Frobenius & Koopman operators). Let S be the transition probability matrix associated with the graph $ \mathscr{G} $.

  • (i)  
    For a function $ \rho \in \mathbb{U} $, the Perron–Frobenius operator $ \mathcal{P} \colon \mathbb{U} \to \mathbb{U} $ is defined by

    $$ (\mathcal{P} \rho)({\mathscr{v}_j}) = \sum_{i = 1}^n s_{ij} \hspace{0.1em} \rho({\mathscr{v}_i}). $$
  • (ii)  
    Similarly, for a function $ f \in \mathbb{U} $, the Koopman operator $ \mathcal{K} \colon \mathbb{U} \to \mathbb{U} $ is given by

    $$ (\mathcal{K} f)({\mathscr{v}_i}) = \sum_{j = 1}^n s_{ij} \hspace{0.1em} f({\mathscr{v}_j}). $$

The Koopman operator is the adjoint of the Perron–Frobenius operator with respect to the standard inner product.

Definition 2.6 (probability density). A probability density is a function $ \mu \in \mathbb{U} $ that satisfies $ \mu({\mathscr{v}_i}) \unicode{x2A7E} 0 $ and $ \sum_{i = 1}^n \mu({\mathscr{v}_i}) = 1 $.

This allows us to define the Perron–Frobenius operator with respect to different probability densities. We define the µ-weighted inner product by

$$ \left\langle \,f,\, g \right\rangle_{\mu} = \sum_{i = 1}^n \mu({\mathscr{v}_i}) \hspace{0.1em} f({\mathscr{v}_i}) \hspace{0.1em} g({\mathscr{v}_i}). $$

Definition 2.7 (reversibility). The system is called reversible if there exists a probability density π that satisfies the detailed balance condition $ \pi({\mathscr{v}_i}) \hspace{0.1em} s_{ij} = \pi({\mathscr{v}_j}) \hspace{0.1em} s_{ji} $ for all $ {\mathscr{v}_i}, {\mathscr{v}_j} \in \mathscr{V} $.

Random walks on undirected graphs, for instance, are reversible [34, 35]. We will use this property to analyze the relationships between conventional spectral clustering and our approach.
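As a quick sanity check, the sketch below verifies the detailed balance condition for a random walk on a small undirected graph, where the stationary density is proportional to the vertex degrees. The example adjacency matrix is arbitrary.

```python
import numpy as np

# Symmetric weighted adjacency matrix of an undirected graph (arbitrary example).
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])

d = A.sum(axis=1)
S = A / d[:, None]             # random walk on the graph, S = D^{-1} A
pi = d / d.sum()               # stationary density, proportional to the degrees

# Detailed balance: pi_i * s_ij = pi_j * s_ji for all i, j.
M = pi[:, None] * S
assert np.allclose(M, M.T)
assert np.allclose(pi @ S, pi)  # pi is in particular an invariant density
```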

Definition 2.8 (reweighted Perron–Frobenius operator). Let µ be the initial density and $ \nu = \mathcal{P} \mu $ the resulting image density, i.e.

$$ \nu({\mathscr{v}_j}) = \sum_{i = 1}^n s_{ij} \hspace{0.1em} \mu({\mathscr{v}_i}). $$

Furthermore, assume that µ and ν are strictly positive. Given a function $ u \in \mathbb{U} $, we define the reweighted Perron–Frobenius operator $ \mathcal{T} \colon \mathbb{U} \to \mathbb{U} $ by

$$ (\mathcal{T} u)({\mathscr{v}_j}) = \frac{1}{\nu({\mathscr{v}_j})} \sum_{i = 1}^n s_{ij} \hspace{0.1em} \mu({\mathscr{v}_i}) \hspace{0.1em} u({\mathscr{v}_i}). $$
The reweighted Perron–Frobenius operator describes the evolution of functions with respect to the initial and final densities µ and ν. The assumption that $ \mu({\mathscr{v}_i}) \gt 0 $ implies that the probability that a random walker starts in $ {\mathscr{v}_i} $ is non-zero. Correspondingly, $ \nu({\mathscr{v}_i}) \gt 0 $ means that $ {\mathscr{v}_i} $ is reachable from at least one vertex, which could be $ {\mathscr{v}_i} $ itself. Adding self-loops thus regularizes the problem.

Definition 2.9 (forward–backward & backward–forward operators). Let $ f, u \in \mathbb{U} $ as before.

  • (i)  
    We define the forward–backward operator $ \mathcal{F} \colon \mathbb{U} \to \mathbb{U} $ by $ \mathcal{F} u = \mathcal{K} (\mathcal{T} u) $.
  • (ii)  
    Analogously, the backward–forward operator $ \mathcal{B} \colon \mathbb{U} \to \mathbb{U} $ is defined by $ \mathcal{B} f = \mathcal{T} (\mathcal{K} f) $.

In order to detect coherent sets, we want to find eigenfunctions $ \varphi_{\ell} $ of the forward–backward operator $ \mathcal{F} $ corresponding to eigenvalues $ \lambda_{\ell} \approx 1 $ such that $ \mathcal{F} \varphi_{\ell} = \lambda_{\ell} \hspace{0.1em} \varphi_{\ell} \approx \varphi_{\ell} $. These functions are almost invariant under the forward–backward dynamics. Defining $ \psi_{\ell} : = \frac{1}{\sqrt{\lambda_{\ell}}} \mathcal{T} \varphi_{\ell} $, this results in

$$ \mathcal{B} \psi_{\ell} = \frac{1}{\sqrt{\lambda_{\ell}}} \hspace{0.1em} \mathcal{T} \mathcal{K} \mathcal{T} \varphi_{\ell} = \frac{1}{\sqrt{\lambda_{\ell}}} \hspace{0.1em} \mathcal{T} \mathcal{F} \varphi_{\ell} = \lambda_{\ell} \hspace{0.1em} \psi_{\ell}. $$

That is, the functions $ \varphi_{\ell} $ and $ \psi_{\ell} $ come in pairs.

2.5. Covariance and cross-covariance operators

In addition to these different types of transfer operators, we define (uncentered) covariance and cross-covariance operators.

Definition 2.10 (covariance operators). Let S be the transition probability matrix and µ and ν the densities defined above. Given functions $ f, g \in \mathbb{U} $, we call $ \mathcal{C}_{xx}, \mathcal{C}_{yy} \colon \mathbb{U} \to \mathbb{U} $, with

$$ (\mathcal{C}_{xx} f)({\mathscr{v}_i}) = \mu({\mathscr{v}_i}) \hspace{0.1em} f({\mathscr{v}_i}) \qquad \text{and} \qquad (\mathcal{C}_{yy} \hspace{0.1em} g)({\mathscr{v}_j}) = \nu({\mathscr{v}_j}) \hspace{0.1em} g({\mathscr{v}_j}), $$

covariance operators and $ \mathcal{C}_{xy} \colon \mathbb{U} \to \mathbb{U} $, with

$$ (\mathcal{C}_{xy} \hspace{0.1em} g)({\mathscr{v}_i}) = \mu({\mathscr{v}_i}) \sum_{j = 1}^n s_{ij} \hspace{0.1em} g({\mathscr{v}_j}), $$
cross-covariance operator.

Let $ X \sim \mu $ and $ Y \sim \nu $ be random variables representing the initial position of the random walker and the position after one step, respectively. It follows that

$$ \left\langle \,f,\, \mathcal{C}_{xx} \hspace{0.1em} g \right\rangle = \mathbb{E}[\,f(X) \hspace{0.1em} g(X)] \qquad \text{and} \qquad \left\langle \,f,\, \mathcal{C}_{yy} \hspace{0.1em} g \right\rangle = \mathbb{E}[\,f(Y) \hspace{0.1em} g(Y)] $$

and, since the joint probability distribution is given by $ \mathbb{P}[X = {\mathscr{v}_i}, Y = {\mathscr{v}_j}] = \mu({\mathscr{v}_i}) \hspace{0.1em} s_{ij} $,

$$ \left\langle \,f,\, \mathcal{C}_{xy} \hspace{0.1em} g \right\rangle = \mathbb{E}[\,f(X) \hspace{0.1em} g(Y)]. $$
2.6. Matrix representations of operators

In order to simplify the notation, given any function $ g \in \mathbb{U} $, let $ \boldsymbol{g} \in \mathbb{R}^n $ be defined by

$$ \boldsymbol{g} = [g({\mathscr{v}_1}), \dots, g({\mathscr{v}_n})]^\top, $$

i.e. $ \boldsymbol{\rho}, \boldsymbol{f}, \boldsymbol{u}, \boldsymbol{\mu}, \boldsymbol{\nu} \in \mathbb{R}^n $ are the vector representations of the functions $ \rho, f, u, \mu, \nu \in \mathbb{U} $, respectively. Additionally, we define diagonal matrices

$$ D_{\mu} = \operatorname{diag}\big(\mu({\mathscr{v}_1}), \dots, \mu({\mathscr{v}_n})\big) \qquad \text{and} \qquad D_{\nu} = \operatorname{diag}\big(\nu({\mathscr{v}_1}), \dots, \nu({\mathscr{v}_n})\big), $$

so that we can write

$$ \mathcal{P} \rho = S^\top \boldsymbol{\rho} = : P \boldsymbol{\rho}, \quad \mathcal{K} f = S \hspace{0.1em} \boldsymbol{f} = : K \boldsymbol{f}, \quad \mathcal{T} u = D_{\nu}^{-1} S^\top D_{\mu} \hspace{0.1em} \boldsymbol{u} = : T \boldsymbol{u}, \quad \mathcal{F} u = S \hspace{0.1em} D_{\nu}^{-1} S^\top D_{\mu} \hspace{0.1em} \boldsymbol{u} = : F \boldsymbol{u}, \quad \mathcal{B} f = D_{\nu}^{-1} S^\top D_{\mu} \hspace{0.1em} S \hspace{0.1em} \boldsymbol{f} = : B \boldsymbol{f}, $$
where the operators (with a slight abuse of notation) are applied component-wise. That is, $ P, K, T, F, B $ are the matrix representations of the corresponding operators $ \mathcal{P}, \mathcal{K}, \mathcal{T}, \mathcal{F}, \mathcal{B} $, respectively. The matrices Dµ and Dν are invertible since we assumed µ and ν to be strictly positive. For the covariance and cross-covariance operators $ \mathcal{C}_{xx} $, $ \mathcal{C}_{yy} $, and $ \mathcal{C}_{xy} $, we obtain the matrix representations $ C_{xx} = D_{\mu} $, $ C_{yy} = D_{\nu} $, and $ C_{xy} = D_{\mu} \hspace{0.1em} S $ and can thus write

$$ \mathcal{C}_{xx} f = C_{xx} \hspace{0.1em} \boldsymbol{f}, \qquad \mathcal{C}_{yy} f = C_{yy} \hspace{0.1em} \boldsymbol{f}, \qquad \mathcal{C}_{xy} f = C_{xy} \hspace{0.1em} \boldsymbol{f}. $$
In what follows, we will use functions and their vector representations as well as operators and their matrix representations interchangeably.

Remark 2.11. Let $ \unicode{x1D7D9} \in \mathbb{R}^n $ be the vector of all ones. In [23], we assumed that the random walkers are initially uniformly distributed and defined $ \widetilde{F} = S \hspace{0.1em} \widetilde{D}_{\nu}^{-1} S^\top $, with $ \widetilde{D}_{\nu} = \operatorname{diag}(S^\top \unicode{x1D7D9}) $, whereas setting $ \boldsymbol{\mu} = \frac{1}{n} \unicode{x1D7D9} $ results in $ D_{\nu} = \frac{1}{n}\operatorname{diag}(S^\top \unicode{x1D7D9}) $. However, for this special case $ F = S \hspace{0.1em} D_{\nu}^{-1} S^\top D_{\mu} = S \hspace{0.1em} \widetilde{D}_{\nu}^{-1} S^\top = \widetilde{F} $ and the definitions are equivalent.

The properties of transfer operators are analyzed in detail in the appendix. In particular, we show that the eigenvalues of the operators $ \mathcal{F} $ and $ \mathcal{B} $ are contained in the interval $ [0, 1] $.
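The matrix representations can be assembled in a few lines. The sketch below follows the formulas of this subsection; the function name is ours, and the transition matrix S and the strictly positive initial density µ are assumed to be given.

```python
import numpy as np

def transfer_operator_matrices(S, mu):
    """Matrix representations P, K, T, F, B for a row-stochastic matrix S and a
    strictly positive initial density mu (a sketch of section 2.6)."""
    nu = S.T @ mu                                 # image density nu = P mu
    D_mu, D_nu_inv = np.diag(mu), np.diag(1 / nu)
    P = S.T                                       # Perron-Frobenius operator
    K = S                                         # Koopman operator
    T = D_nu_inv @ S.T @ D_mu                     # reweighted Perron-Frobenius operator
    F = K @ T                                     # forward-backward operator
    B = T @ K                                     # backward-forward operator
    return P, K, T, F, B

S = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
mu = np.full(3, 1 / 3)                            # uniform initial density
P, K, T, F, B = transfer_operator_matrices(S, mu)
assert np.allclose(F.sum(axis=1), 1)              # F is row-stochastic (proposition A.1)
```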

2.7. Associated graph structures

How are the graph structures associated with the matrix representations F and B of the forward–backward and backward–forward operators $ \mathcal{F} $ and $ \mathcal{B} $ related to the original directed graph $ \mathscr{G} $?

Lemma 2.12. It holds that the entry fij of the matrix F is nonzero if and only if there exists an index k such that $ ({\mathscr{v}_i}, {\mathscr{v}_{k}}) \in \mathscr{E} $ and $ ({\mathscr{v}_j}, {\mathscr{v}_{k}}) \in \mathscr{E} $. That is, a random walker can go forward from $ {\mathscr{v}_i} $ to $ {\mathscr{v}_{k}} $ and then backward to $ {\mathscr{v}_j} $. Similarly, the entry bij of the matrix B is nonzero if and only if there exists an index k such that $ ({\mathscr{v}_{k}}, {\mathscr{v}_i}) \in \mathscr{E} $ and $ ({\mathscr{v}_{k}}, {\mathscr{v}_j}) \in \mathscr{E} $.

Proof. Since $ s_{ij} \neq 0 ~\Longleftrightarrow~ ({\mathscr{v}_i}, {\mathscr{v}_j}) \in \mathscr{E} $, we have

$$ f_{ij} = \mu({\mathscr{v}_j}) \sum_{k = 1}^n \frac{s_{ik} \hspace{0.1em} s_{jk}}{\nu({\mathscr{v}_{k}})} \neq 0 ~\Longleftrightarrow~ \exists \hspace{0.1em} k \colon s_{ik} \neq 0 \wedge s_{jk} \neq 0 $$

and

$$ b_{ij} = \frac{1}{\nu({\mathscr{v}_i})} \sum_{k = 1}^n s_{ki} \hspace{0.1em} \mu({\mathscr{v}_{k}}) \hspace{0.1em} s_{kj} \neq 0 ~\Longleftrightarrow~ \exists \hspace{0.1em} k \colon s_{ki} \neq 0 \wedge s_{kj} \neq 0. $$
 □

The difference between the two matrix representations of the operators is illustrated in figure 2. Note that the probability of each forward–backward random walk via $ {\mathscr{v}_{k}} $ is divided by $ \nu({\mathscr{v}_{k}}) $, which represents the sum of the probabilities to go from any vertex to $ {\mathscr{v}_{k}} $. This is intuitively clear since the more edges enter $ {\mathscr{v}_{k}} $, the more options the random walker has to go backward. On the other hand, the more forward–backward random walks from $ {\mathscr{v}_i} $ to $ {\mathscr{v}_j} $ exist, the larger the weight fij .

Remark 2.13. This is strongly related to the bibliometric graph clustering approach described in [22]. Here, the bibliographic coupling matrix $ A A^\top $ and the co-citation matrix $ A^\top A $ represent the common out-links and in-links between a pair of nodes, respectively. Using the sum of these two matrices gives a symmetrization of the adjacency matrix that accounts for both of these types of common links. As in the forward–backward approach, the weights can be divided by the degrees of the nodes in order to give an effective measure of similarity for nodes that accounts for the number of possible routes or links into and out of the node. Alternatively, in [31] a complex-valued Hermitian matrix $ H = i(A-A^\top) $ is constructed so that $ H^2 = A A^\top + A^\top A - A^2 - (A^\top)^2 $. This can then also be viewed as a symmetrization scheme that utilizes the bibliographic coupling matrix $ A A^\top $ and the co-citation matrix $ A^\top A $.

Example 2.14. Let us consider the graph depicted in figure 3(a), which we already analyzed in example 2.4, and compute the associated matrices F and B. The resulting graph structures are shown in figures 3(b) and (c). The graph corresponding to the matrix F, for instance, now contains an edge connecting $ {\mathscr{v}_4} $ and $ {\mathscr{v}_{8}} $ since a random walker could move forward from $ {\mathscr{v}_4} $ to $ {\mathscr{v}_{5}} $ and backward to $ {\mathscr{v}_{8}} $ or the other way around. The probability of this forward–backward random walk, however, is low due to the small edge weight of $ ({\mathscr{v}_4}, {\mathscr{v}_{5}}) $. Note that without the self-loops all vertices except $ {\mathscr{v}_4} $, $ {\mathscr{v}_{8}} $, and $ {\mathscr{v}_{12}} $ would become disconnected. Every forward–backward random walk starting in one of these vertices would then end up in the initial position (since these vertices have only one outgoing edge) and thus form clusters of size one. This illustrates that adding self-loops can be regarded as a form of regularization. $\triangle$


Figure 2. (a) Forward–backward edge between $ {\mathscr{v}_i} $ and $ {\mathscr{v}_j} $. (b) Backward–forward edge between $ {\mathscr{v}_i} $ and $ {\mathscr{v}_j} $.


Figure 3. (a) Directed graph with three clusters. Self-loops are omitted for the sake of clarity. (b) Graph structure of the matrix F for uniform µ. (c) Graph structure of the matrix B for uniform ν. The dashed edges have again a low weight. Clustering the dominant eigenvectors of the matrices F or B results in three coherent sets, see also example 2.4.


2.8. Spectral clustering of directed graphs

Spectral clustering methods for undirected graphs are based on the eigenvectors corresponding to the smallest eigenvalues of an associated graph Laplacian. The random-walk normalized Laplacian is defined by

$$ L_{\textrm{rw}} = I - D_{\mathscr{o}}^{-1} A = I - S = I - K, $$

where K is the matrix representation of the Koopman operator. Equivalently, we can also compute the eigenvectors corresponding to the largest eigenvalues of K. This shows that conventional spectral clustering computes the dominant eigenfunctions of the Koopman operator, which contain information about the metastable sets of the system. The eigenvalues of the Koopman operator associated with directed graphs, however, are in general complex-valued and standard spectral clustering techniques typically fail, see [23] for a detailed analysis. For directed (and also time-evolving) graphs we thus propose to compute the eigenvectors associated with the largest eigenvalues of the forward–backward or backward–forward operators in order to detect coherent sets. The eigenvalues of these operators are real-valued even if the graph is directed, see the appendix.

Algorithm 2.15 (transfer operator-based spectral clustering algorithm). 

  • 1.  
    Compute the k largest eigenvalues $ \lambda_{\ell} $ and associated eigenvectors $ \boldsymbol{\varphi}_{\ell} $ of F.
  • 2.  
    Define $ \boldsymbol{\Phi} = [\boldsymbol{\varphi}_1, \dots, \boldsymbol{\varphi}_k] \in \mathbb{R}^{n \times k} $ and let $ r_i $ denote the ith row of Φ.
  • 3.  
    Cluster the points $ \{\hspace{0.1em} r_i \hspace{0.1em} \}_{i = 1}^n $ using, e.g. k-means.

The number of eigenvalues close to 1 indicates the number of coherent sets. That is, k should be chosen in such a way that there is a spectral gap between λk and $ \lambda_{k+1} $. In order to illustrate the spectral clustering algorithm, we first consider a simple graph that has a clearly defined cluster structure.
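A compact implementation of algorithm 2.15 could look as follows. This is only a sketch: it assumes that every vertex has positive out-degree (e.g. after adding self-loops) and that µ is uniform, the function name is hypothetical, and k-means is taken from scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_directed(A, k, mu=None):
    """Sketch of algorithm 2.15: cluster a directed graph with adjacency matrix A
    into k coherent sets using the forward-backward operator F."""
    n = A.shape[0]
    mu = np.full(n, 1 / n) if mu is None else mu
    S = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    nu = S.T @ mu
    F = S @ np.diag(1 / nu) @ S.T @ np.diag(mu)
    vals, vecs = np.linalg.eig(F)             # eigenvalues of F are real (see appendix)
    order = np.argsort(-vals.real)[:k]        # indices of the k largest eigenvalues
    Phi = vecs[:, order].real                 # columns phi_1, ..., phi_k; rows r_i
    return KMeans(n_clusters=k, n_init=10).fit_predict(Phi)

# labels = spectral_clustering_directed(A, k=4)   # e.g. for a DSBM graph as above
```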

Example 2.16. We sample a benchmark problem $ \mathscr{G} \in \mathbf{G}(4, 100, E) $ from the DSBM, with

$$ E = \begin{bmatrix} q & p & q & q \\ q & q & q & p \\ q & q & p & q \\ p & q & q & q \end{bmatrix}, $$

where p = 0.8 and q = 0.1. The resulting adjacency matrix is shown in figure 4(a). Note that there are two different types of clusters, namely dense blocks on the diagonal, representing groups of vertices that are tightly coupled to other vertices contained in the same cluster, and dense off-diagonal blocks, which have the property that the contained vertices are all tightly coupled (unidirectionally) to vertices in another cluster. For both types of clusters, however, the probability that a random walker going forward and then backward ends up in the same cluster is large compared to the escape probability. Applying algorithm 2.15 yields the results shown in figures 4(c) and (d). The dominant eigenvalues are $ \lambda_1 = 1 $, $ \lambda_2 = 0.72 $, $ \lambda_3 = 0.70 $, and $ \lambda_4 = 0.69 $, followed by a spectral gap. The eigenfunctions of the forward–backward operator can also be used for visualizing directed graphs (see footnote 3) as shown in figure 4(b). By defining the position of the vertex $ {\mathscr{v}_i} $ to be $ (\varphi_2({\mathscr{v}_i}), \varphi_3({\mathscr{v}_i})) $, we obtain a two-dimensional embedding of the graph. The edges are colored according to the source nodes. The blue vertices form a cluster in the conventional sense since they are strongly connected but only weakly coupled to the other clusters. The remaining sets of vertices have the property that many directed edges are pointing to one of the other clusters but not vice versa. These clusters correspond to non-sparse off-diagonal blocks. $\triangle$


Figure 4. (a) Adjacency matrix of a directed graph comprising four clusters. The blocks are colored according to the cluster numbers. (b) Visualization of the clustered graph using the eigenfunctions ϕ2 and ϕ3 as coordinates. (c) Four dominant eigenfunctions $ \varphi_{\ell} $ (top) and $ \psi_{\ell} $ (bottom); the differently colored curves correspond to the first, second, third, and fourth eigenfunctions. The functions are roughly constant within the clusters. (d) Clusters extracted from the functions $ \varphi_{\ell} $ (top) and $ \psi_{\ell} $ (bottom) using k-means. The blue indicator function picks row 3 and column 3 of the block matrix, the red function row 1 and column 2, the yellow function row 2 and column 4, and the purple function row 4 and column 1. That is, each function pair is associated with the block of the adjacency matrix marked in the corresponding color. These blocks all have in common that they contain many nonzero entries (i.e. incoming or outgoing edges). Note that since we compute two functions, i.e. $ \varphi_{\ell} $ and $ \psi_{\ell} $, it is now possible to detect dense off-diagonal blocks.


2.9. Spectral clustering of undirected graphs as a special case

A natural question now is whether conventional spectral clustering methods for undirected graphs can be regarded as a special case of the algorithm derived above.

Lemma 2.17. Assume that the system is reversible with respect to π. Then, setting $ \mu = \pi $, it holds that $ \mathcal{T} = \mathcal{K} $ and $ \mathcal{F} = \mathcal{B} = \mathcal{K}^2 $.

Proof. First, we rewrite the detailed balance condition as $ D_{\pi} \hspace{0.1em} S = S^\top D_{\pi} $. Since $ \mu = \pi $ implies $ \nu = \pi $, we obtain $ T = D_{\pi}^{-1} \hspace{0.1em} S^\top D_{\pi} = S = K $. The second part follows immediately from the definitions of the operators $ \mathcal{F} $ and $ \mathcal{B} $. □

If π satisfies the detailed balance condition, then it is also automatically an invariant density. This can be seen as follows:

$$ (\mathcal{P} \pi)({\mathscr{v}_j}) = \sum_{i = 1}^n s_{ij} \hspace{0.1em} \pi({\mathscr{v}_i}) = \sum_{i = 1}^n s_{ji} \hspace{0.1em} \pi({\mathscr{v}_j}) = \pi({\mathscr{v}_j}). $$

Lemma 2.18. Let $ \mathscr{G} $ now be an undirected graph with unique invariant density π. Assume that K has only distinct positive eigenvalues. Then conventional spectral clustering and algorithm 2.15 with $ \mu = \pi $ compute the same clusters.

Proof. Let $ \kappa_{\ell} $ be the eigenvalues of K and $ \lambda_{\ell} $ the eigenvalues of F. Using lemma 2.17, we know that $ \lambda_{\ell} = \kappa_{\ell}^2 $ and the eigenvectors are identical. Since all eigenvalues of K are by assumption positive and distinct, the ordering of the eigenvalues and eigenvectors remains unchanged. □

Negative eigenvalues of K might affect the ordering of the eigenvectors and thus the spectral clustering results. However, positive eigenvalues can be enforced by turning the transition probability matrix into a lazy Markov chain [37].
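The effect of the lazy chain can be checked directly: replacing S by $ \frac{1}{2}(I + S) $ shifts every eigenvalue κ of S to $ \frac{1}{2}(1 + \kappa) \unicode{x2A7E} 0 $. A minimal two-vertex example:

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # periodic chain with eigenvalues 1 and -1
S_lazy = 0.5 * (np.eye(2) + S)          # lazy chain: stay put with probability 1/2
print(np.linalg.eigvals(S))             # [ 1., -1.]
print(np.linalg.eigvals(S_lazy))        # [ 1.,  0.]  -> non-negative spectrum
```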

3. Analysis and approximation of transfer operators

In this section, we will derive reduced representations of transfer operators, show how they can be estimated from random walk data, and present alternative interpretations of the proposed spectral clustering approach.

3.1. Galerkin approximation

Given an operator $ \mathcal{L} \colon \mathbb{U} \to \mathbb{U} $ with matrix representation L, our goal now is to compute its Galerkin projection $ \mathcal{L}_r $ onto the r-dimensional subspace $ \mathbb{U}_r \subset \mathbb{U} $ with basis $ \{\phi_i \}_{i = 1}^r $. Here, r could potentially be much smaller than n. The matrix representation $ L_r \in \mathbb{R}^{r \times r} $ of $ \mathcal{L}_r $ is given by $ L_r = \big(G^{(0)}\big)^{-1} G^{(1)} $, with entries

$$ g_{ij}^{(0)} = \left\langle \phi_i,\, \phi_j \right\rangle_{\mu} \qquad \text{and} \qquad g_{ij}^{(1)} = \left\langle \phi_i,\, \mathcal{L} \phi_j \right\rangle_{\mu}. $$

By defining the vector-valued function $ \phi({\mathscr{v}_i}) = [\phi_1({\mathscr{v}_i}), \dots, \phi_r({\mathscr{v}_i})]^\top \in \mathbb{R}^r $ and

$$ \Phi_{\mathscr{V}} = [\phi({\mathscr{v}_1}), \dots, \phi({\mathscr{v}_n})] \in \mathbb{R}^{r \times n}, $$

the matrices $ G^{(0)} $ and $ G^{(1)} $ can be compactly written as

$$ G^{(0)} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} \Phi_{\mathscr{V}}^\top $$

and

$$ G^{(1)} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} L \hspace{0.1em} \Phi_{\mathscr{V}}^\top. $$

Any function $ f \in \mathbb{U}_r $ is given by

$$ f({\mathscr{v}_i}) = \sum_{j = 1}^r c_j \hspace{0.1em} \phi_j({\mathscr{v}_i}) = c^\top \phi({\mathscr{v}_i}), $$

where $ c = [c_1, \dots, c_r]^\top \in \mathbb{R}^r $, such that

$$ (\mathcal{L}_r f)({\mathscr{v}_i}) = (L_r \hspace{0.1em} c)^\top \phi({\mathscr{v}_i}). $$

Suppose now $ \xi_{\ell} $ is an eigenvector of Lr with associated eigenvalue $ \lambda_{\ell} $, i.e. $ L_r \hspace{0.1em} \xi_{\ell} = \lambda_{\ell} \hspace{0.1em} \xi_{\ell} $, then $ \varphi_{\ell}({\mathscr{v}_i}) = \xi_{\ell}^\top \phi({\mathscr{v}_i}) $ is an eigenfunction of $ \mathcal{L}_r $ since

$$ (\mathcal{L}_r \varphi_{\ell})({\mathscr{v}_i}) = (L_r \hspace{0.1em} \xi_{\ell})^\top \phi({\mathscr{v}_i}) = \lambda_{\ell} \hspace{0.1em} \xi_{\ell}^\top \phi({\mathscr{v}_i}) = \lambda_{\ell} \hspace{0.1em} \varphi_{\ell}({\mathscr{v}_i}). $$

Alternatively, assuming the matrix $ G^{(0)} $ is invertible, we can also solve the generalized eigenvalue problem $ G^{(1)} \hspace{0.1em} \xi_{\ell} = \lambda_{\ell} \hspace{0.1em} G^{(0)} \hspace{0.1em} \xi_{\ell} $. This shows that eigenfunctions of $ \mathcal{L} $ can be approximated by eigenfunctions of $ \mathcal{L}_r $.

Example 3.1. Choosing r = n and indicator functions for each vertex, i.e.

$$ \phi_j({\mathscr{v}_i}) = \begin{cases} 1, & i = j, \\ 0, & \text{otherwise}, \end{cases} $$

we obtain $ \Phi_{\mathscr{V}} = I $ and hence $ G^{(0)} = D_{\mu} $ and $ G^{(1)} = D_{\mu} L $ so that $ L_r = L $. That is, in this case we obtain the full matrix representation of the operator $ \mathcal{L} $. $\triangle$

Theoretically, we could choose any set of basis functions. Our goal, however, is to define the basis functions in such a way that the reduced operator $ \mathcal{L}_r $ has essentially the same dominant eigenvalues as the full operator $ \mathcal{L} $ and thus retains the metastability (and cluster structure) of the system.

Remark 3.2. The optimal basis is given by the eigenvectors $ \boldsymbol{\varphi}_{\ell} $ corresponding to the largest eigenvalues $ \lambda_{\ell} $ of L since choosing

$$ \Phi_{\mathscr{V}} = [\boldsymbol{\varphi}_1, \dots, \boldsymbol{\varphi}_r]^\top $$

results in $ G^{(0)} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} \Phi_{\mathscr{V}}^\top $ and $ G^{(1)} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} L \hspace{0.1em} \Phi_{\mathscr{V}}^\top = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} \Phi_{\mathscr{V}}^\top \operatorname{diag}(\lambda_1, \dots, \lambda_r) $. That is, $ L_r = \operatorname{diag}(\lambda_1, \dots, \lambda_r) $ and the reduced operator has exactly the same dominant eigenvalues as the full operator. However, if we already knew the eigenvectors of L, then we could immediately compute the clusters by applying, e.g. k-means. Also, the number of metastable sets is in general not known in advance. The question then is whether it is possible to derive near-optimal basis functions without having to compute the eigenvectors in the first place. One possibility would be to construct basis functions based on random walks started in different parts of the graph since the random walkers would with a high probability first explore nodes within the clusters before moving to other clusters. The generation of suitable basis functions, however, is beyond the scope of this paper.
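The Galerkin projection itself amounts to two small matrix products. The sketch below assembles $ G^{(0)} $, $ G^{(1)} $, and $ L_r $ for an arbitrary basis; the indicator-function basis used as an example corresponds to a candidate partition of the vertices and only serves to illustrate the shapes.

```python
import numpy as np

def galerkin_projection(L, mu, Phi_V):
    """Reduced matrix L_r = (G0)^{-1} G1 of an operator with matrix representation L,
    projected onto the basis given by the rows of Phi_V (r x n), see section 3.1."""
    D_mu = np.diag(mu)
    G0 = Phi_V @ D_mu @ Phi_V.T          # entries <phi_i, phi_j>_mu
    G1 = Phi_V @ D_mu @ L @ Phi_V.T      # entries <phi_i, L phi_j>_mu
    return np.linalg.solve(G0, G1)

# Indicator functions of four blocks of 100 vertices each (as in example 3.5).
labels = np.repeat(np.arange(4), 100)
Phi_V = np.eye(4)[labels].T              # 4 x 400 matrix whose rows are indicators
# L_r = galerkin_projection(F, mu, Phi_V)  # with F and mu as in section 2.6
```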

3.2. Data-driven approximation

The operators introduced above can also be estimated from random walk data. Given m random walkers $ x^{(i)} $ sampled from the distribution µ, each random walker takes a single step forward according to the probabilities given by the transition probability matrix S. Let the final positions of the random walkers be denoted by $ y^{(i)} $. Alternatively, instead of considering many random walkers, the data can be extracted from one long random walk $ x^{(1)}, \dots, x^{(m+1)} $ by defining $ y^{(i)} = x^{(i+1)} $, for $ i = 1, \dots, m $. The density µ is then determined by the random walk. If the graph admits a unique invariant density π, then µ will converge to π for $ m \to \infty $.

We now define data matrices $ \Phi_x, \Phi_y \in \mathbb{R}^{r \times m} $ by

$$ \Phi_x = [\phi(x^{(1)}), \dots, \phi(x^{(m)})] \qquad \text{and} \qquad \Phi_y = [\phi(y^{(1)}), \dots, \phi(y^{(m)})]. $$

Further, let $ \widehat{G}_{xx}, \widehat{G}_{yy}, \widehat{G}_{xy} \in \mathbb{R}^{r \times r} $ be defined by

$$ \widehat{G}_{xx} = \frac{1}{m} \Phi_x \Phi_x^\top, \qquad \widehat{G}_{yy} = \frac{1}{m} \Phi_y \Phi_y^\top, \qquad \widehat{G}_{xy} = \frac{1}{m} \Phi_x \Phi_y^\top. $$

These are empirical estimates of the corresponding Galerkin matrices. Thus, the entries of the matrices $ G_{xx} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} \Phi_{\mathscr{V}}^\top $, $ G_{yy} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\nu} \hspace{0.1em} \Phi_{\mathscr{V}}^\top $, and $ G_{xy} = \Phi_{\mathscr{V}} \hspace{0.1em} D_{\mu} \hspace{0.1em} S \hspace{0.1em} \Phi_{\mathscr{V}}^\top $ are given by

$$ [G_{xx}]_{ij} = \mathbb{E}[\phi_i(X) \hspace{0.1em} \phi_j(X)], \qquad [G_{yy}]_{ij} = \mathbb{E}[\phi_i(Y) \hspace{0.1em} \phi_j(Y)], \qquad [G_{xy}]_{ij} = \mathbb{E}[\phi_i(X) \hspace{0.1em} \phi_j(Y)], $$

so that the empirical estimates converge to the Galerkin matrices as $ m \to \infty $.
Remark 3.3. Note that the matrices Gxx and Gxy are not the Gram matrices used in kernel-based methods. Here, G stands for Galerkin. For functions $ f({\mathscr{v}_i}) = c_x^\top \phi({\mathscr{v}_i}) $ and $ g({\mathscr{v}_i}) = c_y^\top \phi({\mathscr{v}_i}) $, it holds that $ \left\langle \,f,\, \mathcal{C}_{xx} f\, \right\rangle = c_x^\top \hspace{0.1em} G_{xx} \hspace{0.1em} c_x $ and $ \left\langle \,f,\, \mathcal{C}_{xy} g \right\rangle = c_x^\top \hspace{0.1em} G_{xy} \hspace{0.1em} c_y $. If we choose the basis functions defined in example 3.1, then $ G_{xx} = C_{xx} = D_{\mu} $ and $ G_{xy} = C_{xy} = D_{\mu} \hspace{0.1em} S $.

Lemma 3.4. It follows that:

  • (i)  
    $ G_{xx}^{-1} \hspace{0.1em} G_{xy} $ is a Galerkin approximation of the operator $ \mathcal{K} $ w.r.t. $ \left\langle \cdot,\, \cdot \right\rangle_{\mu} $,
  • (ii)  
    $ G_{yy}^{-1} \hspace{0.1em} G_{yx} $ is a Galerkin approximation of the operator $ \mathcal{T} $ w.r.t. $ \left\langle \cdot,\, \cdot \right\rangle_{\nu} $.

Proof. For the first part, compare Gxx and Gxy with the matrices $ G^{(0)} $ and $ G^{(1)} $ introduced above. Then, using proposition A.1, we have

$$ [G_{yx}]_{ij} = [G_{xy}]_{ji} = \left\langle \phi_j,\, \mathcal{K} \phi_i \right\rangle_{\mu} = \left\langle \mathcal{T} \phi_j,\, \phi_i \right\rangle_{\nu} = \left\langle \phi_i,\, \mathcal{T} \phi_j \right\rangle_{\nu}, $$
which proves the second part. □

If µ is the uniform distribution, then $ \left\langle \phi_i,\, \mathcal{K} \phi_j \right\rangle = \left\langle \mathcal{P} \phi_i,\, \phi_j \right\rangle $, and $ G_{xx}^{-1} \hspace{0.1em} G_{yx} $ is an approximation of the operator $ \mathcal{P} $ with respect to the standard inner product $ \left\langle \cdot,\, \cdot \right\rangle $. As the forward–backward and backward–forward operators are simply compositions of these operators (see also proposition A.3), all the transfer operators introduced above can be estimated from data. This, however, implies that we do not directly compute Galerkin approximations of $ \mathcal{F} $ and $ \mathcal{B} $ but represent them as compositions of Galerkin projections, which introduces additional errors.
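A possible data-driven estimation routine is sketched below: it draws m one-step transitions and assembles the empirical matrices. All function names are ours, and the sampling assumes a uniform initial distribution.

```python
import numpy as np

def sample_transitions(S, m, seed=None):
    """Draw m one-step transitions (x, y) with x uniform and y ~ S[x, :]."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    x = rng.integers(n, size=m)
    y = np.array([rng.choice(n, p=S[i]) for i in x])
    return x, y

def empirical_gram_matrices(x, y, Phi_V):
    """Estimates of G_xx, G_yy, G_xy from data; Phi_V is the (r x n) basis matrix."""
    Phi_x, Phi_y, m = Phi_V[:, x], Phi_V[:, y], len(x)
    return Phi_x @ Phi_x.T / m, Phi_y @ Phi_y.T / m, Phi_x @ Phi_y.T / m

# G_xx, G_yy, G_xy = empirical_gram_matrices(*sample_transitions(S, 10000), Phi_V)
# K_r = np.linalg.solve(G_xx, G_xy)        # projected Koopman operator
# T_r = np.linalg.solve(G_yy, G_xy.T)      # projected operator T
# F_r = K_r @ T_r                          # composition approximating the projected F
```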

Example 3.5. For the graph introduced in example 2.16, we define sets $ \mathbb{A}_1 = \{{\mathscr{v}_1}, \dots, {\mathscr{v}_{100}} \} $, $ \mathbb{A}_2 = \{{\mathscr{v}_{101}}, \dots, {\mathscr{v}_{200}}\}$, $ \mathbb{A}_3 = \{{\mathscr{v}_{201}}, \dots, {\mathscr{v}_{300}} \} $, and $ \mathbb{A}_4 = \{{\mathscr{v}_{301}}, \dots, {\mathscr{v}_{400}} \} $ and basis functions $ \phi_j({\mathscr{v}_i}) = \unicode{x1D7D9}_{\mathbb{A}_j}({\mathscr{v}_i}) $, with $ j = 1, \dots, 4 $, where $ \unicode{x1D7D9}_{\mathbb{B}} $ denotes the indicator function for the set $ \mathbb{B} $. The representation of the forward–backward operator $ \mathcal{F} $ projected onto the space spanned by the four basis functions is then approximately a diagonal matrix and can be regarded as a coarse-grained counterpart of the full operator, where the off-diagonal entries correspond to transition probabilities between the clusters. The resulting eigenvalues and eigenvectors are good approximations of the dominant eigenvalues and eigenvectors of the full operator as shown in figure 5. $\triangle$


Figure 5. (a) Mean of the eigenvalues estimated from random walk data (averaged over 1000 runs) as a function of m; the differently colored curves correspond to the first (not shown since it is always 1), second, third, and fourth eigenvalues. The shaded area in the corresponding color represents the standard deviation, the dotted line the infinite-data limit, and the dashed line the eigenvalues of the Galerkin projection of the operator. Recall that the data-driven approximation is a composition of two Galerkin approximations and thus slightly underestimates the correct eigenvalues. The eigenvalues of the Galerkin approximation are also marginally smaller than the eigenvalues of the full operator computed in example 2.16. (b) Dominant eigenfunctions $ \varphi_{\ell} $ (top) and $ \psi_{\ell} $ (bottom) of the reduced matrix (solid lines) and the full matrix (dotted lines). The chosen indicator function basis approximates the correct eigenfunctions well since they are essentially constant within the clusters.


The example illustrates that it is possible to approximate the dominant eigenvalues and eigenvectors by computing eigendecompositions of potentially much smaller matrices, provided that a suitable set of basis functions is chosen and that the graph exhibits metastability.

Remark 3.6. Assuming that only random-walk data is available, but not the underlying graph itself, the data-driven approach can thus also be used for inferring the network structure and the transition probabilities. This will be further investigated in future work.

3.3. Optimization problem formulation

We will now show that it is possible to rewrite the clustering approach for directed graphs as a constrained optimization problem. This gives rise to a different interpretation of clusters. Canonical correlation analysis (CCA) [38–40] was originally developed to maximize the correlation between random variables—potentially in infinite-dimensional feature spaces if the kernel-based formulation is used. It has been shown in [9] that CCA, when applied to time-series data, can be used to detect coherent sets. In order to illustrate the relationships with the forward–backward operator $ \mathcal{F} $ (or its matrix representation F), we will briefly outline the derivation of CCA, which can be written as

$$ \max_{f, g \in \mathbb{U}} \left\langle \,f,\, \mathcal{C}_{xy} \hspace{0.1em} g \right\rangle \quad \text{s.t.} \quad \left\langle \,f,\, \mathcal{C}_{xx} f\, \right\rangle = \left\langle g,\, \mathcal{C}_{yy} \hspace{0.1em} g \right\rangle = 1. $$

Restricted to the subspace $ \mathbb{U}_r $, defining $ f({\mathscr{v}_i}) = c_x^\top \phi({\mathscr{v}_i}) $ and $ g({\mathscr{v}_i}) = c_y^\top \phi({\mathscr{v}_i}) $, this yields

$$ \max_{c_x, c_y \in \mathbb{R}^r} c_x^\top G_{xy} \hspace{0.1em} c_y \quad \text{s.t.} \quad c_x^\top G_{xx} \hspace{0.1em} c_x = c_y^\top G_{yy} \hspace{0.1em} c_y = 1. \tag{1} $$

The Lagrangian function corresponding to this optimization problem is

$$ \mathfrak{L}(c_x, c_y) = c_x^\top G_{xy} \hspace{0.1em} c_y - \frac{\kappa_x}{2} \big(c_x^\top G_{xx} \hspace{0.1em} c_x - 1\big) - \frac{\kappa_y}{2} \big(c_y^\top G_{yy} \hspace{0.1em} c_y - 1\big), $$

where κx and κy are Lagrange multipliers. The derivatives with respect to cx and cy are

$$ \nabla_{c_x} \mathfrak{L} = G_{xy} \hspace{0.1em} c_y - \kappa_x \hspace{0.1em} G_{xx} \hspace{0.1em} c_x = 0 \qquad \text{and} \qquad \nabla_{c_y} \mathfrak{L} = G_{yx} \hspace{0.1em} c_x - \kappa_y \hspace{0.1em} G_{yy} \hspace{0.1em} c_y = 0. $$

Multiplying the first equation by $ c_x^\top $ and the second by $ c_y^\top $, it follows that $ \kappa_x = \kappa_y = : \kappa_{\ell} $. We thus obtain the (generalized) eigenvalue problem

$$ \begin{bmatrix} 0 & G_{xy} \\ G_{yx} & 0 \end{bmatrix} \begin{bmatrix} c_x \\ c_y \end{bmatrix} = \kappa_{\ell} \begin{bmatrix} G_{xx} & 0 \\ 0 & G_{yy} \end{bmatrix} \begin{bmatrix} c_x \\ c_y \end{bmatrix}. $$

If we now solve one of the equations for cx or cy and insert the resulting expression into the other, we have

$$ G_{xx}^{-1} G_{xy} \hspace{0.1em} G_{yy}^{-1} G_{yx} \hspace{0.1em} c_x = \kappa_{\ell}^2 \hspace{0.1em} c_x \qquad \text{and} \qquad G_{yy}^{-1} G_{yx} \hspace{0.1em} G_{xx}^{-1} G_{xy} \hspace{0.1em} c_y = \kappa_{\ell}^2 \hspace{0.1em} c_y. $$
Since $ G_{xx}^{-1} G_{xy} $ is a Galerkin approximation of $ \mathcal{K} $ and $ G_{yy}^{-1} G_{yx} $ a Galerkin approximation of $ \mathcal{T} $, the first expression can be used to compute eigenfunctions of $ \mathcal{F} $ and the second to compute eigenfunctions of $ \mathcal{B} $, where $ \lambda_{\ell} = \kappa_{\ell}^2 $ is the corresponding eigenvalue. This shows that the functions maximizing the correlation are the eigenfunctions of the forward–backward and backward–forward operators. We can again use the dominant eigenfunctions to compute coherent sets.

Example 3.7. We now generate graphs $ \mathscr{G} \in \mathbf{G}\big(2, 100, \big[\begin{smallmatrix} p & q \\ q & p \end{smallmatrix} \big]\big) $ with varying p and q using the DSBM. There are two dense diagonal blocks if p is large and q is small and two dense off-diagonal blocks if p is small and q large. In order to analyze the quality of the clustering for different values of p and q, we plot the second-largest eigenvalue κ2 (i.e. the correlation between ϕ2 and ψ2) and the adjusted Rand index [41]. The results are shown in figure 6. The clustering works for both scenarios, but, as expected, fails when the values of p and q are too similar since there are in that case no clearly defined clusters. $\triangle$


Figure 6. (a) Correlation between ϕ2 and ψ2. (b) Adjusted Rand index of the obtained clustering. A value of 1 implies that the algorithm identified the correct clusters. (c) Eigenfunctions ϕ2 (top) and ψ2 (bottom) for different values of p and q, namely p = 0.99 and q = 0.01, p = 0.7 and q = 0.3, p = 0.3 and q = 0.7, and p = 0.01 and q = 0.99, where the curves correspond to the dots of the same color in (a) and (b). Although the correlation decreases due to the 'noisier' eigenvectors if p approaches q, the spectral clustering algorithm still works for most cases unless p ≈ q. Note that the sign of ψ2 flips once q > p (i.e. the off-diagonal blocks are now non-sparse), but the correlation increases again.


This demonstrates that the spectral clustering approach for directed graphs works for a wide range of parameter settings. Alternatively, we can rewrite (1) as

$$ \max_{\widetilde{c}_x, \widetilde{c}_y \in \mathbb{R}^r} \widetilde{c}_x^{\,\top} \hspace{0.1em} G_{xx}^{{-1/2}} G_{xy} \hspace{0.1em} G_{yy}^{{-1/2}} \hspace{0.1em} \widetilde{c}_y \quad \text{s.t.} \quad \left\lVert \widetilde{c}_x \right\rVert = \left\lVert \widetilde{c}_y \right\rVert = 1, \tag{2} $$

where $ c_x = G_{xx}^{{-1/2}} \hspace{0.1em} \widetilde{c}_x $ and $ c_y = G_{yy}^{{-1/2}} \hspace{0.1em} \widetilde{c}_y $. The maximum of the cost function (2) is the largest singular value σ1 of the matrix $ G_{xx}^{{-1/2}} G_{xy} \hspace{0.1em} G_{yy}^{{-1/2}} $, attained at $ \widetilde{c}_x = u_1 $ and $ \widetilde{c}_y = v_1 $, where u1 and v1 are the corresponding left and right singular vectors [42]. Thus, setting $ c_x = G_{xx}^{{-1/2}} u_1 $ and $ c_y = G_{yy}^{{-1/2}} v_1 $ solves the CCA problem. This can be extended to the subsequent singular values and vectors.
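Numerically, the projected CCA problem therefore reduces to a singular value decomposition of the whitened cross-matrix. A possible sketch (the function name is ours; $ G_{xx} $ and $ G_{yy} $ are assumed to be invertible):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, svd

def cca_via_svd(G_xx, G_yy, G_xy, k):
    """Solve the projected CCA problem (1)/(2): kappa_l are the singular values of
    G_xx^{-1/2} G_xy G_yy^{-1/2}, and lambda_l = kappa_l^2 (see section 3.3)."""
    G_xx_inv_sqrt = fractional_matrix_power(G_xx, -0.5)
    G_yy_inv_sqrt = fractional_matrix_power(G_yy, -0.5)
    U, kappa, Vt = svd(G_xx_inv_sqrt @ G_xy @ G_yy_inv_sqrt)
    C_x = G_xx_inv_sqrt @ U[:, :k]        # coefficient vectors c_x of the functions f
    C_y = G_yy_inv_sqrt @ Vt[:k].T        # coefficient vectors c_y of the functions g
    return kappa[:k], C_x, C_y
```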

3.4. Optimization problem formulation for undirected graphs

In order to write spectral clustering algorithms for undirected graphs as a solution of a constrained optimization problem, set $ \mu = \pi $. We then solve

$$ \max_{c \in \mathbb{R}^r} c^\top G_{xy} \hspace{0.1em} c \quad \text{s.t.} \quad c^\top G_{xx} \hspace{0.1em} c = 1, \tag{3} $$

resulting in the Lagrangian function

$$ \mathfrak{L}(c) = c^\top G_{xy} \hspace{0.1em} c - \kappa \hspace{0.1em} \big(c^\top G_{xx} \hspace{0.1em} c - 1\big) $$

and hence

$$ \nabla_{c} \mathfrak{L} = (G_{xy} + G_{yx}) \hspace{0.1em} c - 2 \hspace{0.1em} \kappa \hspace{0.1em} G_{xx} \hspace{0.1em} c = 2 \hspace{0.1em} \big(G_{xy} \hspace{0.1em} c - \kappa \hspace{0.1em} G_{xx} \hspace{0.1em} c\big) = 0, $$
where we used that Gxy is symmetric for $ \mu = \pi $ since due to the detailed balance condition $ C_{xy} = D_{\pi} \hspace{0.1em} S = S^\top D_{\pi} = C_{xy}^\top $ and hence also $ G_{xy} = G_{xy}^\top $. Thus, we obtain the eigenvalue problem $ G_{xx}^{-1} G_{xy} \hspace{0.1em} c = \kappa \hspace{0.1em} c $. This is a Galerkin approximation of the Koopman operator and the cost function is maximized by the eigenfunction associated with the largest eigenvalue. Subdominant eigenfunctions can again be used for detecting metastable sets.

Remark 3.8. If the matrix Gxy is not symmetric, we can still solve the optimization problem, but would then obtain the eigenvalue problem $ \frac{1}{2} G_{xx}^{-1} (G_{xy} + G_{yx}) \hspace{0.1em} c = \kappa \hspace{0.1em} c $, which can be regarded as a simple form of symmetrization.

Substituting $ c = G_{xx}^{{-1/2}} \hspace{0.1em} \widetilde{c} $, we can also write (3) as

$$ \max_{\widetilde{c} \in \mathbb{R}^r} \widetilde{c}^{\,\top} G_{xx}^{{-1/2}} G_{xy} \hspace{0.1em} G_{xx}^{{-1/2}} \hspace{0.1em} \widetilde{c} \quad \text{s.t.} \quad \left\lVert \widetilde{c} \right\rVert = 1, \tag{4} $$

which is maximized by the dominant eigenvector of the matrix $ G_{xx}^{{-1/2}} G_{xy} \hspace{0.1em} G_{xx}^{{-1/2}} $. We then again obtain the eigenvalue problem $ G_{xx}^{-1} G_{xy} \hspace{0.1em} c = \kappa \hspace{0.1em} c $.

3.5. Comparison

We have shown in section 2 that our transfer operator-based spectral clustering algorithm—obtained by replacing the Koopman operator by the forward–backward operator—can be interpreted as a straightforward generalization of conventional spectral clustering algorithms for undirected graphs. This is also corroborated by the optimization problem formulation. Additionally, comparing the constrained optimization problems (1) and (3) for directed and undirected graphs, we see that the former is clearly more flexible and powerful since we are allowed to choose two functions, whereas the latter is restricted to one, which makes it impossible to detect dense off-diagonal blocks.

4. Numerical results

We will present numerical results for a set of benchmark problems. Additional examples, including extensions to time-evolving networks, can be found in [23].

4.1. Weighted graphs with inhomogeneous degree distributions

We generate 300 weighted graphs with heterogeneous node-degree distributions and community sizes as described in [32] using their publicly available C++ code (Package 4). All the graphs comprise 500 vertices and have between four and six clusters. We consider three different test cases: (1) only dense blocks on the diagonal, (2) only off-diagonal dense blocks, and (3) a mix of both (see footnote 4). A typical adjacency matrix (case 3) is shown in figure 7.


Figure 7. (a) Adjacency matrix of a graph with six clusters. (b) Permuted adjacency matrix, where the vertices are reordered according to the identified clusters.


We apply the degree-discounted bibliometric symmetrization [22], the Hermitian clustering method proposed in [31] (both described in remark 2.13), and algorithm 2.15 to the three different test cases. We run each algorithm ten times (using the same k-means implementation and settings) for each graph and compute the average adjusted Rand index and the average number of misclassified vertices. The results, listed in table 1, show that the Hermitian clustering approach works well for graphs with dense off-diagonal blocks, but fails to identify 'conventional' clusters, i.e. groups of nodes that are strongly coupled by directed edges, since the method was designed to detect only imbalances in the orientation of the edges. Instead of the random-walk normalized matrix $ A_{\textrm{rw}} = D_{\mathscr{o}}^{-1} A $ as proposed in [31], we use the normalized adjacency matrix $ A_{\textrm{nn}} = D_{\mathscr{o}}^{{-1/2}} A D_{\mathscr{i}}^{{-1/2}} $, which seems to improve the results. Although the reweighting parameters for the degree-discounted bibliometric symmetrization were determined empirically, the method works for all test cases. The transfer operator-based clustering also works well for all considered graph types and leads to marginally better results. The advantage of the dynamical systems approach is that it offers a rigorous interpretation of clusters in terms of coherent sets (or CCA) and that it can be easily extended to dynamic graphs. For more complicated benchmark problems, it might also be beneficial to tune the probability density µ or to use a combination of the operators $ \mathcal{F} $ and $ \mathcal{B} $.

Table 1. Comparison of the degree-discounted bibliometric symmetrization (DDBS), the Hermitian clustering approach (Herm-RW), and algorithm 2.15 for graphs generated using a generalization of the model proposed in [32]. For each method and test case, we compute the average adjusted Rand index (ARI) and the average number of misclassified vertices (NMV) in per cent.

| Test case | DDBS [22] ARI | DDBS [22] NMV | Herm-RW [31] ARI | Herm-RW [31] NMV | Algorithm 2.15 ARI | Algorithm 2.15 NMV |
|---|---|---|---|---|---|---|
| diagonal blocks | 0.954 | 3.938% | 0.059 | 68.262% | 0.993 | 0.558% |
| off-diagonal blocks | 0.953 | 4.341% | 0.927 | 6.681% | 0.993 | 0.629% |
| mixed | 0.958 | 3.742% | 0.325 | 49.933% | 0.994 | 0.531% |

4.2. 32-bit adder

As a last example, we consider a 32-bit adder. The data set can be found on the Matrix Market website. Since the matrix contains negative values, we shift the entries so that all weights are positive. The circuit has a fairly simple structure and the spectral clustering algorithm detects 32 blocks of nearly the same size. Reordering the adjacency matrix based on the cluster numbers, we obtain a block matrix with very few nonzero entries in the off-diagonal blocks as shown in figure 8. The bandwidth of the matrix could be minimized by applying generalizations of the Cuthill–McKee algorithm to the block structure [43].


Figure 8. (a) Structure of the original adjacency matrix of the 32-bit adder. (b) Dominant eigenvalues of the forward–backward operator. There is a spectral gap between the 32nd and 33rd eigenvalue. We thus set k = 32. (c) Permuted adjacency matrix, where we again reorder the vertices according to the identified clusters.


5. Conclusion

We have defined different types of transfer operators associated with random walks on graphs and shown that the eigenfunctions contain information about metastable or coherent sets. The main advantages of the dynamical systems perspective are that the derived algorithms can be interpreted as generalizations of conventional spectral clustering methods for undirected graphs and that they can also be easily extended to time-evolving graphs as shown in [23]. Future work includes a thorough analysis of the proposed coherent set-based spectral clustering approach for dynamic graphs. Of particular interest are graphs comprising clusters whose structure changes in time, e.g. splitting and merging as well as shrinking and growing clusters. First results presented in [23] illustrate that it is possible to detect time-dependent clusters in dynamic graphs that are based on discretized dynamical systems with coherent sets. However, finding such clusters in real-world graphs, which may have a more intricate and less homogeneous coupling structure, might require tuning and improving the algorithms. Also the properties of transfer operators associated with time-evolving graphs need to be studied in more detail. An open question is whether it is possible to derive related optimization problems that take each layer of the network into account. Furthermore, it would be interesting to interpret the optimization problems derived above as relaxed versions of discrete counterparts. How is this related to normalized cuts and can these problems be efficiently solved using quantum annealers?

Another research problem is the derivation of suitable basis functions for the Galerkin approximation that would allow us to coarse-grain the system without losing essential information about the metastable or coherent sets. While it is theoretically possible to reduce the size of the eigenvalue problem significantly, computing the projected operator and solving the resulting less sparse eigenvalue problem numerically might not be advantageous in practice.

For the benchmark problems in section 4, we simply computed the forward–backward operator with respect to the uniform density. However, it might be beneficial to choose different densities µ to counterbalance the inhomogeneous degree distributions, similar to the degree-discounted symmetrization proposed in [22]. Using standard k-means, we assign each vertex to a cluster. However, some vertices might not belong to any cluster or to multiple clusters at the same time. Soft clustering methods might be able to detect overlapping communities or transition regions.

Acknowledgments

M T was supported by the EPSRC Centre for Doctoral Training in Mathematical Modelling, Analysis and Computation (MAC-MIGS) funded by the UK Engineering and Physical Sciences Research Council (Grant EP/S023291/1), Heriot–Watt University and the University of Edinburgh. We would also like to thank Nataša Djurdjevac Conrad for many fruitful discussions and helpful suggestions.

Data availability statement

The data that support the findings of this study are openly available at the following URL/DOI: https://github.com/sklus/d3s [44].

Appendix: Properties of transfer operators

We are mainly interested in spectral properties and how they are related to inherent time scales and metastable or coherent sets.

Proposition A.1. The operators defined above have the following properties:

  • (i)  
    It holds that $ \left\langle \mathcal{T} u,\, f\, \right\rangle_{\nu} = \left\langle u,\, \mathcal{K} f\, \right\rangle_{\mu} $.
  • (ii)  
    The operator $ \mathcal{F} $ is self-adjoint and positive semi-definite w.r.t. $ \left\langle \cdot,\, \cdot \right\rangle_{\mu} $.
  • (iii)  
    The operator $ \mathcal{B} $ is self-adjoint and positive semi-definite w.r.t. $ \left\langle \cdot,\, \cdot \right\rangle_{\nu} $.
  • (iv)  
    $ \mathcal{F} \unicode{x1D7D9} = \unicode{x1D7D9} $ and $ \mathcal{B} \unicode{x1D7D9} = \unicode{x1D7D9} $, i.e. F and B are row-stochastic.
  • (v)  
    The eigenvalues of $ \mathcal{F} $ and $ \mathcal{B} $ are real-valued and contained in the interval $ [0, 1] $.

Proof. Note that the µ-weighted inner product can be written as $ \left\langle \,f,\, g \right\rangle_{\mu} = \mathbf{f}^\top D_{\mu} \hspace{0.1em} \mathbf{g} $.

  • (i)  
    Then

    $$ \left\langle \mathcal{T} u,\, f\, \right\rangle_{\nu} = (T \boldsymbol{u})^\top D_{\nu} \hspace{0.1em} \boldsymbol{f} = \boldsymbol{u}^\top D_{\mu} \hspace{0.1em} S \hspace{0.1em} D_{\nu}^{-1} D_{\nu} \hspace{0.1em} \boldsymbol{f} = \boldsymbol{u}^\top D_{\mu} \hspace{0.1em} S \hspace{0.1em} \boldsymbol{f} = \left\langle u,\, \mathcal{K} f\, \right\rangle_{\mu}. $$
  • (ii)  
    Additionally, we have

    $$ \left\langle \mathcal{F} u,\, f\, \right\rangle_{\mu} = (F \boldsymbol{u})^\top D_{\mu} \hspace{0.1em} \boldsymbol{f} = \boldsymbol{u}^\top D_{\mu} \hspace{0.1em} S \hspace{0.1em} D_{\nu}^{-1} S^\top D_{\mu} \hspace{0.1em} \boldsymbol{f} = \left\langle u,\, \mathcal{F} f\, \right\rangle_{\mu}. $$

    Furthermore, $ \left\langle \mathcal{F} u,\, u \right\rangle_{\mu} = \big\| D_{\nu}^{{-1/2}} S^\top D_{\mu} u \hspace{0.1em} \big\|^2 \unicode{x2A7E} 0 $.
  • (iii)  
    Analogous to (ii).
  • (iv)  
    First, $ \mathcal{T} \unicode{x1D7D9} = D_{\nu}^{-1} \hspace{0.1em} S^\top D_{\mu} \hspace{0.1em} \unicode{x1D7D9} = D_{\nu}^{-1} \hspace{0.1em} S^\top \boldsymbol{\mu} = D_{\nu}^{-1} \hspace{0.1em} \boldsymbol{\nu} = \unicode{x1D7D9} $. Also, $ \mathcal{K} \unicode{x1D7D9} = S \hspace{0.1em} \unicode{x1D7D9} = \unicode{x1D7D9} $ since S is row-stochastic. Thus, this also holds for compositions of these operators.
  • (v)  
    Using (ii) and (iii), the eigenvalues are real-valued and non-negative. Furthermore, since F and B are stochastic matrices as shown in (iv), the spectral radius is 1.

 □

Corollary A.2. If µ is the uniform distribution, then F is doubly stochastic. Analogously, if ν is the uniform distribution, then B is doubly stochastic. Since $ \mu = S^{\hspace{0.1em}-\!\top} \nu $, it follows that B is doubly stochastic if $ X \sim \frac{1}{n} S^{\hspace{0.1em}-\!\top} \unicode{x1D7D9} $.

Proof. By proposition A.1, F and B are row-stochastic. If µ is the uniform distribution, then F is symmetric. Similarly, B is symmetric if ν is the uniform distribution. □

All the transfer operators introduced above can also be represented as compositions of covariance and cross-covariance operators.

Proposition A.3. Let $ \mathcal{C}_{yx} = \mathcal{C}_{xy}^* $. Then it holds that:

  • (i)  
    $ \mathcal{K} = \mathcal{C}_{xx}^{-1} \mathcal{C}_{xy} $,
  • (ii)  
    $ \mathcal{P} = \mathcal{C}_{xx}^{-1} \mathcal{C}_{yx} $, provided that µ is the uniform distribution,
  • (iii)  
    $ \mathcal{T} = \mathcal{C}_{yy}^{-1} \mathcal{C}_{yx} $,
  • (iv)  
    $ \mathcal{F} = \mathcal{C}_{xx}^{-1} \mathcal{C}_{xy} \hspace{0.1em} \mathcal{C}_{yy}^{-1} \mathcal{C}_{yx} $,
  • (v)  
    $ \mathcal{B} = \mathcal{C}_{yy}^{-1} \mathcal{C}_{yx} \hspace{0.1em} \mathcal{C}_{xx}^{-1} \mathcal{C}_{xy} $.

Proof. We can either use properties of the operators or their matrix representations:

  • (i)  
    This follows from $ \left\langle \,f,\, \mathcal{C}_{xy} \hspace{0.1em} g \right\rangle = \left\langle \,f,\, \mathcal{K} g \right\rangle_{\mu} = \left\langle \,f,\, \mathcal{C}_{xx} \hspace{0.1em} \mathcal{K} g \right\rangle $ or $ C_{xx}^{-1} C_{xy} = S = K $.
  • (ii)  
    Using the duality between $ \mathcal{P} $ and $ \mathcal{K} $, we write

    $$ \mathcal{P} = \mathcal{K}^* = \big(\mathcal{C}_{xx}^{-1} \mathcal{C}_{xy}\big)^* = \mathcal{C}_{yx} \hspace{0.1em} \mathcal{C}_{xx}^{-1} = \mathcal{C}_{xx}^{-1} \hspace{0.1em} \mathcal{C}_{yx}, $$
    where $ \mathcal{C}_{xx}^{-1} $ and $ \mathcal{C}_{yx} $ commute if µ is the uniform distribution. Alternatively, this can be seen as follows: $ C_{xx}^{-1} C_{yx} = D_{\mu}^{-1} S^\top D_{\mu} = n \hspace{0.1em} I \hspace{0.1em} S^\top \frac{1}{n} I = S^\top = P $.
  • (iii)  
    By proposition A.1, we have

    $$ \left\langle \mathcal{T} u,\, f\, \right\rangle_{\nu} = \left\langle u,\, \mathcal{K} f\, \right\rangle_{\mu} = \left\langle u,\, \mathcal{C}_{xy} f\, \right\rangle = \left\langle \mathcal{C}_{yx} \hspace{0.1em} u,\, f\, \right\rangle = \left\langle \mathcal{C}_{yy}^{-1} \mathcal{C}_{yx} \hspace{0.1em} u,\, f\, \right\rangle_{\nu}, \qquad \text{i.e.} \quad \mathcal{T} = \mathcal{C}_{yy}^{-1} \mathcal{C}_{yx}. $$
    Similarly, $ C_{yy}^{-1} C_{yx} = D_{\nu}^{-1} S^\top D_{\mu} = T $.
  • (iv) and (v)  
    This follows immediately from the definitions of $ \mathcal{F} $ and $ \mathcal{B} $.

 □

Footnotes

  • 3  Similar visualization techniques for undirected graphs using spectral properties can be found in [36].

  • 4  In order to generate adjacency matrices with off-diagonal blocks, we permute the block structure manually since the C++ code does not support this feature.
