As mentioned earlier, when the condition number of \(\mathbf {L_{S_u}^+L_{G_u}}\) is small, the condition number of \(\mathbf {L_{S}^+L_{G}}\) will also be small, which indicates that graph S is a good spectral sparsifier of graph G. To this end, we exploit the following spectral perturbation analysis framework for computing the spectral sensitivity of each off-subgraph edge. Consider the generalized eigenvalue problem
\[ \mathbf {L_{G_u}}\mathbf {v_i}=\lambda _i \mathbf {L_{S_u}}\mathbf {v_i}, \quad i=1,\ldots ,n, \tag{27} \]
and let matrix \(\mathbf {V=[v_1, \ldots , v_n]}\). Then \(\mathbf {v_i}\) and \(\lambda _i\) can be constructed to satisfy the following orthogonality requirement:
\[ \mathbf {v_i}^\top \mathbf {L_{S_u}}\mathbf {v_j}=\begin{cases}1, & i=j,\\ 0, & i\ne j.\end{cases} \tag{28} \]
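As a quick illustration of this orthogonality requirement, the following sketch (illustrative code with toy data, not part of the original work) builds small undirected Laplacians for a graph G and a spanning-tree subgraph S, grounds one node so that the B matrix becomes positive definite, and checks that `scipy.linalg.eigh` returns generalized eigenvectors satisfying \(\mathbf {V}^\top \mathbf {L_{S_u}}\mathbf {V}=\mathbf {I}\):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(n, edges):
    """Undirected weighted Laplacian from a list of (p, q, w) edges."""
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, p] += w; L[q, q] += w
        L[p, q] -= w; L[q, p] -= w
    return L

n = 5
# Toy example: S is a spanning tree of G (hypothetical data for illustration).
Lg = laplacian(n, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (0, 2, 1), (1, 4, 1)])
Ls = laplacian(n, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1)])

# Ground node 0 (drop its row and column): graph Laplacians are singular, and
# eigh() requires a positive definite B matrix in the generalized problem.
A, B = Lg[1:, 1:], Ls[1:, 1:]
lam, V = eigh(A, B)   # solves A v_i = lambda_i B v_i

# eigh() normalizes the eigenvectors so that V^T B V = I, i.e., the
# orthogonality requirement of Equation (28).
print(np.allclose(V.T @ B @ V, np.eye(n - 1)))   # True
```

Since S is a subgraph of G here, every generalized eigenvalue also satisfies \(\lambda _i\ge 1\).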
Consider the following first-order generalized eigenvalue perturbation problem:
\[ \mathbf {L_{G_u}}(\mathbf {v_i}+\delta \mathbf {v_i})=(\lambda _i+\delta \lambda _i)(\mathbf {L_{S_u}}+\delta \mathbf {L_{S_u}})(\mathbf {v_i}+\delta \mathbf {v_i}), \tag{29} \]
where a small perturbation \(\delta \mathbf {L_{S_u}}\) in \(\mathbf {L_{S_u}}\) is introduced, leading to the perturbed generalized eigenvalues and eigenvectors \(\lambda _i+\delta \lambda _i\) and \(\mathbf {v_i}+\delta \mathbf {v_i}\). Keeping only the first-order terms, Equation (29) becomes
\[ \mathbf {L_{G_u}}\,\delta \mathbf {v_i}=\lambda _i\,\delta \mathbf {L_{S_u}}\mathbf {v_i}+\lambda _i\,\mathbf {L_{S_u}}\,\delta \mathbf {v_i}+\delta \lambda _i\,\mathbf {L_{S_u}}\mathbf {v_i}. \tag{30} \]
Let \(\delta \mathbf {v_i}=\sum _j \psi _{i,j}\mathbf {v_j}\); then Equation (30) can be expressed as
\[ \sum _j \psi _{i,j}\,\lambda _j\,\mathbf {L_{S_u}}\mathbf {v_j}=\lambda _i\,\delta \mathbf {L_{S_u}}\mathbf {v_i}+\lambda _i\sum _j \psi _{i,j}\,\mathbf {L_{S_u}}\mathbf {v_j}+\delta \lambda _i\,\mathbf {L_{S_u}}\mathbf {v_i}. \tag{31} \]
Based on the orthogonality properties in Equation (28), multiplying \(\mathbf {v_i}^\top\) on both sides of Equation (31) results in
\[ \psi _{i,i}\,\lambda _i=\lambda _i\,\mathbf {v_i}^\top \delta \mathbf {L_{S_u}}\mathbf {v_i}+\lambda _i\,\psi _{i,i}+\delta \lambda _i, \tag{32} \]
which further leads to
\[ \delta \lambda _i=-\lambda _i\,\mathbf {v_i}^\top \delta \mathbf {L_{S_u}}\mathbf {v_i}. \tag{33} \]
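The first-order result \(\delta \lambda _i=-\lambda _i\,\mathbf {v_i}^\top \delta \mathbf {L_{S_u}}\mathbf {v_i}\) can be checked numerically. The sketch below (illustrative stand-ins, not the original implementation) uses random symmetric positive definite matrices in place of the grounded Laplacians, since the identity only requires symmetric A and positive definite B, and compares the exact eigenvalue shifts against the first-order prediction:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6

# Random SPD stand-ins for the (grounded) matrices L_{G_u} and L_{S_u}.
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
E = rng.standard_normal((n, n)); A = B + E @ E.T

lam, V = eigh(A, B)          # A v_i = lam_i B v_i, with V^T B V = I

# Small symmetric perturbation dB of the B matrix (i.e., of L_{S_u}).
u = rng.standard_normal(n)
dB = 1e-6 * np.outer(u, u)

lam_pert, _ = eigh(A, B + dB)
dlam_exact = lam_pert - lam
dlam_first = -lam * np.diag(V.T @ dB @ V)   # -lam_i v_i^T dB v_i

# The difference is second order in ||dB||, hence far smaller than dlam itself.
print(np.max(np.abs(dlam_exact - dlam_first)))
```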
Then the task of spectral sparsification of general (un)directed graphs requires recovering as few off-subgraph edges as possible into the initial directed subgraph S, such that the largest eigenvalues, and thus the condition number, of \(\mathbf {L^+_{S_u}} \mathbf {L_{G_u}}\) can be dramatically reduced. Expanding \(\mathbf {\delta L_{S_u}}\) with only the first-order terms gives
\[ \delta \mathbf {L_{S_u}}=\frac{\delta \mathbf {L_{S}}+\delta \mathbf {L_{S}}^\top }{2}, \tag{34} \]
where \(\mathbf {\delta L_S}= {{w_G(p,q)}\mathbf {e_{p,q}}}\mathbf {e_p}^\top\) for \({(p,q)\in E_G\setminus E_S}\), \(\mathbf {e_p}\in \mathbb {R}^n\) denotes the vector with only the p-th element being 1 and all other elements being 0, and \(\mathbf {e_{p,q}}=\mathbf {e_p}-\mathbf {e_q}\). The spectral sensitivity for each off-subgraph edge \((p,q)\) can then be expressed as
\[ \zeta _{p,q}=\lambda _n\,\mathbf {v_n}^\top \delta \mathbf {L_{S_u}}\mathbf {v_n}=\lambda _n\,w_G(p,q)\left(\mathbf {v_n}^\top \mathbf {e_{p,q}}\right)\left(\mathbf {e_p}^\top \mathbf {v_n}\right). \tag{35} \]
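Assuming the sensitivity takes the form \(\lambda _n\,w_G(p,q)(\mathbf {v_n}^\top \mathbf {e_{p,q}})(\mathbf {e_p}^\top \mathbf {v_n})\), ranking off-subgraph edges might be sketched as follows (toy data for illustration; node 0 is grounded so that \(\mathbf {L_{S_u}}\) is nonsingular, and the grounded eigenvector is padded back with a zero entry):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, p] += w; L[q, q] += w
        L[p, q] -= w; L[q, p] -= w
    return L

n = 6
subgraph = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 5, 1)]  # path S
off_edges = [(1, 3, 1), (1, 4, 1), (2, 5, 1)]                       # E_G \ E_S
Lg = laplacian(n, subgraph + off_edges)
Ls = laplacian(n, subgraph)

# Dominant generalized eigenpair of the grounded problem; pad the grounded
# node back with a zero entry so that e_p indexing matches the full graph.
lam, V = eigh(Lg[1:, 1:], Ls[1:, 1:])
lam_n, v_n = lam[-1], np.concatenate([[0.0], V[:, -1]])

def sensitivity(p, q, w):
    # zeta_{p,q} = lam_n * w_G(p,q) * (v_n^T e_{p,q}) * (e_p^T v_n)
    return lam_n * w * (v_n[p] - v_n[q]) * v_n[p]

ranked = sorted(off_edges, key=lambda e: -abs(sensitivity(*e)))
print(ranked)   # off-subgraph edges, most spectrally critical first
```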
It is obvious that Equation (35) can be leveraged to rank the spectral importance of each off-subgraph edge. Consequently, spectral sparsification of general graphs can be achieved by recovering only a few dissimilar off-subgraph edges with large spectral sensitivity values. In this work, the following method based on t-step power iterations is proposed for efficiently computing the dominant generalized eigenvectors:
\[ \mathbf {h_t}=\left(\mathbf {L_{S_u}^+ L_{G_u}}\right)^t\mathbf {h_0}, \tag{36} \]
where \(\mathbf { h_0}\) denotes a random vector. When the number of power iterations is small (e.g., \(t\le 3\)), \(\mathbf {h_t}\) will be a linear combination of the first few dominant generalized eigenvectors corresponding to the largest few eigenvalues. Then the spectral sensitivity for the off-subgraph edge \((p,q)\) can be approximately computed as
\[ \zeta _{p,q}\approx \frac{w_G(p,q)\left(\mathbf {h_t}^\top \mathbf {e_{p,q}}\right)\left(\mathbf {e_p}^\top \mathbf {h_t}\right)}{\mathbf {h_t}^\top \mathbf {L_{S_u}}\mathbf {h_t}}. \tag{37} \]
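A sketch of the t-step generalized power iteration and the approximate edge sensitivity follows (illustrative code: `numpy.linalg.lstsq` stands in for a Laplacian solver such as LAMG, and normalizing by \(\mathbf {h_t}^\top \mathbf {L_{S_u}}\mathbf {h_t}\) is one plausible choice, since only the relative ranking of the off-subgraph edges matters):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, p] += w; L[q, q] += w
        L[p, q] -= w; L[q, p] -= w
    return L

n = 6
subgraph = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 5, 1)]
off_edges = [(1, 3, 1), (1, 4, 1), (2, 5, 1)]
Lg, Ls = laplacian(n, subgraph + off_edges), laplacian(n, subgraph)

def power_iterate(Lg, Ls, t=3, seed=0):
    """h_t = (L_{S_u}^+ L_{G_u})^t h_0 via t Laplacian solves."""
    h = np.random.default_rng(seed).standard_normal(Lg.shape[0])
    for _ in range(t):
        h -= h.mean()                                   # stay orthogonal to the all-ones nullspace
        h = np.linalg.lstsq(Ls, Lg @ h, rcond=None)[0]  # min-norm solve of L_{S_u} x = L_{G_u} h
        h /= np.linalg.norm(h)                          # keep the iterate well scaled
    return h

h = power_iterate(Lg, Ls, t=3)

def approx_sensitivity(p, q, w):
    # zeta_{p,q} ~ w_G(p,q) (h_t^T e_{p,q}) (e_p^T h_t) / (h_t^T L_{S_u} h_t)
    return w * (h[p] - h[q]) * h[p] / (h @ Ls @ h)

print(sorted(off_edges, key=lambda e: -abs(approx_sensitivity(*e))))
```

The rescaling inside the loop does not change the final ranking, since \(\mathbf {h_t}\) enters the sensitivity estimate only up to a common scale factor.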
The computation of \(\mathbf {h_t}\) through power iterations requires solving the linear system of equations \(\mathbf {L_{S_u} x=b}\) t times. Note that only \(\mathbf {L_{S_u}}\) needs to be explicitly computed for the generalized power iterations. The Lean Algebraic Multigrid (LAMG) [30] solver is leveraged for computing \(\mathbf {h_t}\); it can handle undirected graphs with negative edge weights and achieves an empirical \(O(|E_{S_u}|)\) complexity for solving the Laplacian matrices \(\mathbf {L_{S_u}}\).
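As a rough illustration of the repeated solves, the sketch below factors a toy sparse \(\mathbf {L_{S_u}}\) once and reuses the factorization for all t right-hand sides (here `scipy.sparse.linalg.splu` is a direct-solver stand-in for LAMG, which is a multigrid solver and scales much better on large graphs):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy sparse stand-in for L_{S_u}: Laplacian of an unweighted path graph.
n = 100
i = np.arange(n - 1)
A = sp.coo_matrix((np.ones(n - 1), (i, i + 1)), shape=(n, n))
A = A + A.T
Ls = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# Ground one node so L_{S_u} x = b is nonsingular; factor once and reuse the
# factorization for every solve arising in the power iteration.
Lsg = Ls.tocsc()[1:, 1:]
lu = splu(Lsg)

rng = np.random.default_rng(0)
b_list = [rng.standard_normal(n - 1) for _ in range(3)]   # t = 3 right-hand sides
xs = [lu.solve(b) for b in b_list]

print(all(np.allclose(Lsg @ x, b) for x, b in zip(xs, b_list)))   # True
```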