
Generalized persistence algorithm for decomposing multiparameter persistence modules

Journal of Applied and Computational Topology

Abstract

The classical persistence algorithm computes the unique decomposition of a persistence module implicitly given by an input simplicial filtration. Based on matrix reduction, this algorithm is a cornerstone of the emergent area of topological data analysis. Its input is a simplicial filtration defined over the integers \({\mathbb {Z}}\), giving rise to a 1-parameter persistence module. It has been recognized that the multiparameter version of persistence modules, given by simplicial filtrations over d-dimensional integer grids \({\mathbb {Z}}^d\), is equally or perhaps more important in data science applications. However, in the multiparameter setting, one of the main challenges is that topological summaries based on algebraic structure, such as decompositions and bottleneck distances, cannot be computed as efficiently as in the 1-parameter case because there is no known extension of the persistence algorithm to multiparameter persistence modules. We present an efficient algorithm to compute the unique decomposition of a finitely presented persistence module M defined over the multiparameter grid \({\mathbb {Z}}^d\). The algorithm assumes that the module is presented with a set of N generators and relations that are distinctly graded. Based on a generalized matrix reduction technique, it runs in \(O(N^{2\omega +1})\) time, where \(\omega <2.373\) is the exponent of matrix multiplication. This is much better than the well-known algorithm called Meataxe, which runs in \({\tilde{O}}(N^{6(d+1)})\) time on such an input. In practice, persistence modules are usually induced by simplicial filtrations. With such an input consisting of n simplices, our algorithm runs in \(O(n^{(d-1)(2\omega + 1)})\) time for \(d\ge 2\). For the special case of zero-dimensional homology, it runs in \(O(n^{2\omega +1})\) time.


Notes

  1. Here the two sides are equal as graded \({\mathbb {k}}\)-vector spaces.

  2. Recall that an element \(m\in M\) is homogeneous with grade \(\text {gr}(m)=\mathbf{u }\) for some \(\mathbf{u }\in {\mathbb {Z}}^d\) if \(m\in M_\mathbf{u }\).

  3. E.g., \(\ker \partial _p\) denotes the inclusion of \(Z_p\) into \(C_p\).

References

  • Asashiba, H., Buchet, M., Escolar, E.G., Nakashima, K., Yoshiwaki, M.: On interval decomposability of 2D persistence modules (2018). arXiv:1812.05261

  • Asashiba, H., Escolar, E.G., Nakashima, K., Yoshiwaki, M.: On approximation of 2D persistence modules by interval-decomposables (2021)

  • Atiyah, M.: On the Krull–Schmidt theorem with application to sheaves. Bull. Soc. Math. Fr. 84, 307–317 (1956)

  • Bjerkevik, H.B., Botnan, M.B.: Computational complexity of the interleaving distance (2017). arXiv:1712.04281

  • Bjerkevik, H.B., Botnan, M.B., Kerber, M.: Computing the interleaving distance is NP-hard. Found. Comput. Math. 20, 1237–1271 (2020)

  • Botnan, M.B., Lesnick, M.: Algebraic stability of zigzag persistence modules. Algebraic Geom. Topol. 18, 3133–3204 (2018)

  • Bauer, U., Kerber, M., Reininghaus, J.: Distributed computation of persistent homology. In: Algorithm Engineering and Experimentation, pp. 31–38 (2014)

  • Bauer, U., Kerber, M., Reininghaus, J., Wagner, H.: PHAT—persistent homology algorithms toolbox. J. Symb. Comput. 78, 76–90 (2017)

  • Bjerkevik, H.B.: Stability of higher-dimensional interval decomposable persistence modules (2016). arXiv:1609.02086

  • Botnan, M.B., Lebovici, V., Oudot, S.Y.: On rectangle-decomposable 2-parameter persistence modules. In: 36th International Symposium on Computational Geometry, SoCG 2020, Volume 164 of LIPIcs, pp. 22:1–22:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)

  • Botnan, M.B., Oppermann, S., Oudot, S.: Signed barcodes for multi-parameter persistence via rank decompositions and rank-exact resolutions (2021)

  • Bruns, W., Herzog, H.J.: Cohen–Macaulay Rings. Cambridge University Press, Cambridge (1998)

  • Buchet, M., Escolar, E.G.: Every 1D persistence module is a restriction of some indecomposable 2D persistence module (2019). arXiv:1902.07405

  • Bunch, J.R., Hopcroft, J.E.: Triangular factorization and inversion by fast matrix multiplication. Math. Comput. 28, 231–236 (1974)

  • Cai, C., Kim, W., Mémoli, F., Wang, Y.: Elder-rule-staircodes for augmented metric spaces. In: 36th International Symposium on Computational Geometry (SoCG 2020). Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)

  • Carlsson, G.: Topology and data. Bull. Am. Math. Soc. 46(2), 255–308 (2009)

  • Carlsson, G., Mémoli, F.: Persistent clustering and a theorem of J. Kleinberg (2008). arXiv:0808.2241

  • Carlsson, G., Zomorodian, A.: The theory of multidimensional persistence. Discrete Comput. Geom. 42(1), 71–93 (2009)

  • Carlsson, G., Singh, G., Zomorodian, A.: Computing multidimensional persistence. In: International Symposium on Algorithms and Computation, pp. 730–739. Springer (2009)

  • Cerri, A., Fabio, B.D., Ferri, M., Frosini, P., Landi, C.: Betti numbers in multidimensional persistent homology are stable functions. Math. Methods Appl. Sci. 36(12), 1543–1557 (2013)

  • Cerri, A., Ethier, M., Frosini, P.: On the geometrical properties of the coherent matching distance in 2D persistent homology (2018). arXiv:1801.06636

  • Cochoy, J., Oudot, S.Y.: Decomposition of exact pfd persistence bimodules. Discrete Comput. Geom. 63(2), 255–293 (2020)

  • Cohen-Steiner, D., Edelsbrunner, H., Morozov, D.: Vines and vineyards by updating persistence in linear time. In: Proc. 22nd Annu. Sympos. Comput. Geom., pp. 119–126 (2006)

  • Cohen-Steiner, D., Edelsbrunner, H., Harer, J.: Stability of persistence diagrams. Discrete Comput. Geom. 37(1), 103–120 (2007)

  • Corbet, R., Kerber, M.: The representation theorem of persistence revisited and generalized. J. Appl. Comput. Topol. 2(1), 1–31 (2018)

  • Cox, D.A., Little, J., O'Shea, D.: Using Algebraic Geometry, vol. 185. Springer, Berlin (2006)

  • Dey, T.K., Xin, C.: Computing bottleneck distance for 2-d interval decomposable modules. In: 34th International Symposium on Computational Geometry, SoCG 2018, June 11–14, 2018, Budapest, Hungary, pp. 32:1–32:15 (2018)

  • Dey, T.K., Xin, C.: Rectangular approximation and stability of 2-parameter persistence modules (2021). arXiv:2108.07429

  • Dey, T.K., Wang, Y.: Computational Topology for Data Analysis. Cambridge University Press, Cambridge (2022)

  • Dey, T.K., Shi, D., Wang, Y.: SimBa: an efficient tool for approximating Rips-filtration persistence via simplicial batch-collapse. In: European Symposium on Algorithms, vol. 35, pp. 1–16 (2016)

  • Dey, T.K., Kim, W., Mémoli, F.: Computing generalized rank invariant for 2-parameter persistence modules via zigzag persistence and its applications (2021). arXiv:2111.15058

  • Edelsbrunner, H., Harer, J.: Computational Topology: An Introduction. Applied Mathematics. American Mathematical Society, Providence (2010)

  • Edelsbrunner, H., Letscher, D., Zomorodian, A.: Topological persistence and simplification. In: Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 454–463. IEEE (2000)

  • Eisenbud, D.: The Geometry of Syzygies: A Second Course in Algebraic Geometry and Commutative Algebra, vol. 229. Springer, Berlin (2005)

  • Hatcher, A.: Algebraic Topology. Cambridge University Press, Cambridge (2000)

  • Hilbert, D.: Ueber die Theorie der algebraischen Formen. Math. Ann. 36(4), 473–534 (1890)

  • Holt, D.F.: The Meataxe as a tool in computational group theory. In: London Mathematical Society Lecture Note Series, pp. 74–81 (1998)

  • Holt, D.F., Rees, S.: Testing modules for irreducibility. J. Aust. Math. Soc. 57(1), 1–16 (1994)

  • Ibarra, O.H., Moran, S., Hui, R.: A generalization of the fast LUP matrix decomposition algorithm and applications. J. Algorithms 3(1), 45–56 (1982)

  • Kerber, M., Lesnick, M., Oudot, S.: Exact computation of the matching distance on 2-parameter persistence modules. In: 35th International Symposium on Computational Geometry (SoCG 2019), Volume 129 of LIPIcs, pp. 46:1–46:15 (2019)

  • Kim, W., Mémoli, F.: Generalized persistence diagrams for persistence modules over posets. J. Appl. Comput. Topol. 5(4), 533–581 (2021)

  • Knudson, K.P.: A refinement of multi-dimensional persistence (2007). arXiv:0706.2608

  • Lesnick, M.: The theory of the interleaving distance on multidimensional persistence modules. Found. Comput. Math. 15(3), 613–650 (2015)

  • Lesnick, M., Wright, M.: Interactive visualization of 2-D persistence modules (2015). arXiv:1512.00180

  • Lesnick, M., Wright, M.: Computing minimal presentations and Betti numbers of 2-parameter persistent homology (2019). arXiv:1902.05708

  • Liu, S., Maljovec, D., Wang, B., Bremer, P.T., Pascucci, V.: Visualizing high-dimensional data: advances in the past decade. IEEE Trans. Vis. Comput. Graph. 23(3), 1249–1268 (2017)

  • Miller, E., Sturmfels, B.: Combinatorial Commutative Algebra (2004)

  • Patel, A.: Generalized persistence diagrams (2016). arXiv:1601.03107

  • Römer, T.: On minimal graded free resolutions. Ill. J. Math. 45(2), 1361–1376 (2001)

  • Sheehy, D.R.: Linear-size approximations to the Vietoris–Rips filtration. Discrete Comput. Geom. 49(4), 778–796 (2013)

  • Skryzalin, J.: Numeric invariants from multidimensional persistence. Ph.D. thesis, Stanford University (2016)

  • Weibel, C.A.: An Introduction to Homological Algebra, vol. 38. Cambridge University Press, Cambridge (1995)

  • Oudot, S.Y.: Persistence Theory: From Quiver Representations to Data Analysis, vol. 209. American Mathematical Society, Providence (2015)

  • Zomorodian, A., Carlsson, G.: Computing persistent homology. Discrete Comput. Geom. 33(2), 249–274 (2005)


Acknowledgements

This research is supported partially by the NSF grants CCF-1740761, CCF-2049010 and DMS-1547357. We acknowledge the influence of the BIRS Oaxaca workshop on Multiparameter Persistence which partially seeded this work.

Author information

Correspondence to Tamal K. Dey.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest, as required by JACT.


Appendices

Free resolution and graded Betti numbers

Here we introduce free resolutions and graded Betti numbers of graded modules. Based on these tools, we give a proof of our Theorem 1.

Definition 13

For a graded module M, a free resolution \({\mathcal {F}}\rightarrow M\) is an exact sequence:

$$\begin{aligned} \cdots \longrightarrow F^2 \xrightarrow {\;f^2\;} F^1 \xrightarrow {\;f^1\;} F^0 \xrightarrow {\;f^0\;} M \longrightarrow 0 \end{aligned}$$

where each \(F^i\) is a free graded R-module.

We say two free resolutions \({\mathcal {F}}, {\mathcal {G}}\) of M are isomorphic, denoted \({\mathcal {F}}\simeq {\mathcal {G}}\), if there exists a collection of isomorphisms \(\{h^i:F^i\rightarrow G^i\}_{i=0,1,\dots }\) that commute with the \(f^i\)'s and \(g^i\)'s. That is, for all \(i=0,1,\dots \), \(g^i \circ h^i = h^{i-1} \circ f^{i}\), where \(h^{-1}\) is the identity map on M. See the following commutative diagram as an illustration.

[Commutative diagram: the vertical isomorphisms \(h^i\) connect the resolutions \({\mathcal {F}}\) and \({\mathcal {G}}\) so that every square commutes.]

For two free resolutions \({\mathcal {F}}\rightarrow M\) and \({\mathcal {G}}\rightarrow N\), by taking direct sums of free modules \(F^i\oplus G^i\) and morphisms \(f^i\oplus g^i\), we get a free resolution of \(M\oplus N\), denoted as \({\mathcal {F}}\oplus {\mathcal {G}}\).

Note that a presentation of M can be viewed as the tail part

$$\begin{aligned} F^1 \xrightarrow {\;f^1\;} F^0 \xrightarrow {\;f^0\;} M \longrightarrow 0 \end{aligned}$$

of a free resolution \({\mathcal {F}}\rightarrow M\). Free resolutions and presentations are not unique. But there exists a unique minimal free resolution in the following sense:

Fact 7

For a graded module M, there exists a unique free resolution such that \(\forall i \ge 0, \text {im\,}f^{i+1}\subseteq {\mathfrak {m}}F^{i}\), where \({\mathfrak {m}}=(x_1,\ldots , x_d)\) is the unique maximal graded ideal of the graded ring \(R={\mathbb {k}}[x_1,\ldots , x_d]\).

Definition 14

In a minimal free resolution \({\mathcal {F}}\rightarrow M\), the tail part

$$\begin{aligned} F^1 \xrightarrow {\;f^1\;} F^0 \xrightarrow {\;f^0\;} M \longrightarrow 0 \end{aligned}$$

is called the minimal presentation of M and \(f^1\) is called the minimal presentation map of M.

Here we briefly state the construction of the unique free resolution without formal proof. More details can be found in Bruns and Herzog (1998) and Römer (2001):

Construction A1

Choose a minimal set of homogeneous generators \(g_1, \ldots , g_n\) of M. Let \(F^0=\bigoplus _{i=1}^{n} R_{\rightarrow \text {gr}(g_i)}\) with standard basis \(e_1^{\text {gr}(g_1)}, \ldots , e_n^{\text {gr}(g_n)}\) of \(F^0\). The homogeneous R-map \(f^0: F^0 \rightarrow M\) is determined by \(f^0(e_i)=g_i\). Now the 1st syzygy module of M,

$$\begin{aligned} S_1 :=\ker f^0 \subseteq F^0, \end{aligned}$$

is again a finitely generated graded R-module. We choose a minimal set of homogeneous generators \(s_1, \ldots , s_m\) of \(S_1\) and let \(F^1=\bigoplus _{j=1}^{m} R_{\rightarrow \text {gr}(s_j)}\) with standard basis \(e_1'^{\text {gr}(s_1)}, \ldots , e_m'^{\text {gr}(s_m)}\) of \(F^1\). The homogeneous R-map \(f^1: F^1 \rightarrow F^0\) is determined by \(f^1(e_j')=s_j\). By repeating this procedure for \(S_2=\ker f^1\) and moving backward further, one gets a graded free resolution of M.

Fact 8

Any free resolution of M can be obtained (up to isomorphism) from the minimal free resolution by summing it with free resolutions of trivial modules, each with the following form

$$\begin{aligned} 0 \longrightarrow R_{\rightarrow \mathbf{u }} \xrightarrow {\;\mathbb {1}\;} R_{\rightarrow \mathbf{u }} \longrightarrow 0 \end{aligned}$$

Note that the only nontrivial morphism

$$\begin{aligned} R_{\rightarrow \mathbf{u }} \xrightarrow {\;\mathbb {1}\;} R_{\rightarrow \mathbf{u }} \end{aligned}$$

is the identity map \(\mathbb {1}\).

From the above construction, it is not hard to see that this unique free resolution is minimal in the sense that each free module \(F^j\) has a basis of the smallest possible size.

For this unique free resolution, for each j, we can write \(F^j\simeq \bigoplus _{\mathbf{u }\in {\mathbb {Z}}^d} \bigoplus ^{\beta ^{M}_{j,\mathbf{u }}} R_{\rightarrow \mathbf{u }}\) (the notation \(\bigoplus ^{\beta ^{M}_{j,\mathbf{u }}} R_{\rightarrow \mathbf{u }}\) means the direct sum of \({\beta ^{M}_{j,\mathbf{u }}}\) copies of \(R_{\rightarrow \mathbf{u }}\)). The numbers \(\{\beta ^{M}_{j,\mathbf{u }}\mid j\in {\mathbb {N}}, \mathbf{u }\in {\mathbb {Z}}^d\}\) are called the graded Betti numbers of M. When M is clear from the context, we may omit the upper index. For example, the graded Betti numbers of the persistence module of our working Example 1 are listed in Table 2.

Table 2 All the nonzero graded Betti numbers \(\beta _{i,\mathbf{u }}\) are listed in the table

Note that the graded Betti numbers of a module are uniquely determined by the unique minimal free resolution. On the other hand, if a free resolution \({\mathcal {G}}\rightarrow M\) with \(G^j\simeq \bigoplus _{\mathbf{u }\in {\mathbb {Z}}^d} \bigoplus ^{\gamma ^{M}_{j,\mathbf{u }}} R_{\rightarrow \mathbf{u }}\) satisfies \({\gamma ^{M}_{j,\mathbf{u }}} = {\beta ^{M}_{j,\mathbf{u }}}\) everywhere, then \({\mathcal {G}}\simeq {\mathcal {F}}\) is also a minimal free resolution of M.

Fact 9

\(\beta ^{M\oplus N}_{*,*}=\beta ^{M}_{*,*}+ \beta ^{N}_{*,*}\)
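Fact 9 amounts to simple bookkeeping: if the graded Betti numbers are stored as multiplicities indexed by pairs \((j,\mathbf{u })\), the Betti numbers of a direct sum are obtained by adding multiplicities pointwise. A minimal Python sketch with hypothetical example values (the numbers below are illustrative, not taken from the paper):

```python
from collections import Counter

# Graded Betti numbers stored as {(j, u): multiplicity}, where j is the
# homological degree and u in Z^d is the grade (here d = 2, so u is a pair).
# Hypothetical values for two modules M and N.
betti_M = Counter({(0, (0, 0)): 1, (1, (1, 1)): 1})
betti_N = Counter({(0, (0, 1)): 1, (1, (1, 1)): 2})

# Fact 9: Betti numbers are additive under direct sum.
betti_sum = betti_M + betti_N

assert betti_sum[(1, (1, 1))] == 3   # 1 copy from M plus 2 copies from N
assert betti_sum[(0, (0, 0))] == 1   # grades occurring in only one summand persist
```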

Proposition 6

Given a graded module M with a decomposition \(M\simeq M^1\oplus M^2\), let \({\mathcal {F}}\rightarrow M\) be the minimal resolution of M, and let \({\mathcal {G}}\rightarrow M^1\) and \({\mathcal {H}}\rightarrow M^2\) be the minimal resolutions of \(M^1\) and \(M^2\), respectively. Then \({\mathcal {F}}\simeq {\mathcal {G}}\oplus {\mathcal {H}}\).

Proof

\({\mathcal {G}}\oplus {\mathcal {H}}\rightarrow M\) is a free resolution. We need to show it is a minimal free resolution. By the preceding remark, we just need to show that the graded Betti numbers of \({\mathcal {G}}\oplus {\mathcal {H}}\rightarrow M^1\oplus M^2\) coincide with the graded Betti numbers of \({\mathcal {F}}\rightarrow M\). This holds by Fact 9. \(\square \)

Note that a free resolution is an extension of a free presentation, so the above proposition also applies to free presentations, which immediately yields the following corollary.

Corollary 1

Given a graded module M with a decomposition \(M\simeq M^1\oplus M^2\), let f be its minimal presentation map, and g, h be the minimal presentation maps of \(M^1, M^2\) respectively, then \(f\simeq g\oplus h\).

We also have the following fact relating morphisms:

Fact 10

\(\ker (f^1\oplus f^2)=\ker f^1\oplus \ker f^2\); \(\text {coker}(f^1\oplus f^2)=\text {coker}f^1\oplus \text {coker}f^2\).

Based on the above statements, we can now prove Theorem 1.

Proof (of Theorem 1)

With the obvious correspondence \([f_i]\leftrightarrow [f]_i\), the equivalence (\(2\leftrightarrow 3\)) follows easily from our arguments about matrix diagonalization in the main text.

(\(1\rightarrow 2\)) Given \(H\simeq \bigoplus H^i\), let f be the minimal presentation map of H. For each \(H^i\), there exists a minimal presentation map \(f_i\). By Corollary 1, we have \(f\simeq \bigoplus f_i\).

(\(2\rightarrow 1\)) Given \(f\simeq \bigoplus f_i\): since \(H=\text {coker}f= \text {coker}(\bigoplus f_i)=\bigoplus \text {coker}f_i\), setting \(H^i=\text {coker}f_i\) yields the decomposition \(H\simeq \bigoplus H^i\).

It follows that the above two constructions together give the desired 1-1 correspondence. \(\square \)

Proof (of Proposition 1)

We start with (2). Consider the total decomposition \(f\simeq \bigoplus f^i\). By Remark 2, any presentation is isomorphic to a direct sum of the minimal presentation and some trivial presentations. Let \(f\simeq g\oplus h\) with g being the minimal presentation, so \(\text {coker}h=0\). Let \(g\simeq \bigoplus g^j\) and \(h\simeq \bigoplus h^k\) be the total decompositions of g and h. Note that \(\forall k, \text {coker}h^k=0\). Now we have \(\text {coker}f\simeq \bigoplus \text {coker}f^i\) with each \(\text {coker}f^i\) being either some \(\text {coker}g^j\) or 0, by the essential uniqueness of the total decomposition. With \(H\simeq \bigoplus \text {coker}g^j\) being a total decomposition of H by Remark 3, and \(\bigoplus \text {coker}f^i=\bigoplus \text {coker}g^j \oplus 0\), we conclude that \(H\simeq \bigoplus \text {coker}f^i\) is also a total decomposition.

Now for (1). For any decomposition \(H\simeq \bigoplus H^i\), it is not hard to see that each \(H^i\) can be written as a direct sum of a subset of the \(H_*^j\)'s, where \(H\simeq \bigoplus H_*^j\) is the total decomposition of H. One just needs to combine the corresponding \(f^i\)'s in the total decomposition \(f\simeq \bigoplus f^i\) to get the desired decomposition of f. \(\square \)

Missing proofs in Sect. 4

Proposition 4 (restated)

The target block \(\mathbf{A }|_{{T}}\) can be reduced to 0 while preserving the prior if and only if \(\mathbf{A }|_{{T}}\) can be written as a linear combination of independent operations. That is,

$$\begin{aligned} \mathbf{A }|_{{T}}= \sum _{{\begin{array}{c} l\notin \textsf {Row}(T)\\ k\in \textsf {Row}({T}) \end{array}}}\alpha _{k,l} \mathbf{X }^{k,l}|_{{T}}+\sum _{{\begin{array}{c} i\notin \textsf {Col}(T)\\ j\in \textsf {Col}({T}) \end{array}}} \beta _{i,j}\mathbf{Y }^{i,j}|_{{T}} \end{aligned}$$
(6)

where the \(\alpha _{k,l}\)'s and \(\beta _{i,j}\)'s are coefficients in \(\mathbb {k}={\mathbb {F}}_2\).
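Whether such coefficients \(\alpha _{k,l}, \beta _{i,j}\) exist is a linear-algebra question over \({\mathbb {F}}_2\): flatten each restricted operation matrix \(\mathbf{X }^{k,l}|_T\), \(\mathbf{Y }^{i,j}|_T\) into a vector and test whether the flattened \(\mathbf{A }|_T\) lies in their span. A minimal sketch via Gaussian elimination over \({\mathbb {F}}_2\), with toy data (the vectors below are illustrative, not from the paper):

```python
def in_span_f2(vectors, target):
    """Gaussian elimination over F_2: is `target` an F_2-linear
    combination of `vectors` (entries 0/1, arithmetic mod 2)?"""
    pivots = {}  # leading index -> stored vector with that leading 1

    def reduce(v):
        v = list(v)
        for i in sorted(pivots):
            if v[i]:
                v = [(a + b) % 2 for a, b in zip(v, pivots[i])]
        return v

    for v in vectors:
        v = reduce(v)
        lead = next((i for i, a in enumerate(v) if a), None)
        if lead is not None:
            pivots[lead] = v
    return not any(reduce(target))

# Toy data: flattened 2x2 blocks X|_T and Y|_T, and a target A|_T.
X_T = [1, 0, 1, 0]
Y_T = [0, 1, 1, 0]
A_T = [1, 1, 0, 0]   # equals X_T + Y_T over F_2

assert in_span_f2([X_T, Y_T], A_T)            # reducible to 0
assert not in_span_f2([X_T, Y_T], [0, 0, 0, 1])  # not reducible
```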

Proof

Everything in the statement of the proposition is restricted to T. For simplicity of notation, we omit the subscript \({\le t}\) by assuming \(\mathbf{A }_{\le t}=\mathbf{A }\), i.e., \(t=m\) is the last column index. It can be verified that this omission does not affect the proof: because of the admissible rules of column operations, entries beyond column t carried by any admissible operation never affect entries in \(\mathbf{A }_{\le t}\).

Recall that \(\mathbf{Y }^{i,j}=\mathbf{A }\varvec{\cdot }[\delta _{i,j}]\) for some \((i,j)\in \textsf {Colop}\) and \(\mathbf{X }^{k,l}=[\delta _{k,l}]\varvec{\cdot }\mathbf{A }\) for some \((l,k)\in \textsf {Rowop}\) where

$$\begin{aligned} \textsf {Colop}= & {} \{(i,j)\mid c_i\rightarrow c_j \text{ is } \text{ an } \text{ admissible } \text{ column } \text{ operation } \}\subseteq \textsf {Col}(\mathbf{A })\\&\times \textsf {Col}(\mathbf{A }) \text{ and } \\ \textsf {Rowop}= & {} \{(l,k)\mid r_l\rightarrow r_k \text{ is } \text{ an } \text{ admissible } \text{ row } \text{ operation } \}\subseteq \textsf {Row}(\mathbf{A })\times \textsf {Row}(\mathbf{A }) \end{aligned}$$

Let \(\mathbf{I }\) be the identity matrix. We say a matrix \(\mathbf{P }\) is an admissible left multiplication matrix if \(\mathbf{P }=\mathbf{I }+\sum _{(l,k)\in \textsf {Rowop}} \alpha _{k,l}[\delta _{k,l}]\) with coefficients \(\alpha _{k,l}\in \{0,1\}\). Similarly, we say a matrix \(\mathbf{Q }\) is an admissible right multiplication matrix if \(\mathbf{Q }=\mathbf{I }+\sum _{(i,j)\in \textsf {Colop}} \beta _{i,j}[\delta _{i,j}]\) with coefficients \(\beta _{i,j}\in \{0,1\}\). In short, we just say \(\mathbf{P }\) and \(\mathbf{Q }\) are admissible.
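Over \({\mathbb {F}}_2\), left multiplication by \(\mathbf{I }+[\delta _{k,l}]\) realizes the row operation \(r_l\rightarrow r_k\), i.e., it adds row l to row k. A minimal Python sketch on a toy matrix (not from the paper) illustrating this:

```python
def matmul_f2(A, B):
    """Multiply 0/1 matrices over F_2 (arithmetic mod 2)."""
    return [[sum(A[i][k] & B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def delta(n, k, l):
    """The elementary matrix [delta_{k,l}]: a single 1 at position (k, l)."""
    D = [[0] * n for _ in range(n)]
    D[k][l] = 1
    return D

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_add_f2(A, B):
    return [[(a + b) % 2 for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# A toy 3x3 matrix over F_2 (for illustration only).
A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]

# Left multiplication by P = I + [delta_{k,l}] adds row l to row k of A,
# realizing the row operation r_l -> r_k over F_2.
k, l = 0, 2
P = mat_add_f2(identity(3), delta(3, k, l))
PA = matmul_f2(P, A)

assert PA[k] == [(A[k][j] + A[l][j]) % 2 for j in range(3)]  # row k = r_k + r_l
assert PA[1:] == A[1:]                                       # other rows unchanged
```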

It is not difficult to observe the following properties of admissible matrices:

Fact 11

Matrix \(\mathbf{A }'\sim \mathbf{A }\) is an equivalent matrix transformed from \(\mathbf{A }\) by a sequence of admissible operations if and only if \(\mathbf{A }'=\mathbf{P }\mathbf{A }\mathbf{Q }\) for some admissible \(\mathbf{P }\) and \(\mathbf{Q }\).

Fact 12

Admissible matrices are closed under multiplication and taking inverse.

Fact 13

For any admissible \(\mathbf{P }\), let \(S\subseteq \textsf {Row}(\mathbf{P })\) be any subset of row indices. Then \(\mathbf{P }|_{S\times S}\) is invertible.

For the last fact, observe that the matrix \(\mathbf{P }|_{S\times S}\) can be embedded as a block of an admissible matrix \(\mathbf{P }'\) constructed by setting all off-diagonal entries of \(\mathbf{P }\) whose indices are not in \(S\times S\) to zero. The matrix \(\mathbf{P }'\) is obviously admissible, so by Fact 12 it is invertible. Also, \(\mathbf{P }'\) can be written in block diagonal form with two blocks \(\mathbf{P }'|_{S\times S}\) and \(\mathbf{P }'|_{{\bar{S}}\times {\bar{S}}}=\mathbf{I }\), where \({\bar{S}}=\textsf {Row}(\mathbf{P }')-S\). Therefore, since \(\mathbf{P }'\) is invertible, so is \(\mathbf{P }|_{S\times S}= \mathbf{P }'|_{S\times S}\).
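Fact 13 can be illustrated numerically: for a matrix of the triangular shape that admissibility enforces, every principal submatrix remains invertible over \({\mathbb {F}}_2\). A toy sketch using a rank computation (the matrix below merely mimics an admissible pattern; it is not from the paper):

```python
def rank_f2(M):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0]) if M else 0
    for j in range(cols):
        piv = next((i for i in range(rank, rows) if M[i][j]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(rows):
            if i != rank and M[i][j]:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

# A 4x4 matrix of admissible shape over F_2: identity plus off-diagonal 1s.
P = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
S = [0, 2]  # a subset of row/column indices
P_SS = [[P[i][j] for j in S] for i in S]

assert rank_f2(P) == 4          # P is invertible over F_2
assert rank_f2(P_SS) == len(S)  # so is its restriction to S x S
```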

Fig. 13

(Left) \(\mathbf{A }\) at iteration t during reduction of the sub-column \(c_t|_{\textsf {Row}(B)}\) for the block \(B=B_2\). (Right) Target block T shown in magenta includes the sub-column of \(c_t\). It does not include \(B:=B_2\). All rows external to T have zeros in the columns external to T. All columns external to T have zeros in the rows external to T. Red regions combined form R

We write the matrix \(\mathbf{A }\) in the following block form with respect to B and T, with the necessary reordering of rows and columns (see Fig. 13 for a simple illustration without reordering rows and columns):

$$\begin{aligned} \mathbf{A }= \left[ \begin{array}{c|c} R &{} 0 \\ \hline T &{} B \end{array} \right] \end{aligned}$$

Here we abuse the notation of blocks and index blocks to make the expression more legible. In the above block form of \(\mathbf{A }\), for example, T represents the entries of \(\mathbf{A }\) on the index block T, that is, the block \(\mathbf{A }|_{T}\), which is the target block we want to reduce. Note that

$$\begin{aligned} \begin{aligned} R&=\big [\textsf {Row}(\mathbf{A })\setminus \textsf {Row}(B),\, \textsf {Col}(\mathbf{A }_{\le t})\setminus \textsf {Col}(B)\big ]\\&=[\bigoplus _{B_i\ne B} B_i]\cup \big [\textsf {Row}(\mathbf{A })\setminus \textsf {Row}(B),\, \{t\}\big ] \end{aligned} \end{aligned}$$

which is the block obtained by merging all other previous index blocks together with the sub-column of t, excluding entries on \(\textsf {Row}(B)\). The top-right block is zero since it belongs to the intersections of rows and columns from different blocks.

Observe that the target block T can be reduced to 0 in \(\mathbf{A }\) with the prior preserved if and only if

$$\begin{aligned} \mathbf{P }\mathbf{A }\mathbf{Q }:= \mathbf{P }\varvec{\cdot }\left[ \begin{array}{c|c} R &{} 0 \\ \hline T &{} B \end{array} \right] \varvec{\cdot }\mathbf{Q }= \left[ \begin{array}{c|c} R &{} 0 \\ \hline 0 &{} B \end{array} \right] \end{aligned}$$
(7)

for some admissible \(\mathbf{P }\) and \(\mathbf{Q }\).

For the \(\Leftarrow \) direction, consider \(\mathbf{P }= \mathbf{I }+\sum \alpha _{k,l}[\delta _{k,l}]\) and \(\mathbf{Q }= \mathbf{I }+\sum \beta _{i,j}[\delta _{i,j}]\) with the binary coefficients \(\alpha _{k,l}\) and \(\beta _{i,j}\) given in Eq. (6). Then, we have

$$\begin{aligned} \mathbf{P }\mathbf{A }\mathbf{Q }&= (\mathbf{I }+\sum \alpha _{k,l}[\delta _{k,l}]) \mathbf{A }(\mathbf{I }+\sum \beta _{i,j}[\delta _{i,j}]) \end{aligned}$$
(8)
$$\begin{aligned}&= \mathbf{A }+ \sum \alpha _{k,l}[\delta _{k,l}]\mathbf{A }+ \sum \beta _{i,j}\mathbf{A }[\delta _{i,j}] + \sum \sum \alpha _{k,l}\beta _{i,j}[\delta _{k,l}]\mathbf{A }[\delta _{i,j}] \end{aligned}$$
(9)
$$\begin{aligned}&= \mathbf{A }+ \sum \alpha _{k,l}[\delta _{k,l}]\mathbf{A }+ \sum \beta _{i,j}\mathbf{A }[\delta _{i,j}] \end{aligned}$$
(10)
$$\begin{aligned}&=\mathbf{A }+ \sum \alpha _{k,l}\mathbf{X }^{k,l} + \sum \beta _{i,j}\mathbf{Y }^{i,j} \end{aligned}$$
(11)

The third equality (10) follows from Observation 3. After restricting to T, by the assumption that \(\sum \alpha _{k,l}\mathbf{X }^{k,l} + \sum \beta _{i,j}\mathbf{Y }^{i,j}=\mathbf{A }|_T\), we get \(\mathbf{P }\mathbf{A }\mathbf{Q }|_T=0\). By the definition of independent operations and Observation 2, one can see that our \(\mathbf{P }, \mathbf{Q }\) solve Eq. (7).

For \(\Rightarrow \), we will show that if the above equation is solvable, then there always exist solutions \(\mathbf{P }'\) and \(\mathbf{Q }'\) of a simpler form, as stated in the following proposition.

Proposition 7

Equation (7) is solvable for some admissible \(\mathbf{P }\) and \(\mathbf{Q }\) if and only if it is solvable for some admissible \(\mathbf{P }'\) and \(\mathbf{Q }'\) in the following form:

$$\begin{aligned} \mathbf{P }'= \left[ \begin{array}{c|c} I &{} 0 \\ \hline U &{} I \end{array} \right] \text{ and } \mathbf{Q }'= \left[ \begin{array}{c|c} I &{} 0 \\ \hline V &{} I \end{array} \right] \end{aligned}$$
(12)

Before we prove Proposition 7, we show how one can prove the \(\Rightarrow \) direction of Proposition 4 from it. Based on the equivalent condition Eq. (7) and Proposition 7, we can write \(\mathbf{P }'\) and \(\mathbf{Q }'\) of the form (12) as

$$\begin{aligned} \mathbf{P }'=\mathbf{I }+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}} \alpha _{k,l}[\delta _{k,l}] \;\quad \mathbf{Q }'=\mathbf{I }+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}[\delta _{i,j}] \end{aligned}$$

where \(\textsf {Rowop}_{R\rightarrow T}=\{(l,k)\in \textsf {Rowop}\mid (l,k)\in \textsf {Row}(R)\times \textsf {Row}(T)\}\) and \(\textsf {Colop}_{B\rightarrow T}=\{(i,j)\in \textsf {Colop}\mid (i,j)\in \textsf {Col}(B)\times \textsf {Col}(T)\}\), and \(\alpha _{k,l}, \beta _{i,j}\in \{0,1\}\). Then, similar to Equation 11, we get

$$\begin{aligned} \begin{aligned} \mathbf{P }'\mathbf{A }\mathbf{Q }'&=(\mathbf{I }+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}} \alpha _{k,l}[\delta _{k,l}])\varvec{\cdot }\mathbf{A }\varvec{\cdot }(\mathbf{I }+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}[\delta _{i,j}])\\&=\mathbf{A }+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\alpha _{k,l} [\delta _{k,l}]\varvec{\cdot }\mathbf{A }+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}\mathbf{A }\varvec{\cdot }[\delta _{i,j}] \\&\quad + \sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \alpha _{k,l}\beta _{i,j} [\delta _{k,l}]\varvec{\cdot }\mathbf{A }\varvec{\cdot }[\delta _{i,j}]\\&=\mathbf{A }+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\alpha _{k,l} [\delta _{k,l}]\varvec{\cdot }\mathbf{A }+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}\mathbf{A }\varvec{\cdot }[\delta _{i,j}]\\&=\mathbf{A }+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\alpha _{ k,l }\mathbf{X }^{k,l}+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}\mathbf{Y }^{i,j} \end{aligned} \end{aligned}$$

Restricting to T, we have

$$\begin{aligned} \mathbf{P }'\mathbf{A }\mathbf{Q }'|_{T}=\mathbf{A }|_{T}+\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\alpha _{ k,l }\mathbf{X }^{k,l}|_{T}+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}\mathbf{Y }^{i,j}|_{T} \end{aligned}$$
(13)

With \(\mathbf{P }'\mathbf{A }\mathbf{Q }'|_{T}=0\) by our assumption, we get

$$\begin{aligned} \mathbf{A }|_{T}=\sum _{{\begin{array}{c} (l,k)\in \\ \textsf {Rowop}_{R\rightarrow T} \end{array}}}\alpha _{ k,l } \mathbf{X }^{ k,l }|_T+\sum _{{\begin{array}{c} (i,j)\in \\ \textsf {Colop}_{B\rightarrow T} \end{array}}} \beta _{i,j}\mathbf{Y }^{i,j}|_T \end{aligned}$$

This is exactly what we want:

$$\begin{aligned} \mathbf{A }|_{{T}}= \sum _{{\begin{array}{c} l\notin \textsf {Row}(T)\\ k\in \textsf {Row}({T}) \end{array}}}\alpha _{k,l} \mathbf{X }^{k,l}|_{{T}}+\sum _{{\begin{array}{c} i\notin \textsf {Col}(T)\\ j\in \textsf {Col}({T}) \end{array}}} \beta _{i,j}\mathbf{Y }^{i,j}|_{{T}} \end{aligned}$$
(14)

\(\square \)

Now we give the proof of Proposition 7.

Proof of Proposition 7

The \(\Leftarrow \) direction is trivial. For the \(\Rightarrow \) direction, we want to show that, if Eq. (7) is solvable for some admissible \(\mathbf{P }\) and \(\mathbf{Q }\), then there exist admissible \(\mathbf{P }'\) and \(\mathbf{Q }'\) so that

$$\begin{aligned} \mathbf{P }'= & {} \left[ \begin{array}{c|c} I &{} 0 \\ \hline U &{} I \end{array} \right] , \mathbf{Q }'= \left[ \begin{array}{c|c} I &{} 0 \\ \hline V &{} I \end{array} \right] , \text{ and } \mathbf{P }^{'}\varvec{\cdot }\left[ \begin{array}{c|c} R &{} 0 \\ \hline T &{} B \end{array} \right] \varvec{\cdot }\mathbf{Q }^{'}\\= & {} \left[ \begin{array}{c|c} R &{} 0 \\ \hline UR+BV+T &{} B \end{array} \right] = \left[ \begin{array}{c|c} R &{} 0 \\ \hline 0 &{} B \end{array} \right] \end{aligned}$$

We write \(\mathbf{P }\) and \(\mathbf{Q }\) in corresponding block forms as follows:

$$\begin{aligned} \mathbf{P }= \left[ \begin{array}{c|c} P_1 &{} P_2 \\ \hline P_3 &{} P_4 \end{array} \right] \text{ and } \mathbf{Q }= \left[ \begin{array}{c|c} Q_1 &{} Q_2 \\ \hline Q_3 &{} Q_4 \end{array} \right] \end{aligned}$$
(15)

From Eq. (7) one can get a set of equations

$$\begin{aligned} P_1 R Q_2 + P_2 B Q_4= & {} 0 \end{aligned}$$
(16)
$$\begin{aligned} P_1 R Q_1 + P_2 B Q_3= & {} R \end{aligned}$$
(17)
$$\begin{aligned} P_3 R Q_2 + P_4 B Q_4= & {} B \end{aligned}$$
(18)
$$\begin{aligned} P_3 R Q_1 + P_4 B Q_3= & {} T \end{aligned}$$
(19)

From Fact 13, we know that \(P_1, P_4, Q_1, Q_4\) are invertible. By left multiplication with \(P_1^{-1}\) and right multiplication with \(Q_4^{-1}\) on both sides of Eq. (16), one can get:

$$\begin{aligned}&P_1^{-1}P_1 R Q_2 Q_4^{-1} + P_1^{-1} P_2 B Q_4 Q_4^{-1} \nonumber \\&\quad = R Q_2 Q_4^{-1} + P_1^{-1} P_2 B = 0 \implies - R Q_2 Q_4^{-1} = P_1^{-1} P_2 B \end{aligned}$$
(20)

Similarly, by left multiplication with \(P_1^{-1}\) on both sides of Eq. (17) and by right multiplication with \(Q_4^{-1}\) on both sides of Eq. (18), one can get the following equations:

$$\begin{aligned} P_1 R Q_1 + P_2 B Q_3= & {} R \implies R Q_1 = P_1^{-1}R - P_1^{-1}P_2 B Q_3 \end{aligned}$$
(21)
$$\begin{aligned} P_3 R Q_2 + P_4 B Q_4= & {} B \implies P_4 B = B Q_4^{-1} - P_3 R Q_2 Q_4^{-1} \end{aligned}$$
(22)

Now substituting Eqs. (21) and (22) into Eq. (19), we have:

$$\begin{aligned} T&=P_3 R Q_1 + P_4 B Q_3 \\&=P_3\big (P_1^{-1}R - P_1^{-1}P_2 B Q_3\big ) + \big (B Q_4^{-1} - P_3 R Q_2 Q_4^{-1}\big )Q_3 \\&=P_3 P_1^{-1} R + P_3 R Q_2 Q_4^{-1} Q_3 + B Q_4^{-1} Q_3 - P_3 R Q_2 Q_4^{-1} Q_3 \\&=P_3 P_1^{-1} R + B Q_4^{-1} Q_3 \end{aligned}$$

where the third equality uses \(-P_1^{-1} P_2 B = R Q_2 Q_4^{-1}\) from Eq. (20). Letting \(U=P_3 P_1^{-1}\) and \(V=Q_4^{-1}Q_3\), we get \(T=UR+BV\), that is, \(UR+BV+T=0\) over \({\mathbb {F}}_2\), which is the desired equation. Now we just need to show that \(\mathbf{P }', \mathbf{Q }'\) are both admissible. We prove it for \(\mathbf{Q }'\); a similar proof holds for \(\mathbf{P }'\). We want to show that for any \((i,j)\in \textsf {Row}(V)\times \textsf {Col}(V)\), if \(V_{i,j}=1\), then the corresponding pair of column indices lies in \(\textsf {Colop}\). From the equality \(V=Q_4^{-1}Q_3\), \(V_{i,j}=\sum _k (Q_4^{-1})_{i,k} \varvec{\cdot }(Q_3)_{k,j}=1\) implies that \((Q_4^{-1})_{i,k}= (Q_3)_{k,j}=1\) for some k. Since \(Q_4^{-1}\) and \(Q_3\) are both blocks of the admissible matrix \(\mathbf{Q }\), by the definition of an admissible right multiplication matrix, we have \((i,k), (k,j)\in \textsf {Colop}\). Note that \(\textsf {Colop}\) is transitively closed by Proposition 2. So we have \((i,j)\in \textsf {Colop}\). \(\square \)
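The block identity computed at the start of the proof, with bottom-left block \(UR+BV+T\), holds for arbitrary blocks over \({\mathbb {F}}_2\) and can be checked numerically. A minimal sketch with random toy blocks (sizes chosen arbitrarily, not from the paper):

```python
import random

def matmul_f2(A, B):
    """Matrix product over F_2 (entries 0/1, arithmetic mod 2)."""
    return [[sum(A[i][k] & B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

def block2x2(TL, TR, BL, BR):
    """Assemble four blocks into the matrix [[TL, TR], [BL, BR]]."""
    return [rl + rr for rl, rr in zip(TL, TR)] + \
           [rl + rr for rl, rr in zip(BL, BR)]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def zeros(r, c):
    return [[0] * c for _ in range(r)]

def rand_mat(r, c):
    return [[random.randint(0, 1) for _ in range(c)] for _ in range(r)]

random.seed(0)
a, b, p, q = 2, 3, 2, 2                 # R is a x b, T is p x b, B is p x q
R, T, B = rand_mat(a, b), rand_mat(p, b), rand_mat(p, q)
U, V = rand_mat(p, a), rand_mat(q, b)   # the off-diagonal blocks of P' and Q'

A  = block2x2(R, zeros(a, q), T, B)             # A  = [[R, 0], [T, B]]
Pp = block2x2(eye(a), zeros(a, p), U, eye(p))   # P' = [[I, 0], [U, I]]
Qp = block2x2(eye(b), zeros(b, q), V, eye(q))   # Q' = [[I, 0], [V, I]]

lhs = matmul_f2(matmul_f2(Pp, A), Qp)
# Expected bottom-left block: U R + B V + T over F_2.
bl = [[(x + y + z) % 2 for x, y, z in zip(r1, r2, r3)]
      for r1, r2, r3 in zip(matmul_f2(U, R), matmul_f2(B, V), T)]
rhs = block2x2(R, zeros(a, q), bl, B)
assert lhs == rhs   # P' A Q' = [[R, 0], [UR + BV + T, B]]
```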

About this article


Cite this article

Dey, T.K., Xin, C. Generalized persistence algorithm for decomposing multiparameter persistence modules. J Appl. and Comput. Topology 6, 271–322 (2022). https://doi.org/10.1007/s41468-022-00087-5

