Abstract
We develop an elementary method to compute spaces of equivariant maps from a homogeneous space G/H of a Lie group G to a module of this group. The Lie group is not required to be compact. More generally, we study spaces of invariant sections in homogeneous vector bundles, and take a special interest in the case where the fibres are algebras. In this case, the space of invariant sections carries a natural global algebra structure. We classify these automorphic algebras for the case where the homogeneous space has compact stabilisers. This work has applications in the theoretical development of geometric deep learning and also in the theory of automorphic Lie algebras.
References
Kutyniok, G.: The mathematics of artificial intelligence. arXiv preprint arXiv:2203.08890 (2022)
Bronstein, M.M., et al.: Geometric deep learning: grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478 (2021)
Gerken, J.E., et al.: Geometric deep learning and equivariant neural networks. Artificial Intelligence Review, pp. 1–58 (2023)
Weiler, M., et al.: Coordinate independent convolutional networks: isometry and gauge equivariant convolutions on Riemannian manifolds. arXiv preprint arXiv:2106.06020 (2021)
Cohen, T., et al.: Gauge equivariant convolutional networks and the icosahedral CNN. In: International Conference on Machine Learning, pp. 1321–1330. PMLR (2019)
Cheng, M.C., et al.: Covariance in physics and convolutional neural networks. arXiv preprint arXiv:1906.02481 (2019)
Cohen, T.S., Geiger, M., Weiler, M.: A general theory of equivariant CNNs on homogeneous spaces. In: Wallach, H., et al. (eds.) Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc. (2019)
Weiler, M., Cesa, G.: General E(2)-equivariant steerable CNNs. In: Advances in Neural Information Processing Systems, vol. 32 (2019)
Weiler, M., et al.: 3D steerable CNNs: learning rotationally equivariant features in volumetric data. In: Advances in Neural Information Processing Systems, pp. 10381–10392 (2018)
Lang, L., Weiler, M.: A Wigner-Eckart theorem for group equivariant convolution kernels. arXiv preprint arXiv:2010.10952 (2020)
Lombardo, S., Mikhailov, A.V.: Reductions of integrable equations: dihedral group. J. Phys. A 37(31), 7727–7742 (2004). https://doi.org/10.1088/0305-4470/37/31/006
Lombardo, S., Mikhailov, A.V.: Reduction groups and automorphic Lie algebras. Comm. Math. Phys. 258(1), 179–202 (2005). https://doi.org/10.1007/s00220-005-1334-5
Kac, V.G.: Simple irreducible graded Lie algebras of finite growth. Izv. Akad. Nauk SSSR Ser. Mat. 32, 1323–1367 (1968)
Mathieu, O.: Classification of simple graded Lie algebras of finite growth. Invent. Math. 108(3), 455–519 (1992). https://doi.org/10.1007/BF02100615
Onsager, L.: Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev. 65(2), 117–149 (1944)
Roan, S.-S.: Onsager's algebra, loop algebra and chiral Potts model. Max-Planck-Institut für Mathematik (1991)
Lombardo, S., Sanders, J.A.: On the classification of automorphic Lie algebras. Comm. Math. Phys. 299(3), 793–824 (2010). https://doi.org/10.1007/s00220-010-1092-x
Bury, R.T., Mikhailov, A.V.: Automorphic Lie algebras and corresponding integrable systems. Differ. Geom. Appl. 74, 101710 (2021). https://doi.org/10.1016/j.difgeo.2020.101710
Knibbeler, V., Lombardo, S., Veselov, A.P.: Automorphic Lie algebras and modular forms. International Mathematics Research Notices, rnab376 (2022). https://doi.org/10.1093/imrn/rnab376
Knibbeler, V., Lombardo, S., Oelen, C.: A classification of automorphic Lie algebras on complex tori. Accepted for publication in Proceedings of the Edinburgh Mathematical Society (2024)
Olver, P.J.: Classical Invariant Theory. London Mathematical Society Student Texts, vol. 44. Cambridge University Press, Cambridge (1999). https://doi.org/10.1017/CBO9780511623660
Bruinier, J.H., et al.: The 1-2-3 of Modular Forms. Universitext. Lectures from the Summer School on Modular Forms and their Applications, Nordfjordeid, June 2004, edited by Kristian Ranestad. Springer-Verlag, Berlin (2008). https://doi.org/10.1007/978-3-540-74119-0
Knibbeler, V., Lombardo, S., Sanders, J.A.: Higher-dimensional automorphic Lie algebras. Found. Comput. Math. 17(4), 987–1035 (2017). https://doi.org/10.1007/s10208-016-9312-1
Finzi, M., Welling, M., Wilson, A.G.: A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. In: International Conference on Machine Learning, pp. 3318–3328. PMLR (2021)
Aronsson, J.: Homogeneous vector bundles and G-equivariant convolutional neural networks. Sampling Theory, Signal Processing, and Data Analysis 20(2), 10 (2022)
Sepanski, M.R.: Compact Lie Groups. Graduate Texts in Mathematics, vol. 235. Springer, New York (2007). https://doi.org/10.1007/978-0-387-49158-5
Mackey, G.W.: Induced representations of locally compact groups. I. Ann. of Math. (2) 55, 101–139 (1952). https://doi.org/10.2307/1969423
Moore, C.C.: On the Frobenius reciprocity theorem for locally compact groups. Pacific J. Math. 12, 359–365 (1962)
Čap, A., Slovák, J.: Parabolic Geometries. I. Background and General Theory. Mathematical Surveys and Monographs, vol. 154. American Mathematical Society, Providence, RI (2009). https://doi.org/10.1090/surv/154
Toth, G.: Finite Möbius Groups, Minimal Immersions of Spheres, and Moduli. Universitext. Springer-Verlag, New York (2002). https://doi.org/10.1007/978-1-4613-0061-8
Sharpe, R.W.: Differential Geometry. Cartan's Generalization of Klein's Erlangen Program. Graduate Texts in Mathematics, vol. 166. Springer-Verlag, New York (1997)
Bourbaki, N.: Lie Groups and Lie Algebras. Chapters 7–9. Elements of Mathematics. Springer-Verlag, Berlin (2005)
Thurston, W.P.: Three-Dimensional Geometry and Topology, vol. 1. Princeton Mathematical Series, vol. 35. Princeton University Press, Princeton, NJ (1997)
Bogatskiy, A., et al.: Lorentz group equivariant neural network for particle physics. In: International Conference on Machine Learning, pp. 992–1002. PMLR (2020)
Acknowledgements
We are grateful to Jan E. Gerken, Sara Lombardo, Fahimeh Mokhtari, Jan A. Sanders, Alexander Veselov and Maurice Weiler for helpful and stimulating discussions, and an anonymous reviewer for their thorough and thought-provoking feedback.
Funding
This work is partially supported by the London Mathematical Society through an Emmy Noether Fellowship, REF EN-2122-03.
Ethics declarations
Conflict of interest
The author declares no competing interests.
Additional information
Communicated by: Gitta Kutyniok
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: The Gram-Schmidt process and \(\textrm{S}^{n}\cong \textrm{SO}(n+1)/\textrm{SO}(n)\)
In order to compute the space of G-equivariant maps from a homogeneous space \(X\cong G/H\) to a representation V of G, we propose to find an explicit isomorphism \(X\cong G/H\) first. That is, obtain a map \(f:X\rightarrow G\) such that \(f(x)x_0 = x\) for any \(x\in X\), where \(x_0\) is a ‘base point’ of our choice. As we mentioned in Section 2, we are not aware of a general method to construct such a map (in the mathematical literature, one is usually satisfied knowing that it exists, and there is no need for a construction), but for specific families of homogeneous spaces, there are constructions available. In this section, we solve the question for the spheres using the Gram-Schmidt process.
To warm up, we consider \(\textrm{S}^{1}_r=\{(x,y)\in \mathbb {R}^{2}\,|\,x^2+y^2=r^2\}\). When we pick the base point \(x_0=(r,0)\), we need to find \(f_1:\textrm{S}^{1}_r\rightarrow \textrm{SO}(2)\) (the subscript of f refers to the dimension of the sphere) such that \(f_1(x,y)(r,0)^T = (x,y)^T\). That is,
\[
f_1(x,y)=\begin{pmatrix} x/r & *\\ y/r & * \end{pmatrix}.
\]
It remains to replace the \(*\) by entries which ensure that \(f_1(x,y)\in \textrm{SO}(2)\). In this case, we quickly find the solution
\[
f_1(x,y)=\frac{1}{r}\begin{pmatrix} x & -y\\ y & x \end{pmatrix}.
\]
Going one dimension higher, we consider \(\textrm{S}^{2}_r=\{(x,y,z)\in \mathbb {R}^{3}\,|\,x^2+y^2+z^2=r^2\}\) and pick the base point \(x_0=(r,0,0)\). Then, we need to construct an orthogonal matrix \(f_2(x,y,z)\) with first column \((x/r,y/r,z/r)^T\). This time, it is harder to find a solution. We will return to this problem shortly.
The Gram-Schmidt process takes a set of linearly independent vectors \(\{v_1,\ldots ,v_k\}\) in a vector space with inner product \(\langle \cdot ,\cdot \rangle \) and returns a set of orthogonal vectors \(\{u_1,\ldots ,u_k\}\) with the same linear span. If we write the projection of v on a nonzero vector u as
\[
P_u(v)=\frac{\langle v,u\rangle }{\langle u,u\rangle }\,u,
\]
then the orthogonal set of vectors is given by
\[
u_i=v_i-\sum _{j=1}^{i-1}P_{u_j}(v_i),\qquad i=1,\ldots ,k.
\]
To transform this orthogonal set \(\{u_1,\ldots ,u_k\}\) into an orthonormal set \(\{e_1,\ldots ,e_k\}\), one computes \(e_i=u_i/\sqrt{\langle u_i, u_i\rangle }\).
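The process just described translates directly into code. Below is a minimal Python sketch (the helper names proj and gram_schmidt are ours, and the standard inner product is assumed):

```python
import math

def proj(u, v):
    """Projection of v onto a nonzero vector u: P_u(v) = (<v,u>/<u,u>) u."""
    c = sum(vi * ui for vi, ui in zip(v, u)) / sum(ui * ui for ui in u)
    return [c * ui for ui in u]

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors: subtract from
    each vector its projections onto the previously computed directions,
    then normalise."""
    basis = []
    for v in vectors:
        u = list(v)
        for e in basis:
            p = proj(e, v)
            u = [ui - pi for ui, pi in zip(u, p)]
        norm = math.sqrt(sum(ui * ui for ui in u))
        basis.append([ui / norm for ui in u])
    return basis
```

For instance, gram_schmidt([[1, 1, 0], [1, 0, 1]]) returns two orthonormal vectors spanning the same plane.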
Let us return to the problem of constructing an orthogonal matrix \(f_2(x,y,z)\) with the first column \((x/r,y/r,z/r)^T\). A solution to this problem is not unique. Indeed, any solution can be multiplied on the right by a matrix of the form
\[
\begin{pmatrix} 1 & 0\\ 0 & A \end{pmatrix},\qquad A\in \textrm{SO}(2),
\]
because this multiplication preserves the first column, orthogonality and the determinant. Using this wiggle room, we can ensure that \(f_2\) is of the form
\[
f_2(x,y,z)=\begin{pmatrix} x/r & * & 0\\ y/r & * & *\\ z/r & * & * \end{pmatrix}.
\]
Next, we construct orthogonal columns with the Gram-Schmidt process. Due to the zero on the top right of the matrix \(f_2(x,y,z)\), we can save ourselves some work by computing columns from right to left. At the right, we start with a vector \((0,1,0)^T\). Instead of computing its component orthogonal to (x/r, y/r, z/r), we use (0, y/r, z/r) in order to preserve the first zero. That is, we compute \((0,1,0)-P_{(0,y/r,z/r)}(0,1,0)=\left( 0,\frac{z^2}{y^2+z^2},\frac{-yz}{y^2+z^2}\right) \) (which is only defined when \(y^2+z^2>0\)). This vector normalises to \(\left( 0,\frac{z}{\sqrt{y^2+z^2}},\frac{-y}{\sqrt{y^2+z^2}}\right) \) and is used for the right column of \(f_2\).
For the middle column, we start with a vector \((1,0,0)^T\) which is already orthogonal to the right column, leaving us only to compute its component orthogonal to the left column: \((1,0,0)-P_{(x/r,y/r,z/r)}(1,0,0)=\left( \frac{y^2+z^2}{r^2},-\frac{xy}{r^2},-\frac{xz}{r^2}\right) \), which normalises to \(\left( \frac{\sqrt{y^2+z^2}}{r},\frac{-xy}{r\sqrt{y^2+z^2}},\frac{-xz}{r\sqrt{y^2+z^2}}\right) \), resulting in the matrix
\[
f_2(x,y,z)=\begin{pmatrix}
\frac{x}{r} & \frac{\sqrt{y^2+z^2}}{r} & 0\\[2pt]
\frac{y}{r} & \frac{-xy}{r\sqrt{y^2+z^2}} & \frac{z}{\sqrt{y^2+z^2}}\\[2pt]
\frac{z}{r} & \frac{-xz}{r\sqrt{y^2+z^2}} & \frac{-y}{\sqrt{y^2+z^2}}
\end{pmatrix}.
\]
By construction, we have \(f_2(x,y,z)\in \textrm{O}(3)\), and we can check that the determinant of \(f_2(x,y,z)\) is 1; hence \(f_2(x,y,z)\in \textrm{SO}(3)\). It remains to define \(f_2\) for the case \(y^2+z^2=0\), i.e. \(x=\pm r\). Here, we can simply put \(f_2(r,0,0)=\textrm{Id}\) and \(f_2(-r,0,0)=\textrm{diag}(-1,-1,1)\).
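The explicit matrix just obtained can be verified numerically. The following Python sketch (the name f2 is ours) implements it on the domain \(y^2+z^2>0\); orthonormality of the columns, determinant 1, and \(f_2(x,y,z)(r,0,0)^T=(x,y,z)^T\) can then be checked directly:

```python
import math

def f2(x, y, z):
    """The matrix f_2(x, y, z) in SO(3) whose first column is (x/r, y/r, z/r)^T,
    as constructed with the Gram-Schmidt process (requires y^2 + z^2 > 0)."""
    r = math.sqrt(x * x + y * y + z * z)
    s = math.sqrt(y * y + z * z)          # must be positive on this domain
    return [
        [x / r,  s / r,            0.0  ],
        [y / r, -x * y / (r * s),  z / s],
        [z / r, -x * z / (r * s), -y / s],
    ]
```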
In much the same way, we can construct a map \(f_n:\textrm{S}^{n}_r\rightarrow \textrm{SO}(n+1)\) solving \(f(x)x_0=x\), where \(x=(x_1,x_2,\ldots ,x_{n+1})\). Generally, the equation leaves the freedom to multiply a solution f(x) by an element of the stabiliser subgroup \(G_{x_0}=H\) on the right. In our particular case, we choose \(x_0=(r,0,0,\ldots ,0)^T\) and the stabiliser subgroup consists of block-diagonal matrices of the form \(\textrm{diag}(1,A)\) where \(A\in \textrm{SO}(n)\). It can be used to ensure that the first row of \(f_n(x_1,\ldots ,x_{n+1})\) has the form \((x_1/r, *, 0,0,\ldots ,0)\). Likewise, an element \(\textrm{diag}(1,1,A)\in H\) where \(A\in \textrm{SO}(n-1)\) can be used to ensure the second row of \(f_n(x_1,\ldots ,x_{n+1})\) has the form \((x_2/r, *, *, 0,0,\ldots ,0)\), without changing the first row. By continuing this argument for the whole sequence of subgroups \(\textrm{SO}(n)\supset \textrm{SO}(n-1)\supset \ldots \supset \textrm{SO}(2)\) of the stabiliser, we can assume without loss of generality that \(f_n\) takes the form
\[
f_n(x)=\begin{pmatrix}
x_1/r & * & 0 & \cdots & 0\\
x_2/r & * & * & \ddots & \vdots \\
\vdots & \vdots & \vdots & \ddots & 0\\
x_n/r & * & * & \cdots & *\\
x_{n+1}/r & * & * & \cdots & *
\end{pmatrix}.
\]
We will compute orthonormal columns for this matrix from right to left. For the right-most column, we start with the vector \((0,\ldots ,0,1,0)\) and obtain a norm 1 vector orthogonal to \((0,\ldots ,0,x_{n}/r,x_{n+1}/r)\) by computing \((0,\ldots ,0,1,0)-P_{(0,\ldots ,0,x_{n}/r,x_{n+1}/r)}(0,\ldots ,0,1,0)\) and subsequently normalising to get \(\left( 0,\ldots ,0,\frac{x_{n+1}}{\sqrt{x_{n}^2+x_{n+1}^2}},-\frac{x_{n}}{\sqrt{x_{n}^2+x_{n+1}^2}}\right) \).
Consider now the column of \(f_n(x)\) at position \(k+1\), and assume all columns to its right fit the form above, and constitute an orthogonal set of vectors together with the vector \((0,\ldots ,0,x_k/r,\ldots ,x_{n+1}/r)\). We can apply the Gram-Schmidt process for this set of vectors together with the vector \((0,\ldots ,0,1,0,\ldots ,0)^T\) where the nonzero entry is at position k, and see that all but one of the projections to be computed are zero, leaving us only to compute \((0,\ldots ,0,1,0,\ldots ,0)-P_{(0,\ldots ,0,x_k/r,\ldots ,x_{n+1}/r)}((0,\ldots ,0,1,0,\ldots ,0))\) which equals
\[
\left( 0,\ldots ,0,\ \frac{x_{k+1}^2+\ldots +x_{n+1}^2}{x_k^2+\ldots +x_{n+1}^2},\ \frac{-x_kx_{k+1}}{x_k^2+\ldots +x_{n+1}^2},\ \ldots ,\ \frac{-x_kx_{n+1}}{x_k^2+\ldots +x_{n+1}^2}\right) .
\]
It is convenient to introduce the notation
\[
r_k=\sqrt{x_k^2+x_{k+1}^2+\ldots +x_{n+1}^2},
\]
because the norm of the last vector is \(r_{k+1}/r_k\), and its normalisation reads
\[
\left( 0,\ldots ,0,\ \frac{r_{k+1}}{r_k},\ \frac{-x_kx_{k+1}}{r_kr_{k+1}},\ \ldots ,\ \frac{-x_kx_{n+1}}{r_kr_{k+1}}\right) .
\]
Notice that \(r_{n}=\sqrt{x_n^2+x_{n+1}^2}>0\) implies \(r_{i}>0\) for \(i=1,\ldots ,n\); hence, all these vectors are defined if and only if \(x_n^2+x_{n+1}^2> 0\).
Using these vectors, for \(k=1,\ldots ,n\), as columns for \(f_n(x)\), we obtain for \(x_n^2+x_{n+1}^2> 0\)
In the right column of this matrix, we added a factor \((-1)^{n+1}\) in order to have determinant 1. To see this, notice that the function \(\det \circ f_n\) is continuous and takes values in the discrete set \(\{1,-1\}\) due to orthonormality; hence, it is constant on the connected domain \(\{x_n^2+x_{n+1}^2> 0\}=\textrm{S}^{n}\setminus \textrm{S}^{n-2}\), and we can compute the value of \(\det \circ f_n\) by evaluating in one point. For instance, \(\det f_n(0,\ldots ,0, r)=1\). Thus, we have \(f_n(x)\in \textrm{SO}(n+1)\) when \(x_n^2+x_{n+1}^2> 0\).
We conclude using induction. If \(x_n^2+x_{n+1}^2=0\), then \((x_1,\ldots ,x_{n-1})\in \textrm{S}^{n-2}\) and we define \(f_n(x)\) as the block-diagonal matrix \(\textrm{diag}(f_{n-2}(x_1,\ldots ,x_{n-1}),1,1)\). Given that we already defined \(f_1\) and \(f_2\), this provides \(f_n\) for all \(n\ge 1\).
It is worth noting that \(f_n\) is constant on lines emanating from the origin. That is, \(f_n(\lambda x)=f_n(x)\) for any \(\lambda \ge 0\) and any \(x\ne 0\). All values of \(f_n\) are therefore achieved on \(\textrm{S}^{n}_1\).
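The whole construction of this appendix can be summarised in a short Python sketch (the names det and f_sphere are ours). Rather than tracking the sign factor on the last column explicitly, this version fixes the orientation with a determinant check, which yields a matrix in \(\textrm{SO}(n+1)\) on the domain \(x_n^2+x_{n+1}^2>0\):

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if abs(A[p][i]) < 1e-14:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for k in range(i + 1, n):
            fac = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= fac * A[i][j]
    return d

def f_sphere(x):
    """f_n(x) in SO(n+1), for x a list of n+1 coordinates with
    x[n-1]**2 + x[n]**2 > 0 (0-based indices)."""
    m = len(x)                                   # m = n + 1
    r = [0.0] * (m + 1)                          # r[k-1] is the paper's r_k
    for k in range(m - 1, -1, -1):
        r[k] = math.sqrt(x[k] ** 2 + r[k + 1] ** 2)
    cols = [[xi / r[0] for xi in x]]             # first column: x / r
    for k in range(m - 2):                       # generic columns
        c = [0.0] * m
        c[k] = r[k + 1] / r[k]
        for i in range(k + 1, m):
            c[i] = -x[k] * x[i] / (r[k] * r[k + 1])
        cols.append(c)
    last = [0.0] * m                             # right-most column
    last[m - 2] = x[m - 1] / r[m - 2]
    last[m - 1] = -x[m - 2] / r[m - 2]
    cols.append(last)
    M = [[cols[j][i] for j in range(m)] for i in range(m)]
    if det(M) < 0:                               # enforce determinant 1
        for row in M:
            row[-1] = -row[-1]
    return M
```

One can confirm numerically that the columns are orthonormal, that the determinant is 1, that the first column times r recovers x, and that \(f_n(\lambda x)=f_n(x)\) for \(\lambda >0\).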
Appendix B: From homogeneous space to \(\mathbb {R}^d\)
This appendix concerns the application of this paper to geometric deep learning and is intended as a starting point for future research.
In geometric deep learning, one encounters a group \(G\subset \textrm{GL}(\mathbb {R}^d)\) and is interested in G-equivariant maps from \(\mathbb {R}^d\) to a representation V of G, as we explained in the introduction. In the body of this paper, we restrict the problem to G-orbits in \(\mathbb {R}^d\). Whether this can be extended to the whole space depends on the particular group and, in particular, on the quotient space \(\mathbb {R}^d/G\).
We solve the problem for the spheres in \(\mathbb {R}^d\) with Proposition B.1, where the quotient space is a manifold (appearing as \([0,\infty )\) in the proposition). A similar result for the case \(d=3\) can be found in [9]. Afterwards, we discuss the situation for hyperbolic spaces in \(\mathbb {R}^d\), where the quotient space is not Hausdorff.
This illustrates that future research should not aim for a general solution but rather focus on the application. In geometric deep learning, the equivariant maps are used as convolution kernels in an integral. Hence, it is harmless to exclude a set of measure zero, and one could restrict to a full measure subset of the quotient space \(\mathbb {R}^d/G\) which is better behaved, allowing also solutions for hyperbolic spaces in \(\mathbb {R}^d\).
In the setting of this appendix, we can choose what class of functions we want to study, unlike the situation for homogeneous spaces, where the function class is determined by the group actions. This appendix is written for continuous maps, but the proofs can be modified to work for other classes of maps.
Proposition B.1
Let V be a continuous representation of \(\textrm{SO}(d)\), \(H\subset \textrm{SO}(d)\) the stabiliser of \((1,0,\ldots ,0)\in \mathbb {R}^d\), and \(\{v_1,\ldots ,v_m\}\) a basis of \(V^H\), extending a basis \(\{v_1,\ldots ,v_{m'}\}\) of \(V^{\textrm{SO}(d)}\). Then, the space of continuous \(\textrm{SO}(d)\)-equivariant maps from \(\mathbb {R}^d\) to V is given by
\[
\left\{ x\mapsto \sum _{i=1}^m c_i(r)\,f_{n}(x)\,v_i \;\middle |\; c_i\in C([0,\infty )),\ c_i(0)=0 \text { for } i>m'\right\} ,
\]
where \(x=(x_1,\ldots ,x_d)\in \mathbb {R}^d\) and \(r=\sqrt{x_1^2+\ldots +{x_d}^2}\), and \(f_{n}:\mathbb {R}^d\rightarrow \textrm{SO}(d)\) the map defined in Appendix A with \(n=d-1\), extended to the origin by \(f_{n}(0)=\textrm{Id}\).
Proof
The set on the right-hand side of the equation will be denoted RHS, that is,
\[
\text {RHS}=\left\{ x\mapsto \sum _{i=1}^m c_i(r)\,f_{n}(x)\,v_i \;\middle |\; c_i\in C([0,\infty )),\ c_i(0)=0 \text { for } i>m'\right\} .
\]
We will show first that RHS is contained in \(C_{\textrm{SO}(d)}\left( \mathbb {R}^d\setminus \{0\},V\right) \). To check that a map \(x\mapsto \sum _{i=1}^m c_i(r) f_{n}(x) v_i\) in RHS is equivariant, we pick a generic element \(g\in \textrm{SO}(d)\) and compute
where the second equality follows from Theorem 2.5.
The next thing to do is to check continuity (and we will find that the discontinuity of \(f_{n}\) is irrelevant). First, pick a nonzero \(x\in \mathbb {R}^d\) and restrict to the sphere \(\textrm{S}^{n}\) containing x (and centred at the origin). There, \(f_{n}(x) v_i\) is continuous due to equivariance and the properties of the group actions. Indeed, the limit \(\lim _{y\rightarrow x}f_{n}(y) v_i\) with \(y\in \textrm{S}^{n}\) can be written as \(\lim _{g\rightarrow 1}f_{n}(gx) v_i\) with \(g\in \textrm{SO}(d)\) because the action of \(\textrm{SO}(d)\) on \(\textrm{S}^{n}\) is continuous and transitive. Equivariance equates this to \(\lim _{g\rightarrow 1}\left( gf_{n}(x) v_i\right) \), and continuity of the action on V shows this limit is \(\left( \lim _{g\rightarrow 1}g\right) f_{n}(x) v_i=f_{n}(x) v_i\).
To see that \(c_i(r)f_{n}(x) v_i\) depends continuously on x at any nonzero x, recall that \(f_{n}(\lambda x)=f_{n}(x)\) for any \(\lambda >0\), and \(c_i\) is continuous.
For continuity at \(x=0\), we use the fact that \(v_i\in V^{\textrm{SO}(d)}\) when \(i\le m'\) which implies \(c_i(r)f_{n}(x) v_i=c_i(r)v_i\) where continuity follows from continuity of \(c_i\). Let now \(i>m'\). Then, \(\lim _{x\rightarrow 0}c_i(r)f_{n}(x)v_i=0\) since \(c_i\) is continuous, \(c_i(0)=0\) and all entries of \(f_{n}(x)\in \textrm{SO}(d)\) are absolutely bounded by 1. This limit corresponds to the value \(c_i(0)f_{n}(0)v_i\). Thus, we see that the sum \(\sum _{i=1}^m c_i(r) f_{n}(x) v_i\) is indeed continuous on \(\mathbb {R}^d\), and \(\text {RHS}\subset C_{\textrm{SO}(d)}\left( \mathbb {R}^d,V\right) \).
It remains to prove that the reverse inclusion \(C_{\textrm{SO}(d)}\left( \mathbb {R}^d,V\right) \subset \text {RHS}\) holds. Let F be an element of \(C_{\textrm{SO}(d)}\left( \mathbb {R}^d,V\right) \). Then, \(F(r,0,\ldots ,0)\in V^H\) for any \(r\ge 0\), since \(hF(r,0,\ldots ,0)=F(h(r,0,\ldots ,0))=F(rh(1,0,\ldots ,0))=F(r,0,\ldots ,0)\) for all \(h\in H\). Hence, we can expand \(F(r,0,\ldots ,0)\) in a basis of \(V^H\), i.e. there exist functions \(c_i\) such that
\[
F(r,0,\ldots ,0)=\sum _{i=1}^m c_i(r)\,v_i.
\]
The functions \(c_i\) must be continuous because \(F(r,0,\ldots ,0)\) depends continuously on r. Moreover, \(c_i(0)=0\) when \(i>m'\) since \(F(0)\in V^{\textrm{SO}(d)}\) due to equivariance. Define a map by \(D(x)=F(x)-\sum _{i=1}^m c_i(\sqrt{x_1^2+\ldots +{x_d}^2}) f_{n}(x)v_i\). This map is equivariant by the first half of this proof, and it vanishes at \((r,0,\ldots ,0)\) for any \(r\ge 0\), since \(f_{n}(r,0,\ldots ,0)=\textrm{Id}\). Let \(x\in \mathbb {R}^d\setminus \{0\}\) and \(r=\sqrt{x_1^2+\ldots +{x_d}^2}\). Then, there exists \(g\in \textrm{SO}(d)\) such that \(g(r,0,\ldots ,0)=x\). Therefore, \(D(x)=D(g(r,0,\ldots ,0))=gD(r,0,\ldots ,0)=g\,0=0\). That is, \(F(x)=\sum _{i=1}^m c_i(r) f_{n}(x)v_i\) and \(F\in \text {RHS}\).\(\square \)
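As a concrete illustration of Proposition B.1, take \(d=2\) and let V be the standard representation of \(\textrm{SO}(2)\). The stabiliser H of (1, 0) is trivial, so \(V^H=V\) with \(m=2\), while \(V^{\textrm{SO}(2)}=0\), so \(m'=0\) and every \(c_i\) must vanish at the origin. The Python sketch below (the names rot, matvec and F are ours, and the coefficient choice \(c(r)=r^2\) is an arbitrary continuous function with \(c(0)=0\)) builds one term of the sum and lets one test equivariance numerically:

```python
import math

def rot(a):
    """Rotation matrix in SO(2)."""
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def F(x):
    """One term c(r) f(x) v of the equivariant sum, with c(r) = r^2 and v = (1, 0)."""
    r = math.hypot(x[0], x[1])
    if r == 0.0:
        return [0.0, 0.0]
    f = [[x[0] / r, -x[1] / r], [x[1] / r, x[0] / r]]   # f_1(x) in SO(2)
    return [r * r * e for e in matvec(f, [1.0, 0.0])]
```

Equivariance means F(rot(a) x) = rot(a) F(x) for every angle a; for this choice it holds because \(f_1(x)\) maps (1, 0) to x/r, so \(F(x)=r\,x\).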
Hyperbolic space of dimension n can be embedded in \(\mathbb {R}^d\), \(d=n+1\), as one component of the level set at \(-1\) of the quadratic form
\[
Q(t,x)=-t^2+\langle x,x\rangle ,\qquad (t,x)\in \mathbb {R}\times \mathbb {R}^{n}.
\]
This is an orbit of the connected group \(\textrm{SO}^+(1,n)\subset \textrm{GL}(d,\mathbb {R})\) defined by the preservation of Q.
We treated different realisations of two- and three-dimensional hyperbolic space in Examples 2.1 and 3.8, respectively. Now, we start with the 1-dimensional example. In that case, we have the quadratic form \(Q(t,x)=-t^2+x^2\) which is preserved by the group
\[
\textrm{SO}^+(1,1)=\left\{ \begin{pmatrix}\cosh s & \sinh s\\ \sinh s & \cosh s\end{pmatrix}\;\middle |\; s\in \mathbb {R}\right\} .
\]
Each nonzero point (t, 0) and (0, x) is contained in a unique orbit of this group. This describes all but five of the orbits. The remaining orbits constitute the level set \(\{Q=0\}=\{t^2=x^2\}\): four diagonal rays and the origin. The latter five orbits cannot be separated by open sets; hence, \(\mathbb {R}^2/\textrm{SO}^+(1,1)\) is not Hausdorff and therefore not a manifold, and we cannot do calculus on this space. However, if we take away the diagonals \(t^2=x^2\), the remaining quotient space \((\mathbb {R}^2\setminus \{t^2=x^2\})/\textrm{SO}^+(1,1)\) is homeomorphic to the union of four open intervals, which is a manifold. Given that the use of the equivariant maps in deep learning for this example is in an integral over \(\mathbb {R}^2\), it is harmless to omit the set \(\{t^2=x^2\}\) of measure zero in \(\mathbb {R}^2\) from the domain of these equivariant maps.
To describe continuous \(\textrm{SO}^+(1,1)\)-equivariant maps \(\mathbb {R}^2\setminus \{Q=0\}\rightarrow V\), for any continuous representation V of \(\textrm{SO}^+(1,1)\), we can proceed as we did in Proposition B.1. The equivariant maps are given by the sums
\[
(t,x)\mapsto \sum _i c_i(t,x)\,f(t,x)\,v_i,
\]
where each \(c_i\) is a continuous function, and each \(c_i(t,x)\) depends only on the \(\textrm{SO}^+(1,1)\)-orbit containing (t, x), and where f(t, x) is the matrix \(\frac{1}{\sqrt{-Q}}\begin{pmatrix}t & x\\ x & t\end{pmatrix}\) when \(Q<0\) and \(\frac{1}{\sqrt{Q}}\begin{pmatrix}x & t\\ t & x\end{pmatrix}\) when \(Q>0\).
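The role of f(t, x) can be checked numerically on the branch \(Q<0\), \(t>0\), where f(t, x) is itself an element of \(\textrm{SO}^+(1,1)\) and therefore satisfies \(f(g\cdot (t,x))=g\,f(t,x)\) exactly. A minimal Python sketch (the names boost, matvec and f are ours):

```python
import math

def boost(s):
    """Element of SO+(1,1): preserves Q(t, x) = -t^2 + x^2."""
    return [[math.cosh(s), math.sinh(s)], [math.sinh(s), math.cosh(s)]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def f(t, x):
    """The matrix f(t, x) from the text, on the branch Q = -t^2 + x^2 < 0."""
    q = -t * t + x * x
    assert q < 0
    s = math.sqrt(-q)
    return [[t / s, x / s], [x / s, t / s]]
```

In particular, f(t, x) maps the base point \((\sqrt{-Q},0)^T\) back to \((t,x)^T\), in analogy with the maps \(f_n\) on the spheres.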
In higher dimensions, the quotient space \(\mathbb {R}^d/\textrm{SO}^+(1,n)\) behaves similarly. The level set \(\{Q=0\}=\{t^2=\langle x,x\rangle \}\) consists of 3 orbits of \(\textrm{SO}^+(1,n)\), with \(t=0\), \(t>0\) and \(t<0\). These orbits cannot be separated by open sets, and \(\mathbb {R}^d/\textrm{SO}^+(1,n)\) is not Hausdorff. At positive values of Q, the level set is one orbit of \(\textrm{SO}^+(1,n)\), and at negative values of Q, the level set consists of two orbits (hyperbolic spaces). The quotient of the full measure subset \((\mathbb {R}^d\setminus \{Q=0\})\) by the action of \(\textrm{SO}^+(1,n)\) is homeomorphic to the union of three open intervals.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Knibbeler, V. Computing equivariant matrices on homogeneous spaces for geometric deep learning and automorphic Lie algebras. Adv Comput Math 50, 27 (2024). https://doi.org/10.1007/s10444-024-10126-7
Keywords
- Geometric deep learning
- Equivariant convolutional kernels
- Automorphic Lie algebras
- Homogeneous space
- Geometric Frobenius reciprocity