Figure - available from: Discrete & Computational Geometry
Examples of letters $\mathtt{A}, \mathtt{D}, \mathtt{O}, \mathtt{P}, \mathtt{Q}, \mathtt{R}$ represented by functions $\varphi_\mathtt{A}, \varphi_\mathtt{D}, \varphi_\mathtt{O}, \varphi_\mathtt{P}, \varphi_\mathtt{Q}, \varphi_\mathtt{R}$ from $\mathbb{R}^2$ to the real numbers. Each function $\varphi_Y:\mathbb{R}^2\rightarrow\mathbb{R}$ describes the grey level at each point of the topological space $\mathbb{R}^2$, with reference to the considered instance of the letter $Y$. Black and white correspond to the values 0 and 1, respectively (so that light grey corresponds to a value close to 1). In spite of the differences between the shapes of the considered letters, the persistent homology of the functions $\varphi_\mathtt{A}, \varphi_\mathtt{D}, \varphi_\mathtt{O}, \varphi_\mathtt{P}, \varphi_\mathtt{Q}, \varphi_\mathtt{R}$ is the same in every degree.
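For readers who want to reproduce the comparison behind the caption, the following minimal sketch (assuming the six letter images are available as grayscale arrays with values in [0, 1]; the helper `load_letter` is hypothetical) computes the sublevel-set persistent homology of each grey-level function with GUDHI's cubical complexes.

```python
# Sketch, not from the paper: sublevel-set persistent homology of a
# grey-level image via a cubical complex.  `load_letter` is a
# hypothetical helper returning a 2D numpy array with values in [0, 1].
import gudhi
import numpy as np

def sublevel_diagram(img: np.ndarray):
    """Persistence diagram of the sublevel-set filtration of the
    grey-level function encoded by the pixel values of `img`."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=img)
    return cc.persistence()

diagrams = {Y: sublevel_diagram(load_letter(Y)) for Y in "ADOPQR"}
# The caption's point: all six diagrams agree in every homology degree,
# although the letters have visibly different shapes.
```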

Source publication
Article
Full-text available
In many applications concerning the comparison of data expressed by $\mathbb{R}^m$-valued functions defined on a topological space X, the invariance with respect to a given group G of self-homeomorphisms of X is required. While persistent homology is quite efficient in the topological and qualitative comparison of this kind of data when the invariance group G...
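The limitation the abstract starts from can be seen numerically: persistent homology does not change when a function is composed with a self-homeomorphism of its domain, so by itself it cannot enforce a smaller invariance group. A minimal sketch, using a lower-star filtration on a sampled interval (the signal and the reparametrization are illustrative choices):

```python
import numpy as np
import gudhi

def sublevel_persistence(values):
    """Persistence of the lower-star filtration of a function sampled
    along a line segment (a path graph)."""
    st = gudhi.SimplexTree()
    for i, v in enumerate(values):
        st.insert([i], filtration=float(v))
    for i in range(len(values) - 1):
        st.insert([i, i + 1], filtration=float(max(values[i], values[i + 1])))
    return st.persistence()

xs = np.linspace(0.0, 1.0, 2000)
phi = np.sin(6 * np.pi * xs)
g = xs ** 2                       # a self-homeomorphism of [0, 1]
# Up to sampling resolution, the two diagrams coincide: persistent
# homology cannot tell phi from phi ∘ g.
print(sublevel_persistence(phi))
print(sublevel_persistence(np.sin(6 * np.pi * g)))
```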

Similar publications

Article
Full-text available
Vectorization of images is a key concern uniting the computer graphics and computer vision communities. In this paper we present a novel idea for efficient, customizable vectorization of raster images, based on Catmull–Rom spline fitting. The algorithm maintains a good balance between photo-realism and photo abstraction, and hence is applicable...
Conference Paper
Full-text available
The Reeb graph is a construction that studies a topological space through the lens of a real-valued function. It has been widely used in applications; however, its use on real data means that it is desirable and increasingly necessary to have methods for comparison of Reeb graphs. Recently, several methods to define metrics on the space of Reeb grap...
Article
Full-text available
Since their introduction in the shape analysis community, functional maps have met with considerable success due to their ability to compactly represent dense correspondences between deformable shapes, with applications ranging from shape matching and image segmentation, to exploration of large shape collections. Despite the numerous advantages of...

Citations

... As a first step in this direction, in this paper we show how GENEOs could be used to obtain stability of persistence diagrams of 1D signals in the presence of impulsive noise. For the use of such operators in the comparison of 1D signals under the action of several transformation groups, we refer the interested reader to the paper [14]. This type of signal is important in many applications, such as those concerning EEG data and time series. ...
... GENEOs have been studied in [14] as a new tool in TDA, since they allow for an extension of the theory that is not invariant under the action of every homeomorphism of the considered domain. This is important in applications where the invariance group is not the group of all homeomorphisms, such as the ones concerning shape comparison. ...
... Interestingly, GENEOs are also deeply related to the foliation method used to define the matching distance in two-dimensional persistent homology [15,16] and can be seen as a theoretical bridge between TDA and Machine Learning [11]. Furthermore, these operators make available lower bounds for the natural pseudo-distance $d_G(\varphi_1, \varphi_2) := \inf_{g\in G} \|\varphi_1 - \varphi_2\circ g\|_\infty$, associated with a group $G$ of self-homeomorphisms of the domain of the signals $\varphi_1, \varphi_2$ [14]. For general information about the interest in the theory of GENEOs and its applications, we refer the reader to [17]. ...
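The definition of $d_G$ recalled above becomes directly computable in a finite setting. The sketch below is an illustrative discretization, with $G$ taken to be the cyclic shifts of a sampled signal rather than a full homeomorphism group, so the infimum is a minimum over finitely many group elements:

```python
import numpy as np

def natural_pseudo_distance(phi1, phi2):
    """d_G(phi1, phi2) = min over g in G of sup |phi1 - phi2 ∘ g|,
    for G = the finite group of cyclic shifts of a sampled signal.
    A toy stand-in for the homeomorphism groups of the paper."""
    n = len(phi1)
    return min(np.max(np.abs(phi1 - np.roll(phi2, k))) for k in range(n))

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
print(natural_pseudo_distance(np.sin(t), np.cos(t)))  # ≈ 0: cos is a shift of sin
```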
Article
Full-text available
In recent years, group equivariant non-expansive operators (GENEOs) have started to find applications in the fields of Topological Data Analysis and Machine Learning. In this paper we show how these operators can also be of use for the removal of impulsive noise and for increasing the stability of TDA in the presence of noisy data. In particular, we prove that GENEOs can control the expected value of the perturbation of persistence diagrams caused by uniformly distributed impulsive noise, when data are represented by L-Lipschitz functions from ℝ to ℝ.
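One concrete example of a GENEO suited to impulsive noise (an illustrative choice on a sampled stand-in for ℝ, not necessarily the operator analyzed in the paper) is a sliding median: it commutes with integer shifts, is non-expansive in the sup norm, and discards isolated outliers.

```python
import numpy as np

def sliding_median(phi, w=5):
    """Sliding median on a cyclically sampled signal: shift-equivariant
    (rolling the input rolls the output) and non-expansive in the sup
    norm, hence a GENEO for the translation group.  The window width w
    is an illustrative choice."""
    padded = np.pad(phi, w // 2, mode="wrap")
    return np.array([np.median(padded[i:i + w]) for i in range(len(phi))])

rng = np.random.default_rng(0)
phi = np.sin(np.linspace(0, 2 * np.pi, 200))
noisy = phi.copy()
spikes = rng.choice(200, size=10, replace=False)
noisy[spikes] += rng.uniform(-2, 2, size=10)        # impulsive noise
print(np.max(np.abs(sliding_median(noisy) - phi)))  # far smaller than the spikes
```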
... The key tool here is "persistent homology", which is an adaptation of homology to data in the form of point clouds. In the papers (Frosini and Jabłoński 2016; Bergomi et al. 2019; Conti et al. 2022) the authors introduce so-called group equivariant non-expansive operators (GENEOs) in the context of TDA. This gives a different approach to the question of symmetries in neural networks, in which the topology and geometry of the data are not a priori given. ...
Article
Full-text available
We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks. We develop gauge equivariant convolutional neural networks on arbitrary manifolds $\mathcal{M}$ using principal bundles with structure group $K$ and equivariant maps between sections of associated vector bundles. We also discuss group equivariant neural networks for homogeneous spaces $\mathcal{M}=G/K$, which are instead equivariant with respect to the global symmetry $G$ on $\mathcal{M}$. Group equivariant layers can be interpreted as intertwiners between induced representations of $G$, and we show their relation to gauge equivariant convolutional layers. We analyze several applications of this formalism, including semantic segmentation and object detection networks. We also discuss the case of spherical networks in great detail, corresponding to the case $\mathcal{M}=S^2=\mathrm{SO}(3)/\mathrm{SO}(2)$. Here we emphasize the use of Fourier analysis involving Wigner matrices, spherical harmonics and Clebsch–Gordan coefficients for $G=\mathrm{SO}(3)$, illustrating the power of representation theory for deep learning.
... Geometric Deep Learning is indeed trying to produce a geometric unification of several approaches to machine learning, focusing on the concepts of symmetry and invariance. At the intersection between this research field and Topological Data Analysis, it has been proposed to extend the study of the geometry of the space of data to the study of the geometry of the space of the observers/agents that process the data [15,16]. This idea is both natural and relevant, since the interpretation of data depends on the chosen observers, and the approximation of the agents requires knowledge of the topological and geometric properties of the space such agents belong to, including connectivity, convexity, compactness, curvature and so on. ...
... The development of a good topological and geometric theory of the spaces of GENEOs could indeed produce new methods for approximating external agents in such spaces, suggest how to change such operators without losing their equivariance, benefit from their lattice structure with respect to the operations of maximization and minimization [18] (see the sketch below), and make it possible to manage relations and conflicts that can arise in intelligent structures [14], to give just a few examples. As for the link between GENEOs and Topological Data Analysis (with particular reference to persistent homology), we refer the interested reader to [5,16]. A central role in this link is played by the so-called natural pseudo-distance [10][11][12][17]. ...
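The lattice structure mentioned in the snippet admits a one-line illustration: the pointwise maximum (or minimum) of two GENEOs is again equivariant and non-expansive. A hedged sketch in the same finite, translation-equivariant setting used elsewhere on this page (the two operators below are illustrative choices):

```python
import numpy as np

def lattice_max(F1, F2):
    """Pointwise maximum of two GENEOs: if F1 and F2 are equivariant
    and non-expansive, so is phi -> max(F1(phi), F2(phi)), since max
    commutes with the group action and is itself non-expansive."""
    return lambda phi: np.maximum(F1(phi), F2(phi))

F1 = lambda phi: np.roll(phi, 1)                 # a shift operator
F2 = lambda phi: 0.5 * (phi + np.roll(phi, 1))   # a moving average
F = lattice_max(F1, F2)
phi = np.sin(np.linspace(0, 2 * np.pi, 8, endpoint=False))
print(np.allclose(F(np.roll(phi, 3)), np.roll(F(phi), 3)))  # True: equivariance
```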
Article
Full-text available
Recent advances in machine learning have highlighted the importance of using group equivariant non-expansive operators for building neural networks in a more transparent and interpretable way. An operator is called equivariant with respect to a group if the action of the group commutes with the operator. Group equivariant non-expansive operators can be seen as multi-level components that can be joined and connected in order to form neural networks by applying the operations of chaining, convex combination and direct product. In this paper we prove that each linear G-equivariant non-expansive operator (GENEO) can be produced by a weighted summation associated with a suitable permutant measure, provided that the group G acts transitively on a finite signal domain. This result is based on the Birkhoff–von Neumann decomposition of doubly stochastic matrices and some well-known facts in group theory. Our theorem makes available a new method to build all linear GENEOs with respect to a transitively acting group in the finite setting. This work is part of the research devoted to developing a good mathematical theory of GENEOs, seen as relevant components in machine learning.
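A finite-setting sketch of the construction named in this abstract, under illustrative choices: the signal domain is Z_n with G the cyclic translations acting transitively, the permutant elements are the shift permutations (which commute with every translation), and a weighted summation over them gives a linear GENEO whenever the weights have total absolute mass at most 1.

```python
import numpy as np

def linear_geneo(phi, mu):
    """Weighted summation associated with a permutant measure mu:
    F(phi) = sum_k mu[k] * (phi composed with the k-th cyclic shift).
    Linear and translation-equivariant by construction; non-expansive
    whenever sum(|mu|) <= 1."""
    assert np.sum(np.abs(mu)) <= 1.0 + 1e-12
    return sum(w * np.roll(phi, k) for k, w in enumerate(mu))

phi = np.sin(np.linspace(0, 2 * np.pi, 12, endpoint=False))
mu = np.array([0.5, 0.25, 0.25])    # an illustrative permutant measure on 3 shifts
shifted_then_F = linear_geneo(np.roll(phi, 4), mu)
F_then_shifted = np.roll(linear_geneo(phi, mu), 4)
print(np.allclose(shifted_then_F, F_then_shifted))  # True: equivariance
```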
... GENEOs have been studied in [13] as a new tool in TDA, since they allow for an extension of the theory that is not invariant under the action of every homeomorphism of the considered domain. This is important in applications where the invariance group is not the group of all homeomorphisms, such as the ones concerning shape comparison. ...
... Interestingly, GENEOs are also deeply related to the foliation method used to define the matching distance in 2-dimensional persistent homology [6,7] and can be seen as a theoretical bridge between TDA and Machine Learning [2]. Furthermore, these operators make available lower bounds for the natural pseudo-distance $d_G(\varphi_1, \varphi_2) := \inf_{g\in G} \|\varphi_1 - \varphi_2\circ g\|_\infty$, associated with a group $G$ of self-homeomorphisms of the domain of the signals $\varphi_1, \varphi_2$ [13]. ...
Preprint
Full-text available
In recent years, group equivariant non-expansive operators (GENEOs) have attracted attention in the fields of Topological Data Analysis and Machine Learning. In this paper we show how these operators can also be of use for the removal of impulsive noise and for increasing the stability of TDA in the presence of noisy data. In particular, we prove that GENEOs can control the expected value of the perturbation of persistence diagrams caused by uniformly distributed impulsive noise, when data are represented by $L$-Lipschitz functions from $\mathbb{R}$ to $\mathbb{R}$.
... We indeed know that TDA and Persistent Homology allow for a qualitative and efficient geometric study of the data space, but suffer from some important limitations, since Persistent Homology alone is not able to distinguish between some functions. Fortunately, the joint use of TDA and GENEOs overcomes this difficulty in the discrimination of data (Frosini, 2016; Frosini and Jabłoński, 2016; Bergomi et al., 2019). In other words, GENEOs are able to preserve information on the data that would have been lost through TDA alone. ...
... For more details and proofs of the results and concepts illustrated in Section 2, we refer the interested reader to the papers (Frosini, 2016; Frosini and Jabłoński, 2016; Frosini and Quercioli, 2017; Camporesi et al., 2018; Bergomi et al., 2019). The other sections present our new results about the construction of non-linear GENEOs via symmetric functions and permutants. ...
Article
Full-text available
Group Equivariant Operators (GEOs) are a fundamental tool in the research on neural networks, since they make available a new kind of geometric knowledge engineering for deep learning, which can exploit symmetries in artificial intelligence and reduce the number of parameters required in the learning process. In this paper we introduce a new method to build non-linear GEOs and non-linear Group Equivariant Non-Expansive Operators (GENEOs), based on the concepts of symmetric function and permutant. This method is particularly interesting because of the good theoretical properties of GENEOs and the ease of use of permutants to build equivariant operators, compared to the direct use of the equivariance groups we are interested in. In our paper, we prove that the technique we propose works for any symmetric function, and benefits from the approximability of continuous symmetric functions by symmetric polynomials. A possible use in Topological Data Analysis of the GENEOs obtained by this new method is illustrated.
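A minimal sketch of the recipe named in this abstract, under illustrative choices: the permutant is a small set of cyclic shifts on Z_n (each commutes with every translation), and the symmetric function is the pointwise maximum, which is non-expansive; composing the two yields a non-linear GENEO for the translation group.

```python
import numpy as np

def nonlinear_geneo(phi, permutant=(0, 1, 2)):
    """Apply each element of the permutant (cyclic shifts here), then
    combine with a symmetric non-expansive function (the pointwise
    maximum).  The result is non-linear, translation-equivariant and
    non-expansive in the sup norm."""
    stacked = np.stack([np.roll(phi, k) for k in permutant])
    return stacked.max(axis=0)

phi = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
lhs = nonlinear_geneo(np.roll(phi, 5))
rhs = np.roll(nonlinear_geneo(phi), 5)
print(np.allclose(lhs, rhs))  # True: the group action commutes with F
```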
... The concept of group equivariant non-expansive operator (GENEO) has recently been proposed as a versatile tool in topological data analysis and deep learning [5,9,10]. We recall that an operator F is called equivariant with respect to a group G if the action of G commutes with F. ...
Preprint
Full-text available
Group equivariant non-expansive operators have recently been proposed as basic components in topological data analysis and deep learning. In this paper we study some geometric properties of the spaces of group equivariant operators and show how a space $\mathcal{F}$ of group equivariant non-expansive operators can be endowed with the structure of a Riemannian manifold, thus making gradient descent methods available for the minimization of cost functions on $\mathcal{F}$. As an application of this approach, we also describe a procedure to select a finite set of representative group equivariant non-expansive operators in the considered manifold.
... Unfortunately, $d_G$ is difficult to compute, but [8] illustrates a possible path to approximate the natural pseudo-distance by means of a dual approach involving persistent homology and GENEOs. In particular, one can see that a good approximation of the space F(Φ, G) of all GENEOs corresponds to a good approximation of the pseudo-distance $d_G$. ...
... In this section the mathematical model illustrated in [8] will be briefly recalled. Let X be a (non-empty) topological space, and Φ be a topological subspace of the space $C^0_b(X,\mathbb{R})$ of continuous bounded functions from X to $\mathbb{R}$, endowed with the topology induced by the sup-norm $\|\cdot\|_\infty$. ...
... If X has nontrivial homology in degree k, the following key result holds [8]. ...
Preprint
Group equivariant operators are playing a more and more relevant role in machine learning and topological data analysis. In this paper we present some new results concerning the construction of $G$-equivariant non-expansive operators (GENEOs) from a space $\varPhi$ of real-valued bounded continuous functions on a topological space $X$ to $\varPhi$ itself. The space $\varPhi$ represents our set of data, while $G$ is a subgroup of the group of all self-homeomorphisms of $X$, representing the invariance we are interested in.
... An approach based on GENEOs could contribute to explaining and understanding the architecture of neural networks. Another important reason to study the space of GENEOs is the existence of a link between these operators and Topological Data Analysis, with particular reference to persistent homology [14,5]. A central role in this link is played by the so-called natural pseudo-distance [8,9,10,15]. ...
Preprint
Full-text available
The study of $G$-equivariant operators is of great interest to explain and understand the architecture of neural networks. In this paper we show that each linear $G$-equivariant operator can be produced by a suitable permutant measure, provided that the group $G$ transitively acts on a finite signal domain $X$. This result makes available a new method to build linear $G$-equivariant operators in the finite setting.
... One example of these problems is given in the method section. A major source of inspiration in that regard were the following works [8], [20], [19]. Our main effort is to explore the possibility of using their results in the case of Bongard Problems. ...
... Remark: Since the set F of all G-equivariant non-expansive operators can be approximated by a finite subset of F [20], we only need a small subset of operators to perform a clustering that matches the given one. ...
Article
Full-text available
Bongard problems are a set of 100 visual puzzles posed by M. M. Bongard, where each puzzle consists of twelve images separated into two groups of six images. The task is to find the unique rule separating the two classes in each given problem. The problems were first posed as a challenge for the AI community, to test machines' ability to imitate complex, context-dependent thinking processes using only minimal information. Although some work has been done on these problems, none of the previous approaches could automatically solve all of them. The present paper is a contribution to attacking these problems with a different approach, combining the tools of persistent homology with machine learning methods. In this work, we present an algorithm and show that it is able to solve problems involving differences in connectivity and size, as examples; we also show that it can solve problems involving a much larger set of differences, provided the right G-equivariant operators.
... Our contribution is the definition of a mathematical model where data are represented as function spaces associated with groups of transformations. Data are manipulated through group equivariant non-expansive operators (GENEOs) (ref. 16), which are, in layman's terms, blind to the action of the group on the data. We prove that, under the assumption that the function spaces are compact, the spaces of GENEOs are compact with respect to a suitable pseudo-metric. ...
... In addition to the pseudo-metric $D_\Phi$, we define another pseudo-distance $d_G$ on the space $\Phi$ (ref. 16). This represents the ground truth in our model. ...
Article
Full-text available
We provide a general mathematical framework for group and set equivariance in machine learning. We define group equivariant non-expansive operators (GENEOs) as maps between function spaces associated with groups of transformations. We study the topological and metric properties of the space of GENEOs to evaluate their approximating power and set the basis for general strategies to initialize and compose operators. We define suitable pseudo-metrics for the function spaces, the equivariance groups and the set of non-expansive operators. We prove that, under suitable assumptions, the space of GENEOs is compact and convex. These results provide fundamental guarantees in a machine learning perspective. By considering isometry-equivariant non-expansive operators, we describe a simple strategy to select and sample operators. Thereafter, we show how selected and sampled operators can be used both to perform classical metric learning and to inject knowledge in artificial neural networks. Controlling the flow and representation of information in deep neural networks is fundamental to making networks intelligible. Full-text access to a view-only version of this paper is available at the link https://rdcu.be/bP6HV