Article

Random Sets and Integral Geometry

Authors: Georges Matheron

... The Fell topology $\mathcal{T}_F$ is generated by all sets $\mathcal{F}_V$ and $\mathcal{F}^K$, and the topological space $(\mathcal{F}(\mathbb{R}^d), \mathcal{T}_F)$ is compact, Hausdorff and separable [18]. ...
... Remark 2.40. We find for closed sets $F_n, F$ in $\mathbb{R}^d$ that $F_n \to F$ if and only if [18] (1) for every $x \in F$ there exists $x_n \in F_n$ such that $x = \lim_{n\to\infty} x_n$, and (2) if $F_{n_k}$ is a subsequence, then every convergent sequence $x_{n_k}$ with $x_{n_k} \in F_{n_k}$ satisfies $\lim_{k\to\infty} x_{n_k} \in F$. ...
... Definition 2.44 (Random closed / open set according to Choquet (see [18] for more details)). ...
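For readers following the notation in these excerpts, the hit-and-miss sets that generate the Fell topology are defined, in the standard way going back to Matheron, as
$$\mathcal{F}_V = \{F \in \mathcal{F}(\mathbb{R}^d) : F \cap V \neq \emptyset\} \ (V \text{ open}), \qquad \mathcal{F}^K = \{F \in \mathcal{F}(\mathbb{R}^d) : F \cap K = \emptyset\} \ (K \text{ compact}),$$
and a random closed set is then a measurable map from a probability space into $(\mathcal{F}(\mathbb{R}^d), \mathcal{B}(\mathcal{F}))$, where $\mathcal{B}(\mathcal{F})$ denotes the Borel $\sigma$-algebra of the Fell topology.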
Article
Full-text available
In this first part of a series of three papers, we set up a framework to study the existence of uniformly bounded extension and trace operators for $ W^{1, p} $-functions on randomly perforated domains, where the geometry is assumed to be stationary ergodic. We drop the classical assumption of minimal smoothness and study stationary geometries which have no global John regularity. For such geometries, uniform extension operators can be defined only from $ W^{1, p} $ to $ W^{1, r} $ with the strict inequality $ r < p $. In particular, we estimate the $ L^{r} $-norm of the extended gradient in terms of the $ L^{p} $-norm of the original gradient. Similar relations hold for the symmetric gradients (for $ {\mathbb{R}^{d}} $-valued functions) and for traces on the boundary. As a byproduct we obtain some Poincaré and Korn inequalities of the same spirit. Such extension and trace operators are important for compactness in stochastic homogenization. In contrast to former approaches and results, we use very weak assumptions: local $ (\delta, M) $-regularity to quantify statistically the local Lipschitz regularity and isotropic cone mixing to quantify the density of the geometry and the mesoscopic properties. These two properties are sufficient to reduce the problem of extension operators to the connectivity of the geometry. In contrast to former approaches, we do not require a minimal distance between the inclusions and we allow for globally unbounded Lipschitz constants and percolating holes. We will illustrate our method by applying it to the Boolean model based on a Poisson point process and to a Delaunay pipe process, for which we can explicitly estimate the connectivity terms.
... (a) continuous pore size distribution (c-PSD), for which there is a one-to-one relationship with the granulometry function [46], is used to characterize the size distribution of bulges, and (b) a geometrically defined mercury intrusion pore size distribution (MIP-PSD, also called porosimetry curve) is used to characterize the size distribution of bottlenecks. ...
... Note that $R^2$ is between 0 and 1, where 1 indicates a perfect fit. A mathematical formalization of geodesic tortuosity in the framework of random sets, a key object in stochastic geometry and mathematical morphology for microstructure characterization [46,93], was recently provided by Neumann et al. [44,94], while a slightly modified version of geodesic tortuosity was presented by Barman et al. [11]. ...
... Considering percolation path tortuosity for varying radii reveals interesting insights on porous microstructures going beyond the information gained by geodesic tortuosity. This is demonstrated using an example of paper-based materials in [46,104]. ...
Chapter
Full-text available
Many different definitions of tortuosity can be found in literature. In addition, many different methodologies are nowadays available to measure or to calculate tortuosity. This leads to confusion and misunderstanding in scientific discussions of the topic. In this chapter, a thorough review of all relevant tortuosity types is presented. Thereby, the underlying concepts, definitions and associated theories are discussed in detail and for each tortuosity type separately. In total, more than 20 different tortuosity types are distinguished in this chapter. In order to avoid misinterpretation of scientific data and misunderstandings in scientific discussions, we introduce a new classification scheme for tortuosity, as well as a systematic nomenclature, which helps to address the inherent differences in a clear and efficient way. Basically, all relevant tortuosity types can be grouped into three main categories, which are (a) the indirect physics-based tortuosities, (b) the direct geometric tortuosities and (c) the mixed tortuosities. Significant differences among these tortuosity types are detected when applying the different methods and concepts to the same material or microstructure. The present review of the involved tortuosity concepts shall serve as a basis for a better understanding of the inherent differences. The proposed classification and nomenclature shall contribute to more precise and unequivocal descriptions of tortuosity.
... The set $C_\delta K$ is called the convolution body of $K$, due to the fact that $g_K$ is the convolution of the indicator functions of $K$ and $-K$. Convolution bodies and the covariogram function were studied in [8,9,10,12,14]. Specifically, in relation to the phase retrieval problem in Fourier analysis, it was studied in [1,2,3]. ...
... The shape of $C_\delta K$, if scaled by a factor $(1-\delta)^{-1}$, approaches the polar projection body of $K$, denoted by $\Pi^* K$, which is the unit ball of the norm defined by $\|v\|_{\Pi^* K} = |P_{v^\perp} K|_{n-1}$ for every unit vector $v \in S^{n-1}$, where $P_{v^\perp}$ is the orthogonal projection to the hyperplane orthogonal to $v$. This was first observed by Matheron in [9], where the covariogram function was introduced. Indeed it was proven in [12, Theorem 2.2] that (1) lim ...
... To begin with, we consider the spherical contact distribution function $H : [0, \infty) \to [0, 1]$ of the pore space (see, e.g., Refs. [41,56,57]), where for each $r \geq 0$, the value of $H(r)$ is the (conditional) probability that the minimum distance from a randomly selected point of the pore phase to the solid phase is less than or equal to $r$. Formally, Table I. ...
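As a minimal illustration of how $H(r)$ can be estimated from a segmented (binary) image, the following Python sketch uses a Euclidean distance transform; the array `pore` is a hypothetical binary image with 1 = pore and 0 = solid, not data from the cited work.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def spherical_contact_distribution(pore, r_values):
    """Estimate H(r): P(distance from a random pore voxel to the solid phase <= r)."""
    # Distance of every pore voxel to the nearest solid (zero) voxel.
    dist = distance_transform_edt(pore)
    pore_dists = dist[pore.astype(bool)]
    return np.array([(pore_dists <= r).mean() for r in r_values])

# Hypothetical example: random binary image standing in for a segmented pore space.
rng = np.random.default_rng(0)
pore = rng.random((200, 200)) < 0.65          # ~65 % porosity, purely illustrative
H = spherical_contact_distribution(pore, r_values=np.arange(0, 10))
```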
... In other words, $r_{\max}$ is defined as the median of the continuous pore size distribution, which is computed via morphological opening of the pore space. Note that there is a one-to-one relationship, explicitly given in Ref. [36], between the continuous pore size distribution and the granulometry function from mathematical morphology [56]. ...
Article
Full-text available
Excursion sets of Gaussian random fields are used to model the three-dimensional (3D) morphology of differently manufactured porous glasses (PGs), which vary with respect to their mean pore widths measured by mercury intrusion porosimetry. The stochastic 3D model is calibrated by means of volume fractions and two-point coverage probability functions estimated from tomographic image data. Model validation is performed by comparing model realizations and image data in terms of morphological descriptors which are not used for model fitting. For this purpose, we consider mean geodesic tortuosity and constrictivity of the pore space, quantifying the length of the shortest transportation paths and the strength of bottleneck effects, respectively. Additionally, a stereological approach for parameter estimation is presented, i.e., the 3D model is calibrated using merely two-dimensional (2D) cross-sections of the 3D image data. Doing so, on average, a comparable goodness of fit is achieved as well. The variance of the calibrated model parameters is discussed, which is estimated on the basis of randomly chosen, individual 2D cross-sections. Moreover, interpolating between the model parameters calibrated to differently manufactured glasses enables the predictive simulation of virtual but realistic PGs with mean pore widths that have not yet been manufactured. The predictive power is demonstrated by means of cross-validation. Using the presented approach, relationships between parameters of the manufacturing process and descriptors of the resulting morphology of PGs are quantified, which opens possibilities for an efficient optimization of the underlying manufacturing process.
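A minimal sketch of the excursion-set idea (not the calibrated model of the cited paper): generate a stationary Gaussian random field by smoothing white noise, then threshold it so that the excursion set matches a prescribed volume fraction; all parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def gaussian_excursion_set(shape, correlation_length, volume_fraction):
    """Binary excursion set {Z >= t} of a smoothed-white-noise Gaussian field."""
    field = gaussian_filter(rng.standard_normal(shape), sigma=correlation_length)
    field = (field - field.mean()) / field.std()      # approximately standard Gaussian marginals
    threshold = np.quantile(field, 1.0 - volume_fraction)
    return field >= threshold

# Illustrative 3D realization with ~30 % solid phase.
solid = gaussian_excursion_set((64, 64, 64), correlation_length=3.0, volume_fraction=0.30)
```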
... For an exhaustive treatment of point processes, and for a unified theory on germ-grain models, we refer to Daley and Vere-Jones (2003, 2008), and to Matheron (1975), Molchanov (2005), Schneider and Weil (2008), respectively. Here we only recall some basic facts and definitions. ...
... Here $\mathcal{F}$ denotes the class of the closed subsets of $\mathbb{R}^d$, and $\sigma_{\mathcal{F}}$ is the $\sigma$-algebra generated by the so-called Fell topology, or hit-or-miss topology, that is, the topology generated by the set system $\{\mathcal{F}_G : G \in \mathcal{G}\} \cup \{\mathcal{F}^C : C \in \mathcal{C}\}$, where $\mathcal{G}$ and $\mathcal{C}$ are the systems of the open and compact subsets of $\mathbb{R}^d$, respectively, while $\mathcal{F}_A := \{F \in \mathcal{F} : F \cap A \neq \emptyset\}$ and $\mathcal{F}^A := \{F \in \mathcal{F} : F \cap A = \emptyset\}$ (e.g., see Matheron (1975)). For the measurability of $\mathcal{H}^{d-1}(\partial\Theta)$ under rectifiability assumptions on $\Theta$ we refer to Zähle (1982); for the measurability of (Θ) and of (Θ) we refer to (Galerne, 2011, p. 47). ...
... The treatment is not exhaustive here; thus, throughout the article we will provide some interesting references for those readers who want to explore in more depth the results we just recall. In particular, we refer to [22][23][24] for an exhaustive treatment of point processes, to [25][26][27][28] for a unified theory on germ-grain models, and also to [29] for an elegant presentation of Poisson processes. ...
... Proposition 10. Let $X$ be an $I_F$-type point process with respect to the finite Poisson point process $Z$ in $\mathbb{R}^d$ having intensity measure $M(dy) = \mu(y)\,dy$, with $F$ as in (25) and ...
... Now, the general theory of probability supporting statistical analysis should cover the theory of random sets (as an extension of random vectors), which are well-defined random elements. See Matheron (1975) or Nguyen (2006). In other words, in view of credible statistics, statistics of random sets should take center stage in empirical research. ...
... But each $\{x\}$ is a closed set of $\mathbb{R}^d$. Thus, following Matheron (1975), we consider random sets taking values as closed subsets of $\mathbb{R}^d$, denoted as $\mathcal{F}(\mathbb{R}^d)$, on which a "hit-or-miss topology" is established to obtain its Borel $\sigma$-field, denoted as $\mathcal{B}(\mathcal{F})$. ...
Article
Purpose: This paper aims to offer a tutorial/introduction to new statistics arising from the theory of optimal transport to empirical researchers in econometrics and machine learning. Design/methodology/approach: The material is presented in a tutorial/survey lecture style to help practitioners with the theoretical material. Findings: The tutorial survey of some main statistical tools (arising from optimal transport theory) should help practitioners to understand the theoretical background in order to conduct empirical research meaningfully. Originality/value: This study is an original presentation useful for newcomers to the field.
... It can transform heterogeneous uncertain information into a unified random-set representation. In the field of fault diagnosis, random sets can describe the fault characteristics and the multi-valued mapping relationships between faults, and can uniformly represent and measure the various kinds of uncertain information in the system [112]. ...
Article
In the aircraft control system, sensor networks are used to sample the attitude and environmental data. As a result of external and internal factors (e.g., environmental and task complexity, inaccurate sensing and complex structure), the aircraft control system contains several uncertainties, such as imprecision, incompleteness, redundancy and randomness. Information fusion technology is usually used to address the uncertainty issue, thus improving the reliability of the sampled data, which can further effectively increase the performance of fault diagnosis decision-making in the aircraft control system. In this work, we first analyze the uncertainties in the aircraft control system and compare different uncertainty quantification methods. Since information fusion can eliminate the effects of the uncertainties, it is widely used in fault diagnosis. Thus, this paper summarizes the recent work in this area. Furthermore, we analyze the application of information fusion methods in the fault diagnosis of the aircraft control system. Finally, this work identifies existing problems in the use of information fusion for diagnosis and outlines future trends.
... These tools include a probability distribution, a probability density, and a suitable reference measure to perform the integration. The hyperspace F(X) does not inherit the standard Euclidean topology, but the Matheron "hit-or-miss" topology (Matheron, 1974), which implies that some of these tools are built differently compared to those designed purely for X. However, as demonstrated in this section, we can work with them in a way consistent with the conventional probabilistic calculus. ...
Conference Paper
Full-text available
Daily internet communication relies heavily on tree-structured graphs, embodied by popular data formats such as XML and JSON. However, many recent generative (probabilistic) models utilize neural networks to learn a probability distribution over undirected cyclic graphs. This assumption of a generic graph structure brings various computational challenges, and, more importantly, the presence of non-linearities in neural networks does not permit tractable probabilistic inference. We address these problems by proposing sum-product-set networks, an extension of probabilistic circuits from unstructured tensor data to tree-structured graph data. To this end, we use random finite sets to reflect a variable number of nodes and edges in the graph and to allow for exact and efficient inference. We demonstrate that our tractable model performs comparably to various intractable models based on neural networks.
... The potential predictor is derived from the regionally averaged value over a significant region of the correlation coefficient (CC) map between the predictand and each variable in the big climate data. The image processing strategy provided by mathematical morphology [43,44] is employed to automatically identify these significant regions. Only the predictors associated with significant anomalies in summer horizontal winds at 850 hPa over the monsoon domain are retained. ...
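A sketch of how such significant regions might be extracted with basic morphological tools; the variable names, grid size and significance threshold are assumptions, not the cited model's actual implementation.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def significant_regions(cc_map, cc_critical, min_structure=None):
    """Mask of grid points with |CC| above the significance threshold, cleaned by a
    morphological opening and split into connected regions."""
    if min_structure is None:
        min_structure = np.ones((3, 3))
    mask = np.abs(cc_map) >= cc_critical
    mask = binary_opening(mask, structure=min_structure)   # remove isolated, spurious points
    labels, n_regions = label(mask)
    return labels, n_regions

# Hypothetical correlation-coefficient map on a lat-lon grid.
rng = np.random.default_rng(2)
cc_map = rng.normal(0.0, 0.3, size=(73, 144))
labels, n = significant_regions(cc_map, cc_critical=0.6)
```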
Article
Full-text available
Afro-Asian summer monsoon precipitation (AfroASMP) is the life blood of billions of people living in many developing countries covering West Africa and Asia. Its complex variabilities are always accompanied by natural disasters like floods, landslides and droughts. Reliable AfroASMP prediction several months in advance is valuable not only for decision-makers but also for regional socioeconomic sustainability. To address the current predicament of AfroASMP seasonal prediction, this study provides an effective machine-learning model (Y-model). Y-model uses monsoon-related big climate data to search for potential predictors, encompassing atmospheric internal factors and external forcings. Only the predictors associated with significant anomalies in summer horizontal winds at 850 hPa over the monsoon domain are retained. These selected predictors are then reorganized into a large ensemble based upon different thresholds of four fundamental principles. These principles include the focused sample sizes, the relationships between predictors and predictand, the independence among predictors, and the extremities of predictors in the forecast year. Real-time predictions can be generated based on the ensemble mean of skillful members during an independent hindcast period. Y-model skillfully predicts four monsoon precipitation indices of AfroASMP during 2011–2022 at leads of 4–12 months; correlation skills range from 0.58 to 0.90 and root mean square error skills are reduced by 11–53% compared to the CFS v2 model at a lead of 1 month. This study offers an effective method for preprocessing predictors in seasonal climate prediction.
... Examples include describing the shape of different types of tissue in medicine [15], the spatial arrangement of plants in an ecosystem [24], and the microstructure of materials [31]. The rich theory of random sets is discussed in [19], [21], [33]. ...
... Random sets have gained popularity in recent years as a useful tool for statistically analysing the geometry of objects in various fields of science. A well-developed theory of random sets can be found in [20], [21] and [28]. The strength of random sets lies in their ability to describe many phenomena in nature. ...
Preprint
Full-text available
In this paper we present a methodology for detecting outliers and testing the goodness-of-fit of random sets using topological data analysis. We construct the filtration from level sets of the signed distance function and consider various summary functions of the persistence diagram derived from the obtained persistent homology. The outliers are detected using functional depths for the summary functions. Global envelope tests using the summary statistics as test statistics are used to construct the goodness-of-fit test. The procedures are justified by a simulation study using germ-grain random set models.
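A rough sketch of the start of this pipeline, under the assumption that the GUDHI library is used as the persistent-homology backend; the signed distance function is computed with SciPy and its sublevel sets define the filtration, while the binary input image below is purely hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
import gudhi  # assumed persistent-homology backend

def signed_distance(binary_set):
    """Signed distance function: negative inside the set, positive outside."""
    inside = distance_transform_edt(binary_set)
    outside = distance_transform_edt(~binary_set)
    return outside - inside

def persistence_diagram(binary_set):
    """Persistence of the sublevel-set filtration of the signed distance function."""
    sdf = signed_distance(binary_set)
    cubical = gudhi.CubicalComplex(top_dimensional_cells=sdf)
    return cubical.persistence()  # list of (dimension, (birth, death)) pairs

# Hypothetical binary image standing in for a realization of a germ-grain random set.
rng = np.random.default_rng(3)
binary_set = rng.random((128, 128)) < 0.4
diagram = persistence_diagram(binary_set)
```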
... The covariogram of a convex body $K$ is given by $g_K(x) = \mathrm{vol}_n(K \cap (K + x))$; see the recent survey by Bianchi [13] for a rich overview of this function. It is relevant here that the support of $g_K$ is $DK$, and it was shown by Matheron [47] that, for $\theta \in S^{n-1}$, ...
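The formula that the excerpt truncates is presumably Matheron's classical relation between the covariogram and projections; stated here from the standard literature (not recovered from the excerpt itself), it reads
$$\frac{\partial g_K}{\partial r}(r\theta)\Big|_{r=0^+} = -\,\mathrm{vol}_{n-1}\big(P_{\theta^{\perp}} K\big), \qquad \theta \in S^{n-1},$$
i.e., the one-sided directional derivative of $g_K$ at the origin equals minus the $(n-1)$-volume of the projection of $K$ onto the hyperplane $\theta^{\perp}$.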
Preprint
Full-text available
The Rogers-Shephard and Zhang's projection inequalities are two reverse, affine isoperimetric inequalities relating the volumes of a convex body and its difference body and polar projection body, respectively. Following a classical work by Schneider, both inequalities have been extended to the so-called higher-order setting. In this work, we establish the higher-order analogues for these inequalities in the setting of log-concave functions.
... A random set [Kendall, 1974, Matheron, 1975, Nguyen, 1978, Molchanov, 2005] is a set-valued random variable, modeling random experiments in which observations come in the form of sets. In the case of finite sample spaces, random sets are called belief functions [Shafer, 1976]. ...
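On a finite sample space, a random set is specified by a mass function on subsets, and the induced belief and plausibility functions can be computed directly; a minimal sketch (the mass assignment below is a made-up example):

```python
def belief(mass, event):
    """Bel(A) = sum of masses of focal sets contained in A."""
    return sum(m for focal, m in mass.items() if set(focal) <= set(event))

def plausibility(mass, event):
    """Pl(A) = sum of masses of focal sets intersecting A."""
    return sum(m for focal, m in mass.items() if set(focal) & set(event))

# Hypothetical mass function of a random set on {a, b, c} (masses sum to 1).
mass = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}
print(belief(mass, {"a", "b"}), plausibility(mass, {"a", "b"}))  # 0.8 1.0
```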
Preprint
Full-text available
Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learnt from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. In this paper we lay the foundations for a 'credal' theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argue, may be inferred from a finite sample of training sets. Bounds are derived for the case of finite hypothesis spaces (both with and without the realizability assumption) as well as for infinite model spaces; these directly generalize classical results.
... Note that this opening can be considered as the subset of $\Xi_i$ which can be covered by a union of disks $D(o, h)$, where each disk is completely contained in $\Xi_i$. The continuous phase size distribution carries the same information as the granulometry function in mathematical morphology [15,32], which is a widely used descriptor measuring the size distribution of complex structures. We estimate the continuous phase size distribution at $h > 0$ from image data by estimating the volume fraction of the morphological opening of $\Xi_i$ with structuring element $D(o, h)$. Edge effects are avoided by means of minus-sampling as described in Section 4.7.2 of. ...
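A minimal sketch of this estimator for 2D image data, using morphological openings with disk-shaped structuring elements (scikit-image is assumed, the input image is hypothetical, and the minus-sampling edge correction mentioned above is omitted for brevity):

```python
import numpy as np
from skimage.morphology import opening, disk

def continuous_phase_size_distribution(phase, radii):
    """Volume (area) fraction of the window covered by the opening of the phase
    with a disk of radius h, for each h in radii."""
    return np.array([opening(phase, disk(h)).sum() / phase.size for h in radii])

# Hypothetical binary image of one phase; in practice this would be segmented image data.
rng = np.random.default_rng(4)
phase = rng.random((256, 256)) < 0.5
fractions = continuous_phase_size_distribution(phase, radii=range(1, 8))
```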
Article
Full-text available
A data-driven modeling approach is presented to quantify the influence of morphology on effective properties in nanostructured sodium vanadium phosphate Na$_3$V$_2$(PO$_4$)$_3$ / carbon composites (NVP/C), which are used as cathode material in sodium-ion batteries. This approach is based on the combination of advanced imaging techniques, experimental nanostructure characterization and stochastic modeling of the 3D nanostructure consisting of NVP, carbon and pores. By 3D imaging and subsequent post-processing involving image segmentation, the spatial distribution of NVP is resolved in 3D, and the spatial distribution of carbon and pores is resolved in 2D. Based on this information, a parametric stochastic model, specifically a Pluri-Gaussian model, is calibrated to the 3D morphology of the nanostructured NVP/C particles. Model validation is performed by comparing the nanostructure of simulated NVP/C composites with image data in terms of morphological descriptors which have not been used for model calibration. Finally, the stochastic model is used for predictive simulation to quantify the effect of varying the amount of carbon while keeping the amount of NVP constant. The presented methodology opens new possibilities for a resource-efficient optimization of the morphology of NVP/C particles by modeling and simulation.
... Mathematical morphology (MM) was first introduced in 1964 by two French researchers named Matheron and Serra [41], [42]. MM can be defined as a theory and technique for analyzing spatial structures, based on set theory and integral geometry. ...
Article
Full-text available
Detection of high impedance faults (HIF) is one of the biggest challenges in power distribution networks. HIF usually occurs when conductors in the distribution network are broken and accidentally come into contact with the ground or a tree branch. The current of this fault is close to the load current level and cannot be detected by overcurrent relays. Also, some regular system phenomena such as capacitor switching, load switching, inrush current and saturation phenomena in current transformers (CT) exhibit features which may overlap with the components of HIF, making HIF detection schemes more complex. In this paper, a new method for HIF detection is presented which is able to distinguish any type of HIF from regular system phenomena. To achieve this, the scheme of morphological gradient edge detection (MGED) is used to process voltage signals. The MGED extracts two main features from the processed signals: first, the edges or changes in the signal are elicited and then, these features are extracted after two cycles from the onset of the fault. Then, based on these features, a high impedance fault detection index (HIFDI) is introduced for distinguishing and classifying HIF from other regular system phenomena. The simulation results for different types of HIF in a sample 20 kV distribution feeder and the IEEE 34-bus distribution test system using EMTP confirm the fast and accurate performance of the proposed method.
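The morphological gradient underlying MGED is the difference between a dilation and an erosion of the signal; a small sketch on a 1D voltage-like signal (window length and the synthetic waveform are illustrative assumptions, not the paper's test cases):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_gradient(signal, window=5):
    """Dilation minus erosion with a flat structuring element of the given length;
    large values indicate abrupt changes (edges) in the signal."""
    return grey_dilation(signal, size=window) - grey_erosion(signal, size=window)

# Illustrative voltage waveform with a sudden distortion starting at sample 600.
t = np.arange(2000)
v = np.sin(2 * np.pi * 50 * t / 2000.0)
v[600:] += 0.2 * np.sign(np.sin(2 * np.pi * 450 * t[600:] / 2000.0))
edges = morphological_gradient(v)
```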
... Another definition for a range that can be used to quantitatively describe the microstructure is called the integral range [14,23,24]. The definition of the integral range in the space $\mathbb{R}^n$ is: ...
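The excerpt breaks off before the formula; for reference, the standard definition (in the form given by Matheron and Lantuéjoul, quoted here from the general literature rather than from the cited paper) is
$$A_n \;=\; \frac{1}{\sigma^2}\int_{\mathbb{R}^n} \overline{C}(h)\, dh,$$
where $\overline{C}(h)$ is the centered covariance of the stationary random function (or of the indicator of the random set) and $\sigma^2 = \overline{C}(0)$ its point variance; for a domain $V$ with $|V| \gg A_n$, the variance of the spatial average then scales like $\sigma^2 A_n / |V|$.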
Article
Full-text available
In this paper, the internal microstructures of three different concrete mixtures were investigated through a digital image analysis procedure conducted on a set of 60 cross-sectional images of hardened concrete specimens. To quantitatively describe the geometry and spatial organization of all the components within the hardened concretes and measure the heterogeneity of different concrete mixtures and sample shapes, a two-dimensional Autocorrelation Function (ACF) analysis program was performed on a series of 60 scanned images of internal cross-sections. This enabled us to define a correlation range called the microstructural characteristic length, $l_i$, which can be used as: (i) an indicator to quantify the properties on the domain size of the internal microstructure of hardened concrete, and (ii) an input parameter for constitutive modeling or for estimating the Representative Volume Element (RVE) of concrete. As the image analysis procedure based on ACF does not require the segmentation of the images, the method proposed in the present paper provides a simple and useful way of quantifying the microstructure of concrete for many practical purposes.
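A minimal sketch of the kind of two-dimensional autocorrelation analysis described above: FFT-based ACF, radially averaged, with a simple definition of the correlation range as the lag at which the normalized ACF first drops below $e^{-1}$. Both choices are assumptions for illustration, not necessarily those of the cited paper.

```python
import numpy as np

def radial_acf(image):
    """Normalized autocorrelation of a 2D image, averaged over directions."""
    f = image - image.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acf = np.fft.ifft2(power).real
    acf /= acf[0, 0]                                  # normalize so ACF(0) = 1
    acf = np.fft.fftshift(acf)
    cy, cx = np.array(acf.shape) // 2
    y, x = np.indices(acf.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), acf.ravel()) / np.bincount(r.ravel())
    return radial                                     # radial[k] ~ ACF at lag k pixels

def correlation_range(radial, threshold=np.exp(-1)):
    """Smallest lag at which the radial ACF falls below the threshold."""
    below = np.nonzero(radial < threshold)[0]
    return below[0] if below.size else None

# Hypothetical grayscale cross-sectional image.
rng = np.random.default_rng(5)
image = rng.random((256, 256))
l_i = correlation_range(radial_acf(image))
```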
... Remark 3.2. Section 1.4 in Matheron (1975) shows that $d_H$ is a metric. However, the metric $d_H$ is not able to detect differences in shape properties. ...
Article
The core of a transferable utility (TU) game, if it is not empty, is prescribed by the set of all stable allocations. The exact determination of the core has exponential time complexity. Therefore, its exact computation is often avoided as the number of players increases. In this work, we propose an estimator for the core of a TU game based on the statistical theory of set estimation. Concretely, we provide a core reconstruction that is obtained in polynomial time for general dimension. Additionally, convergence rates for the estimation error are derived. Finally, a consistent core-center estimator is established as a geometrical application of this methodology.
... Besides the real images, we use synthetic FIB-SEM data obtained using the simulation tool described by . The micro-structure is generated as a realization of a Boolean model (Matheron, 1975; Schneider and Weil, 2000; Molchanov, 1995), which is a random closed set model given by the union of grains centered at the points of a homogeneous Poisson point process. Here, the grains are spherical and have a constant radius of 9 voxels and the Boolean model has a porosity of 65 %. ...
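A minimal sketch of such a Boolean model on a voxel grid: germ points of a homogeneous Poisson process are dilated to spheres of constant radius via a distance transform. The grid size and intensity below are illustrative assumptions (the cited study uses radius 9 voxels and an intensity tuned to 65 % porosity), and edge effects are ignored.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def boolean_model(shape, intensity, radius, seed=0):
    """Union of spheres of fixed radius centered at homogeneous Poisson points."""
    rng = np.random.default_rng(seed)
    n_points = rng.poisson(intensity * np.prod(shape))
    seeds = np.ones(shape, dtype=bool)
    idx = tuple(rng.integers(0, s, size=n_points) for s in shape)
    seeds[idx] = False                                  # mark germ voxels
    solid = distance_transform_edt(seeds) <= radius     # distance to nearest germ
    return solid                                        # True = grain phase

solid = boolean_model((100, 100, 100), intensity=1.5e-4, radius=9)
porosity = 1.0 - solid.mean()
```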
Article
Full-text available
FIB-SEM tomography is a serial sectioning technique where a Focused Ion Beam (FIB) mills off slices from the material sample that is being analyzed. After every slicing, a Scanning Electron Microscopy (SEM) image is taken showing the newly exposed layer of the sample. By combining all slices in a stack, a 3D image of the material is generated. However, specific artifacts caused by the imaging technique distort the images, hampering the morphological analysis of the structure. Typical quality problems in microscopy imaging are noise and lack of contrast or focus. Moreover, specific artifacts are caused by the FIB milling, namely, curtaining and charging artifacts. We propose quality indexes for the evaluation of the quality of FIB-SEM data sets. The indexes are validated on real and experimental data of different structures and materials.
... A few years later, Dempster (1967) developed what today is known as Dempster-Shafer evidence theory (formalized by Shafer, 1976), seeking the representation of epistemic knowledge about probability distributions and, in the process, relaxing some of the strong rules of probability theory. In parallel, and within the context of stochastic geometry, Kendall (1974) and Matheron (1975) developed the foundations of what is nowadays known as random set theory. In short, random set theory deals with the properties of set-valued random variables (in contrast to point-valued random variables in probability theory). ...
Technical Report
Full-text available
This background paper has been made in the framework of the project "Development of a Global Infrastructure Risk Model and Resilience Index (GIRI) for the Coalition for Disaster Resilient Infrastructure (CDRI), Biennial Report on Disaster and Climate Resilient Infrastructure, 2023", supported by the United Nations Development Program (UNDP), and developed by the consortium INGENIAR CAD/CAE LTDA., UNIGE, NGI, and CIMA. The GIRI, or the Global Infrastructure Risk and Resilience Model and Index of CDRI, is a comprehensive system of indicators of risk and resilience that encompasses all countries and territories worldwide. Currently, GIRI addresses six natural hazards: earthquakes, tsunamis, landslides, floods, tropical cyclones, and droughts. The last four include the alterations induced by climate change, thus offering hydrometeorological risk metrics related to various greenhouse gas emission scenarios in the future, in addition to stationary risk metrics for geological hazards. GIRI, presently, encompasses nine infrastructure sectors: power, highways and railways, transportation, water and wastewater, communications, oil and gas, education, health, and housing.
... [17] Thereby, different stochastic 3D reconstruction methods are presented, which include Monte Carlo modeling, dynamic particle packing, stochastic grids, simulated annealing and controlled random generation. A general overview of stochastic microstructure modeling is presented, e.g., by Holzer et al. [18], Chiu et al. [19], Matheron [20], Jeulin [21], Lantuéjoul [22], Schmidt [23] and Bargmann et al. [24]. The two main quality criteria to be fulfilled are the prediction power and the efficiency of the method. Thereby, two main approaches for microstructure modeling can be distinguished: (a) Physics-based methods, which simulate the physical processes of microstructure formation (e.g., grain growth by sintering), for example with the phase-field method. ...
Article
Full-text available
Digital Materials Design (DMD) offers new possibilities for data-driven microstructure optimization of solid oxide cells (SOC). Despite the progress in 3D-imaging, experimental microstructure investigations are typically limited to only a few tomography analyses. In this publication, a DMD workflow is presented for extensive virtual microstructure variation, which is based on a limited number of real tomography analyses. Real 3D microstructures, which are captured with FIB-tomography from LSTN–CGO anodes, are used as a basis for stochastic modeling. Thereby, digital twins are constructed for each of the three real microstructures. The virtual structure generation is based on the pluri-Gaussian method (PGM). In order to match the properties of selected virtual microstructures (i.e., digital twins) with real structures, the construction parameters for the PGM-model are determined by interpolation of a database of virtual structures. Moreover, the relative conductivities of the phases are optimized with morphological operations. The digital twins are then used as anchor points for virtual microstructure variation of LSTN–CGO anodes, covering a wide range of compositions and porosities. All relevant microstructure properties are determined using our standardized and automated microstructure characterization procedure, which was recently published. The microstructure properties can then e.g., be used as input for a multiphysics electrode model to predict the corresponding anode performances. This set of microstructure properties with corresponding performances is then the basis to provide design guidelines for improved electrodes. The PGM-based structure generation is available as a new Python app for the GeoDict software package.
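The pluri-Gaussian idea can be sketched in a few lines: two independent Gaussian random fields are thresholded according to a partition rule that assigns each voxel to one of the phases. The rule, field parameters and phase labels below are illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pluri_gaussian(shape, sigma1, sigma2, t1, t2, seed=0):
    """Three-phase pluri-Gaussian realization from two smoothed-white-noise fields."""
    rng = np.random.default_rng(seed)
    z1 = gaussian_filter(rng.standard_normal(shape), sigma1)
    z2 = gaussian_filter(rng.standard_normal(shape), sigma2)
    z1 = (z1 - z1.mean()) / z1.std()
    z2 = (z2 - z2.mean()) / z2.std()
    phases = np.full(shape, 2, dtype=np.uint8)   # default: phase 2 (e.g., pores)
    phases[z1 < t1] = 0                          # phase 0 (e.g., one solid phase)
    phases[(z1 >= t1) & (z2 < t2)] = 1           # phase 1 (e.g., the other solid phase)
    return phases

phases = pluri_gaussian((64, 64, 64), sigma1=3.0, sigma2=2.0, t1=-0.2, t2=0.3)
```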
... This approach is employed in several areas to describe the qualities and shapes of individual granules in each product. The concept was introduced by G. Matheron at the end of the 1960s [110] and it is now a widely used technique in image processing. ...
Preprint
Full-text available
Introduction: Essential congenital strabismus is the most frequent ocular deviation. Genetic, clinical and neuroimaging evidence indicates the neurological and heterogeneous origin of this condition, but the intricacies of its mechanism are still unknown. Objectives: To evaluate the white matter, functionally and structurally, in strabismic subjects with the aim of finding a correlation with the origin of this disease. Material and methods: A prospective, transversal, observational study was undertaken which included neurofunctional (using digitized brain mapping and neurometry) and morphometric (by means of MRI and voxel analysis) approaches in children with essential congenital strabismus (ECS) ranging from 5 to 7 years of age. Voxel analysis was also used to analyze the brains of two healthy children taken as controls. Results: Neurofunctional studies revealed alterations in intra- and inter-hemispheric cortices in the experimental cases. Granulometric averages were 0.028253796 units for patients with DHD and 0.0306733373 units for patients with SSAV, whereas control subjects showed the highest value, 0.034562177 units. Conclusions: Structural alterations in both the white matter and neuroconduction are present in essential congenital strabismus. This study suggests that these findings are associated with the origin of the disease.
... Mathematical Morphology (MM), a theory initiated in the sixties by Georges Matheron [30] and Jean Serra [42,43] at the Fontainebleau campus of the École Supérieure des Mines de Paris (ESMP), has been important for the development of the field of image analysis. ... Lattices are the fundamental algebraic structure ... incrementally the basis of any W-operator from any representation of it in terms of the elementary operators of mathematical morphology (i.e., erosion and dilation), the Boolean lattice operations (i.e., intersection, union and complement), and the basis of the identity operator [9]. ...
Preprint
Full-text available
A classical approach to designing binary image operators is Mathematical Morphology (MM). We propose the Discrete Morphological Neural Networks (DMNN) for binary image analysis to represent W-operators and estimate them via machine learning. A DMNN architecture, which is represented by a Morphological Computational Graph, is designed as in the classical heuristic design of morphological operators, in which the designer should combine a set of MM operators and Boolean operations based on prior information and theoretical knowledge. Then, once the architecture is fixed, instead of adjusting its parameters (i.e., structural elements or maximal intervals) by hand, we propose a lattice gradient descent algorithm (LGDA) to train these parameters based on a sample of input and output images under the usual machine learning approach. We also propose a stochastic version of the LGDA that is more efficient and scalable, and can obtain small errors in practical problems. The class represented by a DMNN can be quite general or specialized according to expected properties of the target operator, i.e., prior information, and the semantics expressed by algebraic properties of classes of operators is a differential relative to other methods. The main contribution of this paper is the merger of the two main paradigms for designing morphological operators, classical heuristic design and automatic design via machine learning, thus conciliating classical heuristic morphological operator design with machine learning. We apply the DMNN to recognize the boundary of digits with noise, and we discuss many topics for future research.
... See e.g. Matheron (1975), Molchanov (2017), Molchanov and Molinari (2018) for random sets in more general spaces. ...
... With a function $\rho : Q \times P \to \mathbb{R}$ which is continuous in $P$ for all fixed $q \in Q$ and measurable in $Q$ for all fixed $p \in P$, define, if existent, ... Definition 3.1 is a generalization of a mean originally introduced for the case $P = Q$ and $\rho = d^2$ by [Fré48], which is called the Fréchet mean, see above. Due to continuity of $\rho$, $E^{(\rho)}$ is a closed set and $E_n^{(\rho)}(\omega)$ is a random closed set, introduced and studied by [Cho54, Ken74, Mat74]; see also [Mol05]. ...
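The displayed definition is missing from the excerpt; presumably (an assumption based on the surrounding text and the standard formulation of generalized Fréchet means) it is of the form
$$E^{(\rho)} = \operatorname*{arg\,min}_{p \in P} \, \mathbb{E}\big[\rho(X, p)\big], \qquad E^{(\rho)}_{n}(\omega) = \operatorname*{arg\,min}_{p \in P} \, \frac{1}{n}\sum_{j=1}^{n} \rho\big(X_j(\omega), p\big),$$
where $X, X_1, \dots, X_n$ are i.i.d. random elements of $Q$; for $P = Q$ and $\rho = d^2$ this reduces to the classical Fréchet mean.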
Preprint
ENDOR spectroscopy is an important tool to determine the complicated three-dimensional structure of biomolecules and in particular enables measurements of intramolecular distances. Usually, spectra are determined by averaging the data matrix, which does not take into account the significant thermal drifts that occur in the measurement process. In contrast, we present an asymptotic analysis for the homoscedastic drift model, a pioneering parametric model that achieves striking model fits in practice and allows both hypothesis testing and confidence intervals for spectra. The ENDOR spectrum and an orthogonal component are modeled as an element of complex projective space, and formulated in the framework of generalized Fréchet means. To this end, two general formulations of strong consistency for set-valued Fréchet means are extended and subsequently applied to the homoscedastic drift model to prove strong consistency. Building on this, central limit theorems for the ENDOR spectrum are shown. Furthermore, we extend applicability by taking into account a phase noise contribution, leading to the heteroscedastic drift model. Both drift models offer improved signal-to-noise ratio over pre-existing models.
... More generally, possibility theory allows for modeling purely set-valued information in the form of a constraint ω* ∈ C ⊆ Ω, which is equivalent to the measure µ such that µ(A) = 1 if A ∩ C ≠ ∅ and µ(A) = 0 otherwise. In a sense, probability and possibility measures can be seen as two extremes on the scale of uncertainty representations, with various other theories of uncertainty in-between, including imprecise probability (Walley, 1991), random sets (Matheron, 1975; Nguyen, 1978), and evidence theory (Shafer, 1976; Smets and Kennes, 1994). ...
Article
This paper elaborates on the notion of uncertainty in the context of annotation in large text corpora, specifically focusing on (but not limited to) historical languages. Such uncertainty might be due to inherent properties of the language, for example, linguistic ambiguity and overlapping categories of linguistic description, but could also be caused by a lack of annotation expertise. By examining annotation uncertainty in more detail, we identify the sources, deepen our understanding of the nature and different types of uncertainty encountered in daily annotation practice, and discuss practical implications of our theoretical findings. This paper can be seen as an attempt to reconcile the perspectives of the main scientific disciplines involved in corpus projects, linguistics and computer science, to develop a unified view and to highlight the potential synergies between these disciplines.
Chapter
From now on, we restrict ourselves to Poisson hyperplane tessellations, because the independence properties inherent to Poisson processes allow us to derive many results of particular stochastic and geometric appeal. We begin by recalling the general notion of a Poisson process, reminding the reader that we consider only simple processes, if not explicitly stated otherwise.
Chapter
With this chapter, we begin to investigate the random polytopes which are associated with a stationary Poisson hyperplane tessellation, as ‘average’ cells or faces, possibly with weights. The zero cell of the tessellation is the almost surely unique cell containing the origin. Next, we consider the ‘typical cell’. The intuitive idea is to choose a cell of the tessellation at random, with equal chances for each cell to be selected, and to translate it so that its ‘center’ coincides with the origin. The actual definition is slightly more sophisticated, but once an exact definition is given, it turns out that the typical cell has representations that come close to the heuristic interpretation. Replacing the origin by a given convex body K, the zero cell is generalized by the K-cell. This is the intersection of all closed halfspaces containing K that are bounded by hyperplanes of the process not intersecting the interior of K. The chapter investigates these different types of cells.
Chapter
A common approach to investigating a random set or a process of convex objects is to observe it in an observation window (often convex), to draw some conclusions, and then to increase the window. An example is the determination of a functional density for a stationary particle process by means of a limit relation. The present chapter deals with the simplest questions one can ask when a stationary Poisson hyperplane process is observed inside a given bounded set, for example the question of the number of hyperplanes meeting a convex body or the number of intersection points inside a Borel set. First and second moments of such random variables will be determined. It will be explored how the observation window and the directional distribution of the hyperplane process affect the results.
Chapter
As we want to consider random hyperplane processes and the systems of polytopes in the tessellations they induce, we shall need random processes of different kinds of geometric objects. For this reason, we first recall the notion of a general point process, where the ‘points’ are elements of a suitable topological space. Then we introduce processes of flats, in particular hyperplanes, and processes of convex particles. These are then used to model the cells of a random hyperplane tessellation.
Chapter
In this chapter, we drop the stationarity assumption for the considered Poisson hyperplane processes, but we restrict ourselves to isotropic processes. The spherical directional distribution is allowed to depend on an additional parameter, which controls the distances of the hyperplanes from the origin. Thus, the intensity measure of the hyperplane process depends on this parameter and on the intensity. The focus of the first two sections is on the expected face numbers of the zero cell. Sections 4 and 5 deal with moments and the variance of the volume of the zero cell. Section 6 explores the asymptotic behavior of the intersection volume of the zero cell and a lower dimensional ball of fixed volume centered at the origin. The last section derives an exact formula for a second moment in the stationary case.
Chapter
We assume in this chapter that a nondegenerate stationary Poisson hyperplane process is given, which has locally finite intensity measure and hence an intensity and a spherical directional distribution. The properties of the hyperplane process and its induced tessellation will in an essential way depend on the spherical directional distribution. In some cases, this dependence can be analysed via the introduction of some auxiliary convex bodies. For example, by Minkowski’s theorem, the directional distribution is the surface area measure of a convex body, and the expected number of hyperplanes of the tessellation hitting a given convex body can be expressed as a mixed volume of this body and the auxiliary body. From this fact, further conclusions can be drawn. The directional distribution can also be used to define another origin-symmetric convex body, which is the projection body of the previous one and is called the Matheron zonoid. Quantities derived from the tessellation have geometric interpretations in terms of this convex body. Known information from convex geometry yields new information about the hyperplane process and its generated tessellation.
Chapter
Partial differential equations (PDEs) are well suited to nonlinear data modeling and analysis. Hamilton-Jacobi (HJ) equations constitute a particular family of PDEs with many connections to various fields of science, and they admit, under some assumptions, viscosity solutions known as Hopf-Lax-Oleinik (HLO) formulas. Mathematical morphology (MM) is an efficient nonlinear image and data analysis method, which can be formulated in terms of first-order HJ PDEs. In this work, we propose to formulate HJ PDEs on compact pseudo-Riemannian manifolds and prove their viscosity solutions. We also prove the viscosity solutions for a particular Hamiltonian, which makes the link to MM, constituting a new extension of classical morphological operators to compact pseudo-Riemannian manifolds. Obtained experimental results on real images show interesting capabilities of the proposed approach in multiscale analysis and image filtering.
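As a concrete instance of the link mentioned above (stated here in the flat Euclidean case, not the pseudo-Riemannian setting of the chapter): multiscale flat dilation by balls is the Hopf-Lax-Oleinik solution of a Hamilton-Jacobi equation,
$$u(x,t) \;=\; \sup_{\|y - x\| \le t} u_0(y) \quad\text{solves (in the viscosity sense)}\quad \partial_t u = \|\nabla u\|, \qquad u(\cdot, 0) = u_0,$$
and the corresponding erosion is obtained by replacing the supremum with an infimum and flipping the sign of the Hamiltonian.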
Chapter
Morphological neural networks, or layers, can be a powerful tool to boost the progress in mathematical morphology, either on theoretical aspects such as the representation of complete lattice operators, or in the development of image processing pipelines. However, these architectures turn out to be difficult to train when they contain more than a few morphological layers, at least within popular machine learning frameworks which use gradient descent based optimization algorithms. In this paper we investigate the potential and limitations of differentiation-based approaches and back-propagation applied to morphological networks, in light of the non-smooth optimization concept of the Bouligand derivative. We provide insights and first theoretical guidelines, in particular regarding initialization and learning rates.
Article
In this article, a novel approach for automated identification of partial discharge (PD) defects inside an insulation system is proposed, employing PD pulse sequence analysis (PSA). The sequence of PD pulses is directly related to the type of PD defect in an insulation system. Therefore, the pattern of the PD pulse sequence has been analyzed in this article to diagnose different types of defects. For this contribution, three common types of artificial defects have been emulated and the pulse sequence pattern corresponding to each type of PD defect has been recorded. Following this, mathematical morphology (MM) has been used to analyze the PD pulse sequence pattern and, based on morphological operations, several novel features have been extracted in this article to discriminate different PD pulses. The extracted features were fed to a bidirectional long short-term memory (Bi-LSTM)-based deep neural network (DNN) classifier. It has been noticed that the proposed Bi-LSTM network achieved an accuracy of 98.76% in discriminating different types of PD defects. A comparative study with other deep learning methods also indicates that the proposed MM-aided Bi-LSTM is suitable for automated classification of PD pulse sequences.
Article
Full-text available
In this expository review paper, we show that co-kriging, a widely used geostatistical multivariate optimal linear estimator, has a diverse range of extensions that we have collected and illustrated to show the potential of this spatial interpolator. In the context of spatial stochastic processes, this paper covers scenarios including increasing the spatial resolution of a spatial variable (downscaling), solving inverse problems, estimating directional derivatives, and spatial interpolation taking boundary conditions into account. All these spatial interpolators are optimal linear estimators in the sense of being unbiased and minimising the variance of the estimation error.
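For orientation, one standard form of the (ordinary) co-kriging estimator of a primary variable $Z_1$ at a location $x_0$ from primary and secondary data is
$$\hat{Z}_1(x_0) \;=\; \sum_{i=1}^{n_1} \lambda_i\, Z_1(x_i) \;+\; \sum_{j=1}^{n_2} \mu_j\, Z_2(x_j), \qquad \sum_{i=1}^{n_1}\lambda_i = 1, \quad \sum_{j=1}^{n_2}\mu_j = 0,$$
with the weights $\lambda_i, \mu_j$ chosen to minimize the variance of the estimation error subject to these unbiasedness constraints; the extensions surveyed in the paper build on this basic structure.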
Article
Stationary Poisson processes of lines in the plane are studied, whose directional distributions are concentrated on $k\geq 3$ equally spread directions. The random lines of such processes decompose the plane into a collection of random polygons, which form a so-called Poisson line tessellation. The focus of this paper is to determine the proportion of triangles in such tessellations, or equivalently, the probability that the typical cell is a triangle. As a by-product, a new derivation of Miles's classical result for the isotropic case is obtained by an approximation argument.
Chapter
Full-text available
It is generally assumed that transport resistance in porous media, which can also be expressed as tortuosity, correlates somehow with the pore volume fraction. Hence, mathematical expressions such as the Bruggeman relation (i.e., τ² = ε^(−1/2)) are often used to describe tortuosity (τ)-porosity (ε) relationships in porous materials. In this chapter, the validity of such mathematical expressions is critically evaluated based on empirical data from literature. More than 2200 datapoints (i.e., τ-ε couples) are collected from 69 studies on porous media transport. When the empirical data is analysed separately for different material types (e.g., for battery electrodes, SOFC electrodes, sandstones, packed spheres etc.), the resulting τ versus ε plots do not show clear trend lines that could be expressed with a mathematical expression. Instead, the datapoints for different materials show strongly scattered distributions in rather ill-defined 'characteristic' fields. Overall, those characteristic fields are strongly overlapping, which means that the τ-ε characteristics of different materials cannot be separated clearly. When the empirical data is analysed for different tortuosity types, a much more consistent pattern becomes apparent. Hence, the observed τ-ε pattern indicates that the measured tortuosity values strongly depend on the involved type of tortuosity. A relative order of measured tortuosity values then becomes apparent. For example, the values observed for direct geometric and mixed tortuosities are concentrated in a relatively narrow band close to the Bruggeman trend line, with values that are typically < 2. In contrast, indirect tortuosities show higher values, and they scatter over a much larger range. Based on the analysis of empirical data, a detailed pattern with a very consistent relative order among the different tortuosity types can be established. The main conclusion from this chapter is thus that the tortuosity value that is measured for a specific material is much more dependent on the type of tortuosity than it is on the material and its microstructure. The empirical data also illustrates that tortuosity is not strictly bound to porosity: as the pore volume decreases, the scatter of the tortuosity values increases. Consequently, any mathematical expression that aims to provide a generalized description of τ-ε relationships in porous media must be questioned. A short section is thus provided with a discussion of the limitations of such mathematical expressions for τ-ε relationships. This discussion also includes a description of the rare and special cases for which the use of such mathematical expressions can be justified.
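For context, the way tortuosity and porosity enter effective transport properties (and hence why the Bruggeman relation takes the quoted form) can be summarized as
$$D_{\mathrm{eff}} \;=\; \frac{\varepsilon}{\tau^2}\, D_0, \qquad \text{Bruggeman: } \tau^2 = \varepsilon^{-1/2} \;\Rightarrow\; D_{\mathrm{eff}} = \varepsilon^{3/2} D_0,$$
where $D_0$ is the bulk diffusivity; the chapter's point is that empirically measured τ values depend strongly on the tortuosity type, so this closed form should not be treated as generally valid.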