Article

Prediction of effective elastic moduli of rocks using Graph Neural Networks


Abstract

This study presents a Graph Neural Network (GNN)-based approach for predicting the effective elastic moduli of rocks from their digital CT-scan images. We use the Mapper algorithm to transform 3D digital rock images into graph datasets that encapsulate essential geometrical information. These graphs, after training, prove effective in predicting elastic moduli. Our GNN model shows robust predictive capabilities across graph sizes derived from a range of subcube dimensions. Not only does it perform well on the test dataset, it also maintains high prediction accuracy for unseen rocks and unexplored subcube sizes. Comparative analysis with Convolutional Neural Networks (CNNs) reveals the superior performance of GNNs in predicting unseen rock properties. Moreover, the graph representation of microstructures significantly reduces GPU memory requirements compared to the grid representation used by CNNs, enabling greater flexibility in batch size selection. This work demonstrates the potential of GNN models to enhance the prediction accuracy of rock properties and boost the efficiency of digital rock analysis.
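The Mapper pipeline described in the abstract (cover a lens function's range with overlapping intervals, cluster the points falling in each interval, and link clusters that share points) can be sketched with a toy numpy-only implementation. This is an illustrative reconstruction, not the authors' code: the lens choice, the interval cover, and the single-linkage clustering threshold are all simplifying assumptions.

```python
import numpy as np
from itertools import combinations

def mapper_graph(points, lens, n_intervals=4, overlap=0.25, link_dist=1.5):
    """Build a Mapper graph from a point cloud.
    Nodes are clusters found inside overlapping lens intervals;
    edges connect clusters that share at least one point."""
    lo, hi = float(lens.min()), float(lens.max())
    width = (hi - lo) / n_intervals
    pad = width * overlap
    nodes = []  # each node is a frozenset of point indices
    for i in range(n_intervals):
        a, b = lo + i * width - pad, lo + (i + 1) * width + pad
        idx = np.where((lens >= a) & (lens <= b))[0]
        # single-linkage clustering: connected components under link_dist
        remaining = set(idx.tolist())
        while remaining:
            seed = remaining.pop()
            comp, frontier = {seed}, [seed]
            while frontier:
                p = frontier.pop()
                near = [q for q in remaining
                        if np.linalg.norm(points[p] - points[q]) <= link_dist]
                for q in near:
                    remaining.discard(q)
                    comp.add(q)
                    frontier.append(q)
            nodes.append(frozenset(comp))
    edges = {(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]}
    return nodes, edges
```

For pore-voxel coordinates from a CT subcube, the lens could be a spatial coordinate or a density estimate; the paper does not publish these choices here, so the parameters above are placeholders.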

... Within the past decade, the use of machine learning algorithms to assimilate data in the fields of mechanics and materials science has accelerated dramatically. Regarding metals and metallic alloys, some studies have focused on the development of novel constitutive models at various material scales, including bridging the crystal and sample/component scales (i.e., structure-property relationships) [Pagan et al., 2022, Meyer et al., 2022, Chung et al., 2024], as well as, specifically, the prediction of anisotropic yield surfaces [Hartmaier, 2020, Vlassis and Sun, 2023, Heidenreich et al., 2023a,b, Nascimento et al., 2023, Ghnatios et al., 2024, Jian et al., 2024]. In a previous study [Fuhg et al., 2022], we focused on the development of convex yield functions by using partially input convex neural networks (pICNNs) trained with data from CPFEM simulations to relate the parameterized microstructural features of the material to the macroscopic response. ...
Article
Full-text available
Cave geometry, spatial distribution, and interconnectivity are critical for developing resource production and contaminant remediation strategies. Geologically realistic stochastic models for simulating karst are essential for quantifying the spatial uncertainty of karst networks given geophysical observations. Dynamic graph dissolution, a novel physics-based approach for three-dimensional stochastic geomodeling of telogenetic karst morphology, is introduced herein. The cave evolution is modeled through dissolution of fractures over geologic time based on a graph representation of discrete fracture networks, which can be informed by field observations. The graph is initially modeled based on fracture intersections. In order to account for overlapping enlargements, the graph representation is updated over dissolution using the Mapper algorithm with density-based spatial clustering. This modeling approach enables the generation of multiple realizations of different geologic scenarios of karst formation at a tractable computational cost. Realizations generated using the proposed algorithm are compared with real caves using graph topological metrics such as central point dominance, connectivity degree, average degree, degree dispersion, and assortativity. The distributions of graph topological metrics of generated realizations overlap with the metrics of known caves, suggesting that the graph structures of observed and simulated caves are at least globally similar.
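The graph topological metrics used for comparison above (average degree, degree dispersion, assortativity) are standard quantities computable directly from an edge list. The snippet below is a generic sketch, not the study's implementation; degree assortativity is taken as the Pearson correlation of endpoint degrees over both orientations of each undirected edge.

```python
import numpy as np

def degree_stats(edges, n):
    """Average degree, degree dispersion (std), and degree assortativity
    (Pearson correlation of endpoint degrees) for an undirected edge list
    on nodes 0..n-1."""
    deg = np.zeros(n)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # correlate degrees across both orientations of every edge
    x = np.array([deg[u] for u, v in edges] + [deg[v] for u, v in edges])
    y = np.array([deg[v] for u, v in edges] + [deg[u] for u, v in edges])
    r = np.corrcoef(x, y)[0, 1] if x.std() > 0 else 0.0
    return deg.mean(), deg.std(), r
```

For a path graph 0-1-2-3 this yields the textbook assortativity of -0.5, a mildly disassortative structure.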
Article
Full-text available
Effective permeability is a key physical property of porous media that defines its ability to transport fluid. Digital rock physics (DRP) combines modern tomographic imaging techniques with advanced numerical simulations to estimate effective rock properties. DRP is used to complement or replace expensive and time‐consuming or impractical laboratory measurements. However, with increase in sample size to capture multimodal and multiscale microstructures, conventional approaches based on direct numerical simulation (DNS) are becoming very computationally intensive or even infeasible. To address this computational challenge, we propose a hierarchical homogenization method (HHM) with a data‐driven surrogate model based on 3‐D convolutional neural network (CNN) and transfer learning to estimate effective permeability of digital rocks with large sample sizes up to billions of voxels. This HHM‐CNN workflow divides a large digital rock into small sub‐volumes and predicts their permeabilities through a CNN surrogate model of Stokes flow at the pore scale. The effective permeability of the full digital rock is then predicted by solving the Darcy equations efficiently on the upscaled model in which the permeability of each cell is assigned by the surrogate model. The proposed method has been verified on micro‐CT scans of both sandstones and carbonates, and applied to the Bentheimer sandstone and a reconstructed high‐resolution carbonate rock obtained by multiscale data fusion. The computed permeabilities of the HHM‐CNN are consistent with the results of DNS on the full digital rock. Compared with conventional DNS algorithms, the proposed hierarchical approach can largely reduce the computational time and memory demand.
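The divide-predict-upscale structure of the HHM workflow above can be illustrated schematically. In this sketch the CNN surrogate is replaced by a placeholder (block porosity) and the Darcy solve is reduced to a layered series-parallel average along the flow axis; both substitutions are simplifications for illustration, not the paper's method.

```python
import numpy as np

def upscale_permeability(volume, sub, surrogate):
    """Split a binary 3-D volume (1 = pore) into sub-cubes of edge `sub`,
    assign each a permeability via `surrogate`, and return the coarse grid."""
    nz, ny, nx = (s // sub for s in volume.shape)
    coarse = np.empty((nz, ny, nx))
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                block = volume[k*sub:(k+1)*sub,
                               j*sub:(j+1)*sub,
                               i*sub:(i+1)*sub]
                coarse[k, j, i] = surrogate(block)
    return coarse

def effective_k_z(coarse):
    """Layered (series-parallel) estimate of effective permeability along z:
    arithmetic mean within each z-layer, harmonic mean across layers."""
    layer_means = coarse.mean(axis=(1, 2))
    return len(layer_means) / np.sum(1.0 / layer_means)
```

In the real workflow, `surrogate` would be the trained 3-D CNN and the coarse grid would feed a full Darcy solver rather than a layered average.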
Article
Full-text available
We propose a novel deep learning framework for predicting the permeability of porous media from their digital images. Unlike convolutional neural networks, instead of feeding the whole image volume as inputs to the network, we model the boundary between solid matrix and pore spaces as point clouds and feed them as inputs to a neural network based on the PointNet architecture. This approach overcomes the challenge of memory restriction of graphics processing units and its consequences on the choice of batch size and convergence. Compared to convolutional neural networks, the proposed deep learning methodology provides freedom to select larger batch sizes due to reducing significantly the size of network inputs. Specifically, we use the classification branch of PointNet and adjust it for a regression task. As a test case, two and three dimensional synthetic digital rock images are considered. We investigate the effect of different components of our neural network on its performance. We compare our deep learning strategy with a convolutional neural network from various perspectives, specifically for maximum possible batch size. We inspect the generalizability of our network by predicting the permeability of real-world rock samples as well as synthetic digital rocks that are statistically different from the samples used during training. The network predicts the permeability of digital rocks a few thousand times faster than a lattice Boltzmann solver with a high level of prediction accuracy.
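Extracting the solid-pore interface as a point cloud, the input representation described above, can be approximated with a simple voxel test: a pore voxel belongs to the boundary if any of its face neighbors (or the domain exterior) is solid. This is a hedged stand-in for the paper's preprocessing, not its actual code.

```python
import numpy as np

def boundary_points(img):
    """Return coordinates of pore voxels (1) that touch at least one solid
    voxel (0) along a face; the exterior of the domain counts as solid."""
    pore = img.astype(bool)
    padded = np.pad(pore, 1, constant_values=False)  # solid halo
    neighbor_solid = np.zeros_like(pore)
    core = tuple(slice(1, -1) for _ in range(img.ndim))
    for axis in range(img.ndim):
        for shift in (1, -1):
            # rolling the padded array and cropping reads each face neighbor
            rolled = np.roll(padded, shift, axis=axis)
            neighbor_solid |= ~rolled[core]
    return np.argwhere(pore & neighbor_solid)
```

The resulting (N, ndim) coordinate array is the kind of input a PointNet-style network consumes, which is far smaller than the full voxel grid.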
Article
Full-text available
An image processing workflow is presented for the characterization of pore and grain size distributions in porous geological samples from X-ray microcomputed tomography (μCT) and scanning electron microscopy (SEM) images. The pore and grain size distributions of five sandstone samples including Berea, Buff Berea, Nugget, Castlegate, and Bentheimer, and one carbonate sample, Indiana limestone, are extracted using the proposed workflow. Two-dimensional size distributions acquired from SEM images were found to be biased toward smaller sizes misrepresenting the actual 3D distributions. Stereological techniques unfolded the measured 2D size distributions from SEM images to 3D distributions comparable with μCT results. While larger pores and grains can easily be detected from μCT and SEM images, the quantification of small-scale heterogeneities is severely influenced by their limits of resolution. We show that microstructural details resolved by SEM can significantly impact the pore and grain size distributions in sandstone and carbonate rock samples. For example, SEM-resolved microporosities in Indiana limestone result in bimodal distributions of pore and grain sizes, whereas μCT observations exhibit unimodal distributions. The acquired images and processed results are openly available and may be used by researchers investigating image processing, magnetic resonance relaxation or fluid flow simulations in natural rocks. The proposed methodology can be implemented to process μCT and SEM images of natural rocks as well as other types of porous materials.
Article
Full-text available
Predicting the petrophysical properties of rock samples using micro-CT images has gained significant attention recently. However, an accurate and efficient numerical tool is still lacking. After investigating three numerical techniques, (i) pore network modeling (PNM), (ii) the finite volume method (FVM), and (iii) the lattice Boltzmann method (LBM), a workflow based on machine learning is established for fast and accurate prediction of permeability directly from 3D micro-CT images. We use more than 1100 samples scanned at high resolution and extract the relevant features from these samples for use in a supervised learning algorithm. The approach takes advantage of the efficient computation provided by PNM and the accuracy of the LBM to quickly and accurately estimate rock permeability. The relevant features derived from PNM and image analysis are fed into a supervised machine learning model and a deep neural network to compute the permeability in an end-to-end regression scheme. Within a supervised learning framework, machine and deep learning algorithms based on linear regression, gradient boosting, and physics-informed convolutional neural networks (CNNs) are applied to predict the petrophysical properties of porous rock from 3D micro-CT images. We have performed sensitivity analysis on the feature importance, hyperparameters, and different learning algorithms. Values of R² scores up to 88% and 91% are achieved using machine learning regression models and the deep learning approach, respectively. Remarkably, a significant gain in computation time (approximately three orders of magnitude) is achieved by machine learning compared with the LBM. Finally, the study highlights the critical role played by feature engineering in predicting petrophysical properties using deep learning.
Article
Full-text available
Many hyperparameters have to be tuned to have a robust convolutional neural network that will be able to accurately classify images. One of the most important hyperparameters is the batch size, which is the number of images used to train a single forward and backward pass. In this study, the effect of batch size on the performance of convolutional neural networks and the impact of learning rates will be studied for image classification, specifically for medical images. To train the network faster, a VGG16 network with ImageNet weights was used in this experiment. Our results concluded that a higher batch size doesn’t usually achieve high accuracy, and the learning rate and the optimizer used will have a significant impact as well. Lowering the learning rate and decreasing the batch size will allow the network to train better, especially in the case of fine-tuning.
Article
Full-text available
Permeability and its anisotropy are of central importance for groundwater and hydrocarbon migration. Existing fluid dynamics methods for computing permeability share common shortcomings, namely high computational complexity and long computation times, which limit their practical applications. In view of this, a 3D CNN-based approach for rapidly estimating permeability in anisotropic rock is proposed. Using high-resolution X-ray microtomographic images of a sandstone sample, numerous samples of 100³ voxels were first generated by a series of image manipulation techniques. Shrinking and expanding algorithms are employed as data augmentation methods to strengthen the role of porosity and specific surface area (SSA), since these two parameters are critical for estimating permeability. Afterwards, direct pore-scale modeling with the lattice Boltzmann method (LBM) was used to compute the permeabilities along the three coordinate axes, together with the mean permeability, as the ground truth. A dataset including 3158 samples for training and 57 samples for testing was obtained. Four 3D CNN models with the same network structure, corresponding to the permeabilities in the three directions and on average, were built and trained. Based on these trained models, satisfactory predictions of the permeabilities in the x-, y-, and z-axis directions and the mean permeability were achieved, with R² scores of 0.8972, 0.8821, 0.8201, and 0.9155, respectively. Furthermore, the proposed 3D CNN models achieved good generalization in predicting the permeability of other samples. The trained model takes only tens of milliseconds on average to predict the permeability of one sample in one axial direction, about 10,000 times faster than LBM. This promising performance clearly demonstrates the effectiveness of the 3D CNN-based approach for rapidly estimating permeability in anisotropic rock. This new approach provides an alternative way to calculate permeability at low computing cost, and it has the potential to be extended to the estimation of relative permeability and other rock properties.
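The shrinking and expanding augmentation mentioned above corresponds to binary morphological erosion and dilation of the pore phase. A minimal face-connected version is sketched below in plain numpy for illustration (scipy.ndimage offers equivalent, faster routines):

```python
import numpy as np

def dilate_pores(img):
    """One-voxel face-connected dilation of the pore phase (1 = pore)."""
    pore = img.astype(bool)
    out = pore.copy()
    for axis in range(img.ndim):
        for shift in (1, -1):
            shifted = np.zeros_like(pore)
            src = [slice(None)] * img.ndim
            dst = [slice(None)] * img.ndim
            if shift == 1:
                src[axis], dst[axis] = slice(0, -1), slice(1, None)
            else:
                src[axis], dst[axis] = slice(1, None), slice(0, -1)
            shifted[tuple(dst)] = pore[tuple(src)]
            out |= shifted
    return out.astype(img.dtype)

def erode_pores(img):
    """Erosion of the pores is dilation of the complementary solid phase."""
    return 1 - dilate_pores(1 - img)
```

Applying these operators to a training volume perturbs porosity and specific surface area while preserving the overall pore topology, which is presumably why they help the network learn the porosity-permeability dependence.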
Article
Full-text available
We report the application of machine learning methods for predicting the effective diffusivity (De) of two-dimensional porous media from images of their structures. Pore structures are built using reconstruction methods and represented as images, and their effective diffusivity is computed by lattice Boltzmann (LBM) simulations. The datasets thus generated are used to train convolutional neural network (CNN) models and evaluate their performance. The trained model predicts the effective diffusivity of porous structures with computational cost orders of magnitude lower than LBM simulations. The optimized model performs well on porous media with realistic topology, large variation of porosity (0.28–0.98), and effective diffusivity spanning more than one order of magnitude (0.1 ≲ De < 1), e.g., >95% of predicted De have truncated relative error of <10% when the true De is larger than 0.2. The CNN model provides better prediction than the empirical Bruggeman equation, especially for porous structure with small diffusivity. The relative error of CNN predictions, however, is rather high for structures with De < 0.1. To address this issue, the porosity of porous structures is encoded directly into the neural network but the performance is enhanced marginally. Further improvement, i.e., 70% of the CNN predictions for structures with true De < 0.1 have relative error <30%, is achieved by removing trapped regions and dead-end pathways using a simple algorithm. These results suggest that deep learning augmented by field knowledge can be a powerful technique for predicting the transport properties of porous media. Directions for future research of machine learning in porous media are discussed based on detailed analysis of the performance of CNN models in the present work.
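Removing trapped pore regions, one of the cleanup steps credited above with improving predictions at low De, can be done with two flood fills: keep only pore cells reachable from both the inlet and outlet faces. The sketch below handles trapped (disconnected) regions; pruning dead-end spurs would additionally require iteratively deleting pore cells with a single pore neighbor. This is an assumed reconstruction of the "simple algorithm" mentioned in the abstract, not the authors' code.

```python
from collections import deque

def open_pores(grid):
    """Keep only pore cells (1) connected to both the inlet (row 0) and the
    outlet (last row): flood-fill from each side and intersect the sets."""
    rows, cols = len(grid), len(grid[0])

    def reachable(seeds):
        seen = set(seeds)
        q = deque(seeds)
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 1 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    q.append((nr, nc))
        return seen

    inlet = [(0, c) for c in range(cols) if grid[0][c] == 1]
    outlet = [(rows - 1, c) for c in range(cols) if grid[rows - 1][c] == 1]
    keep = reachable(inlet) & reachable(outlet)
    return [[1 if (r, c) in keep else 0 for c in range(cols)]
            for r in range(rows)]
```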
Article
Full-text available
Untangling the complex variations of microbiome associated with large-scale host phenotypes or environment types challenges the currently available analytic methods. Here, we present tmap, an integrative framework based on topological data analysis for population-scale microbiome stratification and association studies. The performance of tmap in detecting nonlinear patterns is validated by different scenarios of simulation, which clearly demonstrate its superiority over the most commonly used methods. Application of tmap to several population-scale microbiomes extensively demonstrates its strength in revealing microbiome-associated host or environmental features and in understanding the systematic interrelations among their association patterns. tmap is available at https://github.com/GPZ-Bioinfo/tmap.
Article
Full-text available
Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs enables capturing the structural relations among data, and thus allows harvesting more insights compared to analyzing data in isolation. However, it is often very challenging to solve learning problems on graphs, because (1) many types of data are not originally structured as graphs, such as images and text data, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great successes in many areas. Therefore, a potential solution is to learn the representation of graphs in a low-dimensional Euclidean space such that the graph properties are preserved. Although tremendous efforts have been made to address the graph representation learning problem, many approaches still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and demonstrated superior performance on various problems. In this survey, despite the numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some graph convolutional network models in detail. Then, we categorize different graph convolutional networks according to the areas of their applications. Finally, we present several open challenges in this area and discuss potential directions for future research.
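The spectral-style graph convolution at the heart of many of the surveyed models reduces, in its most common form (the Kipf-Welling GCN layer), to multiplying node features by a symmetrically normalized adjacency matrix with self-loops. A dense numpy sketch:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W),
    where A is the adjacency matrix, H the node features, W the weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers mixes information across progressively larger graph neighborhoods; real implementations use sparse matrices rather than the dense form shown here.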
Article
Full-text available
Fast prediction of permeability directly from images, enabled by image recognition neural networks, is a novel pore-scale modeling method with great potential. This article presents a framework that includes (1) generation of porous media samples, (2) computation of permeability via fluid dynamics simulations, (3) training of convolutional neural networks (CNN) with simulated data, and (4) validation against simulations. Comparison of machine learning results and the ground truths suggests excellent predictive performance across a wide range of porosities and pore geometries, especially for those with dilated pores. Owing to such heterogeneity, the permeability cannot be estimated using the conventional Kozeny–Carman approach. Computational time was reduced by several orders of magnitude compared to fluid dynamics simulations. We found that, by including physical parameters known to affect permeability in the neural network, the physics-informed CNN generated better results than a regular CNN. However, improvements vary with the implemented heterogeneity.
Article
Full-text available
Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
Article
Full-text available
With the recent explosion in the amount, variety, and dimensionality of available data, identifying, extracting, and exploiting their underlying structure has become a problem of fundamental importance for data analysis and statistical learning. Topological data analysis (TDA) is a recent and fast-growing field providing a set of new topological and geometric tools to infer relevant features for possibly complex data. It proposes well-founded mathematical theories and computational tools that can be used independently or in combination with other data analysis and statistical learning techniques. This article is a brief introduction, through a few selected topics, to basic fundamental and practical aspects of TDA for nonexperts.
Conference Paper
Full-text available
The term Deep Learning or Deep Neural Network refers to Artificial Neural Networks (ANN) with multiple layers. Over the last few decades it has come to be considered one of the most powerful tools, and it has become very popular in the literature because it can handle huge amounts of data. The interest in deeper hidden layers has recently begun to surpass the performance of classical methods in different fields, especially in pattern recognition. One of the most popular deep neural networks is the Convolutional Neural Network (CNN). It takes its name from the mathematical linear operation between matrices called convolution. A CNN has multiple layers, including convolutional, non-linearity, pooling, and fully-connected layers. The convolutional and fully-connected layers have parameters, whereas the pooling and non-linearity layers do not. CNNs achieve excellent performance in machine learning problems, especially in applications that deal with image data, such as the largest image classification dataset (ImageNet), computer vision, and natural language processing (NLP), where the results achieved have been remarkable. In this paper we explain and define all the elements and important issues related to CNNs, and how these elements work. In addition, we state the parameters that affect CNN efficiency. This paper assumes that readers have adequate knowledge of both machine learning and artificial neural networks.
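The convolution operation that gives the CNN its name can be written in a few lines. The loop version below computes a "valid" cross-correlation (no padding, stride 1), which is what deep learning frameworks actually implement under the name convolution:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image and
    take the elementwise product-sum at each position (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A convolutional layer applies a bank of such kernels, adds biases, and passes the result through a non-linearity; pooling then downsamples the resulting feature maps.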
Article
Full-text available
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
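PointNet's permutation invariance comes from applying the same MLP to every point and then aggregating with a symmetric function (max pooling). The toy sketch below, with random weights standing in for a trained network, demonstrates that reordering the input points leaves the global feature unchanged:

```python
import numpy as np

def pointnet_feature(points, W, b):
    """Shared per-point linear layer + ReLU, followed by a symmetric
    max-pool: the global feature does not depend on point ordering."""
    h = np.maximum(points @ W + b, 0.0)  # same weights for every point
    return h.max(axis=0)                 # symmetric aggregation
```

The full architecture stacks several such shared layers and feeds the pooled feature into classification or regression heads, but the invariance argument is already visible in this single layer.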
Article
Full-text available
Understanding the controls on the elastic properties of reservoir rocks is crucial for exploration and successful production from hydrocarbon reservoirs. We studied the static and dynamic elastic properties of shale gas reservoir rocks from Barnett, Haynesville, Eagle Ford, and Fort St. John shales through laboratory experiments. The elastic properties of these rocks vary significantly between reservoirs (and within a reservoir) due to the wide variety of material composition and microstructures exhibited by these organic-rich shales. The static (Young's modulus) and dynamic (P- and S-wave moduli) elastic parameters generally decrease monotonically with the clay plus kerogen content. The variation of the elastic moduli can be explained in terms of the Voigt and Reuss limits predicted by end-member components. However, the elastic properties of the shales are strongly anisotropic, and the degree of anisotropy was found to correlate with the amount of clay and organic content as well as the shale fabric. We also found that the first-loading static modulus was, on average, approximately 20% lower than the unloading/reloading static modulus. Because the unloading/reloading static modulus compares quite well to the dynamic modulus in the rocks studied, comparisons of static and dynamic moduli can vary considerably depending on which static modulus is used.
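The Voigt and Reuss limits invoked above are the arithmetic and harmonic volume-weighted averages of the end-member moduli; the effective modulus of any isotropic mixture must fall between them. A minimal implementation:

```python
import numpy as np

def voigt_reuss(moduli, fractions):
    """Voigt (iso-strain, arithmetic) and Reuss (iso-stress, harmonic)
    bounds on the effective modulus of a multiphase mixture."""
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    voigt = float(np.sum(f * m))          # upper bound
    reuss = float(1.0 / np.sum(f / m))    # lower bound
    return voigt, reuss
```

For the shales above, the end members would be the stiff mineral phase and the soft clay-plus-kerogen phase, with fractions taken from composition measurements.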
Article
Full-text available
Pore-scale imaging and modelling - digital core analysis - is becoming a routine service in the oil and gas industry, and has potential applications in contaminant transport and carbon dioxide storage. This paper briefly describes the underlying technology, namely imaging of the pore space of rocks from the nanometre scale upwards, coupled with a suite of different numerical techniques for simulating single and multiphase flow and transport through these images. Three example applications are then described, illustrating the range of scientific problems that can be tackled: dispersion in different rock samples that predicts the anomalous transport behaviour characteristic of highly heterogeneous carbonates; imaging of super-critical carbon dioxide in sandstone to demonstrate the possibility of capillary trapping in geological carbon storage; and the computation of relative permeability for mixed-wet carbonates and implications for oilfield waterflood recovery. The paper concludes by discussing limitations and challenges, including finding representative samples, imaging and simulating flow and transport in pore spaces over many orders of magnitude in size, the determination of wettability, and upscaling to the field scale. We conclude that pore-scale modelling is likely to become more widely applied in the oil industry including assessment of unconventional oil and gas resources. It has the potential to transform our understanding of multiphase flow processes, facilitating more efficient oil and gas recovery, effective contaminant removal and safe carbon dioxide storage.
Article
Full-text available
The relations among the resistivity, elastic-wave velocity, porosity, and permeability in Fontainebleau sandstone samples from the Ile de France region, around Paris, France were experimentally revisited. These samples followed a permeability-porosity relation given by Kozeny-Carman's equation. For the resistivity measurements, the samples were partially saturated with brine. Archie's equation was used to estimate resistivity at 100% water saturation, assuming a saturation exponent, n = 2. Using self-consistent (SC) approximations modeling with grain aspect ratio 1, and pore aspect ratio between 0.02 and 0.10, the experimental data fall into this theoretical range. The SC curve with the pore aspect ratio 0.05 appears to be close to the values measured in the entire porosity range. The elastic-wave velocity was measured on these dry samples for confining pressure between 0 and 40 MPa. A loading and unloading cycle was used and did not produce any significant hysteresis in the velocity-pressure behavior. For the velocity data, using the SC model with a grain aspect ratio 1 and pore aspect ratios 0.2, 0.1, and 0.05 fit the data at 40 MPa; pore aspect ratios ranging between 0.1, 0.05, and 0.02 were a better fit for the data at 0 MPa. Both velocity and resistivity in clean sandstones can be modeled using the SC approximation. In addition, a linear fit was found between the P-wave velocity and the decimal logarithm of the normalized resistivity, with deviations that correlate with differences in permeability. Combining the stiff sand model and Archie for cementation exponents between 1.6 and 2.1, resistivity was modeled as a function of P-wave velocity for these clean sandstones.
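Two of the relations used in this analysis, the Kozeny-Carman permeability-porosity equation and Archie's resistivity law, are one-liners. The constants below (c, a, m, n) are generic textbook defaults for illustration, not the values fitted to the Fontainebleau data:

```python
def kozeny_carman(phi, S, c=5.0):
    """Kozeny-Carman estimate k = phi^3 / (c * S^2 * (1 - phi)^2),
    with phi the porosity, S the specific surface area, c an empirical
    constant (generic default, not fitted to any dataset)."""
    return phi**3 / (c * S**2 * (1.0 - phi)**2)

def archie_resistivity(Rw, phi, Sw, a=1.0, m=2.0, n=2.0):
    """Archie's law: Rt = a * Rw * phi^(-m) * Sw^(-n), with Rw the brine
    resistivity, Sw the water saturation; n = 2 matches the assumption
    used in the study above."""
    return a * Rw * phi**(-m) * Sw**(-n)
```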
Article
Full-text available
A self‐consistent method of estimating effective macroscopic elastic constants for inhomogeneous materials with spherical inclusions is formulated based on elastic‐wave scattering theory. The method for general ellipsoidal inclusions will be presented in the second part of this series. The case of spherical inclusions is particularly simple and therefore provides an elementary introduction to the general method. The self‐consistent effective medium is determined by requiring the scattered, long‐wavelength displacement field to vanish on average. The resulting formulas are simpler to apply than previous self‐consistent scattering theories due to the reduction from tensor to vector equations. In the limit of long wavelengths, our results for spherical inclusions agree with the statically derived self‐consistent moduli of Hill and Budiansky. Our self‐consistent formulas are also compared both to the estimates of Kuster and Toksöz and to the rigorous Hashin–Shtrikman bounds. (For spherical inclusions and long wavelengths, the Kuster–Toksöz effective moduli are known to be identical to the Hashin–Shtrikman bounds.) A result of Hill for two‐phase composites is generalized by proving that the self‐consistent effective moduli always lie between the Hashin–Shtrikman bounds for n‐phase composites. Numerical examples for a two‐phase medium with viscous fluid and solid constituents show that the real part of our self‐consistent moduli always lies between the rigorous bounds, in agreement with the analytical results. Some of the practical details in the numerical solution of the coupled, nonlinear self‐consistency equations are discussed. Examples of velocities and attenuation coefficients estimated when the solid constituent possesses intrinsic absorption are also presented.
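The Hashin-Shtrikman bounds referenced above can be evaluated from a single two-phase expression: taking the stiffer phase as the "host" gives the upper bound on the effective bulk modulus, and the softer phase gives the lower bound. A sketch for the bulk modulus (the shear-modulus bound has an analogous form); the quartz/water moduli in the usage note are round illustrative values, not measured data:

```python
def hs_bulk(K1, G1, f1, K2, G2):
    """Hashin-Shtrikman effective bulk modulus of a two-phase mixture:
    K_HS = K1 + f2 / (1/(K2 - K1) + f1/(K1 + 4*G1/3)).
    Phase 1 stiffer -> upper bound; phase 1 softer -> lower bound."""
    f2 = 1.0 - f1
    return K1 + f2 / (1.0 / (K2 - K1) + f1 / (K1 + 4.0 * G1 / 3.0))
```

For a 60/40 quartz-water mixture (K, G roughly 36, 45 GPa for quartz and 2.2, 0 GPa for water), the two orderings of the phases bracket the effective bulk modulus between about 5 and 19 GPa.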
Article
Neural networks are typically designed to deal with data in tensor forms. In this paper, we propose a novel neural network architecture accepting graphs of arbitrary structure. Given a dataset containing graphs in the form of (G,y) where G is a graph and y is its class, we aim to develop neural networks that read the graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purpose, and 2) how to sequentially read a graph in a meaningful and consistent order. To address the first challenge, we design a localized graph convolution model and show its connection with two graph kernels. To address the second challenge, we design a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs. Experiments on benchmark graph classification datasets demonstrate that the proposed architecture achieves highly competitive performance with state-of-the-art graph kernels and other graph neural network methods. Moreover, the architecture allows end-to-end gradient-based training with original graphs, without the need to first transform graphs into vectors.
Article
Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications, where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on the existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this article, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art GNNs into four categories, namely, recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial-temporal GNNs. We further discuss the applications of GNNs across various domains and summarize the open-source codes, benchmark data sets, and model evaluation of GNNs. Finally, we propose potential research directions in this rapidly growing field.
Article
Rock compressibility is a major control of reservoir compaction, yet only limited core measurements are available to constrain estimates. Improved analytical and computational estimates of rock compressibility of reservoir rock can improve forecasts of reservoir production performance and the geomechanical integrity of compacting reservoirs. The fast-evolving digital rock technology can potentially overcome the need for simplification of pores (e.g., ellipsoids) to estimate rock compressibility as the computations are performed on an actual pore-scale image acquired using 3D microcomputed tomography (micro-CT). However, the computed compressibility using a digital image is impacted by numerous factors, including imaging conditions, image segmentation, constituent properties, choice of numerical simulator, rock field of view, how well the grain contacts are resolved in an image, and the treatment of grain-to-grain contacts. We analyze these factors and quantify their relative contribution to the rock moduli computed using micro-CT images of six rocks: a Fontainebleau sandstone sample, two Berea sandstone samples, a Castelgate sandstone sample, a grain pack, and a reservoir rock. We find that image-computed rock moduli are considerably stiffer than those inferred using laboratory-measured ultrasonic velocities. This disagreement cannot be solely explained by any one of the many controls when considered in isolation, but the controls can be ranked by their relative contribution to the overall rock compressibility. Among these factors, the image resolution generally has the largest impact on the quality of image-derived compressibility. For elasticity simulations, the adequacy of an image's resolution is controlled by the ratio of the contact length to the image voxel size. Images of poor resolution overestimate contact lengths, resulting in stiffer simulation results.
Article
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
Article
Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation function to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark, results we believe are strong enough to justify retiring this benchmark.
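The message passing template described in the abstract can be sketched minimally: each node aggregates (here, sums) messages from its neighbors and combines them with its own state through an update function. The weight matrices and the ReLU update below are illustrative stand-ins, not any specific model from the paper.

```python
import numpy as np

def mpnn_layer(h, adj, w_msg, w_upd):
    """One generic message-passing step on a graph.

    h     : (n, d) node states
    adj   : (n, n) 0/1 adjacency matrix
    w_msg : (d, d) message weight  (hypothetical parameters, for illustration)
    w_upd : (d, d) update weight
    Messages from neighbors are summed (the aggregation function), then
    combined with the current state through a ReLU update.
    """
    messages = adj @ (h @ w_msg)                  # m_v = sum_{u in N(v)} M(h_u)
    return np.maximum(0.0, h @ w_upd + messages)  # h_v' = U(h_v, m_v)

# Tiny example: a 3-node path graph 0-1-2 with 2-dimensional states.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
h = np.eye(3, 2)
h_next = mpnn_layer(h, adj, w_msg=np.eye(2), w_upd=np.eye(2))
```

Stacking several such layers and then applying a permutation-invariant readout (e.g., a sum over nodes) gives a function of the entire input graph, which is the structure the MPNN framework unifies.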
Article
Many signal processing problems involve data whose underlying structure is non-Euclidean, but may be modeled as a manifold or (combinatorial) graph. For instance, in social networks, the characteristics of users can be modeled as signals on the vertices of the social graph. Sensor networks are graph models of distributed interconnected sensors, whose readings are modelled as time-dependent signals on the vertices. In genetics, gene expression data are modeled as signals defined on the regulatory network. In neuroscience, graph models are used to represent anatomical and functional structures of the brain. Modeling data given as points in a high-dimensional Euclidean space using nearest neighbor graphs is an increasingly popular trend in data science, allowing practitioners access to the intrinsic structure of the data. In computer graphics and vision, 3D objects are modeled as Riemannian manifolds (surfaces) endowed with properties such as color texture. Even more complex examples include networks of operators, e.g., functional correspondences or difference operators in a collection of 3D shapes, or orientations of overlapping cameras in multi-view vision ("structure from motion") problems. The complexity of geometric data and the availability of very large datasets (in the case of social networks, on the scale of billions) suggest the use of machine learning techniques. In particular, deep learning has recently proven to be a powerful tool for problems with large datasets with underlying Euclidean structure. The purpose of this paper is to overview the problems arising in relation to geometric deep learning and present solutions existing today for this class of problems, as well as key difficulties and future research directions.
Article
In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance for which we can answer the following question: how difficult is it, in theory, to directly train a deep model? It is more difficult than for the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.
Article
Uniaxial Compressive Strength (UCS) and Modulus of Elasticity (E) of carbonate rocks are critical properties in the petroleum, mining, and civil industries. UCS is a measure of the strength of the rock and E depicts its stiffness; together they control the deformational behavior. However, the heterogeneity introduced by fractures and dissolution, along with the dependence on pH and temperature, makes carbonate rocks a difficult material to study, and the complex diagenesis and resulting pore system make the job even more daunting. An attempt is therefore made to predict these properties using simple index parameters such as porosity, density, P-wave velocity, Poisson's ratio, and point-load index. Multivariate Regression Analysis (MVRA) and Artificial Neural Networks (ANN) have been used for predicting the two properties, with accuracy assessed by root-mean-square error. The results show that ANN has a better predictive efficiency than MVRA, and they can be applied for predicting UCS and Young's modulus of carbonate rocks with reasonable confidence.
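The MVRA baseline amounts to an ordinary least-squares fit of UCS against the index parameters. The sketch below uses synthetic data with made-up coefficients purely to illustrate the regression step; it is not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five index properties (porosity, density,
# P-wave velocity, Poisson's ratio, point-load index); the data and the
# "true" coefficients here are invented for illustration only.
n = 200
X = rng.normal(size=(n, 5))
true_coeffs = np.array([-12.0, 30.0, 8.0, -5.0, 15.0])
ucs = X @ true_coeffs + 50.0 + rng.normal(scale=0.5, size=n)

# Multivariate linear regression via least squares: design matrix with an
# intercept column, solved in one call. This is the MVRA-style baseline
# the study compares ANN predictions against.
A = np.hstack([X, np.ones((n, 1))])
coeffs, *_ = np.linalg.lstsq(A, ucs, rcond=None)

pred = A @ coeffs
rmse = float(np.sqrt(np.mean((ucs - pred) ** 2)))
```

With this much data and little noise the regression recovers the generating coefficients closely; on real carbonate data the fit would be far looser, which is the gap the ANN is meant to close.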
Article
The ultrasonic compressional (Vp) and shear (Vs) velocities and first‐arrival peak amplitudes were measured as functions of differential pressure to 50 MPa and of the state of saturation on 75 different sandstone samples, with porosities φ ranging from 2 to 30 percent and volume clay content C ranging from 0 to 50 percent. Both Vp and Vs were found to correlate linearly with porosity and clay content in shaly sandstones. At a confining pressure of 40 MPa and a pore pressure of 1.0 MPa, the best least‐squares fits to the velocity data are Vp = 5.59 − 6.93φ − 2.18C and Vs = 3.52 − 4.91φ − 1.89C (in km/s, with φ and C as volume fractions). Deviations from these equations are less than 3 percent and 5 percent for Vp and Vs, respectively. The velocities of clean sandstones are significantly higher than those predicted by the above linear fits (about 7 percent for Vp and 11 percent for Vs), which indicates that a very small amount of clay (one or a few percent of volume fraction) significantly reduces the elastic moduli of sandstones. To first order, Vs is more sensitive to the porosity and clay content than is Vp. Consequently, velocity ratios Vp/Vs and their differences between fully saturated (s) and dry (d) samples also show clear correlation with the clay content and porosity. For shaly sandstones we conclude that, to first order, clay content is the next most important parameter after porosity in reducing velocities, with an effect which is about 0.31 (for Vp) to 0.38 (for Vs) times that of the effect of porosity.
Article
The magnitude of the stress drops that occur during frictional sliding on ground surfaces and on faults has been studied at confining pressures of as much as 5 kb. It was found that the stiffness of the loading system and the rate at which the load was applied had no effect on the magnitude of the sudden stress drops. Confining pressure and rock type were found to be the most important parameters. For example, sliding on fault surfaces in unaltered silicate rocks at confining pressure below 1 to 2 kb was stable; that is, stick slip was absent. At higher pressures, motion occurred by stick slip, and the magnitude of the stress drop during slip increased with pressure. Stick slip was absent at all pressures in gabbro, in dunite where the minerals are altered to serpentine, and in limestone and porous tuff. These results suggest that, if stick slip on a fault in the earth produces earthquakes, the earthquakes should become more abundant and increasingly severe with depth. Also, if a fault traverses various rock types, then over part of the fault elastic buildup of stress prior to sudden movement may occur at the same time as stable creeping motion elsewhere on the fault.
Article
The value of depth-first search or "backtracking" as a technique for solving graph problems is illustrated by two examples. An algorithm for finding the biconnected components of an undirected graph and an improved version of an algorithm for finding the strongly connected components of a directed graph are presented. The space and time requirements of both algorithms are bounded by k₁V + k₂E + k₃ for some constants k₁, k₂, and k₃, where V is the number of vertices and E is the number of edges of the graph being examined.
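The single-pass depth-first-search approach can be illustrated with a compact version of the strongly-connected-components algorithm. This recursive sketch, with the graph given as an adjacency dict, is for illustration; production code on large graphs would use an explicit stack to avoid recursion limits.

```python
def tarjan_scc(graph):
    """Strongly connected components via one depth-first search, O(V + E).
    `graph` maps each vertex to an iterable of its out-neighbors."""
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]   # DFS discovery number
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:               # tree edge: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:              # back/cross edge within stack
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:           # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs

# Example: a 3-cycle (0 -> 1 -> 2 -> 0) plus a tail vertex 3 gives two components.
g = {0: [1], 1: [2], 2: [0, 3], 3: []}
components = tarjan_scc(g)
```

The `lowlink` bookkeeping is what lets a single DFS recognize component roots, giving the linear k₁V + k₂E + k₃ bound the abstract describes.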
Article
The connection between the elastic behaviour of an aggregate and a single crystal is considered, with special reference to the theories of Voigt, Reuss, and Huber and Schmid. The elastic limit under various stress systems is also considered, in particular, it is shown that the tensile elastic limit of a face-centred aggregate cannot exceed two-thirds of the stress at which pronounced plastic distortion occurs.
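The Voigt (uniform strain) and Reuss (uniform stress) estimates considered here reduce, for scalar moduli, to arithmetic and harmonic volume averages, which bound the effective modulus of the aggregate; Hill's compromise is their mean. A minimal sketch, with assumed illustrative moduli:

```python
import numpy as np

def voigt_reuss_hill(fractions, moduli):
    """Voigt (arithmetic) and Reuss (harmonic) averages of phase moduli,
    which bound the effective modulus of an aggregate, plus Hill's
    average of the two bounds.

    fractions : volume fractions summing to 1
    moduli    : corresponding phase moduli (e.g., bulk moduli in GPa)
    """
    f = np.asarray(fractions, float)
    m = np.asarray(moduli, float)
    voigt = float(np.sum(f * m))          # uniform-strain assumption
    reuss = float(1.0 / np.sum(f / m))    # uniform-stress assumption
    hill = 0.5 * (voigt + reuss)
    return voigt, reuss, hill

# Example with assumed moduli: a 60/40 mix of a 37 GPa and a 21 GPa phase.
v, r, h = voigt_reuss_hill([0.6, 0.4], [37.0, 21.0])
```

The true effective modulus of any isotropic mixture of these phases must lie between the Reuss and Voigt values, which is why the pair is useful as a quick sanity check on simulated moduli.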
Article
Two different effective‐medium theories for two‐phase dielectric composites are considered. They are the effective medium approximation (EMA) and the differential effective medium approximation (DEM). Both theories correspond to realizable microgeometries in which the composite is built up incrementally through a process of homogenization. The grains are assumed to be similar ellipsoids randomly oriented, for which the microgeometry of EMA is symmetric. The microgeometry of DEM is always unsymmetric in that one phase acts as a backbone. It is shown that both EMA and DEM give effective dielectric constants that satisfy the Hashin–Shtrikman bounds. A new realization of the Hashin–Shtrikman bounds is presented in terms of DEM. The general solution to the DEM equation is obtained and the percolation properties of both theories are considered. EMA always has a percolation threshold, unless the grains are needle shaped. In contrast, DEM with the conductor as backbone always percolates. However, the threshold in EMA can be avoided by allowing the grain shape to vary with volume fraction. The grains must become needlelike as the conducting phase vanishes in order to maintain a finite conductivity. Specifically, the grain‐shape history for which EMA reproduces DEM is found. The grain shapes are oblate for low‐volume fractions of insulator. As the volume fraction increases, the shape does not vary much, until at some critical volume fraction there is a discontinuous transition in grain shape from oblate to prolate. In general, it is not always possible to map DEM onto an equivalent EMA, and even when it is, the mapping is not preserved under the interchange of the two phases. This is because DEM is inherently unsymmetric between the two phases.
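For randomly oriented spherical grains, the symmetric EMA reduces to a single scalar equation, Σᵢ fᵢ(εᵢ − ε*)/(εᵢ + 2ε*) = 0, which the sketch below solves by bisection. The phase values are assumed purely for illustration.

```python
def ema_dielectric(f1, eps1, eps2):
    """Symmetric (Bruggeman) effective-medium approximation for a
    two-phase composite of spherical grains: solve
        f1*(eps1 - e)/(eps1 + 2e) + f2*(eps2 - e)/(eps2 + 2e) = 0
    for the effective dielectric constant e. Bisection sketch;
    assumes eps1, eps2 > 0.
    """
    f2 = 1.0 - f1

    def residual(e):
        return (f1 * (eps1 - e) / (eps1 + 2 * e)
                + f2 * (eps2 - e) / (eps2 + 2 * e))

    lo, hi = min(eps1, eps2), max(eps1, eps2)   # root is bracketed here
    for _ in range(200):                        # plain bisection
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: equal fractions of an eps = 10 phase and an eps = 1 phase.
eps_eff = ema_dielectric(0.5, 10.0, 1.0)
```

For this 50/50 mix the EMA condition is a quadratic, 4e² − 11e − 20 = 0, whose positive root is ε* = 4, so the bisection converges to 4.0; the result always lies between the two phase values, consistent with the Hashin–Shtrikman bounds mentioned in the abstract.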
Article
Variational principles in the linear theory of elasticity, involving the elastic polarization tensor, have been applied to the derivation of upper and lower bounds for the effective elastic moduli of quasi-isotropic and quasi-homogeneous multiphase materials of arbitrary phase geometry. When the ratios between the different phase moduli are not too large the bounds derived are close enough to provide a good estimate for the effective moduli. Comparison of theoretical and experimental results for a two-phase alloy showed good agreement.
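For an isotropic two-phase composite, bounds of this kind take a closed form for the effective bulk modulus: K^(HS±) = K₁ + f₂ / (1/(K₂ − K₁) + f₁/(K₁ + 4μ₁/3)), with the stiffer phase taken as phase 1 for the upper bound and the softer phase for the lower. The sketch below uses this standard textbook form with assumed moduli, as an illustration of the kind of bounds derived here:

```python
def hs_bulk_bounds(f1, k1, mu1, f2, k2, mu2):
    """Hashin-Shtrikman bounds on the effective bulk modulus of an
    isotropic two-phase composite. Evaluating the same expression with
    each phase in turn as the 'host' gives the upper bound (stiffer
    phase as host) and the lower bound (softer phase as host)."""
    def hs(ka, mua, fa, kb, fb):
        # K = Ka + fb / (1/(Kb - Ka) + fa/(Ka + 4*mua/3))
        return ka + fb / (1.0 / (kb - ka) + fa / (ka + 4.0 * mua / 3.0))

    if k1 >= k2:
        return hs(k2, mu2, f2, k1, f1), hs(k1, mu1, f1, k2, f2)
    return hs(k1, mu1, f1, k2, f2), hs(k2, mu2, f2, k1, f1)

# Illustrative 50/50 mix with assumed moduli (GPa): a stiff phase
# (K=37, mu=44) and a soft phase (K=2.2, mu=1.0).
k_lower, k_upper = hs_bulk_bounds(0.5, 37.0, 44.0, 0.5, 2.2, 1.0)
```

When the moduli contrast between the phases is small the two bounds pinch together, which is why the abstract notes they can serve directly as a good estimate of the effective moduli.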
Article
Eigenvectors, and the related centrality measure Bonacich's c(β), have advantages over graph-theoretic measures like degree, betweenness, and closeness centrality: they can be used in signed and valued graphs and the beta parameter in c(β) permits the calculation of power measures for a wider variety of types of exchange. Degree, betweenness, and closeness centralities are defined only for classically simple graphs—those with strictly binary relations between vertices. Looking only at these classical graphs, where eigenvectors and graph–theoretic measures are competitors, eigenvector centrality is designed to be distinctively different from mere degree centrality when there are some high degree positions connected to many low degree others or some low degree positions are connected to a few high degree others. Therefore, it will not be distinctively different from degree when positions are all equal in degree (regular graphs) or in core-periphery structures in which high degree positions tend to be connected to each other.
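Eigenvector centrality itself can be computed by power iteration on the adjacency matrix; the sketch below iterates on A + I (same eigenvectors, shifted eigenvalues) so that it also converges on bipartite graphs. This is illustrative only and does not implement the β-parameterized c(β) measure discussed above.

```python
import numpy as np

def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality by power iteration: repeatedly propagate
    scores along edges, keep a copy of each node's current score (the
    implicit +I shift, which guarantees convergence even on bipartite
    graphs), and renormalize. Converges to the principal eigenvector
    of the adjacency matrix; works for valued (weighted) graphs too."""
    a = np.asarray(adj, float)
    x = np.ones(a.shape[0])
    for _ in range(iters):
        x = a @ x + x            # (A + I) x
        x /= np.linalg.norm(x)
    return x

# Star graph on 4 vertices: the hub (vertex 0) gets the highest score,
# unlike degree centrality, which would rank each leaf by its degree alone.
star = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]], float)
scores = eigenvector_centrality(star)
```

On this star the hub's score is boosted by being connected to many vertices while each leaf's score is boosted only through the hub, the "high degree connected to many low degree others" situation where the abstract says eigenvector centrality diverges from degree.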
Article
Having noted an important role of image stress in the work hardening of dispersion-hardened materials (1,3), the present paper discusses a method of calculating the average internal stress in the matrix of a material containing inclusions with transformation strain. It is shown that the average stress in the matrix is uniform throughout the material and independent of the position of the domain where the average treatment is carried out. It is also shown that the actual stress in the matrix is the average stress plus the locally fluctuating stress, the average of which vanishes in the matrix. Average elastic energy is also considered by taking into account the effects of the interaction among the inclusions and of the presence of the free boundary.
Article
The macroscopic elastic moduli of two-phase composites are estimated by a method that takes account of the inhomogeneity of stress and strain in a way similar to the Hershey-Kröner theory of crystalline aggregates. The phases may be arbitrarily aeolotropic and in any concentrations, but are required to have the character of a matrix and effectively ellipsoidal inclusions. Detailed results are given for an isotropic dispersion of spheres.