Figure 6 - uploaded by Randolph E. Bank
1. The X-Windows interface.

Source publication
Book
Intended mainly for use as a reference manual, this edition encompasses all the improvements in the PLTMG software package. Among these is an internal triangle tree data structure that has simplified the package's internal routines.

Citations

... We can assess the quality of the triangles generated for discretizing the subsurface (Fig. 1b). Let h1, h2 and h3 be the sides and A the area of a particular triangular cell; then the quality of this triangular cell is given by q = 4√3 A / (h1² + h2² + h3²) (Bank, 1990). ...
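The quality measure cited here (Bank's mean-ratio-style metric for triangles) can be sketched in a few lines. This is an illustrative Python implementation, not code from PLTMG:

```python
import math

def triangle_quality(p1, p2, p3):
    """Bank-style triangle quality q = 4*sqrt(3)*A / (h1^2 + h2^2 + h3^2).
    Returns 1.0 for an equilateral triangle and approaches 0 for slivers."""
    h1 = math.dist(p1, p2)
    h2 = math.dist(p2, p3)
    h3 = math.dist(p3, p1)
    # Shoelace formula for the (unsigned) triangle area.
    A = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                  - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return 4.0 * math.sqrt(3.0) * A / (h1**2 + h2**2 + h3**2)
```

For an equilateral triangle the metric evaluates to exactly 1; a right isoceles triangle scores about 0.87, comfortably above the q > 0.6 acceptability threshold quoted by one of the citing works below.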
Article
Induced polarization datasets, which assess the overvoltage characteristics of the subsurface, are becoming increasingly prevalent. These datasets are not limited to mineral resource exploration but are also gaining traction in various engineering applications. We present the results of joint inversion of direct current resistivity and induced polarization data. We have obtained two subsurface properties, electrical resistivity and chargeability, by minimizing a single objective function. For this purpose, we have computed the distorted apparent resistivity using the secondary voltage measured at each time gate. We compared our results with the conventional inversion of induced polarization on synthetic data with both high- and low-chargeability mineralized ore bodies. We noted that our approach provides a more reasonable model for highly chargeable ore bodies. However, both the joint inversion and nonlinear approaches give similar results for low-chargeability ore bodies. Nonlinear inversion of induced polarization was accomplished by solving two objective functions: one for resistivity and another for induced polarization. Thus, error in the resistivity inversion propagates into the chargeability computation as a consequence. Further, the developed joint inversion algorithm is also applied to a field data set measured at the Beldih mine located in West Bengal, India. We compared our final chargeability models with the local geology and with geologic models obtained via other geophysical methods on the same profiles, and found an excellent correlation between the models.
... where A denotes the triangle area, l_i, i ∈ {1, 2, 3}, is the length of each edge incident to the triangle, n_t is the triangle normal, and n_o is the offset normal at the triangle center. Equation (4), also known as the mean-ratio metric, was already used for mesh optimization [Ban98;RL17]. Equation (5) denotes the normalized angle between the triangle normal and the offset surface normal at the triangle center. Equation (6) combines these two metrics, with a greater preference given to the angle metric. ...
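A minimal sketch of the normal-deviation term described here, assuming unit normals; the division by π mapping the angle to [0, 1] is an assumed normalization, since the exact form of Eq. (5) is not reproduced in this excerpt:

```python
import math

def normal_deviation(n_tri, n_off):
    """Normalized angle between a unit triangle normal n_tri and the unit
    offset-surface normal n_off at the triangle center (cf. Eq. (5)).
    The division by pi is an assumed normalization onto [0, 1]."""
    # Clamp the dot product to guard acos against floating-point overshoot.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n_tri, n_off))))
    return math.acos(dot) / math.pi
```

Identical normals give 0, perpendicular normals 0.5, and opposite normals 1, so the term penalizes facets that tilt away from the exact offset surface.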
Article
We introduce a reliable method to generate offset meshes from input triangle meshes or triangle soups. Our method proceeds in two steps. The first step performs a Dual Contouring method on the offset surface, operating on an adaptive octree that is refined in areas where the offset topology is complex. Our approach substantially reduces memory consumption and runtime compared to isosurfacing methods operating on uniform grids. The second step improves the output Dual Contouring mesh with an offset‐aware remeshing algorithm to reduce the normal deviation between the mesh facets and the exact offset. This remeshing process reconstructs concave sharp features and approximates smooth shapes in convex areas up to a user‐defined precision. We show the effectiveness and versatility of our method by applying it to a wide range of input meshes. We also benchmark our method on the Thingi10k dataset: watertight and topologically 2‐manifold offset meshes are obtained for 100% of the cases.
... The PLTMG package [16] is one of the oldest open finite element software packages for solving elliptic problems that is still under active maintenance, and it includes many advanced features such as hp-adaptive refinement, a posteriori error estimation, domain decomposition and multigrid preconditioning. The a posteriori error estimation is based on a superconvergent patch recovery estimation technique introduced in [18]. ...
... The PLTMG package [45] is one of the oldest open finite element software packages for solving elliptic problems that is still under active maintenance, and it includes many advanced features such as hp-adaptive refinement, a posteriori error estimation, domain decomposition and multigrid preconditioning. The a posteriori error estimation is based on a superconvergent patch recovery estimation technique introduced in [47]. ...
Thesis
This manuscript is concerned with a posteriori error estimation for the finite element discretization of standard and fractional partial differential equations, as well as an application of fractional calculus to the modeling of the human meniscus by poro-elasticity equations. In the introduction, we give an overview of the literature on a posteriori error estimation in finite element methods and on adaptive refinement methods. We emphasize the state-of-the-art of the Bank–Weiser a posteriori error estimation method and of the convergence results for adaptive refinement methods. Then, we move to fractional partial differential equations. We give some of the most common discretization methods for equations based on the fractional Laplacian operator. We review some results of a priori error estimation for the finite element discretization of these equations and give the state-of-the-art of a posteriori error estimation. Finally, we review the literature on the use of Caputo's fractional derivative in applications, focusing on anomalous diffusion and poro-elasticity. The rest of the manuscript is organized as follows. Chapter 1 is concerned with a proof of the reliability of the Bank–Weiser estimator for three-dimensional problems, extending a result from the literature. In Chapter 2 we present a numerical study of the Bank–Weiser estimator, provide a novel implementation of the estimator in the FEniCS finite element software and apply it to a variety of elliptic equations as well as goal-oriented error estimation. In Chapter 3 we derive a novel a posteriori estimator for the L2 error induced by the finite element discretization of equations based on the fractional Laplacian operator. In Chapter 4 we present new theoretical results on the convergence of a rational approximation method, with consequences for the approximation of fractional norms, as well as a priori error estimation results for the finite element discretization of fractional equations.
Finally, in Chapter 5 we provide an application of fractional calculus to the study of the human meniscus via poro-elasticity equations.
... where h1, h2, and h3 (in meters) are the sides and A is the area of a particular triangular cell. If q > 0.6, the triangle is considered to be of acceptable quality [33]. The triangle quality equals one when all sides are equal. ...
Article
Recovering the actual subsurface electrical resistivity properties from electrical resistivity tomography data is challenging because the inverse problem is nonlinear and ill-posed. This paper proposes a Variational Encoder-Decoder (VED) based network to obtain the resistivity model, mapping the apparent resistivity data (input) to the true resistivity data (output). Since deep learning (DL) models are highly dependent on training sets, and providing a meaningful geological resistivity model is complex, we first developed an algorithm to construct many realistic synthetic resistivity models. Our algorithm automatically constructs an apparent resistivity pseudo-section from these resistivity models. For comparison, we further computed the resistivity from two other neural architectures, UNet and attention UNet, with and without depth encoding of the input apparent resistivity data. In the end, we compared our deep learning results with traditional inversion and borewell data on apparent resistivity datasets collected for aquifer mapping in the hard-rock terrain of the West Medinipur district of West Bengal, India. A detailed qualitative and quantitative evaluation reveals that our VED approach is the most effective for the inversion compared to the other approaches considered.
... The PLTMG package [12] is one of the oldest open finite element software packages for solving elliptic problems that is still under active maintenance, and it includes many advanced features such as hp-adaptive refinement, a posteriori error estimation, domain decomposition and multigrid preconditioning. The a posteriori error estimation is based on a superconvergent patch recovery estimation technique introduced in [14]. ...
... Note that, due to (12), the matrix of S in the pair of bases ...
Preprint
In the seminal paper of Bank and Weiser [Math. Comp., 44 (1985), pp. 283-301] a new a posteriori estimator was introduced. This estimator requires the solution of a local Neumann problem on every cell of the finite element mesh. Despite the promise of Bank-Weiser type estimators, namely locality, computational efficiency, and asymptotic sharpness, they have seen little use in practical computational problems. The focus of this contribution is to describe a novel implementation of hierarchical estimators of the Bank-Weiser type in a modern high-level finite element software with automatic code generation capabilities. We show how to use the estimator to drive (goal-oriented) adaptive mesh refinement and how to apply it to mixed approximations of nearly-incompressible elasticity problems. We provide comparisons with various other commonly used estimators. An open-source implementation based on the FEniCS Project finite element software is provided as supplementary material.
... The triangle quality measures how close a triangle in the mesh is to an equilateral triangle and is calculated by Eq. (4) (Bank, 1998). ...
Article
Using an unstructured mesh, a new two-dimensional joint inversion algorithm has been developed for radiomagnetotelluric and direct current resistivity data. The unstructured mesh is generated with triangular cells, whose vertical and lateral lengths increase towards depth. The Finite Element Method (FEM) has been used in the forward modelling part of the developed joint inversion algorithm. In previous studies, structured grid-based joint inversion algorithms were developed using the Finite Difference Method (FDM). In structured grid-based algorithms, when the mesh is generated with rectangular cells, the vertical lengths of the cells grow towards depth while the lateral lengths remain constant. With a structured mesh, undulating surface topography cannot be represented well enough. Also, because of the incompatible aspect ratio of model cell sizes in deeper model sections, the resolution of the model parameters decreases and they cannot be resolved well with structured grids. Imaging of surface topography and underground resistivity structures by the new algorithm requires fewer elements than algorithms using structured grids. Therefore, the developed algorithm is faster than traditional 2D inversion algorithms. Furthermore, the resolution of the deeper model parameters has been increased by using the unstructured grid. A regularized inversion scheme with a smoothness-constrained stabilizer has been employed to invert the data. First, we tested the developed joint inversion algorithm using synthetic data simplified from archaeological and mine-site scenarios, and the results were compared with conventional algorithms using structured grids. We also tested our algorithm with real data collected from a mineral investigation site approximately 10 km east of the Elbistan district of Kahramanmaraş province, in the west of the Taurus Mountains, Turkey.
The results show that the developed joint inversion algorithm is a powerful tool to detect both resistive and conductive targets.
... In order to demonstrate the strength of DMO we use the standard mean ratio quality metric for triangles, q_m^tri(e) [2,5,6,12,17,37], ...
Chapter
We present an algorithm called discrete mesh optimization (DMO), a greedy approach to topology-consistent mesh quality improvement. The method requires a quality metric for all element types that appear in a given mesh. It is easily adaptable to any mesh and metric, as it does not rely on differentiable functions. We give examples for triangle, quadrilateral, and tetrahedral meshes and for various metrics. The method improves quality iteratively by finding the optimal position for each vertex on a discretized domain. We show that DMO outperforms other state-of-the-art methods in terms of convergence and runtime.
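The per-vertex greedy search that DMO performs can be illustrated in 2D as follows; the mean-ratio metric, grid resolution, and search radius below are illustrative choices, not the paper's exact parameters:

```python
import math

def mean_ratio(p1, p2, p3):
    """Mean-ratio quality: 1 for an equilateral triangle, -> 0 when degenerate."""
    a2 = (p2[0] - p1[0])**2 + (p2[1] - p1[1])**2
    b2 = (p3[0] - p2[0])**2 + (p3[1] - p2[1])**2
    c2 = (p1[0] - p3[0])**2 + (p1[1] - p3[1])**2
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return 4.0 * math.sqrt(3.0) * area / (a2 + b2 + c2)

def dmo_vertex_step(v, ring, radius=0.5, n=8):
    """One DMO-style step for a single vertex v: evaluate candidate positions
    on a discrete n x n grid around v and return the candidate maximizing the
    minimum quality of the incident triangles. `ring` holds the fixed opposite
    edges (a, b) of each triangle incident to v."""
    best, best_q = v, min(mean_ratio(v, a, b) for a, b in ring)
    for i in range(n):
        for j in range(n):
            c = (v[0] - radius + 2 * radius * i / (n - 1),
                 v[1] - radius + 2 * radius * j / (n - 1))
            q = min(mean_ratio(c, a, b) for a, b in ring)
            if q > best_q:
                best, best_q = c, q
    return best, best_q
```

Because only discrete candidate positions are compared, no derivatives of the metric are needed, which is what makes the approach adaptable to arbitrary (even non-differentiable) quality measures.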
... Getting high accuracy in the presence of corner singularities, however, still requires care, as we found from the responses to a challenge problem involving the L-shaped domain that we posed to the NA Digest email discussion group in November 2018 [13,46]. The most common approach to such a problem would be to use the finite element method (FEM), for which there are widely distributed, freely available software packages such as deal.II, FEniCS, Firedrake, IFISS, PLTMG, and XLiFE++ [1,4,8,31,35,53]. However, it is not straightforward to get, say, 8 digits of accuracy by such methods. ...
Preprint
A new method is introduced for solving Laplace problems on 2D regions with corners by approximation of boundary data by the real part of a rational function with fixed poles exponentially clustered near each corner. Greatly extending a result of D. J. Newman in 1964 in approximation theory, we first prove that such approximations can achieve root-exponential convergence for a wide range of problems, all the way up to the corner singularities. We then develop a numerical method to compute approximations via linear least-squares fitting on the boundary. Typical problems are solved in < 1s on a laptop to 8-digit accuracy, with the accuracy guaranteed in the interior by the maximum principle. The computed solution is represented globally by a single formula, which can be evaluated in tens of microseconds at each point.
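The exponential clustering of poles toward each corner can be sketched as below; the clustering rate sigma = 4 and the pole count are illustrative assumptions, not necessarily the paper's exact parameters:

```python
import math

def clustered_pole_distances(n, sigma=4.0, scale=1.0):
    """Distances from a corner of n poles clustered exponentially toward it:
    d_j = scale * exp(-sigma * (sqrt(n) - sqrt(j))), j = 1..n.
    The rate sigma = 4 is an illustrative choice. The poles themselves would
    then be placed at corner + d_j * w for a unit vector w pointing along the
    exterior bisector, outside the problem domain."""
    return [scale * math.exp(-sigma * (math.sqrt(n) - math.sqrt(j)))
            for j in range(1, n + 1)]
```

The distances increase monotonically away from the corner, with the innermost poles exponentially close to it, which is what allows root-exponential convergence right up to the corner singularity.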
... We use the following formula from Bank [1998] to quantify how close to equilateral the mesh triangles are: ...
Technical Report
OceanMesh2D is a set of MATLAB scripts to assemble and post-process two-dimensional (2D) triangular meshes used in finite element numerical simulations. It is designed with coastal ocean models in mind, although it can mesh any 2D region bounded by a polygon. It can be used to build meshes of varying size (up to 10-20 million vertices or so) based on user-defined parameters of edge-length functions that control how the resolution is distributed in space. The meshes created with the software are nearly reproducible, since they are parameterizable and can be assembled quickly on a personal computer, on the order of minutes to hours.
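An edge-length function of the kind described can be sketched as follows. OceanMesh2D itself is written in MATLAB; this Python sketch uses hypothetical names and default values chosen for illustration, not the package's API:

```python
def distance_sizing(d, h_min=100.0, grade=0.15, h_max=5000.0):
    """Hypothetical distance-based edge-length function (all lengths in
    meters): the target edge length grows linearly with distance d from the
    shoreline at rate `grade`, clamped between h_min and h_max. This mirrors
    the kind of sizing rule that edge-length functions encode; the names and
    defaults here are illustrative only."""
    return min(h_max, h_min + grade * max(0.0, d))
```

A mesh generator would evaluate such a function at every candidate vertex, yielding fine triangles near the coast and progressively coarser ones offshore.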