Book

Stochastic Finite Elements

Authors:
... Consequently, various correlation functions have been proposed, cf. [14,27]. Here, the exponential correlation function in the form ...
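The exponential correlation function named in the snippet can be sketched numerically; a minimal sketch, assuming an illustrative correlation length `ell` and a 1-D grid (neither taken from the text):

```python
# Hypothetical sketch of the exponential correlation function,
# rho(x1, x2) = exp(-|x1 - x2| / ell); `ell` and the grid are
# illustrative assumptions, not values from the cited work.
import numpy as np

def exponential_correlation(x1, x2, ell=0.5):
    """Exponential correlation between points x1 and x2."""
    return np.exp(-np.abs(x1 - x2) / ell)

# Assemble the correlation matrix on a uniform grid via broadcasting.
x = np.linspace(0.0, 1.0, 5)
C = exponential_correlation(x[:, None], x[None, :])

print(C.shape)                        # (5, 5)
print(np.allclose(np.diag(C), 1.0))  # True: unit correlation on the diagonal
```

The matrix is symmetric positive definite, which is what the KL-expansion machinery discussed later in this listing relies on.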
... For this, we make use of the approach as presented in [12]. The basic idea of the approach is to use the approximation of the internal variable in Eq. (14) in evolution ...
... The calculation of the expectation of the internal variable is straightforward. With the series expansion of Eq. (14), the expectation is given by ...
Article
Full-text available
A robust method for uncertainty quantification undeniably leads to greater certainty in simulation results and more sustainable designs. The inherent uncertainties of the world around us render everything stochastic, from material parameters through geometries to forces. Consequently, the results of engineering simulations should reflect this randomness. Many methods have been developed for uncertainty quantification for linear elastic material behavior. However, real-life structures often exhibit inelastic material behavior such as visco-plasticity. Inelastic material behavior is described by additional internal variables with accompanying differential equations. This drastically increases the complexity of computing stochastic quantities, e.g., expectation and standard deviation. The time-separated stochastic mechanics is a novel method for the uncertainty quantification of inelastic materials. It is based on a separation of all fields into a sum of products of time-dependent but deterministic and stochastic but time-independent terms. Only a low number of deterministic finite element simulations is then required to track the effect of (in)homogeneous material fluctuations on stress and internal variables. Despite the low computational effort, the results are often indistinguishable from reference Monte Carlo simulations for a variety of boundary conditions and loading scenarios.
... with β ∈ R, together with either homogeneous Dirichlet boundary conditions (10) or absorbing boundary conditions (11). We discretize this boundary value problem as described in Sect. ...
... 1. In the case of absorbing boundary conditions (11), the spectrum of the preconditioned matrix AM⁻¹ is contained in the closed disk ...
... We consider the stochastic Helmholtz equation (9) in Q = ]0, 1[² with absorbing boundary conditions (11), the point source f(x, y) = δ((x, y) − (1/2, 1/2)) as right-hand side, and space-dependent random wavenumber ...
Article
Full-text available
We investigate the Helmholtz equation with suitable boundary conditions and uncertainties in the wavenumber. Thus the wavenumber is modeled as a random variable or a random field. We discretize the Helmholtz equation using finite differences in space, which leads to a linear system of algebraic equations including random variables. A stochastic Galerkin method yields a deterministic linear system of algebraic equations. This linear system is high-dimensional, sparse and complex symmetric but, in general, not Hermitian. We therefore solve this system iteratively with GMRES and propose two preconditioners: a complex shifted Laplace preconditioner and a mean value preconditioner. Both preconditioners reduce the number of iteration steps as well as the computation time in our numerical experiments.
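The mean value preconditioning idea from the abstract can be illustrated on a toy problem. A minimal sketch, assuming a 1-D Helmholtz-type system with an illustrative complex shift, wavenumber, and perturbation size (none from the paper):

```python
# Toy sketch of mean-value preconditioning: a complex symmetric
# Helmholtz-like system solved with GMRES, preconditioned by LU factors
# of the mean matrix. All numerical values are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
h = 1.0 / (n + 1)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
k_mean, dk = 20.0, 0.5                 # mean wavenumber and a small deviation
# The complex shift stands in for absorbing boundary conditions.
A_mean = (K - k_mean**2 * sp.eye(n) - 1j * k_mean * sp.eye(n)).tocsc()
A = (A_mean - (2 * k_mean * dk) * sp.eye(n)).tocsc()   # one sampled system

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0                        # discrete point source

lu = spla.splu(A_mean)                 # factorize the mean matrix once
M = spla.LinearOperator((n, n), matvec=lu.solve, dtype=complex)

x, info = spla.gmres(A, b, M=M)        # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b) < 1e-2)
```

Because A differs from A_mean only by a small shift, the preconditioned spectrum clusters around 1 (inside a small disk, consistent with the snippet above the abstract), so GMRES converges in a handful of iterations.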
... Series expansion methods used for random field discretization include Karhunen-Loève (KL) expansion [7,8], orthogonal polynomial expansion [9], etc. The KL expansion, introduced by Ghanem et al. [10], has emerged as the most extensively applied series expansion method for random field discretization, particularly in the context of uncertainties related to input parameters [11]. Moreover, when considering the global mean square error with respect to the number of random variables, the KL expansion is optimal compared to other series expansion methods [10]. ...
... The KL expansion, introduced by Ghanem et al. [10], has emerged as the most extensively applied series expansion method for random field discretization, particularly in the context of uncertainties related to input parameters [11]. Moreover, when considering the global mean square error with respect to the number of random variables, the KL expansion is optimal compared to other series expansion methods [10]. KL expansion uses a linear combination of orthogonal bases to represent random fields, and the selected orthogonal functions are the eigenfunctions of the Fredholm integral equation of the second kind. ...
... The wavelet-Galerkin method [15] also represents a Galerkin-based technique for solving Fredholm integral eigenvalue problems in one-dimensional domains. For two-dimensional and three-dimensional domains with random fields, Ghanem et al. [10] advocated using the finite element method (FEM) to obtain approximate solutions for the KL expansion, while Papaioannou [13] examined the convergence of the FEM in two-dimensional domains. Based on the FEM-Galerkin method for the discretization of the IEVP, Allaix et al. [1] proposed a genetic algorithm to achieve an optimal discretization of 2D homogeneous random fields. ...
Article
Full-text available
In the context of global mean square error concerning the number of random variables in the representation, the Karhunen–Loève (KL) expansion is the optimal series expansion method for random field discretization. The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem (IEVP). The paper proposes an interpolation method based on different interpolation basis functions such as moving least squares (MLS), least squares (LS), and finite element method (FEM) to solve the IEVP. Compared with the Galerkin method based on finite element or Legendre polynomials, the main advantage of the interpolation method is that, in the calculation of eigenvalues and eigenfunctions in one-dimensional random fields, the integral matrix containing the covariance function requires only a single integral, as opposed to the two-fold integral required by the Galerkin method. The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples. Furthermore, based on the KL expansion and polynomial chaos expansion, the stochastic analysis of two-dimensional regular and irregular domains is conducted, and the basis function of the extended finite element method (XFEM) is introduced as the interpolation basis function in two-dimensional irregular domains to solve the IEVP.
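The Fredholm IEVP at the center of this abstract can be made concrete with a simple discretization. A minimal Nyström-style sketch (plainly not the paper's interpolation scheme), assuming an illustrative exponential kernel with correlation length 0.5 on [0, 1]:

```python
# Nystrom-style sketch of the Fredholm integral eigenvalue problem behind
# the KL expansion: midpoint quadrature turns the integral eigenproblem
# into a matrix one. Kernel and correlation length are illustrative.
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n                          # midpoint nodes
w = 1.0 / n                                           # quadrature weight
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)    # exponential kernel

evals, evecs = np.linalg.eigh(w * C)                  # discrete eigenproblem
evals = evals[::-1]                                   # descending order

print(evals[:3])      # leading KL eigenvalues
print(evals.sum())    # ~1, the integral of C(x, x) over [0, 1]
```

The eigenvalue sum approximates the trace of the covariance operator, a quick sanity check on any IEVP discretization.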
... Polynomial chaos expansion (PCE) was first introduced by Wiener in 1938 [14] to represent a stochastic output in terms of a weighted sum of orthogonal polynomials of random inputs, but it was limited to Gaussian distributed inputs only. Subsequently, a generalization of classical polynomial chaos expansion (called gPCE) was proposed in [15], which can be applied to various continuous and discrete distributions. In the past few years, gPCE has shown encouraging outcomes for uncertainty quantification (UQ) of practical systems across multiple domains. ...
... Major Contribution Key Application [14] Comprehensive overview of the theory of homogeneous chaos and its various aspects Fluid dynamics [15] Developing gPCE for Gaussian/ non-Gaussian distributions ...
... Here, a meta-model of SDS is developed by using the Lite-PCEs proposed in [28]. Briefly, the proposed model is trained in Stage-1 based on historical data of loads and PV (input variables) and the output variables obtained from the optimization module as in (4)–(15). Later, in Stage-2, for loads and PV forecasted by the proposed model in [30], the trained meta-model is utilized to directly predict values and probability distributions of output variables for day-ahead scheduling without resolving the optimization problem (4)–(15). ...
Article
Full-text available
Integration of distributed energy resources (DERs) presents several challenges for grid operators, including managing the intermittent output of renewable energy sources, balancing the grid's supply and demand, and ensuring grid stability and reliability. To this end, this paper presents a novel approach for stochastic day-ahead scheduling (SDS) of DERs, with the aim of minimizing network losses, maximizing solar PV generations, efficiently balancing the system, and mitigating the risks of line congestion and voltage limit violations under uncertain conditions. A meta-model of SDS is developed using the so-called lite polynomial chaos expansion, which can handle a large number of uncertain and non-parametric sources and provide a fast and scalable solution. The proposed framework is evaluated using the modified IEEE 33-bus, 69-bus and 141-bus distribution networks with forecasted values of load and solar generation on a rolling-time horizon. The results demonstrate that the proposed method achieves the same level of accuracy as Monte Carlo simulation-based SDS while having low computational complexity. Further, we have demonstrated the impact of battery energy storage systems on the SDS of DERs.
... In general, uncertainty is always present in aeroelastic systems in the form of aleatory uncertainty (inherent uncertainty) and epistemic uncertainty (arising from a lack of knowledge and data) [5]. The aleatory uncertainty, which is irreducible in nature, is modeled by random variables/random fields; it is propagated through the mathematical model to quantify the response quantity [6,7]. Lindsley et al. [8] carried out limit cycle oscillation (LCO) analysis of a panel in supersonic flow by modeling elastic modulus as a Gaussian random field and uncertain boundary condition as a random variable using Monte Carlo simulation (MCS). ...
... where δr_n = r_n − r_n*, r_n is the nth random variable, and (·)* denotes the term evaluated at the design point. Since the input parameters are random, the response parameters are also random in nature [6]. ...
... ξ_n is a standard normal random variable satisfying the orthonormality condition with respect to the Gaussian measure, E[ξ_m ξ_n] = δ_mn, where E[·] and δ_mn are the mathematical expectation operator and the Kronecker delta function, respectively. In this study, the exponential covariance kernel is adopted because the eigenvalues and eigenfunctions of this covariance kernel can be obtained analytically [6]. The exponential covariance function for the random field can be written as: ...
Article
Full-text available
In this work, the stochastic aeroelastic stability and flutter reliability of a wing are investigated using the stochastic finite element method in conjunction with the first order reliability method (FORM). Three stability conditions are proposed for estimating flutter onset in aeroelastic systems in the presence of uncertainties. Here, stability conditions are represented as limit state functions and defined in conditional sense on flow velocity for flutter reliability studies. Due to various representation of limit states, a lack of invariance in reliability estimates is observed using the conventional flutter reliability approach such as the first order second moment method. In this paper, a general FORM is proposed, which is suitable for all the limit state functions considered and shows invariance in reliability estimates. The proposed approach is applied to a wing having uncertain stiffness parameters, modeled by either random variables or random fields. Random fields are represented by a Karhunen–Loeve expansion, and the effect of correlation length on the flutter reliability of the wing is discussed. The computational efficiency of the FORM algorithm for various limit states in comparison of MCS is also discussed.
... The first is the probabilistic approach, which assumes that uncertainties in design parameters are modeled using statistical parameters. This allows for the determination of the probability of various scenarios, and examples of its application can be found in works by Ghanem and Spanos [1], Capillon et al. [2], as well as Wang et al. [3], including research within the context of structures with viscoelastic elements. However, many cases present challenges such as insufficient statistical data or difficulties in precisely determining probability distributions for all design parameters. ...
... In summary, based on the available information about the parameters and considering the nature of the problem under consideration, methods for addressing uncertainties can be summarized in the three following subpoints: Probabilistic methods [1][2][3][36][37][38], where parameters are modeled as random variables. Advantage: they allow for the consideration of a wide range of probability distributions; disadvantage: they require the consideration of a large number of examples. ...
Article
Full-text available
The paper presents a method for determining the dynamic response of systems containing viscoelastic damping elements with uncertain design parameters. A viscoelastic material is characterized using classical and fractional rheological models. The assumption is made that the lower and upper bounds of the uncertain parameters are known and represented as interval values, which are then subjected to interval arithmetic operations. The equations of motion are transformed into the frequency domain using Laplace transformation. To evaluate the uncertain dynamic response, the frequency response function is determined by transforming the equations of motion into a system of linear interval equations. Nevertheless, direct interval arithmetic often leads to significant overestimation. To address this issue, this paper employs the element-by-element technique along with a specific transformation to minimize redundancy. The system of interval equations obtained is solved iteratively using the fixed-point iteration method. As demonstrated in the examples, this method, which combines the iterative solving of interval equations with the proposed technique of equation formulation, enables a solution to be found rapidly and significantly reduces overestimation. Notably, this approach has been applied to systems containing viscoelastic elements for the first time. Additionally, the proposed notation accommodates both parallel and series configurations of damping elements and springs within rheological models.
... This approach provides the spectral expansion of the stochastic processes by using the Hermite orthogonal polynomials in terms of Gaussian random variables. Later, this concept found broad application in the field of engineering [67,68]. Recently, the so-called generalized PC has been developed in [10], which uses the Wiener-Askey scheme to represent non-Gaussian processes. ...
... This projection equation provides a basis for two distinctive methods: (i) an intrusive technique, called spectral Galerkin method [67], which projects the governing equations using eqs. (18) and (17), and (ii) a non-intrusive technique, i.e., the pseudo-spectral approach (also called the PC-based SCM) [64], which belongs to very efficient quadrature-based methods [54]. ...
... PDEs with parametric or random coefficients have been treated extensively in the past decades; see [19,33,40,45] and references therein, as well as [3,4,5,10,11,13,15,21,43,46,47,48,49,50] (just to mention a few), with an emphasis on stochastic computation as well as approximation. From a numerical analysis point of view, a large portion is devoted to elliptic PDEs. ...
... Secondly, A_z⁻¹(H) = D(A_z) = D holds P_Z-almost surely by definition of A_z as the restriction of A_z to the preimage of H under A_z. From the FE estimate (22) with ℓ = 2 and the almost sure equivalence (19) of the H²-norm and ‖A_z ·‖_L², we deduce that P_Z-almost surely ...
... (34) as, For the simulations, Δt was taken as Δt_cri. Equation (35) served as a general criterion to break bonds. All the bonds associated with the weights w_NL^(p,q) were broken as the specimen experienced their corresponding strains ε_NL^(p,q), as shown in Eq. (33). ...
... can be optimally truncated using the first M terms [35,36] as, ...
Article
Full-text available
Based on stochasticity in local and nonlocal deformation-gamuts, a stochastic nonlocal equation of motion to model elastoplastic deformation of 1-D bars made of stochastic materials is proposed in this study. Stochasticity in the energy-densities as well as energy-states across the spatial domain of given material and stochasticity in the deformation-gamuts parameters are considered, and their physical interpretations are discussed. Numerical simulations of the specimens of two distinct materials, subjected to monotonic as well as cyclic loadings, are carried out. Specimens are discretized using stochastic as well as uniform grids. Thirty realizations of each stochastic process are considered. The mean values of the results from all realizations are found to be in good agreement with deterministic values, theoretical estimations and experimental results published in open literature.
... One approach to overcome the burden of MCS is to use surrogate models that offer an approximate functional relationship between random inputs and any desired structural response (output). Several surrogate-model building approaches have been employed, such as generalized polynomial chaos expansion (PCE) [19,20], data-driven PCE [21], stochastic collocation [22], low-rank tensor approximation [23,24], Kriging [25][26][27], neural networks [28,29], support vector machines [30,31], etc. Some researchers have proposed combining different types of surrogate models, such as Polynomial-Chaos Kriging (PC-Kriging) [32], multi-fidelity Kriging models [33], a combination of PC-Kriging and probability density evolution [34], Kriging and Artificial Neural Networks [35], etc. ...
... In uncertainty propagation using PCE, the model response, Y, of a system involving random inputs, X = [X_1, X_2, …, X_n], can be expressed as a series of orthogonal basis functions [19]: ...
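The PCE representation quoted in the snippet can be fitted in a few lines. A hedged sketch, assuming a toy one-dimensional response Y = exp(0.3·ξ) and an illustrative truncation order (neither from the cited article):

```python
# Sketch of a PCE surrogate: expand a scalar response in probabilists'
# Hermite polynomials of a single standard normal input and fit the
# coefficients by least squares. Model and order are illustrative.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)        # samples of the random input
y = np.exp(0.3 * xi)                  # toy model response Y = M(X)

P = 5                                 # truncation order
Psi = hermevander(xi, P)              # columns: He_0(xi) .. He_5(xi)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# coef[0] approximates E[Y]; for exp(0.3*xi) the exact mean is exp(0.3**2/2).
print(coef[0], np.exp(0.3**2 / 2))
```

Because the Hermite basis is orthogonal under the Gaussian measure, the zeroth coefficient directly estimates the response mean, which is the main appeal of PCE surrogates for uncertainty propagation.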
... This choice is motivated by the fact that, for {θ_n}, n = 1, …, d_θ, i.i.d. standard normal random variables, this is a truncated Karhunen-Loève expansion of log(κ(θ, x)) ∼ GP(0, exp(−‖x − x′‖_1)) (Ghanem and Spanos 1991). ...
Article
Full-text available
This work is concerned with the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations. A particular focus is on the regime where only a small amount of training data is available. In this regime the type of Gaussian prior used is of critical importance with respect to how well the surrogate model will perform in terms of Bayesian inversion. We extend the framework of Raissi et. al. (2017) to construct PDE-informed Gaussian priors that we then use to construct different approximate posteriors. A number of different numerical experiments illustrate the superiority of the PDE-informed Gaussian priors over more traditional priors.
... In addition, the Karhunen-Loève (KL) expansion has been shown to give the best accuracy (i.e., the lowest mean square error) for a given number of random variables when the exponential autocorrelation function is used. Consequently, the exponential covariance function has been chosen (Ghanem and Spanos 2003; Sudret and Der Kiureghian 2000), where each term C_IJ of the covariance matrix C corresponds to the covariance between segments I and J, located at the positions x_I and x_J, and is given by ...
... While the method has been continuously investigated for efficiency and accuracy, it still presents challenges when dealing with situations that have substantial uncertainties [12][13][14]. Ghanem and Spanos [15] first proposed the spectral stochastic finite element method (SSFEM), which includes the polynomial chaos [16,17] and Karhunen-Loève expansion [18,19] as popular expansion methods. The SSFEM is useful in engineering applications where system response needs to be implicitly computed. ...
... This section will provide a brief overview of the Karhunen-Loeve (K-L) series expansion method and the stochastic harmonic function (SHF) method. The K-L series expansion method (Ghanem and Spanos 2003) is based on the symmetric positive-definite property of the correlation function ρ(x_1, x_2), which can be decomposed as follows: ...
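The decomposition mentioned in the snippet leads directly to field synthesis. A minimal sketch, assuming an illustrative grid, correlation length 0.3, and truncation order M = 10 (all assumptions, not values from the cited work):

```python
# Sketch of the K-L series expansion: decompose the symmetric
# positive-definite correlation matrix and synthesize a zero-mean
# Gaussian field realization from its leading eigenpairs.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
R = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)  # rho(x1, x2)

lam, phi = np.linalg.eigh(R)              # spectral decomposition of R
lam, phi = lam[::-1], phi[:, ::-1]        # reorder to descending eigenvalues

M = 10                                     # truncation order
xi = rng.standard_normal(M)                # independent standard normals
Z = phi[:, :M] @ (np.sqrt(lam[:M]) * xi)   # one field realization

print(Z.shape)   # (100,)
```

Drawing fresh ξ vectors yields independent realizations of the discretized random field, which is how K-L-based Monte Carlo studies in the articles above generate their samples.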
Article
Reinforcement corrosion is an important part of durability research of concrete structures. The process of reinforcement corrosion is highly random due to various factors, and the effect of spatial distribution of reinforcement corrosion is obvious in practical structures. This paper took corroded reinforced concrete beams as the research object, summarized the numerical characteristics of reinforcement corrosion in the test, and then established cross section corrosion extent random field models. Finally, this paper carried out a time-varying reliability analysis of the bending and shear capacity of the corroded reinforced concrete beams considering the time-varying effect of corrosion. The results showed that the failure probability of the flexural and shear bearing capacity of corroded RC beams is close to zero at the early stage of corrosion, the reliability index of the flexural bearing capacity is lower than the recommended value of 3.2 after 30 years, and the failure probability increases rapidly after 40 years. The shear capacity is lower than the reliability index 3.7 after 20 years, and the failure probability is greater than 0.3844 after 25 years.
... In addition, the Karhunen-Loève (KL) expansion has been shown to give the best accuracy (i.e., the lowest mean square error) for a given number of random variables when the exponential autocorrelation function is used. Consequently, the exponential covariance function has been chosen (Ghanem and Spanos 2003; Sudret and Der Kiureghian 2000), where each term C_IJ of the covariance matrix C corresponds to the covariance between segments I and J, located at the positions x_I and x_J, and is given by ...
Article
The optimal inspection planning of pipelines is mandatory for safe and economical operation under degradation processes, particularly corrosion induced by soil and environmental aggressiveness. In addition to the random nature of corrosion, the applied inspection tools and procedures provide significant uncertainties related to detecting corrosion defects and measuring the observed defect size. This work aims to consider the imperfect inspection results in terms of defect detection and sizing for optimal planning of pipeline maintenance. The pipeline degradation considers the spatial variability of corrosion in different exposure zones and its autocorrelation using the Karhunen–Loève expansion. Monte Carlo simulations compute the time-variant failure probability of a system formed by components in series. Then, the maintenance model is developed based on the inspection decision tree, where imperfect inspections and repair decision errors are considered. The developed model is applied to a gas pipeline under space-variant corrosion to show the effects of the main system parameters. The results show that defect detection and size measurements play a fundamental role in the optimal maintenance strategy, where improving inspection quality decreases the optimal inspection interval. In contrast, a higher inspection cost increases the optimal interval. Moreover, ignoring the impact of spatial variability and unacceptable error results in significant errors in the expected total cost and optimal inspection interval.
... It is well known that several mathematical and numerical methods have been established in engineering mechanics to model the aforementioned phenomena and to predict their structural impacts. Besides the analytical calculus of the basic probabilistic parameters [20], the Bayesian approach [21], and the Monte Carlo simulations family [22], one may find Karhunen-Loeve or polynomial chaos expansions [23], some semi-analytical techniques [20], as well as the group of stochastic perturbation methods [20,24]. The latter is formulated using various-order approaches such as first-, second-, third-, or general-order Taylor expansions, leading to the determination of the first two, three, or four basic statistics of structural behavior. ...
Article
Full-text available
The main issue in this work is to study the limit functions necessary for the reliability assessment of structural steel with the use of the relative entropy apparatus. This will be done using a few different mathematical theories relevant to this relative entropy, namely those proposed by Bhattacharyya, Kullback-Leibler, Jeffreys, and Hellinger. Probabilistic analysis in the presence of uncertainty in material characteristics will be delivered using three different numerical strategies: Monte Carlo simulation, the stochastic perturbation method, and the semi-analytical approach. All of these methods are based on weighted least squares approximations of the structural response functions versus the given uncertainty source, and they allow efficient determination of the first two probabilistic moments of the structural responses including stresses, displacements, and strains. The entire computational implementation will be delivered using the finite element method system ABAQUS and the computer algebra program MAPLE, where relative entropies, as well as polynomial response functions, will be determined. This study demonstrates that the relative entropies may be efficiently used in reliability assessment alongside the widely engaged first-order reliability method (FORM). The relative entropy concept enables us to study the probabilistic distance of any two distributions, so that structural resistance and extreme effort in elastoplastic behavior need not be restricted to Gaussian distributions.
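The stochastic perturbation method named in the abstract can be sketched in miniature: Taylor-expand the response about the input mean and read off the first two moments. A hedged sketch, assuming an illustrative response f(b) = b² and input moments (not from the paper):

```python
# Sketch of the stochastic perturbation method: second-order Taylor
# estimates of the first two moments of a response f(b) of a Gaussian
# input b ~ N(mu, sigma^2). Response and moments are illustrative.

def moments_perturbation(f, mu, sigma, h=1e-5):
    """Perturbation estimates of E[f(b)] and Var[f(b)]."""
    d1 = (f(mu + h) - f(mu - h)) / (2 * h)            # first derivative
    d2 = (f(mu + h) - 2 * f(mu) + f(mu - h)) / h**2   # second derivative
    mean = f(mu) + 0.5 * d2 * sigma**2                # second-order mean
    var = d1**2 * sigma**2                            # first-order variance
    return mean, var

# For f(b) = b**2 with b ~ N(2, 0.1**2), the exact mean is mu^2 + sigma^2 = 4.01.
mean, var = moments_perturbation(lambda b: b * b, 2.0, 0.1)
print(mean, var)
```

For this quadratic response the second-order mean estimate is exact, which illustrates why low-order perturbation expansions work well for mildly nonlinear responses with small input variance.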
... Specifically, cases 1 and 2 focus on the effects of the variance of the logarithm of the saturated hydraulic conductivity, σ²_lnKs; cases 1, 3, and 4 focus on the effects of the horizontal correlation scale of saturated hydraulic conductivity, λ_h; cases 1 and 5 focus on the effects of the vertical correlation scale, λ_v; and cases 1 and 6 focus on the effects of the one-way time consumption of fluctuations, T. To demonstrate the effectiveness of the probabilistic description method, one realization of each case is generated for cases 1 to 5. These realizations are generated using the Karhunen-Loève (K-L) expansion [22][23][24][25][26][27][28][29]. The spatial distributions of the K_s fields, phreatic surface (denoted by the black dot-dash line), pressure head p (denoted by the white dashed lines with labels), and streamlines (denoted by the black solid lines with arrows) of the five cases at t = 3 days are presented in Figure 6. ...
Article
Full-text available
The effect of the variability in a layered structure, characterized by the spatial variability of the saturated hydraulic conductivity, on the distribution of a pressure head p in a foundation subjected to water level fluctuation in a reservoir is investigated with the aid of random field theory, the Karhunen–Loève (K-L) expansion, the first-order moment approach, and cross-correlation analysis. The results show that the variability in the foundation structure has significant impacts on the groundwater response to the reservoir's water level fluctuations. Regions with relatively large uncertainties of the p and σ_p values in the foundation are those around the initial water level at the reservoir side, and those at the distal end away from the reservoir. In addition, a larger variance of K_s, denoted σ²_lnKs, a larger correlation scale in the horizontal direction λ_h, a larger correlation scale in the vertical direction λ_v, and a larger one-way time consumption of fluctuations T all lead to a larger uncertainty in p. Moreover, the four factors (σ²_lnKs, λ_h, λ_v, and T) all have positive correlations with σ_p. σ²_lnKs has the largest impact on σ_p in the foundation, λ_v has the second largest, and λ_h the smallest. A foundation with small K_s values around the initial water level at the reservoir side and large K_s values around the highest water level at the reservoir side may produce larger p values in the foundation. These results yield useful insight into the effect of the variability in a layered structure on the distribution of the pressure head in a foundation subjected to water level fluctuation in a reservoir.
... According to this method, we can discretize and approximate Z(z, ω) in a form with a finite number of random variables. In brief, by considering a random field Z(z, ω) with a known mean Z̄(z) and covariance function C_Z(z_1, z_2), the K-L decomposition of Z(z, ω) is given according to [11] by: ...
... The QoI, Y = M(X), can then be represented as [3]: ...
... In the numerical experiments below, we take the unit square D = (0, 1)² and fix f ≡ 1. To represent the diffusion coefficient u, we employ a truncated Karhunen-Loève expansion [19,5]. We simulate M = 10000 samples of the coefficient u, generated by the following Fourier representation ...
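Sampling a coefficient field from a truncated Fourier-type expansion, as in the quoted setup, can be sketched as follows; the mode count, coefficient decay, and the exponential used to enforce positivity are illustrative assumptions, not the paper's exact representation:

```python
# Hedged sketch: sample a positive diffusion coefficient on the unit
# square from a truncated Fourier-type random expansion. All numerical
# choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, Kmax = 32, 8                             # grid per side, modes per side
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")

u = np.zeros((n, n))
for j in range(1, Kmax + 1):
    for k in range(1, Kmax + 1):
        amp = rng.standard_normal() / (j**2 + k**2)   # decaying amplitudes
        u += amp * np.sin(j * np.pi * X) * np.sin(k * np.pi * Y)

kappa = np.exp(u)                           # strictly positive coefficient
print(kappa.shape, kappa.min() > 0.0)
```

Repeating the loop with fresh draws produces the independent coefficient samples that such experiments feed into a PDE solver.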
Preprint
Full-text available
Invertible neural networks (INNs) represent an important class of deep neural network architectures that have been widely used in several applications. The universal approximation properties of INNs have also been established recently. However, the approximation rate of INNs is largely missing. In this work, we provide an analysis of the capacity of a class of coupling-based INNs to approximate bi-Lipschitz continuous mappings on a compact domain, and the result shows that it can well approximate both forward and inverse maps simultaneously. Furthermore, we develop an approach for approximating bi-Lipschitz maps on infinite-dimensional spaces that simultaneously approximate the forward and inverse maps, by combining model reduction with principal component analysis and INNs for approximating the reduced map, and we analyze the overall approximation error of the approach. Preliminary numerical results show the feasibility of the approach for approximating the solution operator for parameterized second-order elliptic problems.
Article
Full-text available
A collection of feed-forward neural networks (FNN) for estimating the limit pressure load and the corresponding displacements at the limit state of a footing settlement is presented. The training procedure is supervised learning with the mean squared error norm as the loss function. The input dataset originates from Monte Carlo simulations for a variety of loadings and stochastic uncertainty of the material of the clayey soil domain. The material yield function is the Modified Cam Clay model. The accuracy of the FNNs in terms of relative error is no more than 10⁻⁵, and this applies to all output variables. Furthermore, the number of epochs required for training the FNNs is found to be small, on the order of 90,000, leading to an alleviated data cost and computational expense. The input uncertainty of the Karhunen-Loeve random field sum appears to provide the most detrimental values for the displacement field of the soil domain. The most unfavorable situations for the displacement field result in limit displacements on the order of 0.05 m, which may cause structural collapse if they appear in the founded structure. These networks provide an easy and reliable estimation of the failure of shallow foundations and can therefore be a useful tool for geotechnical engineering analysis and design.
Article
Full-text available
In this paper, we introduce the numerical strategy for mixed uncertainty propagation based on probability and Dempster–Shafer theories, and apply it to the computational model of peristalsis in a heart-pumping system. Specifically, the stochastic uncertainty in the system is represented with random variables while epistemic uncertainty is represented using non-probabilistic uncertain variables with belief functions. The mixed uncertainty is propagated through the system, resulting in the uncertainty in the chosen quantities of interest (QoI, such as flow volume, cost of transport and work). With the introduced numerical method, the uncertainty in the statistics of QoIs will be represented using belief functions. With three representative probability distributions consistent with the belief structure, global sensitivity analysis has also been implemented to identify important uncertain factors and the results have been compared between different peristalsis models. To reduce the computational cost, physics constrained generalized polynomial chaos method is adopted to construct cheaper surrogates as approximations for the full simulation.
Article
The article presents in detail a physics-based, time-invariant, generalized flutter reliability approach for a wing. For carrying out the flutter reliability analysis, generalized first-order reliability method (FORM) and generalized second-order reliability method (SORM) algorithms are developed. The FORM algorithm requires the first derivative, and the SORM algorithm requires both the first and second derivatives of a limit state function; for these derivatives, adjoint and direct approaches for computing eigenpair derivatives are proposed that ensure uniqueness of the eigenvector and its derivative. The stability parameter, the damping ratio (the real part of an eigenvalue), is considered as an implicit limit state function. To capture the occurrence of the flutter phenomenon, the limit state function is defined in a conditional sense by imposing a condition on the flow velocity. The aerodynamic parameter, the slope of the lift coefficient curve (C_L), and the structural parameters, the bending rigidity (EI) and torsional rigidity (GJ) of the aeroelastic system, are considered as independent Gaussian random variables; the structural parameters are additionally modeled as second-order, constant-mean, stationary Gaussian random fields with exponential covariance structures. To represent the random fields in finite dimensions, the fields are discretized using the Karhunen–Loève expansion. The analysis shows that the derivatives of an eigenvalue obtained from the adjoint and direct approaches are the same, so the cumulative distribution functions (CDFs) of the flutter velocity are the same irrespective of the approach chosen; this is also reflected in the CDFs obtained using the various reliability methods based on the adjoint and direct approaches: the first-order second-moment method, generalized FORM, and generalized SORM.
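The Karhunen–Loève discretization used in several of the works listed here can be illustrated numerically. The following is a minimal sketch, assuming a 1D Gaussian field with exponential covariance on a uniform grid (grid size, variance, and correlation length are illustrative choices, not values from the article): the covariance matrix is assembled, its eigenpairs form the discrete KL basis, and a truncated expansion with independent standard normal coefficients yields one realization of the field.

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of a 1D stationary Gaussian random field
# with exponential covariance C(x, y) = s2 * exp(-|x - y| / l).
# n, s2, and l are assumed, illustrative values.
n, s2, l = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
C = s2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l)

# Eigen-decomposition of the covariance matrix gives the discrete KL eigenpairs.
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1]          # sort eigenvalues in decreasing order
vals, vecs = vals[idx], vecs[:, idx]

m = 10                                # truncation order
xi = np.random.default_rng(0).standard_normal(m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # one zero-mean realization
```

Truncation at m modes keeps the m largest eigenvalues; the retained fraction of total variance, `vals[:m].sum() / vals.sum()`, is the usual criterion for choosing m.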
Article
Full-text available
The dynamic response of a rotor system can be significantly affected by uncertainties. It is essential to understand and quantify the influence of uncertain parameters on the rotor response. The purpose of this study is to evaluate the effect of perturbation in critical system parameters on the dynamic behaviour of a flexible rotor with localised contact. A test rig consisting of a localised contact element is developed. The critical rotor speeds are identified using the Campbell diagram, internal resonance diagram and experimental run-up analysis. Modal interactions influence the critical rotor speed. Modal interactions can be controlled using system parameters such as contact location, eccentric mass location and contact friction. Experimentally the sensitivity of the system parameters on the dynamic response is analysed. A stochastic finite element model is developed for the rotor–stator system using uncertain critical system parameters. The generalized Polynomial Chaos expansion (gPC) is used to evaluate the stochastic response of the finite element model with uncertainties. The approach uses a numerical collocation method for determining the coefficients of the gPC. The modal interaction is altered by varying the contact location, impulsive load location and the rotor–stator interface friction. The results from the experimental and numerical study indicate that the dynamic response at internal resonance rotor speed is more sensitive to the perturbation in system parameters compared to the critical speed corresponding to the first whirling mode.
Article
Full-text available
Lightweight structures are of paramount importance in engineering applications. Optimal designs combining various optimization techniques can maximize desired mechanical characteristics while minimizing undesired static or dynamic behaviors. However, these optimized structures usually have low safety factors, which makes it necessary to consider uncertainties in project design to ensure their reliability. This paper presents a systematic approach to quantify uncertainties in an optimized structural member of an unmanned aerial vehicle (UAV) wing used in remote sensing. The UAV wing structural member was optimized using the Multi-Objective Genetic Algorithm, and balsa wood, a lightweight and ecological material with high variability in mechanical properties, was used for its manufacture. To analyze the structural integrity of the UAV wing, the present study quantified parametric uncertainties in material properties and manufacturing processes using stochastic models. A probabilistic approach was adopted, which revealed a 37% reduction in the structure's safety coefficient. Various conclusions were drawn from this research, which highlights the importance of considering uncertainties in the design of optimized structures to ensure their reliability.
Article
Full-text available
In this paper, some exact solutions of the stochastic generalized nonlinear shallow water wave equation are investigated. This equation is important in fluid mechanics, since it can model the propagation of disturbances in water and other incompressible fluids. In contrast to what is usually considered in the literature, the two dispersion coefficients of the nonlinear terms are treated as dependent random quantities, which is the more realistic case. The modified extended tanh-function (METF) method is combined with the random variable transformation (RVT) technique to obtain full probabilistic solutions of the problem by computing the probability density functions (PDFs) of the solution processes. From the probability density function, any statistical moment of the solution can be evaluated. The findings are applied efficiently through two different applications for the input random variables (the dispersion coefficients). Finally, numerical results are presented graphically along the spatial dimension at a given wave speed and time. The obtained results confirm that the proposed technique is efficient and powerful for obtaining analytical probabilistic solutions of the problem.
Article
Full-text available
This article is the second part of a previous article devoted to the deterministic aspects. Here, we present a comprehensive study on the development and application of a novel stochastic second-gradient continuum model for particle-based materials. An application is presented concerning colloidal crystals. Since we are dealing with particle-based materials, factors such as the topology of contacts, particle sizes, shapes, and geometric structure are not considered. The mechanical properties of the introduced second-gradient continuum are modeled as random fields to account for uncertainties. The stochastic computational model is based on a mixed finite element (FE), and the Monte Carlo (MC) numerical simulation method is used as a stochastic solver. Finally, the resulting stochastic second-gradient model is applied to analyze colloidal crystals, which have wide-ranging applications. The simulations show the effects of second-order gradient on the mechanical response of a colloidal crystal under axial load, for which there could be significant fluctuations in the displacements.
Article
Full-text available
We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs). This contrasts with current approaches for “neural PDE solvers” that employ collocation-based methods to make pointwise predictions of solutions to PDEs. The mesh-based approach has the advantage of naturally enforcing different boundary conditions, as well as the ease of invoking well-developed PDE theory (including analysis of numerical stability and convergence) to obtain capacity bounds for the proposed neural networks on discretized domains. We explore our mesh-based strategy, called NeuFENet, using a weighted Galerkin loss function based on the finite element method (FEM) on a parametric elliptic PDE. The weighted Galerkin loss (FEM loss) is similar to an energy functional that produces improved solutions, satisfies a priori mesh convergence, and can model Dirichlet and Neumann boundary conditions. We prove theoretically, and illustrate with experiments, convergence results analogous to the mesh convergence analysis deployed in finite element solutions to PDEs. These results suggest that a mesh-based neural network is a promising approach for solving parametric PDEs with theoretical bounds.
Article
Full-text available
This paper discusses wave propagation in unbounded particle-based materials described by a second-gradient continuum model, recently introduced by the authors, to provide an identification technique. The term particle-based materials denotes materials modeled as assemblies of particles, disregarding typical granular material properties such as contact topology, granulometry, grain sizes, and shapes. This work introduces a center-symmetric second-gradient continuum resulting from pairwise interactions. The corresponding Euler-Lagrange equations (equilibrium equations) are derived using the least action principle. This approach unveils non-classical interactions within subdomains. A novel, symmetric, and positive-definite acoustic tensor is constructed, allowing for an exploration of wave propagation through perturbation techniques. The properties of this acoustic tensor enable the extension of an identification procedure from Cauchy (classical) elasticity to the proposed second-gradient continuum model. Potential applications concern polymers, composite materials, and liquid crystals.
Article
Full-text available
Stochastic heat transfer simulations play a pivotal role in capturing real-world uncertainties, where randomness in material properties and boundary conditions is present. Traditional methods, such as Monte Carlo simulation, perturbation methods, and polynomial chaos expansion, have provided valuable insights but face challenges in efficiency and accuracy, particularly in high-dimensional systems. This paper introduces a methodology for one-dimensional heat transfer modeling that incorporates random boundary conditions and treats thermal conductivity as a random process. The proposed approach integrates Monte Carlo simulation with Cholesky decomposition to generate a vector of thermal conductivity realizations, capturing the inherent randomness in material properties. Finite element method (FEM) simulations based on these realizations yield rich datasets of temperatures at various locations. A deep neural network (DNN) is then trained on this FEM data, enabling not only rapid and accurate temperature predictions but also bidirectional computation: predicting temperatures from thermal conductivity and, inversely, estimating thermal conductivity from observed temperatures.
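The Cholesky-based sampling step described in the abstract can be sketched as follows, assuming a 1D grid with an exponential correlation model (mean, standard deviation, and correlation length are assumed values for illustration): the covariance matrix is factored once, and correlated conductivity realizations are obtained by multiplying the factor with independent standard normal vectors.

```python
import numpy as np

# Generate correlated realizations of a 1-D thermal-conductivity field via
# Cholesky decomposition. All numerical values here are illustrative.
rng = np.random.default_rng(1)
n, k_mean, sigma, l = 50, 2.0, 0.2, 0.25
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l)

L_chol = np.linalg.cholesky(C)           # C = L L^T
z = rng.standard_normal((n, 1000))       # independent standard normals
k_samples = k_mean + L_chol @ z          # 1000 correlated conductivity fields

# Each column is one realization to be fed into a deterministic FEM solve;
# the empirical covariance of the samples approximates C.
emp_cov = np.cov(k_samples)
```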
Article
Full-text available
The uncertainties in material and other properties of structures are often spatially correlated. We introduce an efficient technique for representing and processing spatially correlated random fields in robust topology optimisation of lattice structures. Robust optimisation takes into account the statistics of the structural response to obtain a design whose performance is less sensitive to the specific realisation of the random field. We represent Gaussian random fields on lattices by leveraging the established link between random fields and stochastic partial differential equations (SPDEs). The precision matrix, i.e. the inverse of the covariance matrix, of a random field with Matérn covariance is equal to the finite element stiffness matrix of a possibly fractional PDE with a second-order elliptic operator. We consider the finite element discretisation of the PDE on the lattice to obtain a random field which, by design, takes into account its geometry and connectivity. The so-obtained random field can be interpreted as a physics-informed prior by the hypothesis that the elliptic PDE models the physical processes occurring during manufacturing, like heat and mass diffusion. Although the proposed approach is general, we demonstrate its application to lattices modelled as pin-jointed trusses with uncertainties in member Young’s moduli. We consider as a cost function the weighted sum of the expectation and standard deviation of the structural compliance. To compute the expectation and standard deviation and their gradients with respect to member cross-sections we use a first-order Taylor series approximation. The cost function and its gradient are computed using only sparse matrix operations. We demonstrate the efficiency of the proposed approach using several lattice examples with isotropic, anisotropic and non-stationary random fields and up to eighty thousand random and optimisation variables.
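The first-order Taylor approximation of the compliance statistics mentioned in the abstract can be sketched on a toy system. For a response c(E) of random stiffness parameters E with mean mu and covariance S, the first-order estimates are E[c] ≈ c(mu) and Var[c] ≈ gᵀ S g, where g is the gradient of c at the mean; the two-spring example below is a hypothetical stand-in for the truss model, not the paper's implementation.

```python
import numpy as np

# First-order (mean-value) Taylor estimates of the mean and variance of a
# compliance-like response. The two-spring system and its numbers are assumed.
def compliance(E):
    # two springs in series under a unit load: c = 1/E1 + 1/E2
    return 1.0 / E[0] + 1.0 / E[1]

mu = np.array([10.0, 20.0])              # mean stiffness parameters
S = np.array([[1.0, 0.3],
              [0.3, 0.5]])               # covariance of the two parameters

g = np.array([-1.0 / mu[0]**2, -1.0 / mu[1]**2])   # analytic gradient at mu
mean_c = compliance(mu)                  # first-order mean estimate
var_c = g @ S @ g                        # first-order variance estimate
std_c = np.sqrt(var_c)
```

Both estimates require only one response evaluation and one gradient at the mean, which is why the cost function and its gradient in the paper can be computed with sparse matrix operations alone.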
Book
First Challenge, SEG.A. 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings
Article
Full-text available
When training a parametric surrogate to represent a real-world complex system in real time, it is commonly assumed that the values of the parameters defining the system are known with absolute confidence. Consequently, during the training process, the focus is directed exclusively towards optimizing the accuracy of the surrogate's output. However, real physics is characterized by increased complexity and unpredictability; notably, a certain degree of uncertainty may exist in determining the system's parameters. Therefore, in this paper, we account for the propagation of these uncertainties through the surrogate using a standard Monte Carlo methodology. Subsequently, we propose a novel regression technique based on optimal transport (OT) to infer, in real time, the impact of uncertainty in the surrogate's input on the precision of its output. The OT-based regression allows for the inference of fields emulating physical reality more accurately than classical regression techniques, including advanced ones.
Article
Many stochastic continuous-state dynamical systems can be modeled as probabilistic programs with nonlinear non-polynomial updates in non-nested loops. We present two methods, one approximate and one exact, to automatically compute, without sampling, moment-based invariants for such probabilistic programs as closed-form solutions parameterized by the loop iteration. The exact method applies to probabilistic programs with trigonometric and exponential updates and is embedded in the Polar tool. The approximate method for moment computation applies to any nonlinear random function as it exploits the theory of polynomial chaos expansion to approximate non-polynomial updates as the sum of orthogonal polynomials. This translates the dynamical system to a non-nested loop with polynomial updates, and thus renders it conformable with the Polar tool that computes the moments of any order of the state variables. We evaluate our methods on an extensive number of examples ranging from modeling monetary policy to several physical motion systems in uncertain environments. The experimental results demonstrate the advantages of our approach with respect to the current state-of-the-art.
Chapter
In practical engineering problems, reliability analysis often involves nonlinear, implicit, and computationally expensive relationships between the performance and the uncertain parameters, which makes it very challenging to conduct time-dependent reliability analysis promptly and accurately. This chapter introduces the concept of time-dependent reliability analysis, followed by current time-dependent reliability analysis methods, which are divided into three categories: outcrossing rate methods, extreme value methods, and response surrogate-based methods. Several advanced response surrogate-based methods are explained in particular, including the confidence-based adaptive extreme response surface method, the equivalent stochastic process transformation method, the instantaneous response surface method (t-IRS), and surrogate-based time-dependent reliability analysis.
Article
Full-text available
This paper is the first attempt, to the best of the authors' knowledge, to explore the effect of stochastic material parameters on the dynamic response of a functionally graded graphene-platelet-reinforced composite plate subject to a moving load. A novel stochastic calculation scheme is presented which considers the spatial variability of structural material parameters for a more precise and effective analysis of plate structures. Using the radial point interpolation method (RPIM), the governing equations of the plate are derived based on first-order shear deformation theory and Hamilton's principle. The elastic moduli of the graphene platelets (GPLs) and the matrix are modeled as separate random fields, which are discretized using the Karhunen–Loève expansion (KLE) method. The random variables obtained by the KLE were utilized in conjunction with the improved point estimation method (PEM) and the Newmark-\(\beta\) method to determine the stochastic dynamic response. By comparing the results with those obtained using the Monte Carlo method, the correctness and effectiveness of the proposed PEM-RPIM stochastic calculation scheme are confirmed. Subsequently, the scheme was used to compute the coefficient of variation of the maximum dynamic deflection at the center of the plate, and a sensitivity analysis was conducted. The results indicate that the distribution pattern and the weight fraction of GPLs have an impact on deflection sensitivity. Moreover, the deflection sensitivity is found to be significantly higher in response to variations of the random field \(E_{\text{GPL}}\).
Article
Full-text available
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to properly choose effective basis functions, such that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of our proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
Article
Full-text available
The stochastic solution of wave propagation through simplified shallow water equations, described by a system of 1D and 2D linear equations, has been investigated by considering the initial condition as a source of uncertainty. The Karhunen–Loève expansion (KLE) method is applied as an alternative to the Monte Carlo simulation (MCS) method, and the uncertainty associated with the moments of the flow characteristics is quantified. The initial condition \({H}_{0}\), considered as the input random field, is decomposed in terms of a set of orthogonal Gaussian random variables \(\left\{{\xi }_{i}\right\}\); the coefficients of the series are related to the eigenvalues and eigenfunctions of the covariance function of \({H}_{0}\). The flow depth \(H\) and the flow velocities \(U\) and \(V\) are expanded as infinite series whose terms \(H^{(n)}\), \(U^{(n)}\) and \(V^{(n)}\) represent the depth and velocities of the nth order, and a set of recursive equations is derived for them. Then, \(H^{(n)}\), \(U^{(n)}\) and \(V^{(n)}\) are decomposed with polynomial expansions in terms of products of \({\xi }_{i}\), and their coefficients are determined by substituting the decompositions of \({H}_{0}\), \(H^{(n)}\), \(U^{(n)}\) and \(V^{(n)}\) into the recursive equations. An MCS is also conducted; the means and variances of the flow depth \(H\) and the flow velocities \(U\) and \(V\) obtained from the KLE approximations match those of the MCS with the same accuracy, yet with much less computation time and effort.
Article
Full-text available
The efficient discretization of a multidimensional random field with high definition and large geometric size remains a significant challenge. Compared with the simulation of one-dimensional or two-dimensional random fields, the generation of three-dimensional (3-D) random fields using the traditional Karhunen–Loève (K–L) expansion method tends to require a relatively long computational time and a larger amount of physical memory. In the present study, a decomposed K–L expansion scheme is proposed, which is applicable when a separable autocorrelation function (ACF) is used. The separability in the ACF is a precondition for the implementation of the proposed method. The proposed method decomposes the discretization of a 3-D random field into that of separate one-dimensional random fields and computes the eigenpairs in each dimension respectively. The accuracy and efficiency of the proposed method are demonstrated, and the numerical solutions of the eigenpairs used for the random field discretization are validated by comparing them with the theoretical solutions. Comparisons with the traditional K–L expansion method showed that the proposed scheme significantly reduced the computational time and memory requirements. This makes it potentially useful for the discretization of a multidimensional random field with a large geometric size and high definition, which can be used in stochastic finite element analysis. The proposed random field discretization method was applied to analyze a 3-D saturated clay slope under undrained conditions. The results of the stochastic finite element analysis suggest that the 3-D model is rather advantageous for estimating slope stability when the spatial variability of soil properties is considered. The 3-D model can capture more spatial details in soil distribution and its effects on failure mechanisms and safety factors from a theoretical point of view. 
Therefore, the proposed decomposed K–L expansion method for 3-D random field discretization has strong application potential in geotechnical problems.
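The decomposition idea for a separable autocorrelation function can be sketched in two dimensions (the extension to 3-D is analogous). Assuming an exponential ACF that factors per dimension, the multi-dimensional covariance matrix is the Kronecker product of the 1-D covariance matrices, so its eigenpairs are products of the 1-D eigenpairs and only small 1-D eigenproblems need to be solved; grid sizes and correlation lengths below are illustrative.

```python
import numpy as np

def eigpairs_1d(n, l):
    # 1-D exponential-covariance eigenpairs on a uniform grid over [0, 1]
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / l)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]        # largest eigenvalues first
    return vals[order], vecs[:, order], C

nx, ny = 40, 30                           # assumed grid sizes
vx, fx, Cx = eigpairs_1d(nx, 0.4)         # assumed correlation lengths
vy, fy, Cy = eigpairs_1d(ny, 0.2)

# For a separable ACF the 2-D covariance is kron(Cx, Cy): its eigenvalues are
# all products vx[i] * vy[j], and its eigenvectors are kron(fx[:, i], fy[:, j]).
lam2d = np.sort(np.outer(vx, vy).ravel())[::-1]
```

The full 2-D problem here has nx * ny = 1200 degrees of freedom, but only a 40x40 and a 30x30 eigenproblem are solved, which is the source of the time and memory savings reported above.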
Article
Full-text available
Currently, numerical-simulation-based slope reliability analysis (NSB-SRA) considering spatial variability is still a time-consuming task. To address this problem, this study proposes an efficient NSB-SRA method. A dual dimensionality-reduction technique is first employed to greatly reduce the number of random variables required for establishing a limit-equilibrium-analysis-based multivariate adaptive regression splines (MARS) model. Then, the response conditioning method is used to select the failure samples predicted by the MARS model as samples for performing NSB-SRA. Finally, the proposed method is validated through two spatially variable slope examples. The results show that MARS + FDM is an efficient solution for performing NSB-SRA, especially for low-probability-level problems. Moreover, NSB-SRA is necessary for cases with a large horizontal scale of fluctuation, a small vertical scale of fluctuation, large variability of the undrained shear strength, and strong positive cross-correlation between cohesion and internal friction angle, because neglecting it leads to an unreliable assessment of slope reliability.
Chapter
Theory-guided neural networks have recently been used to solve partial differential equations. The method has received widespread attention due to its low data requirements and its adherence to physical laws during training. However, the choice of the penalty coefficient used to include physical laws as a penalty term in the loss function inevitably affects the performance of the model. In this paper, we propose a comprehensive theory-guided framework based on a bilevel programming model that adaptively adjusts the hyperparameters of the loss function to further enhance model performance. An enhanced water flow optimizer (EWFO) algorithm is applied to optimize the upper-level variables in the framework. In this algorithm, an opposition-based learning technique is used in the initialization phase to boost the quality of the initial population, and a nonlinear convergence factor is added to the laminar flow operator to improve the diversity of the population and expand the search range. The experiments show the competitive performance of the method in solving stochastic partial differential equations.
Article
Full-text available
A scheme for the stochastization of systems of ordinary differential equations (ODEs) based on Itô calculus is presented in this article. Using the presented techniques, a system of stochastic differential equations (SDEs) can be constructed in such a way that eliminating the stochastic component yields the original system of ODEs. One of the main benefits of this scheme is the ability to construct analytical solutions to SDEs with the use of special vector-valued functions; this differs significantly from the randomization approach, which can only be applied via numerical integration. Moreover, using the presented techniques, systems of ODEs and SDEs can be constructed from a given diffusion function, which governs the uncertainty of a particular process.
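The relation between an SDE and its underlying ODE described in the abstract can be illustrated numerically. This is a generic Euler–Maruyama sketch, not the article's analytical construction: the drift is the ODE right-hand side, and setting the diffusion to zero recovers the deterministic trajectory. The drift, step size, and horizon are assumed values.

```python
import math
import random

# Euler-Maruyama integration of dX = f(X) dt + g(X) dW. With g = 0 the
# stochastic component vanishes and the scheme reduces to the Euler method
# for the ODE x' = f(x).
def euler_maruyama(f, g, x0, dt, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + f(x) * dt + g(x) * dw
    return x

f = lambda x: -0.5 * x    # ODE x' = -x/2, exact solution x0 * exp(-t/2)
rng = random.Random(42)
x_det = euler_maruyama(f, lambda x: 0.0, 1.0, 1e-3, 1000, rng)   # g = 0
```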
Article
Numerous stochastic methods for accounting for uncertainties in mechanical systems have been developed to study the randomness of input parameters. In this study, the non-intrusive polynomial chaos (NIPC) approach was applied to determine the dynamic responses of a mass-elastica system, which is analogous to the pole vault model. The approach accounts for uncertainties in the system by focusing on specific non-dimensional parameters, namely the non-dimensional velocity \(v_0\) and the deflection of the elastica due to the weight of the mass, w. The simulation results were obtained in Python with the Chaospy package and were compared to those from a Monte Carlo (MC) reference approach. The results showed excellent agreement with the solutions from the MC reference approach and demonstrated a significant influence of the uncertain parameters on pole vault performance. The NIPC method was found to be an effective choice for reducing problem complexity and computational time, and for obtaining dynamic responses for uncertainty quantification of the pole vault model.
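A minimal non-intrusive polynomial chaos computation of this kind can be sketched without the full model (the scalar test function, sample size, and polynomial degree below are assumptions, and plain NumPy is used instead of Chaospy): the model is sampled at random input points, a probabilists' Hermite expansion is fitted by least squares, and the coefficients give the mean and variance in closed form.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(3)
model = lambda xi: np.exp(0.3 * xi)   # hypothetical stand-in for the simulation

xi = rng.standard_normal(400)         # non-intrusive samples of the N(0,1) input
y = model(xi)
deg = 8
c = He.hermefit(xi, y, deg)           # least-squares fit in the He_k basis

# Orthogonality of He_k under N(0, 1) gives E[He_k^2] = k!, so the PCE yields
# mean = c_0 and variance = sum over k >= 1 of c_k^2 * k!.
mean_pce = c[0]
var_pce = sum(c[k] ** 2 * factorial(k) for k in range(1, deg + 1))
```

For this log-normal test function the exact mean and variance are known, so the quality of the regression-based NIPC estimates can be checked directly; an intrusive or quadrature-based variant would differ only in how the coefficients c_k are computed.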