
Recent advances and applications of surrogate models for finite element method computations: a review

Authors: Jakub Kudela · Radomil Matousek
Institute of Automation and Computer Science, Brno University of Technology, Technicka 2, Brno 616 00, Czech Republic
Correspondence: Jakub.Kudela@vutbr.cz

Abstract

The utilization of surrogate models to approximate complex systems has recently gained increased popularity. Because of their capability to deal with black-box problems and their lower computational requirements, surrogates have been successfully utilized by researchers in various engineering and scientific fields. An efficient use of surrogates can bring considerable savings in computational resources and time. Since the literature on surrogate modelling encompasses a large variety of approaches, the appropriate choice of a surrogate remains a challenging task. This review discusses significant publications where surrogate modelling for finite element method-based computations was utilized. We familiarize the reader with the subject, explain the function of surrogate modelling, sampling and model validation procedures, and give a description of the different surrogate types. We then discuss the main categories where surrogate models are used: prediction, sensitivity analysis, uncertainty quantification, and surrogate-assisted optimization, and give a detailed account of recent advances and applications. We review the most widely used and recently developed software tools that are used to apply the discussed techniques with ease. Based on a literature review of 180 papers related to surrogate modelling, we discuss major research trends, gaps, and practical recommendations. As the utilization of surrogate models grows in popularity, this review can function as a guide that makes surrogate modelling more accessible.
Soft Computing (2022) 26:13709–13733
https://doi.org/10.1007/s00500-022-07362-8
APPLICATION OF SOFT COMPUTING
Accepted: 7 June 2022 / Published online: 17 July 2022
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022
Keywords: Surrogate model · Surrogate-assisted optimization · Sensitivity analysis · Uncertainty quantification · Finite element method
1 Introduction
The methods of numerical analysis, such as the finite element method (FEM), computational fluid dynamics (CFD), or structural finite element analysis (FEA), are routinely employed to analyze complex systems and structures for which an analytical solution is difficult or impossible to obtain. Such analyses are becoming ubiquitous in evaluating and optimizing the design, reliability, and maintenance of complex systems and structures across a broad range of industrial applications, including aerospace (Yan et al. 2020), automotive (Berthelson et al. 2021), architecture (Westermann and Evins 2019), biomedical engineering (Putra et al. 2018), chemical engineering (Bhosekar and Ierapetritou 2018), and many others.
However, these computer simulations tend to be very computationally demanding because of their intrinsically detailed description of the studied systems. Engineering problems based on such computer models can also require thousands of simulations to construct a suitable solution, demanding a large computational budget (Alizadeh et al. 2020). Additionally, because of their high fidelity, various issues in performing computer simulations can occur regardless of how much computing power is available. Even the recent advances in parallel and pooling (Kudela and Popela 2020) computing methods, which carry out many calculations or processes simultaneously, do not seem to be very helpful (Grama et al. 2003).

The principal purpose of using surrogate models (or metamodels) is to approximately emulate expensive-to-evaluate high-fidelity models, such as a FEM-based model, with computationally less costly statistical models. These surrogates are constructed from a relatively small number of simulation inputs and outputs, which are computed using the high-fidelity, expensive computations.
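To make this concrete, the sketch below illustrates the generic surrogate-modelling loop in Python: sample a small design of experiments, evaluate the expensive model only there, and fit a cheap approximation. The `expensive_simulation` function is a hypothetical stand-in for a FEM solve, and the RBF interpolant is only one of the surrogate types discussed in the review.

```python
# Minimal sketch of the surrogate-modelling loop, assuming a hypothetical
# stand-in `expensive_simulation` for a costly FEM run.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Placeholder for a high-fidelity FEM solve returning a scalar response.
    return np.sin(3 * x[0]) + x[1] ** 2

# 1) Draw a small space-filling design of experiments in the input space.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=30), l_bounds=[-1, -1], u_bounds=[1, 1])

# 2) Run the expensive model only at those 30 points.
y = np.array([expensive_simulation(x) for x in X])

# 3) Fit a cheap surrogate to the input/output pairs.
surrogate = RBFInterpolator(X, y)

# 4) Predict new responses at negligible cost.
print(surrogate(np.array([[0.2, -0.5]])))
```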
... In addition to well-established surrogate modelling approaches, several supervised machine-learning-based techniques are available and under constant research [13], including in the fields of structural dynamics [14,15], acoustics [16], and vibroacoustics [17,18]. Their advantage lies in modelling highly complex systems, which is often too challenging for the established methods. ...
... Besides the following short introduction, a more in-depth overview of Gaussian Process regression for machine learning applications can be found in [19]. GP regression is sometimes also referred to as Kriging [13]. ...
Article
In vehicle Noise Vibration Harshness (NVH) development, vibroacoustic simulations with Finite Element (FE) models are a common technique. The computational costs for these calculations are steadily rising due to more detailed modelling and higher frequency ranges. At the same time, the need for multiple evaluations of the same model with different input parameters, e.g., for uncertainty quantification, optimization, or robustness investigation, is also increasing. Therefore, it is crucial to reduce the computational costs dramatically in these cases. A common technique is to use surrogate models that replace the computationally intensive FE model to perform repeated evaluations with varying parameters. Several different methods in this area are well established, but with the continuous advancements in the field of machine learning, interesting new methods like Gaussian Process (GP) regression arise as promising approaches. In Gaussian Process regression there are important parameters that strongly influence the prediction accuracy of the GP model, namely the length-scale, the variance, and, most of all, the kernel function. In this contribution these parameters and their influence on the results are evaluated, with a focus on vibroacoustic simulations. For the kernel function, four different types (stationary, nonstationary, spectral, and deep learning kernels, respectively) are under investigation. As a result, it can be shown that their performance correlates with the data complexity. Further investigations focus on the frequency as an input parameter and the influence of the number of training samples. In these evaluations there is an interesting difference between a simple academic model and a body-in-white model. The underlying effects, such as damping, system complexity, uncertainty, and load case, are discussed in detail. Finally, a recommendation for using GP as a surrogate model for vibroacoustic simulations is given.
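As a rough illustration of the kernel study this abstract describes, the hedged sketch below fits scikit-learn GP regressors with two stationary kernel families to a synthetic 1-D response. The data, kernel choices, and initial length-scales are illustrative assumptions, not the paper's vibroacoustic models.

```python
# Hedged sketch: comparing GP kernels and length-scales on a synthetic
# frequency-response-like target (all data here is made up).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, ConstantKernel

rng = np.random.default_rng(0)
f = np.linspace(0.0, 1.0, 40)[:, None]          # normalized "frequency" input
y = np.sin(12 * f).ravel() + 0.05 * rng.standard_normal(40)

# Stationary kernels with tunable length-scale and variance.
kernels = {
    "RBF":    ConstantKernel(1.0) * RBF(length_scale=0.1),
    "Matern": ConstantKernel(1.0) * Matern(length_scale=0.1, nu=1.5),
}
for name, kernel in kernels.items():
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True)
    gp.fit(f, y)
    # Length-scale and variance are optimized during fit; prediction accuracy
    # can then be compared across kernel families.
    print(name, gp.kernel_, gp.score(f, y))
```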
... However, for problems with large, complex structures, thousands of simulations are often required to construct a suitable solution due to the high fidelity requirements of their FE models [15]. Surrogate models are usually introduced in engineering calculations to approximate the structural behavior and reduce the computational burden, thus effectively reducing the computational complexity in finite element analysis and making analysis and design more efficient [16]. ...
Article
Full-text available
There are deviations between the radio telescope antenna finite element (FE) model, established at the design stage, and the actual working antenna structure. The original FE model cannot accurately describe the antenna structure's deformation characteristics under environmental load, thereby compromising the accuracy of the active structural compensation. This article proposes an antenna FE model updating method based on parameter optimization with a surrogate model. The updating method updates the modulus of elasticity parameters of different components of the antenna backup structure (BUS) to obtain finite element analysis (FEA) results consistent with the actual measurement of the antenna reflector surface shape. The surrogate model, based on the multiquadric radial basis function (RBF), improves the computational efficiency of FE model updating, replacing the complex and time-consuming finite element analysis and calculation process. This method is implemented on a radio telescope antenna with an aperture of 25 m. The results show a significant reduction in the mismatch between the antenna and the updated FE model. This method's calculation time is significantly reduced compared with the updating method without the surrogate model, with the RBF surrogate model taking 1% of the time of the finite element model in the FEA calculations. The proposed method can improve the antenna FE model's calculation accuracy and significantly enhance the efficiency of FE model updating calculations. Thus, it can offer a reference for antenna engineering practice.
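A minimal sketch of the general idea (not the paper's implementation) follows: a multiquadric RBF surrogate is trained on a small number of simulated runs and then optimized in place of the FE model. The `fe_surface_rms` misfit function and the moduli bounds are hypothetical placeholders.

```python
# Hedged sketch: multiquadric RBF surrogate replacing FEA inside a
# model-updating loop. `fe_surface_rms` (component moduli in GPa -> misfit
# against a measured reflector shape) is a hypothetical placeholder.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def fe_surface_rms(e_gpa):
    # Placeholder for one full FEA run (synthetic quadratic misfit).
    target = np.array([210.0, 195.0, 205.0])    # "true" moduli, GPa
    return float(np.sum((e_gpa - target) ** 2))

lb, ub = np.full(3, 180.0), np.full(3, 230.0)
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(60), lb, ub)       # only 60 FEA evaluations
y = np.array([fe_surface_rms(x) for x in X])

surrogate = RBFInterpolator(X, y, kernel="multiquadric", epsilon=10.0)

# Optimize the cheap surrogate instead of rerunning the FE model.
res = minimize(lambda e: float(surrogate(e[None, :])[0]),
               x0=np.full(3, 205.0), bounds=list(zip(lb, ub)))
print("updated moduli (GPa):", res.x)
```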
... The surrogate model refers to the computational model that predicts unknown experimental results based on existing datasets. Commonly used surrogate models include response surface, linear regression, radial basis function (RBF), Kriging, support vector regression (SVR), and artificial neural network (ANN) [22]. Multilayer perceptron (MLP) is a feedforward ANN [23] and has been shown to have strong predictive ability in some studies. ...
Article
Full-text available
The utilization of liquid-cooled plates has become increasingly prevalent within the thermal management of batteries for new energy vehicles. Using Tesla valves as internal flow channels of liquid-cooled plates can improve heat dissipation characteristics. However, conventional Tesla valve flow channels frequently experience challenges such as inconsistencies in heat dissipation and unacceptably high levels of pressure loss. In light of this, this paper proposes a new type of Tesla valve with partitions, which is used as the internal channel of a liquid-cooled plate. Its purpose is to overcome the shortcomings of existing flow channels. Under working conditions with a Reynolds number equal to 1000, a neural network prediction plus NSGA-II multi-objective optimization method is used to optimize the channel's structural parameters. The objective is to identify the structural configuration that exhibits the greatest Nusselt number while simultaneously exhibiting the lowest Fanning friction factor. The variables considered are the half partition thickness H, the partition length L, and the fillet radius R. The study revealed that the parameter combination of H = 0.25 mm, R = 1.253 mm, and L = 0.768 mm demonstrated the best performance. The Fanning friction factor of the optimized flow channel is substantially reduced compared to the reference channel, by approximately 16.4%. However, the Nusselt number is not noticeably increased, rising by only 0.9%. This indicates that the optimized structure can notably reduce the fluid's friction resistance and pressure loss while slightly enhancing the heat dissipation characteristics.
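The surrogate-plus-NSGA-II pattern the abstract describes can be sketched with pymoo as below. The two analytic objective functions are invented stand-ins for the trained neural-network predictors of the Nusselt number and Fanning friction factor, and all coefficients and bounds are assumptions for illustration.

```python
# Hedged sketch of surrogate-assisted NSGA-II with pymoo; the objectives are
# hypothetical stand-ins for trained neural-network surrogates.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class ChannelProblem(ElementwiseProblem):
    def __init__(self):
        # Variables: half partition thickness H, fillet radius R, length L
        # (mm); the bounds are illustrative assumptions.
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([0.1, 0.5, 0.3]),
                         xu=np.array([0.5, 2.0, 1.5]))

    def _evaluate(self, x, out, *args, **kwargs):
        H, R, L = x
        nu = 50 + 10 * H - 2 * (R - 1.25) ** 2     # hypothetical surrogate
        f = 0.05 + 0.02 * L + 0.01 / (H + 0.1)     # hypothetical surrogate
        out["F"] = [-nu, f]                         # pymoo minimizes both

res = minimize(ChannelProblem(), NSGA2(pop_size=50), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F[:5])                                    # Pareto-front samples
```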
... Nevertheless, owing to the intricate nature of groundwater systems, deriving analytical solutions for these partial differential equations often proves infeasible (Han 2022). Therefore, numerical methods, such as the finite difference and finite element methods, are commonly employed for resolution (Kudela and Matousek 2022). Both methods necessitate the division of the research area into multiple grids or elements, with the differential equations or their approximations calculated on these grids, respectively. ...
Article
Full-text available
Groundwater modeling is essential for effective water resource management. However, the heterogeneous distribution of hydrogeological parameters, such as hydraulic conductivity (K), significantly impacts the accuracy and efficiency of simulations. This study investigates the accuracy and computational efficiency of K parameter inversion across varying dimensions. Heterogeneous logarithmic random K (lnK) fields were generated using the Karhunen-Loève Expansion. Three inversion algorithms, namely the Genetic Algorithm (GA), Markov Chain Monte Carlo (MCMC), and the Ensemble Smoother (ES), were evaluated in conjunction with a Kriging surrogate model. These algorithms were used to invert parameters using different numbers of Inversion Target Features (ITFs) as prediction targets. Results indicate that ES consistently outperformed GA and MCMC in terms of inversion accuracy and computational efficiency across all ITF scenarios (7, 12, 34, 48, and 71 ITFs). Sensitivity analysis revealed that selecting parameters with higher total-order indices for inversion is crucial, as a small subset of ITFs often accounts for a majority of the sensitivity. While increasing the number of ITFs captures greater spatial variability in lnK, high-dimensional cases can lead to less responsive output heads. Furthermore, inadequate grid density may not adequately represent fine-scale lnK distributions. This study provides valuable insights into parameter inversion within heterogeneous groundwater models, furthering our understanding of these complex systems.
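The total-order screening step mentioned here can be illustrated with SALib's Sobol analysis run on a cheap surrogate. The parameter names, bounds, and the stand-in model below are assumptions, not the study's groundwater setup.

```python
# Hedged sketch: Sobol total-order indices computed on a surrogate to decide
# which parameters to keep for inversion. The model is a toy stand-in.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["klv1", "klv2", "klv3", "klv4"],   # KLE coefficients (hypothetical)
    "bounds": [[-3, 3]] * 4,
}

def surrogate_head(X):
    # Placeholder for the Kriging-predicted output head.
    return X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 2] * X[:, 3]

X = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, surrogate_head(X))
# Parameters with high total-order indices ST are retained for inversion.
print(dict(zip(problem["names"], Si["ST"])))
```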
... A surrogate function f̂(x) for the function f: ℝ^d → ℝ can be envisaged, where x = (x_1, x_2, x_3, …, x_d) represents the input vector, d denotes the number of dimensions of the problem, and y is the single output. The input vector is subject to known lower and upper bounds, x_L ≤ x ≤ x_U [13,14]. In structural engineering, for example, surrogate models consist of a function that makes it easier to translate EDPs into structural properties and/or hazard parameters. ...
Article
Full-text available
For structural engineers, existing surrogate models of buildings present challenges due to inadequate datasets, exclusion of significant input variables impacting nonlinear building response, and failure to consider uncertainties associated with input parameters. Moreover, there are no surrogate models for the prediction of both pushover and nonlinear time history analysis (NLTHA) outputs. To overcome these challenges, the present study proposes a novel framework for surrogate modelling of steel structures, considering crucial structural factors impacting engineering demand parameters (EDPs). The first phase involves the development of a process by which 30,000 random steel special moment resisting frames (SMRFs) for low to high-rise buildings are generated, considering the material and geometrical uncertainties embedded in the design of structures. In the second phase, a surrogate model is developed to predict the seismic EDPs of SMRFs when exposed to various earthquake levels. This is accomplished by leveraging the results obtained from phase one. Moreover, separate surrogate models are developed for the prediction of SMRFs' essential pushover parameters. Various machine learning (ML) methods are examined, and the outcomes are presented as user-friendly GUI tools. The findings highlighted the substantial influence of pushover parameters as well as beams and columns' plastic hinges properties on the prediction of NLTHA, factors that have been overlooked in prior studies. Moreover, CatBoost has been acknowledged as the superior ML technique for predicting both pushover and NLTHA parameters for all buildings. This framework offers engineers the ability to estimate building responses without the necessity of conducting NLTHA, pushover, or even modal analysis which is computationally intensive.
... However, in contemporary scientific research, the consensus is that numerical modeling is a resource-intensive approach, mainly when dealing with complex phenomena. Therefore, surrogate models emerge as an attractive solution to speed up optimization processes [10,11]. Surrogate models can be built employing different techniques; among the most popular are Genetic Algorithms (GA) [12], Proper Generalized Decomposition (PGD) [13,14], sparse-PGD [15], and Neural Networks [16]. ...
Article
Full-text available
This study aims to provide precise predictions for the compression of reinforced polymers during the Sheet Molding Compound (SMC) process, ensuring the attainment of a predefined structure while preventing material overflow during the process. The primary challenge revolves around identifying the optimal initial shape to prevent material rebound during the process. To confront this issue, a numerical model is utilized, faithfully simulating the SMC process and forming the foundation for our investigations. Furthermore, to optimize the pre-fill stage, a surrogate model is proposed to enhance modeling efficiency, and an inverse analysis method is then applied. This approach to minimizing material rebound during the SMC process results in a reliable metamodel that accurately predicts an initial mass shape at a low computational cost, thus ensuring the squeezed material fits the mold shape.
... Regression models describing the effects of the variable combinations on the system behavior were generated and leveraged to elaborate design guidelines. These regressions were used as surrogate models for the Finite Element (FE) simulations [51]. This approach is widely used to conduct analyses on multi-parameter models at a reduced computational cost without requiring time-and resource-consuming simulations [52][53][54]. ...
Article
Full-text available
Auxetics are a class of materials and metamaterials with a negative Poisson’s ratio (ν) and have gained tremendous popularity over the last three decades. Many studies have focused on characterizing designs that allow obtaining a negative ν. However, some open issues remain concerning understanding the auxetic behavior in operational conditions. Studies have been centered on analyzing the response of specific auxetic topologies instead of treating auxeticity as a property to be analyzed in a well-defined structural context. This study aims to contribute to the investigation of auxetic materials with a structural application, focusing on maximizing performance. The field of application of auxetics for designing inserts was selected and a model of a nail-cavity system was created to determine the effects of different design choices on the system behavior by exploring relationships between selected parameters and the auxetic insert behavior. The exploration combines finite element modeling analyses with their surrogate models generated by supervised learning algorithms. This approach allows for exploring the system’s behavior in detail, thus demonstrating the potential effectiveness of auxetics when used for such applications. A list of design guidelines is elaborated to support the exploitation of auxetics in nail-cavity systems.
...[1] In scenarios requiring hundreds or even thousands of calculation iterations, such as design optimization, sensitivity analysis, and uncertainty assessment, the utilization of high-fidelity numerical models is challenging.[2] For example, general-purpose FE software packages are extensively used in the seismic design of building structures because of their ability to formulate numerical models for each structural component, thus yielding precise and comprehensive outcomes. However, the computational time for a single design scheme typically extends to several minutes or longer, which renders these software packages inadequate for satisfying design optimization demands. ...
Article
To address the issue of costly computational expenditure related to high-fidelity numerical models, surrogate models have been widely used in various engineering tasks, including design optimization. Despite the successful application of the existing surrogate models, physics-based models depend largely on simplifications and assumptions, which render parameter calibration challenging, whereas data-driven models require substantial data to reach their full potential, with their performance often being constrained in tasks where obtaining massive data is difficult. In this study, a hybrid surrogate model is proposed that combines physics-based and data-driven models to rapidly estimate building seismic responses. The application of this model is exemplified through the effective estimation of inter-story drift ratios (IDRs), a critical factor in shear-wall structure design. Initially, a data augmentation technique and a parametric modeling procedure are introduced to significantly enhance dataset diversity. Subsequently, a task decomposition strategy is proposed to effectively integrate a data-driven graph neural network (GNN) and a physics-based flexural-shear model. Additionally, the output layer and the loss function of the GNN are modified to enhance the estimation accuracy by eliminating fundamental errors. Results of numerical experiments indicate that the proposed hybrid model can complete IDR estimations in an average time of 0.56 s, with a mean absolute percentage error of 12.7%. This performance significantly surpasses that of existing purely data-driven and physics-based models. A case study shows that the efficiency of the proposed hybrid model is approximately 100 times greater than that of conventional finite element software. This enables an accurate assessment of design compliance with code requirements. The results of this study can be applied to the design optimization of seismic-resistant building structures.
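One common way to realize such a physics/data hybrid, shown as a hedged sketch below, is residual learning: a data-driven regressor is trained on the discrepancy between a cheap physics estimate and the high-fidelity response. This illustrates the coupling idea only; the paper's actual method pairs a graph neural network with a flexural-shear model and a modified loss function.

```python
# Hedged sketch of a physics + data-driven hybrid via residual learning.
# `physics_idr` and the feature set are hypothetical; data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def physics_idr(x):
    # Simplified physics-based estimate (hypothetical flexural-shear proxy).
    return 0.01 * x[:, 0] / (x[:, 1] + 1.0)

# x: [spectral acceleration, wall stiffness index] (hypothetical features)
X = rng.uniform([0.1, 0.5], [2.0, 5.0], size=(500, 2))
idr_true = physics_idr(X) * (1 + 0.3 * np.sin(4 * X[:, 0]))  # synthetic truth

# Train the data-driven part on the physics model's residual.
residual = idr_true - physics_idr(X)
correction = GradientBoostingRegressor().fit(X, residual)

def hybrid_idr(x):
    # Hybrid prediction = physics estimate + learned correction.
    return physics_idr(x) + correction.predict(x)

print(hybrid_idr(X[:3]), idr_true[:3])
```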
Article
Full-text available
In-stent restenosis is a recurrence of coronary artery narrowing due to vascular injury caused by balloon dilation and stent placement. It may lead to the relapse of angina symptoms or to an acute coronary syndrome. An uncertainty quantification of a model for in-stent restenosis with four uncertain parameters (endothelium regeneration time, the threshold strain for smooth muscle cell bond breaking, blood flow velocity and the percentage of fenestration in the internal elastic lamina) is presented. Two quantities of interest were studied, namely the average cross-sectional area and the maximum relative area loss in a vessel. Owing to the high computational cost required for uncertainty quantification, a surrogate model, based on Gaussian process regression with proper orthogonal decomposition, was developed and subsequently used for model response evaluation in the uncertainty quantification. A detailed analysis of the uncertainty propagation is presented. Around 11% and 16% uncertainty is observed on the two quantities of interest, respectively, and the uncertainty estimates show that a higher fenestration mainly determines the uncertainty in the neointimal growth at the initial stage of the process. The uncertainties in blood flow velocity and endothelium regeneration time mainly determine the uncertainty in the quantities of interest at the later, clinically relevant stages of the restenosis process.
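The GP-with-POD construction used in this study can be sketched generically: compress the field snapshots with an SVD-based proper orthogonal decomposition and fit one GP per retained mode. The snapshot data below is synthetic, and the mode count is an arbitrary choice.

```python
# Hedged sketch: GP regression + proper orthogonal decomposition (POD) for a
# field-valued model output. All data here is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, size=(40, 4))                # 4 uncertain inputs
grid = np.linspace(0, 1, 200)
# Synthetic "cross-sectional area over time" snapshots (40 runs x 200 points).
Y = np.array([np.exp(-t[0] * grid) + t[1] * grid for t in theta])

# POD: keep the first k modes of the centered snapshot matrix.
Ymean = Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
k = 3
coeffs = (Y - Ymean) @ Vt[:k].T                        # modal coefficients

# One GP per POD coefficient maps inputs -> reduced coordinates.
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
       .fit(theta, coeffs[:, i]) for i in range(k)]

def predict_field(t_new):
    c = np.array([gp.predict(t_new) for gp in gps]).T  # (n, k)
    return Ymean + c @ Vt[:k]                          # back to full field

print(predict_field(theta[:2]).shape)                  # (2, 200)
```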
Article
Full-text available
Uncertainty Quantification (UQ) is a booming discipline for complex computational models based on the analysis of robustness, reliability, and credibility. UQ analysis for nonlinear crash models with high-dimensional outputs presents important challenges. In crashworthiness, nonlinear structural behaviours with multiple hidden modes require expensive models (18 h for a single run). Surrogate models (metamodels) allow substituting the full-order model, introducing a response surface for a reduced training set of numerical experiments. Moreover, uncertain inputs and a large number of degrees of freedom result in high-dimensional problems, which leads to a bottleneck that limits the computational efficiency of the metamodels. Kernel Principal Component Analysis (kPCA) is a multidimensionality reduction technique for nonlinear problems, with the advantage of capturing the most relevant information from the response and improving the efficiency of the metamodel, the aim being to compute the minimum number of samples with the full-order model. The proposed methodology is tested with a practical industrial problem that arises from the automotive industry.
Article
Full-text available
Surrogate models are widely used to mimic complex systems to reduce the high experimental cost. As systems become high-dimensional and complex, there is an increasing demand for building relatively simplified surrogate models that capture key variables and represent complex interactions. This study proposes the annealing combinable Gaussian process, an integrated solution for identifying key variables and constructing a high-precision surrogate model. Firstly, to identify redundant variables, this study optimises variable selection with a modified simulated annealing algorithm over the complete model space. This process is called the outer loop. Secondly, to improve the model accuracy and structure, this study constructs a non-parametric Gaussian process model with additive or multiplicative kernels, effectively extracting high-order interactions. Simultaneously, a Markov chain is proposed to sample the model space. The conditional entropy is used as the scoring rule for the simulated annealing algorithm of the outer loop. This is named the inner loop. This study also discusses the rationality of conditional entropy as a criterion. The annealing combinable Gaussian process performs well in various application scenarios, including regression and classification problems. Finally, our method is implemented with a particle-in-cell simulation to find the key physics parameters.
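The kernel-composition idea at the core of this approach (additive and multiplicative combinations of base kernels) can be demonstrated with scikit-learn's kernel algebra, as in the sketch below. Note that these base kernels act on all input dimensions, whereas the paper restricts kernels to selected variables, so this only demonstrates the composition mechanics.

```python
# Sketch of additive and multiplicative kernel composition for GP regression
# using scikit-learn kernel algebra; data and length-scales are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 3))
y = np.sin(6 * X[:, 0]) + X[:, 1] * X[:, 2]      # additive part + interaction

# Sums capture additive structure, products capture interactions.
composite = (RBF(length_scale=0.2)
             + RBF(length_scale=0.5) * RBF(length_scale=1.0))
gp = GaussianProcessRegressor(kernel=composite, alpha=1e-4).fit(X, y)
print(gp.score(X, y))                             # in-sample fit quality
```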
Article
Full-text available
The Quadratic Assignment Problem (QAP) is one of the classical combinatorial optimization problems and is known for its diverse applications. The QAP is an NP-hard optimization problem which attracts the use of heuristic or metaheuristic algorithms that can find quality solutions in an acceptable computation time. On the other hand, there is quite a broad spectrum of mathematical programming techniques that were developed for finding the lower bounds for the QAP. This paper presents a fusion of the two approaches whereby the solutions from the computations of the lower bounds are used as the starting points for a metaheuristic, called HC12, which is implemented on a GPU CUDA platform. We perform extensive computational experiments that demonstrate that the use of these lower bounding techniques for the construction of the starting points has a significant impact on the quality of the resulting solutions.
Article
Full-text available
Surrogate-based finite element (FE) model updating of two welded plates is performed using experimental modal parameters in an optimization framework based on a multi-objective genetic algorithm. Since there are some uncertainties in the structure, sensitivity analysis is conducted to investigate the effect of different geometric and material properties on the modal parameters. Moreover, because the welding process introduces some residual stresses to the plates, the effect of using the pre-stressed modal analysis in the updating process is also investigated. Based on the obtained results, considering the pre-stressed condition reduces the errors of the updated FE model.
Article
Full-text available
Despite the extensive use of mortar materials in construction over the last decades, there is not yet a robust quantitative method available in the literature which can reliably predict their strength based on the mix components. This limitation is attributed to the highly nonlinear relation between the mortar's compressive strength and the mix components. In this paper, the application of artificial intelligence techniques for predicting the compressive strength of mortars is investigated. Specifically, Levenberg–Marquardt, biogeography-based optimization, and invasive weed optimization algorithms are used for this purpose (based on experimental data available in the literature). The comparison of the derived results with the experimental findings demonstrates the ability of artificial intelligence techniques to approximate the compressive strength of mortars in a reliable and robust manner.
Article
Full-text available
Although metaheuristic optimization has become a common practice, new bio-inspired algorithms often suffer from an a priori ill reputation. One of the reasons is common bad practice in metaheuristic proposals. It is essential to pay attention to the quality of conducted experiments, especially when comparing several algorithms among themselves. The comparisons should be fair and unbiased. This paper points to the importance of proper initial parameter configurations of the compared algorithms. We highlight the performance differences with several popular and recommended parameter configurations. Even though the parameter selection was mostly based on comprehensive tuning experiments, the algorithms' performance was surprisingly inconsistent given various parameter settings. Based on the presented evidence, we conclude that paying attention to a metaheuristic algorithm's parameter tuning should be an integral part of the development and testing processes.
Article
Automated bioacoustics analysis is being increasingly used to describe environmental phenomena such as species abundance and biodiversity. Within this research area, many algorithms have been proposed. These achieve different sub-objectives within bioacoustics processes and can be combined to form workflows. However, these algorithms are typically evaluated in a limited number of scenarios and are rarely evaluated with different combinations of other tasks. This can result in workflows that are not well optimised to serve a given scenario, particularly under resource and time constraints, which ultimately leads to inaccurate bioacoustics analyses. This work examines the problem of bioacoustics workflow construction by searching and ordering combinations of tasks to determine which produce the most accurate output while remaining under user-defined time constraints. Workflow construction is investigated within a scenario where species need to be classified within synthetically generated soundscapes with different numbers of species, noise levels, and densities of species. A search algorithm is created that applies Particle Swarm Optimisation (PSO) to a neural network-based surrogate model. This algorithm is used to efficiently search for candidate workflow structures. This is compared to a random search, a genetic algorithm, and a PSO algorithm without the surrogate model, as well as existing workflows based on previous research. It is found that for all scenarios, the surrogate model-based search method can quickly find effective workflows in a low number of searches. Furthermore, it is found that workflow effectiveness varies depending on the scenarios and recordings used.
Article
A finite element (FE)-guided mathematical surrogate modeling methodology is presented for evaluating relative injury trends across varied vehicular impact conditions. The prevalence of crash-induced injuries necessitates the quantification of the human body's response to impacts. FE modeling is often used for crash analyses but requires time and computational cost. However, surrogate modeling can predict injury trends between the FE data points, requiring fewer FE simulations to evaluate the complete testing range. To determine the viability of this methodology for injury assessment, crash-induced occupant head injury criterion (HIC15) trends were predicted from Kriging models across varied impact velocities (10–45 mph; 16.1–72.4 km/h), locations (near side, far side, front, and rear), and angles (−45 to 45°) and compared to previously published data. These response trends were analyzed to locate high-risk target regions. Impact velocity and location were the most influential factors, with HIC15 increasing alongside the velocity and proximity to the driver. The impact angle was dependent on the location and was minimally influential, often producing greater HIC15 under oblique angles. These model-based head injury trends were consistent with previously published data, demonstrating great promise for the proposed methodology, which provides effective and efficient quantification of human response across a wide variety of car crash scenarios simultaneously.
Graphical abstract: This study presents a finite element-guided mathematical surrogate modeling methodology to evaluate occupant injury response trends for a wide range of impact velocities (10–45 mph), locations, and angles (−45 to 45°). Head injury response trends were predicted and compared to previously published data to assess the efficacy of the methodology for assessing occupant response to variations in impact conditions. Velocity and location were the most influential factors on the head injury response, with the risk increasing alongside greater impact velocity and locational proximity to the driver. Additionally, the angle of impact variable was dependent on the location and, thus, had minimal independent influence on the head injury risk.
Article
Uncertainty Quantification (UQ) is a key discipline for computational modeling of complex systems, enhancing reliability of engineering simulations. In crashworthiness, having an accurate assessment of the behavior of the model uncertainty allows reducing the number of prototypes and associated costs. Carrying out UQ in this framework is especially challenging because it requires highly expensive simulations. In this context, surrogate models (metamodels) allow drastically reducing the computational cost of Monte Carlo process. Different techniques to describe the metamodel are considered, Ordinary Kriging, Polynomial Response Surfaces and a novel strategy (based on Proper Generalized Decomposition) denoted by Separated Response Surface (SRS). A large number of uncertain input parameters may jeopardize the efficiency of the metamodels. Thus, previous to define a metamodel, kernel Principal Component Analysis (kPCA) is found to be effective to simplify the model outcome description. A benchmark crash test is used to show the efficiency of combining metamodels with kPCA.