Article

Uncertainties in parameter estimation: The optimal experiment design


Abstract

An extended maximum likelihood principle is described by which inverse solutions for problems with uncertainties in known model parameters can be treated. The method introduces the concept of an equivalent experimental noise which differs significantly from the measurement noise when the system response is sensitive to the uncertainties in the known parameters. When the equivalent noise varies smoothly and significantly over the range of uncertainty, the inverse solution tends to be independent of the uncertainties. By minimizing the equivalent noise through appropriate choice of a measurement protocol, an optimal experiment can be defined. Examples are given of designing an experiment for estimating conductivity and contact resistance when surface convective coefficients are uncertain.
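The equivalent-noise idea described above can be sketched numerically to first order: the measurement variance is inflated by the response's sensitivity to each uncertain known parameter. The model, sensitivity coefficient, and variance values below are purely illustrative, not taken from the paper.

```python
import math

def equivalent_noise_std(sigma_meas, sensitivities, sigma_params):
    """First-order 'equivalent noise': measurement variance inflated by
    the squared response sensitivity to each uncertain known parameter."""
    var = sigma_meas ** 2
    for dydb, sb in zip(sensitivities, sigma_params):
        var += (dydb * sb) ** 2
    return math.sqrt(var)

# Illustrative numbers: temperature response with sensitivity 2.0 K per
# unit change in an uncertain convective coefficient (std 0.3),
# measurement noise std 0.5 K; sigma_eq = sqrt(0.25 + 0.36).
sigma_eq = equivalent_noise_std(0.5, [2.0], [0.3])
```

When the sensitivity term dominates the measurement term, the equivalent noise (and hence the experiment design) is driven by the parameter uncertainty rather than the instrument.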


... The above-mentioned studies considered only the experimental noise, while the uncertainties that might have existed in the known model parameters of heat transfer models were not taken into account, i.e., the predictions were assumed to be strictly accurate. Only a few research studies considered both the experimental noise and the uncertainties of model parameters, and the uncertainties of the retrieved properties were estimated using the Cramér-Rao lower bound (CRB)-based method [15][16][17][18][19][20][21][22]. These works, relative to inverse heat transfer problems, mainly focused on retrieving the thermal conductivity, thermal resistance, and heat transfer coefficient by solving inverse heat conduction problems [15][16][17]. ...
... Only a few research studies considered both the experimental noise and the uncertainties of model parameters, and the uncertainties of the retrieved properties were estimated using the Cramér-Rao lower bound (CRB)-based method [15][16][17][18][19][20][21][22]. These works, relative to inverse heat transfer problems, mainly focused on retrieving the thermal conductivity, thermal resistance, and heat transfer coefficient by solving inverse heat conduction problems [15][16][17]. The other studies mainly investigated the uncertainty estimation and the selection of measurement modalities for the retrieval of the magnetic material properties of electromagnetic devices (EMD) [20][21][22]. ...
... The Cramér-Rao inequality theorem states that the covariance matrix of the deviation between the true and the estimated parameters is bounded from below by the inverse of the Fisher information matrix M [15][16][17] ...
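The bound stated in the excerpt can be sketched for Gaussian i.i.d. noise, where the Fisher matrix is M = JᵀJ/σ² and the variance bounds are the diagonal of M⁻¹. The Jacobian and noise level below are made-up illustrations, not values from the cited works.

```python
def fisher_information(J, sigma):
    """M = J^T J / sigma^2 for i.i.d. Gaussian noise; J is the Jacobian
    of the model outputs w.r.t. the p parameters (n x p nested lists)."""
    p = len(J[0])
    return [[sum(row[i] * row[j] for row in J) / sigma ** 2
             for j in range(p)] for i in range(p)]

def crb_2x2(M):
    """Cramér-Rao lower bounds (variances) for two parameters:
    diagonal of the inverse of a 2x2 Fisher information matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[1][1] / det, M[0][0] / det

# Hypothetical Jacobian: 3 measurements, 2 parameters, noise std 0.1.
J = [[1.0, 0.5], [0.8, 1.2], [0.2, 0.9]]
M = fisher_information(J, sigma=0.1)
bounds = crb_2x2(M)
```

Any unbiased estimator's parameter variances are bounded below by `bounds`; experiment design then amounts to shaping J so these bounds shrink.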
Article
Full-text available
The conductive and radiative properties of a participating medium can be estimated by solving an inverse problem that combines transient temperature measurements with a forward model predicting the coupled conductive and radiative heat transfer. The procedure, as well as the resulting parameter estimates, is affected not only by the measurement noise that is intrinsic to the experiment, but also by errors in the known model parameters used as necessary inputs to the forward problem. In the present study, a stochastic Cramér–Rao bound (sCRB)-based error analysis method was employed to estimate the errors of the retrieved conductive and radiative properties in an inverse identification process. The method took into account both the experimental noise and the errors of the uncertain model parameters. Moreover, we applied the method to design the optimal location of the temperature probe, and to predict the relative error contribution of different error sources in combined conductive–radiative inverse problems. The results show that the proposed methodology is able to determine, a priori, the errors of the retrieved parameters, and that the accuracy of the retrieved parameters can be improved by placing the temperature probe at an optimal sensor position.
... The traditional CRB method can be extended when dealing with stochastic uncertain model parameters b, see (Fadale et al., 1995a), (Emery et al., 2000) with an unbiased estimator u. The forward problem becomes now Φ(u, b)+e n . ...
... Substituting (Equation 25) in (Equation 21), the extended Fisher information matrix can be written as (Emery et al., 2000): ...
... According to (Emery et al., 2000), the effect of the trace term is very small, and can thus be neglected. The extended Fisher information matrix, M, can then be approximated by: ...
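A minimal numerical sketch of such an approximated extended Fisher matrix (trace term dropped) is given below for one sought parameter and one uncertain known parameter, using a diagonal equivalent covariance. All sensitivities and variances are illustrative assumptions, not values from the cited works.

```python
# Sketch of the approximated extended Fisher information matrix:
# the measurement variance of each reading is inflated by the squared
# sensitivity to the uncertain known parameter, and the Fisher "matrix"
# for a single sought parameter reduces to a scalar sum.

def extended_fisher(X, Xb, var_n, var_b):
    """X: sensitivities to the sought parameter; Xb: sensitivities to the
    uncertain known parameter with variance var_b; var_n: noise variance."""
    # Equivalent noise variance per measurement (diagonal approximation).
    var_eq = [var_n + (xb ** 2) * var_b for xb in Xb]
    # M = X^T diag(var_eq)^(-1) X  ->  a scalar for one parameter.
    return sum(x * x / v for x, v in zip(X, var_eq))

M = extended_fisher([1.0, 2.0], [0.5, 1.5], var_n=0.01, var_b=0.04)
```

Measurements whose response is highly sensitive to the uncertain parameter are automatically down-weighted, which is exactly why minimizing the equivalent noise steers the experiment design.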
... Although these materials are generally inhomogeneous and anisotropic, they are often treated as homogeneous, isotropic materials to which effective thermal parameters are ascribed [10,12]. A second trend in recent studies of inverse heat transfer problems is accounting for uncertainty (in the experimental results, in certain parameters entering the retrieval model, etc.) in order to evaluate the accuracy of the retrieved parameters, e.g., [14][15][16][17]. The present investigation is inspired by these two trends. ...
... In the preceding lines, the emphasis has been on the forward problem of the prediction of the temperature field T, assuming that all other ingredients of the configuration and of the thermal loading are known. Actually, the present investigation is more specifically concerned with the inverse problem (examples of which can be found in [1]; see also e.g., [2,3,9,5,14,8,19,20]) of the retrieval of k (or, more precisely, of k 1 ) from data relative to T (more precisely, T 0 ), assuming that all other parameters of the configuration as well as of the thermal loading (i.e., the nuisance parameters [15], also termed priors in [17,20]) are more or less well-known (i.e., uncertain to some degree). Note that in many publications, such as [8], the nuisance parameters are assumed to be perfectly well-known, with uncertainty existing only in the form of random noise in the measured data [4]. ...
... The chosen physical configuration (in which D 1 is an infinitely-long circular cylinder) will be shown to enable both the forward and inverse problems to be solved in explicit, exact manner so as to make possible a thorough analysis (somewhat in the spirit of [14,16]) of the influence of nuisance parameter uncertainty on retrieval accuracy. ...
Article
Full-text available
The retrieval, using external steady-state temperature field data, of the temperature-independent thermal conductivity of a z-independent cylindrical object subjected to an external, z-independent, heat load is studied. This inverse problem possesses an exact solution, both for continuous and discrete input data, whose properties, with respect to the various nuisance parameter uncertainties, are analyzed, first in a mathematical, and subsequently in a numerical manner for noiseless data.
... Although IHCPs have been extensively studied for different applications in the past (e.g., [11][12][13][14][15][16][17][18][19]), little work has been done for the inverse problem related to laser irradiation of a remote surface. The laser energy is delivered to the target surface in a periodic way because of laser or atmospheric variations. ...
... The sensitivity function q(x_L, t; d_k) is taken as the solution of Eqs. (15)–(18) at the measured position x_L and at time t, obtained by letting T_1 = d_k. The search step size β_k is determined by minimizing the function given by Eq. (20) with respect to β_k. ...
... For this case, the mathematical expression of the sensitivity problem is almost the same as that in Eqs. (15)–(18), except that the boundary condition at x = 0 is changed to ...
Article
Full-text available
In the high-energy laser heating of a target, the temperature and heat flux at the heated surface are not directly measurable, but they can be estimated by solving an inverse heat conduction problem based on the measured temperature and/or the heat flux at the accessible (back) surface. In this study, the one-dimensional inverse heat conduction problem in a finite slab is solved by the conjugate gradient method, using measured temperature and heat flux at the accessible (back) surface. Simulated measurement data are generated by solving a direct problem, in which the front surface of the slab is subjected to high-intensity periodic heating. Two cases are simulated and compared, with the temperature or heat flux at the heated front surface chosen as the unknown function to be recovered. The results show that the latter choice (i.e., choosing back surface heat flux as the unknown function) can give better estimation accuracy in the inverse heat conduction problem solution. The front surface temperature can be computed with high precision as a by-product of the inverse heat conduction problem algorithm. The robustness of this inverse heat conduction problem formulation is tested by different measurement errors and frequencies of the input periodic heating flux.
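The conjugate gradient machinery underlying such IHCP solvers can be sketched on a small symmetric positive-definite system Ax = b. The matrix below is an arbitrary illustration, not a discretized heat equation.

```python
def conjugate_gradient(A, b, tol=1e-10):
    """Solve Ax = b for symmetric positive-definite A (nested lists)
    by the classic conjugate gradient iteration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # initial residual, since x = 0
    p = r[:]            # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(10 * n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In an IHCP, each "matrix-vector product" is replaced by a solve of the sensitivity and adjoint problems, but the iteration skeleton is the same.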
... Another way is to use the extended maximum likelihood approach [9][10][11], which consists in using a modified objective function. The idea is to introduce an "equivalent noise" on measurements that includes measurement noise and uncertainties on known parameters. ...
... The most commonly used one is probably the "D-optimality criterion" which tends to reduce the effect of measurement noise on estimations. This optimization problem is often solved graphically, or by trial and error and/or by using a deterministic optimization algorithm [9,10,[14][15][16][17][18]. If the number of experimental variables increases, the experiment design can become a complex optimization problem. ...
... For example, a flash method applied to a thin sample could be more sensitive to the thickness uncertainty than to the measurement noise. The proposed method is quite similar to the one proposed by Emery et al. [9][10][11]. It consists in considering all known parameters as normally distributed random variables. ...
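Treating a known parameter as a normally distributed random variable can be sketched by simple Monte Carlo propagation: sample the parameter, push it through the model, and compare the induced spread with the measurement noise. The linear response model and numbers below are hypothetical.

```python
import math
import random

random.seed(0)

def response(b):
    """Illustrative linear response model with sensitivity dy/db = 3."""
    return 3.0 * b

# Uncertain 'known' parameter b ~ N(1.0, 0.1); the induced spread of
# the response should approach |dy/db| * sigma_b = 0.3.
samples = [response(random.gauss(1.0, 0.1)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
```

If `std` dominates the measurement noise, the parameter uncertainty, not the instrument, limits the accuracy of the retrieval, which is the situation the equivalent-noise formulation addresses.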
Article
Full-text available
This numerical study deals with the design of experiments. It aims at optimizing thermocouple positions in order to reduce uncertainties in the estimation of thermophysical properties when an inverse method is applied. The 2D system under investigation is a square sample of orthotropic material subjected to a constant heat flux on its left and bottom edges. The temperature response is given by three sensors. The unknown parameters are the volumetric thermal capacity and the conductivities in the two principal directions. The experiment design is based on an original optimality criterion. Two stochastic algorithms are used to find its minimum, and their efficiency is compared to that of a pure random search algorithm. To deal with the fact that the experiment design depends on the unknowns, a robust optimization approach is proposed.
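Sensor-placement criteria of this kind can be sketched with the D-optimality criterion mentioned earlier: choose the sensor position that maximizes the determinant of the Fisher information matrix. The two-parameter sensitivity model below is purely illustrative.

```python
# D-optimality sketch: scan candidate sensor positions x in [0, 1] and
# keep the one maximizing det(J^T J), where J stacks the sensitivities
# of each reading to the two unknown parameters (toy model).

def det_fim(x):
    # Hypothetical sensitivities of the temperature at position x.
    j1, j2 = x, 1.0 - x ** 2
    # Two readings: one at the candidate x, one fixed reading at x = 0.1.
    J = [[j1, j2], [0.1, 0.99]]
    M = [[sum(r[i] * r[j] for r in J) for j in range(2)] for i in range(2)]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

best = max((x / 100 for x in range(101)), key=det_fim)
```

Maximizing the determinant shrinks the volume of the confidence ellipsoid of the estimated parameters, which is why D-optimality is the most commonly used criterion.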
... Therefore, the solution from the uncertainty quantification method is not limited to a point estimate of the unknown quantities, but also provides a complete probabilistic description. The relevant theories can be found in [28][29][30][31][32]. Optimization based on Bayesian inference is widely used to solve stochastic IHTPs [32,33]. ...
... The formulas for the mean and covariance matrix of the Bayesian posterior model in this paper are given by Eqs. (30)–(33). The proof that the PPDF is a normal distribution is condensed into the following Lemma 1 and Theorem 2. (a) Assume R_n, A_n ∈ ℝ^{m×m}, with R_n and A_n A_n^T both positive definite; (b) let f_n^1(q_1, …, q_m) = X_n^T G X_n, where X_n = T_n^σ − (A_n q_n + N_n) and G = I_{m×m}; (c) let f_n^2(q_1, …, q_m) = q_n^T R_n q_n. ...
Article
The inverse heat transfer problems (IHTP) have a wide range of applications in the engineering field. Bayesian methods using Markov Chain Monte Carlo (MCMC) have long been considered a robust and effective approach for solving inverse problems. However, the discretization of the problem domain by a space-time Galerkin scheme, i.e., a finite element interpolation that also includes the time dimension, makes the number of unknown parameters extremely challenging for Bayesian calculations. In this paper, a fast Bayesian parallel sampling (FBPS) framework is proposed for large-scale parameter estimation in benchmark three-dimensional inverse heat transfer problems (3D-IHTP). The FBPS we developed handles parameter estimation at a scale of order 10^5 within minutes, through dimensionality reduction of the space-time-dependent problem domain. The Hamiltonian Monte Carlo (HMC) sampler, which is proven to be more efficient for high-dimensional parameter estimation, is employed. Several simulation tests of IHTP confirmed that the solving efficiency of FBPS significantly surpasses that of the traditional Bayesian strategy. Finally, FBPS is successfully applied to estimate the unknown heat flux at the chip heat sink and pack interface, given simulated high-resolution measurement data. Its reliability and efficiency show that FBPS has the potential to support efficient prediction techniques for a class of IHTPs in engineering applications.
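The HMC sampler mentioned above can be sketched in its simplest form: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. The 1-D standard-normal target, step size, and trajectory length below are arbitrary illustrative choices, not the paper's setup.

```python
import math
import random

random.seed(1)

def grad_neg_log_p(q):
    """Gradient of -log N(0,1): simply q."""
    return q

def leapfrog(q, p, eps, steps):
    """Leapfrog integrator for H(q, p) = 0.5*q^2 + 0.5*p^2."""
    p -= 0.5 * eps * grad_neg_log_p(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_neg_log_p(q)
    q += eps * p
    p -= 0.5 * eps * grad_neg_log_p(q)
    return q, p

def hmc(n, eps=0.2, steps=10):
    q, out = 0.0, []
    for _ in range(n):
        p0 = random.gauss(0.0, 1.0)            # fresh momentum
        q1, p1 = leapfrog(q, p0, eps, steps)
        h0 = 0.5 * q * q + 0.5 * p0 * p0
        h1 = 0.5 * q1 * q1 + 0.5 * p1 * p1
        if random.random() < math.exp(min(0.0, h0 - h1)):
            q = q1                              # accept proposal
        out.append(q)
    return out

draws = hmc(5000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because proposals follow (approximate) Hamiltonian trajectories instead of a random walk, HMC moves much farther per step in high dimensions, which is what makes the large parameter scales quoted above tractable.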
... The major justification of maximum likelihood estimates is usually their large-sample efficiency. Fadale et al. (1995a) and Emery et al. (2000) presented an extended MLE approach to examine the system uncertainties for parameter identification in heat transfer problems. When MLE is extended to function estimation, it takes into consideration the statistics of uncertainties but ignores prior knowledge of the unknowns, resulting in ill-posed problems and a failure to give smooth solutions (Wang and Zabaras, 2004). ...
... The general steps for determining the optimal experiment (Emery et al., 2000) are as follows: ...
Article
With the increase in computational power, numerical simulations play an increasing role in the design, development, and optimization of various food processing operations. Inverse and ill-posed problems have been studied extensively in many branches of science and engineering, including mechanical, chemical, and aerospace engineering, biology, physics, and chemistry. Inverse techniques have seen growing use in the food processing field over the past decade. Inverse problems are usually solved when direct measurements of heat and mass transfer properties and boundary conditions are not feasible. They are very sensitive to measurement errors and require optimization methods to tackle. Since food processing involves simultaneous heat and mass transfer within the food products, the coupled heat and mass transfer must be considered in the solution of the inverse problem. To date, inverse methods have been applied in a few food processing operations, including drying, baking, freezing, and thermal processing; however, studies in these areas on the estimation of unknown quantities for various fruits, vegetables, and food products are limited. This review focuses mainly on statistical concepts, including confidence intervals, confidence regions, model validation, and minimization techniques, which are discussed in detail. Optimal experimental design, the D-optimality criterion, the Fisher information matrix, and sensitivity analysis are all presented extensively. The optimization techniques and algorithms used in the area of food processing are explained. Finally, the review covers the inverse estimation of unknown quantities, namely heat and mass transfer parameters, in different food processing operations.
... Although the inverse heat conduction problems have been extensively studied for different applications in the past (e.g., [11][12][13][14][15][16][17][18][19]), little work has been done on the inverse problem related to laser irradiation of a remote surface. The laser energy is delivered to the target surface in a periodic way because of laser or atmospheric variations. ...
... In Eq. (23), the domain integral term is reformulated based on Green's second identity; the boundary conditions of the sensitivity problem given by Eqs. (16) and (17) are utilized, and then ΔS is allowed to go to zero. The vanishing of the integrands containing ΔT leads to the following adjoint problem for the determination of ...
Conference Paper
Full-text available
In High-Energy Laser (HEL) heating of a target, the temperature and heat flux at the heated surface are not directly measurable, but they can be estimated by solving an inverse heat conduction problem (IHCP) based on measured temperature and/or heat flux at the accessible (back) surface. In this study, the one-dimensional (1-D) IHCP in a finite slab is solved by the conjugate gradient method (CGM) using measured temperature and heat flux at the accessible (back) surface. Simulated measurement data are generated by solving a direct problem where the front surface of the slab is subjected to high-intensity periodic heating. Two cases are simulated and compared, with the temperature or heat flux at the heated front surface chosen as the unknown function to be recovered. The results showed that the latter choice, i.e., choosing the back-surface heat flux as the unknown function, can give better estimation accuracy in the IHCP solution. The front-surface temperature can be computed with high precision as a byproduct of the IHCP algorithm. The robustness of this IHCP formulation is tested with different measurement errors and frequencies of the input periodic heating flux. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.
... With the rapid explosion of computational power and critical demands on engineering system robustness and reliability, optimization under uncertainty is receiving a growing attention [5, 6]. Recently, a sequence of methods have been proposed to solve stochastic inverse heat transfer problems including sensitivity analysis by Norris [7] and Blackwell and Dowding [8], extended Maximum Likelihood Estimator (MLE) approach by Emery et al. [9, 10], spectral stochastic method by Narayanan and Zabaras [11], and Bayesian inference method by Ferrero and Gallagher [12], Leoni and Amon [13] and Wang and Zabaras [14, 15]. Compared with other methods, the Bayesian statistical inference method [16, 17] has some significant advantages. ...
Article
As most engineering systems and processes operate in an uncertain environment, it becomes increasingly important to address their analysis and inverse design in a stochastic manner using statistical data-driven methods. Recent advances in computational Bayesian and spatial statistics enable complete and efficient solution procedures for such problems. Herein, a novel framework based on Bayesian inference is presented for the solution of stochastic inverse problems in heat transfer. The posterior probability density function (PPDF) of unknowns (modeled as random variables or stochastic processes), such as material thermal properties and boundary heat flux, is computed given a finite set of thermocouple temperature measurements. Markov Chain Monte Carlo (MCMC) algorithms are exploited to obtain estimates of the statistics of the random unknowns. A parameter estimation problem is first solved using simple, hierarchical and augmented Bayesian models. Boundary heat flux reconstruction in heat conduction is then studied. Simulation results demonstrate the great potential of applying a Bayesian approach to stochastic estimation and design problems. Although discussed in the context of thermal systems, the methodology presented is general and applicable to design and estimation problems in diverse areas of engineering.
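The MCMC-based Bayesian estimation described above can be sketched with a random-walk Metropolis sampler recovering a scalar "conductivity" from noisy synthetic data under a flat prior. The model, data, proposal width, and noise level are all illustrative choices, not the paper's.

```python
import math
import random

random.seed(2)

# Synthetic data from a hypothetical linear model y = k * x + noise.
k_true, sigma = 2.0, 0.1
xs = (0.5, 1.0, 1.5, 2.0)
data = [k_true * x + random.gauss(0.0, sigma) for x in xs]

def log_like(k):
    """Gaussian log-likelihood (up to a constant) of the data given k."""
    return -sum((d - k * x) ** 2 for d, x in zip(data, xs)) / (2 * sigma ** 2)

# Random-walk Metropolis: propose, then accept with the usual ratio.
k, chain = 1.0, []
for _ in range(20_000):
    kp = k + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_like(kp) - log_like(k):
        k = kp
    chain.append(k)

post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

The retained chain approximates the full posterior, so credible intervals and other statistics of the unknown come for free, exactly the advantage of the Bayesian route over a single point estimate.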
... The model (15)–(17) was solved by means of a finite difference numerical scheme implemented with a moving grid strategy in order to minimize the numerical diffusion generated by the first-order space derivative present in the advection term of Equation (15). The numerical algorithm is standard and not reported here for the sake of brevity. ...
... A Kalman filter, described in the following paragraph, has been selected as the parameter estimation algorithm since it easily accounts for the measurement errors affecting the imposed control, the inlet temperature. Furthermore, it can be easily extended to account for the uncertainties associated with other geometrical or thermophysical parameters, as described by Emery et al. [17]. ...
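The Kalman-filter use above can be sketched in its simplest parameter-estimation form: treat the unknown constant as the state and update it from noisy measurements. All numbers below are illustrative.

```python
def kalman_constant(z_seq, r, x0=0.0, p0=10.0):
    """Estimate a constant parameter from measurements z_seq with
    measurement-noise variance r (H = 1, no process noise)."""
    x, p = x0, p0
    for z in z_seq:
        k_gain = p / (p + r)         # Kalman gain
        x = x + k_gain * (z - x)     # measurement update of the state
        p = (1.0 - k_gain) * p       # covariance update
    return x, p

# Four noisy readings of a parameter whose true value is about 2.0.
x, p = kalman_constant([2.1, 1.9, 2.05, 1.95], r=0.01)
```

With a large initial covariance the filter quickly forgets its prior and the estimate approaches the sample mean, while `p` tracks the remaining uncertainty.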
Article
Full-text available
Many studies on the heat transfer characterization of single-phase fixed-bed matrix regenerators are devoted to finding experimental correlations. Despite several deep investigations, the correlations that have emerged are not well established; indeed, the high complexity of the processes involved, the shape of the solid-fluid interface, and the complex geometry of the solid matrix make accurate experimental data difficult to obtain. The present work pursues a double objective: (i) to develop and propose an inverse method to identify h, the fluid-matrix heat transfer coefficient, by means of transient simulated experiments, and (ii) to investigate the sensitivity of the h reconstruction process to variations in the control input parameters and material properties, in order to find the optimal values of the experimental control variables that allow this unknown coefficient to be identified with "minimum variance". The reconstruction technique is applied to numerical experiments and is based on simulated measurements of oscillating fluid temperatures at the inlet and outlet of the regenerator. The identification of h is performed by means of an inverse search technique driven by the difference between simulated measurements and calculated temperature time histories at the regenerator outlet. First, experiments in different operating conditions are simulated in order to investigate the ability of the algorithm to identify the correct value of h and its uncertainty. Then a parametric study is performed, and the optimal control frequency of the known (imposed) oscillating temperature signal at the inlet is found as a function of the mass flow rate, the geometry, and other operating and thermophysical characteristics of the system.
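The residual-driven inverse search for h can be sketched as a simple misfit minimization: scan candidate values and keep the one that best reproduces the "measured" outlet temperatures. The exponential response model and numbers below are toy assumptions, not the regenerator model of the paper.

```python
import math

def outlet_temp(h, t):
    """Hypothetical outlet temperature response for coefficient h."""
    return 1.0 - math.exp(-h * t)

# Noiseless synthetic 'measurements' generated with a known h.
h_true = 0.8
times = [0.5, 1.0, 2.0, 4.0]
measured = [outlet_temp(h_true, t) for t in times]

def misfit(h):
    """Sum of squared differences between measured and modeled outputs."""
    return sum((m - outlet_temp(h, t)) ** 2 for m, t in zip(measured, times))

# Brute-force scan of candidate h values on a grid.
h_hat = min((i / 1000 for i in range(1, 2001)), key=misfit)
```

In practice a gradient or stochastic optimizer replaces the grid scan, and the curvature of the misfit around the minimum indicates the variance of the identified h.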
... An important question is what criterion of optimality to use when finding the optimal location-we suggest using a statistical criterion that leads to the minimal total variance of estimated parameters computed using the Fisher information matrix. We refer the reader to Emery & Fadale (1996), Fadale et al. (1995), Emery et al. (2000) and Li et al. (2008) for a general discussion of the criteria in the context of the sensor location problem. In the case of two measurements, the Jacobian takes the form ...
Article
Many industrial and engineering applications are built on the basis of differential equations. In some cases, parameters of these equations are not known and are estimated from measurements, leading to an inverse problem. Unlike many other papers, we suggest constructing new designs in an adaptive fashion 'on the go' using the A-optimality criterion. This approach is demonstrated on the determination of optimal locations of measurements and temperature sensors in several engineering applications: (1) determination of the optimal location to measure the height of a hanging wire in order to estimate the sagging parameter with minimum variance (toy example), (2) adaptive determination of optimal locations of temperature sensors in a one-dimensional inverse heat transfer problem and (3) adaptive design in the framework of a one-dimensional diffusion problem when the solution is found numerically using the finite difference approach. In all these problems, statistical criteria for parameter identification and optimal design of experiments are applied. Statistical simulations confirm that estimates derived from the adaptive optimal design converge to the true parameter values with minimum sum of variances as the number of measurements increases. We deliberately chose technically uncomplicated industrial problems to transparently introduce the principal ideas of statistical adaptive design.
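The adaptive A-optimality step can be sketched for the linear model y = a + b·x: given the measurements already taken, pick the next location that minimizes the total variance, i.e., the trace of (JᵀJ)⁻¹ (unit noise). The existing points and candidates below are made up.

```python
def trace_inv_fim(xs):
    """Trace of the inverse 2x2 Fisher matrix for y = a + b*x with
    unit-variance noise at measurement locations xs."""
    m00 = float(len(xs))
    m01 = sum(xs)
    m11 = sum(x * x for x in xs)
    det = m00 * m11 - m01 * m01
    return (m00 + m11) / det      # trace of the 2x2 inverse

# A-optimal choice of the NEXT measurement location, 'on the go'.
existing = [0.0, 0.5]
candidates = [0.25, 0.75, 1.0]
best = min(candidates, key=lambda x: trace_inv_fim(existing + [x]))
```

Repeating this selection after each new measurement is exactly the adaptive 'on the go' design: each new point is chosen where it most reduces the summed parameter variances.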
... In the literature, though, other measures of statistical dispersion can also be found [76]. The Fisher information measure has been used to estimate parameters from experiments, analyzing the relationship between information, sampling and the model [77], and also the uncertainty in parameter estimation [78,79]. Thus, let us resort to three properties of maximum likelihood estimation [80] ...
Thesis
Nowadays, a Coordinate Measuring Machine (CMM) is one of the essential tools used in the product verification process. Measurement points provided by a CMM are conveyed to the CMM data analysis software. As a matter of fact, the software can contribute significantly to the measurement uncertainty, which is very important from the metrological point of view. Mainly, this is related to the association algorithm used in the software, which is intended to find an optimum fitting solution necessary to ensure that the calculations performed satisfy functional requirements. There are various association methods that can be used in these algorithms (such as Least squares, Minimum zone, etc.); however, the current standards do not specify which of these methods has to be used. Moreover, there are different techniques for the evaluation of uncertainty (such as experimental resampling, Monte Carlo simulations, theoretical approaches based on gradients, etc.), which can be used with association methods for further processing. The uncertainty evaluated by a combination of an association method and an uncertainty evaluation technique is termed implementation uncertainty, which in turn is a contributor to measurement uncertainty according to the Geometrical Product Specification and Verification (GPS) project. This work is focused on the analysis of the impact of the association method on the implementation uncertainty, assuming that all the other factors (such as the sampling strategy, the measurement equipment parameters, etc.) are fixed and chosen according to standards, within the GPS framework. The object of the study is the Probabilistic method (PM), which is based on the classification of continuous subgroups of rigid motions (a mathematical principle of the GPS language) and non-parametric density estimation techniques. The method was essentially developed to decompose complex surfaces and has shown promise in shape partitioning.
However, it comprises geometric fitting procedures, which are considered in this work in more detail. The methodology of the research is based on the comparison of PM with another statistical association method, namely the Least squares method (LS), by means of parameter estimation and uncertainty evaluation. For the uncertainty evaluation, two different techniques, the Gradient-based and Bootstrap methods, are used in combination with both association methods, PM and LS. The comparison is performed through both analysis of the parameter estimation results and analysis of variance. The variances of the estimated parameters and of the estimated form error are considered as the response variables in the analysis of variance. The case study is restricted to the evaluation of the roundness geometric tolerance. Although the measurement process was simulated, the methodology can be applied to real measurement data. The results obtained during this work may be of interest from both theoretical and practical points of view.
... Important insights emerge by approaching inverse problems using a probabilistic framework. Some of the methods introduced to deal with this problem include the extended maximum likelihood method [16], the spectral stochastic method [17,18], the sparse grid collocation approach [19,20], stochastic reduced order models [21], and the Bayesian inference approach [22,23]. In the Bayesian formalism, one obtains additional insight by computing a probability distribution that summarizes all available information about the elastic moduli (e.g., we can estimate moments, marginal distributions, quantiles), as opposed to the single value obtained in the deterministic setting. ...
Article
A method is presented for inferring the presence of an inclusion inside a domain; the proposed approach is suitable for use in a diagnostic device with low computational power. Specifically, we use the Bayesian framework for the inference of stiff inclusions embedded in soft matrices, mimicking tumors in soft tissues. We rely on a Polynomial Chaos (PC) surrogate to accelerate the inference process. The PC surrogate predicts the dependence of the displacement field on the random elastic moduli of the materials, and is computed by means of the Stochastic Galerkin (SG) projection method. Moreover, the inclusion's geometry is assumed to be unknown, and this is addressed by using a dictionary consisting of several geometrical models with different configurations. A model selection approach based on the evidence provided by the data (Bayes factors) is used to discriminate among the different geometrical models and select the most suitable one. The idea of using a dictionary of pre-computed geometrical models helps keep the computational cost of the inference process very low, as most of the computational burden is carried out off-line in the resolution of the SG problems. Numerical tests are used to validate the methodology, assess its performance, and analyze its robustness to model errors.
... Recent studies, however, have shown that using the measured heat flux as additional information in an IHCP can reduce the proneness to the inherent instability of the ill-posed problem [9,10]. Although the IHCPs have been extensively studied for different applications in the past decades (e.g., [11][12][13][14][15][16][17][18][19]), little work has been done for the inverse numerical algorithm using heat flux measurement data in the objective functional. Furthermore, in HEL weapon applications, the laser energy may be delivered to the surface in a periodic way because of the target-spinning or atmosphere variations. ...
Article
Temperature and heat flux on inaccessible surfaces can be estimated by solving an inverse heat conduction problem (IHCP) based on the measured temperature and/or heat flux on accessible surfaces. In this study, the heat flux and temperature on the front (heated) surface of a three-dimensional (3D) object is recovered using the conjugate gradient method (CGM) with temperature and heat flux measured on back surface (opposite to the heated surface). The thermal properties of the 3D object are considered to be temperature-dependent. The simulated measurement data, i.e., the temperature and heat flux on the back surface, are obtained by numerically solving a direct problem where the front surface of the object is subjected to high intensity periodic laser heat flux with a Gaussian profile. The robustness of the formulated 3D IHCP algorithm is tested for two materials. The effects of the uncertainties in thermo-physical properties on the inverse solutions are also examined. Efforts are made to reduce the total number of heat flux sensors on the back surface required to recover the front-surface heating condition.
... Under this circumstance, some researchers propose to determine the heated surface temperature indirectly by solving an inverse heat conduction (IHC) problem [7,8] based on the transient temperature and/or heat flux measured at the back surface. Although IHC problems have been extensively studied for different applications in the past (e.g., [9–12]), little work has been done for composite materials subjected to high energy laser heating. Though Aviles-Ramos et al. [13] developed an exact solution for the IHC in a two-layer composite material, the pyrolysis effect was not considered in their model. ...
Article
A new numerical model is developed to simulate the 3-D inverse heat transfer in a composite target with pyrolysis and outgassing effects. The gas flow channel size and gas addition velocity are determined by the rate equation of decomposition chemical reaction. The thermophysical properties of the composite considered are temperature-dependent. A nonlinear conjugate gradient method (CGM) is applied to solve the inverse heat conduction problem for high-energy laser-irradiated composite targets. It is shown that the front-surface temperature can be recovered with satisfactory accuracy based on the temperature/heat flux measurements on the back surface and the temperature measurement at an interior plane.
... On the other hand, the sensitivity of the measured quantity to the eccentricity level may have a large impact on the identification results. Therefore, the CRLB stochastic technique, which is widely used in signal processing [18] and heat transfer applications [19], is utilized here. This technique has been recently used for estimating the error in the inverse problem solution of magnetic material characterization in different electromagnetic devices [20], [21]. ...
Article
Full-text available
Detection of a static eccentricity fault in rotating electrical machines is possible through several measurement techniques, such as shaft voltages and flux probes. A predictive maintenance approach typically requires that the condition monitoring technique is online, accurate, and able to detect incipient faults. This paper presents a study of the optimal measurement modality that leads to a minimal identification error of the static eccentricity in synchronous machines. The Cramér-Rao lower bound technique is implemented by taking into account both measurement and model uncertainties. Numerical results are obtained using a two-dimensional finite element model and are experimentally validated on a synchronous two-pole generator. Results indicate that shaft voltage measurements are better suited to the detection of static eccentricity.
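As a rough illustration of the Cramér-Rao machinery described above, the following NumPy sketch compares two hypothetical measurement modalities through the lower bound on estimation variance. The sensitivity values, noise level, and the optional model-uncertainty covariance are illustrative assumptions, not values from the study:

```python
import numpy as np

def cramer_rao_bound(jacobian, sigma_meas, model_cov=None):
    """Lower bound on the covariance of unbiased parameter estimates.

    jacobian: (m, p) sensitivities of the m measurements to the p
        parameters being identified.
    sigma_meas: measurement noise standard deviation.
    model_cov: optional (m, m) covariance contributed by uncertain
        model parameters; added to the measurement noise covariance.
    """
    m = jacobian.shape[0]
    cov = sigma_meas**2 * np.eye(m)
    if model_cov is not None:
        cov = cov + model_cov
    fim = jacobian.T @ np.linalg.solve(cov, jacobian)  # Fisher information
    return np.linalg.inv(fim)                          # CRLB covariance

# Hypothetical example: two candidate modalities for one unknown
# parameter; the modality with larger sensitivity gives a tighter bound.
J_shaft = np.array([[2.0], [2.1], [1.9]])   # shaft-voltage sensitivities
J_flux  = np.array([[0.5], [0.6], [0.4]])   # flux-probe sensitivities
crb_shaft = cramer_rao_bound(J_shaft, sigma_meas=0.1)
crb_flux  = cramer_rao_bound(J_flux,  sigma_meas=0.1)
print(crb_shaft[0, 0] < crb_flux[0, 0])     # the more sensitive modality wins
```

The same comparison can be repeated with a nonzero `model_cov` to see how model uncertainty inflates the achievable precision of each modality.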
... In chemistry, this technique has been used to discover the value of various parameters relevant to a reaction, making laboratory syntheses more successful (e.g. [15][16][17]), and the approach was used to develop and validate a new method for synthesizing a compound that has now been used in industry [18]. Optimal experiment design has also been used in pharmacology and clinical applications (e.g. ...
Article
Full-text available
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.
... To conduct thermal measurements under harsh environment, it has been proposed that sensors be located away from direct contact with the environment and mathematical models be used to calculate the desired quantities from the sensor measurement data. Specifically, the front surface temperature can be determined indirectly by solving an inverse heat conduction problem (IHCP) [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18] based on the transient temperature and/or heat flux measured on the back surface. ...
Article
We present a new method of solving the three-dimensional inverse heat conduction (3D IHC) problem with the special geometry of a thin sheet. The 3D heat equation is first simplified to a 1D equation through modal expansions. Through a Laplace transform, algebraic relationships are obtained that express the front surface temperature and heat flux in terms of those same thermal quantities on the back surface. We expand the transfer functions as infinite products of simple polynomials using the Hadamard Factorization Theorem. The straightforward inverse Laplace transforms of these simple polynomials lead to relationships for each mode in the time domain. The time domain operations are implemented through iterative procedures to calculate the front surface quantities from the data on the back surface. The iterative procedures require numerical differentiation of noisy sensor data, which is accomplished by the Savitzky–Golay method. To handle the case when part of the back surface is not accessible to sensors, we used the least squares fit to obtain the modal temperature from the sensor data. The results from the proposed method are compared with an analytical solution and with the numerical solution of a 3D heat conduction problem with a constant net heat flux distribution on the front surface.
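The Savitzky–Golay differentiation of noisy sensor data mentioned above can be sketched as a sliding local polynomial fit with NumPy. The window length, polynomial order, and the synthetic ramp signal are illustrative assumptions:

```python
import numpy as np

def savgol_derivative(y, window, order, dt):
    """Smoothed first derivative of noisy samples y (uniform spacing dt).

    Fits a polynomial of the given order to each centered window by least
    squares and evaluates its derivative at the window midpoint, which is
    the Savitzky-Golay estimate of dy/dt there.
    """
    half = window // 2
    t = (np.arange(window) - half) * dt   # local time axis, midpoint at 0
    dydt = np.full(len(y), np.nan)        # edges left undefined
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(t, y[i - half:i + half + 1], order)
        dydt[i] = coeffs[-2]              # polynomial derivative at t = 0
    return dydt

# Noisy ramp: the true derivative is 2.0 everywhere.
rng = np.random.default_rng(0)
time = np.arange(0.0, 5.0, 0.05)
signal = 2.0 * time + 0.01 * rng.standard_normal(time.size)
deriv = savgol_derivative(signal, window=11, order=2, dt=0.05)
print(np.nanmean(deriv))   # close to 2.0
```

In production code `scipy.signal.savgol_filter` with `deriv=1` performs the same operation far more efficiently; the loop above only makes the underlying least-squares idea explicit.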
... Stochastic or statistical applications to heat transfer have been more limited. Among others, Emery et al. have explored the use of an extended maximum likelihood estimator (MLE) framework [20, 21] for inverse parameter estimation problems in heat conduction systems. ...
Article
This chapter presents a stochastic modeling and statistical inference approach to the solution of inverse problems in thermal transport systems. Of particular interest is the inverse heat conduction problem (IHCP) of estimating an unknown boundary heat flux in a conducting solid given temperature data within the domain. Even though deterministic sequential and whole time domain estimation methods have been applied with success over the years for the solution of such problems, we herein introduce stochastic approaches to representing and solving the IHCP. As most engineering systems and processes operate in an uncertain environment, it becomes increasingly important to address their analysis and inverse design in a stochastic manner using statistical data-driven prior and concurrent information on the system response. Recent advances in spectral stochastic modeling, computational Bayesian and spatial statistics enable complete and efficient solution procedures for such problems. Two distinct approaches to the IHCP are presented in this chapter: one based on spectral stochastic modeling and the other on Bayesian inference. Although these techniques are discussed in the context of the IHCP, the methodologies presented are general and applicable to design and estimation problems in other, more complex thermal transport systems, including problems in the presence of convection, radiation and conduction.
... There exist several possible ways of defining the optimal experiments when constraints and an initial set of models are considered. Examples of such experiments are designs of input sequences that consist of maximizing the determinant of the Fisher information matrix, or D-optimality (Goodwin and Payne (1977); Emery et al. (2000)), constructions based on the sensitivity matrix (Point et al. (1996)), designs based on an overall measure of the divergence between the model predictions (Asprey and Macchietto (2000)), or designs based on maximizing the smallest eigenvalue of the Fisher information matrix (Antoulas and Anderson (1999); Sadegh et al. (1998)). ...
... Recent studies, however, have shown that using the measured heat flux as additional information in an IHCP can increase the stability of the solution by making it less prone to the inherent instability of the ill-posed IHCP [9,10]. Although IHCPs have been extensively studied for different applications in the past decades, most of them used temperature measurement data in the objective function [11–24]. Little work has been done on inverse numerical algorithms based on heat flux measurement data. ...
Article
Temperature and heat flux at the heated surface can be estimated by solving an inverse heat conduction problem (IHCP) based on measured temperature and/or heat flux at the accessible locations (e.g., back surface). Most of the previous studies used temperature measurement data in the objective function, and little work has been done for the inverse numerical algorithm based on heat flux measurement data. In this study, a one-dimensional IHCP in a finite slab is solved by using the conjugate gradient method. The heat flux measurement data are, for the first time, incorporated into the objective function for a nonlinear heat conduction problem with temperature-dependent thermophysical properties. The results clearly show that the inverse approach of using heat flux measurement data in the objective function can provide much better predictions than the traditional approaches in which the temperature measurements are employed in the objective function. Parametric studies are performed to demonstrate the robustness of the formulated IHCP algorithm by testing it for two different materials under different frequencies of the imposed heat flux along with random errors of the measured heat flux at the back surface.
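The conjugate-gradient minimization at the heart of such IHCP algorithms can be sketched for a linear stand-in model (the paper itself treats a nonlinear problem with temperature-dependent properties). The lower-triangular "causal" response matrix and the flux history below are synthetic assumptions, not the paper's model:

```python
import numpy as np

def cgm_least_squares(A, y_meas, n_iter=100):
    """Conjugate gradient minimization of J(q) = ||A q - y_meas||^2."""
    q = np.zeros(A.shape[1])
    grad = 2.0 * A.T @ (A @ q - y_meas)
    d = -grad
    for _ in range(n_iter):
        Ad = A @ d
        alpha = -(grad @ d) / (2.0 * (Ad @ Ad))       # exact line search
        q = q + alpha * d
        grad_new = 2.0 * A.T @ (A @ q - y_meas)
        if grad_new @ grad_new < 1e-28:               # converged
            return q
        beta = (grad_new @ grad_new) / (grad @ grad)  # Fletcher-Reeves
        d = -grad_new + beta * d
        grad = grad_new
    return q

# Hypothetical lower-triangular response matrix standing in for the heat
# conduction model, plus a synthetic "true" front-surface flux history.
n = 20
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.where(i >= j, np.exp(-0.3 * (i - j)), 0.0)
q_true = np.sin(np.linspace(0.0, np.pi, n))
y_meas = A @ q_true                     # noise-free back-surface data
q_est = cgm_least_squares(A, y_meas)
print(np.max(np.abs(q_est - q_true)))   # near machine precision
```

With noisy data, the iteration count itself acts as a regularization parameter: stopping early (the discrepancy principle) prevents the noise from being amplified into the recovered flux.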
... Imagine a thin plate separating the environment and the sensors that are mounted on the back surface. The front surface temperature can be determined indirectly by solving an inverse heat conduction problem [1–13] based on the transient temperature and/or heat flux measured at the back surface. Among the many methods proposed to solve the inverse heat conduction problem, the Laplace transform method (if applicable) most concisely captures the mathematical relationships in terms of transfer functions [14–18]. ...
Article
Laplace transform is used to solve the problem of heat conduction over a finite slab. The temperature and heat flux on the two surfaces of a slab are related by the transfer functions. These relationships can be used to calculate the front surface heat input (temperature and heat flux) from the back surface measurements (temperature and/or heat flux) when the front surface measurements are not feasible to obtain. This paper demonstrates that the front surface inputs can be obtained from the sensor data without resorting to inverse Laplace transform. Through Hadamard Factorization Theorem, the transfer functions are represented as infinite products of simple polynomials. Consequently, the relationships between the front and back surfaces are translated to the time-domain without inverse Laplace transforms. These time-domain relationships are used to obtain approximate solutions through iterative procedures. We select a numerical method that can smooth the data to filter out noise and at the same time obtain the time derivatives of the data. The smoothed data and time derivatives are then used to calculate the front surface inputs.
... The majority of the deterministic approaches restate the problem as a least-squares minimization problem and lead to estimates of unknowns without rigorously considering system uncertainties and without providing quantification of uncertainty in the inverse problem [1, 2]. Several methods have been introduced to address inverse problems under uncertainties, such as the extended maximum likelihood method [3], the spectral stochastic method [4, 5], the sparse grid collocation approach [6] and the Bayesian inference approach [7, 8]. The Bayesian inference approach provides a systematic means of taking system variabilities and parameter fluctuations into account. ...
Article
A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
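A stripped-down version of the likelihood-sampling idea, with a cheap polynomial surrogate standing in for the expensive forward model and a random-walk Metropolis sampler in place of the paper's more sophisticated MCMC machinery. All model and noise values are illustrative assumptions:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, rng):
    """Random-walk Metropolis sampler for a scalar unknown."""
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# A cheap polynomial "surrogate" stands in for an expensive forward model.
surrogate = lambda theta: 1.5 * theta + 0.2 * theta**2
theta_true, sigma = 2.0, 0.5
rng = np.random.default_rng(1)
y_obs = surrogate(theta_true) + sigma * rng.standard_normal(20)

def log_post(theta):                 # flat prior, Gaussian likelihood
    r = y_obs - surrogate(theta)
    return -0.5 * np.sum(r**2) / sigma**2

chain = metropolis(log_post, x0=0.0, step=0.1, n_samples=5000, rng=rng)
print(np.mean(chain[1000:]))         # posterior mean near theta_true
```

Because every likelihood evaluation only calls the surrogate, the chain is cheap to run; this is exactly the role the ASGC interpolant plays in the paper, with the Markov random field prior and hierarchical layers added on top.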
... Compared with other techniques for inverse problems under uncertainties, e.g. the sensitivity analysis [15], the extended maximum likelihood method [16] and the spectral stochastic method [17], the Bayesian inference approach has some unique attributes [18][19][20]. Firstly, it constitutes a complete probabilistic description of the inverse problem and thus provides a natural framework for quantifying its uncertainties. ...
Article
This paper studies a Bayesian inference approach to the Cauchy problem in steady-state heat conduction of probabilistically calibrating the boundary temperature. The prior modeling is achieved via the Markov random field, and its regularizing property is investigated. A hierarchical Bayesian model is adopted for selecting the regularization parameter and detecting the noise level automatically. The posterior state space is explored using the Markov chain Monte Carlo for obtaining relevant statistics. Two augmented Tikhonov regularization methods that could determine the regularization parameter and the noise level are proposed and analyzed. Numerical results indicate that the Bayesian inference approach could yield an accurate estimate of the solution with its uncertainties quantified, and the augmented Tikhonov regularization methods are accurate and flexible. Copyright © 2008 John Wiley & Sons, Ltd.
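The core of the Tikhonov regularization referred to above can be sketched in a few lines. The diagonal test system, fixed noise vector, and regularization parameter are illustrative assumptions chosen so the effect of the small singular value is easy to see:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 (zeroth-order Tikhonov)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned diagonal system: the smallest singular value (1e-3)
# amplifies data noise a thousandfold in the naive solution.
s = np.array([1.0, 0.1, 1e-3])
A = np.diag(s)
x_true = np.ones(3)
noise = np.array([0.01, -0.01, 0.01])     # fixed "measurement" noise
b = A @ x_true + noise

x_naive = np.linalg.solve(A, b)           # unregularized inversion
x_reg = tikhonov(A, b, lam=1e-4)          # regularized solution
print(np.linalg.norm(x_naive - x_true))   # ~10: noise blown up
print(np.linalg.norm(x_reg - x_true))     # <1: damped but stable
```

The augmented Tikhonov methods of the paper go one step further and estimate `lam` and the noise level jointly from the data rather than fixing them in advance.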
... The approach by Bergot and Doerenbecher [13, 14] is very close to the V-optimality condition used in some works on optimal experiment design [15]. In a significant number of works on optimum experiment design [16–19], certain measures of the FIM (determinant, maximum or minimum eigenvalues, trace) are used as criteria for an optimal sensor placement. These approaches are rather computationally expensive due to the need to operate directly with the FIM or the Hessian. ...
Article
Criteria of optimality for sensor locations are addressed using an interpolation error transformed by special adjoint problems. The considered criteria correspond to the analysis error in certain Hessian-based metrics and to the error of some forecast aspect. Both criteria are obtained using adjoint problems that allow computation without the direct use of the Hessian. For a linear inverse heat conduction problem, these criteria are compared and demonstrate promising results relative to a criterion based on the norm of the interpolation error of observation data. Approaches to sensor set modification using either redistribution of sensors or refinement of the sensor grid (insertion of additional sensors) are also compared. Copyright © 2009 John Wiley & Sons, Ltd.
Article
The paper deals with the design of an experiment for estimating thermophysical parameters in a problem described by Sawaf et al. (1995) and Ruffio et al. (2012). Its aim is to illustrate potential problems connected to the application of a nonlinear regression with “nuisance” random parameters for estimating parameters of interest. By presenting four simplified versions of the problem, in which a solution is found numerically and its quality is assessed by Monte Carlo simulations, the paper illustrates typical features of nonlinear regression with random common factors.
Chapter
The paper begins in Part I with at least a part of the answer to "Why did I become an engineer and a professor?" Growing up in a family unfamiliar with higher education, neither the implied question about attending college nor the answer was obvious. Because of the cataclysmic beginning of World War II in the Pacific and the associated scare of an attack, I ended up going to school 12 months a year. This had a major influence on my early schooling and may have been the impetus that led to my professional development. Part II describes my involvement with inverse problems and parameter estimation. The current emphasis on complex computer models to simulate thermal systems requires that the model parameters be known with precision. In addition, the uncertainties associated with the experimental data and the model predictions are topics of great interest to both experimentalists and modelers. This is particularly true when models are used to extrapolate performance to regions outside of the parameter space used for validation of the model. During a trip to Russia, a fortuitous meeting with faculty of the Moscow Aviation Institute introduced me to inverse problems. This was a fascinating area, and in reading the literature, particularly that related to electrical engineering and radar sensing, I became fascinated with Bayesian statistics and inference. This chapter describes the development of my interests and the technical details associated with both inverse problems and parameter estimation. Early parameter estimation efforts were based on the least squares technique, which for normally distributed variables is equivalent to maximum likelihood. Unfortunately, these solutions give at best only approximate estimates of the uncertainty associated with the estimates. Bayesian inference supplies more precise estimates, but at a substantial increase in computational cost. An alternative approach is that of Markov Chain Monte Carlo, still very expensive.
Part II describes these different methods and presents the results of their application to a number of thermal problems.
Article
A non-invasive virtual sensor is employed for the inverse prediction of the time-varying ledge profile that forms inside high-temperature metallurgical reactors filled with a load of phase change material (PCM). The virtual sensor is tested for thermophysical properties of the vessel wall and of the PCM that fall outside the range for which it was originally designed. The results are analyzed and presented in terms of the shift of key thermophysical properties from the reference case. Results indicate that the virtual sensor is more sensitive to the variation of the properties of the phase change material than to that of the vessel walls. The virtual sensor response remains accurate for reactor loads of high thermal inertia. The virtual sensor may still be used for reactor loads of low thermal inertia provided that the thermophysical properties of the PCM are well-known.
Article
Axial flux permanent magnet synchronous machines (AFPMSMs) have been extensively used in different applications due to their excellent performance [1]. In particular, AFPMSMs are well suited for applications that require high power density, such as sustainable energy applications. In practice, these types of machines suffer from different faults, such as rotor eccentricity and permanent magnet (PM) demagnetization, which decrease their reliability. Early detection of these faults can help mitigate catastrophic failures.
Article
Magnetic properties of the core materials of electromagnetic devices are reconstructed by solving a coupled experimental-numerical electromagnetic inverse problem. However, measurement noise as well as uncertainties in the forward model parameters and structure may result in dramatic errors in the recovered values of the material parameters. In this paper, we review the use of the electromagnetic inverse problem for the identification of magnetic material characteristics. The inverse algorithm is combined with a generic stochastic uncertainty analysis for a priori qualitative error estimation and a quantitative error reduction. The complete inverse methodology is applied to the identification of the magnetizing B-H curve of the magnetic material of a commercial asynchronous machine. Both numerical and experimental results validate the inverse approach, showing a good capability for magnetic material identification in electromagnetic devices. The proposed technique is general and can be applied to a wide range of applications in the electromagnetic community.
Article
Methods are discussed for computing the sensitivity of the temperature field to changes in material properties and initial/boundary condition parameters for heat conduction problems. The most general method is to derive sensitivity equations by differentiating the energy equation with respect to the parameter of interest and solving the resulting sensitivity equations numerically. An example problem in which there are 12 parameters of interest is presented and the resulting sensitivity equations and associated boundary and initial conditions are derived. The sensitivity equations are implemented in a general-purpose unstructured-grid control-volume finite-element code. Numerical results are presented for thermal conductivity and volumetric heat capacity sensitivity coefficients for heat conduction in a 2-D orthotropic body. The numerical results are compared with the analytical solution to demonstrate that the numerical sensitivity method is second-order accurate as the mesh is refined spatially.
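As a minimal alternative to deriving and solving the differentiated sensitivity equations, the sensitivity of the temperature field to the conductivity can be approximated by central finite differences around a simple explicit 1-D solver. The grid size, time step, and boundary values below are illustrative assumptions:

```python
import numpy as np

def heat_solve(k, n=50, steps=200, dt=1e-4):
    """Explicit 1-D conduction on a unit rod, ends fixed at 0 and 1."""
    T = np.zeros(n)
    T[-1] = 1.0
    dx = 1.0 / (n - 1)
    for _ in range(steps):       # forward-Euler time stepping (stable:
        T[1:-1] += k * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T                     # k*dt/dx^2 ~ 0.24 < 0.5)

# Sensitivity dT/dk by central finite differences, a simple check on
# (or substitute for) the differentiated sensitivity equations.
k0, dk = 1.0, 1e-6
sens = (heat_solve(k0 + dk) - heat_solve(k0 - dk)) / (2 * dk)
print(sens.max())   # positive in the interior, zero at the fixed ends
```

The sensitivity-equation approach of the paper computes the same field by solving one extra PDE per parameter, which scales better than finite differences when many parameters are of interest.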
Article
This paper presents a policy for selecting the most informative individuals in a teacher-learner type coevolution. We propose the use of the surprisal of the mean, based on Shannon information theory, which best disambiguates a collection of arbitrary and competing models based solely on their predictions. This policy is demonstrated within an iterative coevolutionary framework consisting of symbolic regression for model inference and a genetic algorithm for optimal experiment design. Complex symbolic expressions are reliably inferred using fewer than 32 observations. The policy requires 21% fewer experiments for model inference compared to the baselines and is particularly effective in the presence of noise corruption, local information content as well as high dimensional systems. Furthermore, the policy was applied in a real-world setting to model concrete compression strength, where it was able to achieve 96.1% of the passive machine learning baseline performance with only 16.6% of the data.
Article
We propose a new method of safety monitoring for overheating in exothermic reactions that is applicable to detection of thermal runaway in Li-ion batteries. The proposed method is based on the solutions of a one-dimensional heat conduction problem across the wall thickness of a cylinder. The problem is known as an Inverse Heat Conduction Problem (IHCP) since heat is conducted outwards while monitoring sensors only have access to the outside surface. We first obtain the transfer functions relating the inner and outer boundaries through Laplace transform. We then use Hadamard factorization theorem to express the transfer functions in terms of infinite product of polynomials. Truncations of the polynomial transfer functions represent time domain relationships between physically accessible measurements and the heating inside a cylindrical wall. These relationships lead us to propose time derivatives of temperature as better indicators for safety monitoring in exothermic processes.
Article
Suitable positioning of temperature probes improves the accuracy of thermal-diffusivity measurements in thermo-optical tests. The optimal positions depend on the unknown diffusivity, making the positions unknown a priori. One solution is to measure the temperature field and choose the optimal positions after the experiment. D-optimality is used here to choose the best positions for temperature measurement to determine the principal components of thermal diffusivity for transversely isotropic materials in a flash-type experiment. Two D-optimality parameters are examined: one uses all available information; the other neglects nuisance parameters. The slab specimens are heated over a central region while temperatures are measured on the opposite face. Increasing the duration of the heating pulse provides more information, within the limit of the imposed boundary conditions. Experiments using a metal plate showed that measurements made near the optimal positions improve the accuracy of the estimated diffusivity. These results support using IR thermography to provide flexibility in positioning measurements. This method of optimization shows promise in optimizing measurement of specimens having transverse isotropy.
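The D-optimality idea of the study, choosing measurement positions that maximize the determinant of the information matrix built from the sensitivity matrix, can be sketched by exhaustive search over a small hypothetical candidate set (the sensitivity values are made up for illustration):

```python
import numpy as np
from itertools import combinations

def d_optimal_positions(S, k):
    """Choose k rows of sensitivity matrix S maximizing det(S_k^T S_k).

    Rows of S are candidate measurement positions; columns are the
    parameters to estimate. Exhaustive search, fine for small problems.
    """
    best_det, best_idx = -np.inf, None
    for idx in combinations(range(S.shape[0]), k):
        Sk = S[list(idx)]
        d = np.linalg.det(Sk.T @ Sk)
        if d > best_det:
            best_det, best_idx = d, idx
    return best_idx, best_det

# Hypothetical sensitivities of temperature to two diffusivity
# components at five candidate probe positions.
S = np.array([[1.0, 0.1],
              [0.8, 0.4],
              [0.5, 0.5],
              [0.2, 0.9],
              [0.05, 1.0]])
idx, det_val = d_optimal_positions(S, k=2)
print(idx)   # picks the pair with the most complementary sensitivities
```

Note how the winning pair combines one position sensitive mostly to the first parameter with one sensitive mostly to the second; positions with nearly parallel sensitivity rows contribute little to the determinant.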
Article
Many engineering designs are built on the basis of differential equations. In some cases, parameters of the underlying differential equations are not known and should be estimated from the measurement data at the sensors' locations, leading to an inverse problem. In this paper, we demonstrate how the optimal locations of these sensors can be determined using the statistical theory of optimal experiments. Three specific engineering problems are discussed to illustrate the statistical criteria: (1) adaptive determination of optimal locations of temperature sensors in a one-dimensional inverse heat transfer problem, (2) localization of a power leak/damage in conductive wire, (3) adaptive design in the framework of a one-dimensional boundary-value problem when the solution is found numerically using the finite difference approach. In all these problems, statistical criteria for parameter identification and optimal designs are applied.
Article
The inverse heat conduction problem (IHCP) in a one-dimensional composite slab with rate-dependent pyrolysis chemical reaction and outgassing flow effects is investigated using the conjugate gradient method (CGM). The thermal properties of the composites are considered to be temperature-dependent, which makes the IHCP a nonlinear problem. The inverse problem is formulated in such a way that the front-surface heat flux is chosen as the unknown function to be recovered, and the front-surface temperature is computed as a by-product of the IHCP algorithm, which uses back-surface temperature and heat flux measurements. The proposed IHCP formulation is then applied to solve the IHCP in an organic composite slab whose front surface is subjected to high intensity periodic laser heating. It is shown that an extra temperature sensor located at an interior position is necessary since the organic composites usually possess a very low thermal conductivity. It is also found that the frequency of the periodic laser heating flux plays a dominant role in the inverse solution accuracy. In addition, the robustness of the proposed algorithm is demonstrated by its capability in handling the case of thermophysical properties with random errors.
Article
When thermal systems are described by models, it is necessary to know the properties and parameters of the model to ensure accurate results. These properties are usually determined from experiments designed to maximize the precision of the estimated properties. Achieving such high precision requires that all of the other properties be known with certainty. This paper describes a method of design and estimation based upon Fisher's concept of information that achieves good precision in estimating thermal properties even when the other parameters are known only approximately.
Article
Full-text available
This theoretical and numerical study deals with the estimation of thermal diffusivities of orthotropic materials with the 3D-laser-flash method. This method consists in applying a short non-uniform heat flux to a sample in order to generate three-dimensional heat transfer. An infrared camera is used to measure the evolution of the temperature field at the front face or at the back face of the sample. An estimation procedure, i.e., an estimator, combines these measurements with an analytical solution of the underlying model in order to estimate unknown parameters, i.e., thermal diffusivities, a heat transfer coefficient and parameters related to the spatial shape of the laser beam. In this work, three estimators inspired from previous work are presented and some improvements are proposed. A fourth estimator is introduced and compared to the previous ones. This comparison is based on theoretical standard deviations of thermal diffusivities. Results show that standard deviations can vary up to a factor of 4 and are minimized by using the fourth procedure.
Article
Full-text available
An inverse heat conduction analysis is presented to simultaneously estimate the temperature-dependent thermal conductivity and heat capacity based on a modified elitist genetic algorithm (MEGA). In this study, MEGA is used to minimize a least squares objective function containing estimated and simulated (filtered) temperatures. The estimated temperatures are obtained from the direct numerical solution (finite difference method, or FDM) of the finite one-dimensional conductive model by using an estimate for the unknown temperature-dependent thermophysical properties (TDTPs). The accuracy of the MEGA is assessed by comparing the estimated and the preselected TDTPs. The results of the MEGA are used as the starting point for a locally convergent optimization algorithm, i.e., the Levenberg–Marquardt (L–M) method. It is shown in this work that hybridization of the MEGA with the L–M method can lead to accurate estimates. From the results, it is found that the RMS error between estimated and simulated temperatures is very small irrespective of whether measurement errors are included or excluded. In addition to estimation of the TDTPs, sensitivity analysis is performed to investigate the effects of heating duration. Also, it is found that the results of the MEGA are highly satisfactory with only single-sensor measurements on the heated surface.
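A toy version of such a hybrid strategy, with a crude random search standing in for the genetic-algorithm stage and a minimal hand-rolled Levenberg-Marquardt refinement. The exponential cooling model and all numbers are illustrative assumptions:

```python
import numpy as np

def levenberg_marquardt(resid, p0, n_iter=50, lam=1e-2):
    """Minimal Levenberg-Marquardt loop, finite-difference Jacobian."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = resid(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):               # forward differences
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (resid(p + dp) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(resid(p + step)**2) < np.sum(r**2):
            p, lam = p + step, lam * 0.5      # accept, trust model more
        else:
            lam *= 2.0                        # reject, damp harder
    return p

# Synthetic "measurement": exponential cooling with unknown (a, b).
t = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(-1.2 * t)
resid = lambda p: p[0] * np.exp(-p[1] * t) - y

# Crude random search stands in for the genetic-algorithm stage.
rng = np.random.default_rng(3)
cands = rng.uniform(0.1, 5.0, size=(200, 2))
p0 = min(cands, key=lambda p: np.sum(resid(p)**2))
p_hat = levenberg_marquardt(resid, p0)
print(p_hat)   # close to (3.0, 1.2)
```

The division of labor mirrors the paper: the global stage only needs to land in the right basin of attraction, after which the locally convergent L-M iteration polishes the estimate cheaply.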
Article
Purpose The purpose of this paper is to determine a priori the optimal needle placement so as to achieve an identification of the magnetic properties of an electromagnetic device that is as accurate as possible. Moreover, the effect of uncertainties in the geometrical parameter values on the optimal sensor position is studied. Design/methodology/approach The optimal needle placement is determined using the stochastic Cramér‐Rao lower bound method. The results obtained using the stochastic method are compared with a first-order sensitivity analysis. The inverse problem is solved starting from real local magnetic induction measurements coupled with a 3D finite element model of an electromagnetic device (EI core inductor). Findings The optimal experimental design for the identification of the magnetic properties of an electromagnetic device is achieved. The uncertainties in the geometrical model parameters have a strong effect on the solution recovered by the inverse problem. Originality/value The solution of the inverse problem is more accurate because the measurements are carried out at the optimal positions, in which the effects of the uncertainties in the geometrical model parameters are limited.
Article
Usually, when determining parameters with an inverse method, it is assumed that parameters or properties other than those being sought are known exactly. When such known parameters are uncertain, the inverse solution can be very sensitive to the degree of uncertainty. This paper presents a modification to the least squares technique which reduces the sensitivity to the uncertainty in the known parameters. In the presence of noisy data the method can be improved slightly by using Tikhonov regularization. The reason for this limited improvement can be understood by examining the stochastic regularization method.
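The regularized least-squares modification referred to above can be illustrated with a toy two-parameter linear problem with nearly collinear sensitivities (all numbers are illustrative, not from the paper): the Tikhonov penalty keeps the estimate bounded where ordinary least squares is unstable.

```python
# Minimal sketch of Tikhonov regularization for a 2-parameter linear
# least-squares problem; A, b, and lam are illustrative values.
def tikhonov_solve(A, b, lam):
    # Normal equations (A^T A + lam*I) x = A^T b for a 2-column A,
    # solved directly with Cramer's rule.
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
    a00 = ata[0][0] + lam
    a11 = ata[1][1] + lam
    a01 = ata[0][1]
    det = a00 * a11 - a01 * a01
    return [(a11 * atb[0] - a01 * atb[1]) / det,
            (a00 * atb[1] - a01 * atb[0]) / det]

# Nearly collinear columns make the unregularized problem ill-conditioned.
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0005, 1.9990]              # slightly "noisy" data near x = [1, 1]
x_ls = tikhonov_solve(A, b, 0.0)       # ordinary least squares: unstable
x_reg = tikhonov_solve(A, b, 1e-3)     # regularized: stays near [1, 1]
print(x_ls, x_reg)
```

The regularized estimate lands close to [1, 1] while the unregularized one is pushed far away by the tiny perturbations in b, the sensitivity the abstract describes.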
Article
Thermochemical materials, particularly salt hydrates, have significant potential for use in thermal energy storage applications. When a salt hydrate is heated to a threshold temperature, a chemical reaction is initiated to dissociate it into its anhydrous form and water vapor. The anhydrous salt stores the sensible energy that was supplied for dehydration, which can be later extracted by allowing cooler water or water vapor to flow through the salt, transforming the stored energy into sensible heat. We model the heat release that occurs during a thermochemical hydration reaction using relations for mass and energy conservation, and for chemical kinetics and stoichiometry. A set of physically significant dimensionless parameters reduces the number of design variables. Through a robust sensitivity analysis, we identify those parameters from this group that more significantly influence the performance of the heat release process, namely a modified Damköhler number, the thermochemical heat capacity, and the heat flux and flowrate. There is a strong nonlinear relationship between these parameters and the process efficiency. The optimization of the efficiency with respect to the parameters provides guidance for designing engineering solutions in terms of material selection and system properties.
Article
The measured voltage signals picked up by the needle probe method can be interpreted by a numerical method so as to identify the magnetic material properties of the magnetic circuit of an electromagnetic device. However, when solving this electromagnetic inverse problem, the uncertainties in the numerical method give rise to recovery errors since the calculated needle signals in the forward problem are sensitive to these uncertainties. This paper proposes a stochastic Cramér–Rao bound method for determining the optimal sensor placement in the experimental setup. The numerical method is computationally time efficient where the geometrical parameters need to be provided. We apply the method for the non-destructive magnetic material characterization of an EI inductor where we ascertain the optimal experiment design. This design corresponds to the highest possible resolution that can be obtained when solving the inverse problem. Moreover, the presented results are validated by comparison with the exact material characteristics. The results show that the proposed methodology is independent of the values of the material parameter so that it can be applied before solving the inverse problem, i.e. as a priori estimation stage.
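The Cramér–Rao reasoning above can be sketched for a one-parameter toy problem (the exponential response model and noise level below are hypothetical stand-ins, not the EI-inductor model): the bound σ²/s(x)² is evaluated over candidate sensor positions, and the position with the smallest bound, i.e. the largest sensitivity magnitude, is selected.

```python
import math

# One unknown parameter theta; a scalar measurement at position x has
# Gaussian noise with standard deviation sigma (illustrative model).
def sensitivity(x, theta):
    # d/dtheta of exp(-theta*x) is -x*exp(-theta*x)
    return -x * math.exp(-theta * x)

def crb(x, theta, sigma):
    # Cramer-Rao lower bound for one measurement: sigma^2 / s(x)^2
    s = sensitivity(x, theta)
    return sigma ** 2 / (s * s)

theta, sigma = 1.0, 0.01
candidates = [0.1 * i for i in range(1, 31)]          # candidate positions
best = min(candidates, key=lambda x: crb(x, theta, sigma))
print(best)
```

For this model the sensitivity |x·exp(-θx)| peaks at x = 1/θ, so the optimal position is independent of σ, echoing the abstract's point that the design can be fixed a priori, before the inverse problem is solved.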
Article
Computational simulation models are extensively used in the development, design, and analysis of an aircraft engine and its components to represent the physics of an underlying phenomenon. The use of such a model-based simulation in engineering often necessitates the need to estimate model parameters based on physical experiments or field data. This class of problems, referred to as inverse problems (Woodbury KA 2003 Inverse engineering handbook. CRC, Boca Raton) in the literature, can be classified as well-posed or ill-posed depending on the quality (uncertainty) and quantity (amount) of data that are available to the engineer. The development of a generic inverse modeling solver in a probabilistic design system (PEZ version 2.6 user-manual. Probabilistic design system at General Electric Aviation, Cincinnati) requires the ability to handle diverse characteristics in various models. These characteristics include (a) varying fidelity in model accuracy with simulation times from a couple of seconds to many hours; (b) models being black-box, with the engineer having access to only the input and output; (c) nonlinearity in the model; and (d) time-dependent model input and output. This paper demonstrates methods that have been implemented to handle these features, with emphasis on applications in heat transfer and applied mechanics. A practical issue faced in the application of inverse modeling for parameter estimation is ill-posedness, which is characterized by instability and nonuniqueness in the solution. Generic methods to deal with ill-posedness include (a) model development, (b) optimal experimental design, and (c) regularization methods. The purpose of this paper is to communicate the development and implementation of an inverse method that provides a solution for both well-posed and ill-posed problems using regularization based on the prior values of the parameters. 
In the case of an ill-posed problem, the method provides two solution schemes—a most probable solution closest to the prior, based on the singular value decomposition (SVD), and a maximum a posteriori probability (MAP) solution. The inverse problem is solved as a finite dimensional nonlinear optimization problem using the SVD and/or MAP techniques tailored to the specifics of the application. The objective of the paper is to demonstrate the development and validation of these inverse modeling techniques in several industrial applications, e.g., heat transfer coefficient estimation for disk quenching in process modeling, material model parameter estimation, sparse clearance data modeling, and steady state and transient engine high-pressure compressor heat transfer estimation.
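As a toy illustration of the MAP scheme mentioned above (illustrative numbers, not the paper's engine applications), consider a single measurement of the sum of two parameters: the data alone cannot separate them, but a Gaussian prior makes the posterior maximum unique and closed-form.

```python
# Sketch of a maximum a posteriori (MAP) estimate for an underdetermined
# linear problem: one measurement y = p0 + p1 cannot separate the two
# parameters, so a Gaussian prior (mean `prior`, variance tau^2) makes
# the solution unique. All numbers are illustrative.
def map_estimate(y, sigma, prior, tau):
    # Minimize (y - p0 - p1)^2/sigma^2 + sum((p_i - prior_i)^2)/tau^2.
    # Setting the gradient to zero gives a symmetric 2x2 linear system.
    w, v = 1.0 / sigma ** 2, 1.0 / tau ** 2
    a, b = w + v, w
    r0 = w * y + v * prior[0]
    r1 = w * y + v * prior[1]
    det = a * a - b * b
    return [(a * r0 - b * r1) / det, (a * r1 - b * r0) / det]

p = map_estimate(y=3.0, sigma=0.1, prior=[1.0, 1.9], tau=1.0)
print(p)
```

The estimate nearly reproduces the measurement (p0 + p1 ≈ y) while resolving the unidentifiable direction from the prior, the "solution closest to the prior" behavior the abstract describes.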
Article
Full-text available
The time-dependent heating and cooling velocities are investigated in this paper. The temperature profile is found by using a keyhole approximation for the melted zone and solving the heat transfer equation. A polynomial expansion has been deployed to determine the cooling velocity during the welding cut-off stage. The maximum cooling velocity has been estimated to be Vmax ≈ 83 °C s⁻¹.
Article
Conventional heat transfer analysis is usually based on deterministic mathematical models along with precisely specified boundary conditions and thermo-physical properties. An analysis can be rendered superfluous owing to parametric uncertainty, inaccuracy in measurements or small scale variations. Therefore, discrepancy in result needs to be appropriately dealt with when taking stand with experimental data. Consequently, proposing sensible and reasonably accurate arguments for observed discrepancy during model validation is critical. Such contentions also determine the scope for future research explorations. This review examines recent literature for experimental/numerical data processing, analysis, comparison and other vital issues primarily related to thermal measurement errors and uncertainties.
Chapter
This article discusses types of electronic package, where package selection depends on wiring demand, power, reliability, and electrical performance. The flip chip package is emerging in high-performance products, as its area array allows larger I/O counts and a better inductive path. Meanwhile, the biggest cooling challenge lies with cost-performance products, as increasing functional demand elevates the power per unit area. Finally, package reliability depends on the type of package used: wire bond, ceramic, or plastic.
Article
Accurate modeling of thermal systems depends upon the determination of the material properties and the surface heat transfer coefficients. These parameters are frequently estimated from temperatures measured within the system or on the surface or from measured surface heat fluxes. Because of sensor errors or lack of sensitivity, the measurements may lead to erroneous estimates of the parameters. These errors can be ameliorated if the sensors are placed at points of maximum sensitivity. This paper describes two methods to optimize sensor locations: one to account for signal error, the other to consider interacting parameters. The methods are based upon variants of the normalized Fisher information matrix and are shown to be equivalent in some cases, but to predict differing sensor locations under other conditions, usually transient.
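A minimal sketch of the Fisher-information approach to measurement placement, assuming a hypothetical two-parameter exponential decay in place of the paper's thermal models: the pair of measurement times that maximizes det(JᵀJ), the D-optimality criterion, is selected from a grid, so that the two interacting parameters are jointly as identifiable as possible.

```python
import math
from itertools import combinations

# Illustrative two-parameter decay model y(t) = p0*exp(-p1*t); the pair
# of measurement times maximizing det(J^T J) is the D-optimal design.
def jac_row(t, p0, p1):
    e = math.exp(-p1 * t)
    return [e, -p0 * t * e]          # [dy/dp0, dy/dp1]

def fim_det(times, p0, p1):
    rows = [jac_row(t, p0, p1) for t in times]
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    return a * d - b * b             # det(J^T J); constant sigma^2 cancels

p0, p1 = 1.0, 1.0
grid = [0.25 * i for i in range(1, 17)]    # candidate times in (0, 4]
best = max(combinations(grid, 2), key=lambda ts: fim_det(ts, p0, p1))
print(best)
```

For this model the optimum pairs one early measurement (pinning the amplitude p0) with one about a decay time later (pinning the rate p1), the kind of transient-dependent placement the abstract highlights.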
Article
The process of parameter estimation and the estimated parameters are affected not only by measurement noise, which is present during any experiment, but also by uncertainties in the parameters of the model used to describe the system. This paper describes a method to optimize the design of an experiment to deduce the maximum information during the inverse problem of parameter estimation in the presence of uncertainties in the model parameters. It is shown that accounting for these uncertainties affects the optimal locations of the sensors.
Article
This paper addresses the question in the design of experiments of where to place sensors for optimal sensitivity and the post-experiment determination of which sensors yield relevant data. The authors in their previous works have described the spatial dependence of the response sensitivities and the importance of conducting a sensitivity analysis for a better understanding of the system response. This paper describes the formulation of the method for a transient analysis and its application to thermal problems. The results have been verified using the Monte Carlo sampling technique to simulate the variations in the parameters. The results show that there are not only optimal locations to maximize the sensitivities of the responses, but also optimal times of measurement. Sample test cases are used to demonstrate the effects of time of measurement and placement of sensors on the accuracy of the measured temperatures.
Article
The symposium included sessions on the mechanical and deformation properties of polymer interfaces (with emphasis on the effects of plastic behavior in polymeric thin films, and general attention to stress effects on reliability), protective coatings for IC's, polymers and polymer-processing for high density packaging (e.g., photoimageable polyimides, use of liquid crystals to control thermal expansion, effect of curing on stress in polyimides in multilayer structures), ceramics and glass-ceramics (emphasis on aluminum nitride bulk, and interface properties), metallization techniques (low temperature CVD of copper films, laser planarization, laser assisted deposition of catalysts for electroless and electrolytic plating of copper), solders and soldering (including fatigue life predictions for solder joints), and measurement of material properties of thin films.
Article
The paper introduces the new ASME measurement uncertainty methodology which is the basis for two new ASME/ANSI standards and the ASME short course of the same name. Some background and history that led to the selection of this methodology are discussed as well as its application in current SAE, ISA, JANNAF, NRC, USAF, NATO, and ISO Standards documents and short courses.
Article
The methodology and applications of inverse heat transfer problems are discussed. In particular, attention is given to the statement and applications of inverse heat transfer problems in the study of heat transfer processes and design of technical systems, analysis of the well-posedness of inverse problems, analytical forms of boundary value inverse heat transfer problems, and a direct method for calculating nonstationary thermal loads. The discussion also covers the solution of boundary value inverse heat transfer problems by direct numerical methods, extreme-value statements of inverse heat transfer problems, and iterative regularization of inverse problems.
Article
It is no longer acceptable, in most circles, to present experimental results without describing the uncertainties involved. Besides its obvious role in publishing, uncertainty analysis provides the experimenter a rational way of evaluating the significance of the scatter on repeated trials. This can be a powerful tool in locating the source of trouble in a misbehaving experiment. To the user of the data, a statement (by the experimenter) of the range within which the results of the present experiment might have fallen by chance alone is of great help in deciding whether the present data agree with past results or differ from them. These benefits can be realized only if both the experimenter and the reader understand what an uncertainty analysis is, what it can do (and cannot do), and how to interpret its results.This paper begins with a general description of the sources of errors in engineering measurements and the relationship between error and uncertainty. Then the path of an uncertainty analysis is traced from its first step, identifying the intended true value of a measurement, through the quantitative estimation of the individual errors, to the end objective—the interpretation and reporting of the results. The basic mathematics of both single-sample and multiple-sample analysis are presented, as well as a technique for numerically executing uncertainty analyses when computerized data interpretation is involved.The material presented in this paper covers the method of describing the uncertainties in an engineering experiment and the necessary background material.
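The propagation step at the heart of such an analysis is easy to sketch numerically (a hedged example with illustrative inputs): each input is perturbed in turn, a finite-difference sensitivity is formed, and the weighted contributions are combined root-sum-square, the "jitter" style of computerized uncertainty analysis.

```python
# Root-sum-square propagation of input uncertainties into a derived
# result, executed numerically by perturbing each input in turn.
def propagate(f, x, u, h=1e-6):
    base = f(*x)
    total = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        dfdx = (f(*xp) - base) / h       # finite-difference sensitivity
        total += (dfdx * u[i]) ** 2      # root-sum-square contribution
    return total ** 0.5

# Illustrative example: power P = V*I from voltage and current readings,
# with hypothetical uncertainties of 0.1 V and 0.05 A.
P = lambda V, I: V * I
uP = propagate(P, x=[10.0, 2.0], u=[0.1, 0.05])
print(uP)
```

For this bilinear example the numerical result matches the analytic root-sum-square, sqrt((I·uV)² + (V·uI)²); for a real data-reduction program the same perturbation loop works without analytic derivatives.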
Article
The procedure of parameter estimation and the parameter estimates are not only affected by the measurement noise, which is present during any experiment, but are also influenced by the known model parameters. The most commonly used functional, which is based on the maximum likelihood principle, only accounts for the experimental noise but not the effect of the uncertainties in the known parameters. A new functional for parameter estimation has been proposed, which will also take into account the uncertainties in the known model parameters. It is shown that, in the presence of uncertainties in the known model parameters, the proposed functional is superior to previous functionals.
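One plausible reading of such an extended functional can be sketched as a weighted least squares in which each residual is divided by an equivalent variance combining the measurement noise with the propagated uncertainty of a known model parameter; the exponential model and all numbers below are illustrative, not the paper's formulation.

```python
import math

# Hedged sketch of an "equivalent noise" weighting: when a known model
# parameter beta (e.g. a convective coefficient) is uncertain with
# variance sig_beta^2, each residual is weighted by an equivalent
# variance sig_eq^2 = sig_m^2 + (dy/dbeta)^2 * sig_beta^2 instead of the
# measurement variance alone. The response model is a hypothetical stand-in.
def model(theta, beta, t):
    return theta * math.exp(-beta * t)

def dmodel_dbeta(theta, beta, t, h=1e-6):
    return (model(theta, beta + h, t) - model(theta, beta, t)) / h

def objective(theta, beta, times, data, sig_m, sig_beta):
    total = 0.0
    for t, y in zip(times, data):
        s = dmodel_dbeta(theta, beta, t)
        sig_eq2 = sig_m ** 2 + (s * sig_beta) ** 2   # equivalent variance
        r = y - model(theta, beta, t)
        total += r * r / sig_eq2
    return total

# Scan theta on a grid: measurements whose response is sensitive to the
# uncertain beta are automatically down-weighted in the functional.
times = [0.5, 1.0, 2.0, 4.0]
beta, sig_m, sig_beta = 1.0, 0.01, 0.1
data = [model(2.0, beta, t) for t in times]          # synthetic data
grid = [1.8 + 0.01 * i for i in range(41)]
best = min(grid, key=lambda th: objective(th, beta, times, data, sig_m, sig_beta))
print(best)
```

Down-weighting the beta-sensitive residuals is what makes the estimate of theta insensitive to errors in the assumed value of beta, the property the abstract claims for the proposed functional.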
Article
It is desired to estimate $s$ parameters $\theta_1, \theta_2, \cdots, \theta_s.$ There is available a set of experiments which may be performed. The probability distribution of the data obtained from any of these experiments may depend on $\theta_1, \theta_2, \cdots, \theta_k, k \geqq s.$ One is permitted to select a design consisting of $n$ of these experiments to be performed independently. The repetition of experiments is permitted in the design. We shall show that, under mild conditions, locally optimal designs for large $n$ may be approximated by selecting a certain set of $r \leqq k + (k - 1) + \cdots + (k - s + 1)$ of the experiments available and by repeating each of these $r$ experiments in certain specified proportions. Examples are given illustrating how this result simplifies considerably the problem of obtaining optimal designs. The criterion of optimality that is employed is one that involves the use of Fisher's information matrix. For the case where it is desired to estimate one of the $k$ parameters, this criterion corresponds to minimizing the variance of the asymptotic distribution of the maximum likelihood estimate of that parameter. The result of this paper constitutes a generalization of a result of Elfving [1]. As in Elfving's paper, the results extend to the case where the cost depends on the experiment and the amount of money to be allocated on experimentation is determined instead of the sample size.
Article
Silica-filled epoxy composites represent an important class of electronic packaging materials. In this paper, a series of semi-empirical equations are proposed for estimating the density, temperature-dependent modulus, expansion coefficient and Poisson's ratio of silica-filled epoxy composites as a function of the silica content and glass transition temperature. The density and expansion coefficients are calculated using the rule of mixtures, while the composite moduli in the glassy and rubbery plateaus are derived using the Halpin-Tsai equation, the theory of rubber visco-elasticity, and elementary considerations of the polymer cross-link density. A four-parameter sigmoidal function is shown to account well for the composite stiffness in the transition region between the glassy and rubbery states, while a three-parameter single rise to maximum equation expresses the change in the composite's Poisson ratio with silica content. The models are corroborated against a large data library of actual packaging materials. Their usefulness in calculating e.g., the warpage in a plastic ball-grid array package is demonstrated in a worked example.
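The rule-of-mixtures step mentioned above is simple to sketch; typical handbook densities for fused silica and cured epoxy are assumed below, not the paper's fitted constants.

```python
# Rule of mixtures for a silica-filled epoxy: composite density from
# volume fractions, plus a mass-to-volume fraction conversion.
# Property values are typical illustrative numbers.
def vol_fraction(w_filler, rho_filler, rho_matrix):
    # Convert filler mass fraction to volume fraction
    return (w_filler / rho_filler) / (
        w_filler / rho_filler + (1 - w_filler) / rho_matrix)

def rule_of_mixtures(v_filler, p_filler, p_matrix):
    # Linear volume-weighted average of a property
    return v_filler * p_filler + (1 - v_filler) * p_matrix

rho_silica, rho_epoxy = 2.20, 1.16      # g/cm^3, typical values
w = 0.70                                 # 70 wt% silica loading
v = vol_fraction(w, rho_silica, rho_epoxy)
rho_c = rule_of_mixtures(v, rho_silica, rho_epoxy)
print(round(v, 3), round(rho_c, 3))
```

The same volume-weighted average applies to the expansion coefficient; the moduli need the Halpin-Tsai form instead, since stiffness does not mix linearly.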
Describing the uncertainties in experimental results
  • R J Moffat
R.J. Moffat, Describing the uncertainties in experimental results, Experimental Thermal and Fluid Science 1 (1988) 3–17.
Fig. 8. Estimated values of R using J and L for two different sampling intervals.
ASME measurement uncertainty
  • R B Abernethy
  • R P Benedict
  • R B Dowdell
R.B. Abernethy, R.P. Benedict, R.B. Dowdell, ASME measurement uncertainty, ASME J. Fluids Engineering 26 (1985) 161–164.