Chapter

Solution of Ill-Posed Problems

... The first approach requires considering all possible solutions to the optimization problem and selecting or creating a unique representative according to some plausible selection criterion or averaging scheme. The second approach, in the mold of [49,50,51,52], requires defining the optimization problem in such a way that the solution is always unique. ...
... A close condition is the minimization of the sum of the relative values of the transformed and original coefficients. In the spirit of generalized Tikhonov regularization [51,52], a generalization of the lasso selection criterion can be introduced, so that the unique solution deviates minimally from the solution with Borel-transformed coefficients. Yet another generalization of the lasso calls for the preferred solution to deviate minimally from the solution with the original coefficients. ...
... (52) numerically. From the formulae (51), (52) one can deduce the power law (42), with the critical index β = 0.29505. Application of the optimization method explained in Section 2.2 gives, in the 7th order, a slightly smaller result, β = 0.15263. The Borel summation in the 6th order gives a close result, β = 0.20411. ...
Preprint
Full-text available
Minimal difference and minimal derivative optimal conditions are applied to calculations of critical indices and amplitudes for the effective permeability of thin wavy channels. Effective permeability and wetted perimeter of the two-dimensional percolating random media are considered as well. Closed-form expressions for all porosities, critical points and indices are found for the first time.
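The selection-criterion idea quoted above lends itself to a compact illustration. The sketch below (a toy numpy example, not the authors' construction) picks, among the infinitely many exact solutions of an underdetermined linear problem, the unique one that deviates minimally from a preferred reference solution, in the spirit of generalized Tikhonov regularization.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 6))   # underdetermined: infinitely many solutions
    b = rng.standard_normal(3)
    x_ref = np.ones(6)                # hypothetical preferred solution

    # Among all x with A x = b, select the one minimizing ||x - x_ref||:
    # x = x_ref + pinv(A) (b - A x_ref).
    x = x_ref + np.linalg.pinv(A) @ (b - A @ x_ref)

    print(np.allclose(A @ x, b))          # still an exact solution
    print(np.linalg.norm(x - x_ref))      # minimal deviation from x_ref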
... In the presented work, to solve the ill-posed problem of inverting the Sumudu transform, we employ the trial-and-error method (Tikhonov and Arsenin, 1979). The essence of the method is to select the optimal element from a predetermined compact set M of potential initial functions. ...
... The inversion problem is solved by choosing a function f(t) from M at which the distance between its Sumudu image and the initially specified image g(u) is minimal. If g(u) does not initially lie in the set S[M] (e.g., due to noise distortion), then the resulting function f(t) solves the problem approximately and is called a quasi-solution (Tikhonov and Arsenin, 1979). Compactness of the set M is achieved because its elements, the functions f(t), correspond to solutions of the problem of modeling ground-based electromagnetic sounding, which is parameterized by a closed set in a finite-dimensional space (the set of parameters of the geophysical model under consideration). ...
... As inferred from the data presented, the developed neural network algorithm allows performing the inverse Sumudu transform with high accuracy, sufficient for addressing practical problems. We note that when solving integral equations of the first kind by conventional methods, due to their poor conditioning, the characteristic error in the resulting solution turns out to be significantly higher (Tikhonov and Arsenin, 1979). ...
Article
Full-text available
The paper discusses the results of the development of a deep learning-based algorithm of the inverse Sumudu transform applied to the problem of on-ground non-stationary electromagnetic sounding. The Sumudu transform has potential for solving forward geoelectric problems in three-dimensional earth models because, unlike using the Laplace or Fourier transform, the Sumudu image of a real function is also a real function. Thus, there is no need to use complex numbers in subsequent calculations, which reduces computational costs and memory requirements in case of successful determination of the Sumudu image of the function. The disadvantages of the approach include the absence of an explicit method for calculating the inverse transform. The inversion can be done by solving the corresponding Fredholm integral equation of the first kind, but this is a poorly conditioned task leading to high requirements for the accuracy of the Sumudu image. The use of modern machine learning techniques can provide a method that is more robust to noise in the input data. This paper describes the process of creating a training dataset and developing a neural network algorithm; we evaluate the accuracy and performance of the obtained solution. The proposed method can contribute to the development of new approaches to physical processes modeling as well as to analysis, processing and interpretation of measured geophysical data.
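The quasi-solution recipe described in the quoted passages can be sketched in a few lines: restrict the candidate originals f(t) to a compact, finitely parameterized family M and pick the member whose Sumudu image best fits the noisy image g(u). The one-parameter family f_a(t) = exp(-a t) below is a toy assumption standing in for the paper's geophysical model parameterization.

    import numpy as np

    t = np.linspace(0.0, 40.0, 4001)
    w = np.full(t.size, t[1] - t[0]); w[0] /= 2; w[-1] /= 2   # trapezoid weights

    def sumudu(f, u):
        # S[f](u) = integral_0^inf f(u*t) exp(-t) dt, by the trapezoid rule
        return (f(u[:, None] * t) * np.exp(-t)) @ w

    u = np.linspace(0.1, 5.0, 60)
    a_true = 0.7
    g = 1.0 / (1.0 + a_true * u)          # exact Sumudu image of exp(-a t)
    g += 0.01 * np.random.default_rng(1).standard_normal(u.size)

    # Compact set M: a bounded grid of decay rates a.
    candidates = np.linspace(0.05, 5.0, 200)
    misfits = [np.linalg.norm(sumudu(lambda s: np.exp(-a * s), u) - g)
               for a in candidates]
    print("quasi-solution parameter:", candidates[int(np.argmin(misfits))])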
... In general, a regularization method can be employed to compute approximate solutions that are less sensitive to noise than the naive solution. Probably one of the most popular regularization methods is Tikhonov regularization [1], which replaces (1) by the minimization problem ...
... Similarly to the scalar case, the coefficients can be obtained by applying Cramer's rule and hence have the following closed forms ...
Article
In this paper, we present a hybrid model based on total generalized variation (TGV) and shearlets with non-quadratic data fidelity terms for blurred images corrupted by impulsive and Poisson noise. Numerical experiments demonstrate that the proposed model can reduce the staircase effect while preserving edges, and that it outperforms classical TV-based models in terms of peak signal-to-noise ratio (PSNR).
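The minimization problem alluded to in the first quoted passage is min_x ||Ax - b||^2 + lam^2 ||x||^2. A minimal numpy sketch on a toy ill-conditioned system (not the paper's TGV-shearlet model, which uses more elaborate penalties) shows how the stacked least-squares form stabilizes the solution:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 40
    t = np.linspace(0, 1, n)
    A = np.exp(-(t[:, None] - t[None, :])**2 / 0.01)  # smoothing kernel, ill-conditioned
    x_true = np.sin(2 * np.pi * t)
    b = A @ x_true + 1e-3 * rng.standard_normal(n)

    def tikhonov(A, b, lam):
        # argmin ||A x - b||^2 + lam^2 ||x||^2, via the stacked system
        aug = np.vstack([A, lam * np.eye(A.shape[1])])
        rhs = np.concatenate([b, np.zeros(A.shape[1])])
        return np.linalg.lstsq(aug, rhs, rcond=None)[0]

    x_naive = np.linalg.solve(A, b)       # unstable: noise is amplified
    x_reg = tikhonov(A, b, lam=1e-2)
    print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))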
... For ill-conditioned equations, it is difficult to obtain accurate and reliable parameter estimates, which severely affects the accuracy and quality of data processing. For the solution of ill-conditioned problems, a variety of biased estimation methods and improved methods have been proposed to improve the quality of parameter estimates, such as regularization, truncated singular value decomposition and ridge estimation [9][10][11][12][13][14][15][16]. The key to solving ill-conditioned problems by the regularization method is the selection of the stabilization functional and regularization parameter. ...
... Especially for the Brown model and the QP model, the condition number is reduced to 26.823 and 32.576, which can be taken to mean that the matrix is well-conditioned. However, it can be seen from Figure 4 that the order of magnitude of the damping factor for the Brown model, QP model and Fourier model should be 10⁻⁹, 10⁻¹⁴ and 10⁻¹¹, respectively. According to the selection principle of the damping factor and Table 5, the actual value determined by the LMCDM+h algorithm is generally too large, while the values of the damping factor determined by the LMCDM+HK algorithm are relatively consistent with the trend of the ridge trace curves, which is closer to the result of the ridge trace analysis. ...
... Brown 5.698 × 10⁻⁹ 3.684 × 10⁻¹⁷ ...
Article
Full-text available
In this study, the ill-conditioning of the iterative method for nonlinear models is discussed. Due to the effectiveness of ridge estimation for ill-conditioned problems and the lack of a combination of the H-K formula with the iterative method, the improvement of the LM algorithm is studied in this paper. Considering the LM algorithm for ill-conditioned nonlinear least squares, an improved LM algorithm based on the H-K formula is proposed for image distortion correction using self-calibration. Three finite difference methods are used to approximate the Jacobian matrix, and the H-K formula is used to calculate the damping factor in each iteration. The Brown model, quadratic polynomial model and Fourier model are applied to the self-calibration, and the improved LM algorithm is used to solve the model parameters. In the simulation experiment of space resection of a single image, we evaluate the performance of the LM algorithm based on the gain ratio (LMh) and the improved LM algorithm based on the H-K formula (LMHK), and the accuracy of different models and algorithms is compared. A ridge trace analysis is carried out on the damping factor to illustrate the effects of the improved algorithm in handling ill-conditioning. In the second experiment, the improved algorithm is applied to measure the diameter of a coin using a single camera. The experimental results show that the improved LM algorithm can reach the same or higher accuracy as the LMh algorithm, and it can weaken the ill-conditioning to a certain extent and enhance the stability of the solution. Meanwhile, the applicability of the improved LM algorithm in self-calibration is verified.
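The core of the improved algorithm described above is a Levenberg-Marquardt step whose damping factor comes from a ridge-type (Hoerl-Kennard) formula rather than a gain-ratio update. The following toy sketch is illustrative only (the paper's LMHK details differ); it uses the common H-K-style choice mu = p * sigma^2 / ||theta||^2 with a forward-difference Jacobian:

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 50)
    theta_true = np.array([2.0, -1.5])
    y = theta_true[0] * np.exp(theta_true[1] * x) + 0.01 * rng.standard_normal(x.size)

    def residual(th):
        return th[0] * np.exp(th[1] * x) - y

    def jacobian(th, h=1e-6):
        # forward differences, one of the approximations used in the paper
        r0 = residual(th)
        J = np.empty((x.size, th.size))
        for j in range(th.size):
            e = np.zeros(th.size); e[j] = h
            J[:, j] = (residual(th + e) - r0) / h
        return J

    th = np.array([1.0, -0.5])                    # initial guess
    for _ in range(30):
        r, J = residual(th), jacobian(th)
        p = th.size
        sigma2 = (r @ r) / (x.size - p)           # residual variance estimate
        mu = p * sigma2 / (th @ th)               # H-K-style damping factor
        th += np.linalg.solve(J.T @ J + mu * np.eye(p), -J.T @ r)
    print(th)   # close to theta_true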
... System (27) is ill-conditioned, since it is a discrete analogue of the first-kind integral equation (25), which is itself ill-conditioned [77]. In this case, the elements of the right-hand-side vector g may contain noise of varying intensity. ...
... In this case, the elements of the right-hand-side vector g may contain noise of varying intensity. To solve the system (28), we use Tikhonov's regularization method [77] and proceed to the following minimization problem: ...
... In order to select the value of the regularization parameter, we use the quasi-optimal criterion [77,78], for which purpose we introduce the function: ...
Article
Full-text available
Due to the ongoing global warming on the Earth, permafrost degradation has been extensively taking place, which poses a substantial threat to civil and industrial facilities and infrastructure elements, as well as to the utilization of natural resources in the Arctic and high-latitude regions. In order to prevent the negative consequences of permafrost thawing under the foundations of constructions, various geophysical techniques for monitoring permafrost have been proposed and applied so far: temperature, electrical, seismic and many others. We propose a cross-borehole exploration system for a high localization of target objects in the cryolithozone. A novel mathematical apparatus for three-dimensional modeling of transient electromagnetic signals by the vector finite element method has been developed. The original combination of the latter, the Sumudu integral transform and artificial neural networks makes it possible to examine spatially heterogeneous objects of the cryolithozone with a high contrast of geoelectric parameters, significantly reducing computational costs. We consider numerical simulation results of the transient electromagnetic monitoring of industrial facilities located on permafrost. The formation of a talik has been shown to significantly manifest itself in the measured electromagnetic responses, which enables timely prevention of industrial disasters and environmental catastrophes.
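The quasi-optimal parameter choice cited in the excerpts ([77,78]) can be illustrated on a generic discretized first-kind equation: solve the Tikhonov-regularized system over a geometric grid of parameters and keep the value minimizing ||alpha * dx_alpha/dalpha||, approximated here by differences of successive solutions. The kernel and data below are toy stand-ins, not the paper's finite-element system:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 60
    s = np.linspace(0, 1, n)
    A = 0.02 / (0.02**2 + (s[:, None] - s[None, :])**2) / n  # first-kind kernel
    x_true = np.exp(-(s - 0.4)**2 / 0.01)
    g = A @ x_true + 1e-3 * rng.standard_normal(n)

    def solve(alpha):
        # Tikhonov-regularized normal equations
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

    alphas = np.geomspace(1e-10, 1e-1, 80)
    xs = [solve(a) for a in alphas]
    # quasi-optimal criterion: on a geometric grid, alpha*dx/dalpha is
    # proportional to the difference of neighbouring solutions
    scores = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(alphas) - 1)]
    print("quasi-optimal alpha:", alphas[int(np.argmin(scores))])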
... System (27) is ill-conditioned, since it is a discrete analogue of the first-kind integral equation (25), which is itself ill-conditioned [73]. In this case, the elements of the right-hand-side vector g may contain noise of varying intensity. ...
... In this case, the elements of the right-hand-side vector g may contain noise of varying intensity. To solve the system (28), we use Tikhonov's regularization method [73] and proceed to the following minimization problem: ...
... In order to select the value of the regularization parameter, we use the quasi-optimal criterion [73,74], for which purpose we introduce the function: ...
Preprint
Full-text available
Due to the ongoing global warming on the Earth, permafrost degradation has been extensively taking place, which poses a substantial threat to civil and industrial facilities and infrastructure elements, as well as to the utilization of natural resources in the Arctic and high-latitude regions. In order to prevent the negative consequences of permafrost thawing under the foundations of constructions, various geophysical techniques for monitoring permafrost have been proposed and applied so far: temperature, electrical, seismic and many others. At the same time, non-stationary electromagnetic methods seem to have significant potential and a number of important practical advantages. We propose a cross-borehole exploration system for the highest localization of target objects in the cryolithozone. A novel mathematical apparatus for three-dimensional modeling of transient electromagnetic (TEM) signals by the vector finite element method has been developed. The original combination of the latter, the Sumudu integral transform and artificial neural networks makes it possible to examine spatially heterogeneous objects of the cryolithozone with a high contrast of geoelectric parameters, significantly reducing computational costs. We have studied the sensitivity of the TEM signals to the boundary between thawed and frozen rocks depending on the inter-borehole distance and spatial orientations of the transmitters and receivers. Numerical simulation results of the TEM monitoring of industrial facilities located on permafrost are considered. The formation of a talik has been shown to significantly manifest itself in the measured electromagnetic responses, which enables timely prevention of industrial disasters and environmental catastrophes.
... But the process of solving an inverse problem is very difficult and usually does not yield the exact answer. Therefore, approximate methods are used for such problems: iterative methods, regularization techniques (Tikhonov regularization) [5,6], random methods and system identification, methods that search for an approximate answer in a subset of solutions, integrated techniques, or direct numerical methods [21,34,36,38]. Methods have also been developed for particular problems of this type, such as the inverse heat conduction problem (IHCP); among the most versatile are mollification [4], iterative regularization [35], the base function method (BFM) [6], Tikhonov regularization [5,6], the Haar basis method (HBM) [7,8], and the function specification method (FSM) [34]. ...
... As mentioned, these methods are approximate and usually do not yield the exact answer [12]. ...
Article
Full-text available
In this paper, a numerical approach is combined with the teaching-learning-based optimization (TLBO) algorithm to solve inverse partial differential equation problems. The most important point of the presented approach to these problems, for example the inverse heat conduction problem (IHCP), is that an answer can be obtained without any conjecture about the unknown. Numerical experiments with this method, even without guessing the type of the unknown function of the problem, show that these problems can be solved with great accuracy. The accurate results also show that, after solving the problems, an excellent estimate of some unknowns can be obtained.
... Nevertheless, the majority of the existing numerical methods may provide stable solutions for boundary-initial problems through the use of regularization techniques [36][37][38]. The most popular include the Tikhonov regularization method [39], the singular value decomposition method [40], statistical Bayesian inference, gradient-based regularization algorithms, and maximum entropy. The regularization methods may need an additional parameter, the determination of whose value poses an additional challenge. ...
... The problem with the stability of the inverse problem solution necessitates the use of regularization [13,38,44,45]. This paper does not use a classic regularization method [36,37,39,40] but a quasi-regularization consisting in the consideration of the energy equation in the ceramic area Ω c . The energy balance equation in the ceramic area Ω c was obtained by integrating Eq. (2): ...
Article
Ceramic protective coats, for instance on turbine blades, create a double-layer area with varying thermophysical properties, and they require metal temperature control. In this paper, this is implemented by formulating a Cauchy problem for the equation of thermal conductivity in a metal cylindrical area with a ceramic layer. Because the problem is ill-posed, a regularization method was applied, consisting in the formulation of a thermal balance for the ceramic layer. The spectral radius of the equation matrix was taken as the stability measure of the Cauchy problem. Numerical calculations were performed for a varied thickness of the ceramic layer, with consideration of the non-linear thermophysical properties of steel and the ceramic layer (zirconium dioxide). A polynomial was determined which approximates the temperature distribution in time for the protective layer. The stability of solutions was compared for undisturbed and disturbed temperature values and thermophysical parameters with various ceramic layer thicknesses. The obtained calculation results confirmed the effectiveness of the proposed regularization method in obtaining stable solutions under random data disturbance.
... Since the entries of f are obtained through observation, they are typically contaminated by a measurement error or noise. Let f̃ denote the unavailable error-free exact signal associated with f and let n be the noise in f, i.e. f = f̃ + n. Tikhonov regularization [2,3], also known as ridge regression, is a highly effective method for solving the above ill-posed systems (1). Andrey Tikhonov initially introduced it in 1943 [4] to address ill-posed linear systems, and it was later extended to the statistical setting by David G. Luenberger in 1969 [5]. ...
... The regularization parameter λ controls the weight given to the regularization term relative to the minimization of the residual term. The above equation (2) can be written as ...
Article
Full-text available
We define the Tikhonov Orthogonal Greedy Algorithm (T-OGA), a variant of the orthogonal greedy algorithm, to study the recovery of sparse signals from noisy measurements. We establish a sufficient condition for T-OGA to recover sparse signals via the restricted isometry property (RIP) inherited from frames. We introduce various concepts such as the match matrix, match vector, and residue vector to execute T-OGA. Various technical lemmas have been established to prove our main theorem. In the execution of T-OGA, we employ Tikhonov regularization with regularization parameter λ to solve the minimization problem for the N-sparse solution. In our main result, we prove that if a frame satisfies the RIP of order N + 1 with isometry constant τ and (τ + λ) < √1 , then T-OGA can recover every N-sparse signal in at most ...
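An orthogonal greedy loop in which every inner least-squares solve is Tikhonov-regularized can be sketched as follows; the names (support, residue vector) echo the abstract, but the stopping rule and details here are illustrative, not the paper's exact T-OGA:

    import numpy as np

    rng = np.random.default_rng(5)
    m, d, N = 40, 100, 4
    Phi = rng.standard_normal((m, d))
    Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary (frame)
    x = np.zeros(d)
    x[rng.choice(d, N, replace=False)] = rng.standard_normal(N)
    f = Phi @ x + 0.01 * rng.standard_normal(m)   # noisy measurements

    lam, support, r = 0.1, [], f.copy()
    for _ in range(N):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))   # best-matching atom
        P = Phi[:, support]
        # Tikhonov-regularized coefficients on the current support
        c = np.linalg.solve(P.T @ P + lam**2 * np.eye(len(support)), P.T @ f)
        r = f - P @ c                                        # residue vector
    print(sorted(support), sorted(np.flatnonzero(x).tolist()))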
... Moreover, the numerical solution of such problems can be unstable. Mathematicians who have studied such problems have shown (Tikhonov AN, et al. [5]) that the problem we are considering here has a unique solution if the epidemic dynamics in age groups are represented by increasing functions of time. Of course, the results obtained can be presented in any form. ...
... One should add parameters that characterize a scheme of the disease (for example, the length of the latent period, the time of visiting a doctor, and possibly others) and the residents' response to infection (for example, what fraction of newly infected residents visit a doctor). The listed parameters are sufficient for implementing both ABMs. To restore the real values of the infection probabilities in age groups (point 3), a regularizing procedure was proposed in the book (Tikhonov AN, et al. [5]). According to this procedure, recovery is reduced to the problem of minimizing a functional, namely the integral quadratic difference between the functions (and their derivatives) that determine the dynamics of the epidemic and are obtained in points 2 and 5 (Perminov VD, [6]). ...
... Regarding the image z, the probability density p Z (z) of the random variable Z is called the prior and incorporates some a priori information on the image to be recovered, such as smoothness, sparsity, or the presence of edges. For our workflow, we assume that Z has a Tikhonov prior [18], which has the form ...
Article
Full-text available
The last years have witnessed significant developments in image acquisition systems and in algorithms for extracting information from them. Nevertheless, in many scenarios, several factors can hinder the recovery of useful data from images. This is especially true and important in forensic applications, where images are often accidentally captured by an imaging system not engineered for that specific acquisition (for example, the surveillance system designed to monitor the entrance of a bank may accidentally capture the license plate of a vehicle passing outside the bank). Therefore, the acquired images often need to be processed to facilitate extracting information from them. When facing a combination of several impairment factors, such as blur and perspective distortion, several image restoration algorithms must be applied. It is then necessary to choose a restoration order, that is, the order in which single restoration algorithms are chained together to obtain the enhanced image. This study aims to understand whether such an order may impact the final result. Of course, there exists a wide variety of image impairments; in this study, we focus on the case of an image affected by a combination of optical/motion blur, perspective distortion, and additive noise, which are all widespread artifacts in forensic image applications. To answer the question about the importance of choosing one restoration workflow over another, we first model each considered defect and its restoration operator and then analyze and compare the effects of the composition of such operators on the restored output. Such a comparison is made from both a mathematical and an experimental point of view, using both images with synthetically generated impairments and pictures with real degradations. The results show that the restoration order can significantly affect the results, especially when the defects are severe.
... The matrix may be ill-conditioned or singular and may yield a nonunique solution that can be judged by the condition number of the matrix (the ratio of the largest singular value to the smallest value). Tikhonov regularization is the most commonly used method for regularizing ill-posed problems [17,18]. The standard approach to solving an underdetermined system of linear equations, such as Equation (23), is known as linear least squares. ...
Article
Full-text available
In a diesel engine, piston slap commonly occurs concurrently with fuel combustion and serves as the main source of excitation. Although combustion pressure can be measured using sensors, determining the slap force is difficult without conducting tests. In this study, we propose a method to identify the slap force of the piston to solve this difficult problem. The traditional VMD algorithm is easily affected by noise interference, which distorts the value of the parameter combination [k, α] and thus degrades the extraction accuracy of the algorithm. First, we obtain the transfer function between the excitation and the vibration response through percussion tests. Secondly, a variational mode decomposition method based on whale algorithm optimization is used to separate the slap response from the surface acceleration of the block. Finally, we calculate the slap force using the deconvolution method. Deconvolution is a typical inverse problem of mathematics, often prone to ill-conditioning, and the singular value decomposition and regularization method is used to overcome this flaw and improve accuracy. The proposed method provides an important means to evaluate the angular distribution of the slap force, identify the shock positions on the piston liner, and determine the peak value of the waveform, which helps us analyze the vibration characteristics of the piston and optimize the structural design of the engine.
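The regularized deconvolution step mentioned at the end of the abstract is conveniently written through the SVD of the convolution matrix, with Tikhonov filter factors damping the small singular values. The impulse response and force pulse below are toy stand-ins for the measured transfer function and slap force:

    import numpy as np
    from scipy.linalg import toeplitz

    n = 200
    t = np.arange(n) * 1e-3
    h = np.exp(-t / 0.02) * np.sin(2 * np.pi * 80 * t)   # toy impulse response
    H = toeplitz(h, np.zeros(n))                         # causal convolution matrix
    force = np.exp(-(t - 0.05)**2 / (2 * 0.003**2))      # toy "slap force" pulse
    resp = H @ force + 1e-4 * np.random.default_rng(6).standard_normal(n)

    U, s, Vt = np.linalg.svd(H)
    lam = 1e-2 * s[0]
    filt = s / (s**2 + lam**2)         # Tikhonov filter factors: ~1/s for large s
    force_hat = Vt.T @ (filt * (U.T @ resp))
    print("relative error:",
          np.linalg.norm(force_hat - force) / np.linalg.norm(force))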
... This algorithm minimizes the Tikhonov regularization function and solves for a smooth model that matches the MT data. Tikhonov [1977] defined a regularized solution to the MT inverse problem as finding the model m that minimizes the objective function (see Equation 3): ...
Article
Full-text available
Magnetotelluric measurements from the Sabalan geothermal field in northwestern Iran have provided clear indications of geothermal reservoirs. This study aimed to identify shallow and deeper conductivity anomalies associated with geothermal systems in the Sabalan area. The magnetotelluric method is widely used for exploring subsurface resources, as it is suitable for detecting subsurface anomalies at significant depths. Based on the modeling results, a thick conductive layer was found near the surface of the Moil Valley area, where two reservoirs are embedded. The main reservoir, located on the west side of Sabalan volcano, extends from the south and southwest of Sabalan peak to the west and the Moil Valley. The dimensions of this reservoir are almost twice as large as the volume estimated in previous studies. Another smaller reservoir is located to the north of the peak.
... To obtain models that fit given noisy data, it is necessary to impose stabilization, or regularization, with an unknown regularization parameter balancing the minimization of the data misfit and the model property (Oldenburg 1974;Pedersen 1977;Chai and Hinze 1988;Reamer and Ferguson 1989;Guspi 1992). The most well-known form of stabilization, as considered here, is imposed by Tikhonov regularization (Barbosa et al. 1997;Blakely 1995;Tikhonov and Arsenin 1977;Chakravarthi and Sundararajan 2007). Choosing the regularization parameter is critical for achieving both resolution and stability (Tikhonov 1963;Xu et al. 2006;Eshagh 2009). ...
Article
The processing of potential field datasets requires many steps; one of them is the inverse modeling of potential field data. Using a measurement dataset, the purpose is to evaluate the physical and geometric properties of an unidentified model in the subsurface. Because of the ill-posedness of the inverse problem, the determination of an acceptable solution requires the imposition of a regularization term to stabilize the inversion process. We also need a regularization parameter that determines the comparative weights of the stabilization and data fit terms. This work offers an evaluation of automated strategies for the estimation of the regularization parameter for underdetermined linear inverse problems. We look at the methods of generalized cross validation, active constraint balancing (ACB), the discrepancy principle, and the unbiased predictive risk estimator. It has been shown that the ACB technique is superior by applying the algorithms to both synthetic data and field data, which produces density models that are representative of real structures and demonstrate the method’s supremacy. Data acquired over the chromite deposit in Camaguey, Cuba, are utilized to corroborate the procedures for the inversion of experimental data. The findings gathered from the three-dimensional inversion of gravity data from this region demonstrate that the ACB approach gives appropriate estimations of anomalous density structures and depth resolution inside the subsurface.
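Of the parameter-choice rules compared in the paper, generalized cross validation is easy to demonstrate: with the SVD of the forward operator, GCV(alpha) can be evaluated cheaply for many alpha. A toy sketch (not the paper's gravity kernel):

    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 80, 60
    A = rng.standard_normal((m, n)) @ np.diag(0.8 ** np.arange(n))  # decaying spectrum
    x_true = np.sin(np.linspace(0, 3, n))
    b = A @ x_true + 1e-2 * rng.standard_normal(m)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b

    def gcv(alpha):
        # GCV(alpha) = ||A x_alpha - b||^2 / trace(I - influence matrix)^2
        f = s**2 / (s**2 + alpha**2)                 # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * beta)**2) + (b @ b - beta @ beta)
        return resid2 / (m - np.sum(f))**2

    alphas = np.geomspace(1e-8, 1e1, 200)
    print("GCV choice of alpha:",
          alphas[int(np.argmin([gcv(a) for a in alphas]))])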
... To improve the time-domain response, a regularisation parameter [20] was introduced in Equation (18). Figure 4 shows an improvement in both the inverse filter and the output signal responses. There was a considerable decrease in the inverse filter ringing, and the output signal of the system is closer to a delta function. ...
... Mathematical constraints are usually employed by minimizing a function involving the parameters of the model (e.g., [32,46]), like the volume of the causative body (e.g., [45]), moment of inertia, e.g., [41], depth dependence of the solution [48], or distance, flatness, smoothness, and compactness (e.g., [3,10,37]). Incorporation of the mathematical constrains into the final solution is performed by means of the regularization approach, where the misfit functional is composed of two parts: the first one describing the fit of the model response to the observed data and the second describing additional mathematical properties of the regularized solution (e.g., [61,79]). ...
Article
Full-text available
Gravimetry is a discipline of geophysics that deals with the observation and interpretation of the Earth's gravity field. The acquired gravity data serve the study of the Earth's interior, be it deep or near-surface, by means of the inferred subsurface structural density distribution. The subsurface density structure is resolved by solving the gravimetric inverse problem. Diverse methods and approaches exist for solving this non-unique and ill-posed inverse problem. Here, we focused on those methods that do not pre-constrain the number or geometries of the density sources. We reviewed the historical development and the basic principles of the Growth inversion methodology, which belongs to the methods based on the growth of the model density structure throughout an iterative exploration process. The process was based on testing and filling the cells of a subsurface domain partition with density contrasts through an iterative mixed weighted adjustment procedure. The procedure iteratively minimized the data misfit residuals jointly with the total anomalous mass of the model, which facilitated obtaining compact, meaningful source bodies in the solution. The applicability of the Growth inversion approach in structural geophysical studies, in geodynamic studies, and in near-surface gravimetric studies was reviewed and illustrated. This work also presented the first application of the Growth inversion tool to near-surface microgravimetric data with the goal of seeking very shallow cavities in archaeological prospection and environmental geophysics.
... However, the problem (Eq. 1) is ill-posed. It can be successfully solved using the regularization method proposed by Tikhonov and Arsenin (13), which has previously been applied to restore the character of kinetic heterogeneity in catalytic systems based on titanium and neodymium (14). ...
Article
Full-text available
This article presents a novel simulation approach for solving the inverse problem of kinetic heterogeneity in polymerization processes, specifically focusing on the production of polyisoprene using a gadolinium chloride solvate-based catalytic system. The proposed method is based on the assumption that the distribution of active centers (ACs) can be described by model distributions. By utilizing primary physicochemical data, such as the polymerization rate and molecular weight distribution, the simulation approach automatically identifies the kinetic parameters, determining the Frenkel statistical parameter and solving the problem of kinetic heterogeneity. The experimental results revealed the presence of at least three distinct types of ACs, each contributing different proportions to the polymerization process. The simulation approach offers valuable insights into the complexities of catalytic systems and their role in polymerization, paving the way for optimizing reaction conditions and advancing industrial polymer synthesis processes. This study marks a significant step forward in understanding and controlling polymerization reactions, with potential implications for the development of innovative materials and industrial applications.
... To address overfitting, an effective approach is to utilize regularization techniques. One widely used method is known as Tikhonov regularization (Tikhonov and Arsenin 1978). This technique involves adding a regularization term to the objective function, which penalizes overly complex or oscillatory solutions. ...
Article
Full-text available
The discrete cosine transform is a commonly used technique in the field of signal processing that employs cosine basis functions for signal analysis. Traditionally, the regression coefficients of the cosine basis functions are solely based on frequency information. This paper extends the regression coefficients associated with the cosine basis functions to take into account both frequency and time information, not just frequency information alone. This modification results in an ill-posed linear system, which requires regularization to prevent overfitting. To address this, the paper uses shaping regularization, a technique used to stabilize ill-posed problems. By doing so, the absolute values of these extended coefficients, now exhibiting variations in both frequency and time domains, are defined as the time-frequency distribution of that input signal. The numerical experiments conducted to validate this approach demonstrate that the proposed method yields a commendable time-frequency resolution. Consequently, it proves valuable for interpreting seismic data, showcasing its potential for applications in this field.
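The role of the regularization term described above (penalizing oscillatory solutions of an ill-posed linear system) can be shown with a Tikhonov-type smoothness penalty built from second differences; this is a generic stand-in, not the shaping regularization used in the paper:

    import numpy as np

    rng = np.random.default_rng(10)
    m, n = 60, 100
    A = rng.standard_normal((m, n))                  # underdetermined system
    x_smooth = np.sin(np.linspace(0, np.pi, n))
    y = A @ x_smooth + 0.05 * rng.standard_normal(m)

    D2 = np.diff(np.eye(n), 2, axis=0)               # second-difference operator
    lam = 2.0
    aug = np.vstack([A, lam * D2])                   # min ||Ax-y||^2 + lam^2 ||D2 x||^2
    rhs = np.concatenate([y, np.zeros(n - 2)])
    x_hat = np.linalg.lstsq(aug, rhs, rcond=None)[0]
    print(np.linalg.norm(x_hat - x_smooth) / np.linalg.norm(x_smooth))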
... It is typical in the field of ill-posed and inverse problems to impose a priori bounds like the ones in (5.9): that the solution belongs to an a priori chosen bounded set; see, e.g., [34]. ...
Article
Full-text available
The second-order mean field games system (MFGS) in a bounded domain with the lateral Cauchy data are considered. This means that both Dirichlet and Neumann boundary data for the solution of the MFGS are given. Two Hölder stability estimates for two slightly different cases are derived. These estimates indicate how stable the solution of the MFGS is with respect to the possible noise in the lateral Cauchy data. Our stability estimates imply uniqueness. The key mathematical apparatus is the apparatus of two new Carleman estimates.
... This problem is ill-posed because it has an infinite number of solutions; thus a regularization term is required. Tikhonov regularization [40], commonly applied in regularizing the optimization problem, is adopted herein: ...
Article
Vision-based structural motion estimation methods show great potential in structural health monitoring (SHM). However, available methods cannot automatically estimate and eliminate three-dimensional (3D) camera motion effects caused by environmental factors from the video containing structural motions. This limits further applications of vision-based methods in SHM. To this end, this paper proposes a target-free video measurement method, which is formulated as a feature point congealing problem, and its solution includes two steps. First, the links of feature points from stationary backgrounds are assumed to have consistent and smooth motions, and are detected by a maximum a posteriori (MAP) estimation of the Bayesian model. Second, based on the detected links, the joint homography matrix containing only 3D camera motion effects can be updated iteratively, by minimizing transformation differences between the current frame and other frames. The superiority of the proposed method over traditional methods was validated in case studies of Humen Bridge motion estimations. It was shown that the proposed method has the best performance in video stabilization and camera motion effects estimation. Moreover, combined with the phase-based method, subpixel small structural motions can be well estimated.
... The variables Cd and Cm are covariance matrices for both data and model. A regularization parameter (α) is used to stabilize the ill-posed inversion processes (Bell et al., 1978). The minimum lateral cell size in the model mesh is 500 m × 500 m, except for the padding cells added on all sides (see Fig. 3a). ...
Article
Mount Lawu is a stratovolcano located on the border of Central Java and East Java Provinces in Indonesia. At least eleven manifestations indicate its geothermal potential in the form of hot water and fumaroles. This study aims to describe a geothermal system based on an integrated analysis of magnetotelluric (MT) and satellite gravity data. Some additional geochemical and petrological data analyses were also used to explain the existing hydrothermal system in this area. It is inferred from the gravity derivative analysis that fault structures exist in a location with high permeability, which acts as a controlling structure for geothermal manifestations. The 3-D resistivity inversion was carried out using MT full-impedance tensor data, while the 3-D density model was reconstructed based on the residual anomaly computed from the topography-free gravity disturbance. The 3-D resistivity model, from magnetotelluric inversion, shows a clay cap distribution, corresponds to a low resistivity anomaly, centered at the southern part of the mountain peak with a thickness of about 1 kilometer, which narrows to the western region where the hot spring shows an outflow manifestation. The gravity inversion shows several low-density bodies in the same location as the low-resistivity anomaly attributed to a clay cap. Coincident resistivity and density anomalies constrain the horizontal location of the geothermal reservoir. Despite the incapability of the model to resolve the body of the geothermal heat source, it can be inferred from the inversion results that the heat possibly emanates through the thermal conduction passage below the reservoir zone from the source located deeper down than the reconstructed models.
... Several approaches for the estimation of the regularization parameter are presented in [31], including the Tikhonov curve and cooling techniques. In our sparse inversion, we prefer the cooling strategy [32]. ...
Article
Full-text available
Imaging the intra-sediment magma chamber in the Damavand region, northern Iran, is beyond the resolution of the local seismic observations. Gravity anomalies can precisely image the lateral extension of a magma reservoir. In order to constrain the vertical extension of the magma chamber, we apply inversion of magnetic data, which has a higher sensitivity to shallow structure than gravity data. More importantly, knowledge of the magma chamber's density allows prediction of its mechanical behaviour, including the potential for eruption. As Damavand is estimated to be an active volcano, it is important to revisit the physical properties of the magma chamber to be able to evaluate the potential for eruption. Here, we apply sparse-norm inversion of the Bouguer gravity anomaly and magnetic data to model the uppermost crust beneath Damavand volcano. Qualitative analysis of the Bouguer anomaly shows that the power of the spectrum remains almost unchanged by upward continuation using heights greater than 4 km. Thus, we conclude that the 4-km upward-continued Bouguer anomaly represents the regional gravitational effects free from very shallow effects. Inversion of the magnetic anomaly, interestingly, shows a susceptibility structure, with a susceptibility contrast of up to ~ +0.025 SI, in the same place as the density anomaly. This study proposes a 10-km-wide magma chamber beneath Damavand from depth ~3 km to depth ~12 km. The resulting density structure is comparable with the values derived (using thermodynamic mineral phase equilibrium) from geochemical data and those from conversion of seismic velocity to density. According to the geochemical data analysis, the lava is andesitic, which is categorized among dense crustal rocks (2.8 g/cm3). But our modeling results show a density contrast of at most +0.25 g/cm3 between the magma chamber and the surrounding sedimentary rocks (with a density of 2.45 g/cm3) above 5 km. Therefore, we can conclude that the shallow magma chamber, composed of dense andesite, is relatively warm and probably not completely consolidated. The high temperature of the magma chamber appears to be neutralized by the impact of the high density of andesite (naturally dense rock), resulting in a moderate negative anomaly in tomography (i.e., ∆Vs = ~ -2 %). The magma chamber's temperature might exceed 750-800 ºC, which is still below the solidus-liquid transition temperature of 1100 ºC. Therefore, we can conclude that the magma is not liquid and is partially consolidated.
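The cooling strategy preferred in the quoted excerpt can be sketched generically: start from a strongly regularized solution and decrease the regularization parameter geometrically until the data misfit reaches the expected noise level (the details of the cited scheme [32] differ; this is a toy version):

    import numpy as np

    rng = np.random.default_rng(8)
    n = 50
    A = rng.standard_normal((n, n))
    x_true = rng.standard_normal(n)
    d = A @ x_true + 0.05 * rng.standard_normal(n)
    target = 0.05 * np.sqrt(n)                    # expected norm of the noise

    beta = 1e3                                    # start heavily regularized
    while beta > 1e-12:
        x = np.linalg.solve(A.T @ A + beta * np.eye(n), A.T @ d)
        if np.linalg.norm(A @ x - d) <= target:   # discrepancy level reached
            break
        beta *= 0.5                               # "cool" the parameter
    print("final beta:", beta)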
... Following [53], the inverse problem is solved by reducing it to an exercise in optimization. The main idea behind this method is to find the state vector that minimizes the residual between simulated data and measurements. ...
Article
Convolutional neural networks are widely used for image processing in remote sensing. Aquaculture has an important role in food security and hence should be monitored. In this paper, a novel lightweight neural network for inland aquaculture field retrieval from high-resolution remote sensing images is proposed. The structure of this pond segmentation network is based on the UNet architecture, providing higher training speed. Experiments are performed on Gaofen satellite datasets in Shanghai, China. The proposed network detects the inland aquaculture ponds in a shorter time than state-of-the-art neural-network-based models and reaches an overall accuracy of about 90 %.
... Model-driven methods, such as FWI, utilise physical models to perform sound speed reconstruction. In this case, the constraints (prior information) of the problem are given in a formulated form, such as total variation regularisation [39] and Tikhonov regularisation [40]. On the other hand, DNNs are data-driven methods and utilise the constraints estimated from the training data to perform sound speed reconstruction. ...
Article
Full-text available
Sound speed reconstruction has been investigated for quantitative evaluation of tissue properties in breast examination. Full waveform inversion (FWI), a mainstream method for conventional sound speed reconstruction, is an iterative method that includes numerical simulation of wave propagation, resulting in high computational cost. In contrast, high-speed reconstruction of sound speed using a deep neural network (DNN) has been proposed in recent years. Although the generalization performance is highly dependent on the training data, how to generate data for sufficient generalization performance is still unclear. In this study, the quality and generalization performance of DNN-based sound speed reconstruction with a ring array transducer were evaluated on a natural image-derived dataset and a breast phantom dataset. The DNN trained on breast phantom data (BP-DNN) could not reconstruct the structures on natural image data with diverse structures. On the other hand, the DNN trained on natural image data (NI-DNN) successfully reconstructed the structures on both natural image and breast phantom test data. Furthermore, the NI-DNN successfully reconstructed tumour structures in the breast, while the BP-DNN overlooked them. From these results, it was demonstrated that natural image data enables DNNs to learn sound speed reconstruction with high generalization performance and high resolution.
... This regularization method is motivated by classical descent algorithms in the continuous optimization literature (see, e.g., [5]), with the notable difference of using independent samples of scenarios at each iteration. Traditional regularization methods work by modifying the objective function via a proximal term; see, e.g., [29]. Singh et al. [26] exploit this idea to break symmetries in MIP models and achieve good-quality feasible solutions quickly. ...
Chapter
Full-text available
Lagrangian relaxation schemes, coupled with a subgradient procedure, are frequently employed to solve chance-constrained optimization models. Subgradient procedures typically rely on step-size update rules. Although there is extensive research on the properties of these step-size update rules, there is little consensus on which rules are most suitable practically; especially when the underlying model is a computationally challenging instance of a chance-constrained program. To close this gap, we seek to determine whether a single step-size rule can be statistically guaranteed to perform better than others. We couple the Lagrangian procedure with three strategies to identify lower bounds for two-stage chance-constrained programs. We consider two instances of such models that differ in the presence of binary variables in the second stage. With a series of computational experiments, we demonstrate—in marked contrast to existing theoretical results—that no significant statistical differences in terms of optimality gaps are detected among six well-known step-size update rules. Despite this, our results demonstrate that a Lagrangian procedure provides computational benefit over a naive solution method—regardless of the underlying step-size update rule.
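For readers unfamiliar with the objects being compared, a projected subgradient iteration with one classic step-size rule looks as follows (a toy convex problem, not the chapter's Lagrangian dual of a chance-constrained program):

    import numpy as np

    # minimize ||x||_1 over the box [-5, 5]^5 by projected subgradient descent
    x = np.full(5, 3.0)
    for k in range(1, 201):
        step = 1.0 / k                          # diminishing rule a_k = 1/k
        g = np.sign(x)                          # a subgradient of ||x||_1
        x = np.clip(x - step * g, -5.0, 5.0)    # projection onto the box
    print(x)                                    # approaches the minimizer 0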
... To solve it, the Tikhonov regularization method was used, where the solution itself depends on the choice of the regularization parameter. The procedure for choosing the regularization parameter in this paper is fully described in the monograph [17]. ...
Article
A method for reconstructing surface activity density (SAD) maps based on the solution of the Fredholm equation has been developed and applied. The reconstruction of SAD maps was carried out for the Site of Temporary Storage (STS) of spent fuel and radioactive waste in Andreeva Bay, using the results of the measuring campaign in 2001-2002, and for the sheltering construction of the solid radioactive waste (SRW), using the results of measurements in 2021. The Fredholm equation was solved in two versions: under the conditions of a barrier-free environment and taking into account the buildings and structures located on the industrial site of the STS Andreeva Bay. Lorenz curves were generated to assess the compactness of the distributions of SAD and ambient dose equivalent rate (ADER) for the industrial site and the sheltering construction at STS Andreeva Bay, the area of the stage-IV uranium tailing site near the city of Istiklol in the Republic of Tajikistan, and the roofs of the Chernobyl nuclear power plant (NPP). The impact of the raster resolution (fragmentation), the radius of mutual influence of points (contamination sites), the height of the radiation detector above the scanned surface and the angular aperture of the radiation detector on the accuracy of the SAD reconstruction is shown. The method developed allows more accurate planning of decontamination work when only ADER measurement data are available. The proposed method can be applied to support the process of decontamination of radioactively contaminated territories, in particular during the remediation of the STS Andreeva Bay.
Article
Full-text available
Two shape-sensing algorithms, the calibration matrix (CM) method and the inverse Finite Element Method (iFEM), were compared on their ability to accurately reconstruct displacements, strains, and loads and on their computational efficiency. CM reconstructs deformation through a linear combination of known load cases, using the sensor data measured for each of these known load cases and the sensor data measured for the actual load case. iFEM reconstructs deformation by minimizing a least-squares error functional based on the difference between the measured and numerical values for displacement and/or strain. In this study, CM is covered in detail to determine the applicability and practicality of the method. The CM results for several benchmark problems from the literature were compared to the iFEM results. In addition, a representative aerospace structure consisting of a twisted and tapered blade with a NACA 6412 cross-sectional profile was evaluated using quadratic hexahedral solid elements with reduced integration. Both methods assumed linear elastic material conditions and used discrete displacement sensors, strain sensors, or a combination of both to reconstruct the full displacement and strain fields. In our study, surface-mounted sensors and sensors distributed throughout the volume of the structure were considered. This comparative study was performed to support the growing demand for load monitoring, specifically for applications where the sensor data are obtained from discrete and irregularly distributed points on the structure. In this study, the CM method was shown to achieve greater accuracy than iFEM. Averaged over all the load cases examined, the CM algorithm achieved average displacement and strain errors of less than 0.01%, whereas the iFEM algorithm had an average displacement error of 21% and an average strain error of 99%. In addition, CM also achieved equal or better computational efficiency than iFEM after initial setup, with similar first solution times and faster repeat solution times by a factor of approximately 100, for hundreds to thousands of sensors.
Article
Gravity inversion is an important approach for obtaining the spatial structure and physical properties of underground geological bodies based on surface information. Owing to the recent advances in deep learning, neural network-based methods have been widely used for gravity inversion. However, convolutional neural networks (CNNs) require a large number of labeled samples, and the generation of such datasets for all considered geological bodies, requiring gravity forward modeling, is expensive in terms of time and storage space. To reduce the dependence on labeled samples, a three-dimensional gravity inversion method based on a cycle-consistent generative adversarial network (Cycle-GAN) is proposed herein. This network comprises two parts: generator subnetworks and discriminator subnetworks. The generator subnetworks generate gravity forward and inversion data, while the discriminator subnetworks mainly ensure the consistency of the distribution between the generated and real data. We compared the results obtained on synthetic and real data. The findings suggest that Cycle-GAN outperforms CNNs in the inversion of underground geological bodies when using a small number of labeled samples. Furthermore, the results obtained using the proposed method on real data from the San Nicolas deposit in central Mexico are consistent with previously reported results.
Article
Full-text available
Halide perovskite materials offer significant promise for solar energy and optoelectronics, yet understanding and enhancing their efficiency and stability require addressing lateral inhomogeneity challenges. While photoluminescence imaging techniques are employed for the measurement of their opto-electronic and transport properties, going further in terms of precision requires longer acquisition times. Prolonged exposure of perovskites to light, given their high reactivity, can substantially alter these layers, rendering the acquired data less meaningful for analysis. In this paper, a method to extract high-quality lifetime images from rapidly acquired, noisy time-resolved photoluminescence images is proposed. This method leverages concepts from the field of constrained reconstruction and includes the Huber loss function and a specific form of total variation regularization. Through both simulations and experiments, it is demonstrated that the approach outperforms conventional pointwise methods. Optimal acceleration and optimization parameters tailored for decay-time imaging of perovskite materials are identified, offering new perspectives for accelerated experiments crucial in degradation process characterization. Importantly, this methodology holds the potential for broader applications: it can be extended to explore additional beam-sensitive materials and other imaging characterization techniques, and employed with more complex physical models to treat time-resolved decays.
Article
Full-text available
We propose a mathematical model of convective diffusion of impurity particles accompanied by sorption processes in a body formed by three contacting porous layers with different physical and chemical characteristics under the conditions of imperfect contact for the concentration on the interfaces. The analytic solution of the contact initial-boundary value problem of convective diffusion of impurity substances in a composite layer is obtained with the help of integral transformations over the spatial variable applied in each contacting layer separately. A system of integral equations for the functions of concentration of migrating particles on the interfaces is obtained and solved. The formulas for finding the concentrations of impurity particles sorbed on the skeleton of the three-layered porous body are obtained.
Article
We consider the ill-posed problem of finding the position of the discontinuity lines of a function of two variables. It is assumed that the function is smooth outside the lines of discontinuity but has a discontinuity of the first kind on the line. At each node of a uniform grid with step \(\tau\), the mean values of the perturbed function on a square with side \(\tau\) are known. The perturbed function approximates the exact function in the space \(L_{2}(\mathbb{R}^{2})\). The perturbation level \(\delta\) is assumed to be known. Previously, the authors investigated (accuracy estimates were obtained) global discrete regularizing algorithms for approximating the set of lines of discontinuity of a noisy function provided that the line of discontinuity of the exact function satisfies the local Lipschitz condition. In this paper, we introduce a one-sided Lipschitz condition and formulate a new, wider correctness class. New methods for localizing discontinuity lines are constructed that work on an extended class of functions. A convergence theorem is proved, and estimates of the approximation error and other important characteristics of the algorithms are obtained. It is shown that the new methods determine the position of the discontinuity lines with guarantee in situations where the standard methods do not work.
Article
Full-text available
In this work, an advanced 2D nonparametric correlogram method is presented to cope with output-only measurements of linear (slow) time-varying systems. The proposed method is a novel generalization of the kernel function-based regularization techniques that have been developed for estimating linear time-invariant impulse response functions. In the proposed system identification technique, an estimation method is provided that can estimate the time-varying auto- and cross-correlation function and indirectly, the time-varying auto- and cross-correlation power spectrum estimates based on real-life measurements without measuring the perturbation signals. The (slow) time-varying behavior means that the dynamic of the system changes as a function of time. In this work, a tailored regularization cost function is considered to impose assumptions such as smoothness and stability on the 2D auto- and cross-correlation function resulting in robust and uniquely determined estimates. The proposed method is validated on two examples: a simulation to check the numerical correctness of the method, and a flutter test measurement of a scaled airplane model to illustrate the power of the method on a real-life challenging problem.
Chapter
The modal analysis problem for a beam performing bending vibrations is considered. The defect in the beam is modeled as a change in the cross-section area and the moment of inertia. The damage identification is based on the recovery of these coefficients using additional information about resonant frequencies and eigenmodes. The solution of such a coefficient problem reduces to minimizing a special misfit functional. The paper presents the construction of this functional, considering the specificity of the modal analysis problem. The trust region method was used to solve the optimization problem. The gradient and the Hessian of the misfit functional were obtained based on the sensitivity analysis of the forward problem.
Article
Full-text available
In gravity exploration, the continuation of potential fields from the Earth's surface downward is a problem of primary importance. The position of gravity field anomalies is identified on the basis of the solution of such a problem. An approximate solution of the potential field continuation problem is often based on solving a first-kind integral equation with one or another regularization procedure. A similar approach is used in our work, where the continued field is represented as a simple layer potential or its vertical derivative. The density of the equivalent simple layer is positive (negative) for positive (negative) density anomalies, provided that the surface carrying the equivalent simple layer potential encloses all the anomalies. Accounting for this property is the key feature of the proposed computational algorithm for continuing potential fields toward the anomalies. The determination of the non-negative density of the simple layer potential is based on the NNLS (Non-Negative Least Squares) method. The efficiency of the developed computational algorithm is illustrated by calculations for two-dimensional problems.
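The non-negativity constraint that is central to the abstract maps directly onto an NNLS solve. A toy 2-D equivalent-layer sketch follows; the kernel, layer depth and data are illustrative stand-ins for the paper's setup:

    import numpy as np
    from scipy.optimize import nnls

    x_obs = np.linspace(-10, 10, 60)        # observation points on the surface
    x_src = np.linspace(-10, 10, 60)        # equivalent simple-layer nodes
    h = 2.0                                 # layer depth
    # toy 2-D attraction kernel of a simple layer at depth h
    K = h / ((x_obs[:, None] - x_src[None, :])**2 + h**2)

    rho_true = np.exp(-x_src**2 / 4.0)      # positive density anomaly
    g = K @ rho_true + 1e-3 * np.random.default_rng(9).standard_normal(x_obs.size)

    rho, rnorm = nnls(K, g)                 # non-negative least squares
    print("min density:", rho.min(), " misfit:", rnorm)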
Article
Full-text available
Compared with the surface wave corresponding to the normal mode, which is widely studied, there is less research on the guided-P wave corresponding to the leaking mode. The guided-P wave carries dispersion information that can be used to constrain subsurface velocity structures. In this paper, to simultaneously estimate the P-wave velocity (\({{v}}_{{P}}\)) and S-wave velocity (\({{v}}_{{S}}\)) structures, an integrated inversion method of guided-P and surface wave dispersion curves is proposed. Through the calculation of the Jacobian matrix, the sensitivity of the dispersion curves is quantitatively analyzed. It shows that the dispersion curves of guided-P and surface waves are sensitive to \({{v}}_{{P}}\) and \({{v}}_{{S}}\), respectively. Synthetic model tests demonstrate that the proposed integrated inversion method can estimate the \({{v}}_{{P}}\) and \({{v}}_{{S}}\) models accurately and effectively identify low-velocity interlayers. The integrated inversion method is also applied to field seismic data acquired for oil and gas prospecting. The pseudo-2D \({{v}}_{{P}}\), \({{v}}_{{S}}\), and Poisson's ratio inversion results are of significance for near-surface geological interpretation. The comparison with the result of first-arrival traveltime tomography further demonstrates the accuracy and practicality of the proposed integrated inversion method. Beyond exploration seismology, guided-P wave dispersion information can also be extracted from earthquake, engineering, and ambient-noise seismic data. The proposed inversion method can exploit the previously neglected guided-P wave to characterize subsurface \({{v}}_{{P}}\) structures, showing broad and promising application prospects. This compensates for the inherent limitation that the surface wave dispersion curve is mainly sensitive to the \({{v}}_{{S}}\) structure.
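A hedged sketch of one damped Gauss-Newton update for such a Jacobian-based joint inversion is given below. Here `forward` and `jacobian` are hypothetical routines that stack the guided-P and surface-wave dispersion curves (and their sensitivities) for a layered model `m` containing both \({{v}}_{{P}}\) and \({{v}}_{{S}}\); the damping `mu` is an illustrative Tikhonov term, not the paper's specific scheme.

```python
import numpy as np

def gauss_newton_step(m, d_obs, forward, jacobian, mu=1e-2):
    """One Tikhonov-damped Gauss-Newton update:
    m_new = m + (J'J + mu I)^{-1} J' (d_obs - forward(m))."""
    r = d_obs - forward(m)                 # stacked dispersion residual
    J = jacobian(m)                        # sensitivity of both wave types
    H = J.T @ J + mu * np.eye(m.size)      # damped normal equations
    return m + np.linalg.solve(H, J.T @ r)
```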
Chapter
In a conventional non-cooperative negotiating scenario, two or more forward-looking participants make offers and counteroffers alternately until an agreement is achieved, with a penalty that takes into account the time players need to make a decision. We provide a game that helps myopic participants reach equilibrium as if they were forward-looking agents. One of the game's main mechanics is that players are penalized both for deviating from their prior best-reply plan and for the time they spend making decisions at each stage of play. Our chapter adds to existing research on bargaining among myopic agents while also broadening the class of processes and functions that may be used to define and apply Rubinstein's non-cooperative bargaining solutions.
Chapter
A multi-objective Pareto front solution is presented in this chapter for a particular type of discrete-time ergodic controllable Markov chains. We offer a technique that, given specific bounds, chooses the best multi-objective option on the Pareto frontier as a decision support system; we restrict attention to the class of finite, ergodic, and controllable Markov chains. The regularized penalty method is based on Tikhonov's regularization and uses a projection-gradient strategy to identify the Pareto policies along the Pareto frontier. The goal is to choose the regularization parameter as effectively as possible while preserving the original form of the functional. After setting its initial value, we gradually reduce the parameter until each policy closely approximates a Pareto policy; we specify the precise rate at which the parameter tends to zero and establish convergence of the gradient regularized penalty algorithm. Our policy-gradient multi-objective algorithms then map only Pareto policies into the objective space, yielding the Pareto frontier. We empirically validate the technique on a numerical example, a vehicle routing planning problem aimed at improving security when transporting cash and valuables, and we also describe a portfolio optimization and present its Pareto frontier. The decision-making techniques investigated in this chapter are consistent with the most widely used computational intelligence models in Artificial Intelligence research.
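The sketch below illustrates the generic projection-gradient-with-Tikhonov-term idea in the simplest setting: a weighted scalarization of the objectives over the probability simplex (a stand-in for the policy constraints), with the regularization weight `delta` shrunk between runs to trace Pareto points. The weights `w`, step size, and iteration budget are illustrative assumptions, not the chapter's algorithm for controllable Markov chains.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (policies are distributions over actions)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / k > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def pareto_point(grads, w, x0, delta=1.0, step=0.1, iters=500):
    """Projected gradient descent on the scalarized, Tikhonov-
    regularized objective sum_i w_i f_i(x) + delta * ||x||^2;
    `grads` is a list of gradient functions, one per objective."""
    x = x0.copy()
    for _ in range(iters):
        g = sum(wi * gi(x) for wi, gi in zip(w, grads)) + 2 * delta * x
        x = project_simplex(x - step * g)
    return x
```

Sweeping the weight vector `w` (and letting `delta` decrease toward zero) yields one point of the approximate Pareto frontier per run.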
Preprint
Full-text available
The probing depth of the transient electromagnetic method (TEM) refers to the depth range over which changes in underground conductivity can be effectively detected. It typically ranges from tens of meters to several kilometers and is influenced by factors such as instrument parameters and the conductivity of the subsurface structure. Rapid and accurate calculation of the probing depth is beneficial for determining the feasibility of exploration engineering, setting appropriate inversion parameters, and improving exploration accuracy. However, mainstream methods suffer from issues such as low computational precision, large uncertainties, or high computational requirements, making them unsuitable for processing massive airborne electromagnetic data. In this study, we propose a deep learning-based prediction model that computes the probing depth directly from the TEM responses; its effectiveness and accuracy are validated through synthetic models and field measurements. Furthermore, we apply this algorithm to deep learning-based ATEM inversion by constraining the one-dimensional resistivity models in the training set above the probing depth, to reduce the non-uniqueness of the inversion, accelerate convergence, and improve prediction accuracy.
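A hedged sketch of this kind of regression setup is shown below; the network architecture, the log scaling, and the synthetic stand-in data are all illustrative assumptions and do not reproduce the paper's model or training set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins: TEM decay curves (n_samples x n_time_gates)
# and reference probing depths in meters.
responses = rng.lognormal(size=(1000, 40))
depths = rng.uniform(50, 2000, size=1000)

# Log-scale both sides, since responses and depths span decades.
X = np.log10(np.abs(responses))
y = np.log10(depths)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000)
model.fit(X, y)
pred_depth = 10 ** model.predict(X)   # predicted probing depth, meters
```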
Article
A fast numerical algorithm for solving the Cauchy problem for elliptic equations with variable coefficients in standard computational domains (rectangles, circles, or rings) is proposed. The algorithm is designed to calculate the heat flux at an inaccessible boundary. It is based on the separation of variables method. This approach employs a finite difference approximation and yields the solution of the discrete problem in \(O(N \ln N)\) arithmetic operations, where \(N\) is the number of grid points. As a rule, iterative procedures are needed to solve the Cauchy problem for elliptic equations. The currently available direct algorithms for the Cauchy problem have been developed only for operators with constant coefficients (Laplace, Helmholtz) and rely on analytical solutions for problems with such operators. A novel feature of the present paper is that the direct algorithm applies to an elliptic operator with variable coefficients (of a special form), for which no analytical solution can be obtained. The algorithm significantly widens the range of problems that can be solved. It can be used to create devices that determine, in real time, heat fluxes on parts of inhomogeneous structures where direct measurement is impossible, for example, the heat flux on the inner radius of a pipe made of different materials.
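To illustrate why separation of variables yields the \(O(N \ln N)\) cost, the sketch below solves the simplest instance, a constant-coefficient Poisson problem with zero Dirichlet data on a rectangle, using fast sine transforms to diagonalize the finite-difference Laplacian; the paper's direct algorithm for the variable-coefficient Cauchy problem is considerably more involved.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet(f, h):
    """Solve -Laplace(u) = f on interior grid points of a rectangle
    with zero Dirichlet boundary data, spacing h.

    DST-I diagonalizes the 1D second-difference operator, so the
    whole solve costs O(N log N) for N grid points."""
    n, m = f.shape
    fh = dstn(f, type=1)                       # forward sine transform
    i = np.arange(1, n + 1)[:, None]
    j = np.arange(1, m + 1)[None, :]
    # eigenvalues of the 2D finite-difference Laplacian
    lam = (2 - 2 * np.cos(np.pi * i / (n + 1))) / h**2 \
        + (2 - 2 * np.cos(np.pi * j / (m + 1))) / h**2
    return idstn(fh / lam, type=1)             # inverse transform
```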
Article
Full-text available
The James Webb Space Telescope is performing beyond our expectations. Its Near Infrared Spectrograph (NIRSpec) provides versatile spectroscopic capabilities in the 0.6–5.3 µm wavelength range, where a new window is opening for studying Trans-Neptunian objects in particular. We propose a spectral extraction method for NIRSpec fixed slit observations, with the aim of matching the superior performance of the instrument with equally advanced data processing. We applied this method to the fixed slit dataset of guaranteed-time observation program 1231, which targets the Plutino 2003 AZ84. We compared the spectra we extracted with those from the calibration pipeline.