Figure 2 - uploaded by Luisa D’Amore
Example test F_1: (a) the computed inverse function (circles) compared with the exact one (continuous curve); (b) absolute error versus the t-values.


Source publication
Article
Full-text available
We propose a numerical method for computing a function, given its Laplace transform function on the real axis. The inversion algorithm is based on the Fourier series expansion of the unknown function and the Fourier coefficients are approximated using a Tikhonov regularization method. The key point of this approach is the use of the regularization...
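As a concrete, hedged illustration of this kind of approach (a minimal sketch, not the paper's algorithm): expand the unknown f on an interval [0, 2T] in a trigonometric basis, relate the basis coefficients to real-axis samples F(s_j) through the truncated Laplace integral, and recover the coefficients by Tikhonov regularization. The values of T, the nodes s_j, the basis size and the regularization parameter below are placeholder choices and strongly affect the accuracy.

import numpy as np

def invert_laplace_fourier(F, s_nodes, T=8.0, n_modes=16, lam=1e-6, n_quad=4000):
    t = np.linspace(0.0, 2.0 * T, n_quad)
    w = np.full(n_quad, t[1] - t[0])
    w[0] *= 0.5
    w[-1] *= 0.5                                  # trapezoidal quadrature weights
    # Trigonometric basis of period 2T: constant, cos(k*pi*t/T), sin(k*pi*t/T)
    basis = [np.ones_like(t)]
    for k in range(1, n_modes + 1):
        basis.append(np.cos(k * np.pi * t / T))
        basis.append(np.sin(k * np.pi * t / T))
    B = np.array(basis)
    E = np.exp(-np.outer(np.asarray(s_nodes, dtype=float), t))
    A = E @ (B * w).T                             # A[j, k] ~ integral of exp(-s_j t) * phi_k(t) over [0, 2T]
    b = np.array([F(s) for s in s_nodes])         # Laplace transform data on the real axis
    m = A.shape[1]
    c = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)   # Tikhonov normal equations

    def f_hat(tt):                                # reconstruct f from the recovered coefficients
        tt = np.asarray(tt, dtype=float)
        val = c[0] * np.ones_like(tt)
        for k in range(1, n_modes + 1):
            val = val + c[2 * k - 1] * np.cos(k * np.pi * tt / T)
            val = val + c[2 * k] * np.sin(k * np.pi * tt / T)
        return val

    return f_hat

# Example: F(s) = 1/(s^2 + 1) is the Laplace transform of sin(t).
f_hat = invert_laplace_fourier(lambda s: 1.0 / (s ** 2 + 1.0),
                               s_nodes=np.linspace(0.5, 12.0, 80))
print(f_hat(1.0), np.sin(1.0))    # the two values should be comparable; accuracy depends on the parameter choices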

Context in source publication

Context 1
... test functions are the following ones [6, 8, 20, 21]. The first test function (see figure 2) is standard in testing Fourier series methods; it does not have singularities on the real axis and its inverse function is continuous everywhere on the real axis. The inverse functions of F_2(s) (see figure 3) and F_5(s) (see figure 6) have a jump discontinuity at t = 1 and at t = 1, 2, respectively. These are perhaps the most significant functions for testing a Fourier series method. ...
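For orientation, two textbook transform pairs of these two kinds (illustrative only; they are not claimed to be the paper's F_1, F_2 or F_5) are a smooth inverse and an inverse with a jump at t = 1:

\mathcal{L}^{-1}\left\{ \frac{1}{s^{2}+1} \right\}(t) = \sin t,
\qquad
\mathcal{L}^{-1}\left\{ \frac{e^{-s}}{s} \right\}(t) = H(t-1) =
\begin{cases} 0, & t < 1, \\ 1, & t \ge 1. \end{cases}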

Similar publications

Article
Full-text available
The accuracy and efficiency of computing option prices play a very important role in financial risk management and hedging for investors. In this paper, we develop, for the first time, a fast and accurate numerical method that combines a Laplace transform in the time variable with compact differences for the spatial discretization, for computing option prices...
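As a reminder of why a Laplace transform in time reduces the pricing PDE to spatial problems (a standard identity, not specific to this paper): for u(t, x) with transform U(s, x),

\mathcal{L}\left\{ \frac{\partial u}{\partial t} \right\}(s, x) = s\,U(s, x) - u(0, x),

so the time derivative disappears and, for each value of s, one is left with a purely spatial equation that can be discretized, e.g., by compact finite differences.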

Citations

... On each subdomain we formulate a local KF problem analogous to the original one, defined on local models. In order to enforce the matching of local solutions on overlapping regions, local KF problems are slightly modified by adding a correction term, acting as a smoothness-regularization constraint on local solutions [14], which keeps track of contributions of adjacent domains to overlapping regions; the same correction is applied to covariance matrices, thereby improving the conditioning of the error covariance matrices. ...
Article
We present an innovative interpretation of the Kalman filter (KF) combining the ideas of Schwarz domain decomposition (DD) and parallel in time (PinT) approaches; hereafter we call it DD-KF. In contrast to standard DD approaches, which are already incorporated in KF and other state estimation models and implement a straightforward data parallelism inside the loop over time, DD-KF partitions the whole model ab initio, including the filter equations and the dynamic model, along both the space and time directions/steps. As a consequence, we get local KFs reproducing the original filter at smaller dimensions on local domains; moreover, the subproblems can be solved in parallel. In order to enforce the matching of local solutions on overlapping regions, and thus to achieve the same global solution as KF, local KFs are slightly modified by adding a correction term keeping track of contributions of adjacent subdomains to overlapping regions. Such a correction term balances localization errors along overlapping regions, acting as a regularization constraint on local solutions. Furthermore, such a localization excludes remote observations from each analyzed location, improving the conditioning of the error covariance matrices. As the dynamic model we consider the shallow water equations, which can be regarded as a consistent tool for a proof of concept of the reliability of DD-KF in monitoring and forecasting weather systems and ocean currents.
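For reference, a generic sketch of the standard predict/update cycle that the local filters reproduce at smaller dimensions (this is the textbook Kalman filter, not the DD-KF algorithm with its correction term; all symbols are the usual KF quantities):

import numpy as np

def kf_step(x, P, M, Q, H, R, y):
    # Prediction with the dynamic model M and model-error covariance Q
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Update with observations y, observation operator H, observation-error covariance R
    S = H @ P_f @ H.T + R                    # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)            # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f   # analysis error covariance
    return x_a, P_a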
... DA encompasses the entire sequence of operations that, starting from observations/measurements of physical quantities and using additional information, such as mathematical models governing the evolution of these quantities, improve the estimate of these quantities by means of a suitable function. In order to understand how such a function is obtained, we suggest that, from a mathematical perspective, DA is an inverse and ill-posed problem [5]. Hence, regularization methods are used to obtain a well-posed problem. ...
Article
Full-text available
This paper is presented in the context of sensitivity analysis (SA) of large-scale data assimilation (DA) models. We studied the consistency, convergence, stability and roundoff error propagation of the reduced-space optimization technique arising in parallel 4D Variational DA problems. The results are helpful to understand the reliability of DA, to assess what confidence one can have that the simulation results are correct, and to determine its configuration in any application. The main contributions of the present work are as follows. By using forward error analysis, we derived the condition number of the parallel approach. We found that the parallel approach reduces the condition number, revealing that it is more appropriate than the standard approach usually implemented in most operational software. As the background values are used as initial conditions of the local PDE models, we analyzed stability with respect to the time direction. Finally, we proved consistency of the proposed approach by analyzing the local truncation errors of each computational kernel.
... and Q = VV^T, the covariance matrices of the errors on observations and background, respectively. We now define the 4D-DA inverse problem [33]. ...
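Writing a covariance matrix in the factored form VV^T is convenient because such a matrix is automatically symmetric and positive semidefinite, hence a valid covariance:

x^T (V V^T) x = \| V^T x \|_2^2 \ge 0 \quad \text{for every vector } x.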
Preprint
Full-text available
We focus on Partial Differential Equation (PDE) based Data Assimilation (DA) problems solved by means of variational approaches and the Kalman filter algorithm. Recently, we presented a Domain Decomposition framework (we call it DD-DA, for short) performing a decomposition of the whole physical domain along the space and time directions, joining the ideas of Schwarz methods and parallel in time approaches. For effective parallelization of DD-DA algorithms, the computational load assigned to subdomains must be equally distributed. Usually the computational cost is proportional to the amount of data entities assigned to the partitions. Good quality partitioning also requires the volume of communication during the calculation to be kept to a minimum. In order to deal with DD-DA problems where the observations are nonuniformly distributed and generally sparse, in the present work we employ a parallel load balancing algorithm based on adaptively and dynamically defining the boundaries of the DD, which aims to balance the workload according to the data location. We call it DyDD. As the numerical model underlying DA problems arising from the so-called discretize-then-optimize approach is the constrained least squares model (CLS), we use CLS as the reference state estimation problem and validate DyDD in different scenarios.
... and Q = VV^T, the covariance matrices of the errors on observations and background, respectively. We now define the 4D-DA inverse problem [33]. ...
Article
We focus on PDE-based Data Assimilation problems (DA) solved by means of variational approaches and the Kalman Filter algorithm. Recently, we presented a Domain Decomposition (DD-DA) framework performing a decomposition of the whole physical domain along the space and time directions, joining the ideas of Schwarz's methods and Parallel in Time (PinT)-based approaches. For effective parallelization of domain decomposition algorithms, the computational load assigned to subdomains must be equally distributed. Usually the computational cost is proportional to the amount of data entities assigned to the partitions. Good quality partitioning also requires the volume of communication during the calculation to be kept to a minimum. In order to deal with DD-DA problems where the observations are nonuniformly distributed and generally sparse, in the present work we employ a parallel load balancing algorithm, based on adaptively and dynamically defining the boundaries of the DD, which aims to balance the workload according to the data location. We call it DyDD. As the numerical model underlying DA problems arising from the so-called discretize-then-optimize approach is the Constrained Least Squares model (CLS), we use CLS as the reference state estimation problem and validate DyDD in different scenarios.
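Since CLS serves as the reference state estimation problem, here is a minimal sketch of a generic equality-constrained least squares solve via its KKT system (plain linear algebra, not the DyDD algorithm; the matrices A, C and the data b, d are placeholders):

import numpy as np

def constrained_least_squares(A, b, C, d):
    # Minimize ||A x - b||^2 subject to C x = d by solving the KKT system
    #   [ 2 A^T A   C^T ] [ x      ]   [ 2 A^T b ]
    #   [   C        0  ] [ lambda ] = [    d    ]
    n, p = A.shape[1], C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                            # the constrained minimizer x

# Tiny example: fit x in R^2 to random data under the constraint x_1 + x_2 = 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 2))
b = rng.standard_normal(10)
x = constrained_least_squares(A, b, C=np.array([[1.0, 1.0]]), d=np.array([1.0]))
print(x, x.sum())                             # x.sum() equals 1 up to roundoff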
... The construction of a generalized polynomial smoothing spline for approximating Laplace transform functions known only at a finite set of measurements along the real axis was studied in [15]. D'Amore and Murli [19] expanded the unknown function in a Fourier series and approximated the Fourier coefficients using the Tikhonov regularization method. D'Amore et al. [20] adopted an integral equation of convolution type whose solution is the inverse Laplace transform function. ...
... Applying Laplace transform to (19), we have ...
Article
Full-text available
In Rani et al. (Numerical inversion of Laplace transform based on Bernstein operational matrix, Mathematical Methods in the Applied Sciences (2018) pp. 1–13), a numerical method is developed to find the inverse Laplace transform of certain functions using the Bernstein operational matrix. Here, we describe the Bernstein operational matrix of integration and propose an algorithm to solve the differential equations governing linear time-varying systems. Apart from discussing the error estimate, the method is applied to linear differential equations, such as the Bessel equation of order zero and the damped harmonic oscillator, as well as to some higher order differential equations, a singular integral equation, Volterra integral and integro-differential equations, and nonlinear Volterra integral equations of the first kind. A comparison with some existing methods, such as the Haar operational matrix and the block pulse operational matrix, is discussed. The method is simple and easy to implement on a variety of problems. Relative error estimates for just the 5th or 6th approximation show the high applicability of the method.
... When the regularization parameter λ approaches zero, the regularized problem tends to the DA (ill-posed) inverse problem, while increasing the regularization parameter has the effect of decreasing the uncertainty in the background [27]. The 3D-Var operator is: ...
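For context, the standard 3D-Var cost functional (the textbook form, of which the operator referred to above is a variant; x_b denotes the background state, y the observations, H the observation operator, and B and R the background- and observation-error covariance matrices) reads

J(x) = \tfrac{1}{2}\,(x - x_b)^T B^{-1} (x - x_b) + \tfrac{1}{2}\,(y - H(x))^T R^{-1} (y - H(x)),

whose minimizer is the analysis state.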
Article
Full-text available
Data assimilation (DA) is a methodology for combining mathematical models simulating complex systems (the background knowledge) and measurements (the reality, or observational data) in order to improve the estimate of the system state (the forecast). DA is an inverse and ill-posed problem usually used to handle a huge amount of data, so it is a big and computationally expensive problem. In the present work we prove that the functional decomposition of the 3D variational data assimilation (3D Var DA) operator, previously introduced by the authors, is equivalent to applying the multiplicative parallel Schwarz (MPS) method to the Euler–Lagrange equations arising from the minimization of the data assimilation functional. It follows that convergence results, as well as mesh refinement techniques and coarse grid correction (issues of the functional decomposition not previously addressed), could be employed to improve the performance and scalability of the 3D Var DA functional decomposition in real cases.
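As a concrete reminder of the multiplicative Schwarz idea the equivalence refers to (a generic 1D Poisson sketch, not the DA-specific Euler–Lagrange setting): alternately solve the problem on overlapping subdomains, taking the boundary data on each subdomain from the latest global iterate.

import numpy as np

# Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, second-order finite differences.
n = 101
h = 1.0 / (n - 1)
u = np.zeros(n)                               # global iterate (boundary values stay zero)
f = np.ones(n)

def solve_subdomain(u, lo, hi):
    # Dirichlet solve of -u'' = f on nodes lo..hi, boundary data taken from the current iterate u.
    m = hi - lo + 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2
    rhs = f[lo:hi + 1].copy()
    rhs[0] += u[lo - 1] / h ** 2
    rhs[-1] += u[hi + 1] / h ** 2
    u[lo:hi + 1] = np.linalg.solve(A, rhs)

# Two overlapping subdomains (interior nodes 1..60 and 40..99); multiplicative = sequential sweeps.
for sweep in range(30):
    solve_subdomain(u, 1, 60)
    solve_subdomain(u, 40, n - 2)

x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))    # error versus the exact solution x(1 - x)/2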
... We provide a numerical validation of the proposed approach: we introduce a proper condition number measuring the difficulty of solving the numerical problem [10,11], depending only on the characteristics of the scene, and we relate it to the number of iterations of the inner loop. Then, we introduce a measure of the accuracy of the final image in finite precision arithmetic, and we relate it to the number of iterations of the outer loop needed to provide the solution within the so-called best attainable accuracy in finite precision [17]. ...
Article
Full-text available
In most algorithms of global illumination, the simulation of light–surface interaction terminates by declaring that the result at some point is close enough to some reference ground-truth data. The underlying principle of such a criterion is to minimize the processing time without compromising the (subjective) visual perception of the resulting image. We introduce an objective-driven condition for stopping the simulation of light transport. It is inspired by the physical meaning of light propagation. Besides, it takes into account that computations are performed in finite precision. Its main feature is the definition of the threshold establishing the maximum number of pixels that are completed in finite precision. Its value is computed at run time depending on the brightness of the image. As a proof of concept of the validity of this approach, we employ the stopping condition in a light tracing algorithm, propagating light that is generated by the light source. We assess the quality of the computed image by measuring the Peak Signal-to-Noise Ratio and the Structural Similarity Index error metrics on the standard scene of the Cornell Box. Numerical validation is performed by comparing the results with the output of the NVIDIA Iray renderer, whose stopping condition is based on Russian roulette and on the elapsed time.
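For reference, the Peak Signal-to-Noise Ratio used as one of the error metrics above is computed as follows (a generic implementation; the synthetic image and the peak value are placeholders):

import numpy as np

def psnr(reference, test, peak=1.0):
    # Peak Signal-to-Noise Ratio, in dB, between a reference and a test image.
    mse = np.mean((np.asarray(reference, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example on synthetic data: an image corrupted by small additive noise.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = np.clip(img + 0.01 * rng.standard_normal(img.shape), 0.0, 1.0)
print(psnr(img, noisy))                        # roughly 40 dB for noise of standard deviation 0.01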
... Large-scale linear discrete ill-posed problems arise in a variety of applications, such as those in the earth/climate sciences, including earth observation (remote sensing) and data assimilation [2,7], those arising in image analysis, including medical imaging, astronomical imaging and the restoration of digital films [1,4,9,10,12–14,26], and those arising in solving the Laplace transform integral equation [3]. Upon discretization, linear systems such as the following ...
Article
Full-text available
We introduce a decomposition of the Tikhonov Regularization (TR) functional which splits this functional into several TR functionals, suitably modified in order to enforce the matching of their solutions. As a consequence, instead of solving one problem we can solve several problems reproducing the initial one at smaller dimensions. Such an approach leads to a reduction of the time complexity of the resulting algorithm. Since the subproblems are solved in parallel, this decomposition also leads to a reduction of the overall execution time. The main outcome of the decomposition is that the parallel algorithm is designed to exploit the highest performance of parallel architectures where concurrency is implemented at both the coarsest and finest levels of granularity. Performance analysis is discussed in terms of algorithm and software scalability. Validation is performed on a reference parallel architecture made of a distributed memory multiprocessor and a Graphics Processing Unit. Results are presented on the Data Assimilation problem for oceanographic models.
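In its standard form, the Tikhonov Regularization functional being decomposed is (with A the discretized operator, b the data, L a regularization operator and λ > 0 the regularization parameter)

J_\lambda(x) = \| A x - b \|_2^2 + \lambda \, \| L x \|_2^2,
\qquad
x_\lambda = \arg\min_x J_\lambda(x) \iff (A^T A + \lambda L^T L)\, x_\lambda = A^T b;

the decomposition described above splits this functional into local ones whose solutions are forced to match.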
... Let varDA_{X×Y×Z} denote the Data Assimilation (DA) problem, as described in [5,6,7]. This is an inverse ill-posed problem [8] defined on a set Ω of size X × Y × Z. We now introduce two decomposition approaches which have been considered in [5,6,7] to solve it. ...
Conference Paper
We analyse and discuss the performance of a decomposition approach introduced for solving large scale Variational Data Assimilation (DD-VAR DA) problems. Our performance analysis uses a set of matrices (decomposition and execution) [9], built to highlight the dependency relationships among the component parts of a computational problem and/or among the operators of the algorithm that solves the problem [10], which are the fundamental characteristics of an algorithm. We show how the performance metrics depend on the complexity of the algorithm and on parameters characterizing the structure of the two matrices, such as their numbers of rows and columns. We use a new definition of speed-up, involving the scale-up factor, which measures the performance gain in terms of time complexity reduction, to describe the non-linear behavior of the performance gain.
... For simplicity of notation, according to [11], here we assume that σ_0 := 0; otherwise, we need only consider the shifted Laplace transform [5,6,12,13] F_shift(s) = F(s − σ_0), and its inverse function ...
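The shift used here is the standard translation property of the Laplace transform: since \mathcal{L}\{ e^{\sigma_0 t} f(t) \}(s) = F(s - \sigma_0), the inverse of the shifted transform is simply a rescaled version of the sought function,

\mathcal{L}^{-1}\{ F_{shift} \}(t) = \mathcal{L}^{-1}\{ F(\cdot - \sigma_0) \}(t) = e^{\sigma_0 t} f(t).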
... The real inversion of the Laplace transform is ill-posed, so its solution is highly sensitive to perturbations on the data [12,14,15]. Regularization approaches aim to find the so-called regularized inverse function f_α^δ such that [16,17]: ...
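In the standard terminology of regularization theory (a generic definition, not a statement specific to this paper), a family of regularized solutions f_α^δ computed from noisy data F^δ with ‖F^δ − F‖ ≤ δ is admissible when the parameter choice α = α(δ) guarantees

\lim_{\delta \to 0} \left\| f^{\delta}_{\alpha(\delta)} - f \right\| = 0.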
... If t is small the spacing a(t) is large, while it is small when t is large. This is a common feature of most numerical methods for inverting Laplace transforms [4,6,12,15,19], as follows from the Abelian and Tauberian theorems, which connect the behaviour of the Laplace transform function with that of its inverse [20]. In particular, Abelian-type theorems provide information on the behaviour of the inverse function as t → 0, based on knowledge of the behaviour of the Laplace transform as s → ∞, whereas Tauberian-type theorems provide information on the behaviour of the inverse function as t → ∞, based on knowledge of the Laplace transform function as s → 0. ...
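Two elementary instances of this correspondence are the initial and final value theorems (stated under the usual existence assumptions on the limits involved):

\lim_{t \to 0^{+}} f(t) = \lim_{s \to \infty} s\, F(s),
\qquad
\lim_{t \to \infty} f(t) = \lim_{s \to 0^{+}} s\, F(s).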
Article
Full-text available
We are concerned with Gaver's formula, which is at the heart of a numerical algorithm, widely used in scientific and engineering applications, for computing approximations of the inverse Laplace transform in multi-precision arithmetic systems. We demonstrate that, once the parameter n (i.e. the number of terms of Gaver's formula) and an upper bound on the noise on the data are given, the number of correct significant digits of the computed values of the inverse function is bounded above by a quantity depending on these two parameters. In the case of noise-free data this number is arbitrarily large, as it is bounded below by n. We establish the requirements on the multi-precision system ensuring that this quality of the numerical results is attained. Experiments and comparisons validate the effectiveness of this approach.
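For reference, the Gaver functionals at the core of such algorithms are usually written as follows, and can be evaluated in multi-precision arithmetic, e.g., with mpmath (a minimal sketch, not the paper's full algorithm; the transform F, the point t, the number of terms n and the working precision are placeholders):

from mpmath import mp, mpf, binomial, log, exp

def gaver(F, t, n, dps=50):
    # n-th Gaver approximation of the inverse Laplace transform of F at t > 0:
    #   f_n(t) = (n ln2 / t) * C(2n, n) * sum_{j=0}^{n} (-1)^j C(n, j) F((n + j) ln2 / t)
    mp.dps = dps                                  # working precision in decimal digits
    a = log(2) / mpf(t)                           # the transform is sampled at multiples of ln(2)/t
    total = mp.mpf(0)
    for j in range(n + 1):
        total += (-1) ** j * binomial(n, j) * F((n + j) * a)
    return n * a * binomial(2 * n, n) * total

# Example: F(s) = 1/(s + 1) is the transform of exp(-t); the Gaver sequence converges
# slowly (roughly O(1/n)), which is why acceleration is applied in practice.
print(gaver(lambda s: 1 / (s + 1), t=1.0, n=8), exp(-1))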