Figure 3 (uploaded by Sergei Pereverzyev). Left: Discrepancy curve for Example 5.1(iii) with B := B₁.

Source publication
Article
Full-text available
For solving linear ill-posed problems, regularization methods are required when the right-hand side is contaminated by noise. In the present paper regularized solutions are obtained by multi-parameter regularization, and the regularization parameters are chosen by a multi-parameter discrepancy principle. Under certain smoothness assumptions we provide or...
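As a rough illustration of the ideas (not code from the paper; the test matrix, the operator B, the parameter grid and the tolerance are all made up for the example), a two-parameter Tikhonov functional and a brute-force scan for parameter pairs lying near a discrepancy curve of the kind shown in Figure 3 can be sketched as follows:

```python
# Illustrative sketch only: two-parameter Tikhonov regularization with a
# discrepancy-type scan over the parameter plane. All data are synthetic.
import numpy as np

def tikhonov_two_param(A, B, y, alpha, beta):
    """Minimize ||A x - y||^2 + alpha ||x||^2 + beta ||B x||^2 via normal equations."""
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n) + beta * (B.T @ B)
    return np.linalg.solve(lhs, A.T @ y)

def discrepancy_pairs(A, B, y, delta, alphas, betas, tau=1.0, tol=0.1):
    """Collect (alpha, beta) pairs whose residual norm is close to tau*delta."""
    pairs = []
    for a in alphas:
        for b in betas:
            x = tikhonov_two_param(A, B, y, a, b)
            if abs(np.linalg.norm(A @ x - y) - tau * delta) < tol * delta:
                pairs.append((a, b))
    return pairs

# Toy ill-conditioned test problem (Hilbert matrix) with additive noise.
rng = np.random.default_rng(0)
n = 50
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
B = np.eye(n) - np.eye(n, k=1)                 # simple forward-difference operator
x_true = np.sin(np.linspace(0.0, np.pi, n))
y_exact = A @ x_true
delta = 1e-3 * np.linalg.norm(y_exact)
y = y_exact + delta * rng.standard_normal(n) / np.sqrt(n)

grid = np.logspace(-8, 0, 17)
print(len(discrepancy_pairs(A, B, y, delta, grid, grid)))
```

Every pair returned this way approximately satisfies a multi-parameter discrepancy condition; which such pairs yield order-optimal accuracy is the kind of question the paper addresses.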

Similar publications

Article
Full-text available
In this paper, we present the convergence rate analysis of the modified Landweber method under logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. The reconstructions of the shape of an unknown domain for an inverse potential problem by using the modified Land...
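For reference, a minimal sketch of the classical (linear) Landweber iteration stopped by the discrepancy principle is given below; the modified, nonlinear variant analyzed in the article is not reproduced here, and the names and constants are illustrative.

```python
# Classical linear Landweber iteration with discrepancy-principle stopping.
import numpy as np

def landweber(A, y, delta, tau=1.1, max_iter=10000):
    omega = 1.0 / np.linalg.norm(A, 2) ** 2       # step size so that omega*||A||^2 <= 1
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ x - y
        if np.linalg.norm(residual) <= tau * delta:   # discrepancy principle (tau > 1)
            break
        x = x - omega * (A.T @ residual)              # gradient step on ||Ax - y||^2
    return x, k
```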
Article
Full-text available
In this work, we provide mathematical and numerical analyses for a thermoviscoelastic nonlinear beam model with a Coulomb dry friction law. Since the dynamic frictional conditions are nonsmooth, a regularization technique with smoothing parameters is applied to approximate a nonlinear variational formulation. We prove the existence of weak solutions...
Article
Full-text available
In this paper, we mainly consider the inverse problem of identifying an unknown heat source in a spherically symmetric domain. We propose a truncation regularization method combined with an a posteriori regularization parameter choice rule to deal with this problem. A Hölder-type convergence estimate is obtained. Numerical results are presented to...
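A generic way to realize a truncation (spectral cut-off) method with an a posteriori choice of the truncation level is sketched below using the SVD; the article works with a series expansion adapted to the spherically symmetric problem, so this is only an illustration of the principle, and the names are assumptions.

```python
# Generic truncated-SVD (spectral cut-off) regularization with a
# discrepancy-type a posteriori choice of the truncation level.
import numpy as np

def truncated_svd_solution(A, y, delta, tau=1.1):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ y
    x = np.zeros(A.shape[1])
    for k in range(1, len(s) + 1):
        x = Vt[:k].T @ (coeffs[:k] / s[:k])           # keep only the k leading modes
        if np.linalg.norm(A @ x - y) <= tau * delta:  # stop once residual ~ noise level
            return x, k
    return x, len(s)
```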
Article
Full-text available
This note summarizes some preliminary results on the fast solution of the coefficient inverse problem for the Helmholtz equation, given measured pressure in a set of observation points. The Helmholtz equation is the model PDE for the harmonic problem of the linear theory of elasticity, and this work is a move in that direction. The problem has been...
Article
Full-text available
We investigate level-set type approaches for solving ill-posed inverse problems, under the assumption that the solution is a piecewise constant function. Our goal is to identify the level sets as well as the level values of the unknown parameter function. Two distinct level-set frameworks are proposed for solving the inverse problem. In both of the...

Citations

... However, the majority of the literature primarily addresses the development of suitable rules for parameter selection. Lu, Pereverzev et al. [4,5] have extensively investigated two L₂-based terms, introducing a refined discrepancy principle to compute dual regularization parameters, along with its numerical implementation. ...
Preprint
Full-text available
This work tackles the problem of image restoration, a crucial task in many fields of applied sciences, focusing on removing degradation caused by blur and noise during the acquisition process. Drawing inspiration from the multi-penalty approach based on the Uniform Penalty principle introduced in [Bortolotti et al. arXiv.math.NA/2309.14163], we develop here a new image restoration model and an iterative algorithm for its effective solution. The model incorporates pixel-wise regularization terms and establishes a rule for parameter selection, aiming to restore images through the solution of a sequence of constrained optimization problems. To achieve this, we present a modified version of the Newton Projection method, adapted to multi-penalty scenarios, and prove its convergence. Numerical experiments demonstrate the efficacy of the method in eliminating noise and blur while preserving the image edges.
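A much-simplified stand-in for a projection-type Newton step on a nonnegativity-constrained quadratic is sketched below (the function name and the toy problem are assumptions; the paper's modified Newton Projection method, with pixel-wise regularization terms, is considerably more elaborate).

```python
# Toy projection-type Newton iteration for min 0.5 x'Hx - b'x subject to x >= 0.
import numpy as np

def projected_newton_nonneg(H, b, n_iter=50):
    x = np.zeros(len(b))
    for _ in range(n_iter):
        g = H @ x - b                                   # gradient of the quadratic
        active = (x <= 0.0) & (g > 0.0)                 # variables held at the bound
        free = ~active
        step = np.zeros_like(x)
        if free.any():
            step[free] = np.linalg.solve(H[np.ix_(free, free)], g[free])
        x = np.maximum(x - step, 0.0)                   # Newton step on free set, then project
    return x
```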
... How should one choose a proper regularization parameter λ? This is an interesting topic in regularization problems; for further study, we may consider the methods in [11,12,17,18,19]. ...
Preprint
Full-text available
In this paper, we propose a fully discrete soft thresholding trigonometric polynomial approximation on [−π, π], named Lasso trigonometric interpolation. This approximation is an ℓ1-regularized discrete least squares approximation under the same conditions as classical trigonometric interpolation on an equidistant grid. Lasso trigonometric interpolation is sparse and at the same time an efficient tool for dealing with noisy data. We theoretically analyze Lasso trigonometric interpolation for continuous periodic functions. The principal results show that the L2 error bound of Lasso trigonometric interpolation is smaller than that of classical trigonometric interpolation, which improves the robustness of trigonometric interpolation. This paper also presents numerical results on Lasso trigonometric interpolation on [−π, π], with or without the presence of data errors.
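Under the orthogonality of the trigonometric system on an equidistant grid, the ℓ1-penalized least-squares problem decouples and is solved coefficient-wise by soft thresholding. The sketch below is a simplified FFT-based variant of this idea (node count, threshold and test signal are illustrative, and the complex coefficients are shrunk in modulus rather than treated as real cosine/sine pairs).

```python
# Simplified "Lasso-type" trigonometric fit: soft-threshold the classical
# interpolation coefficients computed on an equidistant grid.
import numpy as np

def soft_threshold(c, lam):
    """Shrink each (possibly complex) coefficient toward zero by lam in modulus."""
    mag = np.abs(c)
    return c * np.maximum(1.0 - lam / np.maximum(mag, 1e-15), 0.0)

n = 64                                                  # equidistant nodes on [-pi, pi)
t = -np.pi + 2.0 * np.pi * np.arange(n) / n
rng = np.random.default_rng(1)
f = np.sign(np.sin(2 * t)) + 0.1 * rng.standard_normal(n)   # noisy periodic samples

c = np.fft.fft(f) / n                                   # classical interpolation coefficients
c_sparse = soft_threshold(c, 0.05)                      # Lasso step: coefficient-wise shrinkage
f_denoised = np.real(np.fft.ifft(n * c_sparse))         # evaluate the sparse trigonometric sum
```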
... The motivation of this paper is that the traditional sparse regularization method cannot effectively recover an approximate solution for ill-conditioned problems, because the stability of l1 regularization is weaker than that of l2 regularization in statistics [26]. To improve the stability of sparsity regularization, inspired by multi-parameter regularization theory [31,32,33], a smooth l2 term is added to the original sparsity regularization to construct a multi-parameter regularization. ...
Article
Full-text available
In this paper, we solve sparse regularization for ill-conditioned problems. Typically, the sparse regularization method is appropriate for compressive sensing problems, where the random matrices involved are well-conditioned. For ill-conditioned problems, for example image inpainting and image deblurring, sparsity regularization is often unstable. The motivation of this paper is that the traditional sparse regularization method cannot effectively recover an approximate solution for ill-conditioned problems, because the stability of ℓ¹ regularization is weaker than that of ℓ² regularization in statistics. To improve the stability of sparsity regularization, a smooth ℓ² term is added to the original sparsity regularization. This method accommodates sparse inverse problems that are ill-conditioned. The contributions of this paper are as follows. Convergence of the minimizer and its stability are studied. A stable multi-parameter thresholding algorithm for ill-conditioned problems is proposed, and numerical results are presented to illustrate the features of the functional and the algorithms.
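A hedged sketch of an iterative thresholding scheme for the combined functional ‖Ax − y‖² + α‖x‖₁ + β‖x‖₂² is given below; the proximal step of the ℓ1 + ℓ2 penalty is soft thresholding followed by a scaling, which is the kind of multi-parameter thresholding the abstract refers to, although the paper's actual algorithm may differ in its details.

```python
# ISTA-style iteration for ||Ax - y||^2 + alpha*||x||_1 + beta*||x||_2^2.
import numpy as np

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista_l1_l2(A, y, alpha, beta, n_iter=500):
    t = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)         # step size = 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - t * 2.0 * (A.T @ (A @ x - y))            # gradient step on the fidelity term
        x = soft(v, t * alpha) / (1.0 + 2.0 * t * beta)  # prox of alpha*|.|_1 + beta*|.|_2^2
    return x
```

The extra ℓ2 term only rescales the thresholded iterate (the 1 + 2tβ factor), which is what improves stability for ill-conditioned A while keeping the sparsity-promoting shrinkage intact.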
... More precisely, we propose an improved modified quasi-boundary-value method with two parameters α > 0 and r, where the parameter α is introduced to filter the high frequencies, and the second parameter r to incorporate the regularity of the solution of the original problem. The advantage of multi-parameter regularization is that it gives more freedom in attaining order-optimal accuracy [22][23][24][25][26][27][28][29]. ...
Article
Full-text available
In this paper, we are concerned with the problem of approximating a solution of an ill-posed biparabolic problem in the abstract setting. In order to overcome the instability of the original problem, we propose a modified quasi-boundary value method to construct approximate stable solutions for the original ill-posed boundary value problem. Finally, some other convergence results including some explicit convergence rates are also established under a priori bound assumptions on the exact solution. Moreover, numerical tests are presented to illustrate the accuracy and efficiency of this method.
... However, there are some limitations to this approach for ℓ1 + ℓ1 problems, due to the fact that it is difficult to obtain the dual formulation of ℓ1 + ℓ1 problems. Inspired by [14] and multi-parameter regularization theory [35][36][37], a smooth ℓ2 term is added to the original regularization functional. The dual problem of this new cost function reduces to a smooth constrained functional. ...
Article
Full-text available
We consider sparse signal inversion with impulsive noise. There are three major ingredients. The first is regularizing properties; we discuss the convergence rate of regularized solutions. The second is devoted to the numerical solution, which is challenging because both the fidelity and the regularization term lack differentiability. Moreover, for ill-conditioned problems, sparsity regularization is often unstable. We propose a novel dual spectral projected gradient (DSPG) method which combines the dual problem of multiparameter regularization with the spectral projected gradient method to solve the nonsmooth l1+l1 optimization functional. We show that one can overcome the nondifferentiability and instability by adding a smooth l2 regularization term to the original optimization functional. The advantage of the proposed functional is that its convex dual reduces to a smooth constrained functional. Moreover, it is stable even for ill-conditioned problems. A spectral projected gradient algorithm is used to compute the minimizers, and we prove its convergence. The third ingredient is numerical simulation. Some experiments are performed, using compressed sensing and image inpainting, to demonstrate the efficiency of the proposed approach.
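The spectral projected gradient building block can be sketched as a Barzilai–Borwein step followed by projection onto a box (a typical constraint set arising from dual formulations of ℓ1 terms). The sketch below omits the nonmonotone line search of the full SPG method and does not reproduce the paper's dual problem; all names are illustrative.

```python
# Simplified spectral projected gradient: BB step length + box projection.
import numpy as np

def spg_box(grad, x0, lower, upper, n_iter=200, step0=1.0):
    proj = lambda z: np.clip(z, lower, upper)
    x = proj(np.asarray(x0, dtype=float))
    g = grad(x)
    step = step0
    for _ in range(n_iter):
        x_new = proj(x - step * g)                       # gradient step + projection
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        sy = float(s @ yv)
        step = float(s @ s) / sy if sy > 1e-15 else step0  # BB1 spectral step length
        x, g = x_new, g_new
    return x
```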
... The aim of the paper is to describe this set and to indicate its subset such that for regularization parameters from this subset the combination of LSQ projection and Tikhonov's method has the same rate of convergence under standard source conditions. The discrepancy set is an analog of the discrepancy curve defined and investigated in [10] for multiple penalty regularization of Tikhonov type. ...
Article
Full-text available
To solve a linear ill-posed problem, a combination of the finite dimensional least squares projection method and the Tikhonov regularization is considered. The dimension of the projection is treated as the second parameter of regularization. A two-parameter discrepancy principle defines a discrepancy set for any data error bound. The aim of the paper is to describe this set and to indicate its subset such that for regularization parameters from this subset the related regularized solution has the same order of accuracy as the Tikhonov regularization with the standard discrepancy principle but without any discretization.
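A compact sketch of the combination described here, with the projection dimension k and the Tikhonov parameter α playing the roles of the two regularization parameters, might look as follows (illustrative names and tolerances; the discrepancy set is simply scanned on a grid rather than characterized analytically as in the paper).

```python
# Least-squares projection onto the k leading singular directions combined
# with Tikhonov filtering, plus a brute-force scan of the discrepancy set.
import numpy as np

def projected_tikhonov(A, y, k, alpha):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s[:k] / (s[:k] ** 2 + alpha)                 # Tikhonov filter on the projected problem
    return Vt[:k].T @ (filt * (U[:, :k].T @ y))

def discrepancy_set(A, y, delta, dims, alphas, tau=1.0, tol=0.1):
    out = []
    for k in dims:
        for a in alphas:
            x = projected_tikhonov(A, y, k, a)
            if abs(np.linalg.norm(A @ x - y) - tau * delta) < tol * delta:
                out.append((k, a))
    return out
```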
... Multiparameter Tikhonov can be used when a satisfactory choice of the regularization operator is unknown in advance, or can be seen as an attempt to combine the strengths of different regularization operators. In some applications, using more than one regularization operator and parameter allows for more accurate solutions [1,2,17,20]. ...
... Choosing satisfactory parameters μ_i in multiparameter regularization is more difficult than in the corresponding one-parameter problem. See for example [1,2,6,14,16,20]. In particular, there is no obvious multiparameter extension of the discrepancy principle. ...
Article
Full-text available
Tikhonov regularization is a popular method to approximate solutions of linear discrete ill-posed problems when the observed or measured data is contaminated by noise. Multiparameter Tikhonov regularization may improve the quality of the computed approximate solutions. We propose a new iterative method for large-scale multiparameter Tikhonov regularization with general regularization operators based on a multidirectional subspace expansion. The multidirectional subspace expansion may be combined with subspace truncation to avoid excessive growth of the search space. Furthermore, we introduce a simple and effective parameter selection strategy based on the discrepancy principle and related to perturbation results.
... The advantage of multi-parameter regularization is that it gives more freedom in attaining order-optimal accuracy, since there are many regularization parameters satisfying the multi-parameter discrepancy principle. In [31,35] the authors show one of the possible ways to employ this freedom in choosing the regularization. They consider the case when one is interested in an order-optimal approximation of the solution u with respect to the L2-norm and simultaneously in an estimation of its value at some point. ...
Article
The paper is devoted to investigating a Cauchy problem for homogeneous elliptic PDEs in the abstract Hilbert space setting, given by u″(t) − Au(t) = 0, 0 < t < T, u(0) = φ, u′(0) = 0, where A is a positive self-adjoint and unbounded linear operator. The problem is severely ill-posed in the sense of Hadamard [23]. We give a new regularization method for this problem in which the operator A is replaced by A_α = A(I + αA)⁻¹ and the condition u(0) = φ is replaced by a nonlocal condition. We show the convergence of this method and we construct a family of regularizing operators for the considered problem. Convergence estimates are established under a priori regularity assumptions on the problem data. Some numerical results are given to show the effectiveness of the proposed method.
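A standard spectral heuristic (a sketch under the assumption that A has an orthonormal eigensystem (λₙ, eₙ); not taken from the article itself) indicates why the substitution stabilizes the problem: the formal solution u(t) = cosh(t√A)φ grows without bound in λₙ, whereas A_α has spectrum λₙ/(1 + αλₙ) ≤ 1/α, so the regularized solution stays bounded:

```latex
\[
  u_\alpha(t) \;=\; \sum_{n} \cosh\!\Bigl(t\,\sqrt{\tfrac{\lambda_n}{1+\alpha\lambda_n}}\Bigr)\,(\varphi,e_n)\,e_n ,
  \qquad
  \frac{\lambda_n}{1+\alpha\lambda_n}\le \frac{1}{\alpha}
  \;\Longrightarrow\;
  \|u_\alpha(t)\| \le \cosh\!\bigl(t/\sqrt{\alpha}\bigr)\,\|\varphi\| .
\]
```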
... see, e.g., [1,2,5,6,15,17,19,20,26] and references therein. The scalars λ_i ≥ 0 are regularization parameters and the L_i ∈ ℝ^(P_i×N) are regularization matrices for i = 1, … ...
... Specifically, they solved m different one-parameter problems (one for each regularization term appearing in (1.4)) and then combined the m approximations of x_ex so obtained. More recently, Lu and Pereverzyev [19] and Lu et al. [20] investigated the application of the discrepancy principle to select a regularization vector for (1.4). They showed, for square nonsingular regularization matrices L_i, that any combination of nonnegative regularization parameters λ_i such that x_Λ satisfies the discrepancy principle constitutes a regularization method in a well-defined sense. ...
... If L is a discretization of a derivative operator, then the latter choice corresponds to maximizing a discrete Sobolev norm of the regularized solution. In the available literature on multi-parameter Tikhonov regularization, the issue of choosing a particular regularization vector among the vectors that satisfy the discrepancy principle appears to be discussed only marginally in [20]. We are interested in studying the choice of a regularization vector Λ when one or several of the regularization matrices L_i in (1.4) have a nontrivial null space, which is the case when L_i is a projection matrix or a discretization of a derivative operator. ...
Article
Full-text available
This paper proposes a new approach for choosing the regularization parameters in multi-parameter regularization methods when applied to approximate the solution of linear discrete ill-posed problems. We consider both direct methods, such as Tikhonov regularization with two or more regularization terms, and iterative methods based on the projection of a Tikhonov-regularized problem onto Krylov subspaces of increasing dimension. The latter methods regularize by choosing appropriate regularization terms and the dimension of the Krylov subspace. Our investigation focuses on selecting a proper set of regularization parameters that satisfies the discrepancy principle and maximizes a suitable quantity, whose size reflects the quality of the computed approximate solution. Theoretical results are shown and illustrated by numerical experiments.
... Lu et al. [13] discussed the discrepancy principle for Hilbert space scales and derived some error estimates. However, the parameter selection is highly nonunique due to the lack of constraints, and thus not directly applicable in practice, for which a quasi-optimality criterion was later suggested [14]. Recently, the authors [7] investigated the discrepancy principle and a balancing principle for general convex variational models. ...
Article
Full-text available
We study multi-parameter regularization (multiple penalties) for solving linear inverse problems to promote simultaneously distinct features of the sought-for objects. We revisit a balancing principle for choosing regularization parameters from the viewpoint of augmented Tikhonov regularization, and derive a new parameter choice strategy called the balanced discrepancy principle. A priori and a posteriori error estimates are provided to theoretically justify the principles, and numerical algorithms for efficiently implementing the principles are also provided. Numerical results on denoising are presented to illustrate the feasibility of the balanced discrepancy principle.