ArticlePDF Available

Dynamical Systems and Discrete Methods for Solving Nonlinear Ill-Posed Problems

Authors:

Abstract

Contents 1. Introduction 2. Continuous methods for well posed problems 3. Discretization theorems for well-posed problems 4. Riccati integral inequality 5. Regularization procedure for ill-posed problems 6. Discretization theorem for ill-posed problems 7. Regularized continuous methods for monotone operators 8. Regularized discrete methods for monotone operators 1 Introduction The theme of this chapter is solving of nonlinear operator equation F (z) = 0; F : H ! H; (1) by establishing a relation between the limiting behavior for large times of the trajectories of dynamical systems in H and the solutions to equation (1) in a real Hilbert space H. We consider a real Hilbert space for the sake of simplicity, equation (1) in a complex Hilbert space can be treated similarly, and a part of results can be generalized to a Banach space. A standard approach
Applied Mathematics Reviews, World Sci. Pub. Co., vol. 1, (2000), 491-536
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR
SOLVING NONLINEAR ILL-POSED PROBLEMS
RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Contents
1. Introduction
2. Continuous methods for well posed problems
3. Discretization theorems for well-posed problems
4. Nonlinear integral inequality
5. Regularization procedure for ill-posed problems
6. Discretization theorem for ill-posed problems
7. Regularized continuous methods for monotone operators
8. Regularized discrete methods for monotone operators
1. Introduction
The theme of this chapter is solving of nonlinear operator equation
(1.1) F(z) = 0, F :HH,
by establishing a relation between the limiting behavior for large times of the tra-
jectories of dynamical systems in Hand the solutions to equation (1.1) in a real
Hilbert space H. We consider a real Hilbert space for the sake of simplicity, equa-
tion (1.1) in a complex Hilbert space can be treated similarly, and a part of results
can be generalized to a Banach space.
A standard approach to numerical solution of equation (1.1) consists of using
one of the numerous iterative methods. These methods are also very useful for a
theoretical investigation of problem (1.1). For some operators Fthey allow one
to establish existence of a solution to problem (1.1), or existence and uniqueness
theorems.
In the iterative methods one chooses some initial approximation z0and defines
a sequence of points {zn}n=0,1,2,... having as a limit limn→∞ znone of the solutions
of equation (1.1). The choice of an initial point z0is very important especially for
problems solution to which is not unique. Usually the choice of z0determines to
what solution the iterative sequence converges.
The simplest iterative method is the method of simple iteration defined as follows:
(1.2) zn=zn1F(zn1), n = 1,2, . . .
z0is given.
In the well-known Newton’s method ([14, 18, 19]) one constructs a sequence {zn}
by the following formula:
(1.3) zn=zn1[F0(zn1)]1F(zn1), n = 1,2, . . . .
1
2 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Here F0(h) is the Fechet derivative of the operator F, defined at a point hHas
a linear operator from Hto Hsuch that:
(1.4) F(h+ξ)F(h) = F0(h)ξ+o(||ξ||).
Thus, an important condition for applicability of Newton’s method is invertibility of
the Fechet derivative operator F0. The important advantage of Newton’s method
is the quadratic convergence to the solution in a neighborhood of this solution:
(1.5) ||zn+1 zn|| const||znzn1||2.
In many cases when Newton’s method diverges one uses the damped Newton’s
method:
(1.6) zn=zn1ωn[F0(zn1)]1F(zn1), n = 1,2, . . . ,
where ωnis an appropriately chosen sequence of positive numbers.
In the gradient method one constructs an iterative sequence using the formula:
(1.7) zn=zn1[F0(zn1)]F(zn1), n = 1,2, . . . .
The iterative methods mentioned above are widely known representatives of a large
family of iterative methods. A detailed description of many iterative methods one
can find in [19]. See also an approach to construction of iterative process for solving
nonlinear equation proposed in [20, 21].
The applicability of iterative methods is established by various convergence theo-
rems (see [14, 18, 19]), which specify assumptions on the operator Fwhich guarantee
convergence of the iterative sequence to a solution of equation (1.1). The proofs
of convergence theorems for iterative methods are usually based on the contraction
mapping principle.
Another approach to solving problem (1.1) is based on a construction of a dy-
namical system with the trajectory starting from an initial approximation point z0
and having a solution to problem (1.1) as a limiting point.
In [12] R. Courant proposed the dynamical system, which is a continuous analog
of the gradient method, in the problem of minimization of some functionals. In [16]
M.K. Gavurin proposed continuous Newton’s method and established the corre-
sponding convergence theorem. In continuous Newton’s method one considers the
following Cauchy problem for a nonlinear differential equation in a Hilbert space
H:
(1.8) ˙z(t) = [F0(z(t))]1F(z(t)) z(0) = z0,
where z0is some initial approximation point. In [25] E.P. Zhidkov and I.V. Puzynin
applied continuous Newton’s method for solving nonlinear physical problems. Y.
Alber used continuous methods for solving operator equations and variational in-
equalities ([6, 7, 8]). A modified continuous Newton’s method is proposed in [1, 3].
In this method one avoids numerically difficult inversion of the Fechet derivative F0
by solving an expanded system of nonlinear differential equation in Hilbert space.
In [2, 3] one can find the applications of this method to some physical problems.
The goal of this chapter is to develop a general approach to continuous analogs
of discrete methods and to establish fairly general convergence theorems. This
approach is based on an analysis of the solution to the Cauchy problem for a
nonlinear differential equation in a Hilbert space. Such an analysis was done for
well-posed and some ill-posed problems in [1, 4, 5], and was based on a usage of
integral inequalities.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS3
Let z0be an initial approximation for a solution to (1.1) and z(t) be the trajectory
of an autonomous dynamical system:
(1.9) ˙z(t) = Φ(z(t)),0t < , z(0) = z0.
The main question investigated in this chapter is:
Under what assumptions on operators Fin (1.1) and Φ in (1.9) one can guarantee
that:
(i) Cauchy problem (1.9) is uniquely solvable for t[0,+),
(ii) the solution z(t) tends to one of the solutions of (1.1) as t ,
(iii) there exists a step ω(or a sequence {ωn}) such that the corresponding
discrete (iterative) method zn+1 =zn+ωΦ(zn) (or zn+1 =zn+ωnΦ(zn)) produces
the sequence {zn}, which converges to one of the solutions of (1.1).
The answers are given in Theorems 2.2, 3.1 and Corollaries 1, 2.
Thus an analysis of continuous processes is based on the investigation of the
asymptotic behavior of nonlinear dynamical systems in Banach and Hilbert spaces.
If a convergence theorem is proved for a continuous method, one can construct
various discrete schemes generated by this continuous process. Thus construction
of a discrete numerical scheme is divided into two parts: construction of the contin-
uous process and numerical integration of the corresponding nonlinear differential
equation in a Hilbert space.
The main assumption in Theorems 2.2, 3.1 and Corollaries 1, 2 is that the Fr´echet
derivative F0of the operator Fhas a trivial null-space at the solution to (1.1). If
F0has a nontrivial null-space at the solution to (1.1) then, in general, one can use
the classical Newton method for solving of (1.1) only under some strong special
assumptions on the operator F(see [9, 13]). In order to relax these assumptions
and to construct numerical method for solving (1.1) when F0has a nontrivial null-
space at the solution to (1.1), one needs a regularized discrete Newton-like methods
(see [6, 11, 15, 22, 24]).
In this chapter we consider the continuous Newton’s method:
(1.10) ˙z(t) = [F0(z(t)) + ε(t)I]1[F(z(t)) + ε(t)(z(t)˜z0)], z(0) = z0,
where ε(t) is a specially chosen positive function which tends to zero as t+.
Thus, in the framework of our general approach, instead of the Cauchy problem
(1.9) for autonomous equation the following Cauchy problem for nonautonomous
equation has to be considered:
(1.11) ˙z(t) = Φ(z(t), t), z(0) = z0,
where Φ is a nonlinear operator, Φ : H×[0,+)H.
An analysis of dynamical system (1.11) is more complicated. In this study we
use new integral inequality (Theorem 4.2). Based on Theorem 4.2, a general con-
vergence theorem for a regularized continuous process is proved (Theorem 5.1).
Applying this theorem to the regularized Newton’s and simple iteration methods
(for monotone operators) we obtain convergence theorems under less restrictive
conditions on the equation than the theorems known for the corresponding discrete
methods. Statements of these theorems contain some useful recommendations for
the choice of a regularizing operator and estimate the rate of convergence of the
regularized process. Convergence theorems for regularized continuous Newton-like
methods are established in [4, 7, 22]. The applications of these scheme to Gauss-
Newton-type methods for nonmonotone operators can be found in [4, 5].
4 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Throughout this chapter the Hilbert space is assumed real-valued.
2. Continuous methods for well posed problems
Let ybe a solution to equation (1.1), z0be some point in H, considered as an
initial approximation to a solution to equation (1.1), and z(t) be some trajectory
in Hsuch that z(0) = z0.
Definition 2.1. We say that a trajectory z(t) converges to solution yof equation
(1.1) exponentially if there exist positive constants cand c1such that
(2.1) ||z(t)y|| c1ect||F(z0)||,||F(z(t))|| ||F(z0)||ect .
Consider Cauchy problem (1.9). Examples of the choice of an operator Φ are
given in Remark 1.
Theorem 2.2. Assume that there exist some positive numbers r,csuch that F,F0,
and Φare Fechet differentiable and bounded in Br(z0)and the following conditions
hold for every hBr(z0):
(2.2) (F0(h)Φ(h), F (h)) c||F(h)||2,
and
(2.3) ||Φ(h)|| rc
||F(z0)||||F(h)||.
Then
1) there exists a solution z=z(t), t [0,), to problem (1.9) and z(t)
Br(z0)for all t[0,+);
2) there exists
(2.4) lim
t+z(t) = y,
yis a solution of problem (1.1) in Br(z0), and z(t)converges to yexponentially.
Remark 2.3.a) Choosing Φ(h) = [F0(h)]1F(h) one gets Continuous Newton’s
method. In this case c= 1 and Theorem 2.2 yields the convergence theorem for
Continuous Newton’s method ([16]);
b) choosing Φ = F, one gets a simple iteration method for which condition
(2.2) means strict monotonicity of F;
c) Φ(h) = [F0(h)]F(h) corresponds to the gradient method.
Proof of Theorem 2.2.
From the Fechet differentiability of Φ in Br(z0), we get the local existence of a
solution of the problem (1.8). Then from (1.8) one gets:
(2.5) F0(z(t)) ˙z(t) = F0(z(t))Φ(z(t).
From (2.5) denoting λ(t) = F(z(t)), one gets:
(2.6) ˙
λ(t) = F0(z(t))Φ(z(t), λ(0) = F(z0).
Therefore, for sufficiently small tfor which z(t)Br(z0), one can use estimate
(2.2) and get:
d
dt||λ(t)||2= 2( ˙
λ(t), λ(t)) = 2(F0(z(t))Φ(z(t), F (z(t)) 2c||λ(t)||2.
Thus, the following estimate holds at least for sufficiently small t > 0:
(2.7) ||λ(t)|| ||F(z0)||ect.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS5
For 0 t1t2one has:
||z(t2)z(t1)|| ||
t2
Z
t1
˙z(s)ds||
t2
Z
t1
||Φ(z(s))||ds
(2.8) rc
||F(z0)||
t2
Z
t1
||λ(s)||ds r(ect1ect2)rect1.
Setting t1= 0 and t2=t, one concludes from (2.8) that z(t)Br(z0) for all t > 0.
Now let t1=tand t2+in (2.8). Then one gets
(2.9) ||z(t)y|| rect,
where yis defined in (2.4) and the limit in (2.4) does exist due to (2.8). The
exponential convergence of z(t) to yfollows now from (2.7) and (2.9).
Remark 2.4.Note that in general the assumptions of Theorem 2.2 do not imply the
uniqueness of a solution of equation (1.1) in Br(y). If (1.1) is not uniquely solvable
then z(t) converges to one of its solutions.
To establish convergence theorems for discrete methods we need a modified ver-
sion of the statement of Theorem 2.2. We start with the following definition.
Definition 2.5. Let ybe a unique solution to equation (1.1) in some set M.Mis
called Φ-attractive set for yif for any z0Mthe trajectory z(t) of the dynamical
system (1.9) does not leave Mand tends to yas t+. If z(t) converges to
yexponentially we call Man exponentially Φ-attractive set for y. If constants c
and c1in Definition 2.1 do not depend on a point z0Mwe call Ma uniformly-
exponentially Φ-attractive set for y.
Let us formulate a corollary to Theorem 2.2.
Corollary 1. Assume that there exist some positive numbers r,csuch that
yis the unique solution to problem (1.1) in the ball Br(y),z0Br(y), and the
assumptions of Theorem2.2 are satisfied in Br(y). Then Br(y)is a uniformly-
exponentially Φ-attractive set for ywith cdefined in (2.2) and
(2.10) c1=r
||F(z0)||.
3. Discretization theorems for well-posed problems
Consider the following Cauchy problem:
(3.1) ˙z(t) = Φ(z(t)), z(0) = z0,
and the corresponding discrete process:
(3.2) zn+1 =zn+ωΦ(zn).
Theorem 3.1. Let Br(y)be a uniformly-exponentially Φ-attractive set for yfor
some r > 0such that rc1||F(z0)||, where c1is the same as in Definition 2.1,
z0Br(y). Assume that
(1) Fand Φare Fechet differentiable in Br(y), and
||Φ(h)|| N0||F(h)||,||F0(h)|| N1,
||Φ0(h)|| N2for hBr(y),(3.3)
6 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Figure 1. Discretization scheme of a continuous process.
(2) c2and ωare some positive constants satisfying the following inequalities:
(3.4) (c1e +N0N2ω2)||F(z0)|| rec2ω,
(3.5) e +N0N1N2ω2ec2ω.
Then all {zn},n= 1,2, . . . , defined by formula (3.2) belong to the ball Br(y)
and
(3.6) ||zny|| rec2 ,||F(zn)|| ||F(z0)||ec2 , n = 0,1,2, . . . .
Proof of Theorem 3.1
The idea of the proof is illustrated in Figure1. Let z0be an initial approximation
point. Since Br(y) is an exponentially attractive set for yintegral curve ψ0(t) of
equation (3.1) connects z0with y. Let ωbe a stepsize. Consider points ψ1(ω) and
z1defined by (3.1) and (3.2) correspondingly. The point ψ1(ω) is located on an
integral curve and the point z1is located on a tangent line to this integral curve
passing through z0. Since ψ1(t) converges exponentially to yand the distance
between ψ1(ω) and z1decreases as ω2for ω0, one shows that for a sufficiently
small step ω,z1is closer to ythan z0. Therefore z1also belongs to Br(y). Moreover
using the triangle inequality one can estimate the distance between z1and y. Then
take the integral curve ψ2(t), which connects z1with y, points ψ2(ω) and z2, show
that z2belongs to Br(y), and estimate the distance between z2and y. Repeating
this process one estimates the distance between {zn}and yand shows that this
distance exponentially tends to zero.
We prove (3.6) using mathematical induction. For n= 0 conditions (3.6) are
satisfied. Assume that (3.6) are satisfied for n=m1.
Denote by ψm(t) the solution to the following Cauchy problem:
(3.7) ˙z(t) = Φ(z(t)),0< t ω, z(0) = zm1.
Then one has:
(3.8) ||zmy|| ||ψm(ω)y|| +||ψm(ω)zm||.
Since Br(y) is a uniform-exponentially Φ-attractive set for yone gets:
(3.9) ||F(ψm(t))|| ||F(zm1)||ect, t [0, ω],
and
(3.10) ||ψm(ω)y|| c1e||F(zm1)|| c1e ec2ω(m1) ||F(z0)||.
From (3.1) and (3.2) one obtains:
ψm(ω)zm=
ω
Z
0
[Φ(z(τ)) Φ(zm1)] =
ω
Z
0
1
Z
0
d
dsΦ(z( ))ds =
(3.11) =
ω
Z
0
τ
1
Z
0
Φ0(z()) ˙z()ds
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS7
Therefore
(3.12) ||ψm(ω)zm|| ω
ω
Z
0||Φ0(z(s))|| ||Φ(z(s))||ds.
Using (3.9) one gets:
||ψm(ω)zm|| ωN0N2
ω
Z
0||F(z(s))||ds
(3.13) ω2N0N2||F(zm1)|| ω2N0N2ec2ω(m1)||F(z0)||.
From (3.8), (3.10), and (3.13) one gets:
(3.14) ||zmy|| (c1eec2ω(m1) +ω2N0N2ec2ω(m1) )||F(z0)||.
Using condition (2.10) one obtains:
(3.15) ||zmy|| rec2 .
Also
||F(zm)|| ||F(ψm(ω))|| +N1||ψm(ω)zm|| e||F(zm1)||
+ω2N0N1N2ec2ω(m1)||F(z0)|| ec2ω(m1) (e +ω2N0N1N2)||F(z0)||
(3.16) (e +ω2N0N1N2)ec2ω(m1)||F(z0)||.
From (3.16) and (3.5) one gets:
(3.17) ||F(zm)|| ||F(z0)||ec2.
Thus the estimate (2.10) and (3.5) hold also for n=m. Theorem 3.1 is proved.
Corollary 2. Assume that there exist some positive numbers r,csuch that:
(1) yis the unique solution to problem (1.1) in the ball Br(y), and an initial
approximation point z0Br(y),
(2) the assumptions of Theorem2.2 are satisfied in Br(y),
(3)
(3.18) ||F0(h)|| N1,||Φ0(h)|| N2for hBr(y),
(4) c2and ωare some positive constants satisfying the following inequality:
(3.19) e +cN2ω2max 1,rN1
||F(z0)||ec2ω,
(5) z0is an initial approximation point and the sequence {zn},n= 1,2, . . . , is
defined recursively:
(3.20) zn=zn1+ωΦ(zn1),
Then all {zn},n= 1,2, . . . , belong to the ball Br(y)and
(3.21) ||zny|| rec2 ,||F(zn)|| ||F(z0)||ec2 , n = 0,1,2, . . . .
Proof of Corollary 2.
Since the assumptions of Theorem 2.2 are satisfied in Br(y), choosing
(3.22) N0=rc
||F(z0)||
8 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
one gets the first inequality in (3.3) from (2.3). The second and the third inequalities
in (3.3) follow from (3.18).
From Corollary 1 one gets that Br(y) is a uniformly-exponentially Φ-attractive
set for ywith cdefined in condition (2.2) and
(3.23) c1=r
||F(z0)||.
For such N0and c1condition (3.19) is equivalent to conditions (2.10) and (3.5). To
finish the proof one refers to Theorem 3.1.
4. Nonlinear integral inequality
The main result of this section is Theorem 4.2 which is used throughout the
paper.
The following lemma is a version of some known results concerning integral
inequalities (see e.g. Theorem 22.1 in [23]). For convenience of the reader and to
make the presentation essentially self-contained we include a proof.
Lemma 4.1. Let f(t, w),g(t, u)be continuous on region [0, T )×D(DR,
T ) and f(t, w)g(t, u)if wu,t(0, T ),w, u D. Assume that g(t, u)is
such that the Cauchy problem
(4.1) ˙u=g(t, u), u(0) = u0, u0D
has a unique solution. If
(4.2) ˙wf(t, w), w(0) = w0u0, w0D,
then u(t)w(t)for all tfor which u(t)and w(t)are defined.
Proof of Lemma 4.1
Step 1. Suppose first f(t, w)< g(t, u), if wu. Since w0u0and ˙w(0)
f(t, w0)< g(t, u0) = ˙u(0), there exists δ > 0 such that u(t)> w(t) on (0, δ].
Assume that for some t1> δ one has u(t1)< w(t1).Then for some t2< t1one has
u(t2) = w(t2) and u(t)< w(t) for t(t2, t1].
One gets
˙w(t2)˙u(t2) = g(t, u(t2)) > f (t, w(t2)) ˙w(t2).
This contradiction proves that there is no point t2such that u(t2) = w(t2).
Step 2. Now consider the case f(t, w)g(t, u), if wu. Define
˙un=g(t, un) + εn, un(0) = u0, εn>0, n = 0,1, ...,
where εntends monotonically to zero. Then
˙wf(t, w)g(t, u)< g(t, u) + εn, w u.
By Step 1 un(t)w(t), n= 0,1, ... . Fix an arbitrary compact set [0, T1], 0 < T1<
T.
(4.3) un(t) = u0+
t
Z
0
g(τ, un(τ)) +εnt.
Since g(t, u) is continuous, the sequence {un}is uniformly bounded and equicon-
tinuous on [0, T1]. Therefore there exists a subsequence {unk}which converges
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS9
uniformly to a continuous function u(t). By continuity of g(t, u) we can pass to the
limit in (4.3) and get
(4.4) u(t) = u0+
t
Z
0
g(τ, u(τ)), t [0, T1].
Since T1is arbitrary (4.4) is equivalent to the initial Cauchy problem that has a
unique solution. The inequality unk(t)w(t), k = 0,1, ... implies u(t)w(t).If
the solution to the Cauchy problem (4.1) is not unique, the inequality w(t)u(t)
holds for the maximal solution to (4.1).
The following theorem is a key to the basic result, namely to Theorem 5.1.
Theorem 4.2. Let γ(t), σ(t), β (t)C[t0,)for some real number t0. If there
exists a positive function µ(t)C1[t0,)such that
(4.5) 0 σ(t)µ(t)
2γ(t)˙µ(t)
µ(t), β(t)1
2µ(t)γ(t)˙µ(t)
µ(t),
then a nonnegative solution to the following inequalities:
(4.6) ˙v(t) γ(t)v(t) + σ(t)v2(t) + β(t), v(t0)<1
µ(t0),
satisfies the estimate:
(4.7) v(t)1ν(t)
µ(t)<1
µ(t),
for all t[t0,), where
(4.8) ν(t) = 1
1µ(t0)v(t0)+1
2Zt
t0γ(s)˙µ(s)
µ(s)ds1
.
Remark 4.3.Without loss of generality one can assume β(t)0.
In [8] a differential inequality ˙v A(t)ψ(v(t)) + β(t) was studied under some
assumptions which include, among others, the positivity of ψ(v) for v > 0. In
Theorem 4.2 the term γ(t)v(t) + σ(t)v2(t) (which is analogous to some extent
to the term A(t)ψ(v(t))) can change sign. Our Theorem 4.2 is not covered by
the result in [8]. In particular, in Theorem 4.2 an analog of ψ(v), for the case
γ(t) = σ(t) = A(t), is the function ψ(v) := vv2. This function goes to −∞ as v
goes to +, so it does not satisfy the positivity condition imposed in [8].
Unlike in the case of Bihari integral inequality ([10]) one cannot separate vari-
ables in the right hand side of the first inequality (4.6) and estimate v(t) by a
solution of the Cauchy problem for a differential equation with separating vari-
ables. The proof below is based on a special choice of the solution to the Riccati
equation majorizing a solution of inequality (4.6).
Proof of Theorem 4.2.
Denote:
(4.9) w(t) := v(t)eRt
t0γ(s)ds,
then (4.6) implies:
(4.10) ˙w(t)a(t)w2(t) + b(t), w(t0) = v(t0),
10 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
where
a(t) = σ(t)eRt
t0γ(s)ds, b(t) = β(t)eRt
t0γ(s)ds.
Consider Riccati’s equation:
(4.11) ˙u(t) = ˙
f(t)
g(t)u2(t)˙g(t)
f(t).
One can check by a direct calculation that the the solution to problem (4.11) is
given by the following formula [17, eq. 1.33]:
(4.12) u(t) = g(t)
f(t)+"f2(t) CZt
t0
˙
f(s)
g(s)f2(s)ds!#1
.
Define fand gas follows:
(4.13) f(t) := µ1
2(t)e1
2Rt
t0γ(s)ds, g(t) := µ1
2(t)e
1
2Rt
t0γ(s)ds,
and consider the Cauchy problem for equation (4.11) with the initial condition
u(t0) = v(t0). Then Cin (4.12) takes the form:
C=1
µ(t0)v(t0)1.
From (4.5) one gets
a(t)˙
f(t)
g(t), b(t) ˙g(t)
f(t).
Since fg =1 one has:
Zt
t0
˙
f(s)
g(s)f2(s)ds =Zt
t0
˙
f(s)
f(s)ds =1
2Zt
t0γ(s)˙µ(s)
µ(s)ds.
Thus
(4.14) u(t) = eRt
t0γ(s)ds
µ(t)"11
1µ(t0)v(t0)+1
2Zt
t0γ(s)˙µ(s)
µ(s)ds1#.
It follows from conditions (4.5) and from the second inequality in (4.6) that the
solution to problem (4.11) exists for all t[0,) and the following inequality
holds with ν(t) defined by (4.8):
(4.15) 1 >1ν(t)µ(t0)v(t0).
From Lemma 4.1 and from formula (4.14) one gets:
(4.16) v(t)eRt
t0γ(s)ds := w(t)u(t) = 1ν(t)
µ(t)eRt
t0γ(s)ds <1
µ(t)eRt
t0γ(s)ds,
and thus estimate (4.7) is proved.
To illustrate conditions of Theorem 4.2 consider the following examples of func-
tions γ,σ,β, satisfying (4.5) for t0= 0.
Example 4.4.Let
(4.17) γ(t) = c1(1 + t)ν1, σ(t) = c2(1 + t)ν2, β(t) = c3(1 + t)ν3,
where c2>0, c3>0. Choose µ(t) := c(1 + t)ν,c > 0. From (4.5), (4.6) one gets
the following conditions
c2cc1
2(1 + t)ν+ν1ν2
2(1 + t)ν1ν2,
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS11
(4.18) c3c1
2c(1 + t)ν1νν3ν
2c(1 + t)ν1ν3, cv(0) <1.
Thus one obtains the following conditions:
(4.19) ν1 1, ν2ν1νν1ν3,
and
(4.20) c1> ν, 2c2
c1νcc1ν
2c3
, cv(0) <1.
Therefore for such γ,σ,βa function µwith the desired properties exists if
(4.21) ν1 1, ν2+ν32ν1,
and
(4.22) c1> ν2ν1,2c2c3c1+ν1ν2,2c2v(0) < c1+ν1ν2.
In this case one can choose ν=ν2ν1,c=2c2
c1+ν1ν2. However in order to have
v(t)0 as t+(the case of interest in Theorem 5.1) one needs the following
conditions:
(4.23) ν1 1, ν2+ν32ν1, ν1> ν3,
and
(4.24) c1> ν2ν1,2c2c3c1,2c2v(0) < c1.
Example 4.5.If
γ(t) = γ0, σ(t) = σ0eν t, β(t) = β0eνt , µ(t) = µ0eν t,
then conditions (4.5), (4.6) are satisfied if
0σ0µ0
2(γ0ν), β01
2µ0
(γ0ν), µ0v(0) <1.
Example 4.6.Here and throughout the paper log stands for the natural logarithm.
For some t1>0
γ(t) = 1
plog(t+t1), µ(t) = clog(t+t1),
conditions (4.5), (4.6) are satisfied if
0σ(t)c
2plog(t+t1)1
t+t1,
β(t)1
2clog2(t+t1)plog(t+t1)1
t+t1, v(0)clog t1<1.
In all considered examples µ(t) can tend to infinity as t+and provide a
decay of a nonnegative solution to integral inequality (4.6) even if σ(t) tends to
infinity. Moreover in the first and the third examples v(t) tends to zero as t+
when γ(t)0 and σ(t)+.
12 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
5. Regularization procedure for ill-posed problems
In the well-posed case (when the Fechet derivative F0of the operator Fis a
bijection in a neighborhood of the solution of equation (1.1)) in order to solve
equation (1.1) one can use the following continuous processes:
simple iteration method:
(5.1) ˙z(t) = F(z(t)), z(0) = z0H,
Newton’s method:
(5.2) ˙z(t) = [F0(z(t))]1F(z(t)), z(0) = z0H.
However if F0is not continuously invertible (ill-posed case) one has to replace the
Cauchy problems (5.1) and (5.2) by the corresponding regularized Cauchy problems
(see [4, 5, 6]):
regularized simple iteration method:
(5.3) ˙z(t) = [F(z(t)) + ε(t)(z(t)˜z0)], z(0) = z0H,
regularized Newton’s method:
(5.4) ˙z(t) = [F0(z(t))+ε(t)I]1[F(z(t)) + ε(t)(z(t)˜z0)], z(0) = z0H.
Here ˜z0His some element, ε(t)>0 is a suitable function, with the properties
specified in Theorems 7.12 and 7.22, and Iis the identity operator. The equations
in (5.3) and (5.4) are no longer autonomous.
Our goal is to develop a uniform approach to such regularized methods. Let us
consider the Cauchy problem:
(5.5) ˙z(t) = Φ(z(t), t), z(0) = z0H,
with an operator Φ : H×[0,)H.
Let ybe a solution to equation (1.1) (as it was mentioned in Introduction, we
assume that this equation is solvable). Denote: BR(y) := {h:hH, ||hy|| < R}.
Theorem 5.1. Assume that there exists r > 0such that Φ(h, t)is Fr´echet dif-
ferentiable with respect to hin the ball B2r(y)for any t[0,)and satisfy the
following condition:
there exists a differentiable function x(t),x: [0,+)Br(y), such that for
any hB2r(y), t [0,+)
(5.6) (Φ(h, t), h x(t)) α(t)||hx(t)|| γ(t)||hx(t)||2+σ(t)||hx(t)||3,
where α(t)is a continuous function, α(t)0,γ(t)and σ(t)satisfy conditions (4.5)
of Theorem 4.2 with
(5.7) β(t) := ||˙x(t)|| +α(t),
µ(t)in (4.5) tends to +as t+, and
(5.8) ||z0x(0)|| <1
µ(0),inf
t[0,+)µ(t)1
r.
Then problem (5.5) has a unique solution z(t)B2r(y)for all t[0,), and
(5.9) ||z(t)x(t)|| <1ν(t)
µ(t)<1
µ(t),
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS13
where ν(t)is defined by (4.8), and
(5.10) lim
t+||z(t)x(t)|| = 0.
Remark 5.2.One can choose the regularizing operator Φ(h, t) in (5.5) such that
condition (5.6) holds in the case when F0∗(h)F0(h) is not boundedly invertible (see
Sect. 7, Lemmas 7.11, 7.20).
Proof of Theorem 5.1
Since Φ(h, t) is Fechet differentiable with respect to hproblem (5.5) is locally
uniquely solvable. Denote by [0, T ) the maximal interval on which the solution
z(t) to problem (5.5) exists and z(t)B2r(y). One has to show that T= +.
Assume T < +, then the trajectory z(t) hits the boundary of B2r(y) at t=T:
||z(T)y|| = 2r. Since His a real Hilbert space one has:
(5.11)
1
2
d
dt||z(t)x(t)||2= ( ˙z˙x, z(t)x(t)) = (Φ(z(t), t), z(t)x(t)) ( ˙x, z (t)x(t)).
Therefore from (5.6) and (5.7) for t[0, T ) one obtains
(5.12) 1
2
d
dt||z(t)x(t)||2 γ||z(t)x(t)||2+σ(t)||z(t)x(t)||3+β(t)||z(t)x(t)||.
Denote
v(t) := ||z(t)x(t)||.
From (5.12) one has:
v(t) ˙v(t) γ(t)v2(t) + σ(t)v3(t) + β(t)v(t).
If v > 0, one gets:
(5.13) ˙v(t) γ(t)v(t) + σ(t)v2(t) + β(t).
If v= 0 on some interval, then inequality (5.13) is satisfied trivially because β(t)
0. Thus (5.13) holds for all t > 0.
By Theorem 4.2 using (5.8) one obtains
(5.14) ||z(t)x(t)|| 1
µ(t)r, for t[0, T ).
Thus, since x(t)Br(y), one gets
(5.15) ||z(t)y|| ||z(t)x(t)|| +||x(t)y|| <2r, for t[0, T ).
Therefore there exists a sequence {tn} Tsuch that {z(tn)}converges weakly to
some z. From equation (5.5) one derives the uniform boundedness of the norm
||˙z(t)|| on [0, T ) since ||Φ(z(t), t)|| const <for ||z(t)|| const and 0 t
const. Thus there exists limtT||z(t)z|| = 0. Since
(5.16) ||zy|| ||zx(T)|| +||x(T)y|| <2rfor t[0, T ),
the conditions for the unique local solvability of the Cauchy problem for equation
(5.5) with initial condition z(T) = zare satisfied. Therefore one can continue
the solution to (5.5) through the point T. This contradicts the assumption of
maximality of T, thus T= +. Moreover, from (4.7) one gets:
(5.17) lim
t+||z(t)x(t)|| lim
t+
1
µ(t)= 0.
14 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
In order to establish the discretization theorem in the next section we need
to estimate ||Φ|| along the trajectory z(t). The following theorem gives such an
estimate.
Theorem 5.3. Let the assumptions of Theorem 5.1 hold and z(t)solve problem
(5.5). Assume that the following two conditions hold:
(1) for any hbelonging to the trajectory z(t),Φ(h, t)is differentiable with re-
spect to tand the following two inequalities hold:
(5.18) 0(h, t)ξ, ξ ) a1(t) + a2(t)||Φ(h, t)||!||ξ||2for any ξH,
(5.19)
Φ
∂t (h, t)β1(t) + b1(t)||Φ(h, t)|| +b2(t)||Φ(h, t)||2,
where Φ0(h, t)is the Fechet derivative with respect to h,Φ/∂t is the de-
rivative with respect to t,a1(t),a2(t),β1(t),b1(t), and b2(t)are continuous
functions;
(2) the functions γ1(t) := a1(t)b1(t),σ1(t) := a2(t) + b2(t)and β1(t)are con-
tinuous and satisfy conditions (4.5) of Theorem 4.2 with v1(0) := ||Φ(z0,0)|| <
1
µ1(0) and with µ1(t)>0, which tends to +as t+.
Then the following estimate holds:
(5.20) ||Φ(z(t), t)|| 1
µ1(t).
Proof of Theorem 5.3.
Denote v1(t) := ||Φ(z(t), t)||. Recall that His a real Hilbert space. From (5.5)
one gets:
(5.21) v1
dv1
dt =d
dtΦ(z(t), t),Φ(z(t), t)=Φ
∂t + Φ0(z(t), t) ˙z(t),Φ(z(t), t).
From (5.21) and (5.5) one gets:
(5.22) v1
dv1
dt = Φ0(z(t), t)Φ(z(t), t),Φ(z(t), t)!+Φ
∂t ,Φ(z(t), t).
Using (5.18) and (5.19) one obtains from (5.22) the following inequality:
v1˙v1(a1(t) + a2(t)v1)v2
1+ (β1(t) + b1(t)v1+b2(t)v2
1)v1,
or
(5.23) ˙v1β1+ (b1a1)v1+ (a2+b2)v2
1=β1γ1v1+σ1v2
1,
where γ1(t) = a1(t)b1(t) and σ1(t) = a2(t) + b2(t).
It follows from the assumptions of Theorem 5.3 that v1(t) satisfies the integral
inequality:
(5.24) dv1
dt γ1(t)v1(t) + σ1(t)v2
1(t) + β1(t).
To finish the proof of Theorem 5.3 one uses Theorem 4.2.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS15
6. Discretization theorem for ill-posed problems
The theorem of this section gives an answer to the following question: under
what assumptions on Φ and {ωn}the convergence of a continuous process
(6.1) ˙z(t) = Φ(z(t), t), z(0) = z0,
implies the convergence of the corresponding discrete process
(6.2) zn=zn1+ωnΦ(zn1, tn1),
(6.3) tn=tn1+ωn, t0= 0, n = 1,2, . . . ,
where z0in (94) is the same as in (6.1).
Theorem 6.1. Let Φsatisfy conditions of Theorems 5.1 and 5.3 with a function
x(t), which tends to yas t+.
Assume that
(1)
(6.4) ||Φ0(h, t)|| a3(t) + a2(t)||Φ(h, t)||,
where a3(t)is a nonnegative continuous function and a2(t)is the same as
in Theorem 5.3;
(2) the sequence {zn}is defined by formulas (6.2) and (6.3);
(3)
(6.5) A= sup
t[0,+)
(a1(t) + a3(t)) <+,
where a1(t)is defined in (5.18);
(4)
(6.6) 0 < ωn1
A,
X
n=1
ωn=;
(5)
(6.7) ωn
νn(ωn)<µ1(tn)
µ(tn),
where
(6.8) 1
νn(t)=1
1µ(tn1)||zn1x(tn1)|| +1
2Zt
tn1γ(s)˙µ(s)
µ(s)ds,
the continuous positive functions µ(t),µ1(t)are defined in Theorems 5.1
and 5.3;
(6) the function µ1(t)is monotonically increasing.
Then the following conclusions hold:
i) all zn,n= 1,2, . . . , belong to the ball B2r(y);
ii)
(6.9) ||znx(tn)|| <1
µ(tn);
iii)
(6.10) tn+,1
µ(tn)0as n ;
16 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
iv)
(6.11) lim
n→∞ ||zny|| = 0.
Proof of Theorem 6.1.
Statements (6.10) follow from (6.6) and from the assumption that µ1(t) as
t . We prove (6.9) by induction. It follows from (5.8) that ||z0x(0)|| <1
µ(0) .
Suppose that
(6.12) ||zn1x(tn1)|| <1
µ(tn1).
We want to prove that (6.12) with nreplacing n1 is true. Denote by ψn(t) the
solution to the following Cauchy problem:
(6.13) ˙z(t) = Φ(z(t), t), tn1< t tn, z(tn1) = zn1.
For problem (6.13) the conditions of Theorem 5.1 are satisfied by assumption.
Therefore from (5.9) one gets:
(6.14) ||ψn(t)x(t)|| 1νn(t)
µ(t),
with νn(t) defined in (6.8). Using (6.1) and (6.2) one gets:
ψn(tn)zn=
tn
Z
tn1
[Φ(z(τ), τ )Φ(zn1, tn1)] =
=
tn
Z
tn1
1
Z
0
d
dsΦ(z(tn1+s(τtn1)), tn1+s(τtn1))ds =
=
tn
Z
tn1
1
Z
0
[[Φ0(z(tn1+s(τtn1))) ˙z(tn1+s(τtn1))(τtn1)+
+Φ
∂t (z(tn1+s(τtn1)))(τtn1)
(6.15)
=
ωn
Z
0
θdθ
1
Z
0
0(z(tn1+), tn1+) ˙z(tn1+) + Φ
∂t (z(tn1+), tn1+)]ds.
Replacing tn1+θby τand using (6.1), one gets:
||ψn(tn)zn||
ωn
Z
0
tn
Z
tn1 ||Φ0(z(s), s)Φ(z(s), s) + Φ
∂t (z(s), s))||!ds
(6.16) ωn
tn
Z
tn1||Φ0(z(s), s)|| ||Φ(z(s), s)|| +
Φ
∂t (z(s), s)ds.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS17
From (5.19) and (6.4) one obtains:
||Φ0(z(s), s)|| ||Φ(z(s), s)|| +
Φ
∂t (z(s), s)
(a3(s) + a2(s)||Φ(z(s), s)||)||Φ(z(s), s)||
(6.17) +β1(s) + b1(s)||Φ(z(s), s)|| +b2(s)||Φ(z(s), s)||2.
Using estimate (5.20) and the notation σ1=a2+b2from Theorem 5.3, one gets:
(6.18) ||ψn(tn)zn|| ωn
tn
Z
tn1a3(s)
µ1(s)+a2(s)
µ2
1(s)+β1(s) + b1(s)
µ1(s)+b2(s)
µ2
1(s)ds.
According to the assumptions of Theorem 5.3 conditions (4.5) of Theorem 4.2 are
satisfied with γ1(t) = a1(t)b1(t), σ1(t), β1(t) and µ1(t) in place of γ,σ,βand µ
respectively. Thus one gets:
||ψn(tn)zn|| ωn
tn
Z
tn1β1(s) + a3(s) + b1(s)
µ1(s)+a2(s) + b2(s)
µ2
1(s)ds
(6.19) ωn
tn
Z
tn1β1(s) + a1(s) + a3(s)γ1(s)
µ1(s)+σ1(s)
µ2
1(s)ds.
Using inequalities (4.5), assumption (6.5), first assumption (6.6), the positivity and
the monotonicity of µ1(t), one gets the estimate:
||ψn(tn)zn|| ωn
tn
Z
tn1a1(s) + a3(s)
µ1(s)˙µ1
µ2
1ds
(6.20) ωnn1
µ1(tn1)+1
µ1(tn)ωn
1
µ1(tn).
From (6.20) and (6.7) one gets:
(6.21) ||ψn(tn)zn|| <νn(ωn)
µ(tn).
From this inequality and (6.14) one obtains:
||znx(tn)|| ||znψn(tn)|| +||ψn(tn)x(tn)||
(6.22) <νn(ωn)
µ(tn)+1νn(ωn)
µ(tn)=1
µ(tn).
Thus (6.9) is proved.
Finally, (6.11) follows follows from (6.9) and (6.10). Theorem 6.1 is proved.
Remark 6.2.We prove now that a sequence {ωn}which satisfies (6.6) and (6.7) does
always exist. Choose an arbitrary c(0,1) and consider a continuous function:
(6.23) χ(s):=scµ1(tn1+s)
µ(tn1+s)νn(s)
18 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
on the interval [0,1/A] with νndefined in (6.8). Clearly χ(0) <0. Choose ωn= 1/A
if χ(s)<0 on [0,1/A] and ωn=s0otherwise, where s0is the smallest zero of the
function χ(s) on [0,1/A]. Therefore
(6.24) ωn=1
Aor ωn=cµ1(tn1+ωn)
µ(tn1+ωn)νn(ωn).
By (6.3) tn=tn1+ωn. Since 0 < c < 1, the constructed sequence {ωn}satisfies
the first condition in (6.6) and condition (6.7). To show that the second condition
in (6.6) is also satisfied assume that
(6.25)
X
n=1
ωn=C < .
Then (6.2) implies tnCfor all n. Therefore it follows from (6.24) that for every
neither ωn= 1/A or
ωn=cµ1(tn)
µ(tn)(µ(tn)νn(ωn)˜c > 0.
Thus one has inequality ωnmin{˜c, 1/A}>0 for all n. This is a contradiction to
(6.25).
7. Regularized Continuous Methods
for Monotone Operators
In this section we apply the regularization procedure described in Sect. 3 to
solve nonlinear operator equation (1.1) with a monotone operator F. Assume that
F(y) = 0 and Fis Fr´echet differentiable in a ball B2r(y).
Definition 7.1. A mapping ϕis monotone in a ball Br(y)Hif
(ϕ(h1)ϕ(h2), h1h2)0,h1, h2Br(y).
Note that a Fechet differentiable operator is monotone in the ball B2r(y) if and
only if
(7.1) (F0(h)ξ, ξ )0 for all hB2r(y), ξ H.
Definition 7.2. A mapping ϕis strongly monotone in a ball Br(y) if there exists
a constant k > 0 such that
(ϕ(h1)ϕ(h2), h1h2)k||h1h2||2,h1, h2Br(y).
Under the assumption (7.1) the operator F0(h) + εI is boundedly invertible for
hB2r(y) and for all positive ε. Define Φ as follows:
(7.2) Φ(h, t):=[F0(h) + ε(t)I]1[F(h) + ε(t)(h˜z0)],
where ˜z0is a point belonging to Br(y), and ε(t) is some positive function on the
interval [0,). Some restrictions on ε(t) will be stated in Theorem 7.12.
An outline of the convergence proof is the following. One considers an auxiliary
well-posed problem:
(7.3) Fε(x) := F(x) + ε(t)(x˜z0) = 0, ε > 0,
and shows that the difference between its solution x(t) and the solution z(t) to
problem (5.5) tends to zero as t+. On the other hand one shows that x(t)
converges to the exact solution yof equation (1.1). Thus one proves the convergence
of z(t) to yas t+.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS19
We recall first some definitions from nonlinear functional analysis which are used
below. The most essential restrictions on the operator Fimposed in this section
are monotonicity and Fechet differentiability in the ball B2r(y).
Definition 7.3. A mapping ϕ:HHis hemicontinuous if the map t(ϕ(x0+
th1), h2) is continuous in a neighborhood of t= 0 for any x0, h1, h2H.
Definition 7.4. A mapping ϕ:HHis coercive if ||ϕ(h)|| as ||h|| .
Let *denote weak convergence and denote strong convergence in H.
Definition 7.5. ϕis w-closed in a ball Br(y) if
{ {xn}n=1,2,... Br(y), xn* ξ and ϕ(xn)η}implies ϕ(ξ) = η.
Lemma 7.6. If ϕis monotone and continuous in a ball Br(y), then ϕis w-closed
in Br(y).
Proof of Lemma 7.6.
Consider a sequence {xn}n=1,2,... Br(y) such that xn* ξ and ϕ(xn)η.
Take any hHand sufficienly small positive tsuch that ξ+th Br(y). Since ϕ
is monotone in a ball Br(y), one has:
(7.4) (ϕ(xn)ϕ(ξ+th), xn(ξ+th)) 0.
Since ϕ(xn)ηand xn* ξ one concludes (ηϕ(ξ+th), h)0. Let t0 and
use continuity of ϕto get (ηϕ(ξ), h)0 for any hH. This implies η=ϕ(ξ).
Lemma is proved.
The well known result (see [14, p.100]) says that if an operator φ:HHis
monotone, hemicontinuous, and coercive, then the equation φ(h) = 0 is uniquely
solvable. Because the monotone operator F, which is studied in this section, is
defined only in the ball B2r(y), in the following Lemma the existence of the solution
to equation (7.3) is proved for locally defined smooth monotone operators.
Lemma 7.7. If Fis monotone and Fr´echet differentiable in a ball Br(y)then
problem (7.3) is uniquely solvable in Br(y).
Proof of Lemma 7.7. Consider the Cauchy problem:
(7.5) ˙z=Fε(z), z(0) = y,
where Fε(z) = F(z) + ε(z˜z0), ε > 0 is a constant, and ˜z0Br(y). Then
1
2
d
dt||Fε(z(t))||2=(F0
ε(z)Fε(z), Fε(z))
=(F0(z)Fε(z), Fε(z)) ε(Fε(z), Fε(z)) ε||Fε(z)||2.
Therefore
d
dt||Fε(z(t))|| ε||Fε(z(t))||.
From this inequality one gets
||Fε(z(t))|| eεt||Fε(y)|| eεt ε||y˜z0|| < rεeεt,
if z(t)Br(y). Let us prove that z(t)Br(y) for all t > 0:
||z(t)y||
t
Z
0||Fε(z(s))||ds ||y˜z0||1eεt <||y˜z0|| < r.
20 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Hence z(t) is defined for all t[0,+). Also
||z(t2)z(t1)||
t2
Z
t1
||Fε(z(s))||ds reεt10 as min{t1, t2} +.
Thus, by the Cauchy criterion, there exists the strong limit: xε:= limt→∞ z(t)
Br(y). Since Fεis Fechet differentiable in Br(y) it is continuous in Br(y) and
limt→∞ ||Fε(z(t))|| = 0. Thus xεis a solution to the equation (7.3) in Br(y).
Lemma 7.8. Suppose all the assumptions of Lemma 7.7 are satisfied, and yis the
unique solution to (1.1) in B2r(y). Let x(t)solve (7.3) for ε=ε(t), and ε(t)tend
to zero as t+. Then x(t)Br(y)for all t[0,+)and
(7.6) lim
t+||x(t)y|| = 0.
Proof of Lemma 7.8.
First let us show that x(t) is bounded. Indeed, it follows from (7.3) that
F(x(t)) F(y) + ε(t)[x(t)y] = ε(t)(˜z0y).
Therefore
(7.7) (F(x(t)) F(y), x(t)y) + ε(t)||x(t)y||2=ε(t)(˜z0y, x(t)y).
This and (7.1) imply
(7.8) ||x(t)y|| ||˜z0y|| < r,
and therefore F(x(t)) 0 = F(y) as t+.
Also, it follows from (7.8) that there exists a sequence {x(tn)},tn as
n , which converges weakly to some element ˜yH. Since Fis w-closed one
gets that Fy) = 0, and, by the uniqueness of the solution to (1.1) in B2r(y), it
follows that ˜y=y. Let us show that the sequence {x(tn)}converges strongly to y.
Indeed, from (7.7), (7.1) and the relation x(tn)* y, one gets:
(7.9) ||x(tn)y||2z0y, x(tn)y)0 as n .
Thus
(7.10) lim
n→∞ ||x(tn)y|| = 0.
From (7.10) it follows by the standard argument that x(t)yas t . Lemma 7.8
is proved.
Lemma 7.9. Assume that Fis continuously Fechet differentiable in a ball B2r(y),
supxB2r(y)||F0(x)|| N1, and condition (7.1) holds. If ε(t)is continuously dif-
ferentiable, then the solution x(t)to problem (7.3) with ε=ε(t)is continuously
differentiable in the strong sense and one has
(7.11) ||˙x(t)|| |˙ε(t)|
ε(t)||y˜z0|| r|˙ε(t)|
ε(t), t [0,+).
Proof of Lemma 7.9.
Fechet differentiability of Fimplies hemicontinuity of F. Therefore problem
(7.3) with ε=ε(t) is uniquely solvable. The differentiability of x(t) with respect to t
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS21
follows from the implicit function theorem ([14]). To derive (7.11) one differentiates
equation (7.3) and uses the estimate
[F0(x(t)) + ε(t)I]1
1
ε(t). The result is:
(7.12) ||˙x(t)|| =|˙ε(t)| · ||[F0(x(t)) + ε(t)I]1(x(t)˜z0)|| |˙ε(t)|
ε(t)||y˜z0||.
Here we have used the estimate
(7.13) ||x(t)˜z0|| ||y˜z0||,
which can be derived from (7.3) similarly to the derivation of (7.8). Indeed,
F(x(t)) F(y) + ε(x(t)˜z0) = 0.
It follows from the monotonicity of Fthat (x(t)˜z0, x(t)y)0. Therefore
(x(t)˜z0, x(t)˜z0)(x(t)˜z0, y ˜z0) ||x(t)˜z0|| · ||y˜z0||.
From this estimate and (7.8) one gets (7.13).
Finally, estimate (7.11) follows from (7.12) and (7.13).
Remark 7.10.Lemma 7.8 and formula (7.10) do not give a rate of convergence to
y. This rate, in general, can be arbitrary slow.
To estimate ||x(t)y|| one may try to use the following inequality:
(7.14) ||x(t)y||
+
Z
t
||˙x(τ)|| r
+
Z
t
|˙ε(τ)|
ε(τ)dτ.
Since ε(t)0 as t+monotonically, then
+
R
t
|˙ε(τ)|
ε(τ) =log ε(t)|
t= +,
so in fact one can not use the above estimate in order to estimate the rate of
convergence of ||x(t)y||.
This illustrates the strength of conclusion (7.6).
Lemma 7.11. Assume that ε=ε(t)>0,Fis twice Fechet differentiable in
B2r(y), condition (7.1) holds, and
(7.15) ||F0(x)|| N1,||F00(x)|| N2xB2r(y).
Then for the operator Φdefined by (7.2) and x(t), the solution to (7.3) with ε=ε(t),
estimate (5.6) holds with
(7.16) α(t)0, γ(t)1,and σ(t) := N2
2ε(t).
Proof of Lemma 7.11.
Since x(t) is the solution to (7.3) applying Taylor’s formula one gets:
(Φ(h, t), h x(t)) = ([F0(h) + ε(t)I]1[F(h)F(x(t)) + ε(t)(hx(t))], h x(t))
[F0(h) + ε(t)I]1[F0(h)(hx(t)) + ε(t)(hx(t))], h x(t)!+N2||hx(t)||3
2ε(t)
(7.17) = −||hx(t)||2+N2||hx(t)||3
2ε(t).
From (7.17) and (5.6) the conclusion of Lemma 7.11 follows.
Let us state the main result of this section.
22 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Theorem 7.12. Assume:
(1) problem (1.1) has a unique solution yin B2r(y);
(2) Fis twice Fechet differentiable in B2r(y)and inequalities (7.1), (7.15)
hold;
(3)
(7.18) z0,˜z0Br(y);
(4) ε(t)>0is continuously differentiable, monotonically decreases to 0as t
+, and
(7.19) Cε:= max
t[0,)
ε(0)|˙ε(t)|
ε2(t)<1;
(5)
(7.20) ε(0) >N2
1Cε||z0x(0)||;
(6)
(7.21) ε(0) 2N2Cε
(1 Cε)2||˜z0y||;
(7)
(7.22) rε(0)(1 Cε)
N2
;
(8) Φ is defined by (7.2).
Then the following conclusions hold:
i) Cauchy problem (5.5) has a unique solution z(t)B2r(y)for t[0,+),
ii)
(7.23) ||z(t)x(t)|| 1Cε
N2
ε(t),lim
t+||z(t)y|| = 0,
iii)
||F(z(t))|| et(||F(z0)|| +ε(0)||˜z0z0||) + 3r
1Cε
ε(t)
(7.24) ||F(z0)||
ε(0) +||˜z0z0|| +3r
1Cεε(t).
Remark 7.13.Note that Theorem 7.12 establishes convergence for any initial ap-
proximation point z0if ris big enough and ε(t) is appropriately chosen. To make
an appropriate choice of ε(t) one has to choose some function ε(t) satisfying condi-
tion (7.19). Examples of such functions ε(t) are given below. One can observe that
condition (7.19) is invariant with respect to a multiplication ε(t) by a constant.
Therefore one can choose ε(t) satisfying conditions (7.20) and (7.21) by a multipli-
cation of the original ε(t) by a sufficiently large constant. If |˙ε(t)|
ε2(t)is not increasing,
then in conditions (7.20) and (7.21) Cε:= maxt[0,)ε(0)|˙ε(t)|
ε2(t)can be replaced by
Cε:= |˙ε(0)|
ε(0) .
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS23
Remark 7.14.Inequalities (7.24) give the estimates for ||F(z(t))||, the discrepancy
of the continuous process. The first estimate in (7.24) shows that decay of ||F(z(t))||
is estimated by the sum of two terms. The first term depends on ||F(z(0))|| and on
the distance between points z0and ˜z0. This term decreases with the rate etas in
the well posed cases. The second term decreases slower because of the ill-posedness
of the problem.
Remark 7.15.In order to get an estimate of the convergence rate for ||x(t)y|| one
has to make some additional assumptions either on F(x) or on the choice of the
initial approximation z0. Without such assumptions one cannot give an estimate
of the convergence rate. Indeed, as a simple example consider the scalar equation
F(x):=xm= 0. Then one gets the following algebraic equation for x(ε):
(7.25) Fε(x) := xm+ε(xz0) = 0.
Assume mis a positive integer and z0>0. It is known that the solution to this
equation is an algebraic function which can be represented by the Puiseux series:
x=P
j=1 cjεj
pin some neighborhood of zero. Thus x=c1ε1
p(1 + O(ε)) as ε0.
Now from (7.25) one gets:
cm
1εm
p(1 + O(ε)) + c1ε1+ 1
p(1 + O(ε)) = z0ε.
Thus p=m,c1=z
1
m
0and x(ε) = z
1
m
0ε1
m(1 + O(ε)). For ε= 0 one gets the solution
y= 0. Therefore
(7.26) |x(ε)y| ε1
m, ε 0,
where mcan be chosen arbitrary large.
Below in Propositions 7.17 and 7.19 some sufficient conditions are given that
allow one to obtain the estimates for ||x(t)y||.
Proof of Theorem 7.12.
For α(t), γ(t), σ(t) defined in (7.16), β(t) defined in (5.7), and v(t) := ||z(t)
x(t)|| we are looking for a function µ(t) satisfying inequalities (4.5) and the second
inequality in (4.6). Choose µ(t) = λ
ε(t), where λis a constant. Then rewrite first
inequality (4.5) as
(7.27) N2λ1|˙ε(t)|
ε(t).
Since α(t) = 0, using estimate (7.11) one gets:
(7.28) β(t) = ||˙x(t)|| |˙ε(t)|
ε(t)||y˜z0||.
Therefore the second inequality in (4.5) follows from the inequality:
(7.29) 2||˜z0y|||˙ε(t)|
ε2(t)1
λ1|˙ε(t)|
ε(t).
Also, the the second inequality in (4.6) can be rewritten as:
(7.30) λ||z0x(0)|| < ε(0).
Choose
(7.31) λ:= N2
1Cε
.
24 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Then inequality (7.30) is the same as (7.20). It follows from (7.19) and the monotone
decay of ε(t) that
(7.32) |˙ε(t)|
ε(t)Cε
ε(t)
ε(0) Cε.
From (7.31) and (7.32) one gets:
(7.33) N2
λ= 1 Cε1|˙ε(t)|
ε(t),
and (7.27) holds.
From (7.19), (7.21), and (7.31) one gets:
(7.34) λ=N2
1Cε(1 Cε)ε(0)
2Cε||˜z0y|| 1ε(0)|˙ε(t)|
ε2(t)
2||˜z0y|||˙ε(t)|
ε2(t)
.
Hence one obtains:
(7.35) 2||˜z0y|||˙ε(t)|
ε2(t)1
λ1ε(0)|˙ε(t)|
ε2(t)1
λ1|˙ε(t)|
ε(t).
Therefore the first inequality in (7.29) also follows from the conditions of Theo-
rem 7.12.
It follows from the monotonicity of ε(t) that for the chosen function µ(t) the
assumption (7.22) of Theorem 7.12 is the same as the second inequality in (5.8).
Thus all assumptions of Theorem 5.1 are satisfied. Applying Theorem 5.1 one
concludes that z(t)B2r(y) and inequality (7.21) holds. The second relation
(7.23) follows from (7.6), inequality (7.23) and the triangle inequality:
||z(t)y|| ||z(t)x(t)|| +||x(t)y||.
To prove (7.24), one uses (7.2) and gets:
(7.36) [F0(z(t)) + ε(t)I] ˙z(t) = [F((z(t)) + ε(t)(z(t)˜z0)].
Denote:
(7.37) ρ(t) := ||F((z(t)) + ε(t)(z(t)˜z0)||.
Then it follows from (7.36) and (7.37) that
d
dtρ2(t) = 2 [F0(z(t)) + ε(t)] ˙z(t) + ˙
ε(t)(z(t)˜z0), F ((z(t)) + ε(t)(z(t)˜z0)!
(7.38) 2ρ2(t)+2ρ(t)|˙ε(t)|||z(t)˜z0||.
Since ρ(t) is a positive function, one obtains the following integral inequality:
(7.39) d
dtρ(t) ρ(t) + |˙ε(t)|||z(t)˜z0||.
It was already proved that z(t)B2r(y). Therefore from assumption (7.18) one
gets:
(7.40) ||z(t)˜z0|| ||z(t)y|| +||˜z0y|| 3r.
Using Lemma 4.1 one gets:
(7.41) ρ(t)etρ(0) + 3ret
t
Z
0
es|˙ε(s)|ds.
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS25
Denote:
(7.42) g(t) =
t
Z
0
es|˙ε(s)|ds, f (t) := Cε
1Cε
etε(t).
It follows from (7.32) that for t[0,) the following inequality holds:
(7.43) f0(t) = Cε
1Cε
et[ε(t)+ ˙ε(t)] Cε
1Cε
et|˙ε(t)|
Cε | ˙ε(t)|=et|˙ε(t)|=g0(t).
Since g(0) = 0, and by (7.42) f(0) >0, it follows that f(0) > g(0). Thus f(t)> g(t)
for all t[0,) and one obtains the following inequality:
(7.44) et
t
Z
0
es|˙ε(s)|ds Cε
1Cε
ε(t).
From (7.41) and (7.44) one obtains:
(7.45) ρ(t)etρ(0) + 3rCε
1Cε
ε(t).
It follows from (7.37) and the monotonicity of ε(t) that
(7.46) ρ(0) ||F(z0)|| +ε(0)||z0˜z0||,
and
(7.47) ||F(z(t))|| ρ(t) + ε(t)||z(t)˜z0||.
Thus from (7.45) and (7.47) one gets
(7.48) ||F(z(t))|| et(||F(z0)|| +ε(0)||z0˜z0||)+3rCε
1Cε
ε(t),
and the first inequality in (7.24) is proved.
It follows from (7.32) that
(7.49) [log(ε(t))]0log(ε(0)eCεt)0.
Therefore:
(7.50) log(ε(t)) log(ε(0)eCεt),
and, since Cε<1, one obtains:
(7.51) ε(t)ε(0)et.
The second inequality in (7.24) follows from (7.51) and the first inequality (7.24).
Theorem 7.12 is proved.
Example 7.16.1. Let ε(t) = ε0(t0+t)ν,ε0,t0and νare positive constants. Then
Cε=ν
t0and condition (7.19) is satisfied if ν(0,1] and t0> ν.
2. If ε(t) = ε0
log(t0+t), then Cε=1
t0log t0and condition (7.19) is satisfied if
t0log t0>1.
Note that if ε(t) = ε0eνt then condition (7.19) is not satisfied.
26 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Proposition 7.17. Let all the assumptions of Theorem 7.12 hold. Suppose also
that the following inequality holds:
(7.52) (F(h), h y)c||hy||1+a, a > 0.
Then for the solution z(t)to problem (5.5) with Φdefined in (7.2), the following
estimate holds:
(7.53) ||z(t)y|| =Oεmin{1
a,1}(t).
Proof of Proposition 7.17.
Denote ||x(t)y|| := %(t). Since F(y) = 0, inequality (7.7) implies
(7.54) c%1+a(t) + ε(t)%2(t)ε(t)||˜z0y||%(t)
and %(t)0 as t+. This inequality can be reduced to
(7.55) c%a(t) + ε(t)%(t)ε(t)||˜z0y||.
Thus %a(t)||˜z0y||
cε(t), and
(7.56) %(t) = ||x(t)y|| ||˜z0y||
c
1
a
ε1
a(t).
Combining this estimate with estimate (7.23) for ||z(t)x(t)|| and using the triangle
inequality one gets:
||z(t)y|| ||z(t)x(t)|| +||x(t)y||
(7.57) 1Cε
N2
ε(t) + ||˜z0y||
c
1
a
ε1
a(t).
Proposition 7.17 is proved.
Example 7.18.In the case of a scalar function f(h) and even integer a > 0 the
estimate f(h)(hy)c|hy|1+ameans that f(h) = (hy)ag(h), where g(h)
c > 0, and hence yis a zero of multiplicity afor f.
Proposition 7.19. Let all the assumptions of Theorem 7.12 hold and there exists
vHsuch that
(7.58) z0y=F0(y)v, ||v|| <2
N2
.
Then the solution z(t)to problem (5.5) satisfies the following convergence rate
estimate:
(7.59) ||z(t)y|| 1Cε
N2
+4||v||
2N2||v||ε(t).
Proof of Proposition 7.19.
Note that F(y) = 0. Therefore from (7.3) one gets:
F(x(t)) F(y) + ε(t)(xy) = ε(t)(z0y).
By the Lagrange formula one has:
(7.60)
1
Z
0
(F0(y+s(x(t)y))ds +ε(t)I
(x(t)y) = ε(t)(z0y).
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS27
Denote Qε(x) :=
1
R
0
(F0(y+s(xy))ds +εI. From (7.60) it follows that
||xy|| =ε||Q1
ε(x)Q0(y)v|| ε||Q1
ε(x)(Q0(y)Qε(x))v|| +ε||Q1
ε(x)Qε(x)v||.
Since Qε(x) = Q0(x) + εI, one obtains
||xy|| ε||Q1
ε(x)(Q0(y)Q0(x))v|| +ε||Q1
ε(x)εv|| +ε||v||
(7.61) N2
2||xy||||v|| + 2ε||v||.
Here we have used assumption (7.1) which implies the inequality ||Q1
ε(x)|| 1
ε.
From (7.61) and the inequality (7.58) one gets:
(7.62) ||xy|| 4ε||v||
2N2||v||.
Using the triangle inequality from the first inequality (7.23), and inequalities (7.58)
and (7.62), one gets:
(7.63) ||z(t)y|| ||z(t)x(t)|| +||x(t)y|| 1Cε
N2
ε(t) + 4||v||
2N2||v||ε(t).
For ε=ε(t) and x=x(t) satisfying the assumptions of Theorem 7.12 one con-
cludes that estimate (7.59) holds.
Now we describe the simple iteration scheme for solving nonlinear equation (1.1).
Define:
(7.64) Φ(h, t) := [F(h) + ε(t)(h˜z0)],
where ˜z0Br(y) and ε(t)>0 is defined on [0,+). Some restrictions on ε(t) will
be stated in Theorem 7.22.
Lemma 7.20. Assume that Fis monotone, Φis defined by (7.64), and x(t)is a
solution to problem (7.3) with ε=ε(t)>0, t [0,+). Then for γ(t) := ε(t)>0
and for σ(t) = α(t)0inequality (5.6) holds.
Proof of Lemma 7.20.
Since x(t) solves (7.3), by the monotonicity of Fone has:
(Φ(h, t), h x(t)) = (F(h)F(x(t)), h x(t)) ε(t)(hx(t), h x(t))
(7.65) ε(t)||hx(t)||2.
Lemma 7.20 is proved.
Lemma 7.20 together with Lemma 7.21 presented below allow one to formu-
late the convergence result concerning the simple iteration procedure (see Theo-
rem 7.22).
Lemma 7.21. Let ν(t)be locally integrable on [0,+). Suppose that there exists
T0such that ν(t)C1[T, +)and
(7.66) ν(t)>0,˙ν(t)
ν2(t)C, for t[T , +).
Then
(7.67) lim
t+
t
Z
0
ν(τ) = +.
28 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
Proof of Lemma 7.21.
One can integrate (7.66)
t
Z
T
˙ν(τ)
ν2(τ)
t
Z
T
Cdτ , t [T , +)
and get
1
ν(t)C(tT) + 1
ν(T).
Without loss of generality we can assume that C > 0, and then
ν(t)1
C(tT) + 1
ν(T)
.
Integrating this inequality one gets (7.67) and completes the proof.
Lemmas 7.7 - 7.9 and 7.20 - 7.21 imply the following result.
Theorem 7.22. Assume that:
(1) problem (1.1) has a unique solution yin a ball B2r(y);
(2) Fis monotone;
(3) Fis continuously Fechet differentiable and
(7.68) ||F0(h)|| N1,for all hB2r(y);
(4) ε(t)>0is continuously differentiable, tends to zero monotonically as t
+, and limt+˙ε(t)
ε2(t)= 0.
Then, for Φdefined by (7.64), Cauchy problem (5.5) has a unique solution z(t)for
all t[0,+)and
lim
t→∞ ||z(t)y|| = 0.
Proof of Theorem 7.22.
In order to verify the assumptions of Theorem 5.1 we use estimate (7.65) to
conclude that α(t) = σ(t) = 0 and γ(t) = ε(t) in formula (5.6). By (5.7) β(t) =
||˙x(t)|| because α(t) = 0. By (7.11)
β(t) = ||˙x(t)|| |˙ε(t)|
ε(t)||y˜z0||.
To apply Theorem 5.1 one has to find a function µ(t)C1[0,+) satisfying (4.5)
and the second inequality in (4.6). This will be so if
(7.69) |˙ε(t)|
ε(t)||y˜z0|| 1
2µ(t)ε(t)˙µ(t)
µ(t), µ(0)||x(0) z0|| <1.
The function µ(t) can be chosen as the solution to the differential equation
(7.70) ˙µ(t)
µ2(t)+ε(t)
µ(t)=A|˙ε(t)|
ε(t),
where A:= 2||y˜z0||. Denote ρ(t) := 1
µ(t).Then
˙ρ(t) + ε(t)ρ(t) = A|˙ε(t)|
ε(t).
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS29
Solving this equation, one gets:
(7.71) ρ(t) =
A
t
Z
0
|˙ε(s)|
ε(s)e
s
R
0
ε(τ) ds +1
µ(0)
e
t
R
0
ε(τ) .
By Lemma 7.21 e
t
R
0
ε(τ) as t . Applying L’Hˆospital’s rule to (7.71) and
using condition 4 of Theorem 7.22 one gets:
lim
t+ρ(t) = lim
t+|˙ε(t)|
ε2(t)= 0.
Therefore µ(t)=1(t) tends to +as t+. To complete the proof one
can take µ(0) sufficiently small for the second inequality in (7.69) to hold. By
Theorem 5.1 one concludes that ||z(t)x(t)|| 0 as t+and by Lemma 7.8
that ||x(t)y|| 0 as t+. Therefore it follows from the estimate:
||z(t)y|| ||z(t)x(t)|| +||x(t)y||,
that ||z(t)y|| 0 as t+.
Remark 7.23.One has the estimate ||z(t)x(t)|| 1
µ(t)0 as t+. For the
term ||x(t)y|| one can get the rate of convergence if some additional assumptions
are made on For on z0(see Propositions 7.17, 7.19, and also Remark 7.15).
Remark 7.24.An interesting result similar to our Theorem 7.22 was established in
[6, Theorem 8] for accretive operators in Banach space (in the case of Hilbert space
accretive means monotone). The dynamical system considered in [6] is different
from the one we study. In contrast to Theorem 8 in [6], where the existence of the
global solution to the Cauchy problem for the corresponding nonlinear differential
equation is one of the assumptions, we prove the existence and uniqueness of the
solution to the corresponding Cauchy problem. The method of investigation in [6]
is based on a linear differential inequality which is a particular case of (4.6) with
σ(t)0. This linear differential inequality has been used in the literature by many
authors.
Example 7.25.1. Let ε(t) = ε0(1 + t)ν,ε0and νare positive constants. Then the
assumptions of Theorem 7.22 are satisfied if ν(0,1).
2. If ε(t) = ε0
log(1+t), then the assumptions of Theorem 7.22 are satisfied.
If ε(t) = ε0eνt then condition 4 of Theorem 7.22 is not satisfied.
8. Regularized Discrete Methods for Monotone Operators
In this section we apply the results of Sections 6 and 7 to derive convergence
theorems for regularized discrete methods.
First we consider the regularized Newton’s method:
(8.1) zn+1 =znωn+1[F0(zn)+ε(tn)I]1[F(zn)+ε(tn)(zn˜z0)], n = 0,1,2, . . . ,
where z0,˜z0Br(y), tn+1 =tn+ωn+1,t0= 0.
Applying Theorem 6.1 to the regularized Newton’s method one gets the following
theorem.
Theorem 8.1. Assume:
(1) problem (1.1) has a unique solution yin B2r(y);
30 RUBEN G. AIRAPETYAN AND ALEXANDER G. RAMM
(2) Fis twice Fechet differentiable in B2r(y), and inequalities (7.1), (7.15)
hold;
(3)
(8.2) z0,˜z0Br(y);
(4) ε(t)>0is continuously differentiable and monotonically tends to 0,
(8.3) 0 < Cε:= max
t[0,)
ε(0)|˙ε(t)|
ε2(t)1
42 + N2||˜z0y||
ε(0) 1
;
(5)
(8.4) ε(0) >N2
1Cε||z0x(0)||;
(8.5) ε(0) >2N2
12Cε||z0˜z0|| +r2N2
12Cε||F(z0)||;
(6)
(8.6) rε(0)(1 Cε)
N2
;
(7) Φ is defined in (7.2),
(8)
(8.7)
X
n=1
ωn=,
(9)
(8.8) ωn1
2,ωn
νn(ωn)<2(1 Cε)
12Cε
, n = 1,2, . . . ,
where the continuous positive functions µ(t),µ1(t)are defined in Theorems
5.1 and 5.3, µ1(t)is monotone, µ1(t) as t+, and νn(t)is defined
in (6.8).
Then the following conclusions hold:
i) all zn,n= 1,2, . . . , defined in (8.1) belong to the ball B2r(y),
ii)
(8.9) ||znx(tn)|| 1Cε
N2
ε(tn),
iii)
(8.10) tn+, ε(tn)0as n ,
iv)
(8.11) lim
n→∞ ||zny|| = 0.
Proof of Theorem 8.1.
Clearly the assumptions of Theorem 8.1 imply the assumptions of Theorem 7.12.
Therefore the assumptions of Theorem 5.1 with µ(t) = λ
ε(t),λ=N2
1Cεare also
satisfied (see the proof of Theorem 7.12).
DYNAMICAL SYSTEMS AND DISCRETE METHODS FOR SOLVING NONLINEAR ILL-POSED PROBLEMS31
To check the assumptions of Theorem 5.3, consider Φ defined by formula (7.2).
Since
(8.12) Φ′(h, t) = [F′(h) + ε(t)I]^{−1}F″(h)[F′(h) + ε(t)I]^{−1}[F(h) + ε(t)(h − ˜z0)]
− [F′(h) + ε(t)I]^{−1}[F′(h) + ε(t)I],
for h ∈ B2r(y) one gets:
(Φ′(h, t)ξ, ξ) ≤ −||ξ||² + (||F″(h)||/ε(t)) ||Φ(h, t)|| · ||ξ||²
(8.13) ≤ −||ξ||² + (N2/ε(t)) ||Φ(h, t)|| · ||ξ||².
Since z(t) ∈ B2r(y) for any t ∈ [0, ∞), Φ′ satisfies condition (5.18) on [0, +∞) with
(8.14) a1(t) = 1, a2(t) = N2/ε(t).
From (7.2) one has:
(8.15) [F′(h) + ε(t)I]Φ(h, t) = −F(h) − ε(t)(h − ˜z0).
Differentiating (8.15) with respect to t one gets:
(8.16) [F′(h) + ε(t)I] ∂Φ/∂t (h, t) = −ε̇(t)[Φ(h, t) + h − ˜z0].
Thus, with h = z(t) in (8.16), using the estimate ||[F′(h) + εI]^{−1}|| ≤ 1/ε and the triangle
inequality, one gets:
(8.17) ||∂Φ/∂t (z(t), t)|| ≤ (|ε̇(t)|/ε(t)) ( ||Φ(z(t), t)|| + ||z(t) − x(t)|| + ||x(t) − ˜z0|| ).
Using (7.13) and (7.23) one obtains:
(8.18) ||∂Φ/∂t (z(t), t)|| ≤ (|ε̇(t)|/ε(t)) ( ||Φ(z(t), t)|| + ((1 − Cε)/N2) ε(t) + ||˜z0 − y|| ).
Since
(8.19) |ε̇(t)|/ε(t) ≤ Cε ε(t)/ε(0) and 0 < ε(t)/ε(0) ≤ 1,
condition (5.19) holds with
β1(t) = ( Cε(1 − Cε)/N2 + Cε ||˜z0 − y||/ε(0) ) ε(t),
(8.20) b1(t) = Cε ε(t)/ε(0), and b2(t) = 0.
Thus the functions γ1(t) and σ1(t) in Theorem 5.3 are:
γ1(t) := a1(t) − b1(t) = 1 − Cε ε(t)/ε(0) > 1/2,
(8.21) σ1(t) := a2(t) + b2(t) = a2(t) = N2/ε(t).
Choosing µ1(t) = λ1/ε(t), where λ1 is a constant, one rewrites the assumptions of
Theorem 5.3, that is, conditions (4.5) and the second inequality (4.6) with v(0) :=
v1(0) := ||Φ(z0, 0)||, Φ defined in (7.2), and t0 = 0, as:
(8.22) N2 ≤ (λ1/2) ( 1 − Cε ε(t)/ε(0) − |ε̇(t)|/ε(t) ),
(8.23) Cε(1 − Cε)/N2 + Cε ||˜z0 − y||/ε(0) ≤ (1/(2λ1)) ( 1 − Cε ε(t)/ε(0) − |ε̇(t)|/ε(t) ),
(8.24) λ1 ||[F′(z0) + ε(0)I]^{−1}|| · ||F(z0) + ε(0)(z0 − ˜z0)|| < ε(0).
Choose
(8.25) λ1 := 2N2/(1 − 2Cε).
From (8.3) one gets:
(8.26) |ε̇(t)|/ε(t) ≤ Cε ε(t)/ε(0) ≤ Cε.
Inequality (8.26) and definition (8.25) imply (8.22). From (8.26) and (8.3) one gets
the following inequality:
(8.27) 4Cε(1 − Cε) + 4Cε N2 ||˜z0 − y||/ε(0) ≤ (1 − 2Cε)².
Formula (8.23) is a consequence of (8.27).
Because of the monotonicity of F one has:
(8.28) ||[F′(z0) + ε(0)I]^{−1}|| ≤ 1/ε(0).
Therefore inequality (8.24), with λ1 as in (8.25), follows from the inequality:
(8.29) ε(0)² − (2N2 ||˜z0 − z0||/(1 − 2Cε)) ε(0) − (2N2/(1 − 2Cε)) ||F(z0)|| > 0.
Inequality (8.29), and therefore (8.24), is satisfied if
(8.30) ε(0) > N2 ||˜z0 − z0||/(1 − 2Cε) + √( N2² ||˜z0 − z0||²/(1 − 2Cε)² + (2N2/(1 − 2Cε)) ||F(z0)|| ).
This inequality is satisfied if (8.5) holds. Thus (8.5) implies (8.24). We have shown
that the assumptions of Theorem 5.3 are satisfied.
Formula (8.12) implies assumption (6.4) of Theorem 6.1 with a3(t) = 1 and
a2(t) defined in (8.14). Thus A = 2 in (6.5). Therefore for µ(t) = λ/ε(t) and
µ1(t) = λ1/ε(t) with λ and λ1 defined in (7.31) and (8.25), conditions (8.7) and
(8.8) of Theorem 8.1 are the same as conditions (6.6) and (6.7) of Theorem 6.1.
Hence one finishes the proof of Theorem 8.1 by applying Theorem 6.1.
Now consider a simple iteration method:
(8.31) z_{n+1} = zn − ω_{n+1}[F(zn) + ε(tn)(zn − ˜z0)], n = 0, 1, 2, . . . ,
where z0, ˜z0 ∈ Br(y), t_{n+1} = tn + ω_{n+1}, t0 = 0.
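As with (8.1), here is a minimal finite-dimensional sketch of iteration (8.31), under the same assumptions and with the same illustrative naming conventions (F, eps, omega, and the other names are not from the original text); no derivative of F is needed here.

```python
import numpy as np

def regularized_simple_iteration(F, eps, omega, z0, z0_tilde):
    """Sketch of iteration (8.31):
    z_{n+1} = z_n - omega_{n+1} * [F(z_n) + eps(t_n) * (z_n - z0_tilde)],
    with t_{n+1} = t_n + omega_{n+1}, t_0 = 0."""
    z = np.asarray(z0, dtype=float)
    z_tilde = np.asarray(z0_tilde, dtype=float)
    t = 0.0
    for w in omega:                                   # omega = (omega_1, omega_2, ...)
        z = z - w * (F(z) + eps(t) * (z - z_tilde))   # explicit step, no linear solve
        t += w                                        # t_{n+1} = t_n + omega_{n+1}
    return z
```

The step sizes are restricted by condition (8.36) in Theorem 8.2 below.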
Theorem 8.2. Assume that:
(1) problem (1.1) has a unique solution y in a ball B2r(y);
(2) F satisfies condition (7.1);
(3) F is continuously Fréchet differentiable and
(8.32) ||F′(h)|| ≤ N1, ∀h ∈ B2r(y);
(4) ε(t) > 0 is continuously differentiable, tends to zero monotonically as t →
+∞, and lim_{t→+∞} ε̇(t)/ε²(t) = 0;
(5)
(8.33) ε(0) > Cε,
where Cε is defined in (8.3);
(6)
(8.34) ||F(z0)||/ε(0) + ||z0 − ˜z0|| < 6rCε/(ε(0) − Cε);
(7) the sequence {zn} is defined in (8.31), z0, ˜z0 ∈ Br(y), and
(8.35) Σ_{n=1}^∞ ωn = ∞,
(8)
(8.36) 0 < ωn ≤ 1/(N1 + 2ε(0)), ωn/νn(ωn) < 2(1 − Cε)/(1 − 2Cε), n = 1, 2, . . . ,
where the continuous positive functions µ(t), µ1(t) are defined in Theorems
5.1 and 5.3, µ1(t) is monotone, and νn(t) is defined in (6.8).
Then the following conclusions hold:
i) all zn, n = 1, 2, . . . , belong to the ball B2r(y),
ii)
(8.37) ||zn − x(tn)|| ≤ ((1 − Cε)/N2) ε(tn),
iii)
(8.38) tn → +∞, ε(tn) → 0 as n → ∞,
iv)
(8.39) lim_{n→∞} ||zn − y|| = 0.
Proof of Theorem 8.2.
The assumptions of Theorem 8.2 imply the assumptions of Theorem 7.22. There-
fore the assumptions of Theorem 5.1 with µ(t) defined in (7.70) are also satisfied
(see the proof of Theorem 7.22).
To check the assumptions of Theorem 5.3 consider Φ defined by formula (7.64).
Since
(8.40) Φ′(h, t) = −F′(h) − ε(t)I,
condition (7.1) implies
(8.41) (Φ′(h, t)ξ, ξ) = −(F′(h)ξ, ξ) − ε(t)||ξ||² ≤ −ε(t)||ξ||², ∀ξ ∈ H.
Therefore Φ′ satisfies condition (5.18) with
(8.42) a1(t) = ε(t), and a2(t) = 0.
For h ∈ B2r(y) and ˜z0 ∈ Br(y) one has:
(8.43) ||∂Φ/∂t (h, t)|| = ||ε̇(t)(h − ˜z0)|| ≤ |ε̇(t)| (||h − y|| + ||˜z0 − y||) ≤ 3r|ε̇(t)|.
Thus condition (5.19) also holds with
(8.44) β1(t) = 3r|ε̇(t)|, b1(t) = b2(t) = 0.
Since γ1(t) = ε(t), σ1(t) ≡ 0, and in Theorem 5.3 v(0) = ||Φ(z0, 0)|| = ||F(z0) +
ε(0)(z0 − ˜z0)||, in order to satisfy the assumptions of Theorem 5.3 the function µ1(t)
should satisfy the following conditions:
(8.45) 3r|ε̇(t)| ≤ (1/(2µ1(t))) ( ε(t) − µ̇1(t)/µ1(t) ),
(8.46) µ1(0) ||F(z0) + ε(0)(z0 − ˜z0)|| < 1.
Choose
(8.47) µ1(t) = λ1/ε(t), λ1 = (ε(0) − Cε)/(6rCε).
Hence assumptions (8.33) and (8.34) of Theorem 8.2 imply conditions (8.45) and
(8.46). Therefore the assumptions of Theorem 5.3 are also satisfied.
From (8.40), (8.32), and the monotonicity of ε(t) one concludes that assumption
(6.4) of Theorem 6.1 holds with a3(t) = N1 + ε(t) and a2(t) ≡ 0, and assumption
(6.5) holds with A = N1 + 2ε(0).
To finish the proof of Theorem 8.2 one refers to Theorem 6.1.
References
[1] Airapetyan, R.G. [2000] Continuous Newton method and its modification, Applicable Anal-
ysis, 73, N 3-4, pp. 463-484.
[2] Airapetyan, R.G. [2000] On new statement of inverse problem of Quantum Scattering Theory,
Operator theory and its applications, Amer. Math. Soc., Providence RI, 2000, Fields Inst.
Comm., 25.
[3] Airapetyan, R.G. and Puzynin, I.V. [1997] Newtonian iterative scheme with simultaneous
iterations of inverse derivative, Comp. Phys. Comm., 102, pp. 97–108.
[4] Airapetyan, R.G., Ramm, A.G. and Smirnova, A.B. [1999] Continuous analog of Gauss-
Newton method, Math. Models and Meth. in Appl. Sci., 9, N3, pp. 463–474.
[5] Airapetyan, R.G., Ramm, A.G. and Smirnova, A.B. [2000] Continuous methods for solv-
ing nonlinear ill-posed problems, Operator theory and its applications, Amer. Math. Soc.,
Providence RI, 2000, Fields Inst. Comm., 25, pp. 111-136.
[6] Alber, Ya.I. [1975] On a solution of operator equations of the first kind with accretive oper-
ators in Banach spaces, Differen. Uravneniya, 11, N12, 2242–2248.
[7] Alber, Ya.I. [1993] The regularization method for variational inequalities with nonsmooth
unbounded operators in Banach space, Appl. Math. Lett., 6, N4, 63–68.
[8] Alber, Ya.I. [1994] A new approach to the investigation of evolution differential equations in
Banach spaces, Nonlin. Anal., Theory, Methods & Appl., 23, N9, 1115–1134.
[9] Argyros, I.K. [1998] Polynomial operator equations in abstract spaces and applications, CRC
Press, Boca Raton.
[10] Beckenbach, E. and Bellman, R. [1961] Inequalities, Springer-Verlag, Berlin.
[11] Blaschke, B., Neubauer, A. and Scherzer O. [1997] On convergence rates for the iteratively
regularized Gauss-Newton method, IMA J. Num. Anal., 17, 421–436.
[12] Courant, R. [1943] Variational methods for the solution of problems of equilibrium and vibra-
tions, Bull. Amer. Math. Soc., 49, 1–23.
[13] Decker, D.W., Keller, H.B. and Kelley, C.T. [1983] Convergence rates for Newton’s method
at singular points, SIAM J. Numer. Anal., 20, N2, 296–314.
[14] Deimling, K. [1985] Nonlinear functional analysis, Springer-Verlag, New York.
[15] Engl, H.W., Hanke, M. and Neubauer, A. [1996] Regularization of inverse problems, Kluwer
Acad. Publ. Group, Dordrecht.
[16] Gavurin, M.K. [1958] Nonlinear functional equations and continuous analogies of iterative
methods, Izv. Vuzov. Ser. Matematika. 5, pp. 18–31.
[17] Kamke, E. [1974] Differentialgleichungen. Lösungsmethoden und Lösungen, Chelsea, New
York.
[18] Kantorovich, L.V. and Akilov, G.P. [1982] Functional Analysis, Pergamon Press.
[19] Ortega, J.M. and Rheinboldt, W.C. [1970] Iterative Solution of Nonlinear Equations in Sev-
eral Variables, Academic Press.
[20] Ramm, A.G. [1999] A numerical method for some nonlinear problems, Math. Models and
Meth. in Appl. Sci., 9, N2, pp. 325-335.
[21] Ramm, A.G. and Smirnova, A.B. [1999] A numerical method for solving nonlinear ill-posed
problems, Nonlinear Funct. Anal. and Optimiz., 20, N3, pp. 317-332.
[22] Ryazantseva, I.P. [1994] On some continuous regularization methods for monotone equations,
Comput. Math. Math. Phys., 34, N1, 1–7.
[23] Szarski, J. [1967] Differential inequalities, PWN, Warszawa.
[24] Vasin, V.V. and Ageev, A.L. [1995] Ill-posed problems with a priori information, VNU,
Utrecht.
[25] Zhidkov, E.P. and Puzynin, I.V. [1967] Solving of the boundary problems for second order
nonlinear differential equations by means of the stabilization method, Soviet Math. Dokl. 8,
pp. 614-616.
If Newton’s method is employed to find a root of a map from a Banach space into itself and the derivative is singular at that root, the convergence of the Newton iterates to the root is linear rather than quadratic. In this paper we give a detailed analysis of the linear convergence rates for several types of singular problems. For some of these problems we describe modifications of Newton’s method which will restore quadratic convergence.