A Class of Iterative Signal Restoration Algorithms

AGGELOS K. KATSAGGELOS, MEMBER, IEEE, AND SERAFIM N. EFSTRATIADIS, STUDENT MEMBER, IEEE
Abstract: In this paper, a class of iterative signal restoration algorithms is derived based on a representation theorem for the generalized inverse of a matrix. These algorithms exhibit a first or higher order of convergence, and some of them consist of an on-line and an off-line computational part. The conditions for convergence, the rate of convergence of these algorithms, and the computational load required to achieve the same restoration results are derived. A new iterative algorithm is also presented which exhibits a higher rate of convergence than the standard quadratic algorithm with no extra computational load. These algorithms can be applied to the restoration of signals of any dimensionality. Iterative restoration algorithms that have appeared in the literature represent special cases of the class of algorithms described here. Therefore, the approach presented here unifies a large number of iterative restoration algorithms. Furthermore, based on the convergence properties of these algorithms, combined algorithms are proposed that incorporate a priori knowledge about the solution in the form of constraints and converge faster than the previously used algorithms.
I. INTRODUCTION
The recovery or restoration of a signal that has been distorted is one of the most important problems in signal processing applications [1], [18]. More specifically, the following degradation model is considered:

y = Dx,   (1)

where the vectors y and x represent, respectively, lexicographically ordered blurred and original signals. The matrix D represents a linear deterministic distortion which may be space varying or space invariant. When y and x represent images, the distortion may be due to motion between the camera and the scene or due to atmospheric turbulence. The signal restoration problem is then to invert (1) or to find a signal as close as possible to the original one, subject to a suitable optimality criterion, given y and D. Equation (1) also represents the more general degradation model where an additive noise term is considered. In this case, the restoration problem again takes the form of solving (1) for x, where D is replaced by a square well-conditioned matrix and y by D^T y, where the superscript T denotes the transpose of a matrix or vector. This case will be studied separately in Section III, since computationally simpler algorithms can be used.
Manuscript received August 2, 1988; revised June 16, 1989. This work was supported in part by the National Science Foundation under Grant MIP-8614217. The authors are with the Department of Electrical Engineering and Computer Science, Northwestern University, The Technological Institute, Evanston, IL 60208-3118. IEEE Log Number 9034417.

Iterative algorithms are used in our work in solving the signal restoration problem. Iterative restoration algorithms have a number of advantages over direct or recursive restoration techniques, and they have been used extensively in the literature [18]. Most of these algorithms have a linear or first-order convergence rate. Singh et al. [19] derived an iterative restoration algorithm with a quadratic rate of convergence, when the matrix D in (1) is invertible. Morris et al. [14]-[16] and Lagendijk et al. [13] generalized this algorithm for higher orders of convergence. In their derivation, the matrix D in (1) was invertible. In [14]-[16] it was further assumed that D represents a convolution operator.
In this paper, we extend the results in [13]-[16] and [19] by showing that when D is singular, the higher order algorithms converge to the minimum norm solution of (1), provided that a solution exists. This is a very important result, because for a large number of distortions of practical interest (motion, out-of-focus), the matrix D is singular. Furthermore, we derive iterative algorithms with linear and higher order convergence rates for the general case when D in (1) is a rectangular matrix. In this case, the limiting solution of these algorithms is the minimum norm least-squares (MNLS) solution of (1). The derivation of these algorithms is based on a representation theorem for the generalized inverse D^+ of the matrix D. Iterative restoration algorithms benefit a great deal from the use of constraints which incorporate properties of the solution into the restoration process. However, the direct use of constraints with the higher order algorithms may result in divergence or meaningless results. We propose techniques which allow us to effectively use constraints with a combination of linear and higher order iterative algorithms.
The derivation of the linear and higher order algorithms obtaining the MNLS solution of (1) is presented in Section II. Computationally simpler higher order algorithms solving for the minimum norm solution of (1), when D is a square, positive semidefinite matrix, are presented in Section III. Such a situation may result, for example, when a noise term is added to (1). Then, after regularization, the restoration problem is again the solution of a set of linear equations analogous to (1), where D and y are replaced by another matrix A and a vector b, respectively. These algorithms extend the results reported in [13]-[16] and [19]. In Section IV, the algorithms are compared with respect to their computational load. The incorporation of constraints is discussed in Section V, and a number of experimental results are presented in Section VI. Finally, conclusions are presented in Section VII.
II. MINIMUM NORM LEAST-SQUARES SOLUTION
In this section we assume that the matrix D in (1) is an m × n matrix, where m ≤ n. That is, D ∈ L(R^n, R^m), x ∈ R^n and y ∈ R^m, where L(R^n, R^m) is the set of matrices that map R^n into R^m, the n-dimensional and m-dimensional Euclidean spaces, respectively. Let R(D) and N(D) denote, respectively, the range and the null space of D, and let dim(S) denote the dimensionality of the subspace S [20]. If dim(R(D)) = r, then since dim(R(D^T)) = r, we get that dim(N(D)) = n − r and dim(N(D^T)) = m − r.
Equation (1) has at most one solution if and only if r = n, and we get no solution if y ∈ N(D^T). The degradation model of (1) can be modified so that D is a square matrix (m = n), by increasing the size of x, by adding zeros, or by reducing the size of y. Even in this case, however, for a large number of common distortions (motion, out-of-focus), the distortion matrix is singular, that is, r < n. Since in both cases (square and rectangular D) it cannot be guaranteed that y ∈ R(D), a least-squares (LS) solution is sought (the case when D is square and y ∈ R(D) will be studied in Section III). Such a solution minimizes the Euclidean norm ||Dx − y||. The LS solution satisfies the normal equations

D^T D x = D^T y.   (2)
The set of x's that satisfy (2) forms a closed convex set which contains a unique vector of minimum norm [5]. Then the generalized inverse D^+ ∈ L(R^m, R^n) is defined by D^+ y = x^+, where x^+ is the minimum norm least-squares (MNLS) solution of (1). A general theorem representing D^+ as the limit of a sequence of matrices, due to Groetsch [5], is stated next without proof, due to its significance.
A. Representation of D^+ and x^+
Theorem 1: Suppose D ∈ L(R^n, R^m) and let D* = D^T D | R(D^T), the restriction of D^T D to R(D^T). If Ω is an open set with σ(D*) ⊂ Ω ⊂ (0, ∞) and {h_k(z)} is a family of continuous real-valued functions on Ω with lim_k h_k(z) = z^{-1} uniformly on σ(D*), then

D^+ = lim_k h_k(D*) D^T,   (3)

where the convergence is in the uniform topology for L(R^m, R^n). Furthermore,

||h_k(D*) D^T − D^+|| ≤ sup{|1 − z h_k(z)|} ||D^+||,   (4)

where the supremum is taken over all z ∈ σ(D*). □
Some of the notation used above is as follows. The spectrum of a square matrix T and the restriction of T to a subspace S of R^n are denoted, respectively, by σ(T) and T|S [20]. Clearly, the matrix D* is symmetric and positive definite. Therefore, its spectrum is a subset of the set (0, ∞).
Theorem 1 is very powerful because it provides us with a general expression (3) for representing and iteratively computing the generalized inverse of a matrix. Furthermore, it provides us with a measure (4) of the rate of convergence. Therefore, it unifies a large number of iterative restoration techniques. Any family of functions {h_k} with the properties stated by Theorem 1 can result in a new representation for D^+. Clearly, some of these families of functions result in more attractive representations from a computational point of view. It is noted here that Theorem 1 holds not only for matrices but for any linear operator with a closed range [5].
In signal restoration we are primarily interested in solving for x^+. Expressions for x^+ instead of D^+ are derived as follows. The convergence of the sequence {h_k(D*) D^T} to D^+ is in the uniform topology for L(R^m, R^n), which means that [5]

lim_k ||h_k(D*) D^T − D^+|| = 0.   (5)

The uniform convergence of {h_k(D*) D^T} to D^+ implies strong convergence of {h_k(D*) D^T} to D^+, which means that for each y ∈ R^m

lim_k ||h_k(D*) D^T y − D^+ y|| = 0.   (6)

Therefore, (3) and (4) are written, respectively, as [3], [10]

x^+ = lim_k h_k(D*) D^T y   (7)

and

||h_k(D*) D^T y − x^+|| ≤ sup{|1 − z h_k(z)|} ||x^+||,   (8)

where the supremum is again over all z ∈ σ(D*). Therefore, Theorem 1 can be restated with (7) and (8) replacing (3) and (4), respectively. In the following sections, different iterative restoration algorithms will be derived, corresponding to different choices of {h_k(z)}, by using (7) and (8).
B. A Linear Algorithm
Consider the sequence of functions {h_k(z)} defined by

h_0(z) = β > 0,
h_k(z) = h_{k−1}(z)(1 − βz) + β = β Σ_{l=0}^{k} (1 − βz)^l.   (9)

It is shown in [3], [5] that lim_{k→∞} h_k(z) = z^{-1} uniformly on compact subsets of the set

Ω_β = {z : |1 − βz| < 1} = {z : 0 < z < 2/β}.   (10)

According to Theorem 1 and (9), by setting x_k = h_k(D*) D^T y, we get the iteration

x_0 = β D^T y,
x_{k+1} = x_k + β D^T (y − D x_k) = (I − β D^T D) x_k + β D^T y,   (11)

which converges to x^+ for

0 < β < 2 ||D||^{-2}.   (12)
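As a concrete illustration, here is a minimal NumPy sketch of iteration (11); the wrap-around blur matrix, the signal sizes, and the step-size choice are our own illustrative assumptions, not part of the original text:

```python
import numpy as np

def linear_restoration(D, y, beta, num_iters):
    """Iteration (11): x_0 = beta * D^T y,
    x_{k+1} = x_k + beta * D^T (y - D x_k).
    Converges to the MNLS solution x+ when 0 < beta < 2 ||D||^{-2}, cf. (12)."""
    x = beta * (D.T @ y)
    for _ in range(num_iters):
        x = x + beta * (D.T @ (y - D @ x))
    return x

# Illustrative setup (our assumption): wrap-around motion blur over 3 samples.
n = 16
h = np.zeros(n)
h[:3] = 1.0 / 3.0
D = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])
x_true = np.zeros(n)
x_true[5], x_true[9] = 1.0, 1.0
y = D @ x_true

beta = 1.0 / np.linalg.norm(D, 2) ** 2   # satisfies condition (12)
x_plus = linear_restoration(D, y, beta, 2000)
print(np.linalg.norm(D @ x_plus - y))    # residual of the computed solution
```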
Iteration (11) also results from a successive approximations approach to the solution of the normal equations (2). It has been studied and used extensively for signal restoration [7], [18]. According to (8), the rate of convergence of iteration (11) is linear, and it is characterized by the relation

||x_k − x^+|| ≤ c^{k+1} ||x^+||,   (13)

where [5]

c = max{|1 − βz| : z ∈ σ(D*)}.   (14)
An equivalent way of describing the linear rate of convergence of iteration (11) is with the use of the residual error at step k of iteration (9) [3]. It is defined as

r_k = 1 − z h_k(z),   (15)

and it represents the residual error associated with each eigenvalue of D*, since z ∈ σ(D*). Then, according to iteration (9),

r_{k+1} = r_0 r_k.   (16)

Equation (16) represents a straight line on the (r_k, r_{k+1}) plane.
C. Higher Order Algorithms

Consider the sequence of functions {h_k(z)}, for an integer p ≥ 2, defined by

h_0(z) = β > 0,
h_{k+1}(z) = h_k(z) Σ_{l=0}^{p−1} (1 − z h_k(z))^l.   (17)

The sequence defined by (17) converges uniformly to z^{-1} on compact subsets of (10) [3], [5]. Application of Theorem 1 results in the algorithm [3], [5], [8]-[10]

D_0 = β D^T D,   x_0 = β D^T y,   (18a)

Φ_k = Σ_{l=0}^{p−1} (I − D_k)^l,   (18b)

D_{k+1} = Φ_k D_k,   x_{k+1} = Φ_k x_k.   (18c)
An advantage of iteration (18) is that the matrix sequence {Φ_k} or {D_k} can be computed in advance or off-line, although for a general D this may result in excessive storage. The solution sequence {x_k} is then computed on-line, after the distorted data y are available. As observed from (18), the limit of D_k is the projection onto the row space of D. This projection is equal to the identity matrix when D is invertible. That is, the distortion matrix is also updated. This means that if x_k is interpreted as the observed distorted signal at each iteration, then the distortion operator, which maps the original signal into x_k, is approaching the identity operator (if the inverse exists) as the iteration number increases.
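A minimal NumPy sketch of iteration (18) follows, under the same illustrative assumptions as the previous snippet (the function name and shapes are ours). Note that the Φ_k and D_k updates never reference the data y, which is exactly the off-line part mentioned above:

```python
import numpy as np

def higher_order_restoration(D, y, beta, p, num_iters):
    """p-th order iteration (18):
    D_0 = beta D^T D,  x_0 = beta D^T y,          (18a)
    Phi_k = sum_{l=0}^{p-1} (I - D_k)^l,          (18b)
    D_{k+1} = Phi_k D_k,  x_{k+1} = Phi_k x_k.    (18c)"""
    n = D.shape[1]
    I = np.eye(n)
    Dk = beta * (D.T @ D)
    x = beta * (D.T @ y)
    for _ in range(num_iters):
        T = I - Dk
        Phi = np.eye(n)        # l = 0 term of (18b)
        P = np.eye(n)
        for _ in range(1, p):  # accumulate the terms l = 1, ..., p-1
            P = P @ T
            Phi = Phi + P
        Dk = Phi @ Dk
        x = Phi @ x
    return x
```

On small examples such as the blur above, a few iterations of this sketch with p = 2 reproduce the result of hundreds of iterations of (11), in line with (30) and (31) below.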
Algorithm (18) exhibits p-th order convergence. That is, according to relation (8) [3], [6],

||x_k − x^+|| ≤ c^{p^k} ||x^+||,   (19)

where the convergence factor c is given by (14). Equivalently, it is easily shown that [3]

r_{k+1} = r_k^p,   (20)

where r_k is defined by (15).
Equation (20) represents a p-th order curve on the (r_k, r_{k+1}) plane. Certain of these curves, for p = 2, 3, 4, 9, 20, are shown in Fig. 1. The curve representing the rate of convergence of the linear algorithm (16) is also shown. Clearly, as p increases, the residual error for most of its values tends to go to zero in one iteration. Notice that the values −1 and 1 are excluded from the range of values that r_k takes.
D. A New Iterative Algorithm
Let us regard z^{-1} as the root of the function f(u) = (u^{-1} − z)^η, where η > 0. If the Newton-Raphson method is used in approximating this root, then the sequence {u_k} is generated according to [3]

u_{k+1} = u_k + (1/η) u_k (1 − z u_k),   (21)

for a suitable u_0. Suppose that for β > 0 a sequence of functions {h_k(z)} is defined by

h_0(z) = β,
h_{k+1}(z) = h_k(z) + (1/η) h_k(z)(1 − z h_k(z)).   (22)

The convergence and the rate of convergence of this algorithm can be described by considering r_k defined by (15). That is, it is found in a straightforward way that

r_{k+1} = r_k [1 − (1/η)(1 − r_k)].   (23)

Note that for η = 1 this algorithm becomes the quadratic algorithm (p = 2) of (17). The curves described by (23) for different values of η are shown in Fig. 2.
The lines r_{k+1} = r_k and r_{k+1} = −r_k, also shown in this figure, divide the plane into the regions I and II, defined, respectively, by |r_{k+1}| < |r_k| and |r_{k+1}| ≥ |r_k|. Clearly, if part of the curve represented by (23) for a certain η lies in region I, and if |r_0| < 1, iteration (22) converges; otherwise, it may not converge. For example, for η > 1 and |r_0| < 1, iteration (22) converges to z^{-1} on compact subsets of Ω_β, although the convergence rate of the algorithm may be slower than that of the quadratic.

[Fig. 1. Representation of the residual error of (16) and (20), respectively, for various values of the order p.]

[Fig. 2. Representation of the residual error of (23), for various values of η.]

On the other hand, for 0.5 < η < 1, the part of the curve (23) for which r_k ≤ −η lies in region II. Therefore, we need to restrict the residuals to satisfy r_k > −η, or r_k ≥ 0. One way to accomplish this is by using η = 1 in evaluating r_1 (k = 0) and then changing η to any value such that 0.5 < η < 1. The rate of convergence of iteration (22) is pictorially represented by the slope of the curves shown in Fig. 2. For example, for r_k ≤ 0.2, iteration (22) converges faster with η = 0.8 than with η = 1.0.
With the conditions on η imposed according to Fig. 2 in mind, application of Theorem 1 results in the iteration [3]

D_0 = β D^T D,   x_0 = β D^T y,   (24a)

Φ_k = I + (1/η)(I − D_k),   (24b)

D_{k+1} = Φ_k D_k,   x_{k+1} = Φ_k x_k,   (24c)

where η = 1 for k = 0 and 0.5 < η < 1 for k ≥ 1. In general, the rate of convergence of iteration (24) depends on the distribution of the eigenvalues of the matrix D* defined by Theorem 1.
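A sketch of iteration (24) in the same style (the default η = 0.8 is an arbitrary choice inside the admissible interval (0.5, 1)):

```python
import numpy as np

def new_restoration(D, y, beta, eta=0.8, num_iters=20):
    """Iteration (24): Phi_k = I + (1/eta)(I - D_k), with eta = 1 for
    k = 0 (one quadratic step) and 0.5 < eta < 1 for k >= 1."""
    n = D.shape[1]
    I = np.eye(n)
    Dk = beta * (D.T @ D)
    x = beta * (D.T @ y)
    for k in range(num_iters):
        eta_k = 1.0 if k == 0 else eta          # schedule from Section II-D
        Phi = I + (1.0 / eta_k) * (I - Dk)
        Dk = Phi @ Dk
        x = Phi @ x
    return x
```

For η_k = 1 the step reduces to Φ_k = 2I − D_k, i.e., the quadratic algorithm; the only extra work relative to it is the scalar multiplication by 1/η, as noted in Section IV.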
III. MINIMUM NORM SOLUTION
In this section we consider the solution of

A x = b,   (25)

where A is a square positive semidefinite matrix and b ∈ R(A). This is a case of special interest. Equation (25) may be the degradation model of (1) when, for example, D = A represents the degradation due to atmospheric turbulence. Equation (25) may also result from the regularization of the ill-posed signal restoration problem. More specifically, the following degradation model is considered:

y = Dx + w,   (26)

where y and x represent, respectively, the lexicographically ordered distorted and original signals, and w denotes the additive noise. According to a regularization approach presented in [7] and [11], the solution of (26) is replaced by the solution of the well-conditioned system of equations

(D^T D + α C^T C) x = D^T y.   (27)
The matrix C represents a high-pass filter, and its role is to restrict the energy of the restored signal at high frequencies, due primarily to the amplified noise. The regularization parameter α is a function of the signal-to-noise ratio [7]. Therefore, the presence of additive noise in the degradation model does not alter the form of the iterative algorithms presented in Section II, since (1) is now replaced by (25).
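For concreteness, a small sketch of how the regularized system (27) is assembled into the form (25); the choice of C as a second-difference operator and the handling of the boundaries are illustrative assumptions only:

```python
import numpy as np

def regularized_system(D, y, alpha):
    """Build A x = b as in (25) from the regularized normal equations (27):
    A = D^T D + alpha C^T C,  b = D^T y."""
    n = D.shape[1]
    # Illustrative high-pass operator C: a second-difference (Laplacian).
    C = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = D.T @ D + alpha * (C.T @ C)
    b = D.T @ y
    return A, b
```

The returned A and b can then be passed to iterations (28) or (29) below.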
Clearly, (25) can be solved by using any of the algorithms presented in Section II. A key difference, however, between (1) and (25) is that although matrix D is in general a rectangular matrix, matrix A is always square, and positive definite or positive semidefinite. Therefore, (25) might have a solution, which means that b ∈ R(A). As a matter of fact, the constraint C can be designed in such a way that b ∈ R(A) [7]. In this case, the minimum norm solution can be found with fewer computations than those required by the least-squares approach, as is shown next. An iteration due to Bialy [2] with a linear rate of convergence, suitable for finding the solution of (25), is presented by the following theorem.
Theorem 2: Let A: R^n → R^n be a positive semidefinite matrix. For b ∈ R^n and x_0 ∈ R^n, consider the iterative process

x_{k+1} = x_k + β(b − A x_k),   (28)

where 0 < β < 2||A||^{-1}. Then the sequence {x_k, k ≥ 0} converges to x* = x̂ + P_{N(A)}{x_0}, where x̂ is the minimum norm solution of Ax = b and P_{N(A)}{x_0} is the projection of x_0 onto the null space of A, if and only if b ∈ R(A). □

We can think of iterations (11) and (28) as forming a pair, since they both have a linear rate of convergence. Iteration (11) successively approximates the solution to the normal equations (2), while iteration (28) successively approximates the solution to (25). In extending the above-mentioned correspondence between the linear algorithms (11) and (28) to the higher order algorithms, we present the following theorem [3], [5].
Theorem 3: Let A: R^n → R^n be a positive semidefinite matrix. For a given integer p ≥ 2 and β > 0, consider the iterative process

A_0 = βA,   x_0 = βb,   (29a)

Φ_k = Σ_{l=0}^{p−1} (I − A_k)^l,   (29b)

A_{k+1} = Φ_k A_k,   x_{k+1} = Φ_k x_k,   (29c)

where 0 < β < 2||A||^{-1}. Then the sequence {x_k, k ≥ 0} converges to x* = x̂, where x̂ is the minimum norm solution of Ax = b, if and only if b ∈ R(A). □

The proof of Theorem 3 is presented in the Appendix. Algorithm (29) with p = 2 was proposed by Singh et al. [19] for the case that ||I − A|| < 1, and by Morris et al. [14] for the case that A is positive definite and represents a linear space-invariant system (the convolution case). Algorithm (29) for any p ≥ 2 was proposed by Morris et al. [14], [15] and by Lagendijk et al. [13] for the case that A is positive definite. Therefore, Theorem 3 extends the previously reported results.
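A NumPy sketch of iteration (29); the comparison against numpy.linalg.pinv, which returns the minimum norm solution A^+ b, is our own illustrative check and not part of the original text:

```python
import numpy as np

def minimum_norm_restoration(A, b, beta, p, num_iters):
    """Iteration (29): A_0 = beta A, x_0 = beta b,
    Phi_k = sum_{l=0}^{p-1} (I - A_k)^l,
    A_{k+1} = Phi_k A_k,  x_{k+1} = Phi_k x_k."""
    n = A.shape[0]
    I = np.eye(n)
    Ak = beta * A
    x = beta * b
    for _ in range(num_iters):
        T = I - Ak
        Phi, P = np.eye(n), np.eye(n)
        for _ in range(1, p):
            P = P @ T
            Phi = Phi + P
        Ak = Phi @ Ak
        x = Phi @ x
    return x

# Check on a singular positive semidefinite A with b in R(A) (assumed setup).
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 4))
A = B @ B.T                           # 6x6, rank 4, positive semidefinite
b = A @ rng.standard_normal(6)        # guarantees b in R(A)
beta = 1.0 / np.linalg.norm(A, 2)     # satisfies 0 < beta < 2 ||A||^{-1}
x_hat = minimum_norm_restoration(A, b, beta, p=2, num_iters=30)
print(np.allclose(x_hat, np.linalg.pinv(A) @ b))   # minimum norm solution
```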
IV. COMPARISON BASED ON THE COMPUTATIONAL LOAD
The question we address in this section is the following. For a specific restoration problem, which of the iterative algorithms presented in Sections II (B, C, and D) and III should one use? We answer this question by considering the amount of computation required by each algorithm in obtaining the same solution point or in satisfying the same error criterion.

Clearly, algorithms (28) and (29), if applicable, should be used, since they require fewer computations than their counterparts, iterations (11) and (18), respectively. Additionally, iteration (24) should be used over iteration (18) for p = 2, if η is chosen according to the discussion in Section II-D, since the former requires the same number of computations as the latter, with the exception of an additional multiplication by the scalar 1/η. Therefore, in the following, the algorithms of Sections II-B and II-C will be compared. The same comparison holds true for the algorithms of Section III.
Iterative algorithms give the exact solution as k → ∞, but in practice the iterative process is terminated after a finite number of iterations. Since the distortion operator is known, c in (14) is known; therefore, the number of iterations required by the algorithms to reach an approximate solution can be computed. More specifically, let us denote by k_1 and k_p the iteration steps of the first and p-th order algorithms, respectively. Let us also suppose that m_p iterations of the p-th order algorithm are run, that is, k_p = 1, ..., m_p. Then, according to (13) and (19), the k_p-th iteration step of algorithm (18) is equivalent to N(k_p) iterations of the linear algorithm, where

N(k_p) = p^{k_p} − p^{k_p − 1}.   (30)
That is, had the k_p-th iteration step of algorithm (18) been replaced by N(k_p) iteration steps of algorithm (11), the restoration results would have been the same. Now, the total number of iteration steps of algorithm (11), denoted by m_1, which is equivalent to m_p iteration steps of algorithm (18), is given by the expression

m_1 = Σ_{k_p=1}^{m_p} N(k_p) = p^{m_p} − 1.   (31)

According to (31), due to the exponential relation between m_1 and m_p, a tremendous number of iterations may be required by the linear algorithm to obtain the same result as a higher order algorithm. For example, if p = 5 and m_5 = 10, then m_1 = 9 765 624. However, the relation between the computational load required by the linear and the p-th order algorithm in running, respectively, m_1 and m_p iterations is not exponential, as explained below.
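The equivalence relations (30) and (31) are easy to check numerically (the helper names below are ours):

```python
def N(p, k_p):
    """Equation (30): linear iterations matched by the k_p-th step."""
    return p**k_p - p**(k_p - 1)

def m1(p, m_p):
    """Equation (31): the sum telescopes to p^{m_p} - 1."""
    return sum(N(p, k) for k in range(1, m_p + 1))

assert m1(5, 10) == 5**10 - 1 == 9_765_624   # the example from the text
```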
In the general case, let us assume that matrix D has dimensions m × n; then D* is an n × n square matrix. Thus, the computational load for the linear algorithm after m_1 iterations is M_1 = mn^2 + (m_1 + 1)mn multiplies and A_1 = n^2(m − 1) + (m_1 + 1)mn additions, with a total of C_1 = n^2(2m − 1) + 2(m_1 + 1)mn operations. On the other hand, m_p iterations of the p-th order algorithm require M_p = nm + m_p[nm + (p − 1)n^2 m] multiplies and A_p = n(m − 1) + m_p[n(m − n) + (p − 1)n^2 m] additions, with a total of C_p = n(2m − 1) + m_p[n(2m − n) + 2(p − 1)n^2 m] operations.

The efficiency of the higher order algorithms over the linear one depends on the order chosen, the dimensions m and n of the matrix D, and the number of iterations required. Table I shows the smallest number of iterations which the quadratic algorithm (p = 2) must run in order to be computationally more efficient than the linear algorithm, as a function of the dimensions of the matrix D. In this case, matrix D is considered to be square (m = n), and multiplies and additions are assumed to require the same amount of computation. According to Table I, although the required number of computations per iteration is greater for the higher order algorithms, the overall computational load is indeed less than that required by the linear algorithm after a small number of iterations. The latter is due to the fact that the error for a given p decreases exponentially with a factor p, whereas the number of computations increases linearly with the same factor.
The computational savings with the use of the higher order algorithms increase when the distortion matrix D has a special form. For example, consider the common case when D is circulant. Then the algorithms are implemented using the Discrete Fourier Transform (DFT).

[Table I. The smallest number of iterations that the quadratic algorithm (p = 2) must run in order to be computationally more efficient than the linear algorithm, as a function of the dimensions of the (square) matrix D.]

For the linear algorithm, the number of computations after m_1 iterations is M_1 = (m_1 + 2)N_F complex multiplies and A_1 = (m_1 + 1)N_F complex additions, with a total of C_1 = (2m_1 + 3)N_F complex operations, where N_F is the extent of the DFT. For the p-th order algorithm, the number of computations after m_p iterations is M_p = (m_p p + 1)N_F complex multiplies and A_p = m_p(p − 1)N_F complex additions, with a total of C_p = [m_p(2p − 1) + 1]N_F complex operations.
Clearly, since C_1 and C_p depend linearly on m_1 and m_p, respectively, while the relation between m_1 and m_p is exponential according to (31), C_p decreases relative to C_1 as the order p and the iteration number m_p increase. For example, consider the case when p = 2 and m_2 = 8; then C_2 = 25 N_F. According to (31), the equivalent number of iterations for the linear algorithm is m_1 = 255, and C_1 = 513 N_F. If p = 3 and m_3 = 8, then C_3 = 41 N_F. In this case, the linear algorithm requires m_1 = 6560 iterations and C_1 = 13 123 N_F complex operations.
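When D is circulant, every matrix in (18) is diagonalized by the DFT, so the iteration reduces to independent scalar recursions per DFT bin; a sketch under that assumption (kernel and sizes are illustrative):

```python
import numpy as np

def higher_order_dft(h, y, beta, p, num_iters):
    """DFT-domain form of iteration (18) for a circulant D with kernel h:
    all quantities become length-N_F vectors of DFT coefficients, so each
    iteration costs O(p N_F) scalar operations after the initial FFTs."""
    H = np.fft.fft(h)
    Y = np.fft.fft(y)
    Dk = beta * np.abs(H) ** 2          # DFT of D_0 = beta D^T D
    X = beta * np.conj(H) * Y           # DFT of x_0 = beta D^T y
    for _ in range(num_iters):
        T = 1.0 - Dk
        Phi = sum(T ** l for l in range(p))   # (18b), applied per DFT bin
        Dk = Phi * Dk
        X = Phi * X
    return np.real(np.fft.ifft(X))
```

This makes the per-iteration cost linear in p and N_F, matching the operation counts above.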
The analysis of the required computational load can be carried out from a different point of view, if we assume that an error threshold ε is determined in advance for terminating the iteration. Then we are interested in finding the smallest m_1 or m_p, and of course that choice of the order p which minimizes the total number of computations. By using (13), m_1 is determined by m_1 = ⌈log(ε/c)/log(c)⌉, where ⌈x⌉ is the smallest integer which is greater than or equal to x. For the higher order algorithms, m_p is given by

m_p = ⌈log(log(ε)/log(c))/log(p)⌉,   (32)

and the optimum order p_opt minimizes C_p/N_F.
Two examples with c = 0.9 are given in Table II. In the first example, ε = 10^{-3}, p_opt = 3, m_3 = 4, and C_3 = 21 N_F. In the second example, ε = 10^{-6}, p_opt = 2, m_2 = 8, and C_2 = 25 N_F. Note that in the last example the linear algorithm would require m_1 = 131 iterations and C_1 = 265 N_F complex operations in order to meet the same error criterion.
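A short sketch reproducing these numbers from (32) and the circulant-case cost C_p = [m_p(2p − 1) + 1]N_F (function names are ours):

```python
import math

def m_p(p, eps, c):
    """Equation (32)."""
    return math.ceil(math.log(math.log(eps) / math.log(c)) / math.log(p))

def cost(p, eps, c):
    """C_p / N_F in the circulant (DFT) case."""
    return m_p(p, eps, c) * (2 * p - 1) + 1

print(min(range(2, 10), key=lambda q: cost(q, 1e-3, 0.9)), cost(3, 1e-3, 0.9))  # 3, 21
print(min(range(2, 10), key=lambda q: cost(q, 1e-6, 0.9)), cost(2, 1e-6, 0.9))  # 2, 25
```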
In conclusion, the computational load required by the p-th order algorithm is indeed smaller when compared to the computational load required by the linear algorithm. This advantage is further amplified if the order p is a composite number. Then arithmetic computations are reduced dramatically, due to the decomposition of the p-th order algorithm into lower order algorithms, as was discussed by Morris et al. [16].
V. COMBINED ALGORITHMS
An attractive feature of the linear iterative algorithms (11) and (28) is the possibility of incorporating prior knowledge about the solution into the restoration process, in the form of constraints [18]. Among the different constraints, the nonlinear positivity constraint has been shown to be very powerful and useful [18]. However, according to our experimental evidence, when the positivity constraint is used with the higher order algorithms, it generally leads to erroneous results or causes divergence. The qualitative explanation we offer at this point is that this behavior is due to the decoupling of the computation of D_k from the computation of x_k in (18), (24), and (29). That is, there is no adjustment mechanism in the higher order algorithms, as there is with the linear algorithm via the error term (y − D x_k) in (11) or the error term (b − A x_k) in (28). Therefore, the development of constrained higher order algorithms is an open research topic. A first step in this direction is an iterative algorithm which makes use of both the linear and the p-th order algorithms along with the application of constraints, as discussed next for the algorithm in Section II [3], [10].
Let us denote by k_1 and k_p the iteration numbers of the first and p-th order algorithms, respectively. According to (30) and (31), a combination of these algorithms can produce the same restoration results as each algorithm alone. More specifically, given a positive number ε, the required total numbers of iterations m_1 and m_p for algorithms (11) and (18), respectively, are determined as discussed in Section IV. If m_p is even, then the p-th order algorithm updates the solution only at its odd iteration steps except at the last one, while its even iteration steps are replaced by N(k_p) equivalent iterations of algorithm (11); the opposite occurs for m_p odd. The last iteration of the p-th order algorithm is replaced by K = m_1 − p^{m_p − 1} iterations of algorithm (11). For example, if m_p is even, then for k_p = 1, 3, ..., m_p − 1 we have k_1 = 0, while for k_p = 2, 4, ..., m_p − 2 we have k_1 = 1, 2, ..., N(k_p), and for k_p = m_p we have k_1 = 1, 2, ..., K. In general, if we denote by k the iteration number of the combined algorithm, then k is determined by k_p, k_1, and the parity mod(k_p, 2) of the p-th order step (33), where mod(i, 2) represents the modulo operation. When the combined algorithm is used, the proper deterministic constraint(s) can be imposed whenever algorithm (11) is applied. Note that, since after the incorporation of constraints (30) does not hold as is, the range of k_1 can be smaller than N(k_p).
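A schematic NumPy sketch of one possible realization of the combined constrained iteration (our own simplified reading of the schedule above: unconstrained p-th order steps alternating with blocks of positivity-constrained linear steps; the exact number of linear steps per block follows (30) in the text but is left as a parameter here):

```python
import numpy as np

def combined_restoration(D, y, beta, p, m_p, num_linear):
    """Alternate one p-th order step (18) with a block of constrained
    linear steps (11); the positivity constraint np.maximum(x, 0)
    is applied only inside the linear part, as suggested in Section V."""
    n = D.shape[1]
    I = np.eye(n)
    Dk = beta * (D.T @ D)
    x = beta * (D.T @ y)
    for k_p in range(1, m_p + 1):
        if k_p % 2 == 1:                       # p-th order (unconstrained) step
            T = I - Dk
            Phi, P = np.eye(n), np.eye(n)
            for _ in range(1, p):
                P = P @ T
                Phi = Phi + P
            Dk = Phi @ Dk
            x = Phi @ x
        else:                                  # equivalent constrained linear steps
            for _ in range(num_linear):
                x = np.maximum(x + beta * (D.T @ (y - D @ x)), 0.0)
    return x
```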
Adaptive regularized iterative image restoration algorithms have also appeared in the literature [6], [11], based on iterations (11) and (28). We have proposed a combined adaptive iterative algorithm based on iterations (18), (24), and (29) [4]. The same idea is used as the one described above. That is, one iteration of the p-th order algorithm (18), for example, is combined with N(k_p) iterations (30) of the linear adaptive algorithm, forming a combined adaptive iteration step.
VI. EXPERIMENTAL RESULTS
Certain experimental results which demonstrate some of the basic ideas of the previous sections are described in this section. A synthetic signal of length 64 samples, consisting of two impulses, x(n) = δ(n − 30) + δ(n − 35), is used in our experiments. The simulated distortion is due to motion over 11 samples. The impulse response of such a distorting system is a rectangle, resulting in a singular matrix D. The normalized residual error [the left-hand side of conditions (13) and (19)] is shown in Fig. 3, resulting respectively from the application of iterations (11) and (18) for different values of p. In our simulations, the value of x^+ was substituted by the available original signal x.
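The experimental setup is straightforward to reproduce; in the sketch below the centering and border handling of the motion blur are our assumptions, since the text does not specify them (and they affect whether D is exactly or only nearly singular):

```python
import numpy as np

n, L = 64, 11                        # signal length and motion extent
x = np.zeros(n)
x[30] = 1.0                          # x(n) = delta(n - 30) + delta(n - 35)
x[35] = 1.0

# Motion blur over 11 samples: rectangular impulse response, centered here
# and truncated at the borders (an illustrative choice).
D = np.array([[1.0 / L if abs(i - j) <= (L - 1) // 2 else 0.0
               for j in range(n)] for i in range(n)])

y = D @ x
print(np.linalg.svd(D, compute_uv=False)[-1])   # very small: D is (nearly) singular
```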
The normalized error is shown again in Fig. 4 with the application of the positivity constraint. The combined algorithm described in Section V for m_p even is implemented in this case for the higher order algorithms. It is observed in this case that the smaller the parameter p, the higher the convergence rate. This is due to the fact that the smaller the parameter p, the more often the higher order algorithm is applied. Due to this observation, the linear algorithm combined with the algorithm proposed in Section II-D is not shown in Fig. 4, since its performance is very similar to the performance of the quadratic algorithm.
Finally, the algorithm with quadratic convergence is compared to the algorithm proposed in Section II-D. The distortion is the same as before, while an image line is used as a test signal. The normalized error is shown in Fig. 5. The faster convergence of the new algorithm over the quadratic algorithm is obtained with no extra computational load.

[Fig. 3. Normalized residual error versus iteration number for algorithms (11) and (18), for various values of p.]

[Fig. 4. Normalized residual error versus iteration number for the linear and combined algorithms of Section V, for various values of p, with the incorporation of the positivity constraint.]

[Fig. 5. Normalized residual error versus iteration number for algorithm (24), for various values of η (η = 1.0 corresponds to the quadratic algorithm).]
VII. DISCUSSION AND CONCLUSIONS
A number of iterative signal restoration algorithms have been derived based on a representation theorem for the generalized inverse of a matrix. Some of these algorithms have appeared in the literature and some are new. An algorithm related to the method of stochastic approximations can also be derived based on Theorem 1 [3], [5], [12]. Therefore, the approach followed here unifies the derivation of a large number of iterative restoration algorithms. These algorithms are applicable to the general case when additive noise is considered in the distortion model. The restoration approach is the same, since the solution of (1) is replaced by the solution of (25). According to the analysis of Section IV, the application of the higher order algorithms is more advantageous due to the computational savings. In addition, because they require a smaller number of iterations to converge, truncation or roundoff errors may be less pronounced.
One of the attractive properties of the linear restoration algorithms is the possibility of incorporating constraints in the iteration, which express a priori knowledge about the solution. Although the straightforward incorporation of constraints in the higher order algorithms produces undesirable results, we have proposed an algorithm which combines the constrained linear and the p-th order iterations. This combined algorithm converges faster than the constrained linear algorithm and with less overall computational load.

The algorithms presented can be used for the restoration of signals of any dimensionality, as well as for the solution of any type of inverse problem which accepts the formulation of (1) or (25). The application of the algorithms to band-limited signal extrapolation is currently under investigation. Since the approach presented here in deriving iterative restoration algorithms is general, the use of other families of functions h_k(z) which satisfy Theorem 1, and may lead to useful iterative restoration algorithms, is also currently under investigation.
APPENDIX
PROOF OF THEOREM 3
Denote by λ_i, i = 1, ..., n, the eigenvalues of A. Since A is positive semidefinite, λ_i > 0 for i = 1, ..., r and λ_i = 0 for i = r + 1, ..., n, where r is the rank of the matrix. Since A is symmetric, it has a complete set of orthonormal eigenvectors u_1, ..., u_n, where (u_i, u_j) = δ_ij. That is, A can be written as

A = U Λ U^T,   (A-1)

where U = [u_1, ..., u_n] and Λ = diag(λ_1, ..., λ_n). If we define

T = (I − βA) = U(I − βΛ)U^T,   (A-2)

and T_k = I − A_k, then the iterative algorithm (29) can be written as

T_{k+1} = T_k^p,   x_{k+1} = Σ_{l=0}^{p−1} T_k^l x_k.   (A-3)

Solving for x_k, we obtain the following formula:

x_k = Σ_{l=0}^{p^k − 1} (I − βA)^l βb,   (A-4)

or, by using (A-2),

x_k = Σ_{l=0}^{p^k − 1} U(I − βΛ)^l U^T βb.   (A-5)
Since A is symmetric, R^n = N(A) ⊕ R(A^T) = N(A^T) ⊕ R(A). The (n − r) eigenvectors that correspond to the zero eigenvalues of A span N(A), since A u_i = λ_i u_i = 0, or u_i ∈ N(A), for i = r + 1, ..., n, and (u_i, u_j) = δ_ij for i, j ∈ {r + 1, ..., n}. Then R(A) = R(A^T) = span{u_1, ..., u_r}, and b can be written as

b = Uc or U^T b = c,   (A-6)

where c = [c_1, ..., c_n]^T is the coefficient column vector. From (A-5) and (A-6) we get

x_k = Σ_{l=0}^{p^k − 1} U(I − βΛ)^l βc.   (A-7)
Since A is positive semidefinite, 0 < λ_max ≤ ||A||; in fact, λ_max = ||A||_2. It is assumed that

0 < β||A|| < 2,   (A-8)

or

0 < βλ_max < 2 or |1 − βλ_max| < 1.   (A-9)

Therefore, since λ_i > 0 for i = 1, ..., r,

|1 − βλ_i| < 1 and (1 − βλ_i)^{v(k)} → 0 for k → ∞,   (A-10)

where v is a positive, strictly increasing function of k such that v(1) ≥ 1 (here v(k) = p^k).
Now, if b ∈ R(A), then

c_{r+1} = ··· = c_n = 0.   (A-11)

Finally, from (A-7) we get

x_k = Σ_{i=1}^{r} c_i λ_i^{-1} [1 − (1 − βλ_i)^{v(k)}] u_i,   (A-12)

and for k → ∞, due to (A-10),

x̂ = x_∞ = Σ_{i=1}^{r} c_i λ_i^{-1} u_i,   (A-13)
where x̂ is the minimum norm solution, since A x̂ = b and the infinite set of solutions is equal to x = x̂ + x̃, where x̃ ∈ N(A). Now, if b ∉ R(A), from (A-7) and (A-10) we get

x_k = Σ_{i=1}^{r} c_i λ_i^{-1} [1 − (1 − βλ_i)^{v(k)}] u_i + βv(k) Σ_{i=r+1}^{n} c_i u_i,   (A-14)

and for k → ∞, x_k → ∞, since at least one of the c_i, where i = r + 1, ..., n, is different from zero. Q.E.D.
REFERENCES
[1] H. C. Andrews and B. R. Hunt, Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[2] H. Bialy, "Iterative Behandlung linearer Funktionalgleichungen," Arch. Ration. Mech. Anal., vol. 4, pp. 166-176, July 1959.
[3] S. N. Efstratiadis, "Fast iterative signal restoration algorithms," M.S. thesis, Dep. Elec. Eng. Comput. Sci., Northwestern Univ., Evanston, IL, June 1988.
[4] S. N. Efstratiadis and A. K. Katsaggelos, "Fast adaptive iterative image restoration algorithms," in Proc. SPIE Symp. Visual Commun. Image Processing '88, Cambridge, MA, Nov. 1988, pp. 10-18.
[5] C. W. Groetsch, Generalized Inverses of Linear Operators. New York: Marcel Dekker, 1977.
[6] A. K. Katsaggelos, J. Biemond, R. M. Mersereau, and R. W. Schafer, "Nonstationary iterative image restoration," in Proc. 1985 Int. Conf. Acoust., Speech, Signal Processing, Tampa, FL, Mar. 1985, pp. 696-699.
[7] ---, "A general formulation of constrained iterative restoration algorithms," in Proc. 1985 Int. Conf. Acoust., Speech, Signal Processing, Tampa, FL, Mar. 1985, pp. 700-703.
[8] A. K. Katsaggelos, "A unified approach to iterative image restoration," in Proc. SPIE Symp. Visual Commun. Image Processing '87, Cambridge, MA, Oct. 1987, pp. 163-167.
[9] A. K. Katsaggelos and S. N. Efstratiadis, "Fast iterative image restoration algorithms," in Proc. 25th Annu. Allerton Conf. Commun., Contr., Comput., Monticello, IL, Sept. 1987, pp. 493-502.
[10] ---, "A unified approach to iterative signal restoration," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, New York, Apr. 1988, pp. 1028-1031.
[11] A. K. Katsaggelos, "Iterative image restoration algorithms," Opt. Eng., vol. 28, no. 7, pp. 735-748, July 1989.
[12] H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems (Appl. Math. Sci., vol. 26). New York: Springer-Verlag, 1978.
[13] R. L. Lagendijk, R. M. Mersereau, and J. Biemond, "On increasing the convergence rate of regularized image restoration algorithms," in Proc. Int. Conf. Acoust., Speech, Signal Processing, Dallas, TX, Apr. 1987, pp. 28.2.1-28.2.4.
[14] C. E. Morris, M. A. Richards, and M. H. Hayes, "Iterative deconvolution algorithm with quadratic convergence," J. Opt. Soc. Amer. A, vol. 4, no. 1, pp. 200-207, Jan. 1987.
[15] ---, "A generalized fast iterative deconvolution algorithm," in Proc. 1987 Int. Conf. Acoust., Speech, Signal Processing, Dallas, TX, Apr. 1987, pp. 36.1.1-36.1.4.
[16] ---, "Fast reconstruction of linearly distorted signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, pp. 1017-1025, July 1988.
[17] A. W. Naylor and G. R. Sell, Linear Operator Theory in Engineering and Science, 2nd ed. New York: Springer-Verlag, 1982.
[18] R. W. Schafer, R. M. Mersereau, and M. A. Richards, "Constrained iterative restoration algorithms," Proc. IEEE, vol. 69, pp. 432-450, Apr. 1981.
[19] S. Singh, S. N. Tandon, and H. M. Gupta, "An iterative restoration technique," Signal Processing, vol. 11, pp. 1-11, 1986.
[20] G. Strang, Linear Algebra and Its Applications, 2nd ed. New York: Academic, 1980.
Aggelos K. Katsaggelos (S'80-M'85) was born in Arnea, Greece, on April 17, 1956. He received the Diploma degree in electrical and mechanical engineering from the Aristotelian University of Thessaloniki, Thessaloniki, Greece, in 1979, and the M.S. and Ph.D. degrees, both in electrical engineering, from the Georgia Institute of Technology, Atlanta, in 1981 and 1985, respectively.

From 1980 to 1985 he was a Research Assistant at the Digital Signal Processing Laboratory of the Electrical Engineering School of Georgia Tech, where he was engaged in research on image restoration. He is currently an Assistant Professor in the Department of Electrical Engineering and Computer Science at Northwestern University, Evanston, IL. During the 1986-1987 academic year he was an Assistant Professor at Polytechnic University, Department of Electrical Engineering and Computer Science, Brooklyn, NY. His current research interests include signal and image processing, processing of moving images, computational vision, and VLSI implementation of signal processing algorithms. He is the Editor of the book Digital Image Restoration (New York: Springer-Verlag).

Dr. Katsaggelos is a member of SPIE, the IEEE-CAS Technical Committee on Visual Signal Processing and Communications, the Technical Chamber of Commerce of Greece, and Sigma Xi.

Serafim N. Efstratiadis (S'89) was born in Greece in 1964. He received the Diploma degree in electrical engineering from the Aristotelian University of Thessaloniki, Thessaloniki, Greece, in 1986 and the M.S. degree in electrical engineering from Northwestern University, Evanston, IL, in 1988. He is currently working toward the Ph.D. degree in the area of motion-compensated image sequence restoration.

He has been a Research Assistant, and currently he is a Teaching Assistant, in the Department of Electrical Engineering and Computer Science at Northwestern University. His research interests include multidimensional signal processing, image modeling, identification, restoration, and video communications.

Mr. Efstratiadis is a member of the Technical Chamber of Commerce of Greece.