Soft Comput
DOI 10.1007/s00500-017-2885-z
METHODOLOGIES AND APPLICATION
Differential evolution with Gaussian mutation and dynamic
parameter adjustment
Gaoji Sun¹ · Yanfei Lan² · Ruiqing Zhao²
© Springer-Verlag GmbH Germany 2017
Abstract Differential evolution (DE) is a remarkable evolutionary algorithm for global optimization over continuous search spaces, whose performance is significantly influenced by its mutation operator and control parameters (scaling factor and crossover rate). To enhance the performance of DE, we adopt a novel Gaussian mutation operator and a modified common mutation operator to collaboratively produce new mutant vectors, and employ a periodic function and a Gaussian function to generate the required values of the scaling factor and crossover rate, respectively. In the proposed DE variant (denoted GPDE), the two adopted mutation operators are adaptively applied to generate the mutant vector of each individual based on their own cumulative scores, the periodic scaling factor provides a better balance between exploration and exploitation, and the Gaussian-function-based crossover rate takes fluctuating values, which can enhance population diversity. To verify the performance of the proposed GPDE, a suite of thirty benchmark functions and four real-world problems are used in simulation experiments. The simulation results demonstrate that the proposed GPDE performs significantly better than five state-of-the-art DE variants and two other meta-heuristic algorithms.
Communicated by V. Loia.
✉ Gaoji Sun
gsun@zjnu.edu.cn
✉ Yanfei Lan
lanyf@tju.edu.cn
1 College of Economic and Management, Zhejiang Normal University, Jinhua 321004, China
2 Institute of Systems Engineering, Tianjin University, Tianjin 300072, China
Keywords Differential evolution · Gaussian mutation · Dynamic parameter adjustment · Evolutionary computation · Global optimization
1 Introduction
Differential evolution (DE), proposed by Storn and Price (1997), is a simple yet efficient evolutionary algorithm (EA) for global numerical optimization. Due to its simple structure and ease of use, DE has been successfully applied to many real-world problems, including decision making (Zhang et al. 2010), dynamic scheduling (Tang et al. 2014), parameter optimization (Gong and Cai 2014), spam detection (Idris et al. 2014), system fault diagnosis (Zhao et al. 2014) and motion estimation (Cuevas et al. 2013). More details on recent DE research can be found in the literature reviews (Neri and Tirronen 2010; Das and Suganthan 2011a) and the references therein.
Three evolutionary operators (mutation, crossover and selection) and three control parameters (population size, scaling factor and crossover rate) are included in the original DE algorithm, and they have a significant influence on its performance. Thus, many researchers have worked on improving DE by designing new evolutionary operators, combining multiple operators, and adopting adaptive or self-adaptive strategies for the control parameters. Although various DE variants have been proposed, considerable room for improvement remains, owing to the difficulty of balancing global exploration ability and local exploitation ability (Lin and Gen 2009; Črepinšek et al. 2013).
A Gaussian function can randomly produce new solutions around a given position, which may provide excellent exploitation ability. Meanwhile, a periodic or fluctuating parameter adjustment strategy can potentially achieve a good balance between exploitation around already-found good solutions and exploration of unvisited regions of the search space. Inspired by these observations, we design a novel Gaussian mutation operator (which takes the position of the best individual among three randomly selected individuals and the distance between the other two as the mean and standard deviation, respectively, of a Gaussian distribution) and a modified common mutation operator (denoted DE/rand-worst/1) to collaboratively produce the new potential position of every individual; the collaborative rule between them relies on their cumulative scores during the evolutionary process. In addition, the scaling factor adopts a cosine function to adjust its value periodically, and the crossover rate employs a Gaussian function to dynamically adjust the population diversity during the evolutionary process. Finally, a novel DE variant, called GPDE for short, is proposed by combining the above-mentioned Gaussian mutation operator, DE/rand-worst/1 and the parameter adjustment strategies. A suite of 30 benchmark functions with different dimensions and four real-world optimization problems are applied to evaluate the performance of GPDE, and its performance is compared with five excellent DE variants and two up-to-date meta-heuristic algorithms. The comparative results show that GPDE clearly outperforms the seven compared algorithms. Moreover, the parameter analysis shows that the control parameters adopted within GPDE are robust.
The remainder of this paper is organized as follows. Section 2 briefly introduces the basic operators of the original DE algorithm. Section 3 reviews some recent related work on DE. Section 4 provides a detailed description of the proposed GPDE algorithm and its overall procedure. Section 5 presents the comparison between GPDE and the seven compared algorithms. Section 6 draws the conclusions.
2 Differential evolution
DE is a population-based stochastic search algorithm which
simulates the natural evolutionary process via mutation,
crossover and selection to move its population toward the
global optimum. The DE algorithm mainly contains the fol-
lowing four operations.
2.1 Initialization operation
Similar to other EAs, DE searches for a global optimum in a D-dimensional real parameter space with a population of vectors x_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), i = 1, 2, ..., NP, where NP is the population size. An initial population should cover the entire search space by uniformly randomizing individuals between the prescribed lower bounds L = (L_1, L_2, ..., L_D) and upper bounds U = (U_1, U_2, ..., U_D). The jth component of the ith individual can be initialized as follows:

x_{i,j} = L_j + rand[0,1] · (U_j − L_j), (1)

where rand[0,1] represents a uniformly distributed random number within the interval [0,1] and is used throughout the paper.
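The initialization of Eq. (1) can be sketched in a few lines of Python (a minimal illustration; the function name and the pure-Python list representation are our own choices, not part of the paper):

```python
import random

def initialize_population(NP, lower, upper, rng=random):
    """Uniform random initialization per Eq. (1):
    x_{i,j} = L_j + rand[0,1] * (U_j - L_j)."""
    D = len(lower)
    return [[lower[j] + rng.random() * (upper[j] - lower[j]) for j in range(D)]
            for _ in range(NP)]

# Example: 20 individuals in the 5-dimensional box [-100, 100]^5
pop = initialize_population(20, [-100.0] * 5, [100.0] * 5)
```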
2.2 Mutation operation
After the initialization operation, DE employs a mutation operation to produce a mutant vector v_i = (v_{i,1}, v_{i,2}, ..., v_{i,D}) for each target vector x_i. The following are the five most frequently used mutation operators implemented in various DE algorithms.
(1) DE/rand/1

v_i = x_{r1} + F · (x_{r2} − x_{r3}). (2)

(2) DE/best/1

v_i = x_{best} + F · (x_{r1} − x_{r2}). (3)

(3) DE/current-to-best/1

v_i = x_i + F · (x_{best} − x_{r1}) + F · (x_{r2} − x_{r3}). (4)

(4) DE/best/2

v_i = x_{best} + F · (x_{r1} − x_{r2}) + F · (x_{r3} − x_{r4}). (5)

(5) DE/rand/2

v_i = x_{r1} + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5}). (6)
The indices r1, r2, r3, r4 and r5 in the above equations are mutually exclusive integers randomly generated from the set {1, 2, ..., NP} and are also different from the index i. The parameter F is called the scaling factor, a positive real number that scales the difference vectors. The vector x_{best} = (x_{best,1}, x_{best,2}, ..., x_{best,D}) is the best individual in the current population.
2.3 Crossover operation
After the mutation operation, DE performs a binomial crossover operator on the target vector x_i and its corresponding mutant vector v_i to produce a trial vector u_i = (u_{i,1}, u_{i,2}, ..., u_{i,D}). This process can be expressed as

u_{i,j} = { v_{i,j}, if rand[0,1] ≤ CR or j = j_rand,
            x_{i,j}, otherwise. (7)
The crossover rate CR is a user-specified constant within the interval (0,1) in the original DE, which controls the fraction of trial-vector components inherited from the mutant vector. The index j_rand is an integer randomly chosen from the set {1, 2, ..., D}, which ensures that the trial vector has at least one component different from the target vector.
2.4 Selection operation
After the crossover operation, a selection operation is executed between the trial vector and the target vector according to their fitness values f(·), and the better one survives to the next generation. Without loss of generality, we only consider minimization problems. Specifically, the selection operator can be expressed as follows:

x_i = { u_i, if f(u_i) ≤ f(x_i),
        x_i, otherwise. (8)

From the expression of the selection operator (8), it is easy to see that the population of DE either improves or remains the same in fitness status, but never deteriorates.
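The four operations above can be assembled into the classic DE/rand/1/bin loop. The following is a minimal sketch, not the paper's GPDE: it wires Eqs. (1), (2), (7) and (8) together on a sphere test function, with all parameter values (NP, F, CR, T) chosen only for illustration:

```python
import random

def de_rand_1_bin(f, lower, upper, NP=30, F=0.5, CR=0.9, T=200, seed=1):
    """Minimal classic DE: init Eq. (1), DE/rand/1 mutation Eq. (2),
    binomial crossover Eq. (7), greedy selection Eq. (8)."""
    rng = random.Random(seed)
    D = len(lower)
    pop = [[lower[j] + rng.random() * (upper[j] - lower[j]) for j in range(D)]
           for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(T):
        for i in range(NP):
            # three mutually distinct indices, all different from i
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            jrand = rng.randrange(D)  # guarantees at least one mutant component
            u = [v[j] if (rng.random() <= CR or j == jrand) else pop[i][j]
                 for j in range(D)]
            fu = f(u)
            if fu <= fit[i]:  # selection: keep the better vector
                pop[i], fit[i] = u, fu
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(t * t for t in x)
x, fx = de_rand_1_bin(sphere, [-5.0] * 5, [5.0] * 5)
```

On this smooth unimodal function the population converges close to the origin well within the budget of NP·T evaluations.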
3 Related work
In the past decades, many meta-heuristic algorithms have been proposed, such as the genetic algorithm (Goldberg 1989), differential evolution (Storn and Price 1997), particle swarm optimization (Kennedy et al. 2001), ant colony optimization (Dorigo and Blum 2005) and the joint operations algorithm (Sun et al. 2016). These meta-heuristic algorithms have been successfully applied in various fields, such as production planning (Lan et al. 2012), procurement planning (Sun et al. 2010), location problems (Wang and Watada 2012) and workforce planning (Yang et al. 2017). Among these meta-heuristic algorithms, DE has shown outstanding performance on many test functions and real-world problems, but its performance highly depends on the selected evolutionary operators and the values of the control parameters. To overcome these drawbacks, many variants have been proposed to improve the performance of DE. In this section, we only provide a brief overview of the enhanced approaches that are related to our work.
Many researchers have tried to enhance DE by designing new mutation operators or combining multiple operators. Qin et al. (2009) proposed a self-adaptive DE (SADE) which focuses on the mutation operator selection and crossover rate of DE. Zhang and Sanderson (2009)
presented a self-adaptive DE with optional external archive
(JADE) which employs a new mutation operator called
“DE/current-to-pbest.” Han et al. (2013) introduced a group-
based DE variant (GDE) which divides the population into
two groups and each group employs a different mutation
operator. Wang et al. (2013) proposed a modified Gaus-
sian bare-bones DE variant (MGBDE) which combines two
mutation operators, and one of the mutation operators is
designed based on Gaussian distribution. Das et al. (2009)
presented two kinds of topological neighborhood models
and embedded them into the mutation operators of DE.
Gong et al. (2011a) introduced a simple strategy adapta-
tion mechanism (SaM) which can be used for coordinating
different mutation operators. Many other DE variants also
adopted newly designed mutation operators or multi-mutation-operator strategies with different search features, such as
NDi-DE (Cai and Wang 2013), MS-DE (Wang et al. 2014),
CoDE (Wang et al. 2011), HLXDE (Cai and Wang 2015),
MDE_pBX (Islam et al. 2012), AdapSS-JADE (Gong et al.
2011b), IDDE (Sun et al. 2017).
Some other researchers applied parameter adjustment to
improve the performance of DE. For instance, Draa et al.
(2015) introduced sinusoidal differential evolution (SinDE),
which adopts two sinusoidal formulas to adjust the values of
scaling factor and crossover rate. Brest et al. (2006) proposed
a self-adaptive scheme for the DE’s control parameters. Liu
and Lampinen (2005) applied fuzzy logic controllers to adapt
the value of crossover rate. Zhu et al. (2013) adopted an adap-
tive population tuning scheme to enhance DE. Ghosh et al.
(2011) introduced a control parameter adaptation strategy,
which is based on the fitness values of individuals in DE
population. Yu et al. (2014) proposed a two-level adaptive
parameter control strategy, which is based on the optimiza-
tion states and the fitness values of individuals. Sarker et al.
(2014) introduced a new mechanism to dynamically select
the best performing combinations of control parameters,
which is based on the success rate of each parameter com-
bination. Karafotias et al. (2015) provided a comprehensive
overview about the parameter control in evolutionary algo-
rithms.
Actually, many of the aforementioned approaches simultaneously utilize new evolutionary operators and adaptive control parameters to enhance the performance of DE, including SADE (Qin et al. 2009), JADE (Zhang and Sanderson 2009), MGBDE (Wang et al. 2013) and CoDE (Wang et al. 2011).
In addition, Mallipeddi et al. (2011) employed a pool of distinct mutation operators along with a pool of values for each control parameter, which coexist and compete to produce offspring during the evolutionary process. Yang et al.
(2015) proposed a mechanism named auto-enhanced popu-
lation diversity to automatically enhance the performance of
DE, which is based on the population diversity at the dimen-
sional level. Biswas et al. (2015) presented an improved
information-sharing mechanism among the individuals to
enhance the niching behavior of DE. Tang et al. (2015) introduced a novel variant of DE with an individual-dependent mechanism which includes an individual-dependent parameter setting and mutation operator. However, these DE
variants still cannot resolve the problems of premature con-
vergence or stagnation when handling complex optimization
problems.
4 Description of GPDE
In this section, we first provide a detailed description of the new Gaussian mutation operator, the modified common mutation operator and the cooperative rule between them, and then summarize the overall procedure of GPDE.
4.1 Gaussian mutation operator
The Gaussian distribution is very important and often used in statistics and the natural sciences to represent real-valued random variables; it can be denoted by N(μ, σ²), where μ and σ are its mean and standard deviation, respectively. It is well known that the Gaussian distribution obeys the 3-σ rule. Specifically, about 68% of the values drawn from the Gaussian distribution N(μ, σ²) are within the interval [μ−σ, μ+σ]; about 95% of the values lie within the interval [μ−2σ, μ+2σ]; and about 99.7% are within the interval [μ−3σ, μ+3σ].
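The 68–95–99.7 figures quoted above are easy to check empirically by sampling. A quick Monte Carlo sketch (our own illustration, not part of the paper):

```python
import random

rng = random.Random(0)
mu, sigma, n = 0.0, 1.0, 100_000
samples = [rng.gauss(mu, sigma) for _ in range(n)]

def fraction_within(k):
    """Fraction of samples within k standard deviations of the mean."""
    return sum(abs(s - mu) <= k * sigma for s in samples) / n

p1, p2, p3 = fraction_within(1), fraction_within(2), fraction_within(3)
# p1, p2, p3 approximate 0.683, 0.954 and 0.997, respectively
```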
The 3-σ rule of the Gaussian distribution provides a convenient way to control the search zone according to the requirements of the considered problem. Actually, the Gaussian distribution has been widely used to adjust the values of control parameters, as in SADE (Qin et al. 2009), MGBDE (Wang et al. 2013), DEGL (Das et al. 2009) and MDE_pBX (Islam et al. 2012), but it has rarely been applied to generate new mutation operators. In order to take full advantage of the Gaussian distribution, we propose the following novel mutation operator, which is combined with the crossover operator to directly produce the new trial vector (denoted by u^g_i = (u^g_{i,1}, u^g_{i,2}, ..., u^g_{i,D})) for the ith individual, i = 1, 2, ..., NP:

u^g_{i,j} = { N(x_{r1,j}, (x_{r2,j} − x_{r3,j})²), if j = j_rand or rand[0,1] ≤ CR^i_t,
              x_{i,j}, otherwise, (9)
where the indices r1, r2, r3 are mutually exclusive integers randomly generated from the set {1, 2, ..., NP} and are also different from the base index i. Note that the r1th individual is the best one among the three randomly selected individuals, and the novel Gaussian mutation operator N(x_{r1,j}, (x_{r2,j} − x_{r3,j})²) in formula (9) takes the position x_{r1,j} of the best one and the distance |x_{r2,j} − x_{r3,j}| between the other two as the mean and standard deviation, respectively; meanwhile, it is only executed when the triggering condition j = j_rand or rand[0,1] ≤ CR^i_t is met, which means that the proposed Gaussian mutation operator only transfers a certain proportion of the dimensions of each individual to new positions around the best selected one. Furthermore, if the new position in one dimension is within one standard deviation |x_{r2,j} − x_{r3,j}| of the mean x_{r1,j}, we say that it executes an exploitation operation in this dimension; otherwise, it carries out an exploration operation. Therefore, according to the 3-σ rule, the designed Gaussian mutation operator can simultaneously conduct exploitation and exploration, especially the former. In addition, the dynamic parameter CR^i_t expresses the crossover rate of the ith individual in the tth generation, which can be computed by

CR^i_t = N(0.5, V), i = 1, 2, ..., NP, t = 1, 2, ..., T, (10)
where V indicates the variance of the Gaussian distribution N(0.5, V) and is applied to control the fluctuation of the crossover rate, and T is the maximum allowable number of generations. It should be pointed out that V is a user-specified constant that must simultaneously ensure that the dynamic crossover rate CR^i_t has a certain extent of fluctuation and that its values almost always fall into the range [0,1]; thus its reasonable interval is [0.01, 0.1]. Actually, individuals employing different crossover rates in the same generation can potentially enhance the population diversity.
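Eqs. (9) and (10) can be sketched as follows. This is our own reading of the operator, with two stated assumptions: the two non-best indices r2 and r3 are kept in their sampled order (the paper only fixes r1 as the best of the three), and CR^i_t is clipped to [0, 1] as an added safeguard (the paper only says its values almost always fall in that range):

```python
import random

def gaussian_trial(pop, fit, i, CR_it, rng):
    """Trial vector u^g_i per Eq. (9): among three random indices (all != i),
    r1 is the best (lowest fitness, minimization); each triggered component
    is drawn from N(x_{r1,j}, (x_{r2,j} - x_{r3,j})^2)."""
    NP, D = len(pop), len(pop[0])
    idx = rng.sample([k for k in range(NP) if k != i], 3)
    r1 = min(idx, key=lambda k: fit[k])        # best of the three
    r2, r3 = [k for k in idx if k != r1]       # order of r2, r3: our assumption
    jrand = rng.randrange(D)
    u = []
    for j in range(D):
        if j == jrand or rng.random() <= CR_it:
            # mean = best position, std = distance between the other two
            u.append(rng.gauss(pop[r1][j], abs(pop[r2][j] - pop[r3][j])))
        else:
            u.append(pop[i][j])
    return u

def crossover_rate(V, rng):
    """CR^i_t ~ N(0.5, V) per Eq. (10). V is a variance, so the standard
    deviation is sqrt(V); clipping to [0, 1] is our added assumption."""
    return min(1.0, max(0.0, rng.gauss(0.5, V ** 0.5)))

rng = random.Random(3)
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
fit = [0.0, 2.0, 4.0, 6.0, 8.0]
u = gaussian_trial(pop, fit, 0, crossover_rate(0.05, rng), rng)
```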
4.2 DE/rand-worst/1
In a mutation operator, the base individual can be taken as the center point of the search area, the difference vector sets the search direction, and the scaling factor controls the step size. In general, a better base individual has a higher probability of producing better offspring, a more proper direction induces more efficient search behavior of the population, and a periodic scaling factor has a potential advantage in balancing exploration ability and exploitation ability. Based on these considerations, we incorporate the fitness information of the selected individuals and a periodic scaling factor to modify the most popular mutation operator (DE/rand/1). The resulting modified mutation operator (denoted by DE/rand-worst/1), combined with the crossover operator, is applied to directly produce the new trial vector (denoted by u^m_i = (u^m_{i,1}, u^m_{i,2}, ..., u^m_{i,D}), i = 1, 2, ..., NP) of each individual; the corresponding formula can be described as follows:

u^m_{i,j} = { x_{r′1,j} + F_t · (x_{r′2,j} − x_{r′3,j}), if j = j_rand or rand[0,1] ≤ CR^i_t,
              x_{i,j}, otherwise, (11)
where the indices r′1, r′2 and r′3 are mutually exclusive integers randomly chosen from the set {1, 2, ..., NP}, which are also different from the index i. Moreover, the r′3th individual is the worst one among the three randomly selected individuals, which ensures that the base individual is not the worst one and that the search direction is relatively good. In addition, note that the mutation operator DE/rand-worst/1 in formula (11) has the same triggering condition as the Gaussian mutation operator in formula (9). As for the periodic scaling factor, we apply a cosine function to realize the periodic adjustment strategy, which can be expressed via the following formula:

F_t = cos(t · FR · π), (12)

where F_t is the value of the scaling factor in the tth generation, and FR represents the frequency of the cosine function, a user-specified constant applied to adjust the turnover rate between the exploration and exploitation operations. Usually, a smaller frequency FR corresponds to a smaller turnover rate.
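Eqs. (11) and (12) can be sketched as follows (a minimal illustration under one stated assumption: the paper only fixes r′3 as the worst of the three sampled indices, so the split of the remaining two into r′1 and r′2 here is our own choice):

```python
import math
import random

def F_t(t, FR):
    """Periodic scaling factor of Eq. (12): F_t = cos(t * FR * pi)."""
    return math.cos(t * FR * math.pi)

def rand_worst_trial(pop, fit, i, Ft, CR_it, rng):
    """Trial vector u^m_i per Eq. (11), DE/rand-worst/1: among three random
    indices (all != i), r3 is the worst (highest fitness, minimization),
    so the base individual r1 is never the worst of the three."""
    NP, D = len(pop), len(pop[0])
    idx = rng.sample([k for k in range(NP) if k != i], 3)
    r3 = max(idx, key=lambda k: fit[k])        # worst of the three
    r1, r2 = [k for k in idx if k != r3]       # split into r1, r2: our choice
    jrand = rng.randrange(D)
    return [pop[r1][j] + Ft * (pop[r2][j] - pop[r3][j])
            if (j == jrand or rng.random() <= CR_it) else pop[i][j]
            for j in range(D)]

rng = random.Random(7)
pop = [[0.0] * 3, [1.0] * 3, [2.0] * 3, [3.0] * 3, [4.0] * 3]
fit = [0.0, 1.0, 2.0, 3.0, 4.0]
u = rand_worst_trial(pop, fit, 0, F_t(10, 0.01), 0.5, rng)
```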
4.3 Cooperative rule
Up to now, two mutation operators (Gaussian and DE/rand-worst/1) have been introduced, which are combined with the same crossover operator to produce the new trial vector u_i for each individual; the cooperative rule between them then becomes a pressing problem. A natural and reasonable rule is to adaptively execute one of the two mutation operators according to their own performance. To evaluate the performance of the adopted mutation operators during the evolutionary process, we introduce a new concept called the "cumulative score" into the mutation operation. For the two adopted mutation operators, their cumulative scores during the evolutionary process can be obtained via the following three steps. Firstly, their initial cumulative scores (denoted by CS^g_0 and CS^m_0) are set to 0.5. Secondly, suppose that their historical cumulative scores are CS^g_{t−1} and CS^m_{t−1}; then their single-period scores in the current generation (denoted by S^g_t and S^m_t) can be obtained via the following two formulas, respectively:

S^g_t = { C^g_t / N^g_t, if N^g_t > 0,
          CS^g_{t−1} / t, otherwise, (13)

S^m_t = { C^m_t / N^m_t, if N^m_t > 0,
          CS^m_{t−1} / t, otherwise, (14)
where N^g_t and N^m_t represent the numbers of new trial vectors produced by the Gaussian mutation operator and DE/rand-worst/1 in the tth generation, respectively. Actually, the value of N^g_t is always equal to NP − N^m_t, owing to the fact that the population only executes NP mutation operations in one generation. C^g_t and C^m_t represent the numbers of successful executions of the two operators, respectively, where success means that the newly produced trial vector is better than the original target vector. Note that formulas (13) and (14) express that the current single-period score of each adopted mutation operator is equal to its current success rate when it is executed at least once in the current generation, and otherwise takes the average value of its historical cumulative score. Thirdly, after obtaining the current single-period scores of the two adopted mutation operators, their current cumulative scores can be updated by the following two formulas:

CS^g_t = CS^g_{t−1} + S^g_t, (15)

CS^m_t = CS^m_{t−1} + S^m_t. (16)
Now, the value of the parameter involved in the cooperative rule can be derived in terms of the two mutation operators' cumulative scores, which can be calculated by

CS_t = CS^g_t / (CS^g_t + CS^m_t), (17)

where the parameter CS_t is applied to control the selection probability of the Gaussian mutation operator in the next generation. Furthermore, the detailed cooperative rule can be described as follows:

u_i = { u^g_i, if rand[0,1] < CS_t,
        u^m_i, otherwise. (18)

The cooperative rule (18) shows that the chance of executing each of the two adopted mutation operators relies on their cumulative scores, and the one with the higher cumulative score has more chance to produce the trial vectors. After the new trial vector is produced, GPDE compares the fitness values of each individual x_i and its new trial vector u_i and then produces the offspring via the selection operator (8).
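The three bookkeeping steps of Eqs. (13)–(18) can be sketched as a small helper class (our own packaging; the class and method names are not from the paper):

```python
import random

class CooperativeSelector:
    """Adaptive choice between the two operators via cumulative scores,
    Eqs. (13)-(18)."""

    def __init__(self):
        self.CS_g = 0.5  # initial cumulative score, Gaussian operator
        self.CS_m = 0.5  # initial cumulative score, DE/rand-worst/1

    def prob_gaussian(self):
        # Eq. (17): selection probability of the Gaussian operator
        return self.CS_g / (self.CS_g + self.CS_m)

    def choose(self, rng):
        # Eq. (18): pick the Gaussian trial with probability CS_t
        return "gaussian" if rng.random() < self.prob_gaussian() else "rand-worst"

    def update(self, t, n_g, c_g, n_m, c_m):
        # Eqs. (13)-(14): single-period score = success rate c/n if the
        # operator was used this generation, else CS_{t-1} / t
        S_g = c_g / n_g if n_g > 0 else self.CS_g / t
        S_m = c_m / n_m if n_m > 0 else self.CS_m / t
        # Eqs. (15)-(16): accumulate the single-period scores
        self.CS_g += S_g
        self.CS_m += S_m

sel = CooperativeSelector()
# Generation 1: each operator produced 10 trials; 6 vs 2 of them succeeded
sel.update(t=1, n_g=10, c_g=6, n_m=10, c_m=2)
```

The selector then favors the Gaussian operator, since its success rate (0.6) exceeds that of DE/rand-worst/1 (0.2).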
4.4 The overall procedure of GPDE
We have provided a detailed description of the Gaussian mutation operator, DE/rand-worst/1, and the cooperative rule between them. Now, we summarize the overall procedure of GPDE in Algorithm 1.
5 Comparison and result analysis
In this section, we first provide the test functions, real-world problems and compared DE algorithms, then present the
Table 1 Summary of the IEEE CEC 2014 benchmark functions

Type | No. | Function | f*
Unimodal | f1 | Rotated high conditioned elliptic function | 100
Unimodal | f2 | Rotated bent cigar function | 200
Unimodal | f3 | Rotated discus function | 300
Multimodal | f4 | Shifted and rotated Rosenbrock function | 400
Multimodal | f5 | Shifted and rotated Ackley's function | 500
Multimodal | f6 | Shifted and rotated Weierstrass function | 600
Multimodal | f7 | Shifted and rotated Griewank's function | 700
Multimodal | f8 | Shifted Rastrigin's function | 800
Multimodal | f9 | Shifted and rotated Rastrigin's function | 900
Multimodal | f10 | Shifted Schwefel's function | 1000
Multimodal | f11 | Shifted and rotated Schwefel's function | 1100
Multimodal | f12 | Shifted and rotated Katsuura function | 1200
Multimodal | f13 | Shifted and rotated HappyCat function | 1300
Multimodal | f14 | Shifted and rotated HGBat function | 1400
Multimodal | f15 | Shifted and rotated Expanded Griewank's plus Rosenbrock's function | 1500
Multimodal | f16 | Shifted and rotated Expanded Scaffer's F6 function | 1600
Hybrid | f17 | Hybrid function 1 | 1700
Hybrid | f18 | Hybrid function 2 | 1800
Hybrid | f19 | Hybrid function 3 | 1900
Hybrid | f20 | Hybrid function 4 | 2000
Hybrid | f21 | Hybrid function 5 | 2100
Hybrid | f22 | Hybrid function 6 | 2200
Composition | f23 | Composition function 1 | 2300
Composition | f24 | Composition function 2 | 2400
Composition | f25 | Composition function 3 | 2500
Composition | f26 | Composition function 4 | 2600
Composition | f27 | Composition function 5 | 2700
Composition | f28 | Composition function 6 | 2800
Composition | f29 | Composition function 7 | 2900
Composition | f30 | Composition function 8 | 3000

Search space: [−100, 100]^D
comparative results between GPDE and the other seven algorithms, and finally analyze the effects of the control parameters on the performance of GPDE.
5.1 Test functions and real-world problems
In order to evaluate the performance of GPDE, we apply a set of 30 well-known test functions from the IEEE CEC 2014 suite (Liang et al. 2013) and four real-world problems to conduct the comparative experiment. Specifically, based on their characteristics, the 30 test functions can be divided into four classes, which are summarized in Table 1. Moreover, the adopted test functions are used in the comparative experiment with dimensions equal to 30, 50 and 100, respectively. In addition, the four real-world problems (denoted by rf1, rf2, rf3 and rf4, respectively) are widely used to evaluate the performance of various algorithms; they are applications to parameter estimation for frequency-modulated sound waves (Das and Suganthan 2011b), spread spectrum radar poly-phase code design (Das and Suganthan 2011b), systems of linear equations (García-Martínez et al. 2008), and parameter optimization for the polynomial fitting problem (Herrera and Lozano 2000), respectively.
5.2 Compared algorithms and parameter configurations
In our comparative experiment, GPDE is compared with five
excellent DE variants, including SADE (Qin et al. 2009),
JADE (Zhang and Sanderson 2009), GDE (Han et al. 2013),
MGBDE (Wang et al. 2013) and SinDE (Draa et al. 2015).
Table 2 Comparative results on functions f1–f15 with D = 30
Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f1Mean 3.22e+04 3.76e+04 6.85e+03 5.57e+03 1.92e+06 8.63e+04 9.12e+05 5.21e+04
Std. 2.17e+04 3.66e+04 6.41e+03 3.45e+03 1.11e+06 6.15e+04 6.40e+05 3.51e+04
+/=/−− − − − + + + −
f2Mean 4.84e20 3.08e20 3.06e22 3.92e15 2.90e24 7.49e01 8.87e+03 1.35e23
Std. 2.39e19 9.38e20 3.80e22 1.95e14 1.01e23 1.12e+00 9.26e+03 2.17e22
+/=/−+ + + + = + + −
f3Mean 9.82e10 1.34e01 3.56e03 2.08e18 2.18e12 4.32e+00 1.95e+04 5.42e25
Std. 4.91e09 6.05e01 9.98e03 9.54e18 9.18e12 8.66e+00 1.76e+04 1.83e24
+/=/−+ + + + + + + −
f4Mean 1.39e+01 1.94e+01 5.48e+00 9.95e05 1.43e+01 1.15e+01 4.09e+01 2.99e+00
Std. 2.72e+01 3.09e+01 1.89e+01 4.48e04 2.28e+01 2.54e+01 5.16e+01 1.49e+01
+/=/−+ + + − + + + −
f5Mean 2.04e+01 2.00e+01 2.09e+01 2.02e+01 2.06e+01 2.09e+01 2.00e+01 2.00e+01
Std. 4.74e02 3.99e03 1.48e01 3.92e02 4.77e02 1.21e01 9.03e03 6.53e06
+/=/−+ + + + + + + −
f6Mean 8.93e+00 1.14e+01 8.25e+00 2.22e+01 1.93e02 4.03e+00 1.33e+01 1.33e+00
Std. 2.03e+00 1.75e+00 2.85e+00 3.64e+00 9.27e02 2.02e+00 3.13e+00 1.16e+00
+/=/−+ + + + − + + −
f7Mean 1.89e02 2.69e02 1.30e02 1.12e02 0.00e+00 2.55e02 2.32e02 2.17e03
Std. 2.10e02 2.17e02 1.75e02 1.44e02 0.00e+00 4.00e02 2.63e02 4.17e03
+/=/−+ + + + = + + −
f8Mean 4.78e+00 3.98e02 6.38e+01 1.29e+02 4.70e01 2.60e+01 0.00e+00 9.79e+00
Std. 2.54e+00 1.99e01 1.58e+01 3.17e+01 5.69e01 7.64e+00 0.00e+00 3.72e+00
+/=/−− − + + − + − −
f9Mean 4.59e+01 4.97e+01 7.46e+01 1.56e+02 3.58e+01 3.13e+01 7.34e+01 3.46e+01
Std. 1.09e+01 8.71e+00 2.74e+01 2.95e+01 7.51e+00 1.02e+01 1.87e+01 9.26e+00
+/=/−+ + + + = = + −
f10 Mean 3.40e+00 6.98e+00 1.80e+03 1.25e+03 9.31e+00 2.42e+03 4.49e03 1.25e+02
Std. 2.09e+00 2.40e+01 6.43e+02 7.96e+02 4.77e+00 1.18e+03 8.05e03 9.65e+01
+/=/−− − + + − + − −
f11 Mean 2.42e+03 2.04e+03 5.06e+03 2.85e+03 2.35e+03 6.58e+03 2.07e+03 1.97e+03
Std. 5.45e+02 2.43e+02 1.60e+03 6.29e+02 4.08e+02 4.87e+02 3.70e+02 4.71e+02
+/=/−+ = + + + + = −
f12 Mean 5.91e01 1.28e01 1.77e+00 3.36e01 8.04e01 1.90e+00 8.49e02 1.49e01
Std. 8.56e02 2.61e02 8.70e01 3.67e02 1.24e01 3.80e01 1.85e02 7.84e02
+/=/−+ = + + + + − −
f13 Mean 2.80e01 3.09e01 3.64e01 4.40e01 2.06e01 3.10e01 3.27e01 2.40e01
Std. 5.43e02 5.90e02 7.23e02 7.72e02 5.27e02 6.09e02 9.95e02 6.84e02
+/=/−+ + + + = + + −
f14 Mean 2.41e01 2.50e01 2.95e01 2.58e01 2.42e01 2.82e01 2.06e01 2.22e01
Std. 4.47e02 1.01e01 9.12e02 5.78e02 2.60e02 4.64e02 1.32e01 3.40e02
+/=/−= = + + + + − −
f15 Mean 4.57e+00 1.23e+01 8.12e+00 1.33e+01 4.81e+00 6.57e+00 7.15e+00 3.75e+00
Std. 1.31e+00 6.69e+00 2.95e+00 2.37e+00 9.80e01 5.03e+00 3.14e+00 9.44e01
+/=/−+ + + + + + + −
Table 3 Comparative results on functions f16–f30 with D = 30
Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f16 Mean 1.03e+01 1.02e+01 1.10e+01 1.07e+01 1.00e+01 1.24e+01 9.76e+00 9.64e+00
Std. 4.16e01 3.56e01 1.12e+00 6.49e01 5.22e01 2.49e01 6.60e01 7.85e01
+/=/++++++=−
f17 Mean 5.95e+03 6.29e+04 1.91e+04 1.71e+03 1.25e+05 1.37e+04 7.32e+05 3.84e+03
    Std. 4.17e+03 6.21e+04 3.03e+04 7.57e+02 1.20e+05 9.70e+03 4.57e+05 3.53e+03
    +/=/− + + + − + + +
f18 Mean 8.34e+02 7.06e+02 7.43e+01 1.03e+02 5.15e+02 2.70e+03 5.53e+03 2.16e+01
    Std. 1.22e+03 9.62e+02 1.90e+02 3.60e+01 6.94e+02 3.24e+03 5.52e+03 9.20e+00
    +/=/− + + + + + + +
f19 Mean 4.23e+00 1.11e+01 4.73e+00 2.33e+01 3.71e+00 6.54e+00 1.14e+01 3.45e+00
    Std. 1.23e+00 1.66e+01 1.20e+00 2.55e+01 7.18e−01 8.46e+00 1.84e+01 1.18e+00
    +/=/− + + + + + + +
f20 Mean 1.08e+02 1.37e+03 2.88e+01 7.80e+01 2.57e+01 3.32e+02 3.34e+04 1.71e+01
    Std. 1.42e+02 2.06e+03 2.19e+01 4.38e+01 2.85e+01 2.95e+02 1.77e+04 1.12e+01
    +/=/− + + + + + + +
f21 Mean 4.84e+03 5.74e+03 3.29e+03 8.50e+02 9.23e+03 9.11e+03 4.50e+05 3.63e+03
    Std. 4.42e+03 7.17e+03 5.36e+03 3.90e+02 7.52e+03 7.06e+03 3.82e+05 4.61e+03
    +/=/− + + = − + + +
f22 Mean 1.55e+02 2.10e+02 4.82e+02 7.17e+02 7.25e+01 1.52e+02 6.73e+02 2.90e+02
    Std. 8.61e+01 7.94e+01 2.09e+02 2.74e+02 6.28e+01 1.30e+02 1.78e+02 1.41e+02
    +/=/− − = + + − − +
f23 Mean 3.15e+02 3.15e+02 3.15e+02 3.15e+02 3.15e+02 3.15e+02 3.15e+02 3.15e+02
    Std. 1.38e−13 3.48e−13 6.13e−13 1.21e−12 1.24e−13 1.78e−13 3.27e−05 1.04e−13
    +/=/− = = = = = = =
f24 Mean 2.28e+02 2.30e+02 2.35e+02 2.42e+02 2.23e+02 2.36e+02 2.27e+02 2.27e+02
    Std. 5.41e+00 3.87e+00 7.27e+00 1.18e+01 8.57e−01 7.48e+00 3.12e+00 4.51e+00
    +/=/− = + + + − + =
f25 Mean 2.10e+02 2.12e+02 2.04e+02 2.20e+02 2.04e+02 2.04e+02 2.09e+02 2.04e+02
    Std. 2.07e+00 1.47e+00 1.19e+00 6.22e+00 6.64e−01 8.41e−01 4.60e+00 7.80e−01
    +/=/− + + = + = = +
f26 Mean 1.12e+02 1.56e+02 1.00e+02 1.56e+02 1.00e+02 1.00e+02 1.42e+02 1.08e+02
    Std. 3.31e+01 5.05e+01 9.32e−02 5.04e+01 3.99e−02 7.14e−02 6.27e+01 2.76e+01
    +/=/− = + = + − = +
f27 Mean 4.36e+02 4.29e+02 4.28e+02 8.61e+02 3.02e+02 4.14e+02 6.08e+02 3.34e+02
    Std. 5.74e+01 6.28e+01 5.93e+01 3.11e+02 7.08e+00 4.31e+01 1.42e+02 3.49e+01
    +/=/− + + + + − + +
f28 Mean 9.12e+02 9.19e+02 9.56e+02 2.60e+03 7.94e+02 8.65e+02 1.22e+03 7.93e+02
    Std. 4.37e+01 6.51e+01 6.23e+01 7.61e+02 3.23e+01 6.06e+01 4.64e+02 2.60e+01
    +/=/− + + + + = + +
f29 Mean 6.83e+02 7.84e+02 4.80e+02 7.18e+02 1.32e+03 3.48e+05 1.02e+06 6.32e+02
    Std. 2.64e+02 2.69e+02 2.73e+02 1.26e+02 2.39e+02 1.73e+06 2.82e+06 1.96e+02
    +/=/− = + − = + + +
f30 Mean 1.96e+03 2.05e+03 1.11e+03 2.14e+03 8.10e+02 1.26e+03 3.17e+03 1.62e+03
    Std. 6.22e+02 5.50e+02 3.00e+02 7.18e+02 1.69e+02 4.43e+02 8.42e+02 7.04e+02
    +/=/− + + − + − = +
123
Differential evolution with Gaussian mutation and dynamic parameter adjustment
Table 4 Comparative results on functions f16–f30 with D=50

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f1  Mean 1.81e+05 7.59e+04 4.26e+05 6.96e+04 2.96e+06 5.85e+05 2.52e+06 9.52e+05
    Std. 7.78e+04 3.67e+04 1.89e+05 3.61e+04 1.02e+06 2.12e+05 1.00e+06 3.05e+05
    +/=/− − − − − + − +
f2  Mean 4.72e+03 4.03e+03 2.27e+01 1.37e−10 4.15e+03 3.27e+03 9.70e+03 1.75e+00
    Std. 4.07e+03 5.19e+03 3.58e+01 4.09e−10 2.76e+03 3.92e+03 9.99e+03 3.35e+00
    +/=/− + + + − + + +
f3  Mean 1.91e+01 3.88e−02 3.09e+01 2.70e−05 5.81e+02 5.67e+03 3.90e+04 1.45e+00
    Std. 2.32e+01 1.04e−01 3.14e+01 4.57e−05 4.24e+02 2.11e+03 1.59e+04 4.09e+00
    +/=/− + − + − + + +
f4  Mean 6.24e+01 6.26e+01 2.41e+01 1.50e+01 9.28e+01 7.40e+01 7.88e+01 4.37e+01
    Std. 3.62e+01 2.72e+01 3.31e+01 3.83e+01 3.51e+00 3.22e+01 3.09e+01 3.58e+01
    +/=/− + + − − + + +
f5  Mean 2.07e+01 2.01e+01 2.11e+01 2.04e+01 2.08e+01 2.11e+01 2.00e+01 2.00e+01
    Std. 3.48e−02 8.33e−03 5.23e−02 2.87e−02 3.49e−02 4.32e−02 1.06e−03 4.73e−06
    +/=/− + + + + + + +
f6  Mean 1.95e+01 2.54e+01 1.53e+01 4.19e+01 4.07e−02 4.77e+00 2.39e+01 4.97e+00
    Std. 3.73e+00 2.37e+00 2.82e+00 4.31e+00 1.34e−01 2.50e+00 6.37e+00 2.99e+00
    +/=/− + + + + − = +
f7  Mean 8.70e−03 2.16e−02 9.03e−03 6.24e−03 1.18e−16 4.60e−03 1.72e−02 6.57e−04
    Std. 9.17e−03 4.29e−02 9.96e−03 6.16e−03 1.29e−16 5.88e−03 2.44e−02 2.55e−03
    +/=/− + + + + = + +
f8  Mean 9.42e+00 6.63e−02 1.11e+02 2.58e+02 1.10e+01 4.24e+01 0.00e+00 1.57e+01
    Std. 3.62e+00 2.57e−01 2.01e+01 4.42e+01 4.33e+00 1.24e+01 0.00e+00 3.93e+00
    +/=/− − − + + − + −
f9  Mean 9.15e+01 1.04e+02 1.09e+02 2.95e+02 7.04e+01 7.81e+01 1.49e+02 6.76e+01
    Std. 1.15e+01 1.46e+01 2.81e+01 3.68e+01 1.92e+01 7.51e+01 3.34e+01 9.61e+00
    +/=/− + + + + = = +
f10 Mean 2.50e+00 3.05e+00 3.46e+03 3.37e+03 1.57e+02 9.45e+03 2.56e−03 2.03e+02
    Std. 1.07e+00 9.81e−01 1.13e+03 1.93e+03 8.15e+01 3.01e+03 6.98e−03 1.46e+02
    +/=/− − − + + = + −
f11 Mean 7.04e+03 4.07e+03 1.26e+04 5.65e+03 4.95e+03 1.33e+04 4.23e+03 4.52e+03
    Std. 4.82e+02 4.54e+02 2.13e+03 6.00e+02 7.01e+02 2.71e+02 5.64e+02 6.60e+02
    +/=/− + = + + = + =
f12 Mean 7.82e−01 1.57e−01 3.15e+00 4.10e−01 1.34e+00 3.12e+00 8.73e−02 1.54e−01
    Std. 9.98e−02 1.51e−02 4.66e−01 3.78e−02 1.25e−01 4.71e−01 2.73e−02 6.97e−02
    +/=/− + = + + + + −
f13 Mean 4.17e−01 4.56e−01 5.06e−01 5.46e−01 3.46e−01 4.01e−01 4.28e−01 3.44e−01
    Std. 5.86e−02 6.73e−02 9.63e−02 1.12e−01 5.18e−02 5.08e−02 6.72e−02 6.10e−02
    +/=/− + + + + = + +
f14 Mean 3.14e−01 2.85e−01 3.50e−01 3.60e−01 2.48e−01 3.23e−01 3.48e−01 2.46e−01
    Std. 3.21e−02 1.76e−02 1.57e−01 1.30e−01 3.08e−02 2.99e−02 2.44e−01 2.42e−02
    +/=/− + + + + = + =
f15 Mean 1.44e+01 3.03e+01 1.85e+01 2.77e+01 8.38e+00 2.32e+01 1.28e+01 6.86e+00
    Std. 3.35e+00 7.10e+00 1.20e+01 4.11e+00 1.56e+00 1.20e+01 5.75e+00 1.81e+00
    +/=/− + + + + + + +
G. Sun et al.
Table 5 Comparative results on functions f16–f30 with D=50

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f16 Mean 1.98e+01 1.86e+01 2.22e+01 1.92e+01 2.00e+01 2.23e+01 1.74e+01 1.86e+01
    Std. 2.50e−01 4.04e−01 3.04e−01 4.33e−01 6.39e−01 1.34e−01 8.29e−01 9.38e−01
    +/=/− + = + + + + −
f17 Mean 2.23e+04 9.29e+04 5.48e+04 1.06e+04 3.94e+05 5.19e+04 6.64e+05 5.41e+04
    Std. 1.41e+04 7.40e+04 3.40e+04 4.95e+03 2.26e+05 2.49e+04 4.44e+05 2.85e+04
    +/=/− − + = − + = +
f18 Mean 4.02e+02 8.88e+02 1.50e+02 6.03e+02 3.53e+02 5.89e+02 2.01e+03 4.28e+01
    Std. 3.23e+02 7.82e+02 1.56e+02 1.20e+03 3.13e+02 7.44e+02 1.28e+03 3.36e+01
    +/=/− + + + + + + +
f19 Mean 1.36e+01 3.40e+01 9.41e+00 1.92e+01 9.58e+00 2.38e+01 1.48e+01 6.82e+00
    Std. 5.79e+00 2.14e+01 4.50e+00 2.18e+00 7.10e−01 2.03e+01 2.35e+00 1.34e+00
    +/=/− + + + + + + +
f20 Mean 2.39e+02 6.87e+02 4.80e+02 1.81e+02 2.16e+02 1.12e+03 6.45e+04 1.17e+02
    Std. 6.34e+01 1.37e+03 5.13e+02 4.31e+01 1.53e+02 2.91e+02 2.38e+04 1.45e+02
    +/=/− + + + + + + +
f21 Mean 2.65e+04 4.22e+04 2.93e+04 3.02e+03 2.62e+05 4.97e+04 8.82e+05 3.13e+04
    Std. 1.94e+04 4.93e+04 2.48e+04 1.81e+03 1.49e+05 3.74e+04 8.11e+05 3.80e+04
    +/=/− = + = − + + +
f22 Mean 4.22e+02 5.80e+02 1.36e+03 1.23e+03 2.57e+02 5.87e+02 1.26e+03 2.14e+02
    Std. 1.04e+02 1.07e+02 4.14e+02 3.62e+02 1.49e+02 3.61e+02 2.80e+02 1.64e+02
    +/=/− + + + + = + +
f23 Mean 3.44e+02 3.44e+02 3.44e+02 3.44e+02 3.44e+02 3.44e+02 3.44e+02 3.44e+02
    Std. 1.55e−13 2.36e−13 1.87e−13 5.02e−13 1.15e−13 1.04e−13 3.46e−12 8.73e−14
    +/=/− = = = = = = =
f24 Mean 2.72e+02 2.78e+02 2.81e+02 3.02e+02 2.64e+02 2.76e+02 2.58e+02 2.63e+02
    Std. 6.57e+00 3.95e+00 3.36e+00 1.13e+01 3.28e+00 4.35e+00 3.10e+00 6.01e+00
    +/=/− + + + + = + =
f25 Mean 2.10e+02 2.28e+02 2.08e+02 2.38e+02 2.10e+02 2.08e+02 2.15e+02 2.08e+02
    Std. 1.01e+01 2.16e+00 2.06e+00 4.72e+00 1.29e+00 1.83e+00 3.63e+00 2.23e+00
    +/=/− = + = + = = +
f26 Mean 1.54e+02 1.20e+02 1.00e+02 1.07e+02 1.00e+02 1.07e+02 1.75e+02 1.09e+02
    Std. 5.15e+01 4.12e+01 7.72e−02 2.57e+01 2.36e−02 2.57e+01 9.01e+01 3.51e+01
    +/=/− + + + + = = +
f27 Mean 7.70e+02 7.75e+02 7.05e+02 1.58e+03 3.22e+02 4.57e+02 1.05e+03 4.32e+02
    Std. 8.80e+01 1.53e+02 1.05e+02 1.48e+02 2.13e+01 6.65e+01 1.75e+02 3.60e+01
    +/=/− + + + + − = +
f28 Mean 1.42e+03 1.61e+03 1.45e+03 5.36e+03 1.09e+03 1.15e+03 2.12e+03 1.19e+03
    Std. 1.14e+02 2.46e+02 1.10e+02 1.03e+03 3.62e+01 6.78e+01 6.65e+02 5.11e+01
    +/=/− + + + + − = +
f29 Mean 1.05e+03 9.94e+02 8.36e+02 8.96e+02 1.89e+03 1.36e+03 2.48e+03 1.27e+03
    Std. 1.93e+02 1.17e+02 2.51e+02 2.04e+02 3.13e+02 3.54e+02 6.52e+02 2.25e+02
    +/=/− − − − − + = +
f30 Mean 1.06e+04 1.14e+04 9.89e+03 1.18e+04 8.99e+03 9.12e+03 1.24e+04 9.07e+03
    Std. 1.68e+03 1.71e+03 5.53e+02 9.07e+02 2.86e+02 4.41e+02 1.48e+03 4.86e+02
    +/=/− + + + + = = +
Table 6 Comparative results on functions f1–f15 with D=100

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f1  Mean 8.96e+05 4.70e+05 8.29e+06 4.38e+05 2.19e+07 4.73e+06 1.92e+07 9.59e+06
    Std. 1.45e+05 1.93e+05 3.20e+06 1.51e+05 5.50e+06 9.62e+05 6.89e+06 2.74e+06
    +/=/− − − = − + − +
f2  Mean 1.31e+04 6.38e+03 1.74e+04 2.80e−10 1.06e+04 1.27e+04 3.67e+04 1.23e+01
    Std. 8.76e+03 1.08e+04 2.13e+04 4.04e−10 6.32e+03 1.02e+04 3.80e+04 1.50e+01
    +/=/− + + + − + + +
f3  Mean 7.66e+01 8.65e+00 5.22e+03 2.51e−02 3.22e+03 5.46e+04 5.01e+04 3.45e+02
    Std. 6.52e+01 2.83e+00 4.07e+03 6.79e−02 1.34e+03 1.10e+04 2.24e+04 2.62e+02
    +/=/− − − + − + + +
f4  Mean 1.71e+02 1.62e+02 1.89e+02 1.35e+02 1.59e+02 1.69e+02 2.09e+02 1.60e+02
    Std. 4.05e+01 4.80e+01 3.38e+01 5.92e+01 2.57e+01 3.63e+01 3.14e+01 2.77e+01
    +/=/− = = + − = = +
f5  Mean 2.10e+01 2.03e+01 2.13e+01 2.07e+01 2.12e+01 2.13e+01 2.00e+01 2.00e+01
    Std. 2.30e−02 1.75e−02 2.31e−02 2.06e−02 2.46e−02 3.04e−02 1.18e−03 9.75e−07
    +/=/− + + + + + + +
f6  Mean 6.44e+01 7.23e+01 4.56e+01 1.03e+02 4.79e+00 8.30e+00 6.35e+01 6.36e+00
    Std. 5.35e+00 3.68e+00 7.45e+00 5.42e+00 2.59e+00 2.67e+00 1.07e+01 3.85e+00
    +/=/− + + + + = + +
f7  Mean 2.63e−03 6.39e−03 6.07e−03 3.61e−03 7.52e−11 2.00e−16 3.78e−03 3.77e−16
    Std. 6.20e−03 1.06e−02 9.15e−03 6.02e−03 6.49e−11 1.47e−16 5.88e−03 2.44e−16
    +/=/− + + + + + = +
f8  Mean 1.92e+01 4.17e+00 2.32e+02 5.84e+02 6.22e+01 2.42e+02 5.09e−09 3.92e+01
    Std. 5.89e+00 9.84e−01 4.44e+01 4.27e+01 8.16e+00 1.76e+02 3.75e−09 6.47e+00
    +/=/− − − + + + + −
f9  Mean 2.77e+02 2.72e+02 3.30e+02 6.57e+02 1.36e+02 8.17e+02 4.21e+02 1.45e+02
    Std. 2.84e+01 2.04e+01 1.65e+02 7.80e+01 2.61e+01 7.54e+01 1.23e+02 2.72e+01
    +/=/− + + + + = + +
f10 Mean 1.26e+02 1.49e+01 6.69e+03 1.01e+04 5.53e+03 2.69e+04 1.40e−05 5.73e+02
    Std. 1.97e+01 2.49e+00 1.25e+03 3.99e+03 7.98e+02 5.42e+02 3.37e−05 2.32e+02
    +/=/− − − + + + + −
f11 Mean 2.05e+04 1.06e+04 3.03e+04 1.41e+04 1.53e+04 3.03e+04 1.12e+04 1.22e+04
    Std. 5.94e+02 6.56e+02 5.24e+02 1.21e+03 2.10e+03 4.83e+02 1.04e+03 2.12e+03
    +/=/− + = + + + + =
f12 Mean 1.62e+00 2.91e−01 4.01e+00 6.80e−01 2.49e+00 4.05e+00 1.19e−01 3.18e−01
    Std. 1.51e−01 3.69e−02 1.74e−01 5.26e−02 1.30e−01 1.68e−01 3.19e−02 1.22e−01
    +/=/− + = + + + + −
f13 Mean 4.67e−01 4.85e−01 6.53e−01 5.90e−01 5.33e−01 5.40e−01 5.30e−01 4.71e−01
    Std. 3.88e−02 5.58e−02 8.10e−02 8.60e−02 4.26e−02 4.92e−02 6.22e−02 6.06e−02
    +/=/− = = + + + + +
f14 Mean 3.21e−01 2.88e−01 3.47e−01 3.67e−01 2.83e−01 3.34e−01 3.33e−01 2.80e−01
    Std. 1.97e−02 2.10e−02 3.77e−02 1.07e−01 2.40e−02 2.64e−02 1.97e−01 1.74e−02
    +/=/− + = + + = + +
f15 Mean 4.09e+01 6.61e+01 8.70e+01 6.95e+01 2.26e+01 7.43e+01 4.36e+01 1.59e+01
    Std. 1.12e+01 1.82e+01 1.80e+01 7.08e+00 5.45e+00 1.79e+00 2.14e+01 2.52e+00
    +/=/− + + + + + + +
Table 7 Comparative results on functions f16–f30 with D=100

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
f16 Mean 4.38e+01 4.14e+01 4.66e+01 4.21e+01 4.48e+01 4.65e+01 3.93e+01 4.13e+01
    Std. 3.77e−01 3.46e−01 2.57e−01 4.73e−01 3.32e−01 2.69e−01 1.07e+00 8.40e−01
    +/=/− + = + + + + −
f17 Mean 1.50e+05 3.27e+05 4.82e+05 8.97e+04 2.49e+06 1.92e+06 3.98e+06 1.01e+06
    Std. 4.37e+04 1.63e+05 2.07e+05 4.39e+04 1.09e+06 5.58e+05 1.99e+06 5.12e+05
    +/=/− − − − − + + +
f18 Mean 8.02e+02 6.84e+02 7.02e+02 1.50e+03 2.61e+02 5.46e+02 4.71e+03 5.74e+02
    Std. 8.63e+02 4.40e+02 7.69e+02 1.64e+03 3.12e+02 5.69e+02 4.68e+03 1.08e+03
    +/=/− + + + + − = +
f19 Mean 8.39e+01 1.01e+02 9.83e+01 8.95e+01 9.02e+01 8.51e+01 1.02e+02 8.91e+01
    Std. 3.13e+01 4.27e+01 1.86e+01 3.56e+01 8.56e−01 1.99e+01 1.95e+01 9.74e−01
    +/=/− = + + = + = +
f20 Mean 7.14e+02 1.09e+03 3.21e+03 4.54e+02 6.32e+03 1.83e+04 1.08e+05 5.71e+02
    Std. 2.09e+02 1.71e+03 1.17e+03 1.29e+02 1.63e+03 5.75e+03 4.45e+04 8.60e+01
    +/=/− + + + − + + +
f21 Mean 7.28e+04 1.34e+05 1.87e+05 2.96e+04 1.98e+06 1.29e+06 3.02e+06 3.85e+05
    Std. 5.04e+04 8.18e+04 7.22e+04 1.54e+04 6.00e+05 5.08e+05 1.16e+06 1.64e+05
    +/=/− − − − − + + +
f22 Mean 1.38e+03 1.41e+03 3.65e+03 2.69e+03 1.35e+03 4.03e+03 2.62e+03 1.16e+03
    Std. 2.94e+02 2.58e+02 9.48e+02 4.76e+02 3.96e+02 2.20e+02 5.07e+02 2.59e+02
    +/=/− + + + + + + +
f23 Mean 3.48e+02 3.48e+02 3.48e+02 3.48e+02 3.48e+02 3.48e+02 3.49e+02 3.48e+02
    Std. 4.07e−13 6.44e−12 5.33e−05 3.19e−12 1.11e−04 3.73e−12 7.64e−01 5.08e−13
    +/=/− = = = = = = +
f24 Mean 3.78e+02 3.99e+02 4.06e+02 4.52e+02 3.62e+02 3.78e+02 3.31e+02 3.64e+02
    Std. 6.30e+00 7.55e+00 8.35e+00 2.28e+01 1.69e+00 4.21e+00 1.05e+01 2.65e+00
    +/=/− + + + + − + −
f25 Mean 2.00e+02 2.57e+02 2.45e+02 2.88e+02 2.49e+02 2.45e+02 2.54e+02 2.39e+02
    Std. 6.26e−14 8.37e+00 9.57e+00 1.66e+01 4.19e+00 1.19e+01 1.17e+01 4.76e+00
    +/=/− − + = + + = +
f26 Mean 2.00e+02 2.00e+02 2.01e+02 2.00e+02 2.01e+02 2.00e+02 2.02e+02 2.01e+02
    Std. 2.49e−02 7.47e−03 1.10e−01 1.24e−02 2.09e−01 6.94e−02 4.20e−01 1.77e−01
    +/=/− − − = − = − +
f27 Mean 1.43e+03 1.37e+03 1.40e+03 3.12e+03 3.06e+02 4.43e+02 2.09e+03 3.74e+02
    Std. 1.01e+02 2.15e+02 1.63e+02 2.39e+02 1.24e+01 3.76e+01 3.55e+02 5.91e+01
    +/=/− + + + + − + +
f28 Mean 3.20e+03 4.72e+03 2.91e+03 1.39e+04 2.20e+03 1.95e+03 3.87e+03 2.08e+03
    Std. 3.34e+02 6.34e+02 3.15e+02 1.92e+03 4.63e+01 3.32e+02 8.04e+02 1.78e+02
    +/=/− + + + + + = +
f29 Mean 1.37e+03 1.45e+03 1.98e+03 1.23e+03 2.81e+03 5.68e+03 4.37e+03 1.91e+03
    Std. 2.51e+02 1.85e+02 4.05e+02 2.18e+02 4.80e+02 2.50e+03 8.66e+02 1.41e+02
    +/=/− − − = − + + +
f30 Mean 8.17e+03 8.59e+03 6.01e+03 7.45e+03 9.53e+03 6.64e+03 1.91e+04 8.90e+03
    Std. 9.43e+02 1.58e+03 9.47e+02 2.02e+03 1.25e+03 1.44e+03 6.97e+03 8.01e+02
    +/=/− − = − − = − +
Algorithm 1 The overall procedure of GPDE
1: Set the values of parameters NP, FR, V and T;
2: Initialize NP individuals with random positions via formula (1);
3: for (t = 1; t <= T; t++) do
4:   Compute the value of the scaling factor Ft in the tth generation via formula (12);
5:   Compute the value of the parameter CSt in the tth generation via formula (17);
6:   for (i = 1; i <= NP; i++) do
7:     Compute the crossover rate CRi,t of the ith individual in the tth generation via formula (10);
8:     if rand[0, 1] < CSt then
9:       Generate the new trial vector via formula (9), which includes the Gaussian mutation operator;
10:    else
11:      Generate the new trial vector via formula (11), which contains the mutation operator DE/rand-worst/1;
12:    end if
13:    Update the ith individual via the selection operator (8);
14:    Replace the best individual xbest by the new individual xi if xi is better than xbest;
15:  end for
16: end for
17: Output the position of the best individual as the global optimal solution.
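As a minimal illustration, the control flow of Algorithm 1 can be sketched in Python as below. The formulas (1), (8)–(12) and (17) are not reproduced in this section, so the helper functions (the cosine-shaped scaling factor, the constant mutation score and the Gaussian-distributed crossover rate centred at 0.5) are hypothetical stand-ins rather than the authors' exact update rules; only the Gaussian mutation step follows the description given later in the text (mean = the best of three random individuals, standard deviation = the distance between the other two).

```python
import numpy as np

# Hypothetical stand-ins for formulas (10), (12) and (17), which are not
# reproduced in this section; only the loop structure mirrors Algorithm 1.
def scaling_factor(t, FR):
    return 0.5 + 0.5 * np.cos(FR * t)   # assumed periodic shape for Ft

def mutation_score():
    return 0.5                          # placeholder for the cumulative score CSt

def gpde(f, bounds, NP=30, T=100, FR=0.05, V=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    D = len(lo)
    pop = rng.uniform(lo, hi, size=(NP, D))        # random initialization
    fit = np.apply_along_axis(f, 1, pop)
    best_idx = int(fit.argmin())
    best, best_fit = pop[best_idx].copy(), float(fit[best_idx])
    for t in range(1, T + 1):
        Ft = scaling_factor(t, FR)
        CSt = mutation_score()
        for i in range(NP):
            CR = float(np.clip(rng.normal(0.5, V), 0.0, 1.0))  # Gaussian CR (assumed mean 0.5)
            r = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
            if rng.random() < CSt:
                # Gaussian mutation: mean = best of the three random individuals,
                # std = distance between the other two.
                trio = sorted(r, key=lambda k: fit[k])
                mean = pop[trio[0]]
                std = np.abs(pop[trio[1]] - pop[trio[2]]) + 1e-12
                mutant = rng.normal(mean, std)
            else:
                a, b, c = pop[r[0]], pop[r[1]], pop[r[2]]
                mutant = a + Ft * (b - c)          # common DE mutation stand-in
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True          # at least one mutated dimension
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft = float(f(trial))
            if ft <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, ft
                if ft < best_fit:
                    best, best_fit = trial.copy(), ft
    return best
```

On a simple test such as a 2-D sphere function, this sketch steadily reduces the fitness error of the best individual while keeping the population within the search bounds.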
To be specific, SADE and JADE are two state-of-the-art DE variants; GDE and MGBDE are recently proposed variants that both adopt two different mutation operators, and MGBDE in particular employs a Gaussian mutation operator similar to that of GPDE; SinDE is an up-to-date DE variant that applies two sinusoidal functions to adjust the values of the mutation scaling factor and the crossover rate. These selected DE variants not only have outstanding performance, but also share some aspects with our proposed GPDE, which is why we take them as comparison objects. In addition, two well-performing state-of-the-art meta-heuristic algorithms, i.e., cooperative coevolving particle swarm optimization with random grouping (denoted by CCPSO2 for short; Li and Yao 2012) and collective resource-based artificial bee colony with decentralized tasking (denoted by C-ABC for short; Bose et al. 2014), are used to enrich the comparative experiment.
For all the aforementioned compared algorithms, the involved control parameters are kept the same as in their corresponding literature, except for the population size NP, which is set to D for the test functions and to 5D for the real-world problems. GPDE has only three user-specified control parameters: the population size NP, the periodic adjustment parameter FR and the variance V of the crossover rate. The values of FR and V are set to 0.05 and 0.1, respectively; NP always takes the same value as for the competitors; and all three values remain unchanged for all the adopted test functions and real-world problems. In addition, all the compared algorithms are run for 50 independent runs on every function, the mean results are used in the comparison, and the maximum allowable number of generations is set to 10000 for all the test functions and real-world problems.

Table 8 Comparative results on real-world problems rf1–rf4

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2 GPDE
rf1 Mean 1.05e+00 9.95e+00 6.35e+00 1.26e+01 4.68e−01 2.23e+00 1.24e+01 1.34e+00
    Std. 3.66e+00 4.74e+00 6.87e+00 4.41e+00 2.34e+00 4.62e+00 7.67e+00 3.72e+00
    +/=/− = + + + = = +
rf2 Mean 1.93e+00 1.79e+00 2.38e+00 1.95e+00 1.78e+00 2.39e+00 1.80e+00 1.63e+00
    Std. 8.64e−02 9.98e−02 9.90e−02 1.56e−01 1.69e−01 9.67e−02 1.96e−01 2.19e−01
    +/=/− + + + + + + +
rf3 Mean 1.76e−04 4.38e−09 4.63e−14 1.09e+01 7.07e+00 7.98e+00 2.66e+02 2.82e+00
    Std. 4.55e−04 1.49e−08 7.08e−14 6.47e+00 4.25e+00 6.99e+00 1.76e+02 5.11e+00
    +/=/− − − − + + + +
rf4 Mean 0.00e+00 0.00e+00 0.00e+00 0.00e+00 4.63e+00 9.83e+00 2.87e+02 0.00e+00
    Std. 0.00e+00 0.00e+00 0.00e+00 0.00e+00 7.68e+00 1.16e+01 3.13e+02 0.00e+00
    +/=/− = = = = + + +

Table 9 Statistical results on all test functions and real-world problems

Func. Metric SADE JADE GDE MGBDE SinDE C-ABC CCPSO2
30-D +/=/− 21/5/4 22/5/3 23/4/3 24/2/4 15/7/8 23/6/1 22/4/4
50-D +/=/− 22/3/5 21/4/5 23/4/3 22/1/7 14/12/4 19/10/1 22/4/4
100-D +/=/− 16/4/10 14/8/8 22/5/3 18/2/10 20/7/3 20/7/3 24/1/5
Real-world +/=/− 1/2/1 2/1/1 2/1/1 3/1/0 3/1/0 3/1/0 4/0/0
Total +/=/− 60/14/20 59/18/17 70/14/10 67/6/21 52/27/15 65/24/5 72/9/13

[Fig. 1 Convergence graphs (mean curves) for eight algorithms on functions f1, f2, f3, f4, f12 and f13 with D=30 over 50 independent runs; each panel plots the fitness error value (log scale) against the number of function evaluations for SADE, JADE, GDE, MGBDE, SinDE, C-ABC, CCPSO2 and GPDE.]

[Fig. 2 Convergence graphs (mean curves) for eight algorithms on functions f18, f19, f20, f21, f29 and f30 with D=30 over 50 independent runs.]

[Fig. 3 Convergence graphs (mean curves) for eight algorithms on functions f1, f2, f3, f4, f12 and f13 with D=50 over 50 independent runs.]

[Fig. 4 Convergence graphs (mean curves) for eight algorithms on functions f18, f19, f20, f21, f29 and f30 with D=50 over 50 independent runs.]

[Fig. 5 Convergence graphs (mean curves) for eight algorithms on functions f1, f2, f3, f4, f12 and f13 with D=100 over 50 independent runs.]

[Fig. 6 Convergence graphs (mean curves) for eight algorithms on functions f18, f19, f20, f21, f29 and f30 with D=100 over 50 independent runs.]

Table 10 Parameter configurations of different GPDEs

Par. GPDE-C1 GPDE-C2 GPDE-C3 GPDE-C4 GPDE-C5 GPDE-C6 GPDE-C7 GPDE-C8 GPDE-C9
FR 0.03 0.05 0.1 0.03 0.05 0.1 0.03 0.05 0.1
V 0.03 0.03 0.03 0.05 0.05 0.05 0.1 0.1 0.1
5.3 Comparative results
To evaluate the performances of the participant algorithms and provide a comprehensive comparison, we report the mean fitness error value f(xbest) − f(x*) (denoted by "Mean"), the corresponding standard deviation (Std.) and the statistical conclusion of the comparative results based on 50 independent runs, where xbest is the best obtained solution and x* is the known optimal solution. The statistical conclusions are based on the Wilcoxon rank sum test, conducted at the 0.05 significance level, to assess the significance of the performance difference between GPDE and each competitor. We mark the three kinds of statistical significance cases with "+," "=" and "−" to indicate that GPDE is significantly better than, similar to, or worse than the corresponding competitor, respectively. The comparative results of the test functions with different dimensions are summarized in Tables 2, 3, 4, 5, 6, 7 and 8, and the best results are indicated in boldface to highlight the best algorithm for each test function. Moreover, the numbers of the three cases (+/=/−) obtained from the compared results are summarized in Table 9.
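The "+"/"="/"−" bookkeeping described above is straightforward to reproduce. The helper below is an illustrative sketch (not the authors' code) that marks one GPDE-versus-competitor comparison with SciPy's Wilcoxon rank-sum test at the 0.05 level, using synthetic run data.

```python
import numpy as np
from scipy.stats import ranksums

def mark(gpde_errors, competitor_errors, alpha=0.05):
    """Return '+', '=' or '-' from GPDE's point of view:
    '+' if GPDE is significantly better (smaller fitness error),
    '-' if the competitor is significantly better, '=' otherwise."""
    _, p = ranksums(gpde_errors, competitor_errors)
    if p >= alpha:
        return "="
    return "+" if np.mean(gpde_errors) < np.mean(competitor_errors) else "-"

rng = np.random.default_rng(1)
gpde_runs = rng.normal(1.0, 0.1, 50)   # synthetic fitness errors over 50 runs
worse_runs = rng.normal(5.0, 0.1, 50)
print(mark(gpde_runs, worse_runs))     # prints "+"
```

Summing the per-function marks over all functions then yields exactly the +/=/− counts reported in Table 9.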
Table 9 shows that GPDE obtains the best overall performance among the eight compared algorithms. In detail, over the 94 functions, GPDE performs better than SADE, JADE, GDE, MGBDE, SinDE, C-ABC and CCPSO2 on 60, 59, 70, 67, 52, 65 and 72 functions, and loses on only 20, 17, 10, 21, 15, 5 and 13 functions, respectively. Moreover, GPDE outperforms its competitors on every adopted dimension of the test functions and never has the worst "Mean" on any of the 94 functions, which means that GPDE is robust and thus a reliable algorithm for handling various problems with different dimensions.
In addition, to observe the convergence characteristics of the compared algorithms, we select 36 functions with different dimensions and plot their convergence graphs, based on the mean values over 50 runs, in Figs. 1, 2, 3, 4, 5 and 6. Clearly, GPDE has an excellent convergence rate. In conclusion, the experimental results and convergence graphs demonstrate that GPDE performs significantly better than the other seven compared algorithms.
5.4 Robustness analysis of control parameters
Generally speaking, the involved control parameters may have important effects on algorithmic performance, so it is good news for users if the control parameters are robust. To verify the robustness of the control parameters in GPDE, we compare GPDE variants with the different parameter configurations listed in Table 10. Note that we only evaluate the robustness of the periodic adjustment parameter FR and the variance V of the crossover rate, because the population size NP usually has no obvious effect on the performance of DE.
Since most of the results obtained by the different GPDE configurations are very close to each other, we select only 36 test functions with different dimensions, whose results show relatively clear differences, to reveal the robustness of the control parameters FR and V via convergence graphs. The convergence graphs of the 36 selected functions obtained by the GPDEs with different parameter configurations are plotted in Figs. 7, 8, 9, 10, 11 and 12.
Three observations can be drawn from Figs. 7, 8, 9, 10, 11 and 12. First, the results of the GPDEs with different parameter configurations show no obvious fluctuation, which implies that the control parameters of GPDE are robust. Second, the variance V of the crossover rate has a slightly bigger influence on the performance of GPDE than the periodic adjustment parameter FR, because V has a more direct effect on the population diversity than FR. Finally, within a prescribed limit, a bigger value of V leads to a better result, because a bigger value of V often corresponds to better population diversity.
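The last point can be illustrated numerically: drawing crossover rates from a Gaussian with a larger variance V produces a wider spread of CR values across individuals, and hence more heterogeneous search behaviour. The snippet below is a minimal sketch under the assumption that CR is sampled around a mean of 0.5 and clipped to [0, 1]; formula (10) itself is not reproduced in this section.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cr(V, n=10000, mean=0.5):
    # Crossover rates drawn from a Gaussian with variance V (assumed mean 0.5),
    # clipped to the valid range [0, 1].
    return np.clip(rng.normal(mean, np.sqrt(V), n), 0.0, 1.0)

# Spread of crossover rates for the three V values used in Table 10.
spread = {V: float(sample_cr(V).std()) for V in (0.03, 0.05, 0.1)}
assert spread[0.03] < spread[0.05] < spread[0.1]
```

The monotonically increasing spread matches the observation that a larger V maintains more population diversity.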
6 Conclusions
Differential evolution is an excellent evolutionary algorithm for global numerical optimization, but it is not completely free from the problems of premature convergence and stagnation. In order to alleviate these problems and enhance DE, we propose a new variant called GPDE. In GPDE, a novel Gaussian mutation operator, which takes the position of the best individual among three randomly selected individuals as its mean and the distance between the other two as its standard deviation, and a modified common mutation operator are applied to cooperatively generate the mutant vectors. Moreover, the scaling factor adopts a cosine function to
[Fig. 7 Convergence graphs (mean curves) for the GPDE with different parameter configurations (GPDE-C1 to GPDE-C9) on functions f1, f2, f3, f4, f12 and f13 with D=30 over 50 independent runs; each panel plots the fitness error value (log scale) against the number of function evaluations.]
[Fig. 8 Convergence graphs (mean curves) for the GPDE with different parameter configurations (GPDE-C1 to GPDE-C9) on functions f18, f19, f20, f21, f29 and f30 with D=30 over 50 independent runs.]
[Fig. 9 Convergence graphs (mean curves) for the GPDE with different parameter configurations (GPDE-C1 to GPDE-C9) on functions f1, f2, f3, f4, f12 and f13 with D=50 over 50 independent runs.]
[Fig. 10 Convergence graphs (mean curves) for the GPDE with different parameter configurations (GPDE-C1 to GPDE-C9) on functions f18, f19, f20, f21, f29 and f30 with D=50 over 50 independent runs.]
[Plot panels for functions f1, f2, f3, f4, f12 and f13: Fitness Error Value (log scale) versus Function Evaluations up to 10^6, one mean curve per configuration GPDE-C1 through GPDE-C9]
Fig. 11 Convergence graphs (mean curves) for the GPDE with different parameter configurations on functions f1, f2, f3, f4, f12 and f13 with D = 100 over 50 independent runs
[Plot panels for functions f18, f19, f20, f21, f29 and f30: Fitness Error Value (log scale) versus Function Evaluations up to 10^6, one mean curve per configuration GPDE-C1 through GPDE-C9]
Fig. 12 Convergence graphs (mean curves) for the GPDE with different parameter configurations on functions f18, f19, f20, f21, f29 and f30 with D = 100 over 50 independent runs
adjust its value periodically, which offers a potential advantage in balancing the exploration and exploitation abilities, while the crossover rate employs a Gaussian function to produce its value dynamically, which helps regulate the population diversity. The performance of GPDE is evaluated on the IEEE CEC-2014 test suite, which contains 30 test functions, and on four real-world problems, against seven remarkable meta-heuristic algorithms; the obtained results show that GPDE performs much better than the seven compared algorithms. In addition, the parameter analysis indicates that the control parameters involved in GPDE are robust.
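The two parameter-adjustment ideas summarized above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulas: the sine form, oscillation count, and the mean and standard deviation of the Gaussian are assumed placeholder values.

```python
import math
import random

def scaling_factor(gen, max_gen, base=0.5, amplitude=0.4, cycles=5):
    """Periodic scaling factor F: oscillates around `base`, so the search
    alternates between exploration (large F) and exploitation (small F).
    The sine shape and constants are illustrative only."""
    return base + amplitude * math.sin(2 * math.pi * cycles * gen / max_gen)

def crossover_rate(mean=0.5, sigma=0.1):
    """Gaussian-distributed crossover rate CR, truncated to [0, 1].
    The fluctuation around the mean can help maintain population diversity."""
    cr = random.gauss(mean, sigma)
    return min(max(cr, 0.0), 1.0)

# Example: sample the control parameters for one generation.
F = scaling_factor(gen=10, max_gen=100)
CR = crossover_rate()
assert 0.1 <= F <= 0.9 and 0.0 <= CR <= 1.0
```

With these placeholder constants, F sweeps the interval [0.1, 0.9] several times over a run, while CR stays clustered near its mean but is resampled for every use, mirroring the fluctuant crossover rate described in the text.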
Acknowledgements The authors wish to thank the anonymous reviewers, whose valuable comments led to an improved version of the paper.
This work was supported by the National Natural Science Foundation
of China under Grant Nos. 71701187, 71771166 and 71471126, and
Research Project of Zhejiang Education Department under Grant No.
Y201738184, and High Performance Computing Center of Tianjin Uni-
versity, China.
Compliance with ethical standards
Conflict of interest All the authors declare that they have no conflict of interest.
Human and animal rights This article does not contain any studies
with human participants or animals performed by any of the authors.
References
Biswas S, Kundu S, Das S (2015) Inducing niching behavior in dif-
ferential evolution through local information sharing. IEEE Trans
Evol Comput 19(2):246–263
Bose D, Biswas S, Vasilakos AW, Laha S (2014) Optimal filter design
using an improved artificial bee colony algorithm. Inf Sci 281:443–
461
Brest J, Greiner S, Bošković B, Mernik M, Žumer V (2006) Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans Evol Comput 10(6):646–657
Cai Y, Wang J (2013) Differential evolution with neighborhood and
direction information for numerical optimization. IEEE Trans
Cybern 43(6):2202–2215
Cai Y, Wang J (2015) Differential evolution with hybrid linkage
crossover. Inf Sci 320:244–287
Črepinšek M, Liu SH, Mernik M (2013) Exploration and exploitation in evolutionary algorithms: a survey. ACM Comput Surv 45(3):1–33
Cuevas E, Zaldívar D, Pérez-Cisneros M, Oliva D (2013) Block-
matching algorithm based on differential evolution for motion
estimation. Eng Appl Artif Intell 26:488–498
Das S, Suganthan PN (2011a) Differential evolution: a survey of the
state-of-the-art. IEEE Trans Evol Comput 15(1):4–31
Das S, Suganthan PN (2011b) Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Jadavpur University, Kolkata, India, and Nanyang Technological University, Singapore, Dec 2010
Das S, Konar A, Chakraborty UK, Abraham A (2009) Differential evo-
lution using a neighborhood based mutation operator. IEEE Trans
Evol Comput 13(3):526–553
Dorigo M, Blum C (2005) Ant colony optimization theory: a survey.
Theor Comput Sci 344:243–278
Draa A, Bouzoubia S, Boukhalfa I (2015) A sinusoidal differential evo-
lution algorithm for numerical optimisation. Appl Soft Comput
27:99–126
García-Martínez C, Lozano M, Herrera F, Molina D, Sánchez A (2008)
Global and local real-coded genetic algorithms based on parent-
centric crossover operators. Eur J Oper Res 185(3):1088–1113
Ghosh A, Das S, Chowdhury A, Giri R (2011) An improved differential
evolution algorithm with fitness-based adaptation of the control
parameters. Inf Sci 181(18):3749–3765
Goldberg D (1989) Genetic algorithms in search, optimization and
machine learning. Addison-Wesley, New York
Gong WY, Cai ZH (2014) Parameter optimization of PEMFC model
with improved multi-strategy adaptive differential evolution. Eng
Appl Artif Intell 27:28–40
Gong WY, Cai ZH, Ling CX, Li H (2011a) Enhanced differential evo-
lution with adaptive strategies for numerical optimization. IEEE
Trans Syst Man Cybern B Cybern 41(2):397–413
Gong WY, Fialho A, Cai ZH, Li H (2011b) Adaptive strategy selection
in differential evolution for numerical optimization: an empirical
study. Inf Sci 181(24):5364–5386
Han MF, Liao SH, Chang JY, Lin CT (2013) Dynamic group-based
differential evolution using a self-adaptive strategy for global opti-
mization problems. Appl Intell 39(1):41–56
Herrera F, Lozano M (2000) Gradual distributed real-coded genetic
algorithms. IEEE Trans Evol Comput 4(1):43–63
Idris I, Selamat A, Omatu S (2014) Hybrid email spam detection model
with negative selection algorithm and differential evolution. Eng
Appl Artif Intell 28:97–110
Islam SM, Das S, Ghosh S, Roy S (2012) An adaptive differential evo-
lution algorithm with novel mutation and crossover strategies for
global numerical optimization. IEEE Trans Syst Man Cybern B
Cybern 42(2):482–500
Karafotias G, Hoogendoorn M, Eiben AE (2015) Parameter control in
evolutionary algorithms: trends and challenges. IEEE Trans Evol
Comput 19(2):167–187
Kennedy J, Eberhart R, Shi Y (2001) Swarm intelligence. Morgan Kauf-
man, San Francisco
Lan Y, Liu Y, Sun G (2012) Modeling fuzzy multi-period production
planning and sourcing problem with credibility service levels. J
Comput Appl Math 231:208–221
Li XD, Yao X (2012) Cooperatively coevolving particle swarms for
large scale optimization. IEEE Trans Evol Comput 16(2):210–224
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and
evaluation criteria for the CEC 2014 special session and compe-
tition on single objective real-parameter numerical optimization.
Zhengzhou University, China, and Nanyang Technological Uni-
versity, Singapore
Lin L, Gen M (2009) Auto-tuning strategy for evolutionary algorithms:
balancing between exploration and exploitation. Soft Comput
13(2):157–168
Liu J, Lampinen J (2005) A fuzzy adaptive differential evolution algo-
rithm. Soft Comput 9(6):448–462
Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF (2011) Differen-
tial evolution algorithm with ensemble of parameters and mutation
strategies. Appl Soft Comput 11(2):1679–1696
Neri F, Tirronen V (2010) Recent advances in differential evolution: a
survey and experimental analysis. Artif Intell Rev 33(1–2):61–106
Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algo-
rithm with strategy adaptation for global numerical optimization.
IEEE Trans Evol Comput 13(2):398–417
Sarker R, Elsayed SM, Ray T (2014) Differential evolution with
dynamic parameters selection for optimization problems. IEEE
Trans Evol Comput 18(5):689–707
Storn R, Price K (1997) Differential evolution-a simple and efficient
heuristic for global optimization over continuous spaces. J Global
Optim 11(4):341–359
Sun G, Liu Y, Lan Y (2010) Optimizing material procurement plan-
ning problem by two-stage fuzzy programming. Comput Ind Eng
58:97–107
Sun G, Zhao R, Lan Y (2016) Joint operations algorithm for large-scale
global optimization. Appl Soft Comput 38:1025–1039
Sun G, Peng J, Zhao R (2017) Differential evolution with individual-
dependent and dynamic parameter adjustment. Soft Comput.
doi:10.1007/s00500-017-2626-3
Tang LX, Zhao Y, Liu JY (2014) An improved differential evolu-
tion algorithm for practical dynamic scheduling in steelmaking-
continuous casting production. IEEE Trans Evol Comput
18(2):209–225
Tang LX, Dong Y, Liu J (2015) Differential evolution with an individual-dependent mechanism. IEEE Trans Evol Comput 19(4):560–574
Wang S, Watada J (2012) A hybrid modified PSO approach to VaR-
based facility location problems with variable capacity in fuzzy
random uncertainty. Inf Sci 192(1):3–18
Wang Y, Cai Z, Zhang Q (2011) Differential evolution with compos-
ite trial vector generation strategies and control parameters. IEEE
Trans Evol Comput 15(1):55–66
Wang H, Rahnamayan S, Sun H, Omran MGH (2013) Gaussian bare-
bones differential evolution. IEEE Trans Cybern 43(2):634–647
Wang J, Liao J, Zhou Y, Cai Y (2014) Differential evolution enhanced
with multiobjective sorting based mutation operators. IEEE Trans
Cybern 44(12):2792–2805
Yang M, Li C, Cai Z, Guan J (2015) Differential evolution with auto-
enhanced population diversity. IEEE Trans Cybern 45(2):302–315
Yang G, Tang W, Zhao R (2017) An uncertain workforce planning
problem with job satisfaction. Int J Mach Learn Cybern 8(5):1681–
1693
Yu W, Shen M, Chen W, Zhan Z, Gong Y, Lin Y, Liu O, Zhang J (2014)
Differential evolution with two-level parameter adaptation. IEEE
Trans Cybern 44(7):1080–1099
Zhang J, Sanderson AC (2009) JADE: adaptive differential evolu-
tion with optional external archive. IEEE Trans Evol Comput
13(5):945–958
Zhang J, Avasarala V, Subbu R (2010) Evolutionary optimization of
transition probability matrices for credit decision-making. Eur J
Oper Res 200(2):557–567
Zhao J, Xu Y, Luo F, Dong Z, Peng Y (2014) Power system fault diag-
nosis based on history driven differential evolution and stochastic
time domain simulation. Inf Sci 275:13–29
Zhu W, Tang Y, Fang J, Zhang W (2013) Adaptive population tuning
scheme for differential evolution. Inf Sci 223:164–191
123
... The coronary arterial system consists the epicardial arteries (diameter ~ 5mm) 59 and the coronary microvasculature (diameter < 500μm) which hold 10% and 90% of 60 the total myocardial blood volume, responsible for approximate 25% and 75% of the 61 total coronary vascular resistance for coronary blood flow (CBF), respectively [1]. The 62 abnormality in epicardial arteries or/and coronary microvasculature system can lead 63 to insufficient CBF to meet myocardial metabolic demand, i.e., myocardial ischemia, 64 which has been listed as the leading cause for the global death [2]. 65 Obstructive coronary artery disease (CAD) and coronary microvascular dysfunc-66 tion (CMD) are two major causes of myocardial ischemia [3]. ...
... The minimum fit-548 ness value of the modified SSA is 0.0039, while that of SSA is 0.0235. Bernoulli chaotic variables adopted in our algorithm for the generation of the initial-554 ized populations are more ergodic and uniform ( Figure 10) than random variables 555 used in SSA [41], which can enhance the randomness of the initial position of spar-556 rows and show higher performance than random method in SSA [42,45,[60][61][62]. produce new solutions around a given position [63], which is beneficial to the algo-564 rithm to jump out the local optimum [45,64,65]. Bernoulli chaotic disturbance can 565 decrease positions of sparrows and prevent becomes dispersive [44]. ...
... In this DE variant, the multi-mutation strategies included "DE/rand/1", "DE/best/1", and designed mutation technique, and only one mutation strategy was selected to form the trial vector in each generation. In their algorithm GPDE, Sun et al. [48] designed a novel Gaussian mutation and improved common mutation schemes to adaptively produce new mutant individuals. Wang et al. [49] proposed a self-adaptive mutation DE algorithm based on PSO (DEPSO). ...
... Single strategy, design a new mutation strategy DE/lbest/1 [17], DE/current-to-pbest-w/1 [29], DE/current-to-gr_best/1 [32], DE/current-to-leader/1 [33], DE/current-to-cbest/1 [35], DE/neighbor-to-neighbor/1 [36], and the mutation strategy based on historical population [37], respectively. SaDE [13], SaM-JADE [25], EPSDE [26], ZEPDE [41], HHDE [42], NM [43], EFADE [45], CSDE [46], MSaDE [47], GPDE [48], DEPSO [49], a fitness-based adaptive DE proposed by Xia et al. [51], TLADE [52], and APSDE [53]. ...
Article
Full-text available
The differential evolution (DE) algorithm is a simple and efficient population-based evolutionary algorithm. In DE, the mutation strategy and the control parameter play important roles in performance enhancement. However, single strategy and fixed parameter are not universally applicable to problems and evolution stages with diverse characteristics; besides, the weakness of the advanced DE optimization framework, called selective-candidate framework with similarity selection rule (SCSS), is found by focusing on its single strategy and fixed parameter greedy degree (GD) setting. To address these problems, we mainly combine the multiple candidates generation with multi-strategy (MCG-MS) and the adaptive similarity selection (ASS) rule. On the one hand, in MCG-MS, two symmetrical mutation strategies, “DE/current-to-pbest-w/1” and designed “DE/current-to-cbest-w/1”, are utilized to build the multi-strategy to produce two candidate individuals, which prevents the over-approximation of the candidate in SCSS. On the other hand, the ASS rule provides the individual selection mechanism for multi-strategy to determine the offspring from two candidates, where parameter GD is designed to increase linearly with evolution to maintain diversity at the early evolution stage and accelerate convergence at the later evolution stage. Based on the advanced algorithm jSO, replacing its offspring generation strategy with the combination of MCG-MS and ASS rule, this paper proposes multi-strategy differential evolution algorithm with adaptive similarity selection rule (MSDE-ASS). It combines the advantages of two symmetric strategies and has an efficient individual selection mechanism without parameter adjustment. 
MSDE-ASS is verified under the Congress on Evolutionary Computation (CEC) 2017 competition test suite on real-parameter single-objective numerical optimization, and the results indicate that, of the 174 cases in total, it wins in 81 cases and loses in 30 cases, and it has the smallest performance ranking value, of 3.05. Therefore, MSDE-ASS stands out compared to the other state-of-the-art DEs.
... The underlying premise of this multifaceted exploration emphasizes the need for continual methodological enhancement, particularly when optimal solutions elude the grasp of existing algorithms. Various avenues are explored, spanning from hybridization methods [14][15][16] to the integration of fuzzy logic for dynamic parameter adaptation [17][18][19]. Fuzzy logic is an area of soft computing that involves the use of approximate rather than exact reasoning. This type of logic tries to build models of human reasoning that manifest its approximate, qualitative character. ...
... From the results obtained and analyzing the average in each set of experiments, we can see that, for Experiment 1, five improvements were achieved in Functions 1, 3, 10, 18, and 22; for Experiment 2, the method only achieved two improvements, in Functions 8, and 25. For Experiment 3, fourteen improvements were achieved in Functions 2,4,6,7,9,11,12,19,20,21,23,26,28,and 29, and, for Experiment 4, eight improvements were obtained in Functions 5,13,14,15,16,17,27,and 30. As Experiment 3 is the set of rules that obtains the most significant number of improvements in the results of the mathematical functions, these are taken as a basis for experimenting with other membership functions. ...
Article
Full-text available
The pursuit of continuous improvement across diverse processes presents a pressing challenge. Precision in manufacturing, efficient delivery route planning, and accurate diagnostics are imperative, prompting the exploration of innovative solutions. Nature-inspired algorithms offer a pathway for enhancing these processes. In this study, we address this challenge by dynamically adapting parameters in the Bird Swarm Algorithm using General Type-2 Fuzzy Systems, encompassing a range of rules and membership functions. Two complex case studies validate the effectiveness of our approach. The first evaluates Congress of Evolutionary Competition 2017 functions, while the second tackles the intricacies of Congress of Evolutionary Competition 2019 functions. Our methodology achieves an 97% improvement for Congress of Evolutionary Competition 2017 functions and a significant 70% enhancement for Congress of Evolutionary Competition 2019 functions. Notably, our results are benchmarked against the original method. Crucially, rigorous statistical analysis underscores the significant advancements facilitated by our proposed method. The comparison demonstrates clear and statistically significant improvements over the original approach. This study proves the marked impact of integrating General Type-2 Fuzzy Systems into the Bird Swarm Algorithm, presenting a promising avenue for addressing intricate optimization challenges in diverse domains.
... To comprehensively verify the effectiveness of DEGGDE, we set three different dimensionalities, namely, 30D, 50D, and 100D, for all the CEC'2017 benchmark problems. To make comparisons with DEGGDE, eleven representative and advanced DE variants are selected, namely, SHADE [45], GPDE [79], DiDE [80], SEDE [81], FADE [32], FDDE [27], TPDE [69], NSHADE [53], CUSDE [25], PFIDE [70], and EJADE [54]. ...
... In particular, GPDE assembles a newly designed Gaussian distribution-based mutation operator and "DE/rand-worst/1" to mutate each individual adaptively. Then, a cosine function is employed to sample F and a Gaussian distribution is used to sample CR dynamically in GPDE [79]. SEDE hybridizes three different mutation strategies to mutate individuals adaptively by controlling the proportion of individuals where each mutation strategy is performed to get the associated mutation vector [81]. ...
Article
Full-text available
Differential evolution (DE) has shown remarkable performance in solving continuous optimization problems. However, its optimization performance still encounters limitations when confronted with complex optimization problems with lots of local regions. To address this issue, this paper proposes a dual elite groups-guided mutation strategy called “DE/current-to-duelite/1” for DE. As a result, a novel DE variant called DEGGDE is developed. Instead of only using the elites in the current population to direct the evolution of all individuals, DEGGDE additionally maintains an archive to store the obsolete parent individuals and then assembles the elites in both the current population and the archive to guide the mutation of all individuals. In this way, the diversity of the guiding exemplars in the mutation is expectedly promoted. With the guidance of these diverse elites, a good balance between exploration of the complex search space and exploitation of the found promising regions is hopefully maintained in DEGGDE. As a result, DEGGDE expectedly achieves good optimization performance in solving complex optimization problems. A large number of experiments are conducted on the CEC’2017 benchmark set with three different dimension sizes to demonstrate the effectiveness of DEGGDE. Experimental results have confirmed that DEGGDE performs competitively with or even significantly better than eleven state-of-the-art and representative DE variants.
... Besides, an adaptive parameter control strategy is included to dynamically regulate the parameter setting of the algorithm to improve search efficacy. Sun et al. (2019) proposed a new mutation based on the Gaussian distribution. A new Gaussian mutation operator and a modified common mutation operator are used to produce new mutant vectors. ...
Article
Full-text available
Differential evolution (DE) is among the best evolutionary algorithms for global optimization. However, the basic DE has several shortcomings, like the slow convergence speed, and it is more likely to be stuck at local optima. Additionally, DE's performance is sensitive to its mutation strategies and control parameters for mutation and crossover. In this scope, we present in this paper three mechanisms to overcome DE limitations. First, two novel mutations called DE/mean-current/2 and DE/best-mean-current/2 are proposed and integrated in the DE algorithm, and they have both exploration ability and exploitation trend. On the other hand, to avoid being trapped in local minima of hard functions, a new exploration operator has been proposed called Weibull flight based on the Weibull distribution. Finally, new adapted control parameters based on the Weibull distribution are integrated. These parameters contribute to the optimization process by adjusting mutation scale and alleviating the parameter setting problem often encountered in various metaheuristics. The efficacy of the proposed algorithms called meanDE, MDEW, AMDE, and AMDEW is validated through intensive experimentations using classical tests, some challenging tests, the CEC2017, CEC2020, the most recent CEC2022, four constraint engineering problems, and the data clustering problem. Moreover, comparisons with several popular, recent, and high-performance optimization algorithms show a high effectiveness of the proposed algorithms in locating the optimal or near-optimal solutions with higher efficiency. The experiments clearly indicate the effectiveness of the new mutations compared to the standard DE mutations. Moreover, the proposed Weibull flight has a great capacity to deal with the hard composition functions of CEC benchmarks. Finally, the use of adapted control parameters for the mutation scale helps overcome the parameter setting problem commonly encountered in various metaheuristics.
... In the Gaussian variant of Differential Evolution (GPDE) [30], a novel approach combines a Gaussian mutation operator and a modified common mutation operator to create mutant vectors based on cumulative scores. A periodic function regulates the scaling factor for a balance between exploration and exploitation, while a Gaussian function controls the crossover rate, enhancing population diversity with fluctuating values. ...
Article
Full-text available
This work introduces a Partition Bound Particle Swarm Optimization (PB-PSO) algorithm to enhance convergence rates in analog circuit optimization. Two new parameters, ζ <sub xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">1</sub> and ζ2, are incorporated to adaptively update particle velocities based on iteration numbers. The parameter ζ <sub xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">1</sub> depends on the non-linear convergence factor (α) and the number of iterations, N. The results indicate that ζ <sub xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">1</sub> ’s optimal value occurs with α = 4. ζ <sub xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">2</sub> partitions iterations into two regions, aiding local and global search. The PB-PSO algorithm, implemented in Python, demonstrates higher convergence rates than existing methods, with successful designs verified through Cadence-Virtuoso circuit simulations. The proposed PB-PSO algorithm converges in 15 and 13 iterations for differential amplifier and two-stage op-amp respectively. For a case study of two-stage amplifier, it achieves a gain of 60.4 dB with a phase margin of 79.76°, meeting input specifications within constraints. The figure of merit was then evaluated using the obtained parameters, which turns out to be 0.275 V <sup xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">-2</sup> .
Chapter
Solving single objective real-parameter problem is still a challenging task. In this paper, an effective and efficient self-adaptation framework is proposed, called ToHDE, which is hybrid with CMA-ES to improve the performance. The algorithm uses two mutation strategies with linear weighted parameter to balance the exploration and exploitation. Moreover, a two-stage population size reduction and a local research are used to increase the capability of ToHDE. We evaluated the performance of ToHDE on the IEEE CEC2014 benchmark suite and compared it with six state-of-the-art peer DE variants. The statistical results show that ToHDE is competitive with the compared methods.
Article
Full-text available
The exploration of premium and new locations is regarded as a fundamental function of every evolutionary algorithm. This is achieved using the crossover and mutation stages of the differential evolution (DE) method. A best-and-worst position-guided novel exploration approach for the DE algorithm is provided in this study. The proposed version, known as “Improved DE with Best and Worst positions (IDEBW)”, offers a more advantageous alternative for exploring new locations, either proceeding directly towards the best location or evacuating the worst location. The performance of the proposed IDEBW is investigated and compared with other DE variants and meta-heuristics algorithms based on 42 benchmark functions, including 13 classical and 29 non-traditional IEEE CEC-2017 test functions and 3 real-life applications of the IEEE CEC-2011 test suite. The results prove that the proposed approach successfully completes its task and makes the DE algorithm more efficient.
Article
Full-text available
The present work proposes the study and development of a strategy that uses an optimization algorithm combined with pattern classifiers to identify short-circuit stator failures, broken rotor bars and bearing wear in three-phase induction motors, using voltage, current, and speed signals. The Differential Evolution, Particle Swarm Optimization, and Simulated Annealing algorithms are used to estimate the electrical parameters of the induction motor through the equivalent electrical circuit and the failure identification arises by variation of these parameters with the evolution of each fault. The classification of each type of failure is tested using Artificial Neural Network, Support Vector Machine and k-Nearest Neighbor. The database used for this work was obtained through laboratory experiments performed with 1-HP and 2-HP line-connected motors, under mechanical load variation and unbalanced voltage.
Article
Full-text available
Differential evolution (DE) is a powerful and versatile evolutionary algorithm for global optimization over continuous search space, whose performance is significantly influenced by its mutation operator and control parameters (population size, scaling factor and crossover rate). In order to enhance the performance of DE, we adopt a new variant of classic mutation operator, a gradual decrease rule for population size, an individual-dependent and dynamic strategy to generate the required values of scaling factor and crossover rate during the evolutionary process, respectively. In the proposed variant of DE (denoted by IDDE), the adopted mutation operator merges the superiority of two classic mutation operators (DE/best/2 and DE/rand/2) together, and the adjustment mechanism of control parameters applies the fitness value information of each individual and dynamic fluctuation rule, which can provide a better balance between the exploration ability and exploitation ability. To verify the performance of proposed IDDE, a suite of thirty benchmark functions is applied to conduct the simulation experiment. The simulation results demonstrate that the proposed IDDE performs significantly better than five state-of-the-art DE variants and other two evolutionary algorithms.
Article
Full-text available
To investigate the effect of employees’ job satisfaction on the firm’s workforce planning, this paper builds a multi-period uncertain workforce planning model with job satisfaction level, where the labor demands and operation costs are characterized as uncertain variables. The job satisfaction level is defined as the employees’ psychological satisfaction about overtime through prospect theory. The proposed uncertain model can be transformed into an equivalent deterministic form, which contains complex nonlinear constraints and cannot be solved by conventional optimization methods. Thus, a hybrid joint operations algorithm (JOA) integrated with LINGO software is designed to solve the proposed workforce planning problem. Consequently, several numerical experiments are conducted to compare our proposed JOA with a hybrid particle swarm optimization algorithm to verify the effectiveness of the JOA algorithm. The results demonstrate that the firm’s total operation cost increases with the employees’ job satisfaction level, the loss averse degree and outside firms’ overtime level, respectively. Meanwhile, the firm would overpay in bounded rational cases with job satisfaction, and the overpayment can be seen as the value of bounded rationality, which ensures the firm’s normal operation.
Article
A new heuristic approach for minimizing possiblynonlinear and non-differentiable continuous spacefunctions is presented. By means of an extensivetestbed it is demonstrated that the new methodconverges faster and with more certainty than manyother acclaimed global optimization methods. The newmethod requires few control variables, is robust, easyto use, and lends itself very well to parallelcomputation.
Article
Differential evolution (DE) is a well-known optimization algorithm that utilizes the difference of positions between individuals to perturb base vectors and thus generate new mutant individuals. However, the difference between the fitness values of individuals, which may be helpful to improve the performance of the algorithm, has not been used to tune parameters and choose mutation strategies. In this paper, we propose a novel variant of DE with an individual-dependent mechanism that includes an individual-dependent parameter (IDP) setting and an individual-dependent mutation (IDM) strategy. In the IDP setting, control parameters are set for individuals according to the differences in their fitness values. In the IDM strategy, four mutation operators with different searching characteristics are assigned to the superior and inferior individuals, respectively, at different stages of the evolution process. The performance of the proposed algorithm is then extensively evaluated on a suite of the 28 latest benchmark functions developed for the 2013 Congress on Evolutionary Computation special session. Experimental results demonstrate the algorithm’s outstanding performance.
Article
In the field of evolutionary algorithms (EAs), differential evolution (DE) has been the subject of much attention due to its strong global optimization capability and simple implementation. However, in most DE algorithms, the crossover operator ignores interactions between pairs of variables. That is, DE is linkage-blind, and problem-specific linkages are not utilized effectively to guide the search process. Furthermore, linkage learning techniques have been verified to play an important role in EA optimization. Therefore, to alleviate this linkage-blindness in DE and enhance its performance, a novel linkage utilization technique, called hybrid linkage crossover (HLX), is proposed in this study. HLX utilizes a perturbation-based method to automatically extract the linkage information of a specific problem and then uses this information to guide the crossover process. By incorporating HLX into DE, the resulting algorithm, named HLXDE, is presented. In order to evaluate the effectiveness of HLXDE, HLX is incorporated into six original DE algorithms, as well as several advanced DE variants. Experimental results demonstrate the high performance of HLX for the DE algorithms studied.
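A perturbation-based interaction check of the kind HLX builds on can be sketched as a non-separability test: variables i and j are considered linked when perturbing both together changes the objective by more than the sum of the two individual effects (an illustrative sketch, not the paper's actual HLX procedure):

```python
def detect_linkage(f, x, delta=1.0, tol=1e-9):
    """Flag variable pairs (i, j) whose joint perturbation effect on f
    differs from the sum of their individual effects, i.e. pairs on which
    f is not additively separable around the point x."""
    n = len(x)
    base = f(x)
    linked = set()
    for i in range(n):
        xi = list(x)
        xi[i] += delta
        di = f(xi) - base  # effect of perturbing variable i alone
        for j in range(i + 1, n):
            xj = list(x)
            xj[j] += delta
            dj = f(xj) - base  # effect of perturbing variable j alone
            xij = list(x)
            xij[i] += delta
            xij[j] += delta
            dij = f(xij) - base  # effect of perturbing both together
            if abs(dij - (di + dj)) > tol:
                linked.add((i, j))
    return linked
```

On `f(x) = x0*x1 + x2**2` this flags only the pair (0, 1); a fully separable sum of squares yields no links, which is exactly the information a linkage-aware crossover can exploit by recombining linked variables as a group.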
Article
More than a decade after the first extensive overview on parameter control, we revisit the field and present a survey of the state-of-the-art. We briefly summarize the development of the field and discuss existing work related to each major parameter or component of an evolutionary algorithm. Based on this overview, we observe trends in the area, identify some (methodological) shortcomings, and give recommendations for future research.
Article
In practical situations, it is very often desirable to detect multiple optimally sustainable solutions of an optimization problem. The population-based evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations to aid the parallel localized convergence of population members around different basins of attraction. This paper presents an improved information-sharing mechanism among the individuals of an evolutionary algorithm for inducing efficient niching behavior. The mechanism can be integrated with stochastic real-parameter optimizers relying on differential perturbation of the individuals (candidate solutions) based on the population distribution. Various real-coded genetic algorithms (GAs), particle swarm optimization (PSO), and differential evolution (DE) fit the example of such algorithms. The main problem arising from differential perturbation is the unequal attraction toward the different basins of attraction, which is detrimental to the objective of parallel convergence to multiple basins of attraction. We present our study through the DE algorithm owing to the highly random nature of its mutation and show how population diversity is preserved by modifying the basic perturbation (mutation) scheme through the use of random individuals selected probabilistically. By integrating the proposed technique with the DE framework, we present three improved versions of well-known DE-based niching methods. Through an extensive experimental analysis, a statistically significant improvement in the overall performance has been observed upon integrating our technique with the DE-based niching methods.
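One way to realize the "random individuals selected probabilistically" idea is to occasionally replace the base vector of DE/rand/1 mutation with a uniform sample from the population's bounding box, so trial vectors keep probing basins the population has started to abandon (a hypothetical sketch; the function name, `p_random` parameter and bounding-box sampling are my assumptions, not the paper's exact scheme):

```python
import random

def diversity_preserving_mutant(pop, i, F=0.5, p_random=0.5, seed=None):
    """DE/rand/1 mutation in which, with probability p_random, the base
    vector is replaced by a uniform sample from the population's bounding
    box, counteracting premature crowding around a single attractor."""
    rng = random.Random(seed)
    dim = len(pop[0])
    # three mutually distinct individuals, all different from the target i
    a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
    base = pop[a]
    if rng.random() < p_random:
        lows = [min(x[j] for x in pop) for j in range(dim)]
        highs = [max(x[j] for x in pop) for j in range(dim)]
        base = [rng.uniform(lo, hi) for lo, hi in zip(lows, highs)]
    return [base[j] + F * (pop[b][j] - pop[c][j]) for j in range(dim)]
```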
Article
This paper presents a new variant of the Differential Evolution (DE) algorithm called Sinusoidal Differential Evolution (SinDE). The key idea of the proposed SinDE is the use of new sinusoidal formulas to automatically adjust the values of the DE main parameters: the scaling factor and the crossover rate. The objective of using the proposed sinusoidal formulas is the search for a good balance between the exploration of unvisited regions of the search space and the exploitation of already found good solutions. By applying it to the recently proposed CEC-2013 set of benchmark functions, the proposed approach is statistically compared with classical DE, linearly parameter-adjusting DE and 10 other state-of-the-art metaheuristics. The obtained results prove the superiority of the proposed SinDE, which outperformed the other approaches especially on multimodal and composition functions.
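A sinusoidal parameter schedule in the spirit of SinDE might look like the following (an illustrative formula assuming both parameters oscillate around 0.5; the paper's exact configurations and frequency settings may differ):

```python
import math

def sinusoidal_schedule(gen, max_gen, freq=0.25):
    """Sinusoidal adjustment of (F, CR): both oscillate around 0.5 with an
    amplitude that grows linearly over the run, and the two waves are in
    antiphase so exploration and exploitation pressure alternate."""
    t = gen / max_gen  # normalized progress in [0, 1]
    F = 0.5 * (math.sin(2.0 * math.pi * freq * gen) * t + 1.0)
    CR = 0.5 * (math.sin(2.0 * math.pi * freq * gen + math.pi) * t + 1.0)
    return F, CR
```

Because `sin` is bounded by 1 and `t` by 1, both values stay inside [0, 1] for the whole run, while the oscillation periodically alternates between large-step exploratory phases and small-step exploitative ones.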