ORIGINAL ARTICLE
Hybridizing grey wolf optimization with neural network algorithm
for global numerical optimization problems
Yiying Zhang 1 · Zhigang Jin 1 · Ye Chen 1,2
Received: 19 January 2019 / Accepted: 19 October 2019 / Published online: 28 October 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019
Abstract
This paper proposes a novel hybrid algorithm, called grey wolf optimization with neural network algorithm (GNNA), for solving global numerical optimization problems. The core idea of GNNA is to make full use of the good global search ability of the neural network algorithm (NNA) and the fast convergence of the grey wolf optimizer (GWO). Moreover, both NNA and GWO are improved to boost their respective advantages. For NNA, an improved NNA is given that strengthens its exploration ability by discarding the transfer operator and introducing a random modification factor. For GWO, an enhanced GWO is presented, which adjusts the exploration rate based on reinforcement learning principles. The improved NNA and the enhanced GWO are then hybridized through a dynamic population mechanism. A comprehensive set of 23 well-known unconstrained benchmark functions is employed to examine the performance of GNNA against 13 metaheuristic algorithms. The comparisons suggest that the combination of the improved NNA and the enhanced GWO is very effective and that GNNA is more successful in both solution quality and computational efficiency.
Keywords Artificial neural networks · Reinforcement learning · Grey wolf optimizer · Numerical optimization
1 Introduction
In the real world, optimization problems can be found in almost all engineering fields. Solving an optimization problem means finding, among all feasible solutions of the given constrained space, an optimal solution that maximizes or minimizes the objective function. Optimization approaches can be broadly divided into two types: deterministic methods and metaheuristic methods. As conventional optimization approaches, deterministic methods apply specific mathematical principles at each iteration and may need additional information such as gradients, initial points and the Hessian matrix [1, 2]. Although deterministic methods are viable options for some simple and well-behaved optimization problems, they are not effective at solving complex optimization problems such as large-scale, multimodal and highly constrained engineering optimization problems [3, 4]. Metaheuristic algorithms are modern optimization methods that commonly combine defined rules with randomness to imitate natural phenomena, and they are proving to be better than conventional optimization methods at solving complex practical optimization problems.
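The contrast above can be illustrated with a small, hypothetical sketch (not from the paper): a deterministic gradient-based method needs the derivative of the objective, whereas a simple randomized search in the metaheuristic spirit uses only objective values and random perturbations.

```python
import random

def sphere(x):
    # Simple unimodal test objective: f(x) = sum(x_i^2), minimum 0 at the origin.
    return sum(v * v for v in x)

def gradient_descent(x, lr=0.1, steps=200):
    # Deterministic method: requires the gradient of the objective (here 2*x_i).
    for _ in range(steps):
        x = [v - lr * 2 * v for v in x]
    return x

def random_search(x, step=0.5, steps=2000, seed=0):
    # Metaheuristic-style method: uses only objective values plus randomness.
    rng = random.Random(seed)
    best, best_f = list(x), sphere(x)
    for _ in range(steps):
        cand = [v + rng.gauss(0, step) for v in best]
        f = sphere(cand)
        if f < best_f:  # greedy acceptance of improving candidates
            best, best_f = cand, f
    return best

x0 = [3.0, -2.0]
print(sphere(gradient_descent(x0)))  # near 0, but needed the gradient
print(sphere(random_search(x0)))     # near 0, gradient-free
```

On this smooth convex objective both approaches succeed, but only the randomized search would remain applicable if the objective were non-differentiable or multimodal, which is the setting the paper targets.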
Many metaheuristic algorithms have been developed over the last two decades. These algorithms can be roughly separated into the following four categories according to their type of inspiration:
1. Swarm intelligence algorithms. These algorithms are inspired by behaviours of animals and plants, such as the foraging process of bird flocks in particle swarm optimization (PSO) [5], the obligate brood parasitic behaviour of some cuckoo species in cuckoo search (CS) [6], and the echolocation behaviour of bats in bat
Corresponding author: Zhigang Jin, zgjin@tju.edu.cn
Yiying Zhang, zhangyiying@tju.edu.cn
Ye Chen, chenye1132@163.com
1 School of Electrical and Information Engineering, Tianjin University, 92 Weijin Road, Tianjin 300072, People's Republic of China
2 College of Applied Science and Technology, Hainan University, 58 People Avenue, Haikou 570228, People's Republic of China
Neural Computing and Applications (2020) 32:10451–10470
https://doi.org/10.1007/s00521-019-04580-4