J Glob Optim (2016) 64:17–32
DOI 10.1007/s10898-015-0270-y
Multi objective optimization of computationally
expensive multi-modal functions with RBF surrogates
and multi-rule selection
Taimoor Akhtar · Christine A. Shoemaker
Received: 31 October 2013 / Accepted: 15 January 2015 / Published online: 8 February 2015
© The Author(s) 2015. This article is published with open access at Springerlink.com
Abstract GOMORS is a parallel response surface-assisted evolutionary algorithm approach
to multi-objective optimization that is designed to obtain good non-dominated solutions
to black box problems with relatively few objective function evaluations. GOMORS uses
Radial Basis Functions to iteratively compute surrogate response surfaces as an approxi-
mation of the computationally expensive objective function. A multi objective search uti-
lizing evolution, local search, multi method search and non-dominated sorting is done on
the surrogate radial basis function surface because it is inexpensive to compute. A balance
between exploration, exploitation and diversification is obtained through a novel procedure
that simultaneously selects evaluation points within an algorithm iteration through different
metrics including Approximate Hypervolume Improvement, Maximizing minimum domain
distance, Maximizing minimum objective space distance, and surrogate-assisted local search,
which can be computed in parallel. The results are compared to ParEGO (a kriging surro-
gate method solving many weighted single objective optimizations) and the widely used
NSGA-II. The results indicate that GOMORS outperforms ParEGO and NSGA-II on prob-
lems tested. For example, on a groundwater PDE problem, GOMORS outperforms ParEGO
with 100, 200 and 400 evaluations for a 6 dimensional problem, a 12 dimensional problem
and a 24 dimensional problem. For a fixed number of evaluations, the differences in perfor-
mance between GOMORS and ParEGO become larger as the number of dimensions increase.
As the number of evaluations increase, the differences between GOMORS and ParEGO
become smaller. Both surrogate-based methods are much better than NSGA-II for all cases
considered.
Electronic supplementary material The online version of this article (doi:10.1007/s10898-015-0270-y)
contains supplementary material, which is available to authorized users.
T. Akhtar (✉) · C. A. Shoemaker
School of Civil and Environmental Engineering and School of Operations Research and Information
Engineering, Cornell University,
Hollister Hall, Ithaca, NY, USA
e-mail: ta223@cornell.edu
C. A. Shoemaker
e-mail: cas12@cornell.edu
Keywords Function approximation · Multi objective optimization · Radial basis
functions · Expensive optimization · Global optimization · Evolutionary optimization ·
Parallel · Metamodel
1 Introduction
Multi-objective optimization (MO) approaches involve a large number of function evalu-
ations, which makes it difficult to use MO in simulation-optimization problems where the
optimization is multi-objective and the nonlinear simulation is computationally expensive
and has multiple local minima (multi-modal). Many applied engineering optimization prob-
lems involve multiple objectives and the computational cost of evaluating each objective is
high (e.g., minutes to days per objective function evaluation by simulation) [12,32]. We focus
on special algorithms that aim to produce reasonably good results within a limited number
of expensive objective function evaluations.
Many authors (for instance, Deb et al. [5] and Zhang et al. [52]) have successfully
employed evolutionary strategies for solving multi-objective optimization problems. Even
with the improvement over traditional methods, these algorithms typically require many
objective function evaluations, which can be infeasible for computationally expensive prob-
lems. Added challenges to multi-objective optimization of expensive functions arise as the
dimensionality of the decision variables and objectives increases.
The use of iterative response surface modeling or function approximation techniques
inside an optimization algorithm can be highly beneficial in reducing time for computing
objectives for multi-objective optimization of such problems. Since the aim of efficient multi-
objective optimization is to identify good solutions within a limited number of expensive
function evaluations, approximating techniques can be incorporated into the optimization
process to reduce computational costs. Gutmann [15] introduced the idea of using radial
basis functions (RBF) [3] for addressing single objective optimization of computationally
expensive problems. Jin et al. [18] appears to be the first journal paper to combine a non-
quadratic response surface with a single objective evolutionary algorithm by using neural net
approximations. Regis and Shoemaker [36] were the first to use Radial Basis Functions (not
a neural net) to improve the efficiency of an evolutionary algorithm with limited numbers
of evaluations. Later they introduced a non-evolutionary algorithm Stochastic-RBF [37],
which is a very effective radial basis function-based method for single objective optimization
of expensive global optimization problems. These methods have been extended to include
parallelism [40]; high-dimensional problems [42]; constraints [38]; local optimization [49,
50]; integer problems [30,31] and other extensions [39,41]. Kriging-based methods have
also been explored for addressing single objective optimization problems [9,16,19]. Jones
et al. [19] introduced Efficient Global Optimization (EGO), which is an algorithm for single
objective global optimization within a limited budget of evaluations.
Response surface methods have also become popular for multi-objective optimization
problems, with kriging-surrogate techniques being the most popular. Various authors have
used kriging-based methods by extending the EGO framework for multi-objective optimiza-
tion of computationally expensive problems. For instance, Knowles [21] combined EGO with
Tchebycheff weighting to convert an MO problem into numerous single objective problems.
Tsang et al. [53] and Emmerich et al. [10] combined EGO with evolutionary search assis-
tance. Ponweiser et al. [33] and Beume et al. [2] explored the idea of maximizing expected
improvement in hypervolume. Authors have also explored the use of other function approxi-
mation techniques inside an optimization algorithm, including Radial Basis Functions (RBFs)
[13,20,25,45,51], Support Vector Machines [24,43] and Artificial Neural Networks [7]. Evo-
lutionary algorithms are the dominant optimization algorithms used in these methods. Some
papers also highlight the effectiveness of local search in improving performance of response
surface based methods [20,23,45].
Various authors have indicated that RBFs could be more effective than other approximation
methods in multi-objective optimization of computationally expensive problems with more
than 15 decision variables [8,29,43]. Several authors have employed RBFs for surrogate-
assisted multi-objective optimization of expensive functions, with a focus on problems with
more than 15 decision variables. For instance, Karakasis and Giannakoglou [20] employ
RBFs within an MOEA framework and use an inexact pre-evaluation phase (IPE) to select
a subset of solutions for expensive evaluations in each generation of an MOEA. They also
recommend the use of locally fitted RBFs for surrogate approximation. Georgopoulou and
Giannakoglou [13] build upon [20] by employing gradient based refinements of promising
solutions highlighted by RBF approximation during each MOEA generation. Santana et
al. [45] divide the heuristic search into two phases where the first phase employs a global
surrogate-assisted search within an MOEA framework, and rough set theory is used in the
second phase for local refinements.
This paper focuses on the use of RBFs and evolutionary algorithms for multi-objective
optimization of computationally expensive problems, where the number of function evalua-
tions is limited relative to the problem dimension (e.g., to a few hundred evaluations for the
example problems tested here). An added purpose of the investigation is to be able to solve
MO problems where the number of decision variables varies between 15 and 25.
To this effect we propose a new algorithm, GOMORS, that combines radial basis function
approximation with multi-objective evolutionary optimization, within the general iterative
framework of surrogate-assisted heuristic search algorithms. Our approach is different from
prevalent RBF based MO algorithms that use evolutionary algorithms [13,20,25,45,51]. Most
RBF based evolutionary algorithms employ surrogates in an inexact pre-evaluation phase
(IPE) in order to inexpensively evaluate child populations after every MOEA generation. By
contrast our proposed methodology employs evolutionary optimization within each iteration
of the algorithm framework to identify numerous potential points for expensive evaluations.
Hence, multiple MOEA generations evolve via surrogate-assisted search in each algorithm
iteration in GOMORS.
The novelty of the optimization approach is in the amalgamation of various evaluation
point selection rules in order to ensure that various value functions are incorporated in select-
ing some points for expensive evaluations from the potential points identified by surrogate-
assisted evolutionary search during each algorithm iteration. The combination of multiple
selection rules targets a balanced selection between exploration, exploitation and front diver-
sification. The selection strategy incorporates the power of local search and also ensures that
the algorithm can be used in a parallel setting to further improve its efficiency.
2 Problem description
Let $D$ (the domain space) be the unit hypercube, a subset of $\mathbb{R}^d$, and let $x$ be a variable in the
domain space, i.e., $x \in [0,1]^d$. Let the number of objectives in the multi-objective problem
equal $k$ and let $f_i(x)$ be the $i$th objective. $f_i$ is a function of $x$, with $f_i : D \to \mathbb{R}$ for $1 \le i \le k$.
The framework of the multi-objective optimization problem we aim to solve is as follows:

$$\text{minimize } F(x) = [f_1(x), \ldots, f_k(x)]^T \quad \text{subject to } x \in [0,1]^d \qquad (1)$$

Our goal for the multi-objective optimization problem is (within a limited number of
objective function evaluations) to find a set of Pareto-optimal solutions $P^* = \{x^*_i \mid x^*_i \in D, \ 1 \le i \le n\}$.
$P^*$ is defined via the following definitions:

Domination: A solution $x_1 \in D$ dominates another solution $x_2 \in D$ if and only if $f_i(x_1) \le f_i(x_2)$
for all $1 \le i \le k$, and $f_i(x_1) < f_i(x_2)$ for at least one $i \in \{1, \ldots, k\}$.

Non-domination: Given a set of solutions $S = \{x_i \mid x_i \in D, \ 1 \le i \le n\}$, a solution $x \in S$ is
non-dominated in $S$ if there does not exist a solution $x_j \in S$ which dominates $x$.

Pareto optimality: A candidate solution $x \in S$ which is non-dominated in $S$ is called a
globally Pareto-optimal solution if $S = D$, i.e., $S$ is the entire feasible domain space of the
defined problem.
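These definitions translate directly into a non-dominated filter, which the algorithm of Sect. 3 uses to maintain the evaluated front. The following is a minimal illustrative sketch in Python (our own code, not the authors' implementation; the function names are ours):

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a dominates f_b (minimization):
    f_a <= f_b in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def nondominated_mask(F):
    """Given an (n, k) array of objective vectors, return a boolean mask
    marking the points that are non-dominated within the set."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(F[j], F[i]):
                mask[i] = False
                break
    return mask

# Example: the third point is dominated by the first.
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 4.0]])
print(nondominated_mask(F))  # [ True  True False]
```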
3 Algorithm description
3.1 General framework
The proposed optimization algorithm is called Gap Optimized Multi-objective Optimization
using Response Surfaces (GOMORS). GOMORS employs the generic iterative setting of
surrogate-assisted algorithms in which the response surface model used to approximate the
costly objective functions is updated after each iteration. Multiple points are selected for
expensive function evaluation in each iteration, evaluated in parallel, and subsequently used
to update the response surface model. The algorithm framework is defined below:
Step 0 - Define Algorithm Inputs:
M - Maximum number of expensive function evaluations
r - The "gap radius" parameter used in Step 2.3
t - Number of expensive evaluations to be performed in each algorithm iteration

Step 1 - Initial Evaluation Points Selection: Select an initial set of points $\{x_1, \ldots, x_m\}$, where $x_i \in D$ for
$1 \le i \le m$, using an experimental design. Evaluate the objectives $F = [f_1, \ldots, f_k]$ at the selected $m$ points
via expensive simulations. Let $S_m = \{x_i \mid x_i \in D, \ i = 1, \ldots, m\}$ denote the initial set of evaluated points.
Let $P_m = \{x_i \in S_m \mid x_i \text{ is non-dominated in } S_m\}$ be the set of non-dominated points from $S_m$.

Step 2 - Iterative Improvement: Run the algorithm iteratively until the termination condition is satisfied:
While $m \le M$:

Step 2.1 - Fit/Update Response Surface Models: Fit or update a response surface model for each objective
based on the set of already evaluated simulation points $S_m$. Let $\hat{F}_m(x) = [\hat{f}_{m,1}(x), \ldots, \hat{f}_{m,k}(x)]$ be the
inexpensive approximate objective functions obtained by fitting response surface models on $S_m$.

Step 2.2 - Surrogate-Assisted Global Search: Use a multi-objective evolutionary algorithm (MOEA)
to solve the following optimization problem:

$$\text{minimize } \hat{F}_m(x) = [\hat{f}_{m,1}(x), \ldots, \hat{f}_{m,k}(x)]^T \qquad (2)$$

Let $\hat{P}^A_m = \{x_1, \ldots, x_{n_A}\}$ denote the solutions obtained by solving Equation (2), i.e., by utilizing an MOEA
to search on the response surfaces.

Step 2.3a - Identify Least Crowded Solution: Using the crowding distance calculation procedure
proposed by Deb et al. [5], identify the least crowded element of the expensively evaluated non-dominated
set $P_m$. Let $x_{crowd}$ be the least crowded element of $P_m$ (see the definition in Table 1).

Step 2.3b - Local Search: Use a multi-objective evolutionary algorithm in a small neighborhood of
$x_{crowd}$ to solve the following optimization problem:

$$\text{minimize } \hat{F}_m(x) = [\hat{f}_{m,1}(x), \ldots, \hat{f}_{m,k}(x)]^T \quad \text{subject to } (x_{crowd} - r) \le x \le (x_{crowd} + r) \qquad (3)$$

We call the problem solved in Equation (3) the "Gap Optimization Problem".
Let $\hat{P}^B_m = \{x_1, \ldots, x_{n_B}\}$ denote the solutions obtained by solving Equation (3), i.e., by utilizing an MOEA
for local search on the response surfaces, within a radius $r$ around the least crowded solution $x_{crowd}$.

Step 2.4 - Select Points for Expensive Function Evaluations: Evaluation points are then selected
from the sets $\hat{P}^A_m$ and $\hat{P}^B_m$ (from Equations (2) and (3), and also called candidate points) based on the rules
described in Sect. 3.4. Let $S_{curr} = \{x_1, \ldots, x_t\}$ be the set of $t$ points selected for expensive evaluation in the
current algorithm iteration (discussed in detail in Sect. 3.4).

Step 2.5 - Do Expensive Function Evaluations and Update the Non-Dominated Solution Set: Evaluate the costly
functions, $F$, at the selected evaluation points $S_{curr}$. Update $S_m$, i.e., add the newly evaluated
points to the set of already evaluated points: $m = m + t$ and $S_m = S_m \cup S_{curr}$. Update
$P_m$, i.e., compute $P_m = \{x_i \in S_m \mid x_i \text{ is non-dominated in } S_m\}$.

End Loop

Step 3 - Return Best Approximated Front: Return $P_M = \{x_i \in S_M \mid x_i \text{ is non-dominated in } S_M\}$ as an
approximation to the globally optimal solution set $P^* = \{x^*_i \mid x^*_i \text{ is non-dominated in } D\}$.
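Expressed in code, the framework above has the following shape. This is a structural sketch only, under our own naming: `latin_hypercube`, `fit_rbf`, `moea_minimize`, `least_crowded`, `select_by_rules` and `nondominated_mask` stand in for the components described in Sects. 3.2-3.4 and the online supplement, and are not the authors' implementation.

```python
import numpy as np

def gomors(f_expensive, d, M, r, m0, latin_hypercube, fit_rbf,
           moea_minimize, least_crowded, select_by_rules, nondominated_mask):
    """Structural sketch of GOMORS (Steps 1-3). All helper arguments are
    assumed components: latin_hypercube(n, d) -> (n, d) design in [0,1]^d,
    fit_rbf(S, F_vals) -> callable surrogate F_hat, moea_minimize(F_hat,
    lb, ub) -> candidate decision vectors, select_by_rules(...) -> points
    chosen by Rules 0-4 of Sect. 3.4."""
    # Step 1: initial experimental design, evaluated expensively.
    S = latin_hypercube(m0, d)
    F_vals = np.array([f_expensive(x) for x in S])
    while len(S) <= M:                                       # Step 2
        F_hat = fit_rbf(S, F_vals)                           # Step 2.1
        P_A = moea_minimize(F_hat, np.zeros(d), np.ones(d))  # Step 2.2
        mask = nondominated_mask(F_vals)                     # current P_m
        x_crowd = least_crowded(S[mask], F_vals[mask])       # Step 2.3a
        lb, ub = np.clip(x_crowd - r, 0, 1), np.clip(x_crowd + r, 0, 1)
        P_B = moea_minimize(F_hat, lb, ub)                   # Step 2.3b (gap)
        S_curr = select_by_rules(P_A, P_B, S, F_vals, F_hat) # Step 2.4
        # Step 2.5: the t expensive evaluations below can run in parallel.
        F_new = np.array([f_expensive(x) for x in S_curr])
        S, F_vals = np.vstack([S, S_curr]), np.vstack([F_vals, F_new])
    mask = nondominated_mask(F_vals)                         # Step 3: P_M
    return S[mask], F_vals[mask]
```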
The algorithm initiates with the selection of an initial set of points for costly evaluation
of the objective function set, $F$. Latin hypercube sampling, a space-filling experimental
design scheme proposed by McKay et al. [26], is used to select the initial evaluation
set. The iterative framework of GOMORS (Step 2) follows, and consists of three major seg-
ments: (1) building a response surface model to approximate the expensive objectives (Step
2.1), (2) applying an evolutionary algorithm to solve the multi-objective response surface
problems (Steps 2.2 and 2.3), and (3) selecting multiple points for expensive evaluation
from the solutions of the multi-objective response surface problems and evaluating them in
parallel (Step 2.4). The algorithm terminates after $M$ expensive evaluations, and the output is
a non-dominated set of solutions $P_M$, which is an approximation to the true Pareto solution set of
the problem, i.e., $P^*$. Sections 3.2–3.4 and the online supplement discuss details of the major
steps of the algorithm.
Table 1 Definitions of sets and variables

$F(x) = [f_1(x), \ldots, f_k(x)]$ : Expensive objectives for the multiple-objective optimization problem

$P^* = \{x^*_1, \ldots, x^*_n\}$ : Set of Pareto-optimal solutions given the objectives $F$

$S_m = \{x_1, \ldots, x_m\}$ : Set of points which have been evaluated via costly simulation up to iteration $m$ of the optimization algorithm

$\hat{F}_m(x) = [\hat{f}_{m,1}(x), \ldots, \hat{f}_{m,k}(x)]$ : $\hat{f}_{m,i}$ is the inexpensive surrogate approximation of the $i$th objective, $f_i$, generated using the set of points $S_m$

$P_m = \{x \in S_m \mid x \text{ is non-dominated in } S_m\}$ : The non-dominated solutions in $S_m$ based on expensive evaluations

$x_{crowd} = \arg\max_{x \in P_m} d(x)$ : The least crowded element in $P_m$, where $d(\cdot)$ is the crowding distance function of [6], computed from $(x_j, F(x_j))$ for all $x_j \in P_m$

$\hat{P}^A_m = \{x_1, \ldots, x_{n_A}\}$ : Candidate points obtained by Surrogate-Assisted Global Search on the approximate objective function set $\hat{F}_m$ (see Equation (2))

$\hat{P}^B_m = \{x_1, \ldots, x_{n_B}\}$ : Candidate points obtained by Gap Optimization of the approximate objectives $\hat{F}_m$ (see Equation (3))

$HV(P)$ : Hypervolume [1] of the objective space dominated by the objective vectors corresponding to the set $P$, within the region of $\mathbb{R}^k$ bounded by a reference vector $b$ (see Fig. 1 for an illustration)
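Table 1 defines $x_{crowd}$ through the crowding distance $d(\cdot)$ of Deb et al. [6]. A minimal sketch of that computation (our own illustrative code, not the authors') is:

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of Deb et al. for an (n, k) array of objective
    vectors of mutually non-dominated points; larger means less crowded."""
    n, k = F.shape
    dist = np.zeros(n)
    for j in range(k):
        order = np.argsort(F[:, j])
        fj = F[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary points
        span = fj[-1] - fj[0]
        if span > 0:
            for i in range(1, n - 1):
                dist[order[i]] += (fj[i + 1] - fj[i - 1]) / span
    return dist

# x_crowd = P_m[np.argmax(crowding_distance(F_of_P_m))], as in Table 1.
```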
[Figure 1: two objective-space panels, $f_2(x)$ vs. $f_1(x)$, showing the reference point, the non-dominated solutions $P_m$, the dominated (hypervolume) and uncovered regions, and the hypervolume improvement of a new point.]

Fig. 1 Visualization of (a) hypervolume and uncovered hypervolume and (b) hypervolume improvement
employed in Rule 1 and Rule 4 of Step 2.4 (discussed in Sect. 3.4). Maximization of hypervolume improve-
ment implies selection of the point for function evaluation that is predicted, per the RBF approximation, to
yield the maximum improvement in hypervolume coverage of the non-dominated set
3.2 Response surface modeling
The first major component of the iterative framework is the procedure used for approximating
the costly functions, $F$, by the inexpensive surrogates, $\hat{F}_m(x) = [\hat{f}_{m,1}, \ldots, \hat{f}_{m,k}]$. A response
surface model based on the evaluated points is generated in Step 2.1 of the algorithm and
subsequently updated after each iteration. For example artificial neural networks [35], Support
Vector Machines (SVM) [47], kriging [44], and radial basis functions (RBFs) [3,34] could
be employed to generate the response surface model.
While kriging has been used in MO [21,33,53], the number of parameters to be esti-
mated for kriging meta-models increases quickly with an increase in the number of decision
variables [16,17], and hence, re-evaluating kriging surrogates in each iteration of a kriging
based algorithm may itself become a computationally expensive step. Various authors [29,42]
including Manriquez et al. [8] have reported that kriging-based surrogate optimization is not
effective for high dimensional problems (approximately defined as problems with more than
15 decision variables). In [8] they demonstrate the relative effectiveness of RBF approxi-
mation in tackling such high dimensional problems. The GOMORS algorithm proposed in
this paper hence makes use of RBFs as the surrogate model for approximating the costly
functions, although other surrogate models could be used in the GOMORS strategy.
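As one concrete possibility, a cubic RBF interpolant with a linear polynomial tail (a standard choice in the RBF literature [3,34] and in the single objective methods of [15,37]) can be fit per objective by solving one linear system. The sketch below is illustrative, not the paper's implementation, and it assumes the evaluated points make the interpolation matrix nonsingular.

```python
import numpy as np

def fit_cubic_rbf(S, f):
    """Fit s(x) = sum_i lam_i * ||x - x_i||^3 + c0 + c^T x to one objective.
    S: (m, d) evaluated points; f: (m,) objective values at those points.
    Illustrative sketch; assumes the interpolation matrix is nonsingular."""
    m, d = S.shape
    Phi = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2) ** 3
    P = np.hstack([np.ones((m, 1)), S])          # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([f, np.zeros(d + 1)]))
    lam, c = coef[:m], coef[m:]

    def s_hat(x):
        r3 = np.linalg.norm(S - np.asarray(x), axis=1) ** 3
        return float(r3 @ lam + c[0] + c[1:] @ np.asarray(x))

    return s_hat

# One surrogate per objective gives F_hat(x) = [s_hat_1(x), ..., s_hat_k(x)].
```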
3.3 Evolutionary optimization
The surrogate optimization problems defined in Steps 2.2 (Surrogate-assisted global search)
and 2.3 (Gap Optimization) aim to find near-optimal fronts given that the expensive functions,
$F(x)$, are replaced by the corresponding response surface approximations, $\hat{F}_m(x)$
(see Table 1). In Steps 2.2 and 2.3, two different multi-objective optimization problems are
solved on the surrogate surface.
Steps 2.2 and 2.3 solve the MO problems depicted in Equations 2 and 3 respectively. Since
the objective functions of these problems are derived from the surrogate model, solving the
MO problems is a relatively inexpensive step. However, the MO problems of Equations 2 and
3 are not trivial and could potentially have non-linear and multi-modal objectives. Hence, we
employ evolutionary algorithms for MO optimization of surrogates in Steps 2.2 and 2.3.
Three algorithms were considered as potential alternatives for optimization of the surrogates
in Steps 2.2 and 2.3: NSGA-II [5], MOEA/D [52] and AMALGAM [48]. NSGA-II
drives the evolutionary search by ranking and archiving parent and child populations
according to non-dominated sorting. MOEA/D [52] uses aggregate functions and
simultaneously solves many single-objective Tchebycheff decompositions of the multi-objective
problem in each evolutionary generation. AMALGAM [48] is a multi-method evolutionary
algorithm which incorporates the search mechanics of various algorithms. Extensive computer
experiments on test problems were performed on GOMORS with each of NSGA-II, MOEA/D
and AMALGAM as the embedded evolutionary scheme (see Section C of the online supplement
for a detailed discussion of the experiments), and AMALGAM was identified as the best
performing embedded evolutionary algorithm (although the differences were small).
3.4 Expensive evaluation point selection: Step 2.4
Step 2.4 of the GOMORS algorithm determines which of the candidate points are evaluated
by the expensive objective functions, F(x), within an algorithm iteration. The candidate
points are obtained from Steps 2.2 and 2.3, and are denoted $\hat{P}^A_m$ and $\hat{P}^B_m$
respectively. As mentioned earlier, the selection of points for expensive evaluation is a critical
step in the algorithm because the expensive evaluations usually dominate the overall
computation time by far.
A balance between exploration [19], exploitation and diversification [6] is crucial in
selecting points for expensive evaluations from the candidate points $\hat{P}^A_m$ and
$\hat{P}^B_m$. Exploration
of the decision space aims at selecting points in unexplored regions of the decision space.
Exploitation aims at exploiting the inexpensive response surface approximations of Step 2.1
to assist in choosing appropriate points for expensive evaluations. Diversification strives to
ensure that the non-dominated evaluated points are spread out in the objective space.
GOMORS employs a multi-rule based strategy for selection of a small subset of the candidate
points ($\hat{P}^A_m$ and $\hat{P}^B_m$) for actual function evaluations (computing $F$ via simulation). The
various "rules" employed in the strategy target a balance between exploration, exploitation
and diversification. A detailed framework of Step 2.4 of the algorithm, giving an overview of
the selection rules, follows (the $i$th rule is referred to as Rule $i$; refer to Table 1 for
definitions of sets and symbols):

Details of Step 2.4 (in Section 3.1):
Inputs from previous steps: $(m, S_m, P_m, \hat{P}^A_m, \hat{P}^B_m)$
Evaluation points selection: Select $t = \sum_{i=0}^{4} t_i$ points for expensive evaluations via Rules 0-4, where $t_i$
points are generated using Rule $i$. Let $S_{curr}$ be the selected points. The Rules are listed below.
Apply Rule 0 - Random Sampling: Pick $t_0$ points for expensive function evaluation via random sampling
from a uniform distribution. This ensures exploration of the decision space.

Apply Rule 1 - Hypervolume Improvement: Use hypervolume improvement to choose $t_1$ points from the
candidate points $\hat{P}^A_m$. Let $HV(P_m)$ denote the hypervolume [1] of the objective space dominated by the vectors
in the set $P_m$ (see Figure 1a). A new point for expensive function evaluation is selected as follows (see Table 1
for definitions of sets and variables):

$$x^* = \arg\max_{x_j \in \hat{P}^A_m} \left[ HV(P_m \cup \{x_j\}) - HV(P_m) \right] \qquad (4)$$

Equation (4) exploits the RBF approximation and chooses the candidate point that maximizes the predicted
improvement in hypervolume in the objective space, as illustrated in Fig. 1b.
Apply Rule 2 - Maximize Minimum Domain Euclidean Distance: Let $x_{i,A} \in \hat{P}^A_m$ and $x_j \in S_m$. Choose
$t_2$ points for expensive evaluation such that the minimum distance of each chosen point from the already
evaluated points is maximized, i.e.:

$$x^* = \arg\max_{x_{i,A} \in \hat{P}^A_m} \; \min_{x_j \in S_m} \left\| x_{i,A} - x_j \right\| \qquad (5)$$

Apply Rule 3 - Maximize Minimum Objective Space Euclidean Distance: Choose $t_3$ points such that the
minimum distance of each chosen point from the already evaluated points, measured in the objective function
space, is maximized, i.e.:

$$x^* = \arg\max_{x_{i,A} \in \hat{P}^A_m} \; \min_{x_j \in S_m} \left\| \hat{F}_m(x_{i,A}) - F(x_j) \right\| \qquad (6)$$

Rule 2 and Rule 3 are a hybrid of exploration and exploitation.

Apply Rule 4 - Hypervolume Improvement among "Gap Optimization" Candidates: Use Rule 1 to select
$t_4$ points from the candidate points obtained via Gap Optimization, i.e., $\hat{P}^B_m$. Select points for expensive
function evaluation as follows (see Table 1 for definitions of sets and variables):

$$x^* = \arg\max_{x_j \in \hat{P}^B_m} \left[ HV(P_m \cup \{x_j\}) - HV(P_m) \right] \qquad (7)$$

The difference between (4) and (7) is that $\hat{P}^A_m$ in (4) comes from Step 2.2 while $\hat{P}^B_m$ in (7) comes from Step 2.3.
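For two objectives, the hypervolume in Equations (4) and (7) reduces to a staircase sum, so Rule 1 (and Rule 4, with $\hat{P}^B_m$ as candidates) can be sketched as below. This is our own illustrative code; for $k > 2$ objectives a general hypervolume algorithm such as those in [1] is needed, and the candidate objective values are the RBF predictions $\hat{F}_m(x_j)$.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a set of 2-objective (minimization) points
    relative to the reference point ref, via the sorted staircase."""
    pts = front[np.argsort(front[:, 0])]
    hv, y = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < y:                        # point extends the staircase
            hv += (ref[0] - f1) * (y - f2)
            y = f2
    return hv

def select_by_hv_improvement(cand_F, front_F, ref):
    """Rules 1 and 4: index of the candidate (predicted objectives) that
    maximizes HV(P_m plus candidate) - HV(P_m), as in Eqs. (4) and (7)."""
    hv0 = hypervolume_2d(front_F, ref)
    gains = [hypervolume_2d(np.vstack([front_F, [c]]), ref) - hv0
             for c in cand_F]
    return int(np.argmax(gains))
```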
The number of points selected for expensive evaluation via each rule, i.e., $t_i$, may
vary. However, we performed all our computer experiments with $t_i = 1$ for Rules 1-4. A
point is selected via Rule 0 in each algorithm iteration with probability 0.1. Hence, either
four or five points are selected for expensive function evaluation in each iteration of the
algorithm. The expensive evaluations are performed in parallel to further speed up
the algorithm.

In order to assess the individual effectiveness of the rules, we performed computer exper-
iments on eleven test problems with the exclusive use of each rule (i.e., one point selected
via a single rule in all algorithm iterations). Furthermore, we also performed experiments in
which GOMORS cycles between the rules in successive iterations. These selection strategies
were compared against the multi-rule strategy for simultaneous selection of evaluation points
described above. Results of the computer experiments indicated that if the value of parallelization
is considered, the multi-rule selection strategy outperforms the other strategies employed in our
analysis (see Section D of the online supplement for details). However, if GOMORS is used
in a serial setting, i.e., one point is evaluated in each algorithm iteration, cycling between
rules or the exclusive use of Rule 3 is most beneficial (see Section D of the online supplement).
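Rules 2 and 3 are straightforward max-min selections over pairwise Euclidean distances; a compact illustrative sketch (our own code, not the authors') is:

```python
import numpy as np

def select_maximin(cand, evaluated):
    """Index of the candidate whose minimum Euclidean distance to the
    evaluated set is largest (Eqs. (5) and (6)). For Rule 2, pass decision
    vectors (P_A and S_m); for Rule 3, pass the predicted objective vectors
    F_hat(P_A) and the evaluated objective values F(S_m)."""
    dists = np.linalg.norm(cand[:, None, :] - evaluated[None, :, :], axis=2)
    return int(np.argmax(dists.min(axis=1)))
```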
4 Test problems
In order to test the performance of GOMORS, computational experiments were performed
on various test functions and a groundwater remediation problem. Certain characteristics
of various problems can lead to inadequate convergence or poor distribution of points
on the Pareto front. These characteristics include high dimensionality, non-convexity and
non-uniformity of the Pareto front [6], the existence of multiple locally optimal Pareto
Table 2 Two MO variants, GWDA and GWDM, of the groundwater optimization problem (Sect. 4.1)

GWDA:
  minimize $f_1(X) = \sum_{t=1}^{T} C_t(X_t)$
  minimize $f_2(X) = \frac{1}{M} \sum_{m=1}^{M} s_{m,T}(X)$
  subject to $X \in [0,1]^{N \times T}$

GWDM:
  minimize $f_1(X) = \sum_{t=1}^{T} C_t(X_t)$
  minimize $f_2(X) = \max_{m=1,\ldots,M} s_{m,T}(X)$
  subject to $X \in [0,1]^{N \times T}$

$M$ is the total number of finite element grid nodes, $N$ is the total number of wells, $T$ is the total number of
management periods, $X_t$ is the pumping decision in management period $t$, $s_{m,T}$ is the contaminant concentration
at grid node $m$ at the end of the final management period $T$, and $C_t(X_t)$ is the cost of cleanup for the pumping
decision $X_t$
fronts [4], low density of solutions close to the Pareto front, and existence of compli-
cated Pareto sets (this implies that the Pareto solutions are defined by a complicated curve
in the decision space) [22]. We have employed eleven test problems in our experimental
analysis which incorporate the optimization challenges mentioned above. Five test prob-
lems are part of the ZDT test suite [4], while six were derived from the work done by
Li and Zhang [22]. Mathematical formulations and optimization challenges of the test
problems are discussed in detail in Section B.1 of the online supplement to this docu-
ment. These problems are collectively referred to as the synthetic test problems in subsequent
discussions.
4.1 Groundwater remediation design problem
Optimization problems pertaining to groundwater remediation models usually require solving
complex and computationally expensive PDE systems to find objective function values for a
particular input [27]. The groundwater remediation problem used in our analysis is based on
a PDE system which describes the movement and purification of contaminated groundwater
given a set of bioremediation and pumping design decisions [28]. Detailed description of the
problem is provided in Section B.2 of the online supplement. The decision variables of the
problem are the injection rates of the remediation agent at 3 different well locations during each
of the management periods. The input dimension ranges between 6 and 36 variables, depending
upon the number of management periods incorporated in the numerical computation
model.
The remediation optimization problem can be formulated as two separate multi-objective
optimization problems, called GWDA and GWDM (Table 2). In the GWDM formulation,
the aim is to minimize the maximum contaminant concentration at the end of the pumping
period along with the cost of remediation. The GWDA formulation instead aims to minimize
the average contaminant concentration, along with cost. The mathematical formulations of
GWDA and GWDM are given in Table 2.
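To make the two formulations in Table 2 concrete, the sketch below wraps a hypothetical simulator. `simulate_concentrations` and `pumping_cost` are placeholders standing in for the expensive PDE model of [28]; they are assumptions, not real functions.

```python
import numpy as np

def gwd_objectives(X, variant, simulate_concentrations, pumping_cost):
    """X: (T, N) pumping decisions in [0, 1] over T management periods and
    N wells. Returns [f1, f2] per Table 2. Both helper callables are
    hypothetical stand-ins for the expensive groundwater simulation:
    simulate_concentrations(X) -> s_{m,T} at the M grid nodes,
    pumping_cost(t, X_t) -> cleanup cost C_t(X_t) for period t."""
    f1 = sum(pumping_cost(t, X[t]) for t in range(X.shape[0]))
    s_T = np.asarray(simulate_concentrations(X))
    f2 = s_T.max() if variant == "GWDM" else s_T.mean()   # GWDA: average
    return np.array([f1, f2])
```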
5 Results and analysis
5.1 Experimental setup
We tested our algorithm on the test functions and groundwater problems discussed in the
previous section. The performance of GOMORS was compared against the Non-Dominated
Sorting Genetic Algorithm II (NSGA-II) proposed by Deb et al. [5] (discussed in Sect. 3.3) and
the kriging-based ParEGO algorithm proposed by Knowles [21]. All three algorithms are
quite different. ParEGO is a multi-objective version of EGO [19] in which the multi-objective
problem is converted into many single objective optimization problems through Tchebycheff
weighting [46]. One single objective problem is chosen at random from the predefined set of
decomposed problems and EGO is applied to it to select one point for evaluation
per algorithm iteration. ParEGO is not designed for high dimensional problems (more than
15 variables). GOMORS, on the other hand, embeds RBFs within an evolutionary framework,
selects multiple points (via the rules defined in Sect. 3.4) for simultaneous (parallel)
evaluation in each iteration, and is designed for both low and higher (15-25 decision variable)
dimensional problems.
Since the objective of GOMORS is to find good solutions to MO problems within a limited
function evaluation budget, our experiments were restricted to 400 function evaluations.
Since all algorithms compared are stochastic, ten optimization experiments were performed
for each algorithm, on each test problem, and results were compared via visual analysis of
fronts and a performance metric based analysis. A detailed description of the experimental
setup, including parameter settings for all algorithms and source code references for ParEGO
and NSGA-II, is provided in Section E of the online supplement.
The uncovered hypervolume metric [1] was used to compare the performance of various
algorithms. Uncovered hypervolume is the difference between the total feasible objective
space (defined by the reference and ideal points in Fig. 1a) and the objective space dominated
by the estimate of the Pareto front obtained by an algorithm. A lower value of the metric
indicates a better solution, and the ideal value is zero.
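For two objectives, this metric can be computed as the volume of the box between the ideal and reference points minus the hypervolume dominated by the front, e.g., using the staircase routine sketched in Sect. 3.4. A minimal sketch under that assumption:

```python
import numpy as np

def uncovered_hypervolume(front, ideal, ref, hypervolume_2d):
    """Total feasible objective-space box (ideal to ref) minus the region
    dominated by the front; lower is better, 0 is ideal. hypervolume_2d is
    the 2-objective staircase routine sketched earlier (an assumption, not
    the general algorithm of [1])."""
    ideal, ref = np.asarray(ideal, float), np.asarray(ref, float)
    total = float(np.prod(ref - ideal))
    return total - hypervolume_2d(np.asarray(front, float), ref)
```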
Results from the synthetic test problems were analyzed in combination through the
metric. Experiments were performed for each test problem with 8 decision variables,
16 decision variables and 24 decision variables to highlight performance differences of
GOMORS, ParEGO and NSGA-II with varying problem dimensions. Results of all syn-
thetic test problems were compiled for analysis by summing the metric values of each
of the ten optimization experiments performed on each test problem. Since the uncov-
ered hypervolume metric values obtained for each individual test problem and algorithm
combination can be considered independent random variables, their sum for a single algorithm
is another random variable whose distribution is the convolution of the individual distributions.
This convolution-based metric summarizes the overall performance of an algorithm on all
synthetic test problems and is used as the basis of our analysis
methodology.
Results from the two variants of the groundwater remediation problem (GWDA and
GWDM) were also analyzed through the uncovered hypervolume metric in a similar
manner. Experiments were performed with 6 decision variables, 12 decision variables
and 24 decision variables for each groundwater problem. Results from individual sub-
sequent experiments performed on each problem were summed to obtain a convoluted
metric analysis of the groundwater problems. The performance of GOMORS and ParEGO on
the GWDM test problem was further assessed through visual analysis of the non-dominated
solutions obtained from each algorithm (median solution). The numerical comparisons
are based on the number of objective function evaluations and hence do not incorporate
parallel speedup. ParEGO does not have a parallel implementation, so its wall-clock
time is much longer than that of GOMORS. The comparisons here focus on the number
of function evaluations and an estimate of total CPU time; they do not consider the additional
advantage of GOMORS (parallel) in wall-clock time.
Fig. 2 Synthetic test problems: box plots of uncovered hypervolume metric values summed over the eleven
test problems, comparing GOMORS, ParEGO and NSGA-II. Axis scales are identical across all sub-plots,
which cover problems with 8, 16 and 24 decision variables and budgets of 200 and 400 function evaluations.
The upper row is for 400 evaluations and the lower row for 200 evaluations. In all cases lower values of
uncovered hypervolume are better
5.2 Results: synthetic test problems
The metric analysis performed on the synthetic test problems is depicted via box plots in
Fig. 2. The vertical axis in Fig. 2 is the convoluted uncovered hypervolume metric described
in Sect. 5.1. The lower the uncovered hypervolume, the better the quality of the non-
dominated front that the algorithm has found.

Figure 2 aims to (1) visualize the speed of convergence of all algorithms (by comparing results
at 200 and 400 objective function evaluations) and (2) show the effect of increasing
decision space dimension on algorithm performance (by comparing problems with 8,
16 and 24 decision variables). The box plot for each algorithm within each
sub-plot of Fig. 2 corresponds to the uncovered hypervolume metric values summed over
the eleven test problems (see Sect. 5.1). Lower values of the metric signify superior
performance, and a smaller spread within a box plot indicates more robust performance. Each
sub-plot within Fig. 2 compares the performance of all three algorithms for a specified number of
decision variables (8, 16 or 24) and a fixed number of function evaluations (either 200 or 400).
Traversing from bottom to top, one can see the change in algorithm performance as function
evaluations increase (from 200 to 400), and a left-to-right traversal shows performance
differences as the decision space dimension increases (8, 16 and 24 decision variables).
Figure 2 clearly illustrates that GOMORS outperforms ParEGO on the 24 variable ver-
sions of the synthetic problems, both in terms of speed of convergence (at 200 evaluations)
and at algorithm termination after 400 function evaluations (see the top right sub-plot of Fig. 2).
GOMORS converges to a good solution faster than ParEGO on the 8 and 16 variable
versions of the synthetic test problems, but the difference is not as distinguishable (from Fig. 2)
as in the case of the 24 variable versions. The Wilcoxon rank-sum test [14] (at 5 percent
significance) confirms that the performance of GOMORS is better than that of ParEGO for the 8, 16
and 24 variable versions of the test problems after 100, 200 and 400 function evaluations.
Figure 2 also indicates that both GOMORS and ParEGO significantly outperform NSGA-II
on the synthetic problems when evaluations are limited to 400.

Fig. 3 Groundwater application problems: box plots of uncovered hypervolume metric values summed over
the two groundwater remediation design optimization problems, GWDA and GWDM. See the Fig. 2 caption
for an explanation of the layout
5.3 Results: groundwater remediation design problem
The box plot analysis methodology for the groundwater remediation problem is similar to
that employed for the synthetic test problems, and the analysis is summarized in Fig. 3.
The results summarized in Fig. 3 are consistent with the findings for the synthetic
test functions in Fig. 2. GOMORS and ParEGO both outperform NSGA-II on the two MO
groundwater problems defined in Table 2, GWDA and GWDM, when function evaluations
are limited to 400. The performance of GOMORS (as per Fig. 3) is superior to that of ParEGO
on the 12 and 24 variable versions of GWDA and GWDM, both in terms of
speed of convergence (at 200 evaluations) and upon algorithm termination after 400 function
evaluations. The difference between GOMORS and ParEGO is not visually discernible for
the 6 variable versions of the problems, but the rank-sum test (at 5 percent significance)
confirms that the performance of GOMORS is better, which is supported by Fig. 4.
Figure 4 provides a visual comparison of the non-dominated trade-offs obtained from
GOMORS and ParEGO on the GWDM groundwater problem. The red line within each
sub-plot is an estimate of the Pareto front of GWDM. Since the true front is not known,
we obtained this estimate through a single trial of NSGA-II with 50,000 function evaluations.
The green dots within each sub-plot correspond to the non-dominated solutions obtained via
application of the algorithm referenced in the sub-plot. There are two sub-figures within Fig. 4
and four sub-plots within each sub-figure. Figure 4a corresponds to the median (the fronts
ranked fifth over the ten experiments of each algorithm) non-dominated fronts obtained from
each algorithm for GWDM with 6 decision variables, and Fig. 4b corresponds to the median
fronts obtained from each algorithm for GWDM with 24 decision variables. A clockwise
traversal of the sub-plots within each sub-figure depicts results in the following order: (1)
GOMORS after 100 evaluations, (2) GOMORS after 200 evaluations, (3) ParEGO after 200
evaluations, and (4) ParEGO after 100 evaluations.

Fig. 4 Estimated non-dominated fronts for groundwater problem GWDM: visual comparison of the median
non-dominated fronts (as per uncovered hypervolume) obtained from GOMORS and ParEGO for GWDM
with 6 (Fig. 4a) and 24 (Fig. 4b) decision variables, after 100 and 200 expensive function evaluations. The
red line is the result of 50,000 evaluations with NSGA-II; green circles are the non-dominated solutions from
GOMORS or ParEGO
Figure 4 indicates that both algorithms converge toward the estimated Pareto front and
manage to find diverse solutions on the front for the 6 variable version of GWDM. However,
more diverse solutions are obtained by GOMORS, and with a better speed of convergence
than ParEGO (depicted by the 6 variable results after 100 and 200 evaluations). The
performance of GOMORS is significantly better than that of ParEGO for the 24 variable version
of the problem, both in terms of speed of convergence (depicted by the 24 variable GWDM
results after 100 and 200 evaluations) and diversity of solutions. In the case of computationally
expensive real-world problems it may not be possible to run multiple MO experiments. Hence,
we also visually analyzed the trade-offs of the worst case solutions (as per uncovered hypervolume)
obtained from GOMORS and ParEGO for both GWDM and GWDA. These visualizations
are provided in Section F of the online supplement to this paper. After the metric-based
analysis and the visual analysis, we conclude that the performance of GOMORS is
superior to that of ParEGO on the test problems examined here, especially on the relatively
higher dimensional problems.
6 Conclusion
GOMORS is a new parallel algorithm for multi-objective optimization of black box functions
that improves efficiency for computationally expensive objective functions by obtaining good
solutions with a relatively small number of evaluations (<500). GOMORS accomplishes
this with the construction in each iteration of a surrogate response surface based on all
the values of the objective functions computed in the current and previous iterations. Then
evolution and non-dominated sorting are applied to the inexpensive function describing this
surrogate surface. It is then possible to evaluate a large number of points on the surrogate
inexpensively so multiple Rules can be used to select a diversity of points for expensive
evaluation on parallel processors. The use of these multiple Rules is the innovative aspect
of GOMORS and contributes to its effectiveness and robustness with limited numbers of
evaluations (<500). GOMORS is very different from ParEGO, which solves multiple single
objective problems.
Our numerical results indicate that GOMORS was significantly more effective than
ParEGO when the number of evaluations was quite limited relative to the difficulty of the
problem (Figs. 2,3,4) for both the test functions and the groundwater partial differential
equation models. Computational demands for nonlinear optimization will grow rapidly as
the dimension increases, so the effectiveness of GOMORS becomes more obvious on higher
dimensional problems, as is evident in Figs. 2,3,4.
There are many real application models with computational times that are so large that
the evaluations will be greatly limited, especially for multi-objective problems. For example,
analysis of a carbon sequestration monitoring problem required global optimization with
seven decisions of a nonlinear, multiphase flow transient PDF model that took over 2 hours
per simulation [11]. So even 100 evaluations for a problem like this is probably more eval-
uations than are feasible. Hence the GOMORS approach to multi-objective optimization
is a contribution in the area of surrogate-assisted multi-objective optimization of objective
functions based on computationally expensive, multimodal, black box models.
Acknowledgments The research was supported by a Fulbright Fellowship to Akhtar and grants to Shoemaker
from NSF (CISE 1116298) and to Mahowald and Shoemaker from DOE (SciDAC).
Open Access This article is distributed under the terms of the Creative Commons Attribution License which
permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source
are credited.
References
1. Bader, J.M.: Hypervolume-Based Search for Multiobjective Optimization: Theory and Methods. Cre-
ateSpace, Paramount (2010)
2. Beume, N., Naujoks, B., Emmerich, M.: SMS-EMOA: multiobjective selection based on dominated
hypervolume. Eur. J. Oper. Res. 181(3), 1653–1669 (2007)
3. Buhmann, M.D.: Radial Basis Functions. Cambridge University Press, New York (2003)
4. Deb, K.: Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol.
Comput. 7(3), 205–230 (1999)
5. Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A fast elitist non-dominated sorting genetic algorithm
for multi-objective optimization: Nsga-ii. In: Proceedings of the 6th International Conference on Parallel
Problem Solving from Nature. PPSN VI, pp. 849–858. Springer, London, UK (2000)
6. Deb, K., Kalyanmoy, D.: Multi-Objective Optimization Using Evolutionary Algorithms, 1st edn. Wiley,
New York (2001)
7. Deb, K., Nain, P.: An evolutionary multi-objective adaptive meta-modeling procedure using artificial
neural networks. In: Yang, S., Ong, Y.S., Jin, Y. (eds.) Evolutionary Computation in Dynamic and Uncer-
tain Environments, vol. 51, chap. 13, pp. 297–322. Springer, Berlin (2007)
8. Diaz-Manriquez, A., Toscano-Pulido, G., Gomez-Flores, W.: On the selection of surrogate models in
evolutionary optimization algorithms. In: IEEE Congress on Evolutionary Computation (CEC), pp. 2155–2162 (2011)
9. Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-assisted evolution
strategies. In: Parallel Problem Solving from Nature VII. Springer, pp. 361–370 (2002)
10. Emmerich, M.T.M., Giannakoglou,K., Naujoks, B.: Single- and multiobjective evolutionary optimization
assisted by gaussian random field metamodels. IEEE Trans. Evol. Comput. 10(4), 421–439 (2006)
11. Espinet, A., Shoemaker, C.A., Doughty, C.: Estimation of plume distribution for carbon sequestration
using parameter estimation with limited monitoring data. Water Resour. Res. 49(7), 4442–4464 (2013)
12. Fang, H., Rais-Rohani, M., Liu, Z., Horstemeyer, M.F.: A comparative study of metamodeling methods
for multiobjective crashworthiness optimization. Comput. Struct. 83(25–26), 2121–2136 (2005)
13. Georgopoulou, C.A., Giannakoglou, K.C.: A multi-objective metamodel-assisted memetic algorithm with
strength-based local refinement. Eng. Optim. 41(10), 909–923 (2009)
14. Gibbons, J.D., Chakraborti, S.: Nonparametric Statistical Inference. Chapman and Hall, Boca Raton
(2010)
15. Gutmann, H.M.: A radial basis function method for global optimization. J. Global Optim. 19, 201–227
(2001)
16. Huang, D., Allen, T.T., Notz, W.I., Zeng, N.: Global optimization of stochastic black-box systems via
sequential kriging meta-models. J. Global Optim. 34(3), 441–466 (2006)
17. Jin, R., Chen, W., Simpson, T.W.: Comparative studies of metamodelling techniques under multiple
modelling criteria. Struct. Multidiscip. Optim. 23(1), 1–13 (2001)
18. Jin, Y., Olhofer, M., Sendhoff, B.: A framework for evolutionary optimization with approximate fitness
functions. IEEE Trans. Evol. Comput. 6(5), 481–494 (2002)
19. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions.
J. Global Optim. 13(4), 455–492 (1998)
20. Karakasis, M.K., Giannakoglou, K.C.: On the use of metamodel-assisted, multi-objective evolutionary
algorithms. Eng. Optim. 38(8), 941–957 (2006)
21. Knowles, J.: ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiob-
jective optimization problems. IEEE Trans. Evol. Comput. 10(1), 50–66 (2006)
22. Li, H., Zhang, Q.: Multiobjective optimization problems with complicated pareto sets, MOEA/D and
NSGA-II. IEEE Trans. Evol. Comput. 13(2), 284–302 (2009)
23. Lim, D., Jin, Y., Ong, Y.S., Sendhoff, B.: Generalizing surrogate-assisted evolutionary computation. IEEE
Trans. Evol. Comput. 14(3), 329–355 (2010)
24. Martínez, S.Z., Coello, C.A.C.: A memetic algorithm with non gradient-based local search assisted by
a meta-model. In: Proceedings of the 11th International Conference on Parallel Problem Solving from
Nature: Part I. PPSN’10. Springer, Berlin, Heidelberg, pp. 576–585 (2010)
25. Martinez, S.Z., Coello, C.A.C.: Combining surrogate models and local search for dealing with expensive
multi-objective optimization problems. In: IEEE Congress on Evolutionary Computation, pp. 2572–2579.
IEEE (2013)
26. McKay, M.D., Beckman, R.J., Conover, W.J.: A comparison of three methods for selecting values of
input variables in the analysis of output from a computer code. Technometrics 21(2), 239–245 (1979)
27. Minsker, B.S., Shoemaker, C.A.: Differentiating a finite element biodegradation simulation model for
optimal control. Water Resour. Res. 32(1), 187–192 (1996)
28. Minsker, B.S., Shoemaker, C.A.: Dynamic optimal control of in-situ bioremediation of ground water.
Water Resourc. Plan. Manag. 124(3), 149–161 (1998)
29. Montemayor-García, G., Toscano-Pulido, G.: A study of surrogate models for their use in multiobjective
evolutionary algorithms. In: 8th International Conference on Electrical Engineering, Computing Science
and Automatic Control (CCE) (2011)
30. Mueller, J., Shoemaker, C.A., Piche, R.: SO-I: a surrogate model algorithm for expensive nonlinear
integer programming problems including global optimization applications. J. Global Optim. 59(4), 865–
889 (2014)
31. Mueller, J., Shoemaker, C.A., Piche, R.: SO-MI: a surrogate model algorithm for computationally expen-
sive nonlinear mixed-integer, black-box global optimization problems. Comput. Oper. Res. 40(5), 1383–
1400 (2013)
32. di Pierro, F., Khu, S.T., Savić, D., Berardi, L.: Efficient multi-objective optimal design of water distribution
networks on a budget of simulations using hybrid algorithms. Environ. Model. Softw. 24(2), 202–213
(2009)
33. Ponweiser, W., Wagner, T., Biermann, D., Vincze, M.: Multiobjective optimization on a limited budget of
evaluations using model-assisted s-metric selection. In: Proceedings of the 10th International Conference
on Parallel Problem Solving from Nature: PPSN X. Springer, Berlin, Heidelberg, pp. 784–794 (2008)
34. Powell, M.J.D.: The Theory of Radial Basis Function Approximation in 1990, pp. 105–210. Oxford
University Press, USA (1992)
35. Priddy, K.L., Keller, P.E.: Artificial Neural Networks: An Introduction. SPIE Press, New York (2005)
36. Regis, R., Shoemaker, C.: Local function approximation in evolutionary algorithms for the optimization
of costly functions. IEEE Trans. Evol. Comput. 8(5), 490–505 (2004)
37. Regis, R., Shoemaker, C.: A stochastic radial basis function method for the global optimization of expen-
sive functions. INFORMS J. Comput. 19, 497–509 (2007)
38. Regis, R.G., Shoemaker, C.A.: Constrained global optimization of expensive black box functions using
radial basis functions. J. Global Optim. 31, 153–171 (2005)
39. Regis, R.G., Shoemaker, C.A.: Improvedstrategies for radial basis function methods for global optimiza-
tion. J. Global Optim. 37(1), 113–135 (2007)
40. Regis, R.G., Shoemaker, C.A.: Parallel stochastic global optimization using radial basis functions.
INFORMS J. Comput. 21(3), 411–426 (2009)
41. Regis, R.G., Shoemaker, C.A.: A quasi-multistart framework for global optimization of expensive func-
tions using response surface models. J. Global Optim. 56(4), 1719–1753 (2013)
42. Regis, R.G., Shoemaker, C.A.: Combining radial basis function surrogates dynamic coordinate search in
high dimensional expensive black-box optimization. Eng. Optim. 45(5), 529–555 (2013)
43. Rosales-Pérez, A., Coello Coello, C.A., Gonzalez, J.A., Reyes-García, C.A., Escalante, H.J.: A hybrid
surrogate-based approach for evolutionary multi-objective optimization. In: IEEE Congress on Evolutionary
Computation (CEC), pp. 2548–2555 (2013)
44. Sacks, J., Welch, W.J., Mitchell, T.J., Wynn, H.P.: Design and analysis of computer experiments. Stat.
Sci. 4(4), 409–423 (1989)
45. Santana-Quintero, L., Serrano-Hernandez, V., Coello Coello, C.A., Hernandez-Diaz, A., Molina, J.: Use of
radial basis functions and rough sets for evolutionary multi-objective optimization. In: IEEE Symposium
on Computational Intelligence in Multicriteria Decision Making, pp. 107–114 (2007)
46. Steuer, R., Choo, E.U.: An interactive weighted tchebycheff procedure for multiple objective program-
ming. Math. Program. 26, 326–344 (1983)
47. Vapnik, V.N.: Statistical Learning Theory, 1st edn. Wiley, New York (1998)
48. Vrugt, J.A., Robinson, B.A.: Improved evolutionary optimization from genetically adaptive multimethod
search. Proc. Natl. Acad. Sci. USA 104(3), 708–711 (2007)
49. Wild, S.M., Regis, R.G., Shoemaker, C.A.: ORBIT: optimization by Radial Basis Function Interpolation
in Trust-Regions. SIAM J. Sci. Comput. 30(6), 3197–3219 (2007)
50. Wild, S.M., Shoemaker, C.A.: Global convergence of radial basis function trust-region algorithms for
derivative-free optimization. SIGEST article. SIAM Rev. 55(2), 349–371 (2013)
51. Zapotecas Martínez, S., Coello Coello, C.A.: MOEA/D assisted by RBF networks for expensive multi-
objective optimization problems. In: Proceedings of the Fifteenth Annual Conference on Genetic and
Evolutionary Computation (GECCO ’13), pp. 1405–1412. ACM, New York, NY, USA (2013)
52. Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE
Trans. Evol. Comput. 11(6), 712–731 (2007)
53. Zhang, Q., Liu, W., Tsang, E., Virginas, B.: Expensive multiobjective optimization by MOEA/D with
Gaussian process model. IEEE Trans. Evol. Comput. 14(3), 456–474 (2010)
... Pareto optimal calibrations are sets of solutions found by multi-objective optimization that are non-dominated to each other but are superior to the rest of solutions in the search space. A solution is a non-dominated solution when no other solution found in multi-objective optimization search is better than it in terms of all objectives (Akhtar and Shoemaker, 2016). Ahmadi et al. (2014) apply multi-objective optimization for calibration of a SWAT model and conclude that MO optimization is more effective than single objective optimization. ...
... Surrogate assisted search methods typically employ computationally inexpensive response surface models (or "surrogate" models), within the iterative search process, in order to efficiently guide the search towards optimal solutions. Hence, we compare two surrogate based algorithms, ParEGO (Knowles, 2006) and GOMORS (Akhtar and Shoemaker, 2016), along with the five evolutionary algorithms discussed above, for analyzing their relative effectiveness in MO calibration of expensive hydrologic/watershed models on a limited budget of a few hundred simulation evaluations. In surrogate-assisted optimization methods such as ParEGO and GOMORS, the surrogate model is constructed by utilizing evaluations already explored by the optimization algorithm. ...
... ParEGO (Knowles, 2006) uses a Kriging-based surrogate surface for multi-objective optimization, and it is specifically designed for applications involving a very limited evaluation budget. GOMORS (Akhtar and Shoemaker, 2016) is another iterative scheme, which employs Radial Basis Functions (RBF) as a surrogate model to guide multi-objective search towards the optimal set of solutions. Both the Kriging-based surrogate surface and the RBF surrogate are versatile data-driven models that can effectively capture diverse regression relationships between decision variables and objective function values. ...
... One way to reduce the number of function evaluations required to solve multiobjective optimization problems is to hybridize one of the aforementioned methods with surrogate models [2]. Such hybrid methods begin by sampling a small number of points according to a design of experiments and then fit surrogate models for each objective and constraint function. ...
... After evaluating the selected candidate points on the expensive objective and constraint functions, the surrogate models are updated to account for the new observations, and the process is iterated. Akhtar and Shoemaker [2] applied this approach on problems with up to 24 variables, obtaining faster convergence to the Pareto front when compared to the widely used NSGA-II genetic algorithm. ...
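To make the hybrid loop described in these excerpts concrete, the following minimal Python sketch fits one RBF surrogate per objective, picks a candidate that the surrogates predict to be non-dominated, evaluates it on the expensive functions, and refits. The toy objectives, the candidate-selection rule, and all names are illustrative assumptions, not the exact method of any cited work:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical expensive bi-objective black box (a stand-in for a simulation).
def expensive_objectives(x):
    f1 = np.sum((x - 0.25) ** 2, axis=1)
    f2 = np.sum((x + 0.25) ** 2, axis=1)
    return np.column_stack([f1, f2])

def pareto_mask(F):
    # True for rows of F that no other row dominates (minimization).
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominates_i.any()
    return mask

rng = np.random.default_rng(0)
dim, n_iters, n_cand = 6, 10, 500
X = rng.uniform(-1, 1, size=(20, dim))    # design of experiments
F = expensive_objectives(X)               # expensive evaluations

for _ in range(n_iters):
    surrogate = RBFInterpolator(X, F, kernel="cubic")  # one RBF per objective
    cand = rng.uniform(-1, 1, size=(n_cand, dim))      # cheap candidate points
    x_new = cand[pareto_mask(surrogate(cand))][0]      # a predicted ND point
    X = np.vstack([X, x_new])                          # update observations
    F = np.vstack([F, expensive_objectives(x_new[None, :])])

print(pareto_mask(F).sum(), "non-dominated points after", len(X), "evaluations")
```

A practical implementation would select several points per iteration (possibly in parallel) and use a more careful selection rule, but the fit/select/evaluate/refit cycle is the same.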
... For example, for multiobjective optimization, simDIRECT could be run for (say) 600 evaluations and the best 200 of these points used as the initial population for a multiobjective evolutionary algorithm. Another possible direction would be to hybridize simDIRECT with surrogate models, using an approach similar to that of Akhtar and Shoemaker [2], but using simDIRECT instead of an MOEA to solve optimization problems on the surrogates. ...
Article
Full-text available
While constrained multiobjective optimization is generally very difficult, there is a special case in which such problems can be solved with a simple, elegant branch-and-bound algorithm. This special case is when the objective and constraint functions are Lipschitz continuous with known Lipschitz constants. Given these Lipschitz constants, one can compute lower bounds on the functions over subregions of the search space. This allows one to iteratively partition the search space into rectangles, deleting those rectangles which, based on the lower bounds, contain only points that are provably infeasible or provably dominated by previously sampled points. As the algorithm proceeds, the rectangles that have not been deleted provide a tight covering of the Pareto set in the input space. Unfortunately, for black-box optimization this elegant algorithm cannot be applied, as we would not know the Lipschitz constants. In this paper, we show how one can heuristically extend this branch-and-bound algorithm to the case where the problem functions are black-box, using an approach similar to that used in the well-known DIRECT global optimization algorithm. We call the resulting method "simDIRECT." Initial experience with simDIRECT on test problems suggests that it performs similarly to, or better than, multiobjective evolutionary algorithms when solving problems with small numbers of variables (up to 12) and a limited number of runs (up to 600).
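The core computation in this branch-and-bound, the Lipschitz lower bound over a rectangle, is short enough to sketch. The following minimal Python sketch assumes minimization, Euclidean-norm Lipschitz constants, and a rectangle sampled at its centre; every function name and number is illustrative rather than taken from the paper:

```python
import numpy as np

# For an L-Lipschitz f (Euclidean norm) sampled at a rectangle's centre c,
# f(x) >= f(c) - L * r for every x in the rectangle, where r is the distance
# from c to a corner. A rectangle whose optimistic lower-bound vector is
# dominated by an already-sampled point can be discarded.

def lower_bounds(f_centre, L, half_widths):
    """Lower-bound each objective over a box from its centre values.

    f_centre    : (m,) objective values at the centre
    L           : (m,) known Lipschitz constants, one per objective
    half_widths : (d,) half side lengths of the rectangle
    """
    r = np.linalg.norm(half_widths)          # centre-to-corner distance
    return f_centre - L * r

def provably_dominated(lb, sampled_F):
    # The whole box is dominated if some sampled point beats even the
    # optimistic lower bounds on every objective (minimization).
    return np.any(np.all(sampled_F <= lb, axis=1) &
                  np.any(sampled_F < lb, axis=1))

# Toy check: a small, far-away box is dominated by a good sampled point.
lb = lower_bounds(f_centre=np.array([5.0, 4.0]),
                  L=np.array([1.0, 1.0]),
                  half_widths=np.array([0.5, 0.5]))
print(provably_dominated(lb, sampled_F=np.array([[1.0, 1.0]])))  # True
```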
... Apart from efficient global optimization algorithms, there are other works with good performance on computationally expensive MOPs. In [38], Akhtar et al. proposed an evolutionary multiobjective optimization algorithm that uses Radial Basis Functions to iteratively compute surrogate response surfaces as an approximation of the objective function. Li et al. [39] proposed a prescreening criterion to enhance the performance of NSGA-III for dealing with computationally expensive MOPs. ...
Article
Full-text available
Surrogate-assisted evolutionary algorithms (SAEAs) are popular for solving expensive optimization problems. However, most existing SAEAs are designed for solving single-objective or multiobjective optimization problems with two or three objectives. Few works have been reported that deal with expensive many-objective optimization problems with more than three objectives, because of two difficulties: one is the curse of dimensionality caused by many-objective problems, and the other is the limited computational resources available for expensive optimization problems. Since an effective selection method can better solve many-objective optimization problems, and since a high-efficiency search and an accurate model can save computational resources on expensive optimization problems, this paper proposes a diverse/converged individual competition algorithm, which features a novel diverse/converged individual competition selection mechanism, a hybrid search mechanism, and a segmentation approach. The diverse/converged individual competition selection mechanism maintains a good balance between the convergence and diversity of the selected solutions for solving many-objective optimization problems. The hybrid search mechanism performs a memetic search and a genetic search at different stages of the evolution process to further generate superior solutions. The segmentation approach uses two small populations to build two surrogate models that predict different regions of the space, which improves the accuracy of the prediction. The proposed algorithm is compared with several state-of-the-art algorithms on widely used benchmark functions. The experimental results show that the proposed algorithm performs significantly better than the compared algorithms.
... Computationally expensive multiobjective optimization problems (EMOPs) are common in real-world problems, e.g., neural architecture search [1], design of supercritical airfoils [2], [3], and groundwater remediation design [4]. EMOPs are challenging due to the involvement of multiple, often conflicting, and time-consuming objective functions. ...
Article
Full-text available
With the rising popularity of computationally expensive multiobjective optimization problems (EMOPs) in real-world applications, many surrogate-assisted evolutionary algorithms (SAEAs) have been proposed in the recent decade. Nevertheless, high-dimensional EMOPs remain challenging for existing SAEAs owing to their need for massive numbers of fitness evaluations and complex models. We propose an SAEA with a supervised reconstruction strategy, namely SR-SAEA, for solving high-dimensional EMOPs. In SR-SAEA, we first select several well-converged reference solutions to form a set of reference vectors in the decision space. Each candidate solution is then projected onto these reference vectors, generating a projection vector that reflects its proximity to the reference solutions. This allows the optimization of the high-dimensional decision vector to be approximated by optimizing the low-dimensional projection vector. Subsequently, a supervised autoencoder is employed to reconstruct the optimized low-dimensional projection vector back to the original decision space. Notably, the latent vector of the autoencoder is replaced with the projection vector for supervised reconstruction. An ablation study confirms the effectiveness of the proposed supervised reconstruction strategy. The superiority of SR-SAEA, compared with six state-of-the-art SAEAs, is validated on benchmark problems with up to 200 decision variables.
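One plausible reading of the projection step described in this abstract is a set of scalar projections onto unit reference vectors; the short Python sketch below shows the dimensionality reduction under that assumption. The normalization, dimensions, and all names are guesses for illustration, not the paper's exact rule:

```python
import numpy as np

# With k reference solutions spanning k reference vectors in a d-dimensional
# decision space (k << d), each candidate is summarized by its scalar
# projections onto those vectors.
rng = np.random.default_rng(1)
d, k, n = 200, 5, 8                       # decision dim, refs, candidates
refs = rng.uniform(0, 1, size=(k, d))     # well-converged reference solutions
V = refs / np.linalg.norm(refs, axis=1, keepdims=True)  # unit reference vectors

cands = rng.uniform(0, 1, size=(n, d))
P = cands @ V.T                           # (n, k) projection vectors
print(P.shape)                            # the 200-D search is summarized in 5-D
```

The search can then operate on the 5-dimensional projection vectors, with an autoencoder (as the abstract describes) mapping optimized projections back to the 200-dimensional decision space.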
Article
Full-text available
This study proposes a constrained multiobjective robust simulation optimization (CMRSO) method to address black-box problems with multiple objectives and constraints under uncertainties, especially when the objectives and constraints are evaluated by costly simulations. Neighborhood exploration is first performed for each iterate to search for its infeasible neighbors and worst-case feasible neighbors with the help of kriging surrogate models of the constraints and objectives. Next, a local move direction and a proper step size are determined to obtain an updated iterate that stays away from the previously found infeasible neighbors and worst-case feasible neighbors. These two steps are repeated until no feasible local move direction exists or the computational budget is exhausted. By evolving iteratively and independently from a set of initial solutions, the method produces multiple final solutions that form a set of robust efficient solutions. Finally, the CMRSO method is applied to a synthetic constrained biobjective optimization problem and a network-wide signal timing simulation optimization (SO) problem under cyber-attacks. Our study shows the effectiveness of CMRSO even with a limited computational budget, indicating that it may be a promising tool for solving simulation-based problems with multiple objectives and constraints under uncertainties.
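The local-move idea in this abstract can be caricatured in a few lines: sample a neighborhood around the current iterate, flag bad neighbors with a cheap surrogate, and step away from their average direction. Everything below, including the stand-in constraint in place of a kriging model and the fixed step size, is a rough illustrative assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

def predicted_infeasible(x):
    # Stand-in for a kriging constraint model: infeasible if ||x|| > 1.
    return np.linalg.norm(x, axis=-1) > 1.0

x = np.array([0.9, 0.3])                       # current iterate
neighbors = x + 0.2 * rng.standard_normal((30, 2))
bad = neighbors[predicted_infeasible(neighbors)]

if len(bad) > 0:
    # Unit vectors from the iterate toward each bad neighbor.
    u = (bad - x) / np.linalg.norm(bad - x, axis=1, keepdims=True)
    direction = -u.mean(axis=0)                # move away from bad neighbors
    direction /= np.linalg.norm(direction)
    x = x + 0.1 * direction                    # fixed step size for the sketch

print(x, predicted_infeasible(x))
```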
Chapter
Multi-objective evolutionary algorithms have been shown to solve multi-objective optimization problems well and have been very widely used. However, they still have drawbacks when facing many-objective optimization problems (MaOPs), such as failure to develop sufficient environmental selection pressure to guide the population toward the Pareto front, long computation times, and difficulty of visualization, which has led to increasing interest in research on solving MaOPs. In addition, many practical applications involve expensive MaOPs. This paper comprises three parts: it introduces common mainstream methods for solving MaOPs; it introduces common surrogate-assisted methods for solving expensive MaOPs; and it applies many-objective evolutionary algorithms (MaOEAs) and surrogate-assisted evolutionary algorithms to solve different MaOPs based on PlatEMO. The results show that the effectiveness of the different methods varies across problems.
Article
Full-text available
Many scientific phenomena are now investigated by complex computer models or codes. A computer experiment is a number of runs of the code with various inputs. A feature of many computer experiments is that the output is deterministic—rerunning the code with the same inputs gives identical observations. Often, the codes are computationally expensive to run, and a common objective of an experiment is to fit a cheaper predictor of the output to the data. Our approach is to model the deterministic output as the realization of a stochastic process, thereby providing a statistical basis for designing experiments (choosing the inputs) for efficient prediction. With this model, estimates of uncertainty of predictions are also available. Recent work in this area is reviewed, a number of applications are discussed, and we demonstrate our methodology with an example.
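The modeling idea in this abstract, treating a deterministic code as the realization of a stochastic process to obtain a cheap predictor with uncertainty estimates, can be illustrated with scikit-learn's Gaussian process regressor. The one-dimensional test "code" and all settings below are stand-ins for an actual expensive simulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def code(x):                          # hypothetical deterministic computer code
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 2, 8).reshape(-1, 1)       # 8 "expensive" runs
y_train = code(X_train).ravel()

# Near-zero noise term: reruns of a deterministic code give identical output,
# so the fitted model interpolates the training runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-10)
gp.fit(X_train, y_train)

X_new = np.array([[0.37], [1.61]])
mean, std = gp.predict(X_new, return_std=True)      # prediction + uncertainty
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:.2f}: prediction {m:.3f} +/- {s:.3f}")
```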
Article
Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.
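One of the sampling plans this article analyzes, Latin hypercube sampling, is easy to sketch: stratify each coordinate into n equal bins, place one jittered point per bin, and shuffle the bin order independently for each dimension. A minimal version (the function name and seed are illustrative):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0, 1]^d with exactly one point in each of n bins per axis."""
    # One random permutation of bin indices per dimension, plus jitter in-bin.
    bins = np.stack([rng.permutation(n) for _ in range(d)], axis=1)  # (n, d)
    return (bins + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(2)
pts = latin_hypercube(n=10, d=3, rng=rng)
# Each coordinate visits every decile exactly once, unlike simple random
# sampling, which is the variance-reduction property the article studies:
print(np.sort((pts[:, 0] * 10).astype(int)))   # [0 1 2 ... 9]
```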
Article
We introduce a new framework for the global optimization of computationally expensive multimodal functions when derivatives are unavailable. The proposed Stochastic Response Surface (SRS) Method iteratively utilizes a response surface model to approximate the expensive function and identifies a promising point for function evaluation from a set of randomly generated points, called candidate points. Assuming some mild technical conditions, SRS converges to the global minimum in a probabilistic sense. We also propose Metric SRS (MSRS), which is a special case of SRS where the function evaluation point in each iteration is chosen to be the best candidate point according to two criteria: the estimated function value obtained from the response surface model, and the minimum distance from previously evaluated points. We develop a global optimization version and a multistart local optimization version of MSRS. In the numerical experiments, we used a radial basis function (RBF) model for MSRS and the resulting algorithms, Global MSRBF and Multistart Local MSRBF, were compared to 6 alternative global optimization methods, including a multistart derivative-based local optimization method. Multiple trials of all algorithms were compared on 17 multimodal test problems and on a 12-dimensional groundwater bioremediation application involving partial differential equations. The results indicate that Multistart Local MSRBF is the best on most of the higher dimensional problems, including the groundwater problem. It is also at least as good as the other algorithms on most of the lower dimensional problems. Global MSRBF is competitive with the other alternatives on most of the lower dimensional test problems and also on the groundwater problem. These results suggest that MSRBF is a promising approach for the global optimization of expensive functions.
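The two selection criteria of Metric SRS described in this abstract, the surrogate-estimated function value and the minimum distance from previously evaluated points, suggest the following minimal Python sketch. The weight, the scaling of both criteria to [0, 1], and the stand-in objective are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def select_candidate(X_eval, f_eval, cand, w=0.5):
    """Score candidates by predicted value and distance from evaluated points."""
    surrogate = RBFInterpolator(X_eval, f_eval, kernel="cubic")
    f_hat = surrogate(cand)
    dist = cdist(cand, X_eval).min(axis=1)    # min distance to evaluated points

    # Scale to [0, 1]; a low predicted value and a high distance are both good.
    v = (f_hat - f_hat.min()) / (np.ptp(f_hat) + 1e-12)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    return cand[np.argmin(w * v + (1 - w) * d)]

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(15, 4))
f = np.sum(X**2, axis=1)                      # stand-in expensive function
x_next = select_candidate(X, f, rng.uniform(-2, 2, size=(300, 4)))
print(x_next)
```

Shifting the weight toward the distance criterion favors exploration of unsampled regions; shifting it toward the predicted value favors exploitation near the current best.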
Article
In many areas of mathematics, science and engineering, from computer graphics to inverse methods to signal processing, it is necessary to estimate parameters, usually multidimensional, by approximation and interpolation. Radial basis functions are a powerful tool which work well in very general circumstances and so are becoming of widespread use as the limitations of other methods, such as least squares, polynomial interpolation or wavelet-based methods, become apparent. The author's aim is to give a thorough treatment from both the theoretical and practical implementation viewpoints. For example, he emphasises the many positive features of radial basis functions, such as the unique solvability of the interpolation problem, the computation of interpolants, and their smoothness and convergence, and provides a careful classification of the radial basis functions into types that have different convergence. A comprehensive bibliography rounds off what will prove a very valuable work.
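The unique solvability emphasized here can be seen directly for a positive-definite kernel such as the Gaussian: the interpolation matrix is nonsingular, so the coefficient system has exactly one solution and the interpolant reproduces the data exactly. A minimal sketch (the kernel choice, shape parameter, and test data are illustrative):

```python
import numpy as np

def gaussian(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)            # a positive-definite basic function

def rbf_fit(X, f):
    # Interpolation matrix A[i, j] = phi(||x_i - x_j||); positive definite,
    # hence nonsingular, hence A @ c = f has a unique solution.
    A = gaussian(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    return np.linalg.solve(A, f)

def rbf_eval(X, c, Xq):
    return gaussian(np.linalg.norm(Xq[:, None] - X[None, :], axis=-1)) @ c

rng = np.random.default_rng(4)
X = rng.uniform(size=(12, 2))                 # scattered interpolation nodes
f = np.sin(4 * X[:, 0]) * X[:, 1]             # data values at the nodes
c = rbf_fit(X, f)
print(np.allclose(rbf_eval(X, c, X), f))      # True: exact interpolation
```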