ORIGINAL ARTICLE

Multi-criteria group decision-making method based on interdependent inputs of single-valued trapezoidal neutrosophic information

Ru-xia Liang¹ · Jian-qiang Wang¹ · Lin Li²

Received: 18 June 2016 / Accepted: 25 October 2016 / Published online: 17 November 2016
© The Natural Computing Applications Forum 2016
Abstract Single-valued trapezoidal neutrosophic numbers (SVTNNs) are very useful tools for describing complex information, because they are able to maintain the completeness of the information and describe it accurately and comprehensively. This paper develops a method based on the single-valued trapezoidal neutrosophic normalized weighted Bonferroni mean (SVTNNWBM) operator to address multi-criteria group decision-making (MCGDM) problems. First, the limitations of existing operations for SVTNNs are discussed, after which improved operations are defined. Second, a new comparison method based on a score function is proposed. Then, the entropy-weighted method is established in order to obtain objective expert weights, and the SVTNNWBM operator is proposed based on the new operations of SVTNNs. Furthermore, a single-valued trapezoidal neutrosophic MCGDM method is developed. Finally, a numerical example and comparison analysis are conducted to verify the practicality and effectiveness of the proposed approach.
Keywords Multi-criteria group decision-making · Single-valued trapezoidal neutrosophic number · Single-valued trapezoidal neutrosophic weighted Bonferroni mean operator · Entropy-weighted method
1 Introduction
Multi-criteria decision-making (MCDM) methods and
multi-criteria group decision-making (MCGDM) methods
are widely used in real-life decision-making problems.
However, these situations often involve uncertain, incom-
plete or indeterminate decision-making information. To
address this problem, Zadeh [1] proposed fuzzy sets, which
can provide a better representation of reality. Since then,
various extensions of fuzzy sets have emerged, such as
interval-valued fuzzy sets (IVFSs) [2], intuitionistic fuzzy
sets (IFSs), interval-valued intuitionistic fuzzy sets
(IVIFSs) [3–5], hesitant fuzzy sets (HFSs) [6] and intu-
itionistic hesitant fuzzy sets (IHFSs) [7], all of which have
been used to solve MCDM [8–10] and MCGDM problems
[11–13]. IFSs and IVIFSs have generated further extensions to cope with the vagueness and hesitancy of knowledge or decision information, including triangular intuitionistic fuzzy numbers (TrIFNs) [7, 14, 15], trapezoidal intuitionistic fuzzy numbers (TIFNs) and trapezoidal interval-valued intuitionistic fuzzy numbers (TIVIFNs) [16]. These extensions possess a notable advantage in that they extend the domain of IFSs from a discrete set to a continuous one. For example, TIFNs are defined using trapezoidal fuzzy numbers (TFNs) to express membership and non-membership functions, which helps describe decision makers' (DMs') information with precision in different dimensions [17]. Still, many uncertainties exist in
real decision-making processes, including indeterminate,
inconsistent, imprecise, incomplete and even unknown
information, which are beyond the scope of FSs and IFSs.
Smarandache developed his seminal theory of neutro-
sophic logic and neutrosophic sets (NSs) [18,19] and pointed
that the NS is a generalization of the IFS [20]. The prominent
characteristic of a NS is the independence among the truth-
✉ Jian-qiang Wang
jqwang@csu.edu.cn

¹ School of Business, Central South University, Changsha 410083, People's Republic of China
² School of Business, Hunan University, Changsha 410082, People's Republic of China

Neural Comput & Applic (2018) 30:241–260
https://doi.org/10.1007/s00521-016-2672-2
membership, falsity-membership and indeterminacy-membership, which allows NSs to express more abundant and flexible information than FSs and IFSs [21]. However, NSs cannot be applied directly in real scientific or engineering problems unless their descriptions are further specified. Since they were proposed, work on NS
theory has progressed rapidly, and a number of applications
have been identified [22,23]. Furthermore, many extensions
have been developed, such as simplified neutrosophic sets
(SVNSs and INSs) [24,25], multi-valued neutrosophic sets
(MVNSs) [26] and normal neutrosophic sets (NNSs) [27].
Moreover, some researchers have attempted to combine NSs
with other traditional sets in order to enhance the ability to
represent uncertainty; examples include single-valued neu-
trosophic graphs [28], interval-valued neutrosophic graphs
[29] and interval-valued neutrosophic parameterized (IVNP-)
soft sets [30]. In addition, interval-valued neutrosophic hesi-
tant fuzzy sets (IVNHFSs) [31] and simplified neutrosophic
linguistic sets (SNLSs) [32–35] have been proposed. However, in SNLSs, the three membership degrees are defined relative to a fuzzy concept such as "Excellent" or "Good", which forms a discrete set; naturally, this may lead to information loss, so it is worthwhile to extend the discrete set to a continuous one. Two studies [36, 37] addressed this by proposing a method to transform linguistic information into triangular fuzzy numbers (TrFNs). Deli and Şubaş [21] defined single-valued triangular neutrosophic numbers (SVTrN-numbers) as a generalization of TrFNs and TrIFNs, allowing the DMs' information to be expressed completely in different dimensions [17].
Ye [38] proposed single-valued trapezoidal neutrosophic
numbers (SVTNNs) as an extension of SVTrN-numbers in
order to improve the ability to describe indeterminate and
inconsistent information. SVTNNs have attracted a great
deal of research attention because of their advantages in
representing incomplete and inconsistent information while
avoiding information loss and distortion in complex decision-making problems. For example, Deli and Şubaş [39] proposed a new ranking method for SVTNNs, which they applied to tackle MCDM problems. Smarandache defined
single-valued neutrosophic trapezoidal linguistic numbers
(SVNTLNs) by combining SVTNNs with trapezoidal
fuzzy linguistic variables, and he also defined the neutro-
sophic trapezoidal linguistic weighted arithmetic averaging
aggregation operator, the neutrosophic trapezoidal lin-
guistic weighted geometric aggregation operator [40], the
interval neutrosophic trapezoidal linguistic weighted
arithmetic averaging aggregation operator and the interval
neutrosophic trapezoidal linguistic weighted geometric
averaging aggregation operator [41]. Ye [38] proposed the
concept of a trapezoidal neutrosophic number (TNN) and
defined the basic operations of TNNs. Based on this work,
he developed the trapezoidal neutrosophic weighted arith-
metic averaging (TNWAA) operator and the trapezoidal
neutrosophic weighted geometric averaging (TNWGA)
operator. However, in the method proposed by Ye [38], the
TFNs and the three membership degrees are treated independently, such that their complementary effects might be
ignored; this could lead to information distortion and
conservative results. Furthermore, the method does not
take into account interrelationships among criteria, which
widely exist in real-world situations.
Aggregation operators have attracted considerable attention in information fusion. Many efficient aggregation operators have been proposed and applied to MCGDM problems [42, 43].
They can be roughly divided into two categories [17]:
aggregation operators with independent criteria, as intro-
duced in the research described above [21,38–41], and
aggregation operators that consider interdependent inputs,
which widely exist in real decision-making problems.
Bonferroni [44] initially proposed the Bonferroni mean
(BM) operator, which is prominently characterized by its
capacity to capture the interrelationships of input arguments.
Many extensions of the BM operator have been applied in
various fields. For example, Li et al. [45] introduced the
geometric BM operator, applying it to environments with
IFNs; meanwhile, Liu et al. [46] applied the BM operator to
MVNSs, Liu and Jin [47] introduced a trapezoidal fuzzy
linguistic BM operator, and Zhu et al. [48] developed tri-
angular fuzzy BM operators and applied them to MCDM
problems. Chen et al. [49] generalized the extended BM
operator to explore its aggregation mechanism explicitly;
Tian et al. [32] proposed the simplified neutrosophic lin-
guistic normalized weighted BM operator, the simplified
neutrosophic linguistic normalized geometric weighted BM
operator and the gray linguistic weighted BM operator [42]
to handle MCDM problems. Finally, Zhang et al. [50]
constructed an improved decision support model that intro-
duced IVNSs to denote online reviews and utilized BM
operators to consider interrelationships among criteria.
As these examples illustrate, the BM operator has found
applications in many fields, such as FSs, IFSs, linguistic
information, NSs and various extensions of them. At the
same time, SVTNNs can express indeterminate and
inconsistent information more flexibly and have therefore
gained some attention. However, little research has com-
bined these concepts to address MCGDM problems using
BM operators under SVTNN environments. Previous
studies [38,53] have focused on using the traditional
arithmetic mean operator or geometric mean operator with
SVTNNs, meaning that they only deal with independent
criteria. Moreover, there are some drawbacks in defining
the operations and comparison methods between two
SVTNNs. To overcome these deficiencies, this paper pro-
poses a new comparison method. In addition, expert
weights are determined using an entropy-weighted method.
Furthermore, the single-valued trapezoidal neutrosophic
normalized weighted BM (SVTNNWBM) operator is
proposed. Finally, a group decision-making problem for
satisfaction assessment is solved using the approach based
on the SVTNNWBM operator and the new comparison
method with SVTNNs.
The rest of this paper is organized as follows. Section 2 briefly reviews some concepts regarding SVTNNs and their operations. Section 3 defines the new operations and comparison method. In Sect. 4, some SVTNN aggregation operators are introduced, including the single-valued trapezoidal neutrosophic BM (SVTNNBM) operator and the SVTNNWBM operator. Section 5 introduces the entropy-weighted method and develops a single-valued trapezoidal neutrosophic MCGDM approach by integrating the SVTNNWBM operator. Section 6 provides an illustrative example to demonstrate the feasibility and applicability of the proposed approach; it also contains sensitivity and comparative analyses and discussions. Section 7 presents conclusions.
2 Preliminaries
This section introduces some basic concepts and compar-
ison methods related to SVTNNs; these concepts are useful
and will be utilized in the subsequent analyses.
Definition 1 [51] Let $K = [a_1, a_2, a_3, a_4]$ be a TFN on the real number set $R$, with $a_1 \le a_2 \le a_3 \le a_4$. Then the membership function $\mu_K: R \to [0,1]$ is defined as follows:

$$
\mu_K(x)=
\begin{cases}
(x-a_1)\,l_K/(a_2-a_1), & a_1 \le x < a_2;\\
l_K, & a_2 \le x \le a_3;\\
(a_4-x)\,l_K/(a_4-a_3), & a_3 < x \le a_4;\\
0, & \text{otherwise},
\end{cases}
$$

where $l_K$ denotes the maximum membership degree of $K$. When $a_2 = a_3$, the TFN $K = [a_1, a_2, a_3, a_4]$ is reduced to a TrFN.
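To make the piecewise definition concrete, the following minimal sketch evaluates the TFN membership function; the function name is illustrative and the default height $l_K = 1$ is an assumption, not from the paper.

```python
# Membership function of a TFN K = [a1, a2, a3, a4] (Definition 1).
# The height l defaults to 1.0 here as an illustrative assumption.

def tfn_membership(x, a1, a2, a3, a4, l=1.0):
    if a1 <= x < a2:
        return (x - a1) * l / (a2 - a1)   # rising edge on [a1, a2)
    if a2 <= x <= a3:
        return l                          # plateau on [a2, a3]
    if a3 < x <= a4:
        return (a4 - x) * l / (a4 - a3)   # falling edge on (a3, a4]
    return 0.0

# Rises linearly, stays at l on the core, then falls back to 0.
print(round(tfn_membership(0.25, 0.1, 0.3, 0.5, 0.7), 2))  # 0.75
```

Setting $a_2 = a_3$ collapses the plateau to a single point, which is exactly the TrFN reduction noted above.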
Ye [38] extended the concept of TFNs to SVNSs and
defined SVTNNs. In what follows, we will first introduce
SVNSs.
Definition 2 [52] Let $X$ be a space of points (objects), with a generic element in $X$ denoted by $x$. A SVNS $V$ in $X$ is characterized by three independent parts, namely the truth-membership function $T_V$, indeterminacy-membership function $I_V$ and falsity-membership function $F_V$, where $T_V: X \to [0,1]$, $I_V: X \to [0,1]$ and $F_V: X \to [0,1]$. For simplification, $V$ is denoted by $V = \{\langle x, (T_V(x), I_V(x), F_V(x))\rangle \mid x \in X\}$.

The SVNS $V$ is a subclass of NSs, and the sum of $T_V(x)$, $I_V(x)$ and $F_V(x)$ satisfies $0 \le T_V(x) + I_V(x) + F_V(x) \le 3$.

As SVNNs are denoted by crisp numbers that cannot represent much fuzzy information, the SVTNN was proposed to extend the discrete set to a continuous one.
Definition 3 [38] Let $T_{\tilde a}, I_{\tilde a}, F_{\tilde a} \in [0,1]$. A SVTNN $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ is a special NS on the real number set $R$, whose truth-membership function $\mu_{\tilde a}$, indeterminacy-membership function $\nu_{\tilde a}$ and falsity-membership function $\lambda_{\tilde a}$ are defined as follows:

$$
\mu_{\tilde a}(x)=
\begin{cases}
(x-a_1)\,T_{\tilde a}/(a_2-a_1), & a_1 \le x < a_2;\\
T_{\tilde a}, & a_2 \le x \le a_3;\\
(a_4-x)\,T_{\tilde a}/(a_4-a_3), & a_3 < x \le a_4;\\
0, & \text{otherwise};
\end{cases}
$$

$$
\nu_{\tilde a}(x)=
\begin{cases}
\big(a_2-x+I_{\tilde a}(x-a_1)\big)/(a_2-a_1), & a_1 \le x < a_2;\\
I_{\tilde a}, & a_2 \le x \le a_3;\\
\big(x-a_3+I_{\tilde a}(a_4-x)\big)/(a_4-a_3), & a_3 < x \le a_4;\\
1, & \text{otherwise};
\end{cases}
$$

$$
\lambda_{\tilde a}(x)=
\begin{cases}
\big(a_2-x+F_{\tilde a}(x-a_1)\big)/(a_2-a_1), & a_1 \le x < a_2;\\
F_{\tilde a}, & a_2 \le x \le a_3;\\
\big(x-a_3+F_{\tilde a}(a_4-x)\big)/(a_4-a_3), & a_3 < x \le a_4;\\
1, & \text{otherwise}.
\end{cases}
$$

When $a_1 > 0$, $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ is called a positive SVTNN, denoted by $\tilde a > 0$. Similarly, when $a_4 \le 0$, $\tilde a$ is a negative SVTNN, denoted by $\tilde a < 0$. When $0 \le a_1 \le a_2 \le a_3 \le a_4 \le 1$ and $T_{\tilde a}, I_{\tilde a}, F_{\tilde a} \in [0,1]$, $\tilde a$ is called a normalized SVTNN.

When $I_{\tilde a} = 1 - T_{\tilde a} - F_{\tilde a}$, the SVTNN is reduced to a TIFN. When $a_2 = a_3$, $\tilde a$ becomes a single-valued triangular neutrosophic number (SVTrNN). If $I_{\tilde a} = 0$ and $F_{\tilde a} = 0$, then the SVTNN is reduced to a generalized TFN, $\tilde a = \langle [a_1,a_2,a_3,a_4], T_{\tilde a} \rangle$.
Definition 4 [38] Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ and $\tilde b = \langle [b_1,b_2,b_3,b_4], (T_{\tilde b}, I_{\tilde b}, F_{\tilde b}) \rangle$ be two SVTNNs, and $f \ge 0$. Their operations are defined as follows:

1. $\tilde a \oplus \tilde b = \langle [a_1+b_1,\, a_2+b_2,\, a_3+b_3,\, a_4+b_4], (T_{\tilde a}+T_{\tilde b}-T_{\tilde a}T_{\tilde b},\; I_{\tilde a}I_{\tilde b},\; F_{\tilde a}F_{\tilde b}) \rangle$;
2. $\tilde a \otimes \tilde b = \langle [a_1b_1,\, a_2b_2,\, a_3b_3,\, a_4b_4], (T_{\tilde a}T_{\tilde b},\; I_{\tilde a}+I_{\tilde b}-I_{\tilde a}I_{\tilde b},\; F_{\tilde a}+F_{\tilde b}-F_{\tilde a}F_{\tilde b}) \rangle$;
3. $f\tilde a = \langle [fa_1,\, fa_2,\, fa_3,\, fa_4], (1-(1-T_{\tilde a})^f,\; (I_{\tilde a})^f,\; (F_{\tilde a})^f) \rangle$;
4. $\tilde a^f = \langle [a_1^f,\, a_2^f,\, a_3^f,\, a_4^f], ((T_{\tilde a})^f,\; 1-(1-I_{\tilde a})^f,\; 1-(1-F_{\tilde a})^f) \rangle$.
The following example illustrates some drawbacks of the operations described in Definition 4.

Example 1 Let $\tilde a = \langle [0.1, 0.1, 0.2, 0.3], (0, 0, 1) \rangle$ and $\tilde b = \langle [0.1, 0.1, 0.2, 0.3], (1, 0, 0) \rangle$ be two SVTNNs. According to Definition 4, the following result can be calculated: $\tilde a \oplus \tilde b = \langle [0.2, 0.2, 0.4, 0.6], (1, 0, 0) \rangle$. This result, however, is inaccurate, because the falsity-membership of $\tilde a$, as well as the correlations among the TFNs and the membership degrees of $\tilde a$ and $\tilde b$, are not considered. Therefore, these operations can be impractical.
Example 2 Let $\tilde a_1 = \langle [0.03, 0.05, 0.07, 0.09], (0.3, 0.5, 0.5) \rangle$ be a SVTNN and $f = 10$. Then the result $f\tilde a_1$ obtained using Definition 4 is
$$10\tilde a_1 = \langle [0.3, 0.5, 0.7, 0.9], (0.9718, 0.001, 0.001) \rangle.$$
In this result, the three membership degrees of the SVTNN are operated on repeatedly, significantly distorting the result and conflicting with common sense.

Therefore, some new operations for SVTNNs must be defined in order to overcome these operational anomalies. The new operations will be discussed in Sect. 3.
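The distortion in Examples 1 and 2 can be reproduced numerically. Below is a minimal sketch of the Definition 4 operations, with an SVTNN represented as a plain tuple `([a1, a2, a3, a4], (T, I, F))`; the helper names are illustrative.

```python
# Original SVTNN operations of Definition 4 (Ye [38]); names are illustrative.

def add(a, b):
    """a ⊕ b under Definition 4."""
    (x, (Ta, Ia, Fa)), (y, (Tb, Ib, Fb)) = a, b
    return ([xi + yi for xi, yi in zip(x, y)],
            (Ta + Tb - Ta * Tb, Ia * Ib, Fa * Fb))

def scale(f, a):
    """f · a under Definition 4."""
    x, (T, I, F) = a
    return ([f * xi for xi in x],
            (1 - (1 - T) ** f, I ** f, F ** f))

# Example 1: the falsity degree F = 1 of the first SVTNN vanishes entirely.
print(add(([0.1, 0.1, 0.2, 0.3], (0, 0, 1)),
          ([0.1, 0.1, 0.2, 0.3], (1, 0, 0)))[1])   # (1, 0, 0)

# Example 2: repeated operations push the degrees toward the extremes.
trap, (T, I, F) = scale(10, ([0.03, 0.05, 0.07, 0.09], (0.3, 0.5, 0.5)))
print([round(v, 2) for v in trap], round(T, 4), round(I, 4), round(F, 4))
# [0.3, 0.5, 0.7, 0.9] 0.9718 0.001 0.001
```

The printed degrees match the values reported in Examples 1 and 2, which is exactly the distortion the new operations of Sect. 3 are designed to avoid.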
In order to compare two different SVTNNs, some comparison methods have previously been proposed.

Definition 5 [53] Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ be a SVTNN. The score function and accuracy function of $\tilde a$ are defined, respectively, as
$$S(\tilde a) = \frac{1}{16}\,[a_1+a_2+a_3+a_4] \times (2 + T_{\tilde a} - I_{\tilde a} - F_{\tilde a}), \quad (1)$$
$$H(\tilde a) = \frac{1}{16}\,[a_1+a_2+a_3+a_4] \times (2 + T_{\tilde a} - I_{\tilde a} + F_{\tilde a}). \quad (2)$$

Let $\succ$ and $\sim$ be two binary relations on SVTNNs, where $\tilde a \succ \tilde b$ denotes that $\tilde a$ is preferred to $\tilde b$, and $\tilde a \sim \tilde b$ denotes that $\tilde a$ is indifferent to $\tilde b$.
Definition 6 [53] Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ and $\tilde b = \langle [b_1,b_2,b_3,b_4], (T_{\tilde b}, I_{\tilde b}, F_{\tilde b}) \rangle$ be two SVTNNs. Then,

1. If $S(\tilde a) < S(\tilde b)$, then $\tilde a \prec \tilde b$;
2. If $S(\tilde a) = S(\tilde b)$ and $H(\tilde a) < H(\tilde b)$, then $\tilde a \prec \tilde b$;
3. If $S(\tilde a) = S(\tilde b)$ and $H(\tilde a) = H(\tilde b)$, then $\tilde a \sim \tilde b$.
However, there are some limitations to Definition 5, as illustrated in Example 3.

Example 3 Let $\tilde a = \langle [0.2, 0.3, 0.5, 0.8], (0.1, 0.8, 0) \rangle$ and $\tilde b = \langle [0.1, 0.4, 0.5, 0.8], (0.2, 0.9, 0) \rangle$ be two SVTNNs. It is clear that $\tilde a \ne \tilde b$. The following results can be obtained according to Definition 5: $S(\tilde a) = S(\tilde b) = 0.146$, $H(\tilde a) = H(\tilde b) = 0.146$, and thus $\tilde a \sim \tilde b$. However, these results are counterintuitive.
Definition 7 [38] Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ be a SVTNN. The score function of $\tilde a$ is defined as follows:
$$S'(\tilde a) = \frac{1}{12}\,[a_1+a_2+a_3+a_4] \times (2 + T_{\tilde a} - I_{\tilde a} - F_{\tilde a}). \quad (3)$$

Definition 8 [38] Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ and $\tilde b = \langle [b_1,b_2,b_3,b_4], (T_{\tilde b}, I_{\tilde b}, F_{\tilde b}) \rangle$ be two SVTNNs. Then,

1. If $S'(\tilde a) > S'(\tilde b)$, then $\tilde a \succ \tilde b$;
2. If $S'(\tilde a) = S'(\tilde b)$, then $\tilde a \sim \tilde b$.
However, some drawbacks also exist in Definition 7, as the following example shows.

Example 4 Let $\tilde a = \langle [0.3, 0.4, 0.5, 0.8], (0.5, 0.3, 0.7) \rangle$ and $\tilde b = \langle [0.5, 0.7, 0.8, 1], (0.2, 0.8, 0.4) \rangle$ be two SVTNNs. It is clear that $\tilde a \ne \tilde b$. However, according to Definitions 7 and 8, $S'(\tilde a) = S'(\tilde b) = 0.25$, and thus $\tilde a \sim \tilde b$; these results do not conform to our intuition.
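Both tie-breaking failures can be checked directly. The following is a minimal sketch of Eqs. (1)–(3), with an SVTNN again represented as `([a1, a2, a3, a4], (T, I, F))`; the function names are illustrative.

```python
# Score/accuracy functions of Definitions 5 and 7; names are illustrative.

def score_S(a):      # Eq. (1), Deli and Şubaş [53]
    trap, (T, I, F) = a
    return sum(trap) * (2 + T - I - F) / 16

def accuracy_H(a):   # Eq. (2)
    trap, (T, I, F) = a
    return sum(trap) * (2 + T - I + F) / 16

def score_S1(a):     # Eq. (3), Ye [38]
    trap, (T, I, F) = a
    return sum(trap) * (2 + T - I - F) / 12

# Example 3: two distinct SVTNNs, yet S and H both tie at ~0.146.
a = ([0.2, 0.3, 0.5, 0.8], (0.1, 0.8, 0))
b = ([0.1, 0.4, 0.5, 0.8], (0.2, 0.9, 0))
print(round(score_S(a), 3), round(score_S(b), 3))        # 0.146 0.146
print(round(accuracy_H(a), 3), round(accuracy_H(b), 3))  # 0.146 0.146

# Example 4: S' also ties at 0.25 for two clearly different SVTNNs.
c = ([0.3, 0.4, 0.5, 0.8], (0.5, 0.3, 0.7))
d = ([0.5, 0.7, 0.8, 1], (0.2, 0.8, 0.4))
print(round(score_S1(c), 2), round(score_S1(d), 2))      # 0.25 0.25
```

Both ranking rules collapse on these pairs, which motivates the richer score/accuracy/certainty comparison introduced in Sect. 3.2.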
3 New operations and comparison method for SVTNNs

In order to overcome the limitations discussed in Sect. 2, this section defines several new operations. Moreover, a new comparison method is proposed on the basis of score, accuracy and certainty functions.

3.1 New operations for SVTNNs

Definition 9 Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ and $\tilde b = \langle [b_1,b_2,b_3,b_4], (T_{\tilde b}, I_{\tilde b}, F_{\tilde b}) \rangle$ be two SVTNNs, and $f \ge 0$; then, the new operations for SVTNNs are defined as follows:

1. $\tilde a \oplus \tilde b$:

(i) If $a_3 + a_4 \ne a_1 + a_2$ and $b_3 + b_4 \ne b_1 + b_2$, then
$$\tilde a \oplus \tilde b = \left\langle [a_1+b_1,\, a_2+b_2,\, a_3+b_3,\, a_4+b_4], \left( \frac{u(\tilde a)T_{\tilde a} + u(\tilde b)T_{\tilde b}}{u(\tilde a)+u(\tilde b)},\; 1-\frac{u(\tilde a)(1-I_{\tilde a}) + u(\tilde b)(1-I_{\tilde b})}{u(\tilde a)+u(\tilde b)},\; 1-\frac{u(\tilde a)(1-F_{\tilde a}) + u(\tilde b)(1-F_{\tilde b})}{u(\tilde a)+u(\tilde b)} \right) \right\rangle,$$
where $u(\tilde a) = \dfrac{a_3-a_2+a_4-a_1}{2}$ and $u(\tilde b) = \dfrac{b_3-b_2+b_4-b_1}{2}$;

(ii) If $a_1 = a_2 = a_3 = a_4 = a$ and $b_3 + b_4 \ne b_1 + b_2$, then
$$\tilde a \oplus \tilde b = \left\langle [a+b_1,\, a+b_2,\, a+b_3,\, a+b_4], \left( \frac{aT_{\tilde a} + u(\tilde b)T_{\tilde b}}{a+u(\tilde b)},\; 1-\frac{a(1-I_{\tilde a}) + u(\tilde b)(1-I_{\tilde b})}{a+u(\tilde b)},\; 1-\frac{a(1-F_{\tilde a}) + u(\tilde b)(1-F_{\tilde b})}{a+u(\tilde b)} \right) \right\rangle;$$

(iii) If $a_3 + a_4 \ne a_1 + a_2$ and $b_1 = b_2 = b_3 = b_4 = b$, then
$$\tilde a \oplus \tilde b = \left\langle [a_1+b,\, a_2+b,\, a_3+b,\, a_4+b], \left( \frac{u(\tilde a)T_{\tilde a} + bT_{\tilde b}}{u(\tilde a)+b},\; 1-\frac{u(\tilde a)(1-I_{\tilde a}) + b(1-I_{\tilde b})}{u(\tilde a)+b},\; 1-\frac{u(\tilde a)(1-F_{\tilde a}) + b(1-F_{\tilde b})}{u(\tilde a)+b} \right) \right\rangle;$$

(iv) If $a_1 = a_2 = a_3 = a_4 = a$ and $b_1 = b_2 = b_3 = b_4 = b$, then
$$\tilde a \oplus \tilde b = \left\langle a+b, \left( \frac{aT_{\tilde a} + bT_{\tilde b}}{a+b},\; 1-\frac{a(1-I_{\tilde a}) + b(1-I_{\tilde b})}{a+b},\; 1-\frac{a(1-F_{\tilde a}) + b(1-F_{\tilde b})}{a+b} \right) \right\rangle;$$

2. $\tilde a \otimes \tilde b = \langle [a_1b_1,\, a_2b_2,\, a_3b_3,\, a_4b_4], (T_{\tilde a}T_{\tilde b},\; I_{\tilde a}+I_{\tilde b}-I_{\tilde a}I_{\tilde b},\; F_{\tilde a}+F_{\tilde b}-F_{\tilde a}F_{\tilde b}) \rangle$;

3. $f\tilde a = \langle [fa_1,\, fa_2,\, fa_3,\, fa_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$, $f \ge 0$;

4. $\tilde a^f = \langle [a_1^f,\, a_2^f,\, a_3^f,\, a_4^f], ((T_{\tilde a})^f,\; 1-(1-I_{\tilde a})^f,\; 1-(1-F_{\tilde a})^f) \rangle$, $f \ge 0$;

5. $\mathrm{neg}(\tilde a) = \langle [1-a_4,\, 1-a_3,\, 1-a_2,\, 1-a_1], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$.
Example 5 Using Definition 9 and the data in Example 1, let $f = 2$. The calculated results are as follows:

1. $\tilde a \oplus \tilde b = \langle [0.2, 0.2, 0.35, 0.7], (0.538, 0, 0.538) \rangle$;
2. $\tilde a \otimes \tilde b = \langle [0.01, 0.01, 0.03, 0.12], (0, 0, 0) \rangle$;
3. $2\tilde a = \langle [0.2, 0.2, 0.4, 0.6], (0, 0, 1) \rangle$;
4. $\tilde a^2 = \langle [0.01, 0.01, 0.04, 0.09], (0, 0, 1) \rangle$.

Compared with the operations proposed by Ye [38] and Deli and Şubaş [53], these newly proposed SVTNN operations have the following advantages: (1) they can capture the correlations between the TFNs and the three membership degrees of SVTNNs, and (2) they can effectively avoid repeated operations and minimize information loss and distortion.
Using the corresponding operations for SVTNNs, the following theorem can easily be proved.

Theorem 1 Let $\tilde a$, $\tilde b$ and $\tilde c$ be three SVTNNs, and $f \ge 0$; then, the following equations are true:

1. $\tilde a \oplus \tilde b = \tilde b \oplus \tilde a$;
2. $(\tilde a \oplus \tilde b) \oplus \tilde c = \tilde a \oplus (\tilde b \oplus \tilde c)$;
3. $\tilde a \otimes \tilde b = \tilde b \otimes \tilde a$;
4. $(\tilde a \otimes \tilde b) \otimes \tilde c = \tilde a \otimes (\tilde b \otimes \tilde c)$;
5. $f\tilde a \oplus f\tilde b = f(\tilde b \oplus \tilde a)$;
6. $(\tilde a \otimes \tilde b)^f = \tilde a^f \otimes \tilde b^f$.

Theorem 1 can easily be proved using Definition 9, so the proof is omitted here.
3.2 New comparison method for SVTNNs

Motivated by the comparison methods proposed by Broumi and Smarandache [40] based on the expected, accuracy and certainty functions of SVNTLNs, this subsection defines a new comparison method and shows it to be reasonable and practical.

Definition 10 Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ be a SVTNN; then, the score function, accuracy function and certainty function of the SVTNN $\tilde a$ are defined, respectively, as follows:
$$E(\tilde a) = \frac{a_1+2a_2+2a_3+a_4}{6} \cdot \frac{2+T_{\tilde a}-I_{\tilde a}-F_{\tilde a}}{3}, \quad (4)$$
$$A(\tilde a) = \frac{a_1+2a_2+2a_3+a_4}{6} \cdot (T_{\tilde a}-F_{\tilde a}), \quad (5)$$
$$C(\tilde a) = \frac{a_1+2a_2+2a_3+a_4}{6} \cdot T_{\tilde a}. \quad (6)$$
Assume that $\tilde a$ and $\tilde b$ are two SVTNNs; then, they can be compared using the following rules.

Definition 11 Let $\tilde a = \langle [a_1,a_2,a_3,a_4], (T_{\tilde a}, I_{\tilde a}, F_{\tilde a}) \rangle$ and $\tilde b = \langle [b_1,b_2,b_3,b_4], (T_{\tilde b}, I_{\tilde b}, F_{\tilde b}) \rangle$ be two SVTNNs. The comparison method for $\tilde a$ and $\tilde b$ can be defined as follows:

1. If $E(\tilde a) > E(\tilde b)$, then $\tilde a \succ \tilde b$, meaning that $\tilde a$ is superior to $\tilde b$.
2. If $E(\tilde a) = E(\tilde b)$ and $A(\tilde a) > A(\tilde b)$, then $\tilde a \succ \tilde b$, meaning that $\tilde a$ is superior to $\tilde b$; otherwise, $\tilde a \prec \tilde b$, meaning that $\tilde a$ is inferior to $\tilde b$.
3. If $E(\tilde a) = E(\tilde b)$ and $A(\tilde a) = A(\tilde b)$, then $\tilde a \succ \tilde b$ if $C(\tilde a) > C(\tilde b)$, meaning that $\tilde a$ is superior to $\tilde b$; $\tilde a \prec \tilde b$ if $C(\tilde a) < C(\tilde b)$, meaning that $\tilde a$ is inferior to $\tilde b$; and $\tilde a \sim \tilde b$ if $C(\tilde a) = C(\tilde b)$, meaning that $\tilde a$ is indifferent to $\tilde b$.
Example 6 Utilizing the data in Example 3, we can determine that $E(\tilde a) = 0.188$ and $E(\tilde b) = 0.195$. Then $\tilde b \succ \tilde a$; in other words, $\tilde b$ is superior to $\tilde a$, which is consistent with our intuition.
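The resolution of the Example 3 tie can be verified with a minimal sketch of Eqs. (4)–(6); the representation and function names are illustrative.

```python
# Score, accuracy and certainty functions of Definition 10; an SVTNN is
# ([a1, a2, a3, a4], (T, I, F)). Names here are illustrative.

def E(a):  # score function, Eq. (4)
    (a1, a2, a3, a4), (T, I, F) = a
    return (a1 + 2*a2 + 2*a3 + a4) / 6 * (2 + T - I - F) / 3

def A(a):  # accuracy function, Eq. (5)
    (a1, a2, a3, a4), (T, I, F) = a
    return (a1 + 2*a2 + 2*a3 + a4) / 6 * (T - F)

def C(a):  # certainty function, Eq. (6)
    (a1, a2, a3, a4), (T, I, F) = a
    return (a1 + 2*a2 + 2*a3 + a4) / 6 * T

# Example 3's SVTNNs tied under Definition 5, but E separates them.
a = ([0.2, 0.3, 0.5, 0.8], (0.1, 0.8, 0))
b = ([0.1, 0.4, 0.5, 0.8], (0.2, 0.9, 0))
print(round(E(a), 3), round(E(b), 3))  # 0.188 0.195, so b is superior
```

Because $E$ weights the core points $a_2, a_3$ twice, it distinguishes trapezoids that the plain sum in Eq. (1) cannot.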
4 Single-valued trapezoidal neutrosophic
aggregation operators
This section reviews the traditional BM operator and the
normalized weighted Bonferroni mean (NWBM) operator,
as well as some of their prominent characteristics. Then,
the SVTNNWBM operator is proposed in an environment
featuring SVTNNs.
4.1 BM and NWBM operators
The BM operator [44] is a traditional aggregation operator that can capture the interrelationships of the individual input arguments.
Definition 12 [54] Let $p, q \ge 0$, and let $a_i\,(i=1,2,\ldots,n)$ be a collection of non-negative numbers. Then the aggregation function
$$\mathrm{BM}^{p,q}(a_1, a_2, \ldots, a_n) = \left( \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} a_i^p a_j^q \right)^{\frac{1}{p+q}} \quad (7)$$
is called the BM operator.
The BM operator has the following obvious properties:

1. $\mathrm{BM}^{p,q}(0, 0, \ldots, 0) = 0$.
2. (Commutativity) Let $a_i$ and $a_i'\,(i=1,2,\ldots,n)$ be two sets of non-negative numbers. If $(a_1', a_2', \ldots, a_n')$ is any permutation of $(a_1, a_2, \ldots, a_n)$, then $\mathrm{BM}^{p,q}(a_1', a_2', \ldots, a_n') = \mathrm{BM}^{p,q}(a_1, a_2, \ldots, a_n)$.
3. (Idempotency) Let $a_i\,(i=1,2,\ldots,n)$ be a set of non-negative numbers. If $a_i = a$ for all $i$, then $\mathrm{BM}^{p,q}(a_1, a_2, \ldots, a_n) = a$.
4. (Monotonicity) Let $a_i$ and $a_i'\,(i=1,2,\ldots,n)$ be two sets of non-negative numbers. If $a_i \ge a_i'$ for all $i$, then $\mathrm{BM}^{p,q}(a_1, a_2, \ldots, a_n) \ge \mathrm{BM}^{p,q}(a_1', a_2', \ldots, a_n')$.
5. (Boundedness) Let $a_i\,(i=1,2,\ldots,n)$ be a set of non-negative numbers, with $a^- = \min(a_1, a_2, \ldots, a_n)$ and $a^+ = \max(a_1, a_2, \ldots, a_n)$; then $a^- \le \mathrm{BM}^{p,q}(a_1, a_2, \ldots, a_n) \le a^+$.
Some special cases of the BM operator with respect to the parameters $p$ and $q$ are as follows:

1. If $p = 1$ and $q = 1$, then the BM operator is reduced to the following:
$$\mathrm{BM}^{1,1}(a_1, a_2, \ldots, a_n) = \left( \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \ne j}}^{n} a_i a_j \right)^{\frac{1}{2}}. \quad (8)$$

2. If $q = 0$, then the BM operator is reduced to the generalized mean operator,
$$\mathrm{BM}^{p,0}(a_1, a_2, \ldots, a_n) = \left( \frac{1}{n} \sum_{i=1}^{n} a_i^p \right)^{1/p}. \quad (9)$$

3. If $p = 1$ and $q = 0$, then the BM operator is reduced to the arithmetic mean operator,
$$\mathrm{BM}^{1,0}(a_1, a_2, \ldots, a_n) = \frac{1}{n} \sum_{i=1}^{n} a_i. \quad (10)$$

4. If $p \to 0$ and $q = 0$, then the BM operator is reduced to the geometric mean operator,
$$\lim_{p \to 0} \mathrm{BM}^{p,0}(a_1, a_2, \ldots, a_n) = \left( \prod_{i=1}^{n} a_i \right)^{1/n}. \quad (11)$$
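These special cases are easy to confirm numerically. The sketch below implements the crisp BM operator of Eq. (7) directly; the function name and test values are illustrative.

```python
# Crisp Bonferroni mean (Eq. 7); iterates over all ordered pairs i != j.
from itertools import permutations

def bm(values, p, q):
    n = len(values)
    s = sum(values[i]**p * values[j]**q
            for i, j in permutations(range(n), 2))  # all i != j pairs
    return (s / (n * (n - 1))) ** (1 / (p + q))

xs = [0.2, 0.5, 0.9]
# p = 1, q = 0 recovers the arithmetic mean (Eq. 10):
print(round(bm(xs, 1, 0), 6))   # 0.533333
# p = q = 1 couples every pair of arguments (Eq. 8):
print(round(bm(xs, 1, 1), 4))   # 0.4933
```

The second value is pulled below the arithmetic mean because the pairwise products penalize sets with one small argument, which is exactly the interdependence-capturing behavior discussed above.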
Definition 13 [55] Let $p, q \ge 0$, and let $a_i\,(i=1,2,\ldots,n)$ be a collection of non-negative numbers with the weight vector $w = (w_1, w_2, \ldots, w_n)$ such that $w_i \in [0,1]$ and $\sum_{i=1}^{n} w_i = 1$. If
$$\mathrm{NWBM}^{p,q}_w(a_1, a_2, \ldots, a_n) = \left( \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1 - w_i}\, a_i^p a_j^q \right)^{\frac{1}{p+q}}, \quad (12)$$
then $\mathrm{NWBM}^{p,q}_w$ is called the NWBM operator.
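The normalized weights $\frac{w_i w_j}{1-w_i}$ sum to 1 over all ordered pairs, and with equal weights they reduce to the $\frac{1}{n(n-1)}$ coefficient of Eq. (7). A minimal crisp sketch (names and test values are illustrative):

```python
# Crisp NWBM operator (Eq. 12); with equal weights it reduces to the BM operator.
from itertools import permutations

def nwbm(values, weights, p, q):
    n = len(values)
    s = sum(weights[i] * weights[j] / (1 - weights[i])
            * values[i]**p * values[j]**q
            for i, j in permutations(range(n), 2))
    return s ** (1 / (p + q))

xs = [0.2, 0.5, 0.9]
# Equal weights: w_i w_j / (1 - w_i) = 1 / (n(n-1)), so this equals BM^{1,1}(xs).
print(round(nwbm(xs, [1/3, 1/3, 1/3], 1, 1), 4))  # 0.4933
# Unequal weights shift the result toward the heavily weighted arguments.
print(nwbm(xs, [0.8, 0.1, 0.1], 1, 1) < nwbm(xs, [0.1, 0.1, 0.8], 1, 1))  # True
```

This is the reducibility property mentioned later for the SVTNNWBM operator, checked here on crisp numbers.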
4.2 Single-valued trapezoidal neutrosophic
normalized weighted BM operator
This subsection extends the traditional BM and NWBM
operators to accommodate situations in which the input
arguments are SVTNNs. Furthermore, a SVTNNBM
operator and a SVTNNWBM operator are developed, and
some of their desirable properties are analyzed.
Definition 14 Let $p, q \ge 0$, and let $\tilde a_i = \langle [a_{i1}, a_{i2}, a_{i3}, a_{i4}], (T_{\tilde a_i}, I_{\tilde a_i}, F_{\tilde a_i}) \rangle\,(i=1,2,\ldots,n)$ be a set of SVTNNs. If
$$\mathrm{SVTNNBM}^{p,q}(\tilde a_1, \tilde a_2, \ldots, \tilde a_n) = \left( \frac{1}{n(n-1)} \bigoplus_{\substack{i,j=1 \\ i \ne j}}^{n} \tilde a_i^p \otimes \tilde a_j^q \right)^{\frac{1}{p+q}}, \quad (13)$$
then $\mathrm{SVTNNBM}^{p,q}$ is called the SVTNNBM operator.
The following definition formally introduces the SVTNNWBM operator.

Definition 15 Let $p, q \ge 0$, and let $\tilde a_i = \langle [a_{i1}, a_{i2}, a_{i3}, a_{i4}], (T_{\tilde a_i}, I_{\tilde a_i}, F_{\tilde a_i}) \rangle\,(i=1,2,\ldots,n)$ be a set of SVTNNs. If
$$\mathrm{SVTNNWBM}^{p,q}_w(\tilde a_1, \tilde a_2, \ldots, \tilde a_n) = \left( \bigoplus_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1 - w_i}\, \tilde a_i^p \otimes \tilde a_j^q \right)^{\frac{1}{p+q}}, \quad (14)$$
where $w = (w_1, w_2, \ldots, w_n)$ is the weight vector of $\tilde a_i$, $w_i \in [0,1]$ and $\sum_{i=1}^{n} w_i = 1$, then $\mathrm{SVTNNWBM}^{p,q}_w$ is called the SVTNNWBM operator.
Theorem 2 Let $p, q \ge 0$, and let $\tilde a_i = \langle [a_{i1}, a_{i2}, a_{i3}, a_{i4}], (T_{\tilde a_i}, I_{\tilde a_i}, F_{\tilde a_i}) \rangle\,(i=1,2,\ldots,n)$ be a set of SVTNNs. Then the aggregated result obtained using Eq. (14) is also a SVTNN, and

$$\mathrm{SVTNNWBM}^{p,q}_w(\tilde a_1, \tilde a_2, \ldots, \tilde a_n) = \left( \bigoplus_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1-w_i}\, \tilde a_i^p \otimes \tilde a_j^q \right)^{\frac{1}{p+q}}$$

$$= \left\langle \left[ \left( \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i1}^p a_{j1}^q \right)^{\frac{1}{p+q}}\!, \left( \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i2}^p a_{j2}^q \right)^{\frac{1}{p+q}}\!, \left( \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i3}^p a_{j3}^q \right)^{\frac{1}{p+q}}\!, \left( \sum_{\substack{i,j=1 \\ i \ne j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i4}^p a_{j4}^q \right)^{\frac{1}{p+q}} \right]\!, \right.$$

$$\left. \left( \left( \frac{\sum_{i \ne j} \Theta_{ij}\, (T_{\tilde a_i})^p (T_{\tilde a_j})^q}{\sum_{i \ne j} \Theta_{ij}} \right)^{\frac{1}{p+q}},\; 1 - \left( \frac{\sum_{i \ne j} \Theta_{ij}\, (1-I_{\tilde a_i})^p (1-I_{\tilde a_j})^q}{\sum_{i \ne j} \Theta_{ij}} \right)^{\frac{1}{p+q}},\; 1 - \left( \frac{\sum_{i \ne j} \Theta_{ij}\, (1-F_{\tilde a_i})^p (1-F_{\tilde a_j})^q}{\sum_{i \ne j} \Theta_{ij}} \right)^{\frac{1}{p+q}} \right) \right\rangle, \quad (15)$$

where, for brevity, $\Theta_{ij} = \dfrac{1}{2}\,\dfrac{w_i w_j}{1-w_i} \left[ a_{i3}^p a_{j3}^q - a_{i2}^p a_{j2}^q + a_{i4}^p a_{j4}^q - a_{i1}^p a_{j1}^q \right]$.

''Appendix 1'' details the proof of Theorem 2.
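As a consistency check on the closed form in Eq. (15), the sketch below implements it directly and verifies idempotency: aggregating $n$ identical SVTNNs returns that SVTNN. The tuple representation and helper names are illustrative assumptions.

```python
# Sketch of Eq. (15), the closed form of the SVTNNWBM operator. An SVTNN is
# ([a1, a2, a3, a4], (T, I, F)); w, p, q follow Definition 15.
from itertools import permutations

def svtnnwbm(svtnns, w, p, q):
    n = len(svtnns)
    pairs = list(permutations(range(n), 2))          # all (i, j) with i != j
    coef = {(i, j): w[i] * w[j] / (1 - w[i]) for i, j in pairs}

    # Trapezoidal part: component-wise weighted Bonferroni sums.
    trap = tuple((sum(coef[i, j] * svtnns[i][0][t]**p * svtnns[j][0][t]**q
                      for i, j in pairs)) ** (1 / (p + q)) for t in range(4))

    # Theta_ij: the half-width weights coupling trapezoids and degrees.
    def theta(i, j):
        ai, aj = svtnns[i][0], svtnns[j][0]
        return 0.5 * coef[i, j] * (ai[2]**p * aj[2]**q - ai[1]**p * aj[1]**q
                                   + ai[3]**p * aj[3]**q - ai[0]**p * aj[0]**q)

    tot = sum(theta(i, j) for i, j in pairs)
    T = (sum(theta(i, j) * svtnns[i][1][0]**p * svtnns[j][1][0]**q
             for i, j in pairs) / tot) ** (1 / (p + q))
    I = 1 - (sum(theta(i, j) * (1 - svtnns[i][1][1])**p * (1 - svtnns[j][1][1])**q
                 for i, j in pairs) / tot) ** (1 / (p + q))
    F = 1 - (sum(theta(i, j) * (1 - svtnns[i][1][2])**p * (1 - svtnns[j][1][2])**q
                 for i, j in pairs) / tot) ** (1 / (p + q))
    return trap, (T, I, F)

# Idempotency: aggregating identical SVTNNs returns that SVTNN (up to rounding).
a = ([0.2, 0.3, 0.5, 0.8], (0.6, 0.2, 0.3))
out = svtnnwbm([a, a, a], [0.5, 0.3, 0.2], 1, 2)
print([round(v, 6) for v in out[0]], [round(v, 6) for v in out[1]])
# [0.2, 0.3, 0.5, 0.8] [0.6, 0.2, 0.3]
```

The check works because the pair weights $\frac{w_i w_j}{1-w_i}$ sum to 1, so each trapezoid component satisfies $(a_t^{p+q})^{1/(p+q)} = a_t$, and the degree formulas collapse analogously.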
The traditional NWBM operator has the properties of reducibility, commutativity, idempotency, monotonicity and boundedness. It is easy to see that the SVTNNWBM operator also satisfies these properties. The following theorem proves only the monotonicity property; the others can be proved in a similar way and are omitted.

Theorem 3 (Monotonicity) Let $\tilde a_i = \langle [a_{i1}, a_{i2}, a_{i3}, a_{i4}], (T_{\tilde a_i}, I_{\tilde a_i}, F_{\tilde a_i}) \rangle$ and $\tilde b_i = \langle [b_{i1}, b_{i2}, b_{i3}, b_{i4}], (T_{\tilde b_i}, I_{\tilde b_i}, F_{\tilde b_i}) \rangle\,(i=1,2,\ldots,n)$ be two sets of SVTNNs. Suppose $a_{i1} \ge b_{i1}$, $a_{i2} \ge b_{i2}$, $a_{i3} \ge b_{i3}$, $a_{i4} \ge b_{i4}$, $T_{\tilde a_i} \ge T_{\tilde b_i}$, $I_{\tilde a_i} \le I_{\tilde b_i}$ and $F_{\tilde a_i} \le F_{\tilde b_i}$ for all $i$; then $\mathrm{SVTNNWBM}^{p,q}_w(\tilde a_1, \tilde a_2, \ldots, \tilde a_n) \ge \mathrm{SVTNNWBM}^{p,q}_w(\tilde b_1, \tilde b_2, \ldots, \tilde b_n)$.

''Appendix 2'' details the proof of Theorem 3.
5 MCGDM method based on the SVTNNWBM
operator
This section develops an approach based on the
SVTNNWBM operator and the new comparison method
in order to solve MCGDM problems with SVTNN
information.
For a group decision-making problem with a finite set of $m$ alternatives, let $D = \{D_1, D_2, \ldots, D_s\}$ be the set of DMs, $A = \{A_1, A_2, \ldots, A_m\}$ be the set of alternatives, and $C = \{C_1, C_2, \ldots, C_n\}$ be the set of criteria. Assume that the subjective weight vector of the criteria provided by each DM $D_k\,(k=1,2,\ldots,s)$ is $\varpi^k = (\varpi_1^k, \varpi_2^k, \ldots, \varpi_n^k)$, such that $\varpi_j^k \in [0,1]$ and $\sum_{j=1}^{n} \varpi_j^k = 1$. Similarly, the weight vector of the DMs is specified as $w = (w_1, w_2, \ldots, w_s)$, where $w_k \ge 0$ and $\sum_{k=1}^{s} w_k = 1$. The evaluation values provided by the experts are converted into SVTNNs through two questionnaires, and $\tilde a_{ij}^k = \langle [a_{ij1}^k, a_{ij2}^k, a_{ij3}^k, a_{ij4}^k], (T_{\tilde a_{ij}^k}, I_{\tilde a_{ij}^k}, F_{\tilde a_{ij}^k}) \rangle\,(k=1,2,\ldots,s;\; j=1,2,\ldots,n;\; i=1,2,\ldots,m)$ stands for the evaluation value provided by DM $D_k$ for alternative $A_i\,(i=1,2,\ldots,m)$ under criterion $C_j\,(j=1,2,\ldots,n)$.

To elucidate the proposed methodology, this section is divided into two parts: determining each DM's weight using an entropy-weighted method and describing the algorithm of the proposed approach.
5.1 Determining each DM's weight using the entropy-weighted method

Shannon [56] introduced the term "entropy" to measure the degree of uncertainty in information. It is a useful tool in decision-making, and it has found many applications [57–60]. Building on existing entropy-weighted methods, this subsection proposes a new method to obtain the objective weights of DMs.

We can first identify the decision matrix $\tilde a^k$ and the criteria weight vector $\varpi^k$, which are provided by DM $D_k\,(k=1,2,\ldots,s)$:
$$\tilde a^k = \begin{bmatrix} \tilde a_{11}^k & \tilde a_{12}^k & \cdots & \tilde a_{1n}^k \\ \tilde a_{21}^k & \tilde a_{22}^k & \cdots & \tilde a_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ \tilde a_{m1}^k & \tilde a_{m2}^k & \cdots & \tilde a_{mn}^k \end{bmatrix}, \quad (16)$$
$$\varpi^k = (\varpi_1^k, \varpi_2^k, \ldots, \varpi_n^k), \quad (k=1,2,\ldots,s), \quad (17)$$
where the elements of the decision matrix $\tilde a^k$ are characterized by SVTNNs.
The main procedures are as follows:
1. Two types of criteria exist in decision matrices: benefit and cost criteria. In order to make the criterion type uniform, the cost criteria must be transformed into benefit criteria using the negation operator defined in Definition 9. The normalized evaluation information matrix is
$$\bar a^k = \begin{bmatrix} \bar a_{11}^k & \bar a_{12}^k & \cdots & \bar a_{1n}^k \\ \bar a_{21}^k & \bar a_{22}^k & \cdots & \bar a_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ \bar a_{m1}^k & \bar a_{m2}^k & \cdots & \bar a_{mn}^k \end{bmatrix}. \quad (18)$$
2. Using Definition 9, the $j$th criterion weight $\varpi_j^k\,(k=1,2,\ldots,s)$ is applied to the $j$th criterion value $\bar a_{ij}^k$ of the decision matrix $\bar a^k$ in Eq. (18). The weighted decision matrix is then identified as follows:
$$U^k = \left( \tilde u_{ij}^k \right)_{m \times n} = \left( \varpi_j^k\, \bar a_{ij}^k \right)_{m \times n} = \begin{bmatrix} \tilde u_{11}^k & \tilde u_{12}^k & \cdots & \tilde u_{1n}^k \\ \tilde u_{21}^k & \tilde u_{22}^k & \cdots & \tilde u_{2n}^k \\ \vdots & \vdots & \ddots & \vdots \\ \tilde u_{m1}^k & \tilde u_{m2}^k & \cdots & \tilde u_{mn}^k \end{bmatrix}, \quad (k=1,2,\ldots,s), \quad (19)$$
where the elements of the weighted decision matrix are denoted as $\tilde u_{ij}^k = \langle [u_{ij1}^k, u_{ij2}^k, u_{ij3}^k, u_{ij4}^k], (T_{\tilde u_{ij}^k}, I_{\tilde u_{ij}^k}, F_{\tilde u_{ij}^k}) \rangle$.
3. Let $I_k$ be the entropy of the $k$th DM; then
$$I_k = -\frac{1}{\ln m} \sum_{i=1}^{m} \frac{e_i^k}{\sum_{i=1}^{m} e_i^k} \ln \frac{e_i^k}{\sum_{i=1}^{m} e_i^k}, \quad (20)$$
where $e_i^k$ is calculated according to Definition 10 as
$$e_i^k = \sum_{j=1}^{n} E\big(\tilde u_{ij}^k\big), \quad (21)$$
$$E\big(\tilde u_{ij}^k\big) = \frac{u_{ij1}^k + 2u_{ij2}^k + 2u_{ij3}^k + u_{ij4}^k}{6} \cdot \frac{2 + T_{\tilde u_{ij}^k} - I_{\tilde u_{ij}^k} - F_{\tilde u_{ij}^k}}{3}, \quad (22)$$
where $E(\tilde u_{ij}^k)$ denotes the score of the assessment information $\tilde u_{ij}^k\,(i=1,2,\ldots,m)$ with respect to $C_j\,(j=1,2,\ldots,n)$ for DM $D_k\,(k=1,2,\ldots,s)$. If $e_i^k = 0$, it is assumed that $\frac{e_i^k}{\sum_{i=1}^{m} e_i^k} \ln \frac{e_i^k}{\sum_{i=1}^{m} e_i^k} = 0\;(i=1,2,\ldots,m;\; j=1,2,\ldots,n;\; k=1,2,\ldots,s)$.
4. We can then elicit the objective expert weights as follows:
$$w_k = \frac{1 - I_k}{\sum_{k=1}^{s} (1 - I_k)}, \quad (23)$$
in which $w_k \ge 0$ and $\sum_{k=1}^{s} w_k = 1$.
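The entropy-weighting step can be sketched as follows, using toy score sums $e_i^k$ (in the full method each $e_i^k$ comes from Eq. (21) over the weighted SVTNN matrix); all numbers and names here are illustrative.

```python
# Objective DM weights from Eqs. (20)-(23), sketched on toy score sums e_i^k.
import math

def entropy(e):
    """Eq. (20): Shannon entropy of one DM's normalized alternative scores.
    Terms with e_i^k = 0 are taken as 0, per the convention above."""
    m, total = len(e), sum(e)
    return -sum((v / total) * math.log(v / total) for v in e if v > 0) / math.log(m)

def expert_weights(scores_per_dm):
    """Eq. (23): weights proportional to 1 - I_k."""
    ones = [1 - entropy(e) for e in scores_per_dm]
    return [v / sum(ones) for v in ones]

# Toy example: 3 DMs, 4 alternatives each.
scores = [[0.52, 0.48, 0.50, 0.49],   # D1: nearly uniform -> high entropy
          [0.80, 0.30, 0.55, 0.20],   # D2: discriminating -> low entropy
          [0.60, 0.40, 0.50, 0.45]]
w = expert_weights(scores)
print([round(v, 3) for v in w], round(sum(w), 6))  # weights sum to 1.0
```

A DM whose scores discriminate strongly between alternatives has low entropy $I_k$ and therefore receives a larger objective weight, which matches the intent of Eq. (23).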
5.2 The algorithm of the proposed approach
The procedures of the MCGDM approach involve the
following steps:
Step 1: Establish the decision matrices and the weight vectors of the criteria.
According to Eqs. (16) and (17), we can obtain the decision matrix $\tilde a^k$ and the criteria weight vector $\varpi^k\,(k=1,2,\ldots,s)$ provided by each DM $D_k\,(k=1,2,\ldots,s)$.
Step 2: Normalize the decision matrices.
Decision matrices include benefit criteria and
cost criteria. Using Definition 9, the cost criteria
can be transformed into benefit criteria.
Step 3: Obtain the weighted decision matrices.
According to Eq. (19), the weighted decision matrices can be constructed by multiplying the subjective criteria weight vectors $\varpi^k = (\varpi_1^k, \varpi_2^k, \ldots, \varpi_n^k)\,(k=1,2,\ldots,s)$ provided by the DMs into the decision matrices.
Step 4: Obtain expert weights through the entropy-
weighted method.
We can identify the objective expert weights
using Eqs. (20) through (23).
Step 5: Calculate the comprehensive criteria weights.
Utilizing the weighted arithmetic mean operator, we can identify the comprehensive criteria weights $\tilde\varpi = (\tilde\varpi_1, \tilde\varpi_2, \ldots, \tilde\varpi_n)$ with $\tilde\varpi_j \in [0,1]\,(j=1,2,\ldots,n)$ and $\sum_{j=1}^{n} \tilde\varpi_j = 1$:
$$\tilde\varpi_j = \sum_{k=1}^{s} w_k \varpi_j^k, \quad (k=1,2,\ldots,s). \quad (24)$$
Step 6: Obtain the aggregated decision matrix.
According to the new operations described in Definition 9, and using both the objective expert weights obtained in Step 4 and the weighted arithmetic mean operator, we can calculate the aggregated decision matrix as follows:
$$M = \left( \bar a_{ij} \right)_{m \times n} = \left( \sum_{k=1}^{s} w_k \bar a_{ij}^k \right)_{m \times n} = \begin{bmatrix} \bar a_{11} & \bar a_{12} & \cdots & \bar a_{1n} \\ \bar a_{21} & \bar a_{22} & \cdots & \bar a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \bar a_{m1} & \bar a_{m2} & \cdots & \bar a_{mn} \end{bmatrix}. \quad (25)$$
Step 7: Obtain the overall value of $A_i$.
Utilizing Eq. (15), the overall value of alternative $A_i$ can be aggregated.
Step 8: Calculate the score values.
Utilizing Eqs. (4) through (6), the score values
can be obtained for comparison.
Step 9: Rank all alternatives.
Comparing the values obtained in Step 8 yields the final ranking results, and the optimal alternative(s) can be selected.
6 A numerical example
This section uses a numerical example adapted from Yue
[61] to demonstrate the applicability of the proposed method.
A year-end report is required to assess various constituencies' satisfaction with respect to the institutional leaders at Chinese universities. The following four leaders of a university in Guangdong, China, must be assessed: (1) $A_1$ represents the president; (2) $A_2$ is the first vice president; (3) $A_3$ is the second vice president; (4) $A_4$ is the third vice president. Teams are assembled from several constituencies to serve as DMs (reviewers), including teachers ($D_1$), researchers ($D_2$) and undergraduate students ($D_3$). These DMs use the three criteria $C_1$ (working experience), $C_2$ (academic performance) and $C_3$ (personality) to evaluate the four alternatives.
Reviewers can evaluate the four alternatives with respect to each criterion according to a hundred-point scale, in which 100 is the maximum grade and 0 is the minimum grade. As the evaluation team includes a large number of people, we must first obtain an interval number representing common opinion. Second, we should take into consideration the minimum and maximum scores from the reviewers, such that a TFN can be obtained. Furthermore, reviewers can evaluate the obtained TFN by voting in favor, voting against or abstaining on each evaluation index. The final result is a SVTNN. For example, the assessment value of alternative A_1 is denoted as $\tilde{a}_{11}^1 = \langle [0.6, 0.7, 0.8, 0.9], (0.36, 0.3, 0.27) \rangle$. This result is obtained by DM D_1 with respect to criterion C_1 using the two questionnaires. Initially, we can identify an interval number representing the common opinion, denoted as [0.7, 0.8]. Meanwhile, a few people offered remarkably low or high assessment values, denoted as 0.6 and 0.9, respectively; obviously, these values should also be taken into consideration. This process yields the TFN [0.6, 0.7, 0.8, 0.9]. For a second time, the constituents are asked to evaluate this TFN by voting in favor, voting against or abstaining on each evaluation index, which refers to the three membership degrees of the SVTNNs. This produces the final assessment information $\tilde{a}_{11}^1 = \langle [0.6, 0.7, 0.8, 0.9], (0.36, 0.3, 0.27) \rangle$.
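The two-questionnaire elicitation above can be mirrored in a small data structure. The sketch below is illustrative only: the class name and fields are our own, and it assumes the favor/abstain/against proportions from the voting round map to the degrees (T, I, F) in that order, as in the example value.

```python
from dataclasses import dataclass

@dataclass
class SVTNN:
    """Single-valued trapezoidal neutrosophic number: a TFN
    [a1, a2, a3, a4] plus the degrees (T, I, F)."""
    tfn: tuple   # (a1, a2, a3, a4)
    T: float     # truth-membership degree
    I: float     # indeterminacy-membership degree
    F: float     # falsity-membership degree

def from_questionnaires(common, extremes, favor, abstain, against):
    """Build an SVTNN as in the example: the common-opinion interval
    [b, c] plus the extreme scores (a, d) give the TFN [a, b, c, d];
    the voting round on that TFN gives (T, I, F) (assumed mapping)."""
    b, c = common
    a, d = extremes
    return SVTNN((a, b, c, d), favor, abstain, against)

# Reproduces the paper's example value for DM D_1, criterion C_1.
x = from_questionnaires(common=(0.7, 0.8), extremes=(0.6, 0.9),
                        favor=0.36, abstain=0.3, against=0.27)
print(x.tfn, (x.T, x.I, x.F))  # (0.6, 0.7, 0.8, 0.9) (0.36, 0.3, 0.27)
```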
The four possible alternatives are evaluated according to
the three criteria listed above in the form of SVTNNs,
which are transformed from evaluation values, as shown in
the following three decision matrices:
The subjective criteria weights offered by the DMs are $\omega^1 = (0.4, 0.2, 0.4)$, $\omega^2 = (0.3, 0.3, 0.4)$ and $\omega^3 = (0.4, 0.4, 0.2)$, respectively.
6.1 Evaluation steps for the new MCGDM method
based on the SVTNNWBM operator
The following steps describe the procedure for assessing
DMs’ satisfaction with respect to their leaders and
obtaining a final ranking order for the four alternatives.
Step 1: Establish the decision matrices and weight vector
of the criteria.
The decision matrices and weight vector of
criteria are listed in the previous subsection.
Step 2: Normalize the decision matrices.
Since all the criteria are benefit criteria, there is
no need for normalization.
The three decision matrices (rows A_1–A_4, columns C_1–C_3) are:

$\tilde{a}^1$:
A_1: ⟨[0.6, 0.7, 0.8, 0.9], (0.36, 0.3, 0.27)⟩  ⟨[0.72, 0.5, 0.8, 0.86], (0.53, 0.3, 0.28)⟩  ⟨[0.85, 0.85, 0.9, 0.92], (0.57, 0.35, 0.22)⟩
A_2: ⟨[0.77, 0.77, 0.8, 0.81], (0.72, 0.3, 0.28)⟩  ⟨[0.69, 0.7, 0.8, 0.93], (0.91, 0.5, 0.07)⟩  ⟨[0.83, 0.85, 0.85, 0.88], (0.80, 0.2, 0.10)⟩
A_3: ⟨[0.8, 0.85, 0.9, 0.96], (0.63, 0.5, 0.19)⟩  ⟨[0.59, 0.7, 0.8, 0.87], (0.88, 0.3, 0.12)⟩  ⟨[0.68, 0.7, 0.8, 0.85], (0.86, 0.4, 0.14)⟩
A_4: ⟨[0.6, 0.6, 0.8, 0.9], (0.65, 0.3, 0.33)⟩  ⟨[0.58, 0.6, 0.8, 0.9], (0.72, 0.3, 0.23)⟩  ⟨[0.6, 0.7, 0.8, 0.9], (0.77, 0.45, 0.23)⟩

$\tilde{a}^2$:
A_1: ⟨[0.77, 0.8, 0.8, 0.83], (0.53, 0.3, 0.26)⟩  ⟨[0.68, 0.7, 0.8, 0.86], (0.54, 0.4, 0.35)⟩  ⟨[0.82, 0.85, 0.9, 0.9], (0.68, 0.35, 0.32)⟩
A_2: ⟨[0.93, 0.94, 0.95, 0.98], (0.85, 0.3, 0.15)⟩  ⟨[0.76, 0.8, 0.8, 0.86], (0.86, 0.5, 0.13)⟩  ⟨[0.65, 0.7, 0.8, 0.87], (0.69, 0.2, 0.3)⟩
A_3: ⟨[0.79, 0.8, 0.84, 0.85], (0.83, 0.4, 0.16)⟩  ⟨[0.72, 0.8, 0.9, 0.92], (0.76, 0.5, 0.24)⟩  ⟨[0.81, 0.85, 0.9, 0.97], (0.73, 0.4, 0.13)⟩
A_4: ⟨[0.7, 0.78, 0.8, 0.9], (0.9, 0.3, 0.07)⟩  ⟨[0.58, 0.6, 0.8, 0.9], (0.91, 0.2, 0.03)⟩  ⟨[0.7, 0.7, 0.8, 0.9], (0.66, 0.4, 0.12)⟩

$\tilde{a}^3$:
A_1: ⟨[0.85, 0.85, 0.9, 0.96], (0.81, 0.3, 0.18)⟩  ⟨[0.76, 0.8, 0.8, 0.86], (0.76, 0.5, 0.24)⟩  ⟨[0.8, 0.85, 0.9, 0.97], (0.74, 0.35, 0.19)⟩
A_2: ⟨[0.79, 0.8, 0.8, 0.87], (0.75, 0.3, 0.16)⟩  ⟨[0.75, 0.75, 0.8, 0.89], (0.84, 0.5, 0.16)⟩  ⟨[0.81, 0.85, 0.9, 0.93], (0.97, 0.2, 0.03)⟩
A_3: ⟨[0.62, 0.7, 0.8, 0.82], (0.89, 0.1, 0.11)⟩  ⟨[0.84, 0.85, 0.85, 0.89], (0.78, 0.5, 0.21)⟩  ⟨[0.78, 0.8, 0.8, 0.82], (0.74, 0.4, 0.11)⟩
A_4: ⟨[0.6, 0.6, 0.8, 0.9], (0.66, 0.3, 0.18)⟩  ⟨[0.64, 0.7, 0.8, 0.9], (0.63, 0.3, 0.27)⟩  ⟨[0.6, 0.65, 0.8, 0.9], (0.71, 0.4, 0.29)⟩
Step 3: Obtain the weighted decision matrices.
Since every DM offers different subjective
weights for the criteria, we must multiply the
subjective weight vector of the DMs into the
initial decision matrices. Utilizing Definition 9,
we can acquire the following results:
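The entry-level operation behind Step 3 can be sketched in a few lines. Definition 9 is not reproduced in this excerpt; the reading below, in which the four TFN components are scaled by the criterion weight while the three membership degrees are carried over unchanged, is an assumption that matches the data in the weighted matrix U_1 (e.g. 0.4 · [0.6, 0.7, 0.8, 0.9] gives [0.24, 0.28, 0.32, 0.36] with (0.36, 0.3, 0.27) unchanged).

```python
# Weighted decision-matrix entry: scale the TFN part of an SVTNN entry by
# the criterion weight w, keeping the degrees (T, I, F) unchanged.
# (Our reading of Definition 9, whose formal statement is outside this
# excerpt; it reproduces the entries of U_1, U_2 and U_3.)

def weight_entry(entry, w):
    (a1, a2, a3, a4), (T, I, F) = entry
    return ((w * a1, w * a2, w * a3, w * a4), (T, I, F))

a1_11 = ((0.6, 0.7, 0.8, 0.9), (0.36, 0.3, 0.27))  # entry of matrix a~1
u1_11 = weight_entry(a1_11, 0.4)                    # weight of C_1 for DM 1
print(u1_11)  # TFN (0.24, 0.28, 0.32, 0.36) up to float rounding
```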
Step 4: Obtain expert weights through the entropy-weighted method.
We can identify the objective expert weights using Eqs. (20) through (23). The results are calculated as follows:

(i) According to Eq. (22), we have

$$E\left[\tilde{u}_{ij}^1\right]_{m \times n} = \begin{pmatrix} 0.179 & 0.0906 & 0.2342 \\ 0.2245 & 0.1193 & 0.2839 \\ 0.2268 & 0.1219 & 0.2335 \\ 0.193 & 0.1041 & 0.209 \end{pmatrix}, \quad E\left[\tilde{u}_{ij}^2\right]_{m \times n} = \begin{pmatrix} 0.2749 & 0.2164 & 0.1288 \\ 0.2473 & 0.2296 & 0.1438 \\ 0.2644 & 0.236 & 0.1189 \\ 0.2083 & 0.2078 & 0.0988 \end{pmatrix},$$

$$\text{and} \quad E\left[\tilde{u}_{ij}^3\right]_{m \times n} = \begin{pmatrix} 0.179 & 0.0906 & 0.2342 \\ 0.2245 & 0.1193 & 0.2839 \\ 0.2268 & 0.1219 & 0.2335 \\ 0.193 & 0.1041 & 0.209 \end{pmatrix}.$$
(ii) According to Eqs. (20) through (22), we have $I_1 = 0.996789$, $I_2 = 0.997812$ and $I_3 = 0.996789$.

(iii) According to Eq. (23), the expert weights can be identified as follows: $w_1 = 0.3729$, $w_2 = 0.2542$ and $w_3 = 0.3729$.
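Eq. (23) itself lies outside this excerpt, but the reported weights are consistent with normalizing the complements of the overall entropies, i.e. $w_k = (1 - I_k) / \sum_k (1 - I_k)$; a sketch under that assumption:

```python
# Expert weights from overall entropies I_k. Eq. (23) is not reproduced
# in this excerpt; the rule below, w_k = (1 - I_k) / sum_k(1 - I_k), is
# an assumption that reproduces the reported weights (w_2 comes out as
# 0.2541 rather than the paper's 0.2542, a rounding artifact of the
# published four-decimal I_k values).
I = [0.996789, 0.997812, 0.996789]
total = sum(1 - Ik for Ik in I)
w = [(1 - Ik) / total for Ik in I]
print([round(wk, 4) for wk in w])  # [0.3729, 0.2541, 0.3729]
```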
Step 5: Calculate the comprehensive criteria weights.
The following comprehensive criteria weights can be obtained using Eq. (24): $\tilde{\omega}_1 = 0.3746$, $\tilde{\omega}_2 = 0.3$ and $\tilde{\omega}_3 = 0.3254$.
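These numbers follow directly from Eq. (24) with the expert weights from Step 4 and the subjective criteria weight vectors ω^1–ω^3; a quick check:

```python
# Comprehensive criteria weights via Eq. (24): w~_j = sum_k w_k * w^k_j.
w = [0.3729, 0.2542, 0.3729]   # expert weights from Step 4
omega = [[0.4, 0.2, 0.4],      # DM 1 subjective criteria weights
         [0.3, 0.3, 0.4],      # DM 2
         [0.4, 0.4, 0.2]]      # DM 3
w_tilde = [sum(w[k] * omega[k][j] for k in range(3)) for j in range(3)]
print([round(x, 4) for x in w_tilde])  # [0.3746, 0.3, 0.3254]
```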
The weighted decision matrices obtained in Step 3 are (rows A_1–A_4, columns C_1–C_3):

$U_1$:
A_1: ⟨[0.24, 0.28, 0.32, 0.36], (0.36, 0.3, 0.27)⟩  ⟨[0.144, 0.1, 0.16, 0.172], (0.53, 0.3, 0.28)⟩  ⟨[0.34, 0.34, 0.36, 0.368], (0.57, 0.35, 0.22)⟩
A_2: ⟨[0.308, 0.308, 0.32, 0.324], (0.72, 0.3, 0.28)⟩  ⟨[0.138, 0.14, 0.16, 0.18], (0.91, 0.5, 0.07)⟩  ⟨[0.332, 0.34, 0.34, 0.352], (0.80, 0.2, 0.10)⟩
A_3: ⟨[0.32, 0.34, 0.36, 0.384], (0.63, 0.5, 0.19)⟩  ⟨[0.118, 0.14, 0.16, 0.174], (0.88, 0.3, 0.12)⟩  ⟨[0.272, 0.28, 0.32, 0.34], (0.86, 0.4, 0.14)⟩
A_4: ⟨[0.24, 0.24, 0.32, 0.36], (0.65, 0.3, 0.33)⟩  ⟨[0.116, 0.12, 0.16, 0.18], (0.72, 0.3, 0.23)⟩  ⟨[0.24, 0.28, 0.32, 0.36], (0.77, 0.45, 0.23)⟩

$U_2$:
A_1: ⟨[0.231, 0.24, 0.24, 0.249], (0.53, 0.3, 0.26)⟩  ⟨[0.204, 0.21, 0.24, 0.258], (0.54, 0.4, 0.35)⟩  ⟨[0.328, 0.34, 0.36, 0.36], (0.68, 0.35, 0.32)⟩
A_2: ⟨[0.279, 0.282, 0.285, 0.294], (0.85, 0.3, 0.15)⟩  ⟨[0.228, 0.24, 0.24, 0.258], (0.86, 0.5, 0.13)⟩  ⟨[0.26, 0.28, 0.32, 0.348], (0.69, 0.2, 0.3)⟩
A_3: ⟨[0.237, 0.24, 0.252, 0.255], (0.83, 0.4, 0.16)⟩  ⟨[0.216, 0.24, 0.27, 0.276], (0.76, 0.5, 0.24)⟩  ⟨[0.324, 0.34, 0.36, 0.388], (0.73, 0.4, 0.13)⟩
A_4: ⟨[0.21, 0.234, 0.24, 0.27], (0.9, 0.3, 0.07)⟩  ⟨[0.174, 0.18, 0.24, 0.27], (0.91, 0.2, 0.03)⟩  ⟨[0.28, 0.28, 0.32, 0.36], (0.66, 0.4, 0.12)⟩

$U_3$:
A_1: ⟨[0.34, 0.34, 0.36, 0.384], (0.81, 0.3, 0.18)⟩  ⟨[0.304, 0.32, 0.32, 0.344], (0.76, 0.5, 0.24)⟩  ⟨[0.16, 0.17, 0.18, 0.194], (0.74, 0.35, 0.19)⟩
A_2: ⟨[0.316, 0.32, 0.32, 0.348], (0.75, 0.3, 0.16)⟩  ⟨[0.3, 0.3, 0.32, 0.356], (0.84, 0.5, 0.16)⟩  ⟨[0.162, 0.17, 0.18, 0.186], (0.97, 0.2, 0.03)⟩
A_3: ⟨[0.248, 0.28, 0.32, 0.328], (0.89, 0.1, 0.11)⟩  ⟨[0.336, 0.34, 0.34, 0.356], (0.78, 0.5, 0.21)⟩  ⟨[0.156, 0.16, 0.16, 0.164], (0.74, 0.4, 0.11)⟩
A_4: ⟨[0.24, 0.24, 0.32, 0.36], (0.66, 0.3, 0.18)⟩  ⟨[0.256, 0.28, 0.32, 0.36], (0.63, 0.3, 0.27)⟩  ⟨[0.12, 0.13, 0.16, 0.18], (0.71, 0.4, 0.29)⟩

Step 6: Obtain the aggregated decision matrix.
Based on the initial decision matrices and the expert weights obtained in Step 4, and using the new operations proposed in Definition 9, we can acquire the following aggregated decision matrix:
Step 7: Obtain the overall value of A_i.
For simplicity, we assume that p = q = 1; then, utilizing Eq. (15), we can identify the overall value of each alternative A_i.
Step 8: Calculate the score values.
$E(A_1) = 0.6012$, $E(A_2) = 0.675$, $E(A_3) = 0.6569$ and $E(A_4) = 0.5836$.
Step 9: Rank all alternatives.
Based on the score values obtained in Step 8, we can set forward the final ranking results: $A_2 \succ A_3 \succ A_1 \succ A_4$. These results show that alternative A_2 is the best one.
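Step 9 amounts to a descending sort on the Step 8 scores; for instance:

```python
# Rank the alternatives by their score values from Step 8 (larger is better).
scores = {"A1": 0.6012, "A2": 0.675, "A3": 0.6569, "A4": 0.5836}
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))  # A2 > A3 > A1 > A4
```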
The aggregated decision matrix M obtained in Step 6 is (rows A_1–A_4, columns C_1–C_3):

A_1: ⟨[0.737, 0.781, 0.837, 0.905], (0.491, 0.406, 0.416)⟩  ⟨[0.725, 0.663, 0.8, 0.86], (0.564, 0.353, 0.364)⟩  ⟨[0.824, 0.85, 0.9, 0.934], (0.68, 0.504, 0.543)⟩
A_2: ⟨[0.792, 0.824, 0.838, 0.876], (0.75, 0.423, 0.431)⟩  ⟨[0.73, 0.744, 0.8, 0.897], (0.882, 0.5, 0.321)⟩  ⟨[0.777, 0.812, 0.856, 0.896], (0.811, 0.433, 0.537)⟩
A_3: ⟨[0.73, 0.781, 0.847, 0.88], (0.789, 0.696, 0.55)⟩  ⟨[0.716, 0.781, 0.844, 0.89], (0.833, 0.38, 0.212)⟩  ⟨[0.75, 0.775, 0.825, 0.869], (0.808, 0.418, 0.203)⟩
A_4: ⟨[0.625, 0.646, 0.8, 0.9], (0.687, 0.474, 0.509)⟩  ⟨[0.602, 0.637, 0.8, 0.9], (0.748, 0.388, 0.318)⟩  ⟨[0.625, 0.681, 0.8, 0.9], (0.723, 0.504, 0.413)⟩

The overall values of the alternatives obtained in Step 7 are:

A_1: ⟨[0.761, 0.7653, 0.8459, 0.9], (0.5718, 0.1847, 0.171)⟩
A_2: ⟨[0.7675, 0.7948, 0.8319, 0.8891], (0.8145, 0.1628, 0.1772)⟩
A_3: ⟨[0.7325, 0.7794, 0.8391, 0.8796], (0.8092, 0.124, 0.2467)⟩
A_4: ⟨[0.6182, 0.6547, 0.8, 0.9], (0.7179, 0.16, 0.1852)⟩

6.2 The influence of parameters p and q on the final order of the alternatives

In order to illustrate the influence of the two parameters, different values of p and q should be evaluated to check their effect on the example's decision-making results. Several different values of p and q were taken into consideration in order to gain a comprehensive view, and the results are shown in Table 1.

Table 1 Ranking orders with different values of p and q in the SVTNNWBM operator

p = 1, q = 0:     A_2 ≻ A_3 ≻ A_4 ≻ A_1
p = 0.5, q = 0.5: A_2 ≻ A_3 ≻ A_4 ≻ A_1
p = q = 1:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = 1, q = 2:     A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = 2, q = 1:     A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 2:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 3:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 4:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 5:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 6:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 7:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 8:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 9:        A_2 ≻ A_3 ≻ A_1 ≻ A_4
p = q = 10:       A_2 ≻ A_3 ≻ A_1 ≻ A_4

An analysis of the results in Table 1 reveals that different values of p and q in the SVTNNWBM operator can lead to different ranking results. In the two situations p = 1, q = 0 and p = q = 0.5, the final ranking order is $A_2 \succ A_3 \succ A_4 \succ A_1$; under the other conditions, the ranking order is $A_2 \succ A_3 \succ A_1 \succ A_4$, as shown in Table 1. The best alternative is always A_2, while the worst alternative changes between A_1 and A_4.

The reasons for this inconsistency are as follows. In special cases where at least one of the two parameters p and q takes the value of zero, the SVTNNWBM operator cannot capture the interrelationship of the individual arguments, which produces a different ranking order. This is why the final ranking result when p = 1, q = 0 is different from the other circumstances. Moreover, when p = q = 0.5, because both of the parameters are smaller than 1, the aggregation value may be amplified when calculating the comprehensive value of A_i; as a result, the final ranking order of A_1 and A_4 will switch. In general, we can take the values of p and q as p = q = 1; not only is this intuitive and simple, but it also considers the interrelationships among criteria. Thus, the proposed method enables the DMs to select the desirable alternative according to their interests and actual needs.
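The sensitivity to p and q can be seen even in the crisp normalized weighted Bonferroni mean that forms the scalar skeleton of the SVTNNWBM operator (cf. the sums in Eq. (26) and Appendix 1). The sketch below works on plain numbers rather than SVTNNs and is an illustration only, not the paper's operator:

```python
# Crisp normalized weighted Bonferroni mean:
#   B(x) = ( sum_{i != j} w_i * w_j / (1 - w_i) * x_i^p * x_j^q )^(1/(p+q)).
# The coefficients w_i*w_j/(1-w_i) sum to 1 when the w_i do, so equal
# inputs are reproduced exactly (idempotency).

def nwbm(x, w, p, q):
    n = len(x)
    s = sum(w[i] * w[j] / (1 - w[i]) * x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return s ** (1 / (p + q))

x = [0.6, 0.8, 0.9]
w = [0.3, 0.3, 0.4]
for p, q in [(1, 0), (0.5, 0.5), (1, 1), (2, 1)]:
    print(p, q, round(nwbm(x, w, p, q), 4))  # value shifts with p and q

# Idempotency check: equal inputs are reproduced.
print(round(nwbm([0.7, 0.7, 0.7], w, 1, 1), 6))  # 0.7
```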
6.3 Comparison analysis and discussion
In this subsection, a comparative study is conducted to validate
the practicality and effectiveness of the proposed approach.
Case 1 Comparative analysis in the context of SNLS
environments.
In order to verify its feasibility, the method proposed in this paper was used to solve the example in Tian et al. [33], which features an environment characterized by SNLSs. An analysis is conducted here to compare the proposed method and the method in [33].

The method proposed in [33] incorporates power aggregation operators and a TOPSIS-based QUALIFLEX method to solve green product design selection problems using neutrosophic linguistic information. According to the method proposed in this paper, the first step, in order to keep the decision information the same, is to translate the data in [33] into SVTNNs as defined in [62]. Next, the expert weights are obtained using the entropy-weighted method, and the comprehensive decision matrix is obtained using the SVTNNWBM operator. Then, the ranking results are obtained based on the new comparison method described in Sect. 3.2. The example found in [33] can be solved as follows (Table 2).

The example in [33] yields the same ranking results $A_2 \succ A_3 \succ A_4 \succ A_1$ using the two different methods when p = 1, q = 0. There are subtle differences in other conditions, where $A_2 \succ A_3 \succ A_1 \succ A_4$, but alternative A_2 remains the optimum design. This can be explained as follows. Using the proposed method, SNLS information is first converted into SVTNN information using the technique developed in [62]. In SNLSs, the membership degree, non-membership degree and indeterminate degree are relative to a fuzzy concept such as "Excellent" or "Good", which is a discrete set and can cause information distortion and loss. However, SVTNNs allow for representation as a continuous set, which has more ability to express the uncertainty and maintain completeness of information. The discrepancy could also be caused by the distinct inherent characteristics of the aggregation operators and comparison methods utilized by these two methods. Although both the power average operators and BM operators take into account information about the relationships among the arguments being aggregated, they accomplish this differently, as stated in [33]. Given the above analysis, SVTNNs may reflect the assessment information better than SNLSs because they transform the linguistic terms into TFNs. Therefore, the results obtained in this paper can be considered to be relatively convincing.
Case 2 Comparative analysis in the context of SVTNN
environments.
In order to validate the accuracy and superior performance of the proposed method, the method in Ye [38] was applied to deal with the example in Sect. 6. A comparative study is conducted here between the proposed approach and the method developed in [38], based on the illustrative example described in this paper.

The method proposed in [38] is used to handle an MCDM problem through three main procedures. First, the trapezoidal neutrosophic weighted arithmetic averaging (TNWAA) operator and trapezoidal neutrosophic weighted geometric averaging (TNWGA) operator are used to aggregate the evaluation values. Second, the score functions are calculated for each alternative's collective overall value. Third, the best choice is selected according to the score values. When solving the example in Sect. 6 using the approach in [38], the first five procedures are the same as in our proposed method. However, in Step 6, the aggregated decision matrix is obtained by the operations in [38]; then, the TNWAA operator and TNWGA operator are applied to identify the overall evaluation values of each alternative in Step 7. Finally, the score values can be calculated, and the ranking results can be obtained.

As shown in Table 3, different ranking orders are obtained using the different methods, but the differences are subtle. The reasons for the inconsistency can be summarized as follows.
From the perspective of operations, the improved operations for SVTNNs in this paper take into consideration the correlation between TFNs and the three membership degrees of SVTNNs. This is a reliable principle that can effectively avoid losing the information. The operations in [38], however, divide the TFNs and three membership degrees of SVTNNs into two separate parts, which may lead the aggregated results to deviate from the reality.

Table 2 Ranking results of different methods in SNLS environments

The method in [32] based on the SNLPWA operator with f*_1, k = 2 and d = 0.5: ranking $A_2 \succ A_3 \succ A_4 \succ A_1$; best alternative A_2.
Proposed approach when p = 1, q = 0: ranking $A_2 \succ A_3 \succ A_4 \succ A_1$; best alternative A_2.
Proposed approach when p = 1, q = 1: ranking $A_2 \succ A_3 \succ A_1 \succ A_4$; best alternative A_2.
Proposed approach when p = 1, q = 2: ranking $A_2 \succ A_3 \succ A_1 \succ A_4$; best alternative A_2.
In terms of comparison methods, the new comparison
method for SVTNNs proposed in this paper has some
notable advantages over the corresponding method based
on the score function in [38]. The details were discussed in
Sect. 3.2.
In terms of aggregation operators, the use of the SVTNNWBM operator can take the interrelationships of the input arguments into consideration, allowing the user to obtain different results by adjusting the values of parameters p and q. This adds flexibility to the proposed method. The TNWAA and TNWGA operators used in [38], however, cannot capture the pairwise influence of different input arguments. Therefore, the ranking results in this paper are more reasonable, and the proposed method has more flexibility than the method in [38].
The results of the comparative analysis validate the
proposed approach and confirm that it is practical and
effective in addressing MCGDM problems.
7 Conclusion

SVTNNs have a strong ability to represent incomplete and inconsistent information, and they can avoid information loss and distortion in complex decision-making problems. MCGDM methods with SVTNNs have extensive application prospects in many domains. The BM operator can take into consideration interrelationships among the input arguments. Furthermore, the entropy-weighted method is an appropriate tool for determining objective weights, which is significant in solving decision-making problems. This paper developed a new approach to MCGDM problems using SVTNNs. We redefined the improved operations and proposed a new comparison method for SVTNNs. We obtained expert weights through the entropy-weighted method, and we applied the BM operator. Then, we proposed the SVTNNWBM operator to aggregate the decision information expressed by SVTNNs. We further studied some properties of the BM operator and discussed some special cases. In addition, a sensitivity analysis was conducted to assess the impact of changing the values of parameters p and q. Finally, we confirmed the new MCGDM approach to be practical and effective by applying it to a numerical example and comparing it with two different methods found in the literature. In future research, this method can be applied to other scenarios, including personnel selection, green supplier selection and medical diagnosis problems. This study considered the interrelationships among input arguments and acquired expert weights objectively; however, the risk preferences of DMs were ignored. Our next topic of study aims to cover this deficiency. The proposed approach can also be applied to more practical cases to illustrate its efficiency and effectiveness. Because SVTNNs can be easily and intuitively obtained in education evaluation processes, this method should find further applications in this field.
Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 71571193).
Compliance with ethical standards
Conflict of interest The authors declare that there is no conflict of
interest regarding the publication of this paper.
Appendix 1

Proof In the following steps, Eq. (15) will be proved using mathematical induction on n.

(1) The following equation must be proved first. For brevity, write

$$\delta_{ij} = \frac{1}{2}\,\frac{w_i w_j}{1-w_i}\left(a_{i3}^p a_{j3}^q - a_{i2}^p a_{j2}^q + a_{i4}^p a_{j4}^q - a_{i1}^p a_{j1}^q\right).$$

Then

$$\mathop{\oplus}\limits_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_j^q\right) = \Bigg\langle \left[\sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i1}^p a_{j1}^q,\ \sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i2}^p a_{j2}^q,\ \sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i3}^p a_{j3}^q,\ \sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{i4}^p a_{j4}^q\right],$$
$$\left(\frac{\sum_{i \neq j} \delta_{ij}\, T(\tilde{a}_i)^p\, T(\tilde{a}_j)^q}{\sum_{i \neq j} \delta_{ij}},\ \frac{\sum_{i \neq j} \delta_{ij}\left[1-\left(1-I(\tilde{a}_i)\right)^p \left(1-I(\tilde{a}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}},\ \frac{\sum_{i \neq j} \delta_{ij}\left[1-\left(1-F(\tilde{a}_i)\right)^p \left(1-F(\tilde{a}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}}\right) \Bigg\rangle. \quad (26)$$
Table 3 Ranking results of different methods in SVTNN environments

The method in [37] with the TNWAA operator: ranking $A_2 \succ A_3 \succ A_4 \succ A_1$.
The method in [37] with the TNWGA operator: ranking $A_2 \succ A_3 \succ A_4 \succ A_1$.
Proposed approach with the SVTNNWBM operator: ranking $A_2 \succ A_3 \succ A_1 \succ A_4$.
(a) Utilizing the operations for SVTNNs and mathematical induction on n, we have

$$\tilde{a}_i^p = \left\langle \left[a_{i1}^p, a_{i2}^p, a_{i3}^p, a_{i4}^p\right],\ \left(T(\tilde{a}_i)^p,\ 1-\left(1-I(\tilde{a}_i)\right)^p,\ 1-\left(1-F(\tilde{a}_i)\right)^p\right)\right\rangle,$$

$$\tilde{a}_j^q = \left\langle \left[a_{j1}^q, a_{j2}^q, a_{j3}^q, a_{j4}^q\right],\ \left(T(\tilde{a}_j)^q,\ 1-\left(1-I(\tilde{a}_j)\right)^q,\ 1-\left(1-F(\tilde{a}_j)\right)^q\right)\right\rangle,$$

$$\tilde{a}_i^p \otimes \tilde{a}_j^q = \left\langle \left[a_{i1}^p a_{j1}^q, a_{i2}^p a_{j2}^q, a_{i3}^p a_{j3}^q, a_{i4}^p a_{j4}^q\right],\ \left(T(\tilde{a}_i)^p\, T(\tilde{a}_j)^q,\ 1-\left(1-I(\tilde{a}_i)\right)^p\left(1-I(\tilde{a}_j)\right)^q,\ 1-\left(1-F(\tilde{a}_i)\right)^p\left(1-F(\tilde{a}_j)\right)^q\right)\right\rangle.$$
When n = 2, the following equation can be calculated:

$$\mathop{\oplus}\limits_{\substack{i,j=1 \\ i \neq j}}^{2} \frac{w_i w_j}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_j^q\right) = \frac{w_1 w_2}{1-w_1}\left(\tilde{a}_1^p \otimes \tilde{a}_2^q\right) \oplus \frac{w_2 w_1}{1-w_2}\left(\tilde{a}_2^p \otimes \tilde{a}_1^q\right),$$

and applying the ⊕ operation for SVTNNs to the two terms above yields exactly the right-hand side of Eq. (26) with n = 2. In other words, when n = 2, Eq. (26) is true.
(b) Suppose that when n = k, Eq. (26) is true; that is, Eq. (27) holds, where Eq. (27) is Eq. (26) with n replaced by k. Then, when n = k + 1, the following result can be obtained:

$$\mathop{\oplus}\limits_{\substack{i,j=1 \\ i \neq j}}^{k+1} \frac{w_i w_j}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_j^q\right) = \mathop{\oplus}\limits_{\substack{i,j=1 \\ i \neq j}}^{k} \frac{w_i w_j}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_j^q\right) \oplus \mathop{\oplus}\limits_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_{k+1}^q\right) \oplus \mathop{\oplus}\limits_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\left(\tilde{a}_{k+1}^p \otimes \tilde{a}_j^q\right). \quad (28)$$

Next, the following equation must be proved:
$$\mathop{\oplus}\limits_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_{k+1}^q\right) = \Bigg\langle \left[\sum_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\, a_{i1}^p a_{k+1,1}^q,\ \sum_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\, a_{i2}^p a_{k+1,2}^q,\ \sum_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\, a_{i3}^p a_{k+1,3}^q,\ \sum_{i=1}^{k} \frac{w_i w_{k+1}}{1-w_i}\, a_{i4}^p a_{k+1,4}^q\right],$$
$$\left(\frac{\sum_{i=1}^{k} \delta_{i,k+1}\, T(\tilde{a}_i)^p\, T(\tilde{a}_{k+1})^q}{\sum_{i=1}^{k} \delta_{i,k+1}},\ \frac{\sum_{i=1}^{k} \delta_{i,k+1}\left[1-\left(1-I(\tilde{a}_i)\right)^p \left(1-I(\tilde{a}_{k+1})\right)^q\right]}{\sum_{i=1}^{k} \delta_{i,k+1}},\ \frac{\sum_{i=1}^{k} \delta_{i,k+1}\left[1-\left(1-F(\tilde{a}_i)\right)^p \left(1-F(\tilde{a}_{k+1})\right)^q\right]}{\sum_{i=1}^{k} \delta_{i,k+1}}\right) \Bigg\rangle, \quad (29)$$

where $\delta_{i,k+1} = \frac{1}{2}\,\frac{w_i w_{k+1}}{1-w_i}\left(a_{i3}^p a_{k+1,3}^q - a_{i2}^p a_{k+1,2}^q + a_{i4}^p a_{k+1,4}^q - a_{i1}^p a_{k+1,1}^q\right)$.

In the following steps, Eq. (29) will be proved using mathematical induction on k.

(i) When k = 2, the following result can be calculated:

$$\mathop{\oplus}\limits_{i=1}^{2} \frac{w_i w_3}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_3^q\right) = \frac{w_1 w_3}{1-w_1}\left(\tilde{a}_1^p \otimes \tilde{a}_3^q\right) \oplus \frac{w_2 w_3}{1-w_2}\left(\tilde{a}_2^p \otimes \tilde{a}_3^q\right),$$

which, by the ⊕ operation for SVTNNs, equals the right-hand side of Eq. (29) with k = 2. That is, when k = 2, Eq. (29) is true.

(ii) Suppose that when k = l, Eq. (29) is true; that is, Eq. (29) holds with k replaced by l.
Then, when k = l + 1, the following result can be calculated:

$$\mathop{\oplus}\limits_{i=1}^{l+1} \frac{w_i w_{l+2}}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_{l+2}^q\right) = \mathop{\oplus}\limits_{i=1}^{l} \frac{w_i w_{l+2}}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_{l+2}^q\right) \oplus \frac{w_{l+1} w_{l+2}}{1-w_{l+1}}\left(\tilde{a}_{l+1}^p \otimes \tilde{a}_{l+2}^q\right).$$

Applying the induction hypothesis (Eq. (29) with k = l) to the first term, expanding the second term by the ⊗ operation for SVTNNs, and combining the two resulting SVTNNs by the ⊕ operation yields exactly the right-hand side of Eq. (29) with k = l + 1. That is, when k = l + 1, Eq. (29) is true.
(iii) So, for all k, Eq. (29) is true.

The following analogous equation can be proved in a similar fashion, and the proof is omitted here:

$$\mathop{\oplus}\limits_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\left(\tilde{a}_{k+1}^p \otimes \tilde{a}_j^q\right) = \Bigg\langle \left[\sum_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\, a_{k+1,1}^p a_{j1}^q,\ \sum_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\, a_{k+1,2}^p a_{j2}^q,\ \sum_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\, a_{k+1,3}^p a_{j3}^q,\ \sum_{j=1}^{k} \frac{w_{k+1} w_j}{1-w_{k+1}}\, a_{k+1,4}^p a_{j4}^q\right],$$
$$\left(\frac{\sum_{j=1}^{k} \delta_{k+1,j}\, T(\tilde{a}_{k+1})^p\, T(\tilde{a}_j)^q}{\sum_{j=1}^{k} \delta_{k+1,j}},\ \frac{\sum_{j=1}^{k} \delta_{k+1,j}\left[1-\left(1-I(\tilde{a}_{k+1})\right)^p \left(1-I(\tilde{a}_j)\right)^q\right]}{\sum_{j=1}^{k} \delta_{k+1,j}},\ \frac{\sum_{j=1}^{k} \delta_{k+1,j}\left[1-\left(1-F(\tilde{a}_{k+1})\right)^p \left(1-F(\tilde{a}_j)\right)^q\right]}{\sum_{j=1}^{k} \delta_{k+1,j}}\right) \Bigg\rangle, \quad (30)$$

where $\delta_{k+1,j} = \frac{1}{2}\,\frac{w_{k+1} w_j}{1-w_{k+1}}\left(a_{k+1,3}^p a_{j3}^q - a_{k+1,2}^p a_{j2}^q + a_{k+1,4}^p a_{j4}^q - a_{k+1,1}^p a_{j1}^q\right)$.

Using Eqs. (27), (29) and (30), Eq. (28) can be transformed as follows: the three partial sums on the right-hand side of Eq. (28) combine, term by term, into the corresponding sums over all $i, j = 1, 2, \ldots, k+1$ with $i \neq j$, so that

$$\mathop{\oplus}\limits_{\substack{i,j=1 \\ i \neq j}}^{k+1} \frac{w_i w_j}{1-w_i}\left(\tilde{a}_i^p \otimes \tilde{a}_j^q\right)$$

equals the right-hand side of Eq. (26) with n = k + 1. Then, when n = k + 1, Eq. (26) is true. Therefore, Eq. (26) is true for all n.

(2) Using the SVTNN operations and Eq. (26), Eq. (15) can be obtained. This completes the proof of Theorem 3.
Appendix 2

Proof For an arbitrary i, there are $a_{i1} \geq b_{i1}$, $a_{i2} \geq b_{i2}$, $a_{i3} \geq b_{i3}$ and $a_{i4} \geq b_{i4}$; therefore, it is easy to obtain the following inequalities:

$$a_{i1}^p a_{j1}^q \geq b_{i1}^p b_{j1}^q,\quad a_{i2}^p a_{j2}^q \geq b_{i2}^p b_{j2}^q,\quad a_{i3}^p a_{j3}^q \geq b_{i3}^p b_{j3}^q,\quad \text{and}\quad a_{i4}^p a_{j4}^q \geq b_{i4}^p b_{j4}^q;$$

then, for each component $t = 1, 2, 3, 4$,

$$\left(\sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, a_{it}^p a_{jt}^q\right)^{\frac{1}{p+q}} \geq \left(\sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w_i w_j}{1-w_i}\, b_{it}^p b_{jt}^q\right)^{\frac{1}{p+q}}.$$

The truth-membership, indeterminacy-membership and falsity-membership parts can be proved using mathematical induction on n. Writing $\delta_{ij}$ and $\delta_{ij}'$ for the quantity $\frac{1}{2}\,\frac{w_i w_j}{1-w_i}\left(a_{i3}^p a_{j3}^q - a_{i2}^p a_{j2}^q + a_{i4}^p a_{j4}^q - a_{i1}^p a_{j1}^q\right)$ evaluated on the components of $\tilde{a}_i$ and $\tilde{b}_i$, respectively, the three parts read

$$\left(\frac{\sum_{i \neq j} \delta_{ij}\, T(\tilde{a}_i)^p\, T(\tilde{a}_j)^q}{\sum_{i \neq j} \delta_{ij}}\right)^{\frac{1}{p+q}} \geq \left(\frac{\sum_{i \neq j} \delta_{ij}'\, T(\tilde{b}_i)^p\, T(\tilde{b}_j)^q}{\sum_{i \neq j} \delta_{ij}'}\right)^{\frac{1}{p+q}},$$

$$1 - \left(1 - \frac{\sum_{i \neq j} \delta_{ij}\left[1-\left(1-I(\tilde{a}_i)\right)^p \left(1-I(\tilde{a}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}}\right)^{\frac{1}{p+q}} \leq 1 - \left(1 - \frac{\sum_{i \neq j} \delta_{ij}'\left[1-\left(1-I(\tilde{b}_i)\right)^p \left(1-I(\tilde{b}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}'}\right)^{\frac{1}{p+q}},$$

and

$$1 - \left(1 - \frac{\sum_{i \neq j} \delta_{ij}\left[1-\left(1-F(\tilde{a}_i)\right)^p \left(1-F(\tilde{a}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}}\right)^{\frac{1}{p+q}} \leq 1 - \left(1 - \frac{\sum_{i \neq j} \delta_{ij}'\left[1-\left(1-F(\tilde{b}_i)\right)^p \left(1-F(\tilde{b}_j)\right)^q\right]}{\sum_{i \neq j} \delta_{ij}'}\right)^{\frac{1}{p+q}}.$$

Then, using the new comparison method in Sect. 3.2, Theorem 3 can be proved.
References
1. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
2. Derrac J, Chiclana F, Garcia S, Herrera F (2016) Evolutionary
fuzzy k-nearest neighbors algorithm using interval-valued fuzzy
sets. Inf Sci 329:144–163
3. Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst
20(1):87–96
4. Atanassov KT, Gargov G (1989) Interval valued intuitionistic
fuzzy sets. Fuzzy Sets Syst 31(3):343–349
5. Wan S, Lin L-L, Dong J (2016) MAGDM based on triangular
Atanassov’s intuitionistic fuzzy information aggregation. Neural
Comput Appl. doi:10.1007/s00521-016-2196-9
6. Zhou H, Wang J-Q, Zhang H-Y (2016) Multi-criteria decision-
making approaches based on distance measures for linguistic
hesitant fuzzy sets. J Oper Res Soc. doi:10.1057/jors.2016.41
7. Beg I, Rashid T (2014) Group decision making using intuitionistic
hesitant fuzzy sets. Int J Fuzzy Log Intell Syst 14(3):181–187
8. Liu H-W, Wang G-J (2007) Multi-criteria decision making
methods based on intuitionistic fuzzy sets. Eur J Oper Res
179(1):220–233
9. Xu Z-S (2012) Intuitionistic fuzzy multi-attribute decision mak-
ing: an interactive method. IEEE Trans Fuzzy Syst
20(3):514–525
10. Wang J-Q, Han Z-Q, Zhang H-Y (2014) Multi-criteria group
decision-making method based on intuitionistic interval fuzzy
information. Group Decis Negot 23(4):715–733
11. Amorim P, Curcio E, Almada-Lobo B, Barbosa-Póvoa APFD, Grossmann IE (2016) Supplier selection in the processed food industry under uncertainty. Eur J Oper Res 252(3):801–814
12. Chen S-M, Cheng S-H, Chiou C-H (2016) Fuzzy multi-attribute
group decision making based on intuitionistic fuzzy sets and
evidential reasoning methodology. Inf Fusion 27:215–227
13. Liu P-D, Liu X (2016) The neutrosophic number generalized
weighted power averaging operator and its application in multiple
attribute group decision making. Int J Mach Learn Cybern.
doi:10.1007/s13042-016-0508-0
14. Wu J, Xiong R, Chiclana F (2016) Uninorm trust propagation and
aggregation methods for group decision making in social network
with four tuple information. Knowl Based Syst 96(2):29–39
15. Wang J-Q, Nie R-R, Zhang H-Y (2013) New operators on tri-
angular intuitionistic fuzzy numbers and their applications in
system fault analysis. Inf Sci 251:79–95
16. Wang J-Q (2008) Overview on fuzzy multi-criteria decision-
making approach. Control Decis 23(6):002
17. Wan S-P (2013) Power average operators of trapezoidal intu-
itionistic fuzzy numbers and application to multi-attribute group
decision making. Appl Math Model 37(6):4112–4126
18. Smarandache F (1998) Neutrosophy: neutrosophic probability,
set, and logic. American Research Press, Rehoboth, pp 1–105
19. Smarandache F (1999) A unifying field in logics: neutrosophic
logic neutrosophy, neutrosophic set, neutrosophic probability.
American Research Press, Rehoboth, pp 1–141
20. Smarandache F (2008) Neutrosophic set—a generalization of the
intuitionistic fuzzy set. Int J Pure Appl Math 24(3):38–42
21. Deli I, Şubaş Y (2015) Some weighted geometric operators with
SVTrN-numbers and their application to multi-criteria decision
making problems. J Intell Fuzzy Syst. doi:10.3233/jifs-151677
22. El-Hefenawy N, Metwally M-A, Ahmed Z-M, El-Henawy I-M
(2016) A review on the applications of neutrosophic sets.
J Comput Theor Nanosci 13(1):936–944
23. Şubaş Y (2015) Neutrosophic numbers and their application to
multi-attribute decision making problems. Unpublished Masters
Thesis, 7 Aralık University, Graduate School of Natural and
Applied Science
24. Liu C, Luo Y (2016) Correlated aggregation operators for sim-
plified neutrosophic set and their application in multi-attribute
group decision making. J Intell Fuzzy Syst 30(3):1755–1761
25. Wu X-H, Wang J, Peng J-J, Chen X-H (2016) Cross-entropy and
prioritized aggregation operator with simplified neutrosophic sets
and their application in multi-criteria decision-making problems.
Int J Fuzzy Syst. doi:10.1007/s40815-016-0180-2
26. Ji P, Zhang H-Y, Wang J-Q (2016) A projection-based TODIM
method under multi-valued neutrosophic environments and its
application in personnel selection. Neural Comput Appl. doi:10.
1007/s00521-016-2436-z
27. Liu P-D, Li H (2015) Multiple attribute decision-making method
based on some normal neutrosophic Bonferroni mean operators.
Neural Comput Appl 25(7–8):1–16
28. Broumi S, Talea M, Bakali A, Smarandache F (2016) Single
valued neutrosophic graphs. J New Theory 10:86–101
29. Broumi S, Talea M, Bakali A, Smarandache F (2016) Interval
valued neutrosophic graphs. Publ Soc Math Uncertain 10:5
30. Broumi S, Deli I, Smarandache F (2014) Interval valued neu-
trosophic parameterized soft set theory and its decision making.
Appl Soft Comput 28(4):109–113
31. Şahin R, Liu P (2016) Correlation coefficient of single-valued
neutrosophic hesitant fuzzy sets and its applications in decision
making. Neural Comput Appl. doi:10.1007/s00521-015-2163-x
32. Tian Z-P, Wang J, Wang J-Q, Zhang H-Y (2016) An improved
MULTIMOORA approach for multi-criteria decision-making
based on interdependent inputs of simplified neutrosophic
Neural Comput & Applic (2018) 30:241–260 259
linguistic information. Neural Comput Appl. doi:10.1007/
s00521-016-2378-5
33. Tian Z-P, Wang J, Wang J-Q, Zhang H-Y (2016) Simplified
neutrosophic linguistic multi-criteria group decision-making
approach to green product development. Group Decis Negot.
doi:10.1007/s10726-016-9479-5
34. Tian Z-P, Wang J, Zhang H-Y, Wang J-Q (2016) Multi-criteria
decision-making based on generalized prioritized aggregation
operators under simplified neutrosophic uncertain linguistic
environment. Int J Mach Learn Cybern. doi:10.1007/s13042-016-
0552-9
35. Ma Y-X, Wang J-Q, Wang J, Wu X-H (2016) An interval neu-
trosophic linguistic multi-criteria group decision-making method
and its application in selecting medical treatment options. Neural
Comput Appl. doi:10.1007/s00521-016-2203-1
36. Chan H-K, Wang X-J, Raffoni A (2014) An integrated approach
for green design: life-cycle, fuzzy AHP and environmental
management accounting. Br Account Rev 46(4):344–360
37. Chan H-K, Wang X-J, White GRT, Yip N (2013) An extended
fuzzy-AHP approach for the evaluation of green product designs.
IEEE Trans Eng Manag 60(2):327–339
38. Ye J (2015) Some weighted aggregation operators of trapezoidal
neutrosophic numbers and their multiple attribute decision mak-
ing method. http://www.gallup.unm.edu/~smarandache/SomeWeightedAggregationOperators.pdf
39. Deli I, Şubaş Y (2016) A ranking method of single valued neu-
trosophic numbers and its applications to multi-attribute decision
making problems. Int J Mach Learn Cybern. doi:10.1007/s13042-
016-0505-3
40. Broumi S, Smarandache F (2014) Single valued neutrosophic
trapezoid linguistic aggregation operators based multi-attribute
decision making. Bull Pure Appl Sci Math Stat 33(2):135–155
41. Said B, Smarandache F (2016) Multi-attribute decision making
based on interval neutrosophic trapezoid linguistic aggregation
operators. Handb Res Gen Hybrid Set Struct Appl Soft Comput.
doi:10.5281/zenodo.49136
42. Tian Z-P, Wang J, Wang J-Q, Chen X-H (2015) Multi-criteria
decision-making approach based on gray linguistic weighted
Bonferroni mean operator. Int Trans Oper Res. doi:10.1111/itor.
12220
43. Ye J (2015) Multiple attribute group decision making based on
interval neutrosophic uncertain linguistic variables. Int J Mach
Learn Cybern. doi:10.1007/s13042-015-0382-1
44. Bonferroni C (1950) Sulle medie multiple di potenze. Bollettino
dell'Unione Matematica Italiana 5:267–270
45. Li D, Zeng W, Li J (2016) Geometric Bonferroni mean operators.
Int J Intell Syst. doi:10.1002/int.21822
46. Liu P, Zhang L, Liu X, Wang P (2016) Multi-valued neutrosophic
number Bonferroni mean operators with their applications in
multiple attribute group decision making. Int J Inf Technol Decis
Mak. doi:10.1142/s0219622016500346
47. Liu P-D, Jin F (2012) The trapezoid fuzzy linguistic Bonferroni
mean operators and their application to multiple attribute decision
making. Sci Iran 19(6):1947–1959
48. Zhu W-Q, Liang P, Wang L-J, Hou Y-R (2015) Triangular fuzzy
Bonferroni mean operators and their application to multiple
attribute decision making. J Intell Fuzzy Syst 29(4):1265–1272
49. Chen Z-S, Chin K-S, Li Y-L, Yang Y (2016) On generalized
extended Bonferroni means for decision making. IEEE Trans
Fuzzy Syst. doi:10.1109/tfuzz.2016.2540066
50. Zhang H-Y, Ji P, Wang J, Chen X-H (2017) A novel decision
support model for satisfactory restaurants utilizing social infor-
mation: a case study of TripAdvisor.com. Tour Manag 59: 281–297
51. Dubois D, Prade H (1983) Ranking fuzzy numbers in the setting
of possibility theory. Inf Sci 30(3):183–224
52. Wang H-B, Smarandache F, Zhang Y, Sunderraman R (2010)
Single valued neutrosophic sets. Multispace Multistruct
4:410–413
53. Deli I, Şubaş Y (2014) Single valued neutrosophic numbers and
their applications to multi-criteria decision making problem.
viXra preprint viXra 1412.0012
54. Xu Z-S, Yager R-R (2011) Intuitionistic fuzzy Bonferroni means.
IEEE Trans Syst Man Cybern B (Cybern) 41(2):568–578
55. Zhou W, He J-M (2012) Intuitionistic fuzzy normalized weighted
Bonferroni mean and its application in multi-criteria decision
making. J Appl Math 1110-757x:1–16
56. Shannon C-E (2001) A mathematical theory of communication.
ACM SIGMOBILE Mob Comput Commun Rev 5(1):3–55
57. López-de-Ipiña K, Solé-Casals J, Faundez-Zanuy M, Calvo P-M,
Sesa E, Martinez de Lizarduy U, Bergareche A (2016) Selection
of entropy based features for automatic analysis of essential
tremor. Entropy 18(5):184
58. Wei C, Yan F, Rodríguez R-M Entropy measures for hesitant
fuzzy sets and their application in multi-criteria decision making.
J Intell Fuzzy Syst (Preprint) 1–13
59. Verma R, Maheshwari S (2016) A new measure of divergence
with its application to multi-criteria decision making under fuzzy
environment. Neural Comput Appl. doi:10.1007/s00521-016-
2311-y
60. Kumar A, Peeta S (2015) Entropy weighted average method for
the determination of a single representative path flow solution for
the static user equilibrium traffic assignment problem. Transp Res
B Methodol 71(4):213–229
61. Yue Z (2014) TOPSIS-based group decision making methodol-
ogy in intuitionistic fuzzy setting. Inf Sci 277(2):141–153
62. Wang J-H, Hao J-Y (2006) A new version of 2-tuple fuzzy lin-
guistic representation model for computing with words. IEEE
Trans Fuzzy Syst 14(3):435–445