Information Sciences 375 (2017) 183–201
Three-way decisions with intuitionistic fuzzy decision-theoretic rough sets based on point operators

Decui Liang a, Zeshui Xu b,∗, Dun Liu c

a School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
b Business School, Sichuan University, Chengdu, Sichuan 610065, China
c School of Economics and Management, Southwest Jiaotong University, Chengdu 610031, China

∗ Corresponding author. E-mail addresses: decuiliang@126.com (D. Liang), xu_zeshui@263.net (Z. Xu), newton83@163.com (D. Liu).
http://dx.doi.org/10.1016/j.ins.2016.09.039
Article info

Article history: Received 17 October 2015; Revised 2 August 2016; Accepted 14 September 2016; Available online 15 September 2016

Keywords: Three-way decisions; Decision-theoretic rough sets; Intuitionistic fuzzy sets; Point operator

Abstract
Three-way decisions with decision-theoretic rough sets (DTRSs), as a typical risk decision method, are generated from Bayesian decision theory and offer three kinds of decision strategies, i.e., the acceptance decision, the deferment (non-commitment) decision and the rejection decision. The construction of three-way decisions in complex decision-making contexts poses enormous challenges, and the determination of loss functions is one of the key steps. In this paper, we discuss the decision principles of three-way decision rules based
on the variation of loss functions with intuitionistic fuzzy sets (IFSs). More specifically, we
introduce the intuitionistic fuzzy point operator (IFPO) into DTRSs and explore three-way
decisions. Firstly, we construct a loss function matrix with the point operator and ana-
lyze its corresponding properties. IFPO implies one type of variation modes for the loss
functions of three-way decisions. With respect to the point operator, we show that the
prerequisites among loss functions still hold in each stage. Secondly, given the loss func-
tions, we construct the corresponding three-way decision model and deduce three-way
decisions. Finally, with the aid of information entropy theory, we further investigate which
stage may be most suitable to make the decision. This study extends the range of applica-
tions of three-way decisions to the new intuitionistic fuzzy environment.
©2016 Elsevier Inc. All rights reserved.
1. Introduction
As a risk decision-making method, three-way decisions [12,22,44] are composed of the acceptance decision, the defer-
ment (non-commitment) decision and the rejection decision. Different situations may have different interpretations of these
decisions. Unlike certain (two-way) decision making, three-way decisions add a deferment strategy. The idea of three-way decisions is consistent with human cognition in solving real-world problems [43] . Since it was proposed by
Yao [41,42] , it has attracted the attention of researchers and has been applied in many fields, such as investment decision-
making [26] , information filtering [17] , text classification [18] , risk decision-making [19] , cluster analysis [24,48] , government
decision-making [28] , web-based support systems [40] , approximations of fuzzy sets [8] , etc. Three-way decisions can pro-
vide a semantic mechanism to help us make a more reasonable decision.
Among the research on three-way decisions, the most typical model is decision-theoretic rough sets (DTRSs) [10,15,19,45]. As an extension of rough sets [31], DTRSs have greatly pushed forward the development of three-way decisions
[21,34,41,50]. In light of the Bayesian decision procedure [7], DTRSs were proposed by Yao et al. [41,42] in the rough set con-
text. Considering three pairwise disjoint regions of rough sets (i.e., the positive region POS( C ), the boundary region BND( C )
and the negative region NEG( C )), Yao [44–46] constructed three-way decisions with DTRSs and provided the corresponding
interpretations. Three-way decisions with DTRSs are deduced based on the minimum of the overall risk and generate three
types of decision rules [44] . The rules derived from the positive region produce the acceptance decisions. The rules coming
with the negative region give rise to the rejection decisions, while the rules (coming with the boundary region) result in
the decisions of non-commitment. Three-way decisions with DTRSs not only consider the practical decision semantics, but
also involve the relevant risks [22,23] .
With regard to three-way decisions with DTRSs, there is a substantial body of literature. For example, Deng and Yao [9] proposed an information-theoretic approach for the interpretation and determination of probabilistic thresholds. On the basis of the results presented in Ref. [9], Azam and Yao [3] further explored the determination of the probabilistic thresholds
in the framework of game-theoretic rough set (GTRS) model. By combining the relative and absolute information, Li and
Xu [21] proposed a double-quantification decision-theoretic rough set (DTRS) model. In light of the different combinations of loss
functions, Liu et al. [27] summarized four-level choosing criteria for probabilistic rules. Qian et al. [33] developed a multi-
granulation decision-theoretic rough set based on DTRSs and multi-granular structures. In order to determine the parameters
for the probabilistic rough fuzzy set, Sun et al. [34] proposed a decision-theoretic rough fuzzy set. Zhang and Miao [50] es-
tablished a fundamental reduction framework for two-category DTRSs. In addition, Liang and Liu [22] effectively estimated
the loss functions in the format of hesitant fuzzy sets. Within the aforementioned literature, the determination of the loss functions of DTRSs is the most prominent problem [23]. Nowadays, uncertain evaluation scenarios [14,23] and dynamic environments [20,29,30,46,47] are novel research directions for three-way decisions with DTRSs. These two research directions vastly enrich the adaptability of three-way decisions. Meanwhile, the construction of three-way decisions under complex decision-making contexts creates enormous challenges.
Intuitionistic fuzzy sets (IFSs) are powerful in dealing with the uncertainty, imprecision and vagueness [5,13] . IFSs, which
are characterized by a membership function and a non-membership function, were introduced by Atanassov [1,2] . Compared
with fuzzy sets [35,49] , IFSs capture a duality property during the evaluation. For instance, the expert may concentrate both
on pros and cons in a vote [25] . By introducing the new evaluation format of loss function with intuitionistic fuzzy num-
bers (IFNs) [39] , Liang and Liu [23] proposed a naive model of intuitionistic fuzzy decision-theoretic rough sets (IFDTRSs).
However, IFSs are normally based on static information and do not involve variation [36,37]. Xia and Xu [36] pointed out that, in some situations, intuitionistic fuzzy information can change. Taking expert coordination as an example, some people in the abstention group tend to cast affirmative votes, others are dissenters, and still others tend to abstain from voting [25]. In the existing literature on IFSs, the intuitionistic fuzzy point operator (IFPO) defined by Atanassov [2] is suitable for this scenario. Based on the point operator, Burillo and Bustince [4] discussed construction theorems for IFSs. By using the IFPO, Liu and Wang [25] presented a new method for solving multi-criteria decision-making problems. Xia and Xu [36] and Xia [37] developed a series of intuitionistic fuzzy point operators (IFPOs). In the framework of IFDTRSs, we introduce the IFPOs into DTRSs and analyze the construction of three-way decisions. In this situation, the IFPO provides one type of variation mode for interpreting the loss functions of three-way decisions. From the single-stage (or each-period) viewpoint, we generate the decision rules of each stage based on the given loss functions with the IFPO. From the continuous (or multi-stage) perspective, we further investigate the determination of the specific stage with the aid of information entropy theory. This provides a semantic criterion for explaining at which stage it is best to make the decision. Three-way decisions with point operators provide flexibility in the selection of decision rules. Our study extends the range of applications of three-way decisions and makes them accommodate these complex scenarios.
The remainder of this paper is organized as follows: Section 2 provides basic concepts of IFSs and IFPO. In Section 3 , we
present three-way decisions with IFDTRSs based on point operators. Section 4 further explores the application of three-way
decisions to determine the decision stage with the aid of information entropy theory. In Section 5 , we use an example to
elaborate the risk decision-making of three-way decisions and the characteristics. Section 6 concludes the paper and points
out the future research directions.
2. Preliminaries
Basic concepts, notations and results of IFSs [1,39] and the corresponding IFPO [2,4,25,36,37] are briefly reviewed in this
section.
2.1. Intuitionistic fuzzy sets (IFSs)
The concept of IFS [1] is an extension of the concept of fuzzy set [49] . In the following, we review some basic concepts
related to IFSs [36,38] :
Definition 1 [1,39]. Let U be a fixed set. An intuitionistic fuzzy set (IFS) E on U can be represented as the following mathematical symbol:

E = {⟨f, μ_E(f), ν_E(f)⟩ | f ∈ U},

where the functions μ_E(f): U → [0, 1] and ν_E(f): U → [0, 1] are the membership degree and the non-membership degree of f to E, respectively.
According to Definition 1, we describe a decision-making problem with the positive and negative perspectives simultaneously. By Definition 1, the relationship between μ_E(f) and ν_E(f) for f ∈ U is: 0 ≤ μ_E(f) + ν_E(f) ≤ 1. Meanwhile, the hesitation (or uncertainty) membership of f in U is calculated as [38,39]:

π_E(f) = 1 − μ_E(f) − ν_E(f),

where the value of π_E(f) satisfies 0 ≤ π_E(f) ≤ 1. If the value of π_E(f) is small, then the knowledge of f is more certain, and vice versa [25,36,37,39]. According to the notations reported in Refs. [38,39], an intuitionistic fuzzy number (IFN) can be simply denoted as f = (μ_f, ν_f), where μ_f, ν_f ≥ 0, μ_f + ν_f ≤ 1 and π_f = 1 − μ_f − ν_f. In light of the results presented in Refs. [38,39], we adopt IFNs to analyze three-way decisions below.
Given two IFNs f_1 = (μ_f1, ν_f1) and f_2 = (μ_f2, ν_f2), their basic operations are [1,39]:

(1) f_1 ⊕ f_2 = (μ_f1 + μ_f2 − μ_f1 μ_f2, ν_f1 ν_f2);
(2) f_1 ⊗ f_2 = (μ_f1 μ_f2, ν_f1 + ν_f2 − ν_f1 ν_f2);
(3) k f_1 = (1 − (1 − μ_f1)^k, (ν_f1)^k), where k ∈ R and k > 0.
In order to compare IFNs, the score function and the accuracy function of an IFN are defined in advance [6,11,38] .
Definition 2 [11]. Let f = (μ_f, ν_f) be an IFN, then the score function of f is calculated as:

S(f) = μ_f − ν_f, (1)

and the accuracy function of f is defined as:

H(f) = μ_f + ν_f, (2)

where −1 ≤ S(f) ≤ 1 and 0 ≤ H(f) ≤ 1.
With respect to Definition 2, it is worth noting that the accuracy in (2) is quite different from the notion of classification accuracy: it measures the degree of certainty of the IFN by adding the membership degree and the non-membership degree. In addition, Xu and Yager [38] further gave an order relation between two IFNs, which can be used to rank IFNs.
Definition 3 [38]. Given two IFNs f_1 = (μ_f1, ν_f1) and f_2 = (μ_f2, ν_f2), we confirm their relationships below:

(1) If S(f_1) > S(f_2), then f_2 is smaller than f_1, denoted by f_1 ≻ f_2;
(2) If S(f_1) < S(f_2), then f_1 is smaller than f_2, denoted by f_1 ≺ f_2;
(3) If S(f_1) = S(f_2), then
(i) If H(f_1) = H(f_2), then f_1 is equal to f_2, denoted by f_1 ∼ f_2;
(ii) If H(f_1) > H(f_2), then f_2 is smaller than f_1, denoted by f_1 ≻ f_2;
(iii) If H(f_1) < H(f_2), then f_1 is smaller than f_2, denoted by f_1 ≺ f_2.
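To make the ranking rule of Definitions 2 and 3 concrete, the following minimal Python sketch (our own illustration, not part of the original paper; the function names are ours) implements the score, the accuracy and the resulting comparison of two IFNs represented as (μ, ν) pairs.

```python
# A minimal sketch of the score/accuracy comparison of IFNs (Definitions 2-3).
# An IFN is a pair (mu, nu) with mu, nu >= 0 and mu + nu <= 1.

def score(f):
    """Score S(f) = mu_f - nu_f, Eq. (1)."""
    mu, nu = f
    return mu - nu

def accuracy(f):
    """Accuracy H(f) = mu_f + nu_f, Eq. (2)."""
    mu, nu = f
    return mu + nu

def compare(f1, f2):
    """Return 1 if f1 is ranked larger, -1 if smaller, 0 if equal (Definition 3)."""
    if score(f1) != score(f2):
        return 1 if score(f1) > score(f2) else -1
    if accuracy(f1) == accuracy(f2):
        return 0
    return 1 if accuracy(f1) > accuracy(f2) else -1

# Example: f1 = (0.6, 0.2) and f2 = (0.5, 0.1) share the score 0.4,
# so the accuracy breaks the tie and f1 is ranked larger than f2.
print(compare((0.6, 0.2), (0.5, 0.1)))   # prints 1
```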
2.2. Intuitionistic fuzzy point operator (IFPO)
The intuitionistic fuzzy point operator (IFPO) provides a novel variation viewpoint for studying IFSs. It can transform an IFS into a new IFS and automatically reduce the uncertainty of IFSs [25,36,37]. Based on the existing intuitionistic fuzzy point operators (IFPOs), Liu and Wang [25] proposed a representative IFPO. The IFPO for aggregating IFSs is defined as follows:
Definition 4 [19]. Let IFS(U) be the set of all IFSs on U and E ∈ IFS(U). For each f ∈ U, taking κ_f, η_f ∈ [0, 1] and κ_f + η_f ≤ 1, then the IFPO F_{κ_f,η_f}(U): IFS(U) → IFS(U) is:

F_{κ_f,η_f}(E) = {⟨f, μ_E(f) + κ_f π_E(f), ν_E(f) + η_f π_E(f)⟩ | f ∈ U}.
For Definition 4, the above operation F_{κ_f,η_f} can transform an IFS into another one and has a variation property. With the aid of the point operator F_{κ_f,η_f}, it can reduce the uncertainty information of an IFS and increase its membership and non-membership information [37]. The variations of the membership degree and non-membership degree of the IFS E are derived from the uncertainty reduction, i.e., κ_f π_E(f) and η_f π_E(f). Considering the dynamical variation of IFSs, the operation F_{κ_f,η_f} has the following properties:
Property 1 [19]. Let F^0_{κ_f,η_f}(E) = E = {⟨f, μ_E(f), ν_E(f)⟩ | f ∈ U}, κ_f, η_f ∈ [0, 1] and κ_f + η_f ≤ 1, then

F^n_{κ_f,η_f}(E) = {⟨f, μ_E(f) + κ_f π_E(f)·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f), ν_E(f) + η_f π_E(f)·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f)⟩ | f ∈ U},

where κ_f + η_f ≠ 0, n is a positive integer and n = 0 denotes the original situation. If κ_f + η_f = 0, then F^n_{κ_f,η_f}(E) = E.
Based on the result of Property 1, the point operator F_{κ_f,η_f} mainly adopts the iteration idea to regenerate the value of an IFS. The parameters n, κ_f and η_f play an important role in the variation of IFSs.
Table 1
The intuitionistic fuzzy loss function matrix regarding the risk or cost of actions in the different states [23].

        C                             ¬C
a_P     λ_PP = (μ_λPP, ν_λPP)         λ_PN = (μ_λPN, ν_λPN)
a_B     λ_BP = (μ_λBP, ν_λBP)         λ_BN = (μ_λBN, ν_λBN)
a_N     λ_NP = (μ_λNP, ν_λNP)         λ_NN = (μ_λNN, ν_λNN)
When κ_f and η_f are constant, the point operator F^n_{κ_f,η_f}(E) depends on n [25]. If κ_f + η_f = 0, then it implies that the values of κ_f and η_f are both 0. Under the point operation, the hesitation membership of an IFS is analyzed as follows:
Property 2 [19]. Under the point operation F_{κ_f,η_f}, the hesitation membership of f in U is calculated as:

π_{F^n_{κ_f,η_f}(E)}(f) = (1 − κ_f − η_f)^n π_E(f).
For Property 2, we can obtain the following relationship: π_{F^0_{κ_f,η_f}(E)}(f) = π_E(f). The results presented in Definition 4 and Properties 1–2 are based on the original IFSs [36,37]. If we apply the defined operation to IFNs, then the following definition is given:
Definition 5 [36]. For an IFN f = (μ_f, ν_f), let κ_f, η_f ∈ [0, 1], then we define the point operator F_{κ_f,η_f}: IFN → IFN as follows:

F_{κ_f,η_f}(f) = (μ_f + κ_f π_f, ν_f + η_f π_f). (3)
In light of Properties 1–2 and Definition 5, the IFN f = (μ_f, ν_f) under the point operator has the following property:
Property 3 [36]. Let f = (μ_f, ν_f) be an IFN, and n be a positive integer, taking κ_f, η_f ∈ [0, 1] and κ_f + η_f ≤ 1, then

F^n_{κ_f,η_f}(f) = (μ_f + κ_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f), ν_f + η_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f)), (4)

where κ_f + η_f ≠ 0, π_f = 1 − μ_f − ν_f and π_{F^n_{κ_f,η_f}(f)}(f) = (1 − κ_f − η_f)^n π_f. The value of π_f satisfies the condition 0 ≤ π_f ≤ 1. If κ_f + η_f = 0, then F^n_{κ_f,η_f}(f) = f.
From the results of Property 3, we deduce the following relationships for the IFN f = (μ_f, ν_f):

(1) μ_f ≤ μ_f + κ_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f);
(2) ν_f ≤ ν_f + η_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f);
(3) (1 − κ_f − η_f)^n π_f ≤ π_f;
(4) μ_f + κ_f π_f·(1 − (1 − κ_f − η_f)^{n−1})/(κ_f + η_f) ≤ μ_f + κ_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f);
(5) ν_f + η_f π_f·(1 − (1 − κ_f − η_f)^{n−1})/(κ_f + η_f) ≤ ν_f + η_f π_f·(1 − (1 − κ_f − η_f)^n)/(κ_f + η_f);
(6) (1 − κ_f − η_f)^n π_f ≤ (1 − κ_f − η_f)^{n−1} π_f.
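To make the point operator of Definition 5 and Property 3 easier to experiment with, the following minimal Python sketch (our own illustration, not part of the paper; the function names are ours) implements the closed form F^n_{κ_f,η_f}(f) for an IFN represented as a pair (μ, ν), together with the transformed hesitation degree of Property 2.

```python
# A minimal sketch of the intuitionistic fuzzy point operator (Definition 5 / Property 3),
# assuming an IFN is represented as (mu, nu). The closed form avoids iterating n times.

def hesitation(f):
    """pi_f = 1 - mu_f - nu_f."""
    mu, nu = f
    return 1.0 - mu - nu

def point_operator(f, kappa, eta, n):
    """F^n_{kappa,eta}(f), Eq. (4); returns f unchanged when kappa + eta = 0."""
    mu, nu = f
    pi = hesitation(f)
    if kappa + eta == 0:
        return (mu, nu)
    factor = (1.0 - (1.0 - kappa - eta) ** n) / (kappa + eta)
    return (mu + kappa * pi * factor, nu + eta * pi * factor)

def hesitation_after(f, kappa, eta, n):
    """Hesitation of F^n_{kappa,eta}(f) = (1 - kappa - eta)^n * pi_f (Property 2)."""
    return (1.0 - kappa - eta) ** n * hesitation(f)
```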
3. Three-way decisions deriving from IFDTRSs with point operators
Considering that the loss functions of the IFDTRS model are associated with the IFPO, we construct the new basic model and deduce the corresponding three-way decisions in this section. In light of the Bayesian decision procedure and the results of Ref. [23], three-way decisions with IFDTRSs based on point operators comprise three main steps: (1) generating the loss function matrix with the IFPO; (2) constructing the basic model of IFDTRSs with point operators; (3) forming three-way decisions.
3.1. Loss function matrix with the new interpretation of point operators
In this section, we assume that IFNs are associated with the point operator and introduce this idea into the IFDTRSs of Ref. [23]. Considering the effect of the point operator on loss functions, we first reconstruct the loss function matrix of the IFDTRS model presented in Ref. [23]. Based on the Bayesian decision procedure [41,42], Liang and Liu [23] constructed IFDTRSs and elicited three-way decisions. Inspired by the results reported in Ref. [23], we continue to determine the loss functions of DTRSs with IFNs. Unlike Ref. [23], we also consider that the loss functions with IFNs can be impacted by the point operator presented in Definition 5. In this situation, the IFPO implies one type of variation mode for the loss functions of three-way decisions.
The IFDTRS model is composed of two states and three actions [23]. The set of states is given by Ω = {C, ¬C}, indicating that an object is in C and not in C, respectively. The set of actions is given by A = {a_P, a_B, a_N}, where a_P, a_B, and a_N represent three actions when classifying the object x, namely, deciding x ∈ POS(C), deciding x ∈ BND(C), and deciding x ∈ NEG(C), respectively. The intuitionistic fuzzy loss function matrix regarding the risk or cost of actions in the different states is given in Table 1 [23].
In Table 1, the loss functions λ_•• are IFNs (• = P, B, N). λ_PP, λ_BP and λ_NP denote the loss degrees incurred for taking actions a_P, a_B and a_N, respectively, when an object belongs to C. Similarly, λ_PN, λ_BN and λ_NN denote the loss degrees incurred
Table 2
The new loss function matrix associated with the point operator.

        C                                         ¬C
a_P     λ^n_PP = F^n_{κ_λPP,η_λPP}(λ_PP)          λ^n_PN = F^n_{κ_λPN,η_λPN}(λ_PN)
a_B     λ^n_BP = F^n_{κ_λBP,η_λBP}(λ_BP)          λ^n_BN = F^n_{κ_λBN,η_λBN}(λ_BN)
a_N     λ^n_NP = F^n_{κ_λNP,η_λNP}(λ_NP)          λ^n_NN = F^n_{κ_λNN,η_λNN}(λ_NN)
for taking the same actions when the object belongs to ¬C. Meanwhile, the hesitation memberships of the loss functions are: π_λPP = 1 − μ_λPP − ν_λPP, π_λBP = 1 − μ_λBP − ν_λBP, π_λNP = 1 − μ_λNP − ν_λNP, π_λPN = 1 − μ_λPN − ν_λPN, π_λBN = 1 − μ_λBN − ν_λBN and π_λNN = 1 − μ_λNN − ν_λNN.
For the loss functions of the IFDTRS model, Liang and Liu [22,23] provided a semantic interpretation. That is, the loss of classifying an object x belonging to C into the positive region POS(C) is less than or equal to the loss of classifying x into the boundary region BND(C), and both of these losses are strictly less than the loss of classifying x into the negative region NEG(C). The reverse orders of these losses are suitable for classifying an object not in C. Hence, we confirm the following relationships of loss functions [23]:

λ_PP ⪯ λ_BP ≺ λ_NP,
λ_NN ⪯ λ_BN ≺ λ_PN,

where "⪯" ("≺") denotes the (strict) order relation on IFNs [22,23]. With respect to Definitions 2 and 3, Liang and Liu [23] further stipulated a set of constraint conditions. The conditions are [23]:
μ_λPP < μ_λBP < μ_λNP, (5)
ν_λNP < ν_λBP < ν_λPP, (6)
μ_λNN < μ_λBN < μ_λPN, (7)
ν_λPN < ν_λBN < ν_λNN. (8)
It is notable that (5)–(8) are the prerequisites of the IFDTRSs. (5) and (7) describe the membership degree relationships among the loss functions, while (6) and (8) represent the non-membership degree relationships among the loss functions. Considering the effect of the point operator on loss functions, we reconstruct the loss function matrix of Table 1. The result is shown in Table 2.
For Table 2, n is a positive integer, which can be regarded as the corresponding stage [25]. Under the point operator, the loss function λ_PP is impacted by the parameters κ_λPP and η_λPP. Analogously, the parameters κ_λPN, η_λPN, κ_λBP, η_λBP, κ_λBN, η_λBN, κ_λNP, η_λNP, κ_λNN and η_λNN affect the other loss functions. For the parameters κ_λ•• and η_λ••, we take κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1 (• = P, B, N) [25]. In the original situation, the value of n is 0. In this case, the loss functions are decided by the results of Table 1, i.e., λ^0_PP = F^0_{κ_λPP,η_λPP}(λ_PP) = (μ_λPP, ν_λPP), λ^0_PN = F^0_{κ_λPN,η_λPN}(λ_PN) = (μ_λPN, ν_λPN), λ^0_BP = F^0_{κ_λBP,η_λBP}(λ_BP) = (μ_λBP, ν_λBP), λ^0_BN = F^0_{κ_λBN,η_λBN}(λ_BN) = (μ_λBN, ν_λBN), λ^0_NP = F^0_{κ_λNP,η_λNP}(λ_NP) = (μ_λNP, ν_λNP) and λ^0_NN = F^0_{κ_λNN,η_λNN}(λ_NN) = (μ_λNN, ν_λNN). With regard to the results of Property 3, we calculate the loss functions under the point operators as follows:
Proposition 1. On the basis of Property 3 and Table 1, the loss functions of Table 2 under the point operators are calculated, respectively, as:

F^n_{κ_λPP,η_λPP}(λ_PP) = (μ_λPP + κ_λPP π_λPP·(1 − (1 − κ_λPP − η_λPP)^n)/(κ_λPP + η_λPP), ν_λPP + η_λPP π_λPP·(1 − (1 − κ_λPP − η_λPP)^n)/(κ_λPP + η_λPP)),

F^n_{κ_λBP,η_λBP}(λ_BP) = (μ_λBP + κ_λBP π_λBP·(1 − (1 − κ_λBP − η_λBP)^n)/(κ_λBP + η_λBP), ν_λBP + η_λBP π_λBP·(1 − (1 − κ_λBP − η_λBP)^n)/(κ_λBP + η_λBP)),

F^n_{κ_λNP,η_λNP}(λ_NP) = (μ_λNP + κ_λNP π_λNP·(1 − (1 − κ_λNP − η_λNP)^n)/(κ_λNP + η_λNP), ν_λNP + η_λNP π_λNP·(1 − (1 − κ_λNP − η_λNP)^n)/(κ_λNP + η_λNP)),

F^n_{κ_λPN,η_λPN}(λ_PN) = (μ_λPN + κ_λPN π_λPN·(1 − (1 − κ_λPN − η_λPN)^n)/(κ_λPN + η_λPN), ν_λPN + η_λPN π_λPN·(1 − (1 − κ_λPN − η_λPN)^n)/(κ_λPN + η_λPN)),

F^n_{κ_λBN,η_λBN}(λ_BN) = (μ_λBN + κ_λBN π_λBN·(1 − (1 − κ_λBN − η_λBN)^n)/(κ_λBN + η_λBN), ν_λBN + η_λBN π_λBN·(1 − (1 − κ_λBN − η_λBN)^n)/(κ_λBN + η_λBN)),

F^n_{κ_λNN,η_λNN}(λ_NN) = (μ_λNN + κ_λNN π_λNN·(1 − (1 − κ_λNN − η_λNN)^n)/(κ_λNN + η_λNN), ν_λNN + η_λNN π_λNN·(1 − (1 − κ_λNN − η_λNN)^n)/(κ_λNN + η_λNN)),

where κ_λ•• + η_λ•• ≠ 0 and n is a positive integer. If κ_λ•• + η_λ•• = 0, then F^n_{κ_λ••,η_λ••}(λ_••) = λ_•• (• = P, B, N).
When κ_λ•• + η_λ•• = 0, it implies κ_λ•• = 0 and η_λ•• = 0 (• = P, B, N). In this case, the new loss functions associated with the point operator presented in Table 2 become the results of Table 1. Hence, the IFDTRSs proposed by Liang and Liu [23] are a special case of the novel IFDTRSs with the point operator. In this paper, we mainly discuss the precondition κ_λ•• + η_λ•• ≠ 0, and further recalculate the corresponding hesitant memberships of the loss functions (• = P, B, N).
Proposition 2. On the basis of Property 3 and Proposition 1, the hesitant memberships of the loss functions under the point operators are calculated as follows:

π_{F^n_{κ_λPP,η_λPP}(λ_PP)}(λ_PP) = (1 − κ_λPP − η_λPP)^n π_λPP,
π_{F^n_{κ_λBP,η_λBP}(λ_BP)}(λ_BP) = (1 − κ_λBP − η_λBP)^n π_λBP,
π_{F^n_{κ_λNP,η_λNP}(λ_NP)}(λ_NP) = (1 − κ_λNP − η_λNP)^n π_λNP,
π_{F^n_{κ_λPN,η_λPN}(λ_PN)}(λ_PN) = (1 − κ_λPN − η_λPN)^n π_λPN,
π_{F^n_{κ_λBN,η_λBN}(λ_BN)}(λ_BN) = (1 − κ_λBN − η_λBN)^n π_λBN,
π_{F^n_{κ_λNN,η_λNN}(λ_NN)}(λ_NN) = (1 − κ_λNN − η_λNN)^n π_λNN,

where κ_λ•• + η_λ•• ≠ 0 and n is a positive integer. If κ_λ•• + η_λ•• = 0, then π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••) = π_λ•• (• = P, B, N).
For clarity, we use Example 1 to illustrate the calculations of Propositions 1 and 2 .
Example 1. According to the loss function matrix of Table 1, the loss functions are given as follows: λ_PP = (μ_λPP, ν_λPP) = (0.005, 0.5), λ_PN = (μ_λPN, ν_λPN) = (0.8, 0.1), λ_BP = (μ_λBP, ν_λBP) = (0.1, 0.3), λ_BN = (μ_λBN, ν_λBN) = (0.3, 0.3), λ_NP = (μ_λNP, ν_λNP) = (0.4, 0.1) and λ_NN = (μ_λNN, ν_λNN) = (0.01, 0.6). In order to illustrate the calculation procedure of the new loss functions with the point operator, we take the loss function λ_PN as an example. Based on the result of [25], the values of the parameters κ_λPN and η_λPN are equal to the initial membership degree and the initial non-membership degree of λ_PN, respectively. Hence, we set κ_λPN = μ_λPN = 0.8 and η_λPN = ν_λPN = 0.1.
When n is 0, F^0_{0.8,0.1}(λ_PN) = λ_PN = (0.8, 0.1) and π_{F^0_{0.8,0.1}(λ_PN)}(λ_PN) = π_λPN = 1 − μ_λPN − ν_λPN = 0.1. If n is 1, then we calculate the loss function F^1_{0.8,0.1}(λ_PN), and thus

F^1_{0.8,0.1}(λ_PN) = (μ_λPN + 0.8 π_λPN·(1 − (1 − 0.8 − 0.1))/(0.8 + 0.1), ν_λPN + 0.1 π_λPN·(1 − (1 − 0.8 − 0.1))/(0.8 + 0.1))
= (μ_λPN + 0.8 π_λPN, ν_λPN + 0.1 π_λPN) = (0.88, 0.11),

π_{F^1_{0.8,0.1}(λ_PN)}(λ_PN) = (1 − 0.8 − 0.1) π_λPN = 0.01.

If n is 2, then we further calculate the loss function F^2_{0.8,0.1}(λ_PN). The results are:

F^2_{0.8,0.1}(λ_PN) = (μ_λPN + 0.8 π_λPN·(1 − (1 − 0.8 − 0.1)^2)/(0.8 + 0.1), ν_λPN + 0.1 π_λPN·(1 − (1 − 0.8 − 0.1)^2)/(0.8 + 0.1))
= (μ_λPN + 0.88 π_λPN, ν_λPN + 0.11 π_λPN) = (0.888, 0.111),

π_{F^2_{0.8,0.1}(λ_PN)}(λ_PN) = (1 − 0.8 − 0.1)^2 π_λPN = 0.001.
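As a quick cross-check of these numbers (our own illustration, not part of the paper), the point_operator sketch from Section 2 reproduces the values above for λ_PN with κ = 0.8, η = 0.1 and n = 0, 1, 2:

```python
# Reproducing the numbers of Example 1 with the sketch from Section 2
# (point_operator and hesitation_after are assumed to be defined there).
lam_PN = (0.8, 0.1)        # lambda_PN with kappa = mu = 0.8 and eta = nu = 0.1
for n in (0, 1, 2):
    print(n, point_operator(lam_PN, 0.8, 0.1, n), hesitation_after(lam_PN, 0.8, 0.1, n))
# Up to floating-point rounding, this prints:
# n = 0: (0.8, 0.1),     pi = 0.1
# n = 1: (0.88, 0.11),   pi = 0.01
# n = 2: (0.888, 0.111), pi = 0.001
```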
With the aid of the point operator, Example 1 illustrates in detail the gradual variation of the loss functions, which is driven by the iterations over n. Based on the results of Proposition 2, we deduce the following property for the hesitant memberships of the loss functions:

Proposition 3. Suppose that the parameters κ_λ•• and η_λ•• are constant, i.e., κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1. Then the hesitant memberships of the loss functions with the point operators presented in Proposition 2 are monotonically non-increasing with the increase of n, namely,

π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••) ≤ π_{F^{n−1}_{κ_λ••,η_λ••}(λ_••)}(λ_••) ≤ ··· ≤ π_{F^i_{κ_λ••,η_λ••}(λ_••)}(λ_••) ≤ ··· ≤ π_{F^0_{κ_λ••,η_λ••}(λ_••)}(λ_••),

where • = P, B, N and 0 ≤ i ≤ n.
Proof. In light of the result of Proposition 2, π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••) = (1 − κ_λ•• − η_λ••)^n π_λ•• (• = P, B, N). In the original situation, the value of π_λ•• is given. Thus, we assume that π_λ•• is constant. Considering that n is an independent variable of π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••), we take its partial derivative with respect to n. When κ_λ•• and η_λ•• are constant, this partial derivative is computed as:

∂π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••)/∂n = ((1 − κ_λ•• − η_λ••)^n π_λ••)′ = π_λ•• ((1 − κ_λ•• − η_λ••)^n)′ = π_λ•• (1 − κ_λ•• − η_λ••)^n ln(1 − κ_λ•• − η_λ••).

From the results of Property 3, 0 ≤ π_λ•• ≤ 1. Because κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1, we can deduce the relationship 0 ≤ (1 − κ_λ•• − η_λ••) ≤ 1. Hence, we obtain:

∂π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••)/∂n = π_λ•• (1 − κ_λ•• − η_λ••)^n ln(1 − κ_λ•• − η_λ••) ≤ 0.

In view of the condition ∂π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••)/∂n ≤ 0, the statement in Proposition 3 holds.
Proposition 3 shows the variation of the hesitant memberships of the loss functions with n. It explains the reduction of the uncertainty information of an IFN [37]. Inspired by this idea, we continue to analyze the variations of the membership degrees and non-membership degrees of the loss functions with n, respectively. Let F^n_{κ_λ••,η_λ••}(λ_••) = (μ_{F^n_{κ_λ••,η_λ••}}(λ_••), ν_{F^n_{κ_λ••,η_λ••}}(λ_••)) (• = P, B, N).
Proposition 4. Let μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) (• = P, B, N). Suppose that the parameters κ_λ•• and η_λ•• are constant, i.e., κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1. Then μ_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-decreasing with the increase of n.
Proof. For the membership degree μ_{F^n_{κ_λ••,η_λ••}}(λ_••), the values of μ_λ•• and π_λ•• are given in the original situation. Hence, we assume that μ_λ•• and π_λ•• are constant. Considering that n is an independent variable of μ_{F^n_{κ_λ••,η_λ••}}(λ_••), we take its partial derivative with respect to n. When κ_λ•• and η_λ•• are constant, this partial derivative is computed as:

∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂n = (μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••))′
= −(κ_λ•• π_λ••/(κ_λ•• + η_λ••))·((1 − κ_λ•• − η_λ••)^n)′
= −(κ_λ•• π_λ••/(κ_λ•• + η_λ••))·(1 − κ_λ•• − η_λ••)^n ln(1 − κ_λ•• − η_λ••) ≥ 0.

In view of the condition ∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂n ≥ 0, the statement in Proposition 4 holds.
Proposition 5. Let ν_{F^n_{κ_λ••,η_λ••}}(λ_••) = ν_λ•• + η_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) (• = P, B, N). Suppose that the parameters κ_λ•• and η_λ•• are constant, i.e., κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1. Then ν_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-decreasing with the increase of n.
Proof. For the non-membership degree ν_{F^n_{κ_λ••,η_λ••}}(λ_••), the values of ν_λ•• and π_λ•• are given in the original situation, i.e., n = 0. Hence, we assume that ν_λ•• and π_λ•• are constant. Considering that n is an independent variable of ν_{F^n_{κ_λ••,η_λ••}}(λ_••), we take its partial derivative with respect to n. When κ_λ•• and η_λ•• are constant, this partial derivative is computed as:

∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂n = (ν_λ•• + η_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••))′
= −(η_λ•• π_λ••/(κ_λ•• + η_λ••))·((1 − κ_λ•• − η_λ••)^n)′
= −(η_λ•• π_λ••/(κ_λ•• + η_λ••))·(1 − κ_λ•• − η_λ••)^n ln(1 − κ_λ•• − η_λ••) ≥ 0.

In view of the condition ∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂n ≥ 0, the statement in Proposition 5 is satisfied.
From the results presented in Propositions 4 and 5, the variations of the membership degrees and non-membership degrees of the loss functions are synchronously non-decreasing with the increase of n. They explain the extension of the membership and non-membership information of an IFN [37]. Similarly, given the value of n, the variation of F^n_{κ_λ••,η_λ••}(λ_••) with the change of the parameters κ_λ•• and η_λ•• is investigated as follows.
Proposition 6. Let μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) (• = P, B, N). Suppose that the parameter n is constant, taking κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1. Then μ_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-decreasing with the increase of κ_λ••, whereas μ_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-increasing with the increase of η_λ••.
Proof. For the membership degree μ_{F^n_{κ_λ••,η_λ••}}(λ_••), the values of μ_λ•• and π_λ•• are given in the original situation. Considering that κ_λ•• and η_λ•• are independent variables of μ_{F^n_{κ_λ••,η_λ••}}(λ_••), we take its partial derivatives with respect to κ_λ•• and η_λ••, respectively. When n and η_λ•• are constant, the partial derivative with respect to κ_λ•• is computed as:

∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• = π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) + κ_λ•• π_λ••·∂[(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••)]/∂κ_λ••
= (π_λ••/(κ_λ•• + η_λ••)^2)·((κ_λ•• + η_λ••)(1 − (1 − κ_λ•• − η_λ••)^n) + κ_λ••((κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} − (1 − (1 − κ_λ•• − η_λ••)^n)))
= (π_λ••/(κ_λ•• + η_λ••)^2)·(η_λ•• (1 − (1 − κ_λ•• − η_λ••)^n) + κ_λ•• (κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1}) ≥ 0.

Analogously, when n and κ_λ•• are constant, the partial derivative with respect to η_λ•• is computed as:

∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• = κ_λ•• π_λ••·∂[(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••)]/∂η_λ••
= (κ_λ•• π_λ••/(κ_λ•• + η_λ••)^2)·((κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} + (1 − κ_λ•• − η_λ••)^n − 1).

In this case, the sign of ∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• mainly relies on the term (κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} + (1 − κ_λ•• − η_λ••)^n − 1, which forms a judging criterion. It can be verified that (κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} + (1 − κ_λ•• − η_λ••)^n ≤ 1. Thus, ∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• ≤ 0.

In view of ∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• ≥ 0 and ∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• ≤ 0, the statement in Proposition 6 holds.
Proposition 7. Let ν_{F^n_{κ_λ••,η_λ••}}(λ_••) = ν_λ•• + η_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) (• = P, B, N). Suppose that the parameter n is constant, taking κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1. Then ν_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-decreasing with the increase of η_λ••, whereas ν_{F^n_{κ_λ••,η_λ••}}(λ_••) is monotonically non-increasing with the increase of κ_λ••.
Proof. For the non-membership degree ν_{F^n_{κ_λ••,η_λ••}}(λ_••), the values of ν_λ•• and π_λ•• are given in the original situation. Considering that κ_λ•• and η_λ•• are independent variables of ν_{F^n_{κ_λ••,η_λ••}}(λ_••), we take its partial derivatives with respect to κ_λ•• and η_λ••, respectively. When n and κ_λ•• are constant, the partial derivative with respect to η_λ•• is computed as:

∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• = (π_λ••/(κ_λ•• + η_λ••)^2)·(κ_λ•• (1 − (1 − κ_λ•• − η_λ••)^n) + η_λ•• (κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1}) ≥ 0.

Analogously, when n and η_λ•• are constant, the partial derivative with respect to κ_λ•• is computed as follows:

∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• = (π_λ•• η_λ••/(κ_λ•• + η_λ••)^2)·((κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} + (1 − κ_λ•• − η_λ••)^n − 1).

The sign of ∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• mainly relies on the term (κ_λ•• + η_λ••) n (1 − κ_λ•• − η_λ••)^{n−1} + (1 − κ_λ•• − η_λ••)^n − 1, which forms a judging criterion. As in the proof of Proposition 6, this term is less than or equal to 0, so ∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• ≤ 0.

In view of ∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂η_λ•• ≥ 0 and ∂ν_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂κ_λ•• ≤ 0, the statement in Proposition 7 holds.
From the results reported in Propositions 6 and 7, we obtain the influence of the parameters κ_λ•• and η_λ•• on the membership degree and the non-membership degree of F^n_{κ_λ••,η_λ••}(λ_••) (• = P, B, N) when n is constant. The point operator can reassign the values of the membership degree and the non-membership degree of the IFN. The main strategy is to reduce the uncertainty of the IFN by modifying the value of n, as shown in the results of Propositions 3–5. Following the existing literature [25,36,37], we assume that the values of the parameters κ_λ•• and η_λ•• are constant in the rest of this paper. Then, we deduce the following limits based on the results reported in Propositions 3–5 and Ref. [25].
Corollary 1. Let the loss function of Table 2 be F^n_{κ_λ••,η_λ••}(λ_••) = (μ_{F^n_{κ_λ••,η_λ••}}(λ_••), ν_{F^n_{κ_λ••,η_λ••}}(λ_••)) and its hesitant membership be π_{F^n_{κ_λ••,η_λ••}(λ_••)}(λ_••) (• = P, B, N). If the parameters κ_λ•• and η_λ•• are constant, i.e., κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1, then

(1) lim_{n→∞} μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••/(κ_λ•• + η_λ••);
(2) lim_{n→∞} ν_{F^n_{κ_λ••,η_λ••}}(λ_••) = ν_λ•• + η_λ•• π_λ••/(κ_λ•• + η_λ••);
(3) lim_{n→∞} π_{F^n_{κ_λ••,η_λ••}}(λ_••) = 0.
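As a quick check of Corollary 1 against Example 1 (our own illustration, not part of the original paper): for λ_PN = (0.8, 0.1) with κ_λPN = 0.8 and η_λPN = 0.1, the limits are μ_∞ = 0.8 + 0.8·0.1/0.9 ≈ 0.889, ν_∞ = 0.1 + 0.1·0.1/0.9 ≈ 0.111 and π_∞ = 0, which the sequence (0.8, 0.1), (0.88, 0.11), (0.888, 0.111), … computed in Example 1 indeed approaches.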
According to the results presented in Propositions 3–7, the point operator may influence the values of the loss functions. When we construct three-way decisions, the loss functions must obey the prerequisites (5)–(8). With the change of n, we need to check the prerequisites for the new loss functions. Hence, we analyze the relationships among the loss functions under the point operator below:
Proposition 8. Let the loss functions of Table 2 be F^n_{κ_λ••,η_λ••}(λ_••) = (μ_{F^n_{κ_λ••,η_λ••}}(λ_••), ν_{F^n_{κ_λ••,η_λ••}}(λ_••)) and let them comply with (5)–(8) in the original situation (• = P, B, N). If the parameters κ_λ•• = κ and η_λ•• = η are constant, taking κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1, then

μ_{F^n_{κ_λPP,η_λPP}}(λ_PP) ≤ μ_{F^n_{κ_λBP,η_λBP}}(λ_BP) ≤ μ_{F^n_{κ_λNP,η_λNP}}(λ_NP),
ν_{F^n_{κ_λNP,η_λNP}}(λ_NP) ≤ ν_{F^n_{κ_λBP,η_λBP}}(λ_BP) ≤ ν_{F^n_{κ_λPP,η_λPP}}(λ_PP),
μ_{F^n_{κ_λNN,η_λNN}}(λ_NN) ≤ μ_{F^n_{κ_λBN,η_λBN}}(λ_BN) ≤ μ_{F^n_{κ_λPN,η_λPN}}(λ_PN),
ν_{F^n_{κ_λPN,η_λPN}}(λ_PN) ≤ ν_{F^n_{κ_λBN,η_λBN}}(λ_BN) ≤ ν_{F^n_{κ_λNN,η_λNN}}(λ_NN).
Proof. The membership degree of F^n_{κ_λ••,η_λ••}(λ_••) can be calculated based on Proposition 1 (• = P, B, N), i.e., μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••). Assume that κ_λ•• = κ and η_λ•• = η; then μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ π_λ••·(1 − (1 − κ − η)^n)/(κ + η).

When n, κ and η are constant, the partial derivative of μ_{F^n_{κ_λ••,η_λ••}}(λ_••) with respect to μ_λ•• is computed as:

∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂μ_λ•• = ∂[μ_λ•• + κ(1 − μ_λ•• − ν_λ••)·(1 − (1 − κ − η)^n)/(κ + η)]/∂μ_λ•• = 1 − κ·(1 − (1 − κ − η)^n)/(κ + η) ≥ 0.

Meanwhile, the partial derivative of μ_{F^n_{κ_λ••,η_λ••}}(λ_••) with respect to ν_λ•• is computed as:

∂μ_{F^n_{κ_λ••,η_λ••}}(λ_••)/∂ν_λ•• = ∂[μ_λ•• + κ(1 − μ_λ•• − ν_λ••)·(1 − (1 − κ − η)^n)/(κ + η)]/∂ν_λ•• = −κ·(1 − (1 − κ − η)^n)/(κ + η) ≤ 0.

Combining these with the conditions (5)–(8), we verify the membership relationships among the loss functions in the n-th stage. In the same way, we can prove the non-membership relationships among the loss functions. Hence, the statement in Proposition 8 holds.
Note that the value of n is the same for each loss function. In Proposition 8, we assume that the parameters κ_λ•• and η_λ•• of each loss function are equal. Given a value of n, the new loss functions impacted by the point operator then still satisfy the conditions (5)–(8). In addition, Liu and Wang [25] also assumed the following values of the parameters: κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• (• = P, B, N). On the basis of (5)–(8) and Propositions 6–7, we can deduce the following corollary:
Corollary 2. Let the loss functions of Table 2 be F^n_{κ_λ••,η_λ••}(λ_••) = (μ_{F^n_{κ_λ••,η_λ••}}(λ_••), ν_{F^n_{κ_λ••,η_λ••}}(λ_••)) and let them comply with (5)–(8) in the original situation (• = P, B, N). If the parameters κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• are constant, taking κ_λ••, η_λ•• ∈ [0, 1] and κ_λ•• + η_λ•• ≤ 1, then

μ_{F^n_{κ_λPP,η_λPP}}(λ_PP) ≤ μ_{F^n_{κ_λBP,η_λBP}}(λ_BP) ≤ μ_{F^n_{κ_λNP,η_λNP}}(λ_NP),
ν_{F^n_{κ_λNP,η_λNP}}(λ_NP) ≤ ν_{F^n_{κ_λBP,η_λBP}}(λ_BP) ≤ ν_{F^n_{κ_λPP,η_λPP}}(λ_PP),
μ_{F^n_{κ_λNN,η_λNN}}(λ_NN) ≤ μ_{F^n_{κ_λBN,η_λBN}}(λ_BN) ≤ μ_{F^n_{κ_λPN,η_λPN}}(λ_PN),
ν_{F^n_{κ_λPN,η_λPN}}(λ_PN) ≤ ν_{F^n_{κ_λBN,η_λBN}}(λ_BN) ≤ ν_{F^n_{κ_λNN,η_λNN}}(λ_NN).
From the results of Corollary 2 , the loss functions under the different values of n still basically obey the conditions (5) - (8) ,
when the parameters κλ•• = μλ•• and ηλ•• = νλ•• are constant. During the application, Corollary 2 ensures the prerequisite
of three-way decisions.
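As a small numerical sanity check of Proposition 8 and Corollary 2 (our own sketch, not part of the paper; the loss values are those used later in Section 5), the snippet below transforms the six loss functions with κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• and verifies that the inequality chains above still hold for a range of n. It reuses point_operator from the sketch in Section 2.

```python
# Consistency check for Proposition 8 / Corollary 2: if (5)-(8) hold at n = 0,
# they should still hold after applying F^n with kappa = mu and eta = nu.

losses = {                          # illustrative values (Section 5 example)
    "PP": (0.005, 0.25), "BP": (0.1, 0.2), "NP": (0.4, 0.1),
    "NN": (0.01, 0.6),   "BN": (0.3, 0.3), "PN": (0.6, 0.1),
}

def satisfies_prerequisites(lam, n):
    """Check the ordering of the transformed memberships/non-memberships."""
    t = {k: point_operator(f, f[0], f[1], n) for k, f in lam.items()}
    mu = {k: v[0] for k, v in t.items()}
    nu = {k: v[1] for k, v in t.items()}
    return (mu["PP"] <= mu["BP"] <= mu["NP"] and nu["NP"] <= nu["BP"] <= nu["PP"]
            and mu["NN"] <= mu["BN"] <= mu["PN"] and nu["PN"] <= nu["BN"] <= nu["NN"])

print(all(satisfies_prerequisites(losses, n) for n in range(20)))   # True
```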
3.2. Basic model of IFDTRSs with point operators
Considering the effect of the point operator on loss functions, we reconstruct a new IFDTRS model based on the one reported in Ref. [23]. For three-way decisions, the conditional probability is another basic element. In this paper, Pr(C|[x]) denotes the conditional probability of an object x belonging to C given that the object is described by its equivalence class [x]. Analogously, Pr(¬C|[x]) is the conditional probability of the object x belonging to ¬C, i.e., Pr(C|[x]) + Pr(¬C|[x]) = 1.
For an object x, the expected loss R(a_•|[x])^n (• = P, B, N) associated with taking the individual actions under a given value of n is expressed as:

R(a_P|[x])^n = F^n_{κ_λPP,η_λPP}(λ_PP)·Pr(C|[x]) ⊕ F^n_{κ_λPN,η_λPN}(λ_PN)·Pr(¬C|[x]), (9)
R(a_B|[x])^n = F^n_{κ_λBP,η_λBP}(λ_BP)·Pr(C|[x]) ⊕ F^n_{κ_λBN,η_λBN}(λ_BN)·Pr(¬C|[x]), (10)
R(a_N|[x])^n = F^n_{κ_λNP,η_λNP}(λ_NP)·Pr(C|[x]) ⊕ F^n_{κ_λNN,η_λNN}(λ_NN)·Pr(¬C|[x]). (11)
In this case, Pr(C|[x]) is precise and the expected losses may be impacted by n. According to the operations of IFNs and the results reported in Ref. [23], the expected losses are calculated as:

R(a_P|[x])^n = (1 − (1 − μ_{F^n_{κ_λPP,η_λPP}}(λ_PP))^{Pr(C|[x])}, (ν_{F^n_{κ_λPP,η_λPP}}(λ_PP))^{Pr(C|[x])}) ⊕ (1 − (1 − μ_{F^n_{κ_λPN,η_λPN}}(λ_PN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λPN,η_λPN}}(λ_PN))^{Pr(¬C|[x])}),

R(a_B|[x])^n = (1 − (1 − μ_{F^n_{κ_λBP,η_λBP}}(λ_BP))^{Pr(C|[x])}, (ν_{F^n_{κ_λBP,η_λBP}}(λ_BP))^{Pr(C|[x])}) ⊕ (1 − (1 − μ_{F^n_{κ_λBN,η_λBN}}(λ_BN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λBN,η_λBN}}(λ_BN))^{Pr(¬C|[x])}),

R(a_N|[x])^n = (1 − (1 − μ_{F^n_{κ_λNP,η_λNP}}(λ_NP))^{Pr(C|[x])}, (ν_{F^n_{κ_λNP,η_λNP}}(λ_NP))^{Pr(C|[x])}) ⊕ (1 − (1 − μ_{F^n_{κ_λNN,η_λNN}}(λ_NN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λNN,η_λNN}}(λ_NN))^{Pr(¬C|[x])}),

where the expected losses are IFNs. Based on the results of Ref. [23], we calculate the expected losses below:
Proposition 9. Based on (9)–(11) and the point operator, the expected losses R(a_•|[x])^n under a given value of n are re-expressed as (• = P, B, N):

R(a_P|[x])^n = (1 − (1 − μ_{F^n_{κ_λPP,η_λPP}}(λ_PP))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λPN,η_λPN}}(λ_PN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λPP,η_λPP}}(λ_PP))^{Pr(C|[x])}·(ν_{F^n_{κ_λPN,η_λPN}}(λ_PN))^{Pr(¬C|[x])}), (12)

R(a_B|[x])^n = (1 − (1 − μ_{F^n_{κ_λBP,η_λBP}}(λ_BP))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λBN,η_λBN}}(λ_BN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λBP,η_λBP}}(λ_BP))^{Pr(C|[x])}·(ν_{F^n_{κ_λBN,η_λBN}}(λ_BN))^{Pr(¬C|[x])}), (13)

R(a_N|[x])^n = (1 − (1 − μ_{F^n_{κ_λNP,η_λNP}}(λ_NP))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λNN,η_λNN}}(λ_NN))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λNP,η_λNP}}(λ_NP))^{Pr(C|[x])}·(ν_{F^n_{κ_λNN,η_λNN}}(λ_NN))^{Pr(¬C|[x])}), (14)

where μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••) and ν_{F^n_{κ_λ••,η_λ••}}(λ_••) = ν_λ•• + η_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••).
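The following short Python sketch (our own illustration, not from the paper) evaluates the IFN-valued expected loss of one action from Eqs. (12)–(14), assuming the transformed loss functions F^n(λ_•P) and F^n(λ_•N) are given as (μ, ν) pairs, e.g. from the point_operator sketch above.

```python
# A sketch of the expected losses of Proposition 9 (Eqs. (12)-(14)).

def expected_loss(loss_P, loss_N, pr_c):
    """IFN-valued expected loss of one action.
    loss_P = F^n(lambda_{.P}), loss_N = F^n(lambda_{.N}), pr_c = Pr(C|[x])."""
    pr_not_c = 1.0 - pr_c
    mu = 1.0 - (1.0 - loss_P[0]) ** pr_c * (1.0 - loss_N[0]) ** pr_not_c
    nu = loss_P[1] ** pr_c * loss_N[1] ** pr_not_c
    return (mu, nu)
```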
According to the results of Ref. [23] and Proposition 9, the expected loss with a given value of n is expressed as:

R(a_•|[x])^n = (1 − (1 − μ_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])}, (ν_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(ν_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])}),

where • = P, B, N. Let R(a_•|[x])^n = (μ^n_•, ν^n_•). In light of the results reported in Ref. [23] and Propositions 4–5, we deduce the variations of the elements of the expected losses with the stage n.
Corollary 3. Let μ^n_• = 1 − (1 − μ_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])} be the membership degree of the expected loss R(a_•|[x])^n. When Pr(C|[x]) is constant, μ^n_• is monotonically non-decreasing with the increase of n (• = P, B, N).

Corollary 4. Let ν^n_• = (ν_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(ν_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])} be the non-membership degree of the expected loss R(a_•|[x])^n. When Pr(C|[x]) is constant, ν^n_• is monotonically non-decreasing with the increase of n (• = P, B, N).
The results of Corollary 3 and Ref. [23] imply that μ_{F^n_{κ_λ•P,η_λ•P}}(λ_•P) and μ_{F^n_{κ_λ•N,η_λ•N}}(λ_•N) are monotonically non-decreasing with the increase of n. The result of Corollary 4 implies that ν_{F^n_{κ_λ•P,η_λ•P}}(λ_•P) and ν_{F^n_{κ_λ•N,η_λ•N}}(λ_•N) are monotonically non-decreasing with the increase of n. Note that a precondition is that the parameters κ_λ•• and η_λ•• are constant (• = P, B, N).
The purpose of IFDTRSs is to provide decision rules relying on the Bayesian decision procedure [23]. Under the intuitionistic fuzzy environment with the point operators, the Bayesian decision procedure suggests the following minimum-cost decision rules:
(P^n) If R(a_P|[x])^n ⪯ R(a_B|[x])^n and R(a_P|[x])^n ⪯ R(a_N|[x])^n, decide x ∈ POS(C)^n;
(B^n) If R(a_B|[x])^n ⪯ R(a_P|[x])^n and R(a_B|[x])^n ⪯ R(a_N|[x])^n, decide x ∈ BND(C)^n;
(N^n) If R(a_N|[x])^n ⪯ R(a_P|[x])^n and R(a_N|[x])^n ⪯ R(a_B|[x])^n, decide x ∈ NEG(C)^n;
where "⪯" denotes the smaller-than-or-equal-to relation on IFNs. The decision rules (P^n)–(N^n) are the three-way decisions proposed by Yao [44]. The three-way decisions comprise the positive rules (P^n), the boundary rules (B^n), and the negative rules (N^n). The positive rules make decisions of acceptance, the negative rules make decisions of rejection, and the boundary rules make decisions of non-commitment. R(a_P|[x])^n, R(a_B|[x])^n and R(a_N|[x])^n are IFNs. On the basis of the results presented in Corollary 2 and Ref. [23], the elements of the expected losses are non-decreasing with the increase of n and their comparison results are uncertain in advance. In essence, we need to find the minimum among the expected losses.
3.3. The risk decision-making procedure of three-way decisions
In order to compare the expected losses of the object x, we calculate their scores and accuracies. In light of Definition 2, the scores of the expected losses are determined as follows:

S(R(a_•|[x])^n) = μ^n_• − ν^n_•, (15)

where the membership degree is μ^n_• = 1 − (1 − μ_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])} and the non-membership degree is ν^n_• = (ν_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(ν_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])}. Meanwhile, the accuracies of the expected losses are computed as:

H(R(a_•|[x])^n) = μ^n_• + ν^n_•. (16)
From the risk or cost viewpoint, we prefer the lowest of the expected losses. In light of the decision rules (P^n)–(N^n) and Definition 3, we draw the comparison procedure of the expected losses presented in Fig. 1.

From Fig. 1, the calculation of the scores and accuracies of the expected losses plays an important role in the comparison procedure, which involves two key judgment conditions. In the first condition, we compare the scores of the expected losses. If the number of expected losses having the minimum score is greater than or equal to 2, we further compute the accuracies of these expected losses. The risk decision-making procedure of three-way decisions is designed as follows:
Step 1: According to the loss function matrix of Table 1, we define the set of states Ω = {C, ¬C} and the set of actions A = {a_P, a_B, a_N}. Then, we evaluate the loss functions, i.e., λ_PP = (μ_λPP, ν_λPP), λ_BP = (μ_λBP, ν_λBP), λ_NP = (μ_λNP, ν_λNP), λ_PN = (μ_λPN, ν_λPN), λ_BN = (μ_λBN, ν_λBN) and λ_NN = (μ_λNN, ν_λNN).
Step 2: Under the point operator, we assign the parameters κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• (• = P, B, N).
Step 3: Determine the value of n, which is regarded as the number of iterations.
Step 4: Based on the results of Proposition 1, we compute the loss functions of Table 2 with the given value of n, i.e., F^n_{κ_λ••,η_λ••}(λ_••) = (μ_{F^n_{κ_λ••,η_λ••}}(λ_••), ν_{F^n_{κ_λ••,η_λ••}}(λ_••)) (• = P, B, N).
Step 5: Determine the value of the conditional probability for each object x, namely, Pr(C|[x]).
Step 6: On the basis of (12)–(14), we calculate the expected losses R(a_•|[x])^n = (μ^n_•, ν^n_•), where μ^n_• = 1 − (1 − μ_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(1 − μ_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])} is the membership degree of the expected loss R(a_•|[x])^n and ν^n_• = (ν_{F^n_{κ_λ•P,η_λ•P}}(λ_•P))^{Pr(C|[x])}·(ν_{F^n_{κ_λ•N,η_λ•N}}(λ_•N))^{Pr(¬C|[x])} is its non-membership degree (• = P, B, N).
Step 7: According to (15), we calculate the scores of the expected losses, i.e., S(R(a_•|[x])^n) (• = P, B, N).
Step 8: Compare the expected losses. If the number of expected losses having the minimum score is equal to 1, the decision rule for the object depends on the corresponding action; go to Step 11. Otherwise, go to Step 9.
Step 9: For the expected losses having the same minimum score, we further compute the accuracies of these expected losses based on (16).
Step 10: From these accuracies, we find the minimum and form the decision rule based on its corresponding action.
Step 11: In light of the decision rules (P^n)–(N^n) and Definition 2, we confirm the corresponding decision rule.
Through Steps 1–11, we form three-way decisions, i.e., POS(C)^n, BND(C)^n and NEG(C)^n. Note that the loss functions of Table 2 may gradually change with the variation of n. Under the point operator, we can generate a series of decision rules based on the different values of n. The corresponding three-way decisions can be used to predict reasonable results for each stage.
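The following Python sketch (our own illustration, not from the paper) assembles Steps 6–11 for a single object: it builds the three expected losses from the transformed loss functions, ranks them by score, and breaks ties with the accuracy as in Fig. 1. It reuses expected_loss, score and accuracy from the earlier sketches; the dictionary keys and the returned region labels are our own naming.

```python
# A sketch of Steps 6-11 of the risk decision-making procedure for one object.

def three_way_decision(transformed_losses, pr_c):
    """transformed_losses: dict with keys 'PP','PN','BP','BN','NP','NN'
    holding F^n(lambda) as (mu, nu) pairs; returns 'POS', 'BND' or 'NEG'."""
    risks = {
        "POS": expected_loss(transformed_losses["PP"], transformed_losses["PN"], pr_c),
        "BND": expected_loss(transformed_losses["BP"], transformed_losses["BN"], pr_c),
        "NEG": expected_loss(transformed_losses["NP"], transformed_losses["NN"], pr_c),
    }
    min_score = min(score(r) for r in risks.values())
    candidates = [k for k, r in risks.items() if score(r) == min_score]
    if len(candidates) == 1:
        return candidates[0]
    # several expected losses share the minimum score: choose the smallest accuracy
    return min(candidates, key=lambda k: accuracy(risks[k]))
```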
[Fig. 1 is a flowchart: compute the scores of the expected losses and compare them; if a unique expected loss attains the minimum score, its action determines the rule; otherwise compute the accuracies of the tied expected losses and determine the rule from the minimum accuracy.]

Fig. 1. The comparison procedure of the expected losses.
4. The application of three-way decisions to determine the decision stage with the aid of information entropy theory
On the basis of the point operator IFPO, three-way decisions under a given value of n are generated in Section 3 . With
the increase of n , we can obtain a series of three-way decisions in the different stages. During the application, we also
encounter the determination of n and interpret which stage is most suitable to make the decision. In this section, we
adopt information entropy theory as the criterion to judge our decision. The Shannon entropy may be viewed as a
measure of uncertainty of a partition of three-way decisions [9] . Based on the Shannon entropy, Deng and Yao [9] presented
an information-theoretic approach to the interpretation and determination of thresholds. They mainly considered that the
thresholds could divide the universe into three regions and generate the corresponding uncertainty. In Ref. [9] , its main
contribution is that the authors constructed a function for searching the thresholds, which realizes the minimum overall
uncertainty.
With respect to the results of Ref. [23], the loss function with the IFPO can be regarded as a form of information granulation [32].
In this case, it reflects the flexibility of information granularity and also represents one type of uncertainty. IFPO measures
the variation of loss function. Inspired by Ref. [9] , we further investigate which stage is most suitable to make the decision
and determine the value of n . The information-theoretic approach reported in Ref. [9] provides us an objective function for
the determination of the value of n . By considering the flexibility of IFPO, an optimization method for deriving three-way
decisions from the new IFDTRS model of Section 3 is discussed in this section. More specifically, from the viewpoint of
information entropy theory, we explain the determination of the value of n when the information uncertainty is minimum.
Compared to Ref. [9] , this section focuses on the aspect of loss function. When the value of n is ensured, we can further use
the method of Section 3.3 to deduce three-way decisions. In light of the results of Ref. [9] , the basic concept of information-
theoretic approach is reviewed as follows:
Definition 6 [9]. A quadruple S = (U, AT ∪ D, V, f) is an information system with AT ∩ D = Ø, where U is a non-empty finite set of objects, called the universe, AT is the set of conditional attributes and D is the set of decision attributes. Suppose that π = {b_1, b_2, ···, b_q} is a partition of the universe U, namely, ∪_{i=1}^{q} b_i = U and b_i ∩ b_j = Ø for i ≠ j.
According to Definition 6, the condition equivalence classes are denoted as π_AT = {X_1, X_2, ···, X_m} and the decision equivalence classes are denoted as π_D = {C, ¬C}. In this situation, the decision equivalence classes correspond to the set of states of Section 3. Suppose that the value of n is given; according to the decision rules (P^n)–(N^n) and the results of Section 3, n produces three regions: π(n) = {POS(C)^n, NEG(C)^n, BND(C)^n}. The overall uncertainty of the three regions is computed as [9]:

H(π_D|π(n)) = Pr(POS(C)^n) H(π_D|POS(C)^n) + Pr(NEG(C)^n) H(π_D|NEG(C)^n) + Pr(BND(C)^n) H(π_D|BND(C)^n), (17)
where

H(π_D|POS(C)^n) = −Pr(C|POS(C)^n) log Pr(C|POS(C)^n) − Pr(¬C|POS(C)^n) log Pr(¬C|POS(C)^n),
H(π_D|NEG(C)^n) = −Pr(C|NEG(C)^n) log Pr(C|NEG(C)^n) − Pr(¬C|NEG(C)^n) log Pr(¬C|NEG(C)^n),
H(π_D|BND(C)^n) = −Pr(C|BND(C)^n) log Pr(C|BND(C)^n) − Pr(¬C|BND(C)^n) log Pr(¬C|BND(C)^n),
Pr(•) = |•|/|U| (• = POS(C)^n, NEG(C)^n, BND(C)^n).

Eq. (17) provides an objective function for evaluating the uncertainty of three-way decisions. Furthermore, an optimization model is developed as follows:
min h = H(π_D|π(n)) (18)
s.t.
μ_{F^n_{κ_λ••,η_λ••}}(λ_••) = μ_λ•• + κ_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••), • = P, B, N,
ν_{F^n_{κ_λ••,η_λ••}}(λ_••) = ν_λ•• + η_λ•• π_λ••·(1 − (1 − κ_λ•• − η_λ••)^n)/(κ_λ•• + η_λ••), • = P, B, N,
κ_λ•• = μ_λ••, • = P, B, N,
η_λ•• = ν_λ••, • = P, B, N,
π_λ•• = 1 − μ_λ•• − ν_λ••, • = P, B, N,
n is a positive integer,
where (18) is the objective function. In general, one can search the space of all possible values of n to find the minimum of (18) by the enumeration method. When the value of n is determined, we can use the method of Section 3.3 to deduce three-way decisions. On the basis of Step 5 presented in Section 3.3, we also need to calculate the conditional probability of every condition class with respect to the decision class C before solving the optimization, i.e., Pr(C|X_i) (i = 1, 2, ···, m). Given the original evaluation information of the loss functions, namely, Step 1 of Section 3.3, we can then find an optimal value of n for which H(π_D|π(n)) is minimum.
Under the point operator, each loss function presented in Table 2 possesses good convergence, as proved in Corollary 1. In the extreme case, each loss function becomes a precise value and its uncertainty is 0. By utilizing this property, we can design a search scope for n and find the minimum of H(π_D|π(n)) rapidly. To achieve this purpose, we focus on the uncertainty of IFNs under the point operator. Here, we introduce a threshold α ≥ 0, which expresses a certain degree of tolerance. It implies that the loss function remains almost the same with the increase of n once its uncertainty is less than α. On the basis of Proposition 2, we have the following relationships:
π_{F^n_{κ_λPP,η_λPP}(λ_PP)}(λ_PP) = (1 − κ_λPP − η_λPP)^n π_λPP ≤ α, (19)
π_{F^n_{κ_λBP,η_λBP}(λ_BP)}(λ_BP) = (1 − κ_λBP − η_λBP)^n π_λBP ≤ α, (20)
π_{F^n_{κ_λNP,η_λNP}(λ_NP)}(λ_NP) = (1 − κ_λNP − η_λNP)^n π_λNP ≤ α, (21)
π_{F^n_{κ_λPN,η_λPN}(λ_PN)}(λ_PN) = (1 − κ_λPN − η_λPN)^n π_λPN ≤ α, (22)
π_{F^n_{κ_λBN,η_λBN}(λ_BN)}(λ_BN) = (1 − κ_λBN − η_λBN)^n π_λBN ≤ α, (23)
π_{F^n_{κ_λNN,η_λNN}(λ_NN)}(λ_NN) = (1 − κ_λNN − η_λNN)^n π_λNN ≤ α. (24)
From (19)–(24), each loss function has a critical point. With respect to the optimization model, we set κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• (• = P, B, N). Then, we obtain the following conditions: (1) (π_λPP)^{n+1} ≤ α; (2) (π_λBP)^{n+1} ≤ α; (3) (π_λNP)^{n+1} ≤ α; (4) (π_λPN)^{n+1} ≤ α; (5) (π_λBN)^{n+1} ≤ α and (6) (π_λNN)^{n+1} ≤ α. If the uncertainties of the loss functions are equal to 0, then these conditions hold. Suppose that the uncertainties of the loss functions are not equal to 0; we further deduce the following relationships: (1) n ≥ log_{π_λPP} α − 1, (2) n ≥ log_{π_λBP} α − 1, (3) n ≥ log_{π_λNP} α − 1, (4) n ≥ log_{π_λPN} α − 1, (5) n ≥ log_{π_λBN} α − 1 and (6) n ≥ log_{π_λNN} α − 1. Thus, when all of the loss functions satisfy the tolerance degree α, we can determine the upper bound of n as:
n^+ = CEILING( max_{•=P,B,N} (log_{π_λ••} α − 1) ). (25)
Table 3
Probabilistic information of a concept C.

            X_1     X_2     X_3     X_4     X_5     X_6     X_7     X_8
Pr(X_i)     0.0177  0.0285  0.0137  0.0352  0.0580  0.1069  0.0498  0.1070
Pr(C|X_i)   1.0     1.0     1.0     1.0     0.9     0.8     0.8     0.6

            X_9     X_10    X_11    X_12    X_13    X_14    X_15
Pr(X_i)     0.0155  0.1792  0.0998  0.1299  0.0080  0.1441  0.0067
Pr(C|X_i)   0.5     0.4     0.4     0.2     0.1     0.0     0.0
In (25), the function CEILING returns the smallest integer greater than or equal to its numeric argument. Finally, we confirm that the range of n is [0, n^+]. Within this range, we can successively search for the minimum of (18) and determine the value of n. Then, given the value of n, we implement Steps 3–10 of Section 3.3 to deduce three-way decisions.
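The whole stage-selection procedure of this section can be prototyped in a few lines. The sketch below (our own, not from the paper) computes H(π_D|π(n)) of Eq. (17) using base-2 logarithms and the frequency weights Pr(X_i) employed in the illustrative example of Section 5, bounds the search by n^+ of Eq. (25), and returns the stage with minimum overall uncertainty. It reuses point_operator and three_way_decision from the earlier sketches; all function names and the data format are our own assumptions.

```python
# A sketch of the entropy-based selection of n (Eqs. (17), (18) and (25)).
import math

def region_entropy(blocks):
    """Shannon entropy of pi_D on one region; blocks = [(Pr(X), Pr(C|X)), ...]."""
    total = sum(p for p, _ in blocks)
    if total == 0:
        return 0.0
    pc = sum(p * c for p, c in blocks) / total        # Pr(C|region)
    h = 0.0
    for q in (pc, 1.0 - pc):
        if q > 0:
            h -= q * math.log2(q)
    return h

def overall_uncertainty(classes, losses, n):
    """H(pi_D | pi(n)) of Eq. (17) for the partition induced by stage n."""
    transformed = {k: point_operator(f, f[0], f[1], n) for k, f in losses.items()}
    regions = {"POS": [], "BND": [], "NEG": []}
    for p, c in classes:                               # classes = [(Pr(X_i), Pr(C|X_i)), ...]
        regions[three_way_decision(transformed, c)].append((p, c))
    return sum(sum(p for p, _ in blk) * region_entropy(blk) for blk in regions.values())

def best_stage(classes, losses, alpha=1e-5):
    """Search n in [0, n_plus] (Eq. (25)) for the minimum overall uncertainty."""
    pis = [1.0 - mu - nu for mu, nu in losses.values()]
    pis = [p for p in pis if p > 0]                    # zero-hesitation losses never bound n
    n_plus = math.ceil(max(math.log(alpha, p) - 1 for p in pis)) if pis else 0
    return min(range(n_plus + 1), key=lambda n: overall_uncertainty(classes, losses, n))
```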
5. An illustrative example
In the following, we use an example of software development to illustrate the risk decision-making of three-way decisions of Section 3 and the optimization model of Section 4. Lee et al. [16] classified the risk factors of software development and established a hierarchical structure model of aggregative risk. Considering the risk assessment and the IFDTRS model with the point operators, this section focuses on analyzing the software plan selection. On the one hand, we want to select the right software development plan. On the other hand, we also consider that some possible decision results of the selection may result in different losses [23]. Thus, the selection of a software plan is consistent with three-way decisions.
5.1. Decision analysis of our proposed method
For the software plan selection, we assume that there are two states Ω = {C, ¬C}, indicating that the software plan is good or bad, respectively. The set of actions for the new development plan is given by A = {a_P, a_B, a_N}, where a_P, a_B, and a_N represent developing the plan, needing further investigation, and not developing the plan, respectively. In accordance with the results of Ref. [9], we suppose the probabilistic information about a concept C with respect to a partition of 15 equivalence classes, where the equivalence classes X_i (i = 1, 2, ···, 15) denote the software plans. Their results are shown in Table 3.
Table 3 lists the proportion of each software plan and its conditional probability. According to the loss function matrix of Table 1, the loss functions in the original situation are given as follows: λ_PP = (μ_λPP, ν_λPP) = (0.005, 0.25), λ_PN = (μ_λPN, ν_λPN) = (0.6, 0.1), λ_BP = (μ_λBP, ν_λBP) = (0.1, 0.2), λ_BN = (μ_λBN, ν_λBN) = (0.3, 0.3), λ_NP = (μ_λNP, ν_λNP) = (0.4, 0.1) and λ_NN = (μ_λNN, ν_λNN) = (0.01, 0.6). In this case, the uncertainties of the loss functions are: π_λPP = 0.745, π_λPN = 0.3, π_λBP = 0.7, π_λBN = 0.4, π_λNP = 0.5 and π_λNN = 0.39, respectively. In light of the result of Ref. [25], we assume that κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• (• = P, B, N). Under the point operator, these loss functions change with the variation of n, which may give rise to different decision rules.
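As a quick sanity check on these values (an illustrative sketch, not part of the original procedure), the following Python snippet stores the six loss functions as intuitionistic fuzzy numbers (μ, ν) and recomputes their hesitancy degrees π = 1 − μ − ν:

```python
# Loss functions as IFNs (mu, nu), taken from the values listed above.
losses = {
    "PP": (0.005, 0.25), "PN": (0.6, 0.1), "BP": (0.1, 0.2),
    "BN": (0.3, 0.3), "NP": (0.4, 0.1), "NN": (0.01, 0.6),
}

# Hesitancy (uncertainty) degree of each IFN: pi = 1 - mu - nu.
hesitancy = {key: round(1.0 - mu - nu, 4) for key, (mu, nu) in losses.items()}
print(hesitancy)
# {'PP': 0.745, 'PN': 0.3, 'BP': 0.7, 'BN': 0.4, 'NP': 0.5, 'NN': 0.39}
```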
In this example, we first utilize the information entropy theory of Section 4 to explain the determination of n . In light of
(18) , we construct the optimization model as follows:
\min \; h = H(\pi_D \mid \pi^{(n)})

\text{s.t.}

\mu_{F^{n}_{\kappa_{\lambda_{PP}},\eta_{\lambda_{PP}}}(\lambda_{PP})} = 0.005 + 0.005 \times 0.745 \times \frac{1-(1-0.005-0.25)^{n}}{0.005+0.25},
\mu_{F^{n}_{\kappa_{\lambda_{BP}},\eta_{\lambda_{BP}}}(\lambda_{BP})} = 0.1 + 0.1 \times 0.7 \times \frac{1-(1-0.1-0.2)^{n}}{0.1+0.2},
\mu_{F^{n}_{\kappa_{\lambda_{NP}},\eta_{\lambda_{NP}}}(\lambda_{NP})} = 0.4 + 0.4 \times 0.5 \times \frac{1-(1-0.4-0.1)^{n}}{0.4+0.1},
\mu_{F^{n}_{\kappa_{\lambda_{PN}},\eta_{\lambda_{PN}}}(\lambda_{PN})} = 0.6 + 0.6 \times 0.3 \times \frac{1-(1-0.6-0.1)^{n}}{0.6+0.1},
\mu_{F^{n}_{\kappa_{\lambda_{BN}},\eta_{\lambda_{BN}}}(\lambda_{BN})} = 0.3 + 0.3 \times 0.4 \times \frac{1-(1-0.3-0.3)^{n}}{0.3+0.3},
\mu_{F^{n}_{\kappa_{\lambda_{NN}},\eta_{\lambda_{NN}}}(\lambda_{NN})} = 0.01 + 0.01 \times 0.39 \times \frac{1-(1-0.01-0.6)^{n}}{0.01+0.6},
\nu_{F^{n}_{\kappa_{\lambda_{PP}},\eta_{\lambda_{PP}}}(\lambda_{PP})} = 0.25 + 0.25 \times 0.745 \times \frac{1-(1-0.005-0.25)^{n}}{0.005+0.25},
\nu_{F^{n}_{\kappa_{\lambda_{BP}},\eta_{\lambda_{BP}}}(\lambda_{BP})} = 0.2 + 0.2 \times 0.7 \times \frac{1-(1-0.1-0.2)^{n}}{0.1+0.2},
\nu_{F^{n}_{\kappa_{\lambda_{NP}},\eta_{\lambda_{NP}}}(\lambda_{NP})} = 0.1 + 0.1 \times 0.5 \times \frac{1-(1-0.4-0.1)^{n}}{0.4+0.1},
\nu_{F^{n}_{\kappa_{\lambda_{PN}},\eta_{\lambda_{PN}}}(\lambda_{PN})} = 0.1 + 0.1 \times 0.3 \times \frac{1-(1-0.6-0.1)^{n}}{0.6+0.1},
\nu_{F^{n}_{\kappa_{\lambda_{BN}},\eta_{\lambda_{BN}}}(\lambda_{BN})} = 0.3 + 0.3 \times 0.4 \times \frac{1-(1-0.3-0.3)^{n}}{0.3+0.3},
\nu_{F^{n}_{\kappa_{\lambda_{NN}},\eta_{\lambda_{NN}}}(\lambda_{NN})} = 0.6 + 0.6 \times 0.39 \times \frac{1-(1-0.01-0.6)^{n}}{0.01+0.6},
n \text{ is a positive integer.}
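For concreteness, the closed form used in these constraints can be evaluated with a small sketch such as the one below; it applies the point operator n times to an IFN under the setting κ = μ and η = ν used in this example. The function and variable names are illustrative assumptions rather than the paper's notation.

```python
def apply_point_operator(mu, nu, kappa, eta, n):
    """Closed form of the IFPO F^n_{kappa,eta} applied to an IFN (mu, nu).

    Returns the transformed membership, non-membership and hesitancy degrees
    (mu_n, nu_n, pi_n) after n applications of the point operator.
    """
    pi = 1.0 - mu - nu
    decay = 1.0 - (1.0 - kappa - eta) ** n
    mu_n = mu + kappa * pi * decay / (kappa + eta)
    nu_n = nu + eta * pi * decay / (kappa + eta)
    return mu_n, nu_n, 1.0 - mu_n - nu_n

# Example: lambda_PP = (0.005, 0.25) with kappa = mu and eta = nu, after n = 4 steps;
# the returned hesitancy equals (pi_PP)^(n+1), consistent with conditions (1)-(6) above.
print(apply_point_operator(0.005, 0.25, 0.005, 0.25, n=4))
```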
Fig. 2. The overall uncertainty of the three regions with the different values of n .
In this case, H(π_D | π^(n)) is calculated as:
H(\pi_D \mid \pi^{(n)}) = \Big(\sum_{X \subseteq POS(C)_{n}} Pr(X)\Big) H(\pi_D \mid POS(C)_{n}) + \Big(\sum_{X \subseteq NEG(C)_{n}} Pr(X)\Big) H(\pi_D \mid NEG(C)_{n}) + \Big(\sum_{X \subseteq BND(C)_{n}} Pr(X)\Big) H(\pi_D \mid BND(C)_{n}),

where

H(\pi_D \mid POS(C)_{n}) = -\frac{\sum_{X \subseteq POS(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq POS(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq POS(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq POS(C)_{n}} Pr(X)} - \frac{\sum_{X \subseteq POS(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq POS(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq POS(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq POS(C)_{n}} Pr(X)},

H(\pi_D \mid NEG(C)_{n}) = -\frac{\sum_{X \subseteq NEG(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq NEG(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq NEG(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq NEG(C)_{n}} Pr(X)} - \frac{\sum_{X \subseteq NEG(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq NEG(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq NEG(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq NEG(C)_{n}} Pr(X)},

H(\pi_D \mid BND(C)_{n}) = -\frac{\sum_{X \subseteq BND(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq BND(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq BND(C)_{n}} Pr(C|X)Pr(X)}{\sum_{X \subseteq BND(C)_{n}} Pr(X)} - \frac{\sum_{X \subseteq BND(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq BND(C)_{n}} Pr(X)} \log \frac{\sum_{X \subseteq BND(C)_{n}} (1-Pr(C|X))Pr(X)}{\sum_{X \subseteq BND(C)_{n}} Pr(X)}.
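To make this computation concrete, the following sketch encodes Table 3, evaluates H(π_D | region) for each of the three regions and combines them as above. Using the natural logarithm, it reproduces the reported minimum h ≈ 0.4796 for the n = 4 regions listed later in Table 4; the index conventions and function names are our own assumptions.

```python
import math

# Table 3: Pr(X_i) and Pr(C | X_i) for the 15 equivalence classes (software plans).
pr_x = [0.0177, 0.0285, 0.0137, 0.0352, 0.0580, 0.1069, 0.0498, 0.1070,
        0.0155, 0.1792, 0.0998, 0.1299, 0.0080, 0.1441, 0.0067]
pr_c_given_x = [1.0, 1.0, 1.0, 1.0, 0.9, 0.8, 0.8, 0.6,
                0.5, 0.4, 0.4, 0.2, 0.1, 0.0, 0.0]

def region_entropy(indices):
    """H(pi_D | region): binary entropy (in nats) of Pr(C | region)."""
    pr_region = sum(pr_x[i] for i in indices)
    p = sum(pr_c_given_x[i] * pr_x[i] for i in indices) / pr_region
    return -sum(q * math.log(q) for q in (p, 1.0 - p) if q > 0.0)

def overall_uncertainty(pos, bnd, neg):
    """H(pi_D | pi^(n)): sum of Pr(region) * H(pi_D | region) over the three regions."""
    return sum(sum(pr_x[i] for i in idx) * region_entropy(idx)
               for idx in (pos, bnd, neg) if idx)

# Regions at n = 4 (Table 4), with 0-based indices for X_1, ..., X_15.
pos4, bnd4, neg4 = list(range(0, 7)), list(range(7, 11)), list(range(11, 15))
print(round(overall_uncertainty(pos4, bnd4, neg4), 4))  # approximately 0.4796
```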
In order to solve the optimization model, the enumeration method is used in this example. Because each loss function presented in Table 2 shows good convergence, we need to confirm the range of n. Let the degree of tolerance be α = 0.00001. On the basis of (25), we compute the upper bound of n as:

n^{+} = \mathrm{CEILING}\Big(\max_{\bullet = P,B,N}\big(\log_{\pi_{\lambda_{\bullet\bullet}}}\alpha - 1\big)\Big) = \mathrm{CEILING}\big(\max(38.11, 31.28, 15.61, 8.56, 11.56, 11.23)\big) = 39.
Hence, we confirm that the range of n is [0, 39]. Given the value of n, we use Steps 3–11 of Section 3.3 to deduce the corresponding three-way decisions, i.e., POS(C)_n, NEG(C)_n and BND(C)_n. For the overall uncertainty of the three regions, namely (17), its variation with the different values of n is presented in Fig. 2.
In Fig. 2, the value of n varies from 0 to 39. Meanwhile, we compute the overall uncertainty of (17) under the different values of n. According to (18), we further search for its minimum. When the value of n is 4, the minimum of (18) first appears, i.e., h = 0.4796. When the value of n is more than 4, the decision results are identical and (17) shows a convergence trend. When the value of n is 1 and 4, we successively calculate the score of each software plan. The score results of each software plan are shown in Fig. 3.
From Fig. 3, for any software plan X_i, S(R(a_P | X_i)_n), S(R(a_B | X_i)_n) and S(R(a_N | X_i)_n) are not equal under the given value of n. Hence, according to Step 8 of Section 3, we directly judge the decision rules of the software plans based on these scores. Finally, the decision results with the different values of n are shown in Table 4.
From Table 4, the positive region expands with the increase of n and the negative region shrinks with the increase of n. At the same time, the boundary region manifests the uncertainty. Thus, three-way decisions with the aid of point operators provide us with flexibility for the software plan selection. According to the result of Fig. 2, the overall information uncertainty of (18) reaches its minimum when the value of n is 4. In this case, the equivalence classes X_1, X_2, X_3, X_4, X_5, X_6 and X_7 are classified into the positive region POS(C)_4 and can be immediately developed. X_12, X_13, X_14 and X_15 are classified into the negative region NEG(C)_4 and are rejected at this stage. Meanwhile, the equivalence classes X_8, X_9, X_10 and X_11 are classified into the boundary region BND(C)_4 and need further investigation by updating the information.
Fig. 3. The scores of each software plan with the different values of n.
Table 4
The decision rules of each software plan with the different values of n.

Region   POS(C)_n                               BND(C)_n                            NEG(C)_n
n = 1    {X_1, X_2, X_3, X_4, X_5}              {X_6, X_7, X_8, X_9}                {X_10, X_11, X_12, X_13, X_14, X_15}
n = 2    {X_1, X_2, X_3, X_4, X_5}              {X_6, X_7, X_8, X_9, X_10, X_11}    {X_12, X_13, X_14, X_15}
n = 3    {X_1, X_2, X_3, X_4, X_5}              {X_6, X_7, X_8, X_9, X_10, X_11}    {X_12, X_13, X_14, X_15}
n = 4    {X_1, X_2, X_3, X_4, X_5, X_6, X_7}    {X_8, X_9, X_10, X_11}              {X_12, X_13, X_14, X_15}
n > 4    {X_1, X_2, X_3, X_4, X_5, X_6, X_7}    {X_8, X_9, X_10, X_11}              {X_12, X_13, X_14, X_15}
Table 5
The decision results of (18) with the different values of κ_λ•• and η_λ•• (• = P, B, N).

Case                κ_λ•• = μ_λ•• and η_λ•• = μ_λ••    κ_λ•• = ν_λ•• and η_λ•• = ν_λ••
The value of (18)   0.5089                             0.5089

Case                κ_λ•• = 0.5 and η_λ•• = 0.5        κ_λ•• = μ_λ•• and η_λ•• = ν_λ••
The value of (18)   0.5089                             0.4796
5.2. Discussions of our proposed method
The parameters of the IFPO (i.e., κ_λ•• and η_λ••, • = P, B, N) are rather important and can seriously affect the final decisions. In this example, we suppose that κ_λ•• = μ_λ•• and η_λ•• = ν_λ••. For clarity, we further select three special cases of κ_λ•• and η_λ•• for deducing three-way decisions: (1) κ_λ•• = μ_λ•• and η_λ•• = μ_λ••; (2) κ_λ•• = ν_λ•• and η_λ•• = ν_λ••; (3) κ_λ•• = 0.5 and η_λ•• = 0.5. Based on (18), we discuss their decision results and compare them. The results are shown in Table 5.

From Table 5, the case of κ_λ•• = μ_λ•• and η_λ•• = ν_λ•• reported in Ref. [25] has the best performance. Generally speaking, three-way decisions with the aid of point operators can provide us with flexibility for the selection. Compared to the original three-way decisions of Ref. [23], we may obtain the corresponding decision rules under a given value of n and have more options. This example elaborates the optimization model of (18) and its solution. More importantly, it further explains the determination of n and answers which stage is most suitable for making the decision. In order to illustrate the characteristics of our proposed method, we also briefly summarize the main existing research works on the point operator and then compare them with our proposed method. The summary is shown in Table 6.
From Table 6, the representation of the point operator and the determination of the parameter n constitute two criteria for the comparison. In light of these criteria, we can find that all the existing works adopt the point operator F_{κ_f, η_f}. Thus,
Table 6
The summary of the main existing research works on the point operator.

Literature    The representation of the point operator      The determination of the parameter n
Ref. [2]      F_{κ_f, η_f} and other representations        Null
Ref. [4]      F_{κ_f, η_f}                                  Null
Ref. [25]     F_{κ_f, η_f}                                  The risk attitude of the decision maker
Ref. [36]     F_{κ_f, η_f} and other representations        Null
Ref. [37]     F_{κ_f, η_f} and other representations        Null
This paper    F_{κ_f, η_f}                                  Information entropy theory
F_{κ_f, η_f} of this paper is regarded as a representative IFPO. On the one hand, the point operator has good semantics and is easily used in decision-making problems [25]. On the other hand, Burillo and Bustince [4] proved that the point operator has a favourable property and can recover the fuzzy sets used in the construction from the IFSs. Besides, we design the information entropy as a criterion for the determination of the parameter n. Although Ref. [25] pointed out that the determination of the parameter n is associated with the risk attitude of the decision maker, it lacks a specific method. Unlike the results of the existing papers, our proposed method provides a theoretical explanation based on Ref. [9]. As stated by Pedrycz [32], the IFPO can generate a granular hierarchy from the granular computing perspective. Meanwhile, we also design a solution for the choice of the granular layer.
6. Conclusions
Considering the impact of the IFPO on the loss functions, we reconstruct the basic model of IFDTRSs reported in Ref. [23] and analyze the new derivation of three-way decisions. Under the variation scenario of the loss functions, we prove that the prerequisites among the loss functions still hold in each stage, which implies that we can directly utilize Bayesian decision theory to deduce three-way decisions. On the one hand, according to the variation of the loss functions, we design the corresponding procedure for three-way decisions and predict reasonable results in each stage. Three-way decisions with point operators can provide us with a flexible mechanism. On the other hand, from the continuous perspective, we further investigate the determination of n with the aid of information entropy theory and form a criterion for the application of the point operator. Our method can be very useful in dealing with many uncertain and risky management decision-making problems, such as the selection of a manufacturing partner, the trust of a P2P platform, and the health evaluation of employees. We intend to further investigate the mechanism of the IFPO parameters κ and η on the decision rules in our future work.
Acknowledgements
This work is partially supported by the National Science Foundation of China (Nos. 71401026, 71432003, 71571123, 61273209, 71571123) and the Fundamental Research Funds for the Central Universities of China (No. ZYGX2014J100), the Social Science Planning Project of the Sichuan Province (No. SC15C009) and the Sichuan Youth Science and Technology Innovation Team (2016TD0013).
References
[1] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Set Syst. 20 (1986) 87–96.
[2] K.T. Atanassov, Remark on the intuitionistic fuzzy sets-III, Fuzzy Set Syst. 75 (1995) 401–402.
[3] N. Azam, J.T. Yao, Analyzing uncertainties of probabilistic rough set regions with game-theoretic rough sets, Int. J. Approximate Reasoning 55 (2014) 142–155.
[4] P. Burillo, H. Bustince, Construction theorems for intuitionistic fuzzy sets, Fuzzy Set Syst. 84 (3) (1996) 271–281.
[5] K.H. Chang, C.H. Cheng, A risk assessment methodology using intuitionistic fuzzy set in FMEA, Int. J. Syst. Sci. 41 (12) (2010) 1457–1471.
[6] S.M. Chen, J.M. Tan, Handling multicriteria fuzzy decision-making problems based on vague set theory, Fuzzy Set Syst. 67 (2) (1994) 163–172.
[7] R.O. Duda, P.E. Hart, Pattern Classification and Scene Analysis, Wiley Press, New York, 1973.
[8] X.F. Deng, Y.Y. Yao, Decision-theoretic three-way approximations of fuzzy sets, Inf. Sci. 279 (2014a) 702–715.
[9] X.F. Deng, Y.Y. Yao, A multifaceted analysis of probabilistic three-way decisions, Fundamenta Informaticae 132 (2014b) 291–313.
[10] S. Greco, R. Slowinski, Y.Y. Yao, Bayesian decision theory for dominance-based rough set approach, in: J. Yao (Ed.), RSKT 2007, LNAI 4481, Springer, Berlin, 2007, pp. 134–141.
[11] D.D. Hong, C.H. Choi, Multicriteria fuzzy decision-making problems based on vague set theory, Fuzzy Set Syst. 114 (1) (2000) 103–113.
[12] B.Q. Hu, Three-way decisions space and three-way decisions, Inf. Sci. 281 (2014) 21–52.
[13] B. Huang, Y.L. Zhang, H.X. Li, Information granulation and uncertainty measures in interval-valued intuitionistic fuzzy information systems, Eur. J. Oper. Res. 231 (1) (2013) 162–170.
[14] B. Huang, C.X. Guo, Y.L. Zhang, H.X. Li, X.Z. Zhou, Intuitionistic fuzzy multigranulation rough sets, Inf. Sci. 277 (2014) 299–320.
[15] X.Y. Jia, W.H. Liao, Z.M. Tang, L. Shang, Minimum cost attribute reduction in decision-theoretic rough set models, Inf. Sci. 219 (2013) 151–167.
[16] H.M. Lee, S.Y. Lee, T.Y. Lee, J.J. Chen, A new algorithm for applying fuzzy set theory to evaluate the rate of aggregative risk in software development, Inf. Sci. 153 (2003) 177–197.
[17] Y. Li, C. Zhang, J.R. Swan, An information filtering model on the web and its application in jobagent, Knowl. Based Syst. 13 (2000) 285–296.
[18] W. Li, D.Q. Miao, W.L. Wang, N. Zhang, Hierarchical rough decision theoretic framework for text classification, in: Proceedings of the 9th IEEE International Conference on Cognitive Informatics, 2010, pp. 484–489.
[19] H.X. Li, X.Z. Zhou, Risk decision making based on decision-theoretic rough set: A three-way view decision model, Int. J. Comput. Intell. Syst. 4 (2011) 1–11.
[20] H.X. Li, L.B. Zhang, B. Huang, X.Z. Zhou, Sequential three-way decision and granulation for cost-sensitive face recognition, Knowl. Based Syst. 91 (2016) 241–251.
[21] W.T. Li, W.H. Xu, Double-quantitative decision-theoretic rough set, Inf. Sci. 316 (2015) 54–67.
[22] D.C. Liang, D. Liu, A novel risk decision making based on decision-theoretic rough sets under hesitant fuzzy information, IEEE Trans. Fuzzy Syst. 23 (2) (2015b) 237–247.
[23] D.C. Liang, D. Liu, Deriving three-way decisions from intuitionistic fuzzy decision-theoretic rough sets, Inf. Sci. 300 (2015b) 28–48.
[24] P. Lingras, M. Chen, D.Q. Miao, Rough cluster quality index based on decision theory, IEEE Trans. Knowl. Data Eng. 21 (7) (2009) 1014–1026.
[25] H.W. Liu, G.J. Wang, Multi-criteria decision-making methods based on intuitionistic fuzzy sets, Eur. J. Oper. Res. 179 (2007) 220–233.
[26] D. Liu, Y.Y. Yao, T.R. Li, Three-way investment decisions with decision-theoretic rough sets, Int. J. Comput. Intell. Syst. 4 (2011b) 66–74.
[27] D. Liu, T.R. Li, D. Ruan, Probabilistic model criteria with decision-theoretic rough sets, Inf. Sci. 181 (2011b) 3709–3722.
[28] D. Liu, T.R. Li, D.C. Liang, Three-way government decision analysis with decision-theoretic rough sets, Int. J. Uncertainty Fuzziness Knowledge Based Syst. 20 (Suppl. 1) (2012) 119–132.
[29] D. Liu, T.R. Li, D.C. Liang, Three-way decisions in dynamic decision-theoretic rough sets, in: Proceedings of the 8th International Conference on Rough Sets and Knowledge Technology, LNAI 8171, 2013, pp. 288–299.
[30] C. Luo, T.R. Li, H.M. Chen, Dynamic maintenance of three-way decision rules, in: D. Miao (Ed.), RSKT 2014, LNAI 8818, Springer, Berlin, 2014, pp. 801–811.
[31] Z. Pawlak, Rough sets, Int. J. Comput. Inf. Sci. 11 (1982) 341–356.
[32] W. Pedrycz, Granular Computing: Analysis and Design of Intelligent Systems, CRC Press/Francis Taylor, Boca Raton, 2013.
[33] Y.H. Qian, H. Zhang, Y.L. Sang, J.Y. Liang, Multigranulation decision-theoretic rough sets, Int. J. Approximate Reasoning 55 (2014) 225–237.
[34] B.Z. Sun, W.M. Ma, H.Y. Zhao, Decision-theoretic rough fuzzy set model and application, Inf. Sci. 283 (2014) 180–196.
[35] S.M. Taheri, J. Behboodian, A Bayesian approach to fuzzy hypotheses testing, Fuzzy Set Syst. 123 (2001) 39–48.
[36] M.M. Xia, Z.S. Xu, Generalized point operators for aggregating intuitionistic fuzzy information, Int. J. Intell. Syst. 25 (2010) 1061–1080.
[37] M.M. Xia, Point operators for intuitionistic multiplicative information, J. Intell. Fuzzy Syst. 28 (2015) 615–620.
[38] Z.S. Xu, R.R. Yager, Some geometric aggregation operators based on intuitionistic fuzzy sets, Int. J. General Syst. 35 (4) (2006) 417–433.
[39] Z.S. Xu, Intuitionistic fuzzy aggregation operators, IEEE Trans. Fuzzy Syst. 15 (6) (2007) 1179–1187.
[40] J.T. Yao, J.P. Herbert, Web-based support systems with rough set analysis, in: M. Kryszkiewicz (Ed.), RSEIP 2007, LNAI 4585, Springer, Berlin, 2007, pp. 360–370.
[41] Y.Y. Yao, S.K.M. Wong, P. Lingras, A decision-theoretic rough set model, in: Z.W. Ras, M. Zemankova, M.L. Emrich (Eds.), Methodologies for Intelligent Systems, 5, North-Holland, New York, 1990, pp. 17–24.
[42] Y.Y. Yao, S.K.M. Wong, A decision theoretic framework for approximating concepts, Int. J. Man-Machine Studies 37 (1992) 793–809.
[43] Y.Y. Yao, Probabilistic rough set approximation, Int. J. Approximate Reasoning 49 (2008) 255–271.
[44] Y.Y. Yao, Three-way decisions with probabilistic rough sets, Inf. Sci. 180 (2010) 341–353.
[45] Y.Y. Yao, The superiority of three-way decision in probabilistic rough set models, Inf. Sci. 181 (6) (2011) 1080–1096.
[46] Y.Y. Yao, X.F. Deng, Sequential three-way decisions with probabilistic rough sets, in: Proc. 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing, 2011, pp. 120–125.
[47] Y.Y. Yao, Granular computing and sequential three-way decisions, in: P. Lingras (Ed.), RSKT 2013, LNAI 8171, Springer-Verlag, Berlin, 2013, pp. 16–27.
[48] H. Yu, Z.G. Liu, G.Y. Wang, An automatic method to determine the number of clusters using decision-theoretic rough set, Int. J. Approximate Reasoning 55 (2014) 142–155.
[49] L.A. Zadeh, Fuzzy sets, Inf. Control 8 (1965) 338–353.
[50] X.Y. Zhang, D.Q. Miao, Reduction target structure-based hierarchical attribute reduction for two-category decision-theoretic rough sets, Inf. Sci. 277 (2014) 755–776.