Cloud Service Provider Evaluation System using
Fuzzy Rough Set Technique
Parwat Singh Anjana, Priyanka Badiwal, Rajeev Wankar, Swaroop Kallakuri* (IEEE member), and C. Raghavendra Rao
School of CIS, University of Hyderabad, Hyderabad, India - 500019
*Scottline LLC, Bellaire, TX 77401
{anjana.uoh, priyanka.badiwal, rajeev.wankar, crrcs.ailab}@gmail.com, *swaroop@scottline.com
Abstract—Cloud Service Providers (CSPs) offer a wide variety of scalable, flexible, and cost-efficient services to users on an on-demand, pay-per-use basis. However, the vast diversity of available services makes it challenging for a user to determine and select the most suitable service. Further, users sometimes need to hire services from multiple service providers, which leads to management difficulties concerning accounts, security, support, interfaces, and the associated Service Level Agreements (SLAs). To avoid such problems, a Cloud Service Broker (CSB) that is aware of service offerings and of the user's Quality of Service (QoS) requirements benefits both service providers and users. We propose a cloud service brokerage architecture using the fuzzy rough set technique, which facilitates ranking and selecting services based on users' QoS requirements and finally monitors service execution. We use the fuzzy rough set technique for dimension reduction and the weighted Euclidean distance to rank service providers. To prioritize the user's QoS request, we use user-assigned weights; we also incorporate system-assigned weights to capture the relative importance of QoS attributes. We compared the proposed ranking technique with an existing technique based on system response time, and demonstrated its efficiency with the help of a case study. The case-study results show that the proposed approach is scalable, resilient, and produces better results with less searching time.
Index Terms—Cloud Service Provider, Cloud Service Broker,
Fuzzy Rough Set Theory, Reduct, Quality of Service.
I. INTRODUCTION
Over the past few years, the Cloud has emerged as a new utility for computing over the Internet. With this emergence, Cloud Service Providers (CSPs) offer a wide variety of flexible, scalable, on-demand, pay-as-you-go online resources to users [1]. Nowadays, the Cloud has become an indispensable part of many organizations, and it requires the right approach for adoption, deployment, and preservation of resources [2].
Also, it introduces several challenges for users in selecting the best service among a considerable number of services [3]. Further, an organization sometimes needs to rent different services from different providers, which leads to the challenges of operating multiple interfaces, accounts, support channels, and Service Level Agreements (SLAs) [2]. To facilitate the providers and to circumvent the difficulties faced by an organization (in ranking, selecting, and dealing with multiple service providers), we need a Cloud Service Broker (CSB). According to Gartner [4], a service broker is a third party (an individual or an organization) between providers and users. It provides intermediation, aggregation, and arbitrage services to consult, mediate, and facilitate cloud computing solutions on behalf of users or business organizations [4]. An Intermediation service broker offers value-added services on top of existing services by appending specific capabilities, e.g., identity, security, and access reporting and administration. To advance data transfer security and integration, an Aggregation service broker consolidates and integrates various services, for example, data integration and portability assurance between different service providers and users. An Arbitrage service broker is similar to an aggregation broker, but the number of services being aggregated is not fixed: it acquires comprehensive services and presents them in smaller units with greater cumulative value to the users, e.g., a large block of bandwidth at wholesale rates. A service broker reduces processing costs, increases flexibility, provides access to various providers within one contract channel, and offers timely execution of services by removing acquisition limits through reporting and billing policies [5]. We propose an Intermediation service broker.
Recent advancements in service brokers focus on developing efficient systems that help the user select and monitor resources efficiently. Service execution and evaluation help in gathering historical information about services affected by dynamic, quantifiable, and non-quantifiable QoS attributes [6]. A quantifiable QoS attribute mainly refers to functional requirements and can be measured efficiently and without ambiguity, e.g., vCPU, service response time, cost, etc. Whereas a non-quantifiable attribute primarily depends on non-functional QoS requirements and cannot be quantified easily, e.g., security, feedback, support, etc. [1]. The importance of quantifiable and non-quantifiable attributes has been classified, but existing approaches do not present a technique to handle non-quantifiable QoS attributes efficiently and objectively. In existing techniques [1] [7], quantified patterns can be used to analyze user QoS requirements. However, user requirements are imprecise or vague, so we require a technique to handle them efficiently. A quantifiable (precise) attribute comprises only crisp (unambiguous) values, while a non-quantifiable attribute usually includes fuzzy values that cannot be quantified. We present a hybrid imprecision technique that handles both quantifiable and non-quantifiable attributes.
In the proposed technique, we employ the Fuzzy Rough Set Technique (FRST) to deal with a hybrid information system with real-valued entries (providing a solution for real-valued conditional attributes with a fuzzy decision) and for dimension reduction. For dimension reduction, all reducts of the decision system are computed using FRST, and the best reduct is selected to generate a Reduced Decision System. The best reduct is the reduct that contains the maximum number of attributes overlapping with the user's QoS request. The proposed architecture is an intermediation broker using FRST that offers cloud users the facility to rank, select, and monitor services based on their desired QoS requirements. The major contributions of this work are as follows:
• An efficient service broker: 1) to rank providers based on user QoS requirements, and 2) to select a service and monitor its execution.
• Incorporating user-assigned weights to prioritize user requirements, along with system-assigned weights, during the ranking procedure to improve ranking efficiency, subsequently allowing the user to choose their desired service provider from a ranked list.
• The principal focus is on dealing with a hybrid system and dimension reduction using FRST, and on improving the accuracy of service provider selection by incorporating dynamic and network-layer QoS parameters (dynamic and network-layer attributes change with time and with the availability of resources).
• User experiences collected from satisfied users are incorporated into future assignments, along with performance monitoring during service execution. By using these factors (past performance, user experience) in the service ranking, we enhance the correctness of the ranking procedure.
Roadmap: A detailed study of existing work and significant contributions to service provider selection (using rough and fuzzy sets) is presented in Section II. In Section III, we provide an overview of the Fuzzy Rough Set and the need for the Fuzzy Rough Set approach. Section IV introduces the proposed architecture and its basic blocks, along with a Ranking Algorithm. Section V gives a comprehensive case study to show the efficiency of the proposed technique in ranking (IaaS) service providers. Section VI concludes with some future directions.
II. RELATED WORK
A wide range of discovery and selection techniques has been developed for the evaluation of service providers based on user QoS requirements. This section presents the work carried out by researchers on service provider ranking and selection using rough and fuzzy set based techniques, along with some other significant contributions.
To address the challenges of service provider discovery and selection, Le et al. [8] presented a comprehensive survey of existing service selection approaches. They evaluated service provider selection approaches based on five aspects and characterized them into four groups (Multi-Criteria Decision Making, Multi-Criteria Optimization, Logic-Based, and Other Approaches). Multi-Criteria Decision-Making based approaches have been successfully applied to discover desired services. They include the Analytic Hierarchy Process (AHP), Multi-Attribute Utility Theory (MAUT), Outranking, and Simple Additive Weighting based techniques [8], which are extensions of Web service selection approaches. Godse et al. [9] presented an MCDM-based service selection technique and performed a case study to prove the significance of the methodology. Garg et al. [1] introduced an AHP-based service provider ranking framework known as SMICloud. This framework enables users to compare and select service providers using "Service Measurement Index (SMI)" attributes to satisfy users' QoS requirements. They computed key performance indicators defined in the "Cloud Service Measurement Index Consortium" [10] QoS standards to compare services. However, they did not examine trustworthiness, user experience, or network-layer attributes for ranking.
Alhamad et al. [11] introduced a fuzzy set theory based technique for service provider selection using availability, security, usability, and scalability as QoS attributes. Hussain et al. [12] proposed an MCDM-based selection technique for IaaS services and presented a case study on service provider selection among thirteen providers using five performance criteria. Ganghishetti et al. initially proposed a Cloud-QoS Management Strategy using the rough set technique in [13]. Specifically, they considered the ranking of IaaS services using QoS attributes. They extended their work in Modified C-QoSMS [14] and presented a case study using Random and Round Robin algorithms. However, they did not examine non-quantifiable attributes and considered only categorical information; hence they needed to discretize numerical values for service selection. Qu et al. [7] introduced a fuzzy hierarchy based trust evaluation using an inference system that evaluates user trust based on fuzzy QoS requirements and the progressive fulfillment of services to advance service provider selection. With a case study and simulation, they illustrated the effectiveness and efficiency of their model. In [15], Patiniotakis et al. introduced PuLSaR, a Preference-based Cloud Service Recommender system, which is an MCDM-based optimization brokerage technique; they used fuzzy set theory to deal with the vagueness of imprecise QoS attributes.
In [16], Aruna et al. suggested a fuzzy set based ranking framework for federated IaaS infrastructure to rank service providers based on QoS attributes and SLAs. Their framework consists of the three phases of the AHP process, namely decomposition of the problem, priority judgment, and aggregation, with simple rule inductions. Anjana et al. introduced the first contribution to Fuzzy Rough Set based service provider ranking in [3]. They proposed a Fuzzy Rough Set Based Cloud Service Broker (FRSCB) architecture in which they categorized QoS attributes into different types, including network-layer and non-quantifiable QoS attributes, and ranked the service providers by means of a total score using the Euclidean distance. They presented a case study with fifteen providers along with fifteen SMI-based attributes.
The proposed E-FRSCB is an extension of the work presented in the FRSCB architecture [3]. In this work, to deal with dynamic, quantifiable, and non-quantifiable SMI attributes, we propose a fuzzy rough set based hybrid technique. We assign different weights to attributes at varying levels of the ranking procedure, incorporate quantifiable and non-quantifiable attributes including network-layer parameters, fetch real-time values of dynamic attributes, and monitor service execution to improve subsequent ranking assignments. We also simulated the behavior of our work using CloudSim [17] and demonstrated the merit of using E-FRSCB in lieu of FRSCB.
III. FUZZY ROUGH SET THEORY (FRST)
In traditional set theory, human reasoning is described using Boolean logic, i.e., true or false (0/1), which is not sufficient for efficient reasoning. Therefore, there is a need for decision terms that take values within the interval from 0 to 1 to represent human reasoning. The Fuzzy Set Theory (FST) proposed by Lotfi Zadeh [18] in 1965 can capture human reasoning in the form of a degree d such that 0 ≤ d ≤ 1. For example, in FST a person can be healthy to degree 70% (d = 0.7) and unhealthy to degree 30% (d = 0.3), while in traditional set theory a person is either healthy or unhealthy. The FST membership function is given by Equation 1:
µ_X(y) ∈ [0, 1]        (1)
where y ∈ X, i.e., y is an element of the set X.
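As a plain illustration of Equation 1, the health-degree example above can be sketched with a simple linear membership function; the score range and thresholds below are hypothetical, not from the paper:

```python
# Illustrative (not from the paper): a linear membership function
# mapping a 0-100 health score to a degree of membership in the
# fuzzy set 'healthy', i.e., a value in [0, 1].
def healthy_degree(score, lo=20.0, hi=90.0):
    """Degree of membership in the fuzzy set 'healthy'."""
    if score <= lo:
        return 0.0
    if score >= hi:
        return 1.0
    return (score - lo) / (hi - lo)  # linear ramp between lo and hi

print(healthy_degree(69.0))  # a person 'healthy' to degree 0.7
```

With these (assumed) thresholds, a score of 69 yields d = 0.7, matching the example in the text.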
Rough Set Theory (RST), proposed by Pawlak [19], is a mathematical way to deal with the vagueness present in data. It has proven its importance in Artificial Intelligence, mainly in expert systems, decision systems, knowledge discovery, and many more. The fundamental advantage of using RST is that there is no need to hold any prior knowledge about the data. In this approach, the notion of a boundary region is used to describe the uncertainty associated with the data. The boundary region is determined from the upper and lower approximations of the set. A set is called rough when the boundary region is non-empty and crisp when the boundary region is empty [19]. Due to lack of space, we skip further details; please refer to [19].
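For intuition, the lower/upper approximations and the boundary region can be sketched on a toy example; the objects, attribute values, and target set below are hypothetical:

```python
# Illustrative toy example (not from the paper): lower and upper
# approximations of a target set X under the equivalence relation
# induced by a single categorical attribute.
objects = {"s1": "A", "s2": "A", "s3": "B", "s4": "B", "s5": "C"}
X = {"s1", "s2", "s3"}  # the set we try to describe

# Equivalence classes: objects indiscernible by the attribute value.
classes = {}
for obj, val in objects.items():
    classes.setdefault(val, set()).add(obj)

# Lower approximation: union of classes fully contained in X.
lower = set().union(*(c for c in classes.values() if c <= X))
# Upper approximation: union of classes intersecting X.
upper = set().union(*(c for c in classes.values() if c & X))
boundary = upper - lower  # non-empty => X is rough w.r.t. this attribute

print(sorted(lower), sorted(upper), sorted(boundary))
```

Here the boundary region {s3, s4} is non-empty, so X is rough with respect to this attribute.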
Reduct: “The minimal subset of the attributes which gives the same classification power of the elements of the universe as the whole is known as a reduct [20].” In other words, the attributes that are not part of a reduct are unnecessary for the classification of the universe. Therefore, to analyze the system, we can remove unnecessary attributes and still discern the objects of the system as in the original one. In this work, we used the reduct concept primarily for dimension reduction, which improves the accuracy of the service provider ranking and also reduces the searching time.
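A minimal sketch of reduct finding on a hypothetical decision table (a brute-force consistency check for illustration, not the discernibility-matrix method the paper uses later):

```python
from itertools import combinations

# Illustrative toy decision table (hypothetical values, not the
# paper's data): each row is an object, the last entry the decision.
table = [
    # a  b  c  decision
    (0, 1, 0, "yes"),
    (0, 1, 1, "yes"),
    (1, 0, 0, "no"),
    (1, 1, 0, "no"),
]

def classifies(attr_idx):
    """True if the attribute subset never maps two objects with
    different decisions onto the same attribute-value tuple."""
    seen = {}
    for row in table:
        key = tuple(row[i] for i in attr_idx)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

# A reduct: a minimal attribute subset that still classifies.
reducts = []
for r in range(1, 4):
    for subset in combinations(range(3), r):
        if classifies(subset) and not any(set(s) <= set(subset) for s in reducts):
            reducts.append(subset)
print(reducts)  # attribute 'a' alone preserves the classification
```

In this toy table, attribute a alone discerns the decisions, so {a} is the sole reduct and b, c can be dropped.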
One limitation of Rough Set Theory is that it can deal only with categorical or non-quantifiable attributes. In general, quantifiable and interval-fuzzy values also exist in real-world data, as explained in Sec. I. Rough Set Theory fails when we have quantifiable data in our Information System (IS) (TABLE II). One plausible solution is to perform discretization so that quantifiable attributes can be categorized and Rough Set Theory can be employed. Alternatively, we can use Fuzzy Set Theory to deal directly with quantifiable characteristics in the information system. However, to deal with a hybrid information system (as shown in TABLE II) that consists of both categorical (non-quantifiable) and quantifiable attributes, we need a hybrid technique. Therefore, the Fuzzy Rough Set Theory (FRST) can be employed for service provider ranking and selection.
FRST is a generalized form of the crisp Rough and Fuzzy set theories. It is derived from the approximation of a fuzzy set in a crisp approximation space. It is helpful when we are dealing with a decision system in which the conditional attributes are real-valued [21]. It is primarily employed in dimension reduction to improve classification accuracy in several respects, including storage, accuracy, and speed [21]. Dimension reduction is achieved by determining a reduct of the system. A reduct is a minimal subset of the attributes of the system that gives the same classification power as the entire set of attributes [20]. In a decision system with real-valued conditional attributes, this is done by finding a minimal set of conditional attributes that preserves the discernment information with respect to the decision attribute. For a detailed understanding of the discernibility-matrix based all-reduct computation (used in this work), refer to [22], [21].
The proposed technique deals with a hybrid real-valued system and also provides a solution for real-valued conditional attributes with a fuzzy decision. It computes all possible reducts of the decision system (TABLE III) using FRST (the all-reduct computation function given in [22]) and selects the best reduct using the Best Reduct Algorithm (Algo. 3) for dimension reduction. It incorporates user feedback and monitors service execution using the Service Execution Monitor (Sec. IV-B3) (once a service selection is made by the user) to improve the accuracy of further service selection and ranking, which is missing in most of the existing ranking techniques available in the literature.
IV. PROPOSED BROKERAGE ARCHITECTURE
A. System Architecture
The proposed architecture attempts to help users by providing brokerage services such as ranking providers, selecting the best provider, and subsequently executing and monitoring the service. The proposed Extended-Fuzzy Rough Set based Cloud Service Broker (E-FRSCB) brokerage architecture (Fig. 1) consists of several basic software components classified into three layers: the Cloud User Layer, the Cloud Service Broker Layer, and the Resource Layer. The Cloud User Layer includes the users either requesting service provider ranking or using services. The Cloud Service Broker (CSB) Layer is the central component of the architecture, responsible for ranking, selection, service execution, and monitoring (we focus on this layer). It consists of a Cloud Service Repository (CSR), a Service Execution Monitor (SEM), and a Broker Resource Manager (BRM). Finally, the Resource Layer includes the service providers along with service models, which are generated using a simulator (CloudSim) [17]. A user requesting brokering services at time ‘t’ with a QoS request and attribute weights is stored in an individual service definition document with the BRM. A detailed introduction to each component of E-FRSCB is given in Sec. IV-B.
B. E-FRSCB Components
1) Broker Resource Manager (BRM): The overall functionality of our proposed system, from ranking to service execution monitoring, is controlled by the Broker Resource Manager. Fig. 1 shows the BRM as a principal component of the E-FRSCB architecture. It has several sub-components, such as the Definition Document, the Ranking Component, and the Web Service Component. The profiles of providers are accumulated in an information container known as the Cloud Service Repository. The Ranking Component takes the user QoS request, performs ranking, and returns the ranked list of providers (as shown in Fig. 2). On every new user request, the Web Service Component is used to fetch the real-time values of dynamic attributes, including network-layer parameters. A user submits his QoS request using a GUI; on submission, the E-FRSCB Algorithm (Algo. 1) is invoked. Once a service selection is made by the user, the E-FRSCB algorithm contacts the selected provider. The SLA is established between the user and the provider via a negotiation process. Further, resources are reserved, and the service execution is performed. Finally, the Service Execution Monitor monitors the execution of the services and maintains an Execution Monitoring Log to keep track of service performance and facilitate the next ranking process.
Fig. 1: Extended Fuzzy Rough Set based Service Brokerage Architecture. (Square brackets show E-FRSCB algorithm steps.) The figure depicts the Cloud Service Consumers layer, the Cloud Service Broker Layer (Broker Resource Manager with its Web Service Component, Ranking Component, and Definition Document, along with the Cloud Service Repository and Service Execution Monitor), and the Resource Layer of Cloud Service Providers (CSP1 ... CSPn) accessed via Cloud Service API invocations.
(a) Definition Document: It is used to record the user QoS request along with the desired priorities given by the user in the form of weights for each requested attribute (TABLE IV). The Definition Document is utilized in the ranking and service execution monitoring phases.
(b) Ranking Component: Ranking begins with the invocation of the Ranking Algorithm (Algo. 2) of the Ranking Component. It is a central part of the BRM and is composed of several phases, such as information gathering, analysis, dimension reduction, and ranking. For these, it interacts with the Cloud Service Repository (CSR), the Service Execution Monitor (SEM), and the Web Service Component (WSC).
Fig. 2: Ranking Procedure: A High-level View. The figure shows the Ranking Component's phases (the Profiling Phase producing the Information System, the Clustering Phase producing the Decision System, the Search Space Reduction Phase producing the Reduced Decision System, and the Ranking Phase computing the Score as the weighted Euclidean distance between the weighted system and the user request), interacting with the Definition Document, the Web Service Component (for network-layer and dynamic QoS attributes), the Cloud Service Repository, and the Service Execution Monitor, and returning the ranked list of service providers (CSPs).
Algorithm 1 E-FRSCB Algorithm
Input: Definition Document, Feedback, CSPs Service Information
Output: Ranked List of Cloud Service Providers (CSPs), Service Execution
1: procedure e-frscb(useri)
2: STEP [A] (i) E-FRSCB ←User-Request(QoS values, weights)
3: STEP [B] (i) definition Document ←STEP [A] (i)
4: (ii) Ranked List ←Ranking Algorithm(DD, UF)
5: STEP [C] (i) User-ID ←STEP [B] (ii)
6: (ii) User-Select one CSP, send CSP-ID to E-FRSCB
7: (iii) User-invoke(CS-API) ←Selected(CSP)
8: (iv) E-FRSCB-SLA-User-ID ←establish-SLA(User-ID, CSP-ID)
9: STEP [D] (i) E-FRSCB-BRM-Resource Reservation(User-ID, Service API)
10: (ii) E-FRSCB-BRM-Service Exe(User-ID, Service API, SLA-User-ID)
11: STEP [E] (i) Profile-CSP-ID ←User-ID-Feedback
/∗BRM: Broker Resource Manager; CSR: Cloud Service Registry; DD: Definition
Document; UF: User Feedback; SEM: Service Execution Monitor; SLA: Service Level
Agreement; DS: Decision System; DQoS: Dynamic QoS Attributes; NLQoS: Network
Layer QoS Attributes; CSP: Cloud Service Provider. ∗/
• The Profiling Phase performs information gathering by sending requests to the service repository, the execution monitor, and the web service component. Once the profiling phase receives the responses from all these parts, it generates the Information System (IS) based on the latest information from the monitoring phase and the dynamic QoS attributes, including network-layer QoS attributes (using the WSC to get the current values of QoS attributes). At the end of this phase, it sends the IS along with the user QoS request to the next step (the clustering phase) for further information analysis.
• In the Clustering Phase, K-means [23] is employed over the Information System to design the Decision System (DS); it yields clustering labels. Each object of the Information System (a service provider) is associated with its respective clustering label to generate the Decision System, i.e., each cluster is given a distinct label, and the labels are used as the decision attribute in the Decision System. In this process, if service providers are grouped under the same clustering label, then they offer related services. To determine the optimal number of clusters, the Elbow method [24] is employed, and the decision attribute is kept at the end of the Decision System (last row), as shown in TABLE III.
• In the Dimension Reduction Phase, the reduct concept of Fuzzy Rough Set Theory is applied. All reducts of the DS (TABLE V) are computed using the all-reduct computation function from [22], and the best reduct is selected using the Best Reduct Algorithm (Algo. 3). The best reduct is the reduct that contains the maximum number of QoS attributes overlapping with the user's QoS request. If more than one reduct has the same number of user-requested attributes, the Best Reduct Algorithm breaks the tie by selecting the reduct with the higher number of dynamic attributes. Based on the selected reduct, the Reduced Decision System (RDS) is generated. Finally, the ranking of providers is performed based on the RDS and the Definition Document.
• In the Ranking Phase, the Weighted Euclidean Distance (Score) is computed for each service provider using the attribute values of the provider and of the user request. A smaller Score represents a better service provider; therefore, ranking is done in increasing order of Score. Finally, the ranking algorithm terminates by sending the ranked list to the BRM.
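The Ranking Phase described above can be sketched as follows; the provider names, attribute values, and weights are hypothetical, and attributes are assumed to be normalized to [0, 1]:

```python
import math

# Illustrative sketch of the Ranking Phase (hypothetical data, not
# the paper's case study). Smaller Score = better rank.
user_request = {"cost": 0.2, "response_time": 0.3, "availability": 1.0}
weights      = {"cost": 0.3, "response_time": 0.3, "availability": 0.4}

providers = {
    "CSP1": {"cost": 0.4, "response_time": 0.5, "availability": 0.9},
    "CSP2": {"cost": 0.1, "response_time": 0.2, "availability": 0.95},
}

def score(p):
    """Weighted Euclidean distance between a provider and the request."""
    return math.sqrt(sum(weights[a] * (p[a] - user_request[a]) ** 2
                         for a in user_request))

ranked = sorted(providers, key=lambda name: score(providers[name]))
print(ranked)  # providers in ascending order of Score
```

With these assumed values, CSP2 lies closer to the request under the given weights and is ranked first.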
Relative Weight Assignment for QoS Attributes: We need to consider the relative importance of attributes for service provider comparison before calculating the respective Score of the providers. For this, we need to assign weights to the QoS attributes. We employ System-Assigned Weights and User-Assigned Weights for the respective attributes (user-requested and network-layer QoS in the RDS, TABLE VI). During weight assignment, at the first level the system assigns weights, while at the second level the user can assign his preferred weights. By doing this, we incorporate both user preferences and the actual relative importance of attributes in the ranking process. Assigning weights at the lower level prioritizes the user request, while at the higher level it also gives importance to the attributes that are not part of the user request but have a critical role in the ranking process (e.g., network attributes).
System-Assigned Weights: If a user-requested attribute is not present in the RDS, then that attribute does not have enough potential with respect to the selected reduct and hence is not counted in the ranking process. The attributes that are not part of the user request but are part of the RDS are also assigned weights, based on the Golden Section Search Method [25]: 66.67% (i.e., 0.67) of the weight is allocated to the user-requested attributes and 33.33% (i.e., 0.33) to the others that are part of the RDS (e.g., network-layer attributes, monitored attributes). The sum of the weights is considered to be equal to 1. A critical remark here is that if the user does not wish to assign weights, then only one level of weight assignment is performed for the ranking process.
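A minimal sketch of this two-group system weight assignment, assuming the 0.67/0.33 split is spread evenly within each group (the attribute names are hypothetical):

```python
# Illustrative sketch (hypothetical attribute names): distribute the
# golden-section split -- 0.67 to user-requested attributes present
# in the RDS, 0.33 to the remaining RDS attributes -- evenly within
# each group so that all weights sum to 1.
rds_attrs = ["cost", "availability", "latency", "jitter"]
user_requested = {"cost", "availability"}

in_req = [a for a in rds_attrs if a in user_requested]
others = [a for a in rds_attrs if a not in user_requested]

weights = {a: 0.67 / len(in_req) for a in in_req}
weights.update({a: 0.33 / len(others) for a in others})

print(weights)  # per-attribute system-assigned weights, summing to 1
```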
User-Assigned Weights: The user assigns weights to indicate the relative importance of the attributes sought in his request. A user can use his own scale to assign different weights to the attributes in his request [1] (as shown in TABLE I). This weight assignment is done based on the suggestion given in the AHP technique [26]; for this, the sum of all weights need not be equal to one, unlike the system-assigned weights. However, for each first-level user-requested attribute (e.g., Accountability, Agility, Cost, Assurance, Security, and Satisfaction), we consider the sum of the weights to be equal to 1, as shown in TABLE IV. This weight assignment technique was proposed by Garg et al. [1] to assign different weights to each attribute in their ranking technique, which we adopted for
TABLE I: Relative Importance of QoS Attributes
Relative Importance Value
Equally important 1
Somewhat more important 3
Definitely more important 5
Much more important 7
Extremely more important 9
weight assignment in our proposed technique.
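Raw importances on the 1-9 scale of TABLE I can be normalized so that the sub-attribute weights under one first-level attribute sum to 1, as sketched below; the sub-attribute names and raw scores are hypothetical:

```python
# Illustrative sketch (hypothetical sub-attributes of 'Cost'): raw
# importances on the 1-9 AHP-style scale of TABLE I, normalized so
# the weights under one first-level attribute sum to 1.
raw = {"acquisition_cost": 5, "on_going_cost": 3, "profit_sharing": 1}
total = sum(raw.values())
user_weights = {k: v / total for k, v in raw.items()}
print(user_weights)  # normalized user-assigned weights
```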
(c) Web Service Component (WSC): It is used to fetch the current state of dynamic attributes, including network-layer attributes, from service providers using web services. The state of dynamic QoS attributes changes from one ranking process to another, and we need the real-time values to improve the accuracy of the ranking process. For this, we used various APIs (Cloud Harmony APIs [27]) to fetch the current state of the dynamic attributes of the service providers. Dynamic attributes are specified in advance, so there is no need to determine them again during each ranking process.
2) Cloud Service Repository (CSR): It is a repository service employed to store information about service providers' services. A service provider needs to register its QoS plans/services, capabilities, offerings, and initial SLAs in our service repository. In the proposed architecture, we assume providers have recorded their services in the service repository beforehand. From the service repository we can then obtain the required information about the providers, their QoS service offerings, and other information to generate the initial Information System (IS), as shown in TABLE II. In the absence of a global service repository, the service providers need to register their services in our local CSR registry. Fig. 1 and Fig. 2 show our local service repository, as we used a local service repository for experimental purposes.
3) Service Execution Monitor (SEM): The process of cloud monitoring includes dynamic tracking of the attributes relevant to virtualized resources, for example, vCPU, storage, network, etc. [28]. The configuration of cloud computing resources is a genuinely challenging task, as it involves various heterogeneous virtualized computing resources [29]. Furthermore, there will sometimes be massive demand for a specific service, and because of changes in the requirements and in the availability of various resources, including network parameters, the performance may change, which directly and indirectly affects the user's service experience. Therefore, there is a need for monitoring to keep track of resource operation at high demand, to detect fluctuations in performance, and to account for SLA breaches of specific QoS attributes [30]. Performance fluctuations may also happen due to failures and other run-time configuration changes. Cloud monitoring solutions can be classified into three types: Generic solutions, Cluster/Grid solutions, and Cloud-specific solutions. Cloud-specific solutions are designed explicitly to monitor computing environments in the cloud and are developed by academic researchers or commercial efforts [28]. Existing Cloud-specific solutions (tools) include Amazon CloudWatch, Private Cloud Monitoring System (PCMONS) (open source [31]), Cloud Management System (CMS), Runtime Model for Cloud Monitoring (RMCM), and cAdvisor (open source) [28].
In a service broker, service monitoring can be used to improve provider ranking and to build healthy competition between providers, encouraging them to provide excellent services to the user for the next ranking process. In the proposed E-FRSCB, the execution monitor is implemented using the technique presented in [31]. It can also be implemented using a third-party monitoring service (e.g., Amazon CloudWatch, AppDynamics). The Service Execution Monitor is used to guarantee that the deployed services perform at the expected level to satisfy the SLA. This component includes monitoring tasks for the currently running services; this responsibility consists of the detection and collection of attribute values. Data collected during monitoring is utilized for the next ranking process and is sent to the Broker Resource Manager whenever the BRM issues a new request for data.
Algorithm 2 Ranking Algorithm
Input: Definition Document, User Feedback
Output: Ranked List of Service Providers (CSPs)
 1: procedure ranking(definitionDocument, userFeedback)
 2:   STEP [1] (i)   RC ← definitionDocument
 3:            (ii)  RC ← userFeedback
 4:   STEP [2] (i)   Fetch latest information about QoS from different components
 5:                  (a) qosAttributes ← request(CSR)
 6:                  (b) dynamicQoSValues ← request(WSC)
 7:                  (c) performanceQoS ← request(SEM)
 8:            (ii)  IS ← generate( [1](ii), [2](i)(a), [2](i)(b), [2](i)(c) )
 9:   STEP [3] (i)   clusteringLabels ← kmeans(IS, optimalClusters, nstart)
10:            (ii)  decisionAttribute ← clusteringLabels(IS)
11:            (iii) DS ← generateDS(IS, decisionAttribute)
12:   STEP [4] (i)   allReducts ← FS.all.reducts.computation(DS)
13:            (ii)  bestReduct ← best.Reduct(DS, allReducts, definitionDocument)
14:            (iii) RDS ← generate(DS, bestReduct)
15:   STEP [5] (i)   RDSN ← generate(RDS, NQoS)
16:            (ii)  Weight Assignment
17:                  (a) WRDSN ← assign(RDS QoS 67%, NQoS 33%)
18:                  (b) if definitionDocument(user.Weights) then
19:                        WRDSN ← assign(WRDSN, user.Weights)
20:   STEP [6] (i)   scoreCSP ← weightedEDistance(userRequest, WRDSN)
21:            (ii)  if scores are equal, give priority to the CSP with more dynamic QoS
22:            (iii) rankedListCSPs ← ascendingOrder(scoreCSP)
23:            (iv)  return(rankedListCSPs)
/* RC: Ranking Component; IS: Information System; CSR: Cloud Service Registry;
   SEM: Service Execution Monitor; WSC: Web Service Component; DS: Decision System;
   RDS: Reduced Decision System; NQoS: Network QoS Attributes; RDSN: Reduced
   Decision System + Network QoS Attributes; WRDSN: Weighted RDSN. */
Algorithm 3 Best Reduct Algorithm
Input: Decision System, All Reducts of Decision System, Definition Document
Output: Best Reduct
1: procedure best.Reduct(decisionSystem, allReducts, definitionDocument)
2:   STEP [A] find the number of QoS attributes overlapping with the User QoS Request for each reduct
3:   STEP [B] select all reducts that have the maximum number of overlapping QoS
4:   STEP [C] if more than one such reduct is selected:
5:            (i)   count the number of dynamic QoS attributes in each such reduct
6:            (ii)  select the reduct with the greater number of dynamic QoS attributes
7:            (iii) if more than one such reduct has an equal number of dynamic QoS attributes, select any one reduct
8:   STEP [D] return(selectedReduct)
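Algorithm 3 amounts to a lexicographic maximization: first by overlap with the user's QoS request, then by the number of dynamic QoS attributes. A minimal Python sketch (function and argument names are illustrative, not the paper's implementation):

```python
def best_reduct(reducts, user_qos, dynamic_qos):
    """Algorithm 3 as a lexicographic maximum: prefer the reduct with the
    most QoS attributes overlapping the user request (STEP [A]-[B]), then
    the one with more dynamic QoS attributes (STEP [C]); remaining ties
    are broken arbitrarily by max()."""
    def key(reduct):
        return (len(set(reduct) & set(user_qos)),
                len(set(reduct) & set(dynamic_qos)))
    return max(reducts, key=key)
```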
V. CASE STUDY: COMPUTE SERVICE PROVIDER (IAAS) RANKING BASED ON USER QOS REQUIREMENT
In this section, the ranking method of E-FRSCB (given in Sec. IV) is analyzed for an IaaS service with the help of a case study example. However, it can also work with other types of services such as SaaS and PaaS. RStudio [32] is used as the development IDE and the R language [33] for implementation. To submit a user request to the system, a GUI is developed using the fgui [34] package. We referred to the Service Measurement Index (SMI) metrics defined by the Cloud Service Measurement Index Consortium [10] for the evaluation of IaaS service providers. To design the initial Information System (IS) (TABLE II), we considered 10 service providers along with a total of 17 QoS attributes (scalable). We designed the IS for general-purpose IaaS services by considering 8 first-level SMI metrics, which consist of 17 third-level attributes.
We experimented with synthesized data but tried to incorporate actual QoS values. We have taken data from different sources, including providers' websites, SLAs, the literature [1], and CloudSim [17] for most of the QoS attributes, as shown in TABLE II. For the dynamic and network-layer attributes (Availability, Latency, Throughput), data is collected using the Cloud Harmony API [27]. For the performance-intensive attributes (vCPU, Response Time), information from the Service Execution Monitor is used. However, in the initial IS design, actual values of vCPU speed offered by the providers are used, and Response Time values are assigned randomly. The non-quantifiable attributes (e.g., security, accountability, feedback, and support) are assigned random values. System-assigned attribute weights are computed using the Golden Section Search Method [25], while user-specified weights are assigned as explained in Sec. IV-B1.
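The paper cites the Golden Section Search Method [25] for computing system-assigned weights but does not detail the objective being optimized. The sketch below therefore shows only the generic method itself (minimizing a unimodal function on an interval), not the paper's specific weight-assignment objective.

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2            # 1/phi ~= 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                         # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                   # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```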
There is no single globally accepted standard in the literature for standardizing categorical (non-quantifiable) attributes. A categorical attribute represents an enumerated type of quality where the expected value of the attribute is presented in the form of levels [35]. In the proposed architecture, for categorical attributes such as accountability, support, security, and user feedback, different levels ([1:10]) are introduced based on the work presented in [3]. We demonstrate here the quantification of security levels in the proposed technique (level quantification for support type, accountability, and user feedback is done similarly).
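One simple way to realize such [1:10] leveling is to bucket a raw indicator score into levels by cut-off thresholds. The cut-offs below are purely hypothetical for illustration; the paper assigns levels based on [3] and does not publish a threshold table.

```python
def to_level(score, cutoffs):
    """Bucket a raw indicator score into a discrete level 1..len(cutoffs);
    cutoffs[i] is the inclusive upper bound of level i+1."""
    for level, upper in enumerate(cutoffs, start=1):
        if score <= upper:
            return level
    return len(cutoffs)

# Hypothetical cut-offs: number of satisfied third-degree security
# indicators (out of the 130 listed in [36]) mapped onto levels 1..10.
security_cutoffs = [5, 12, 20, 30, 42, 56, 72, 90, 110, 130]
```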
Security is one of the critical QoS metrics; its primary objective is to strengthen security mechanisms and mitigate threats. It helps in developing user trust and improves operational performance. In E-FRSCB, security consists of ten levels, with random values assigned for the providers. It covers various performance indicators such as certificate provisioning, vCPU configuration, firewall, access and log management policies, encryption, etc. A cloud framework proposed by the European Union Network and Information Security Agency consists of 10 first-degree, 69 second-degree, and 130 third-degree security indicators [36]. The Cloud Security Alliance introduced fundamental principles that providers must follow and that support users in security estimation [37]. We quantified security into ten different levels [1:10] (easily extendable) based on the risk of security threats and the security performance indicators. In this quantization, level 1 represents the most straightforward security mechanisms, while level 10 represents the most complex and highest level of security mechanisms offered by the providers. Each quantized security level consists of one or more security performance indices (130 third-degree indicators). As a straightforward example, at level 1 only provider and user authentication is performed; at level 2, in addition to level 1, multi-factor and fine-grained authentication and authorization are performed. Similarly, at further levels (3-10), different firewall administration, privileged access controls, application identification, and other security measures are applied to achieve higher security. Next, we present the proposed ranking method in multiple steps.
In the first step, whenever a new user submits a request (TABLE IV), the ranking algorithm (Algo. 2) is invoked. During this process, to generate the initial IS, it fetches data from the CSR (repository) for the IaaS services offered by the service providers, as shown in TABLE II. To fetch the actual values of the dynamic and network-layer attributes, the ranking algorithm sends a request to the web service component to execute the web services.
In the second step, K-means clustering is performed on the generated IS (the optimal number of clusters 'k' is determined using the Elbow Method [24]). The generated clustering labels are used as the decision attribute to design the Decision System (DS), and the corresponding clustering label is attached to each provider. In the paper, for presentation clarity, we transposed the tables (IS, DS, Reduced Decision System, and Ranking Table) so that rows show the attributes and columns show the objects of the IS/DS. Therefore, the clustering labels are kept in the last row, as shown in TABLE III.
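The second step can be sketched as below. The paper uses R's kmeans() with the Elbow Method; this is a plain NumPy stand-in that also returns the within-cluster sum of squares (WSS), so the elbow (the k where the WSS curve bends) can be inspected.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means; a stand-in for R's kmeans() used in the paper."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each object to its nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    wss = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return labels, wss

def elbow_wss(X, k_max):
    """WSS for k = 1..k_max (Elbow Method [24])."""
    return [kmeans(X, k)[1] for k in range(1, k_max + 1)]

# The clustering labels then serve as the decision attribute of the DS,
# e.g.: decision_attribute = kmeans(IS_matrix, optimal_k)[0]
```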
In the third step, once the DS is generated, dimension reduction is achieved by employing the all-reducts concept of Fuzzy Rough Set Theory, using the FS.all.reducts.computation() function of the R RoughSets package [38]. This function gives all possible reducts of the DS, out of which one reduct is selected. Further, the number of reducts depends on the precision value (α); we also analyzed the impact of the precision value on the number of reducts, as shown in TABLE VIII and in Figs. 3 and 4. As the precision value changes, the number of attributes per reduct decreases but the total number of reducts increases. For the experiment, we fixed α = 0.15 as the default value and obtained four reducts, as shown in TABLE V. Among the four reducts, the best reduct is selected using the Best Reduct Algorithm (Algo. 3). Here, all four reducts consist of seven dynamic attributes, so any one of them can be selected. Based on the selected reduct-1, the Reduced Decision System (RDS) is generated, as shown in TABLE VI. During best reduct selection, network-layer attributes may not be present in the reduct, since the user request does not contain network attributes; the apparent reason is that the user has no control over network-layer QoS, which depends on network traffic. Hence, we add the network-layer QoS attributes to the RDS if they are not already part of it. A critical observation here is that the primary IS (TABLE II) has 17 different quality attributes, while after dimension reduction based on the selected best reduct plus the network QoS attributes (added if not in the RDS) we have only 11 attributes. The information reduction achieved is therefore 35.29%. At present, more than 500 providers offer more than a thousand different services (these statistics are based on the number of providers registered with Cloud Service Market [39] and Cloud Harmony [27]). So if the IS is vast (thousands of providers and a large number of attributes), dimension reduction will help considerably in ranking the providers.
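The paper computes all reducts with the fuzzy rough set routines of the R RoughSets package. As a rough illustration of what a reduct is, the brute-force sketch below finds all minimal attribute subsets of a small crisp, consistent decision table that preserve the decision; it is the crisp special case only and does not implement the fuzzy rough approximations or the precision parameter α.

```python
from itertools import combinations

def partition(rows, attrs):
    """Group object indices by their values on the given attributes
    (the indiscernibility classes)."""
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return groups

def preserves_decision(rows, decision, attrs):
    """True if no indiscernibility class under attrs mixes decisions
    (assumes a consistent decision table)."""
    return all(len({decision[i] for i in grp}) == 1
               for grp in partition(rows, attrs).values())

def all_reducts(rows, decision):
    """All minimal attribute subsets preserving the decision (crisp reducts)."""
    n_attrs = len(rows[0])
    reducts = []
    for size in range(1, n_attrs + 1):
        for attrs in combinations(range(n_attrs), size):
            if any(set(r) <= set(attrs) for r in reducts):
                continue  # a subset is already a reduct; skip supersets
            if preserves_decision(rows, decision, attrs):
                reducts.append(attrs)
    return reducts
```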
In the fourth step, once the RDS is available, two-level weight assignment to the attributes is performed (as explained earlier in Sec. IV-B1). After weight assignment, the Weighted Euclidean Distance (Score) of each provider is computed using the User QoS Request (TABLE IV) and the RDS (TABLE VI). Based on the Score, the providers are sorted in ascending order, where a smaller score represents a better service provider with respect to the user QoS request (TABLE VII). Hence, the ranking of the providers can be determined from the Scores (4.989, 5.404, 6.843, 7.372, 7.387, 7.916, 8.403, 8.450, 8.668, 8.814). The ranked list is as follows: Amazon EC2 > Rackspace > Microsoft Azure > Google Compute Engine > Digital Ocean > Vultr Cloud > Century Link > Linode > IBM SoftLayer > Storm on Demand. Finally, based on the user request, Amazon EC2 offers the best service. In the next step, the ranked list is sent to the user for service selection. The user then selects a provider and returns the selection response to the system, and the system communicates with the selected provider for resource reservation and service execution. During service execution, we monitor the execution based on the SLA to improve accuracy and use the monitoring data in designing the IS for the next ranking process (Fig. 2).
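The scoring in this step reduces to a weighted Euclidean distance between each provider's attribute vector and the user's requested values, sorted ascending. A minimal sketch follows; the normalization and the two-level weight combination of TABLE VI are omitted, and all names are illustrative.

```python
import numpy as np

def rank_providers(user_req, weights, providers):
    """Rank providers by weighted Euclidean distance to the user's
    requested QoS values; a smaller score means a better match."""
    u = np.asarray(user_req, dtype=float)
    w = np.asarray(weights, dtype=float)
    scores = {name: float(np.sqrt(np.sum(w * (np.asarray(v, float) - u) ** 2)))
              for name, v in providers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])  # ascending order
```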
Two different experiments have been performed to compare the performance of the proposed technique with the existing fuzzy rough set-based technique (FRSCB [3]). To simulate a compute cloud service infrastructure (IaaS), we used CloudSim [17]. In the first test, the service providers (CSPs) were fixed at 100, and from 1 to 5000 user requests were submitted for ranking. In the second experiment, the user requests were fixed at 100, and the number of service providers varied from 10 to 5000. For the simulation, random QoS values were generated to design the IS based on the domain of each QoS attribute; User QoS Request attribute values and weights were assigned randomly. Furthermore, using the information available in the IS, providers were created, where each provider consists of a time-shared Virtual Machine Scheduler and 50 computing hosts. During simulation, the execution time for each task was selected randomly from 0.1 to 1.0 ms. Finally, based on this experimental setup, the response time (sec.) of the proposed and FRSCB ranking techniques was recorded. The experimental results (Figs. 5, 6) show that the proposed method outperforms the FRSCB ranking technique. The results show that our proposed architecture is scalable both when the number of user QoS requests increases and when the number of providers increases.
VI. CONCLUSIONS AND FUTURE WORK
The vast diversity of available cloud services leads, from the user's point of view, to many challenges in discovering and managing desired services. As service discovery and management involve various operational aspects, it is desirable to have a cloud service broker that can perform this task on behalf of the user. In this paper, we have presented an efficient service provider evaluation system using the Fuzzy Rough Set technique and performed a case study on IaaS service provider ranking. Our proposed architecture not only ranks the services but also monitors their execution. The significant contributions of the proposed brokerage system can be summarized as follows: 1) it evaluates a number of service providers based on user QoS requirements and offers the user an opportunity to select the best service from a ranked list; 2) it prioritizes the user request by incorporating user-assigned weights and gives relative priority to non-user-requested QoS attributes through system-assigned weights; 3) its primary focus is on dimension reduction, which minimizes the searching time and improves the efficiency of the ranking procedure; 4) it uses the Weighted Euclidean Distance to the ideal (requested) values, which improves the representativeness of the method; 5) finally, once the user makes a selection, it monitors the service execution to obtain historical data and the actual performance of the services, improving the accuracy of the next service provider ranking process. The proposed approach can deal with hybrid information systems and is also scalable and efficient. This technique helps new users and brokerage-based organizations to deal directly with a fuzzy information system using their rough QoS requirements for service provider ranking and selection. In the future, we plan to develop an online model that dynamically fetches all QoS attributes for service provider ranking and selection.
REFERENCES
[1] S. K. Garg, S. Versteeg, and R. Buyya, “A framework for ranking
of cloud computing services,” Future Generation Computer Systems,
vol. 29, no. 4, pp. 1012–1023, 2013.
[2] D. Rane and A. Srivastava, “Cloud brokering architecture for dynamic
placement of virtual machines,” in Cloud Computing (CLOUD), 2015
IEEE 8th International Conference on, pp. 661–668, IEEE, 2015.
[3] P. S. Anjana, R. Wankar, and C. R. Rao, “Design of a cloud brokerage
architecture using fuzzy rough set technique,” in Multi-disciplinary
Trends in Artificial Intelligence, pp. 54–68, 2017.
[4] “Cloud service broker,” Accessed Jan 2018. “https://www.techopedia.
com/definition/26518/cloud-broker”.
[5] M. Guzek, A. Gniewek, P. Bouvry, J. Musial, and J. Blazewicz, “Cloud
brokering: Current practices and upcoming challenges,” IEEE Cloud
Computing, vol. 2, no. 2, pp. 40–47, 2015.
[6] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud
computing and emerging it platforms: Vision, hype, and reality for
delivering computing as the 5th utility,” Future Generation computer
systems, vol. 25, no. 6, pp. 599–616, 2009.
[7] C. Qu and R. Buyya, “A cloud trust evaluation system using hierarchical fuzzy inference system for service selection,” in Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 850–857, IEEE, 2014.
[8] L. Sun, H. Dong, F. K. Hussain, O. K. Hussain, and E. Chang, “Cloud
service selection: State-of-the-art and future research directions,” Journal
of Network and Computer Applications, vol. 45, pp. 134–150, 2014.
[9] M. Godse and S. Mulik, “An approach for selecting software-as-a-service (SaaS) product,” in Cloud Computing, 2009. CLOUD’09. IEEE International Conference on, pp. 155–158, IEEE, 2009.
[10] “Cloud service measurement index consortium (csmic), smi framework,”
Accessed Jan 2018. “http://csmic.org”.
[11] M. Alhamad, T. Dillon, and E. Chang, “A trust-evaluation metric for
cloud applications,” International Journal of Machine Learning and
Computing, vol. 1, no. 4, p. 416, 2011.
[12] Z. ur Rehman, O. K. Hussain, and F. K. Hussain, “IaaS cloud selection using MCDM methods,” in e-Business Engineering (ICEBE), 2012 IEEE Ninth International Conference on, pp. 246–251, IEEE, 2012.
[13] P. Ganghishetti and R. Wankar, “Quality of service design in clouds,”
CSI Communications, vol. 35, no. 2, pp. 12–15, 2011.
[14] P. Ganghishetti, R. Wankar, R. M. Almuttairi, and C. R. Rao, “Rough
set based quality of service design for service provisioning in clouds,”
in Rough Sets and Knowledge Technology, pp. 268–273, Springer, 2011.
[15] I. Patiniotakis, Y. Verginadis, and G. Mentzas, “Pulsar: preference-based
cloud service selection for cloud service brokers,” Journal of Internet
Services and Applications, vol. 6, no. 1, p. 26, 2015.
[16] L. Aruna and M. Aramudhan, “Framework for ranking service providers
of federated cloud architecture using fuzzy sets,” International Journal
of Technology, vol. 7, no. 4, pp. 643–653, 2016.
[17] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya,
“Cloudsim: a toolkit for modeling and simulation of cloud computing
environments and evaluation of resource provisioning algorithms,” Soft-
ware: Practice and experience, vol. 41, no. 1, pp. 23–50, 2011.
[18] L. A. Zadeh, “Fuzzy sets,” Information and control, vol. 8, no. 3,
pp. 338–353, 1965.
[19] Z. Pawlak and A. Skowron, “Rudiments of rough sets,” Information
sciences, vol. 177, no. 1, pp. 3–27, 2007.
[20] R. Jensen and Q. Shen, “Rough set-based feature selection,” Rough
Computing: Theories, Technologies, p. 70, 2008.
[21] D. Chen, L. Zhang, S. Zhao, Q. Hu, and P. Zhu, “A novel algorithm
for finding reducts with fuzzy rough sets,” IEEE Transactions on Fuzzy
Systems, vol. 20, no. 2, pp. 385–389, 2012.
[22] “Package roughsets,” Accessed Jan 2018. “https://cran.r-project.org/
web/packages/RoughSets/RoughSets.pdf”.
[23] A. K. Jain, “Data clustering: 50 years beyond k-means,” Pattern recog-
nition letters, vol. 31, no. 8, pp. 651–666, 2010.
[24] R. Tibshirani, G. Walther, and T. Hastie, “Estimating the number of
clusters in a data set via the gap statistic,” Journal of the Royal Statistical
Society: Series B (Statistical Methodology), vol. 63, no. 2, pp. 411–423,
2001.
[25] B. Zhao and Y.-K. Tung, “Determination of optimal unit hydrographs
by linear programming,” Water resources management, vol. 8, no. 2,
pp. 101–119, 1994.
[26] T. L. Saaty, Theory and applications of the analytic network process:
decision making with benefits, opportunities, costs, and risks. RWS
publications, 2005.
[27] “Cloud harmony,” Accessed Jan 2018. “http://cloudharmony.com”.
[28] G. Da Cunha Rodrigues, R. N. Calheiros, V. T. Guimaraes, G. L. d.
Santos, M. B. De Carvalho, L. Z. Granville, L. M. R. Tarouco, and
R. Buyya, “Monitoring of cloud computing environments: concepts,
solutions, trends, and future directions,” in Proceedings of the 31st
Annual ACM Symposium on Applied Computing, pp. 378–383, ACM,
2016.
[29] K. Alhamazani, R. Ranjan, K. Mitra, F. Rabhi, P. P. Jayaraman, S. U.
Khan, A. Guabtni, and V. Bhatnagar, “An overview of the commercial
cloud monitoring tools: research dimensions, design issues, and state-
of-the-art,” Computing, vol. 97, no. 4, pp. 357–377, 2015.
[30] H. J. Syed, A. Gani, R. W. Ahmad, M. K. Khan, and A. I. A. Ahmed,
“Cloud monitoring: A review, taxonomy, and open research issues,”
Journal of Network and Computer Applications, 2017.
[31] “Private cloud monitoring systems (pcmons),” Accessed Jan 2018.
“https://github.com/pedrovitti/pcmons”.
[32] “R development environment,” Accessed Jan 2018. “https://www.
rstudio.com/”.
[33] R. C. Team, “R language definition,” Vienna, Austria: R foundation for
statistical computing, 2000.
[34] “Cran-package fgui: Gui interface,” Accessed Jan 2018. “https://cran.
r-project.org/web/packages/fgui/index.html”.
[35] R. N. Gould and C. N. Ryan, Introductory statistics: Exploring the world
through data. Pearson, 2015.
[36] D. Catteddu, G. Hogben, et al., “Cloud computing information assur-
ance framework,” European Network and Information Security Agency
(ENISA), 2009.
[37] “Cloud security alliance (csa): Cloud control matrix (ccm),”
Accessed Jan 2018. “https://cloudsecurityalliance.org/group/
cloud-controls- matrix/”.
[38] “Cran-package roughsets,” Online; Accessed July 2017. “https://CRAN.
R-project.org/package=RoughSets”.
[39] “Cloud service market: A comprehensive overview of cloud computing
services,” Accessed Jan 2018. “http://www.cloudservicemarket.info”.
TABLE II: Information System (IS). Rows: attributes of the IS; columns: objects (CSPs) of the IS. Provider abbreviations: GCE = Google Compute Engine, Storm = Storm on Demand, CLink = Century Link, EC2 = Amazon EC2, Vultr = Vultr Cloud, IBM = IBM SoftLayer, DO = Digital Ocean, Azure = Microsoft Azure, RSpace = Rackspace.

| QoS Attribute | Unit | GCE | Storm | CLink | EC2 | Vultr | IBM | Linode | DO | Azure | RSpace |
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 |
| Agility (Capacity): Number of vCPUs (4-core each) | count | 16 | 8 | 1 | 8 | 6 | 4 | 3 | 8 | 8 | 8 |
| Agility: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Agility: Disk | TB | 3 | 0.89 | 2 | 1 | 2 | 1 | 0.76 | 2 | 0.195 | 1 |
| Agility: Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 |
| Cost: vCPU | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 |
| Cost: Data Transfer Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 |
| Cost: Data Transfer Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 |
| Cost: Storage | $/TB-m | 40.96 | 122.8 | 40.96 | 21.5 | 102 | 36.86 | 80.86 | 40.96 | 40.96 | 36.86 |
| Assurance: Support | Levels (1-10) | 8 | 4 | 7 | 10 | 5 | 7 | 3 | 10 | 8 | 2 |
| Assurance: Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 |
| Assurance: Security | Levels (1-10) | 9 | 8 | 6 | 10 | 8 | 10 | 6 | 8 | 10 | 2 |
| Satisfaction: Feedback | Levels (1-10) | 9 | 7 | 6 | 10 | 6 | 9 | 6 | 8 | 9 | 7 |
| Performance: Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 |
| Performance: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Network Layer QoS: Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 |
| Network Layer QoS: Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 |
| Network Layer QoS: Throughput | Mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 |

QoS Type annotations in the original table: Accountability — dynamic, categorical; Agility — numerical; Cost — static; Support — categorical; Availability — dynamic, numerical; Security — static, categorical; Feedback — dynamic; Performance — numerical.
TABLE III: Decision System (DS). Columns are the CSPs in the same order as TABLE II; the clustering labels (decision attribute) are kept in the last row.

| QoS Attribute | Unit | GCE | Storm | CLink | EC2 | Vultr | IBM | Linode | DO | Azure | RSpace |
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 |
| Agility (Capacity): Number of vCPUs (4-core each) | count | 16 | 8 | 1 | 8 | 6 | 4 | 3 | 8 | 8 | 8 |
| Agility: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Agility: Disk | TB | 3 | 0.89 | 2 | 1 | 2 | 1 | 0.76 | 2 | 0.195 | 1 |
| Agility: Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 |
| Cost: vCPU | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 |
| Cost: Data Transfer Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 |
| Cost: Data Transfer Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 |
| Cost: Storage | $/TB-m | 40.96 | 122.8 | 40.96 | 21.5 | 102 | 36.86 | 80.86 | 40.96 | 40.96 | 36.86 |
| Assurance: Support | Levels (1-10) | 8 | 4 | 7 | 10 | 5 | 7 | 3 | 10 | 8 | 2 |
| Assurance: Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 |
| Assurance: Security | Levels (1-10) | 9 | 8 | 6 | 10 | 8 | 10 | 6 | 8 | 10 | 2 |
| Satisfaction: Feedback | Levels (1-10) | 9 | 7 | 6 | 10 | 6 | 9 | 6 | 8 | 9 | 7 |
| Performance: Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 |
| Performance: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 |
| Network Layer QoS: Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 |
| Network Layer QoS: Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 |
| Network Layer QoS: Throughput | Mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 |
| Decision Attribute | - | 3 | 1 | 3 | 2 | 1 | 3 | 1 | 2 | 2 | 2 |
TABLE IV: Definition Document: User Request and Weights. Consumer weights within each first-level category sum to 1; Performance and Network Layer attributes are left unspecified by the user.

| QoS Attribute | Unit | Consumer QoS Request | Consumer QoS Weight |
| Accountability | Levels (1-10) | 4 | 1.0 |
| Agility (Capacity): Number of vCPUs (4-core each) | count | 4 | 0.4 |
| Agility: vCPU Speed | GHz | 3.6 | 0.2 |
| Agility: Disk | TB | 0.5 | 0.3 |
| Agility: Memory | GB | 16 | 0.1 |
| Cost: vCPU | $/h | 0.54 | 0.6 |
| Cost: Data Transfer Bandwidth In | $/TB-m | 10 | 0.1 |
| Cost: Data Transfer Bandwidth Out | $/TB-m | 51 | 0.1 |
| Cost: Storage | $/TB-m | 50 | 0.2 |
| Assurance: Support | Levels (1-10) | 8 | 0.3 |
| Assurance: Availability | % | 99.9 | 0.7 |
| Security | Levels (1-10) | 10 | 1.0 |
| Satisfaction: Feedback | Levels (1-10) | 9 | 1.0 |
| Performance: Response Time, vCPU Speed | - | - | - |
| Network Layer QoS: Down Time, Latency, Throughput | - | - | - |
Fig. 3: Precision Value vs Number of Reducts
TABLE V: All Possible Reducts of the Decision System

| QoS Attributes | Reduct 1 | Reduct 2 | Reduct 3 | Reduct 4 |
| | Accountability Level | vCPU | Accountability Level | vCPU |
| | vCPU Speed | vCPU Speed | vCPU Speed | vCPU Speed |
| | Memory | Memory | Memory | Memory |
| | vCPU Cost | vCPU Cost | vCPU Cost | vCPU Cost |
| | In-Bound Cost | In-Bound Cost | Security Level | Security Level |
| | Out-Bound Cost | Out-Bound Cost | Out-Bound Cost | Out-Bound Cost |
| | Availability | Availability | Availability | Availability |
| | Response Time | Response Time | Response Time | Response Time |
| | Latency | Latency | Latency | Latency |
| | Throughput | Throughput | Throughput | Throughput |

Color legend in the original: green = dynamic attributes; red = static attributes; blue = network QoS attributes (dynamic).
Fig. 4: Precision Value vs Static and Dynamic Attributes in
Maximum and Minimum Size QoS Reduct
TABLE VI: Reduced Decision System (RDS) with QoS Weights. Provider columns in the same order as TABLE II. System (Level-1) weights: 0.095 × 7 = 0.67 for the reduct attributes and 0.082 × 4 = 0.33 for the performance/network attributes.

| QoS Attribute | Unit | GCE | Storm | CLink | EC2 | Vultr | IBM | Linode | DO | Azure | RSpace | System Weight (Level-1) | Consumer Weight (Level-2) |
| Accountability | Levels (1-10) | 8 | 4 | 4 | 9 | 3 | 7 | 1 | 6 | 8 | 2 | 0.095 | 1.0 |
| Agility: vCPU Speed | GHz | 2.6 | 2.7 | 3.6 | 3.6 | 3.8 | 3.4 | 2.3 | 2.2 | 3.5 | 3.3 | 0.095 | 0.2 |
| Agility: Memory | GB | 14.4 | 16 | 16 | 15 | 24 | 32 | 16 | 16 | 32 | 15 | 0.095 | 0.1 |
| Cost: vCPU | $/h | 0.56 | 0.48 | 0.56 | 0.39 | 0.44 | 0.69 | 0.48 | 0.23 | 0.38 | 0.51 | 0.095 | 0.6 |
| Cost: Bandwidth In | $/TB-m | 0 | 8 | 10 | 0 | 7 | 8 | 10 | 9 | 18.43 | 15.2 | 0.095 | 0.1 |
| Cost: Bandwidth Out | $/TB-m | 70.6 | 40 | 51.2 | 51.2 | 22.86 | 92.16 | 51.2 | 45.72 | 18.43 | 40 | 0.095 | 0.1 |
| Assurance: Availability | % | 99.99 | 100 | 99.97 | 99.99 | 99.89 | 99.97 | 100 | 99.99 | 100 | 99.95 | 0.095 | 0.1 |
| Performance: Response Time | sec | 83.5 | 90 | 97 | 52 | 90 | 100 | 97 | 85 | 76 | 57 | 0.082 | - |
| Network Layer QoS: Down Time | min | 1.02 | 0 | 1.98 | 0.51 | 9.05 | 7.5 | 0 | 1.53 | 2.83 | 2.83 | 0.082 | - |
| Network Layer QoS: Latency | ms | 31 | 57 | 31 | 29 | 28 | 29 | 28 | 28 | 32 | 32 | 0.082 | - |
| Network Layer QoS: Throughput | Mb/s | 20.24 | 16.99 | 24.99 | 16.23 | 10.11 | 16.23 | 8.12 | 24.67 | 23.11 | 23.67 | 0.082 | - |
| Decision Attribute | - | 3 | 1 | 3 | 2 | 1 | 3 | 1 | 2 | 2 | 2 | - | - |
TABLE VII: Normalized Weighted Reduced Decision System and Weighted Euclidean Distance. Columns are ordered by rank.

| QoS Attribute | Unit | EC2 | RSpace | Azure | GCE | DO | Vultr | CLink | Linode | IBM | Storm |
| Accountability | Levels (1-10) | 0.861 | 0.191 | 0.766 | 0.766 | 0.574 | 0.287 | 0.383 | 0.096 | 0.670 | 0.383 |
| Agility: vCPU Speed | GHz | 0.069 | 0.063 | 0.067 | 0.050 | 0.042 | 0.073 | 0.069 | 0.044 | 0.065 | 0.052 |
| Agility: Memory | GB | 0.431 | 0.431 | 0.919 | 0.413 | 0.459 | 0.689 | 0.459 | 0.459 | 0.919 | 0.459 |
| Cost: vCPU | $/h | 0.022 | 0.029 | 0.022 | 0.032 | 0.013 | 0.025 | 0.032 | 0.028 | 0.040 | 0.028 |
| Cost: Bandwidth In | $/TB-m | 0.000 | 0.145 | 0.176 | 0.000 | 0.086 | 0.067 | 0.096 | 0.096 | 0.077 | 0.077 |
| Cost: Bandwidth Out | $/TB-m | 0.490 | 0.383 | 0.176 | 0.676 | 0.438 | 0.219 | 0.490 | 0.490 | 0.882 | 0.383 |
| Assurance: Availability | % | 6.699 | 6.697 | 6.700 | 6.699 | 6.699 | 6.693 | 6.698 | 6.700 | 6.698 | 6.700 |
| Performance: Response Time | sec | 4.290 | 4.703 | 6.270 | 6.889 | 7.013 | 7.425 | 8.003 | 8.003 | 8.250 | 7.425 |
| Network Layer QoS: Down Time | min | 0.042 | 0.233 | 0.233 | 0.084 | 0.126 | 0.747 | 0.163 | 0.000 | 0.619 | 0.000 |
| Network Layer QoS: Latency | ms | 2.393 | 2.640 | 2.640 | 2.558 | 2.310 | 2.310 | 2.558 | 2.310 | 2.393 | 4.703 |
| Network Layer QoS: Throughput | Mb/s | 1.339 | 1.953 | 1.907 | 1.670 | 2.035 | 0.834 | 2.062 | 0.670 | 1.339 | 1.402 |
| Weighted Euclidean Distance (Score) | - | 4.989 | 5.404 | 6.843 | 7.372 | 7.387 | 7.916 | 8.403 | 8.450 | 8.668 | 8.814 |
| Service Provider Rank | - | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
TABLE VIII: Static & Dynamic Attributes in the Maximum- and Minimum-Size QoS Reducts as the Precision Value Changes (0 ≤ α ≤ 1)

| Precision Value α | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | 0.55 | 0.6 | 0.65 | 0.7 | 0.75 | 0.8 | 0.85 | 0.9 | 0.95 | 1 |
| Number of Reducts | 2 | 2 | 4 | 8 | 20 | 14 | 84 | 89 | 78 | 289 | 365 | 405 | 301 | 291 | 226 | 240 | 203 | 172 | 106 | 1 |
| Max.-size reduct: Dynamic | 6 | 6 | 8 | 7 | 7 | 6 | 5 | 5 | 4 | 6 | 4 | 5 | 5 | 4 | 3 | 4 | 4 | 3 | 2 | 12 |
| Max.-size reduct: Static | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 3 | 5 | 2 | 3 | 2 | 1 | 2 | 2 | 1 | 0 | 1 | 1 | 5 |
| Max.-size reduct: Total | 9 | 9 | 11 | 10 | 11 | 10 | 9 | 8 | 9 | 8 | 7 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 3 | 17 |
| Min.-size reduct: Dynamic | 6 | 6 | 8 | 6 | 6 | 5 | 4 | 4 | 3 | 2 | 2 | 2 | 1 | 2 | 1 | 1 | 0 | 1 | 1 | 12 |
| Min.-size reduct: Static | 3 | 3 | 3 | 4 | 3 | 2 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 5 |
| Min.-size reduct: Total | 9 | 9 | 11 | 10 | 9 | 7 | 6 | 6 | 5 | 5 | 4 | 4 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 17 |
Fig. 5: Number of Requests vs System Response Time
Fig. 6: Number of Service Providers vs System Response Time