Journal of Ambient Intelligence and Humanized Computing (2022) 13:4719–4738
https://doi.org/10.1007/s12652-021-03187-9
ORIGINAL RESEARCH
Cooperative agents-based approach for workflow scheduling on fog-cloud computing
MarwaMokni1,2 · SoniaYassa3· JalelEddineHajlaoui1· RachidChelouah2· MohamedNazihOmri1
Received: 31 August 2020 / Accepted: 25 March 2021 / Published online: 1 April 2021
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021
Abstract
Connected objects in the Internet of Things (IoT) domain are widespread everywhere. They interact with each other and cooperate with their neighbors to achieve a common goal. Most of these objects generate a huge amount of data, often requiring processing under strict time constraints. Motivated by the question of optimizing the execution time of these IoT tasks, we remain aware of the sensitivity to latency and the volume of data generated. In this article, we propose a hybrid Cloud-Fog multi-agent approach to schedule a set of dependent IoT tasks modeled as a workflow. The major advantage of our approach is that it models IoT workflow planning as a multi-objective optimization problem in order to build a compromise planning solution in terms of response time, cost and makespan. In addition to taking data communications between workflow tasks into account during the planning process, our approach has two other advantages: (1) maximizing the use of Fog computing in order to minimize response time, and (2) using elastic Cloud computing resources at minimum cost. The implementation of MAS-GA (Multi-Agent System based Genetic Algorithm) proposed in this context, the series of experiments carried out on different corpora, and the analysis of the results obtained confirm the feasibility of our approach and its performance in terms of cost, with an average gain of 21.38% compared to Fog and 25.49% compared to Cloud; in terms of makespan, with a gain of 14.13% compared to Fog and a slight increase of 5.24% compared to Cloud; and in terms of response time, with an average gain of 46.66% compared to Cloud and a slight increase of 6.66% compared to Fog, while strengthening the collaboration between Fog computing and Cloud computing.
Keywords Workflow · Internet of things · Scheduling · Optimization · QoS · Cloud computing · Fog computing
1 Introduction
The world is witnessing the revolution of the new concept of the Internet of Things (IoT) (Feki et al. 2013). The basic idea is a pervasive presence of various things or objects (such as sensors, actuators, mobile phones, etc.) which are able to interact with each other and cooperate with their neighbors in order to achieve a common aim. Most of these objects generate a huge amount of data, often requiring processing within strict time constraints. Thus, dependent IoT objects can be processed with dependent tasks, expressed by a real-time workflow (Stavrinides and Karatza 2019). The workflow concept deals mainly with complex tasks. The global idea is to concatenate a set of tasks in order to achieve a complex treatment. Therefore, it is desirable to assign tasks to multiple resources in order to minimize the execution time. Indeed, Cloud computing is often presented as a suitable solution to execute workflows with minimum cost and execution time (Hajlaoui et al. 2017b). This paradigm is
4720 M.Mokni et al.
1 3
able to control the explosion of data by exploiting the elastic resources available in its data centers (Hajlaoui et al. 2017a; Helali and Omri 2021), offering a large computing and storage capacity with a great capability to adjust services according to the needs of applications (Bittencourt et al. 2018). However, when processing a workflow as a set of IoT tasks, Cloud computing may not be sufficient to satisfy the low-latency needs of IoT applications due to its distance and centralized nature (Pham and Huh 2016). In order to overcome these limitations, a Cloud extension called Fog computing was proposed by Cisco in 2012 (Bonomi et al. 2012). This new concept aims to be closer to end users, and to support geographic distribution and latency sensitivity. Specifically, the Fog computing paradigm appeared when a large amount of data was generated by latency-sensitive applications. To sum up, the challenge is to choose the appropriate environment (Cloud computing or Fog computing) to map workflow tasks based on IoT applications according to their characteristics. The workflow scheduling problem remains an NP-complete problem (Yassa 2014), and several works in the literature have been proposed to solve it with different methods such as heuristics [Genetic Algorithm (Yassa et al. 2013b), Particle Swarm Optimization (Sharma and Rashid 2020), Ant Colony Optimization (Wei 2020; Alaei et al. 2020), etc.]. Generally, most scheduling works do not take into account the huge amount of data generated by workflow tasks and the high dynamicity of Fog and Cloud computing environments (Mutlag et al. 2020; Hajlaoui et al. 2017a). Thus, the goal of this work is to efficiently map IoT workflow tasks between Fog computing and Cloud computing in order to optimize the global QoS metrics and minimize the latency. Our solution is based on the Multi-Agent System (MAS) model, motivated by the dynamic nature of the Fog-Cloud computing environment. The main characteristics of agents are autonomy, proactivity, communication, cooperation, negotiation and learning. Therefore, an agent is able to communicate, cooperate and negotiate with other agents based on its own knowledge, with the aim of providing an appropriate scheduling of IoT tasks.
Our main contributions can be summarized as follows:

- We benefit from the multi-agent system design in order to manage and discover the services.
- We propose a multi-objective workflow scheduling optimization that takes the execution cost and time into account and associates with each QoS metric a weight based on user preferences.
- We develop a genetic algorithm to solve the workflow scheduling problem.
The rest of this paper is organized as follows. Section 2 presents the related works. In Sect. 3 we detail the problem formulation. Section 4 is devoted to the details of our proposed approach and evaluates its complexity. Then, the experimental study and the analysis of the obtained results are given in Sect. 5. Section 6 discusses the results, and Sect. 7 concludes this work and gives some prospects.
2 Related works
Workflow scheduling has always been one of the major challenging problems, and with the advent of dynamic IoT applications it becomes even more challenging. Many researchers have been motivated by the successful combination of Cloud and Fog computing to look for the most suitable task mapping model. In Pham and Huh (2016), the authors propose a task scheduling algorithm in a cooperative environment of Fog and Cloud computing. The main objective of this proposal is to find a suitable balance between the makespan and the monetary cost of Cloud resources. In the same context, an integrated Cloud-Fog based model for task resource allocation was proposed in Rasheed et al. (2018). The authors use the Min-Max algorithm in order to balance the energy distributions among end users, in all regions of the world. The proposed approach aims to minimize the response time and cost using Smart Grids that help users to meet their energy needs. In Stavrinides and Karatza (2019), a hybrid Fog and Cloud heuristic was proposed that aims to schedule multiple real-time IoT workflows. The main idea of this scheduling approach is to map tasks with low communication needs on Cloud computing and tasks with low processing time requirements on Fog computing. The authors take into account the communication costs incurred by transferring data from the IoT layer to the Fog layer, but they ignore the communication costs incurred by transferring data to the Cloud layer. A Fog-Cloud scheduling based approach was proposed in Pham et al. (2017), in order to build an optimal trade-off between the execution time of the application and the cost of using Cloud resources. This static approach is not suitable for the dynamic nature of IoT applications. Another Fog-Cloud scheduling based approach was proposed in Binh et al. (2018). This approach is based on an evolutionary algorithm to deal with Bag-of-Tasks applications and build an optimal trade-off between execution time and cost. Wang et al. (2019) model multi-objective workflow scheduling with a multi-agent reinforcement learning setting to guide the scheduling of multiple workflows over the infrastructure-as-a-service of Cloud computing. The proposed model is capable of seeking a correlated equilibrium between the makespan and cost criteria without prior expert knowledge and converges to the correlated equilibrium policy in a dynamic real-time environment. The recent work in Tychalas and Karatza (2020) aims to highlight the efficiency of Fog computing in reducing costs (e.g. monthly fees for VMs) by utilizing all available resources. The authors demonstrate the viability of the proposed approach and how one could take advantage of the existing computational power and expand it with little effort by using other available resources, such as smartphones. Additionally, the research in Bhatia et al. (2020) proposes a quantumized approach for scheduling heterogeneous tasks in Fog computing-based applications. The global objective is to use Fog computing nodes in an optimal manner for the execution of a task by minimizing the overall execution delay. With the intention of investigating the potential of Fog for scheduling extreme data workflows with strict response time requirements, the research in De Maio and Kimovski (2020) proposes a novel Pareto-based approach for task offloading in Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability and financial cost. An IoT task scheduling problem on Fog-Cloud computing based on the multi-agent system model was addressed in Fellir et al. (2020). Agents schedule important tasks first, based on task priority and its dependencies on other tasks. The multi-agent system in this work is, however, not detailed or highlighted. A multi-agent system has also been developed in Mutlag et al. (2020) for the management of healthcare critical tasks on Fog computing. In this work, personal agents at the edge layer receive tasks from IoT devices and decide to schedule them on a Fog node or on the Cloud layer based on task priority. The authors develop individual agents without taking into account the various fundamentals of the multi-agent system, such as communication, negotiation and collaboration between agents. Another scheduling approach in Fog-Cloud environments is proposed in Ali et al. (2020), where the authors develop the DNSGA-II for solving the problem of scheduling tasks by allocating them well over resources. The authors conclude that the makespan generated by the Fog-Cloud system is lower than those of the Cloud system and the Fog system, without specifying the number of tasks assigned to Cloud computing and to Fog computing in order to differentiate clearly between them. The work proposed in Aburukba et al. (2020) aims to demonstrate the effectiveness of combining Fog and Cloud computing for scheduling IoT requests with low latency. The limitation of this work is that the different experiments were designed to compare the latency of Cloud computing with that of Fog computing; the authors did not present results on the effect of combining Fog with Cloud computing on latency. Likewise, the IoT task scheduling approach proposed in Bhatia et al. (2020) is developed on a hybrid Fog-Cloud computing environment with the aim of reducing latency. Therefore, the authors enhance the utilization of Fog resources compared with the Cloud by scheduling tasks according to the Fog clusters. Otherwise, a study proposed in Saeedi et al. (2020) intends to schedule IoT tasks on Fog-Cloud computing environments while reducing the cost. The main idea is to maximize the utilization of Fog resources since they provide a lower transmission cost compared with the Cloud. According to the works cited, we can conclude that making greater use of Fog computing resources than of Cloud computing resources is a promising solution to reduce latency and cost when addressing IoT task scheduling problems.
For a better understanding, Table 1 summarizes the techniques, the features, the QoS metrics used and the objectives of the related works cited above. As a synthesis, we can argue that these works did not take into account the huge amount of data generated by workflow tasks and the high dynamicity of the Fog-Cloud computing environment. Therefore, we propose a cooperative agent-based approach for workflow scheduling on Fog-Cloud computing (MAS-GA), based on a multi-agent system able to control the dynamicity of the Fog-Cloud computing environment, and we use a workflow partitioning technique in order to reduce the workflow complexity. Besides, we develop an evolutionary algorithm to obtain a good solution that optimizes the different QoS metrics.
3 Problem description
Obviously, workflows are proposed to process a large number of tasks, ranging from a few to millions of dependent tasks. Therefore, an efficient workflow scheduling solution should be established in order to satisfy the imposed constraints (precedence constraints, deadline, budget and QoS metrics). Task scheduling is a systematic approach aiming to allocate the appropriate resources to clients' tasks without violating the QoS requirements. Hence, task scheduling problems must be formulated from two sides. On the first side, a resource provider seeks to efficiently allocate resources while maximizing its profit. On the other side, a client seeks to execute a workflow with a minimum cost and execution time, while optimizing all QoS metrics. Formally, let R be a set of m resources $R_j$, $1 < j < m$, and T a set of n tasks $T_i$, $1 < i < n$, where the challenge is to find the best mapping between $R_j$ and $T_i$ while satisfying both the resource provider and the client.
3.1 Workflow model
We model a set of IoT applications as a workflow, where
every IoT application is represented by a workflow task.
Each workflow is defined by a direct acyclic graph
G=(T,A)
. T represents the set of tasks
T=(T1Tn)
,
where n is the overall number of tasks. Each task
Ti
in
T generates an amount of computing work
Wi
which
4722 M.Mokni et al.
1 3
defines the number of instructions and a deadline
Di
that must not be exceeded. A represents the functional
links relating all the tasks. In addition,
defines both
the precedence constraints between
Ti
,
Tj
and the amount
of data that must be transferred. Every arc between
Ti
and
Tj
generate a non-negative communication weight
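As a concrete illustration of this model, the following minimal sketch (in Java, matching the Java-based tooling used later, namely JADE and WorkflowSim) shows one possible in-memory representation of the DAG $G = (T, A)$, with the computing work $W_i$, the deadline $D_i$ and the communication weights $Cw_{ij}$. The class and field names are ours and purely illustrative; they are not taken from the paper's implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative in-memory model of the workflow DAG G = (T, A). */
public class WorkflowModel {

    /** A task T_i with its computing work W_i (instructions) and deadline D_i. */
    static class Task {
        final int id;
        final long computingWork;   // W_i, number of instructions
        final double deadline;      // D_i, must not be exceeded
        final List<Integer> predecessors = new ArrayList<>(); // precedence constraints

        Task(int id, long computingWork, double deadline) {
            this.id = id;
            this.computingWork = computingWork;
            this.deadline = deadline;
        }
    }

    /** Arc (T_i -> T_j) carrying the communication weight Cw_ij (data to transfer). */
    static class Arc {
        final int from, to;
        final double communicationWeight; // Cw_ij

        Arc(int from, int to, double communicationWeight) {
            this.from = from;
            this.to = to;
            this.communicationWeight = communicationWeight;
        }
    }

    final Map<Integer, Task> tasks = new HashMap<>();
    final List<Arc> arcs = new ArrayList<>();

    void addTask(Task t) { tasks.put(t.id, t); }

    /** Adding an arc also records the precedence constraint on the successor. */
    void addArc(int from, int to, double cw) {
        arcs.add(new Arc(from, to, cw));
        tasks.get(to).predecessors.add(from);
    }

    public static void main(String[] args) {
        WorkflowModel g = new WorkflowModel();
        g.addTask(new Task(1, 20_000, 100.0));
        g.addTask(new Task(2, 35_000, 150.0));
        g.addArc(1, 2, 512.0); // T1 -> T2, 512 units of data to transfer
        System.out.println("Tasks: " + g.tasks.size() + ", arcs: " + g.arcs.size());
    }
}
```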
Table 1 Classification of techniques based on environment, features and QoS (MK: makespan, C: cost, L: latency, E: energy, D: deadline, T: execution time, LB: load balancing, R: reliability)

- Stavrinides and Karatza (2019): Environment: Cloud; Technique: EDF heuristic; Type of jobs: real-time workflow; Tool: CloudSim; Features: energy-aware heuristic for the scheduling of real-time workflow applications; QoS: E/L; Objective: minimizing the energy consumption and the SLA violations.
- Wang et al. (2019): Environment: Cloud; Technique: multi-agent reinforcement learning (MARL); Type of jobs: workflow; Tool: own simulation environment that acts like the IaaS Cloud system; Features: multi-objective workflow scheduling; QoS: C/MK; Objective: multi-objective optimization.
- Tychalas and Karatza (2020): Environment: Fog; Technique: Weighted Round Robin; Type of jobs: Bag-of-Tasks jobs; Tool: iFogSim; Features: Bag-of-Tasks job scheduling problem; QoS: C; Objective: reduce total expenses under a Bag-of-Tasks workload model.
- Bhatia et al. (2020): Environment: Fog; Technique: quantum mapping; Type of jobs: IoT; Tool: iFogSim; Features: task scheduling problem for IoT applications; QoS: T; Objective: minimize overall execution delay.
- De Maio and Kimovski (2020): Environment: Fog; Technique: GA; Type of jobs: data scientific workflows; Tool: iFogSim and EdgeCloudSim; Features: workflow scheduling problem; QoS: C/L/R; Objective: multi-objective scheduling.
- Mutlag et al. (2020): Environment: Fog; Technique: MAS; Type of jobs: IoT tasks; Tool: iFogSim; Features: IoT task scheduling; QoS: E/R; Objective: maximizing Fog resource utilisation.
- Pham et al. (2017): Environment: Cloud-Fog; Technique: EST/EFT; Type of jobs: tasks; Tool: CloudSim; Features: task scheduling problem; QoS: MK/C; Objective: best cost-makespan tradeoff.
- Rasheed et al. (2018): Environment: Cloud-Fog; Technique: Max-Min; Type of jobs: tasks; Tool: CloudSim; Features: task resource allocation; QoS: LB; Objective: minimizing response time and cost.
- Pham and Huh (2016): Environment: Cloud-Fog; Technique: specific heuristic; Type of jobs: tasks; Tool: CloudSim; Features: task scheduling; QoS: MK/C/D; Objective: tradeoff between performance and cost savings.
- Binh et al. (2018): Environment: Cloud-Fog; Technique: GA; Type of jobs: IoT applications; Tool: CloudSim; Features: task scheduling problem for IoT applications; QoS: C/T; Objective: shorter scheduling length.
- Fellir et al. (2020): Environment: Cloud-Fog; Technique: MAS; Type of jobs: IoT tasks; Tool: iFogSim; Features: IoT task scheduling problem; QoS: E/C; Objective: better resource utilization.
- Ali et al. (2020): Environment: Cloud-Fog; Technique: NSGA-II; Type of jobs: IoT tasks; Tool: iFogSim; Features: IoT task scheduling problem; QoS: C; Objective: maximizing Fog resource utilization.
- Bhatia et al. (2020): Environment: Cloud-Fog; Technique: GA; Type of jobs: IoT tasks; Tool: iFogSim; Features: IoT task scheduling problem; QoS: L; Objective: maximizing Fog resource utilization.
- Saeedi et al. (2020): Environment: Cloud-Fog; Technique: clustering; Type of jobs: IoT tasks; Tool: iFogSim; Features: contract-based IoT task scheduling problem; QoS: L; Objective: maximizing Fog resource utilization.
- Ismayilov and Topcuoglu (2020): Environment: Cloud-Fog; Technique: GA; Type of jobs: IoT tasks; Tool: iFogSim; Features: IoT task scheduling problem; QoS: C; Objective: maximizing Fog resource utilization.
3.2 Resources model

To execute a workflow, we have a set of resource nodes $N = (N_1, \ldots, N_m)$ representing the data centers in a Cloud-Fog computing environment, where each node is characterized by a resource capacity RC, a utilization rate UR and a geographical area G. Within a node $N_i$, a set of virtual machines $VM = (vm_1, \ldots, vm_k)$ processes a set of tasks. All these resources are supervised by a resource provider $RP = (RP_1, \ldots, RP_h)$. Resources have varied processing capabilities delivered at different prices. We denote $te_i^j$ as the sum of the processing time and the data transmission time, and $c_i^j$ as the sum of the service price and the data transmission cost for processing $T_i$ on service $R_j$. With each resource request, the user should specify the cost constraint (budget) B and the deadline constraint D. The budget-constrained scheduling problem consists of mapping every $T_i$ onto a suitable $R_j$ in order to minimize the cost of the workflow execution and complete it within B. The deadline-constrained scheduling problem is to map every $T_i$ onto a suitable resource $R_j$ in order to minimize the execution time of the workflow and complete it within D.
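The per-task quantities $te_i^j$ and $c_i^j$ can be made concrete with the small sketch below. The VirtualMachine fields and the linear processing/transmission model are assumptions introduced only for illustration; the paper does not prescribe this exact decomposition.

```java
/**
 * Illustrative sketch of the resource model of Sect. 3.2: a VM with a
 * processing capacity and unit price, and the aggregate quantities
 * te_i^j (processing + transmission time) and c_i^j (service price +
 * transmission cost). All names and the linear cost model are
 * assumptions used only for illustration.
 */
public class ResourceModel {

    static class VirtualMachine {
        final int id;
        final double mips;        // processing capacity (instructions per second)
        final double unitPrice;   // U_j, price per unit of time
        final double bandwidth;   // data transferred per unit of time

        VirtualMachine(int id, double mips, double unitPrice, double bandwidth) {
            this.id = id;
            this.mips = mips;
            this.unitPrice = unitPrice;
            this.bandwidth = bandwidth;
        }
    }

    /** te_i^j: processing time of W_i on R_j plus transmission time of the input data. */
    static double executionTime(double computingWork, double inputData, VirtualMachine vm) {
        return computingWork / vm.mips + inputData / vm.bandwidth;
    }

    /** c_i^j: service price for the occupied time plus a transmission cost. */
    static double executionCost(double computingWork, double inputData,
                                VirtualMachine vm, double transferCostPerUnit) {
        double processingTime = computingWork / vm.mips;
        return processingTime * vm.unitPrice + inputData * transferCostPerUnit;
    }

    public static void main(String[] args) {
        VirtualMachine vm = new VirtualMachine(1, 1000.0, 0.75, 100.0);
        System.out.printf("te = %.2f s, c = %.3f $%n",
                executionTime(20_000, 512, vm),
                executionCost(20_000, 512, vm, 0.001));
    }
}
```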
3.3 Quality of service metrics

In the Cloud computing environment, several QoS metrics should be taken into account (Yassa et al. 2013a): (1) the makespan, used to evaluate workflow planning algorithms; (2) the cost, which is the total cost of execution of a workflow; (3) the reliability, which represents the percentage of the workflow execution completed successfully and without any lack of resources (the reliability calculation is inspired by the formula described in Wang et al. (2011)); (4) the availability, which determines to what extent the set of reserved virtual machines is used; (5) the response time, which is the data transfer time via the network plus the demand dwell time in the Fog layer. These metrics are defined respectively by the following Eqs. (1), (2), (3), (4) and (5):

$$Makespan = \max_{T_i \in T} DF(T_i) \quad (1)$$

where $DF(T_i)$ is the date of the end of execution of the task $T_i$.

$$Cost = \sum_{i=0}^{n}\sum_{j=0}^{m} \big(DF(T_i) \times U_j\big) + \sum_{i=1}^{n}\sum_{j=1}^{n} \big(Cw_{ij} \times TRC_{ij}\big) \quad (2)$$

where $U_j$ is the unit price of the $vm_j$ that processes the task $T_i$, $Cw_{ij}$ is the communication weight between $T_i$ and $T_j$, and $TRC_{ij}$ is the cost of communication between the machine to which $T_i$ is mapped and the machine to which $T_j$ is assigned.

$$Reliability = \exp\left(-\sum_{i=1}^{n} DF(T_i)\,\lambda_j\right) \quad (3)$$

where $\lambda_j$ is the failure rate of the virtual machine j that processes the task, an essential property of the resource.

$$Availability = \frac{1}{m} \times \sum_{j=1}^{m}\left(1 - \frac{execution\ time_j}{Makespan}\right) \quad (4)$$

where the execution time is the difference between the end time (the date when $VM_j$ is stopped) and the start time (the date when $VM_j$ is launched), and m is the total number of resources at the resource infrastructure level.

$$Response\ Time = network\ time + total\ process\ time \quad (5)$$

where the network time, also called the data propagation time, is the time taken for a signal to travel from one point (virtual machine, IoT device, etc.) to another through a medium. Its calculation depends on the distance between the two points, as detailed in Sect. 5.1.
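A compact sketch evaluating the five metrics for a toy schedule is given below, following the reconstructed Eqs. (1)-(5). The Assignment record, the per-assignment reading of Eq. (2) (each task contributes $DF(T_i) \times U_j$ only for its own VM) and all input values are assumptions used for illustration.

```java
import java.util.List;

/**
 * Minimal sketch evaluating the QoS metrics of Sect. 3.3 (Eqs. 1-5) for a
 * toy schedule. The Assignment record and the input values are illustrative;
 * only the formulas follow the text.
 */
public class QosMetrics {

    /** One scheduled task: its finish date DF(T_i), the VM unit price U_j and failure rate lambda_j. */
    record Assignment(double finishDate, double unitPrice, double failureRate) {}

    static double makespan(List<Assignment> plan) {                        // Eq. (1)
        return plan.stream().mapToDouble(Assignment::finishDate).max().orElse(0);
    }

    static double cost(List<Assignment> plan, double[][] cw, double[][] trc) { // Eq. (2)
        double c = plan.stream().mapToDouble(a -> a.finishDate() * a.unitPrice()).sum();
        for (int i = 0; i < cw.length; i++)
            for (int j = 0; j < cw.length; j++)
                c += cw[i][j] * trc[i][j];                                 // communication part
        return c;
    }

    static double reliability(List<Assignment> plan) {                     // Eq. (3)
        double s = plan.stream().mapToDouble(a -> a.finishDate() * a.failureRate()).sum();
        return Math.exp(-s);
    }

    static double availability(double[] vmExecTimes, double makespan) {    // Eq. (4)
        double sum = 0;
        for (double t : vmExecTimes) sum += 1.0 - t / makespan;
        return sum / vmExecTimes.length;
    }

    static double responseTime(double networkTime, double totalProcessTime) { // Eq. (5)
        return networkTime + totalProcessTime;
    }

    public static void main(String[] args) {
        List<Assignment> plan = List.of(
                new Assignment(120, 0.75, 1e-5),
                new Assignment(200, 1.02, 1e-5));
        double[][] cw  = {{0, 512}, {0, 0}};
        double[][] trc = {{0, 0.001}, {0, 0}};
        double mk = makespan(plan);
        System.out.printf("makespan=%.1f cost=%.2f reliability=%.5f availability=%.3f response=%.1f%n",
                mk, cost(plan, cw, trc), reliability(plan),
                availability(new double[]{110, 180}, mk), responseTime(12, mk));
    }
}
```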
4 IoT workflow planning approach based on MAS

In this section we present the architecture of the proposed model and the multi-agent model used, we detail the principle of the planning approach based on MAS, and we illustrate the final partitioning graph obtained with the mixed min-cut technique. We end this section by developing a workflow planning strategy based on the genetic algorithm and presenting the complexity of our proposed approach.

4.1 Architecture model

Motivated by the shortcomings identified in the studied works, in this section we propose an architecture model based on the multi-agent system, with the goal of maintaining the interaction between different entities, such as the relationships between IoT objects and with computing infrastructures, while managing any disruptions that may occur during the scheduling process. The proposed architecture is organized hierarchically in three levels, as illustrated in Fig. 1. The first level, called the user level, consists of a set of sensors and user devices which generate a huge amount of data that must be transferred and processed by Fog computing as a workflow generated by the workflow management system.
The Fog computing level consists of r regions $RE = (RE_1, \ldots, RE_r)$. Every region $RE_i = (N^{Fog}_1, \ldots, N^{Fog}_f)$ holds f nodes, where each node $N^{Fog}_j$ has a computing threshold $TCk^{Fog}_j$, which must not be exceeded, a utilization rate $UR^{Fog}_j$ and a computing capacity $RCf^{Fog}_j$. There are k virtual machines $vm^{Fog} = (vm_1, \ldots, vm_k)$ belonging to a Fog node $N^{Fog}_j$, where each $vm_k$ involves a virtual CPU $CPUV_k$, a virtual memory $MV_k$, and a cost per unit of time $U_k$. All Fog nodes belonging to the same region are interconnected with each other and can communicate with all the Fog nodes of the other regions and all the nodes of the Cloud layer $N^{Cloud} = (N^{Cloud}_1, \ldots, N^{Cloud}_c)$. Each $N^{Cloud}_j$ is characterized by a geographical area and a computing capacity $Pn^{Cloud}_j$. Within each Cloud node there is a set of virtual machines $vm^{Cloud} = (vm_1, \ldots, vm_l)$, where every virtual machine $vm_l$ is characterized by a computing capacity $RCc_j$, a cost per unit of time $U_j$, a virtual CPU $CPUV_j$ and a virtual memory $MV_j$.
4.2 Multi agent system model

The general architecture is organized around four main types of agents:

- Manager agent: the Manager Agent (MA) creates an instance of the workflow by instantiating contractual agents and assigning the sub-workflows to them.
- Contractual agent: each contractual agent $CTA_i \in CTA = (CTA_1, \ldots, CTA_p)$ is responsible for executing a sub-workflow by actively requesting the necessary resources while respecting all the required QoS metrics and the constraints imposed by the MA.
- Fog agent: each Fog agent $FA_j \in FA = (FA_1, \ldots, FA_q)$ responds to a request of the $CTA_i$ with a proposal (an offer), produced through an evaluation step of the $CTA_i$ demand containing the resource request for executing a sub-workflow. The execution of each task requires multidimensional resources that must be allocated in a VM hosted on a physical node. In this evaluation step, the objective of the Fog agent is to assign the maximum number of tasks to the Fog resources without exceeding the utilization threshold of each resource, in order to respect the SLA constraints, guarantee maximum exploitation and balancing of resources, and minimize the response time.
- Cloud agent: each Cloud agent $CA_i \in CA = (CA_1, \ldots, CA_w)$ represents the resource provider in the Cloud environment. Through the Cloud's unlimited resources, the role of $CA_i$ is to provide an adequate amount of resources with minimum cost and execution time.
4.3 MAS-based scheduling approach

In order to control the huge amount of data generated by IoT devices and to monitor their traffic with sensors, we adopted a workflow design where each task is used to process an IoT application. Therefore, we relied on a multi-agent system for mapping the workflow tasks to the appropriate resources while satisfying QoS requirements within a reasonable time. At first sight, the global workflow is managed by a first agent (i.e., the Manager Agent). It is in charge of respecting the aforementioned deadline D and budget B. Therefore, a partitioning process is the first mission of the Manager Agent. This first step of our approach is applied with the aim of minimizing data movements and communication costs between tasks, thus reducing the workflow's overall execution time. We adopt a partitioning method called Mixed Min-Cut Graph (Jiang and Wang 2007), which generates a set of small sub-workflows starting from the global workflow graph G, where the sums of the sub-workflow task weights are equal (the method is detailed in Sect. 4.5). As a second step of our approach, a set of Contractual Agents equal to the number of sub-workflows is created. Each of them is responsible for allocating appropriate resources to its sub-workflow. Each Contractual Agent $CTA_j$ has a Knowledge Directory KD (analogous to the Directory Facilitator "DFAgentDescription" defined by the JADE framework (Bellifemine et al. 2000), which provides a directory system that allows agents to find service provider agents by ID) containing the provider agents' names and their positions, as well as an objective to achieve. The $CTA_j$ restrictions are a crucial deadline that must be met and a limited budget. After triggering the resource allocation request $RAR_i$, the contractual agent $CTA_j$ sends the $RAR_i$ to the first Fog agent $FA_d$ in its KD (agents are grouped in the directory by the distance values between $CTA_j$ and all close Fog agents). $FA_d$ manages a set of Fog nodes in the region of $CA_c$, aiming to find an optimal scheduling plan for $RAR_i$ while respecting the QoS metrics indicated in the SLA (Service Level Agreement). $FA_d$ is also capable of cooperating with all Fog agents in its KD in order to find a mapping solution close to the end user's region, thus maximizing the Fog computing resource utilization and minimizing the response time. In case no scheduling proposal can be found within the Fog nodes, $FA_d$ sends the resource demand to the Cloud agent $CA_c$ in its region.

Fig. 1 Solution architecture
4.4 Proposed algorithm

Workflow partitioning: Step 1
This step is based on the Mixed Min-Cut partitioning method, detailed in Sect. 4.5.

Establish a sub-workflow execution contract: Step 2
After the Manager Agent initiates p contractual agents (p is the number of sub-workflows), a workflow execution contract is established. The Manager Agent must specify the deadline of each sub-workflow, the maximum allowable budget and the required QoS metrics. Each agent must not violate the terms of the contract.

Sub-workflow execution: Step 3
A resource allocation request $RAR_i$ is created by the first contractual agent. In the case of parallel sub-workflows, all agents in parallel order can be launched together. $RAR_i$ must specify the deadline, the budget constraint and all sub-workflow characteristics (computing weight, CPU and memory required). This request is sent to the closest Fog agent.

Resource allocation: Step 4
After receiving $RAR_i$, the Fog agent $FA_d$ compares the sub-workflow computing weight $W_i$ with the computing threshold $TCk_f$ of its nodes. If $W_i$ is lower than $TCk_f$, then $FA_d$ creates a proposal that maximizes the utility function F using a genetic algorithm, and the best proposal is sent to $CTA_j$. Else, i.e. if $W_i$ is bigger than $TCk_f$, $FA_d$ restarts the partitioning algorithm on the sub-workflow once, and then assigns a priority value to each new sub-workflow according to the precedence constraints. The genetic algorithm has been chosen for its ability to find a near-optimal solution for NP-hard problems. It is able to perform a global search and explore the search space using its different kinds of crossover. Therefore, it allows solving the mono-objective optimization problem (Yassa et al. 2013a), which aims to minimize the execution time of the overall workflow. We describe, in Sect. 4.6, how we adapt the genetic algorithm to solve our problem.

Agents cooperation: Step 5
The main aim of our approach is to minimize the sub-workflow processing time, i.e. the transfer time, the waiting time and the execution time. Hence, the Fog agent $FA_d$ cooperates with its closest Fog agents in order to maximize the execution of all sub-workflow tasks in the Fog layer. Firstly, $FA_d$ must find a sub-workflow scheduling plan on its own nodes until reaching the computing threshold. Then, it sends a request message to the Fog agents of neighboring regions in terms of a predefined distance DS (DS is the geographic distance between two nodes) in order to minimize the response time. If $FA_d$ receives proposals from neighboring Fog agents, then it must choose the proposal with the shortest distance DS and send the final mapping plan to $CTA_j$. Else, i.e. if $FA_d$ does not receive any proposal and has exceeded the predefined waiting time WT (a time that should not be exceeded when waiting for a proposal from other Fog agents), it must send a resource allocation request to the Cloud agent $CA_s$ in its region. Therefore, $CA_s$ creates all possible proposals on the basis of reducing the cost and execution time. It is worth mentioning that $CA_s$ must choose the $vm_v$ with minimum execution time in order to satisfy the sub-workflow deadline, even if it is expensive.

The pseudocode corresponding to the above procedure is shown in Algorithm 1. The first step of the procedure is described in lines 3-4, whereas the second step is described in lines 5-6. Lines 7-9 illustrate the third step, step 4 is illustrated in lines 10-12, and the last step of the procedure is described in lines 13-25.
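Since Algorithm 1 itself is not reproduced here, the following hedged sketch paraphrases the decision logic of Steps 4 and 5: schedule locally if the computing threshold allows it, otherwise ask neighbouring Fog agents and keep the closest proposal, and fall back to the Cloud agent if no Fog proposal arrives. The interfaces and method names are hypothetical and do not correspond to the actual MAS-GA code.

```java
import java.util.List;
import java.util.Optional;

/**
 * Illustrative sketch of the Fog agent decision logic of Steps 4-5
 * (threshold check, cooperation with neighbouring Fog agents, Cloud
 * fallback). The interfaces are hypothetical; the paper's Algorithm 1
 * is only paraphrased here.
 */
public class FogAgentDecision {

    interface SchedulingProposal { double distance(); }
    interface NeighbourFogAgent  { Optional<SchedulingProposal> propose(double subWorkflowWeight); }
    interface CloudAgent         { SchedulingProposal cheapestFastestProposal(double subWorkflowWeight); }

    /** Local scheduling with the GA is abstracted away; returns empty if the threshold is exceeded. */
    static Optional<SchedulingProposal> scheduleLocally(double weight, double computingThreshold) {
        if (weight <= computingThreshold) {
            return Optional.<SchedulingProposal>of(() -> 0.0);   // proposal on the local node, distance 0
        }
        return Optional.empty();
    }

    static SchedulingProposal decide(double subWorkflowWeight, double computingThreshold,
                                     List<NeighbourFogAgent> neighbours, CloudAgent cloud) {
        // Step 4: try to schedule on the agent's own Fog nodes first.
        Optional<SchedulingProposal> local = scheduleLocally(subWorkflowWeight, computingThreshold);
        if (local.isPresent()) return local.get();

        // Step 5: ask neighbouring Fog agents and keep the closest proposal received.
        Optional<SchedulingProposal> best = neighbours.stream()
                .map(n -> n.propose(subWorkflowWeight))
                .flatMap(Optional::stream)
                .min((a, b) -> Double.compare(a.distance(), b.distance()));
        if (best.isPresent()) return best.get();

        // Fallback: no Fog proposal within the waiting time, delegate to the Cloud agent.
        return cloud.cheapestFastestProposal(subWorkflowWeight);
    }

    public static void main(String[] args) {
        NeighbourFogAgent neighbour = w -> Optional.<SchedulingProposal>of(() -> 12.5);
        CloudAgent cloud = w -> () -> Double.MAX_VALUE;          // never reached in this example
        SchedulingProposal p = decide(50.0, 40.0, List.of(neighbour), cloud);
        System.out.println("chosen proposal distance = " + p.distance());
    }
}
```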
4.5 Mixed min‑cut graph
Our approach proposes to transform a unique workflow into a set of federated sub-workflows that can be executed by different agents. Therefore, we use the Mixed Min-Cut Graph method (MMCG) (Jiang and Wang 2007), which aims to balance the sum of the weight values of the vertices in each cut. We consider a workflow as a graph that must be partitioned into sub-graphs satisfying area constraints. Figure 2 illustrates the final partitioning graph obtained with the mixed min-cut technique (Jiang and Wang 2007), where G is the main workflow and M is the new graph with the feasible sub-workflows. Firstly, G is split into 5 random cuts, namely C1, C2, C3, C4 and C5. The algorithm looks for cuts that hold an equal sum of node weights w. Namely, let C5 and C4 be the feasible partitions with $w = 15$ as the sum of node weights and a minimum communication cost of 3 as the sum of the output edge weights of each cut, considering the precedence constraints between tasks during the partitioning process. Secondly, the graph M is created with three sub-workflows. The first sub-workflow in M contains 4 tasks, the second holds 6 tasks and the last one holds 5 tasks, with the same size and with the lowest communication weight.

Fig. 2 Final partitioning after mixed min-cut graph technique (Jiang and Wang 2007)

Fig. 3 Partitioning graph satisfying the precedence constraints (Jiang and Wang 2007)

Figure 3 presents an example of partitioning that satisfies the precedence constraints. The constraints that define a time order of the nodes are called precedence constraints P(v) between nodes v and u, such that P(v) < P(u); thus, v must be scheduled before u when partitioning. Figure 3a depicts an example of a given directed acyclic graph. Figure 3b, c depict two graph partitioning examples. Obviously, Fig. 3b results in a cyclic precedence relation and cannot satisfy the precedence constraints. Contrarily, the partitioning result in Fig. 3c satisfies the precedence constraints. Firstly, the partitioning technique applies the traditional flow graph-based algorithm. Then, all iterations produce a set of possible min-cuts that are saved in order to choose the feasible cuts with the Breadth First Search (BFS) technique. A feasible partitioning solution of a workflow should verify the following conditions, checked in the sketch after this list:

- the precedence constraints: the time order between tasks should be respected;
- the maximum number of tasks in each cut is predefined;
- the amount of data communication between the sub-workflows must be minimized and respect a predefined threshold.
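The sketch below checks these three feasibility conditions on a candidate partitioning; it is not an implementation of the MMCG algorithm of Jiang and Wang (2007), and the data layout (a task-to-partition map plus weighted arcs) is an assumption.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative check of the three feasibility conditions a partitioning must
 * satisfy (Sect. 4.5). It does not implement the mixed min-cut algorithm of
 * Jiang and Wang (2007); the data layout is an assumption.
 */
public class PartitionFeasibility {

    /**
     * @param partOf        maps task id -> partition id
     * @param arcs          arcs as {from, to, communicationWeight}
     * @param maxTasks      maximum number of tasks allowed in each cut
     * @param maxCutWeight  threshold on the data exchanged between sub-workflows
     */
    static boolean isFeasible(Map<Integer, Integer> partOf, List<double[]> arcs,
                              int maxTasks, double maxCutWeight) {
        // Condition 2: maximum number of tasks per cut.
        Map<Integer, Long> sizes = new java.util.HashMap<>();
        partOf.values().forEach(p -> sizes.merge(p, 1L, Long::sum));
        if (sizes.values().stream().anyMatch(s -> s > maxTasks)) return false;

        // Condition 1: precedence order between partitions must be acyclic.
        // Simple sufficient check: an arc may only go to the same or a later partition.
        double cutWeight = 0;
        for (double[] arc : arcs) {
            int pFrom = partOf.get((int) arc[0]);
            int pTo   = partOf.get((int) arc[1]);
            if (pTo < pFrom) return false;           // would create a cyclic precedence relation
            if (pTo != pFrom) cutWeight += arc[2];   // data crossing the cut
        }

        // Condition 3: bounded communication between sub-workflows.
        return cutWeight <= maxCutWeight;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> partOf = Map.of(1, 0, 2, 0, 3, 1, 4, 1);
        List<double[]> arcs = List.of(new double[]{1, 2, 5}, new double[]{2, 3, 2}, new double[]{3, 4, 4});
        System.out.println("feasible = " + isFeasible(partOf, arcs, 3, 3.0));
    }
}
```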
4.6 Genetic algorithm
It is notable that workflow scheduling problems (Xu et al. 2020; Ismayilov and Topcuoglu 2020; Wang et al. 2020; Mohammadzadeh et al. 2020) are known to be NP-hard problems, for which it is impossible to discover an optimal global solution using simple algorithms or rules. The use of metaheuristics in several applications emphasizes their performance in dealing with difficult and large-scale problems.

Fig. 4 Genetic algorithm steps

Fig. 5 Illustration of problem encoding

Therefore, we intend to develop a workflow scheduling approach based on the Genetic Algorithm (GA), which is one of the most successful and widely used metaheuristics in optimization. The genetic algorithm (Holland 1992) belongs to the family of evolutionary algorithms. It is a method based on "populations" of points, and it aims to solve hard optimization problems. The different steps of the GA are illustrated in the flowchart shown in Fig. 4. The GA starts by (1) generating an initial population consisting of randomly derived solutions (each solution represents a mapping of the workflow tasks to the virtual machines). (2) Evaluate the fitness function (based on the different QoS metrics) of each individual in the population. (3) Apply the selection operator, which makes it possible to select, among all the individuals of a population, those which are the most apt to reproduce a new generation. (4) Apply the crossover operator on each pair of selected individuals, or parents, to produce two new individuals, or children. (5) The mutation operator is applied on the produced children in order to find new solutions and to avoid the algorithm being trapped in a local optimum. (6) Repeat the algorithm from the second step until it converges or reaches a maximum number of generations. Then, $FA_i$ must choose the most optimized solution. In order to adapt the genetic algorithm to our workflow scheduling problem, we must define an adequate structure encoding a solution, a fitness function for the evaluation process, as well as genetic operators for the evolution.
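For readers who prefer code to the flowchart, the following self-contained toy example runs the loop of Fig. 4 end to end on a deliberately simplified instance (6 tasks, 2 VMs, a load-balance fitness). The toy fitness, the one-point crossover and the parameter values are illustrative only; the actual operators and fitness of MAS-GA are those described in Sects. 4.6.1-4.6.4.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

/**
 * Compact illustration of the GA loop of Fig. 4 on a toy instance: map 6
 * tasks onto 2 VMs so that the larger per-VM load (a crude makespan proxy)
 * is minimised. The toy fitness and parameters are assumptions.
 */
public class GaLoopDemo {

    static final double[] WORK = {4, 7, 3, 9, 2, 5};   // computing work of the 6 tasks
    static final int VMS = 2;
    static final Random RND = new Random(1);

    static double fitness(int[] ind) {                  // step 2: evaluate (lower is better)
        double[] load = new double[VMS];
        for (int t = 0; t < ind.length; t++) load[ind[t]] += WORK[t];
        return Math.max(load[0], load[1]);
    }

    static int[] tournament(List<int[]> pop) {           // step 3: selection
        int[] a = pop.get(RND.nextInt(pop.size())), b = pop.get(RND.nextInt(pop.size()));
        return fitness(a) <= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < 30; i++) {                   // step 1: random initial population
            int[] ind = new int[WORK.length];
            for (int t = 0; t < ind.length; t++) ind[t] = RND.nextInt(VMS);
            pop.add(ind);
        }
        for (int gen = 0; gen < 100; gen++) {            // step 6: iterate until the generation limit
            List<int[]> next = new ArrayList<>();
            while (next.size() < pop.size()) {
                int[] child = tournament(pop).clone();
                int[] other = tournament(pop);
                int cut = RND.nextInt(child.length);     // step 4: one-point crossover (simplified)
                for (int t = cut; t < child.length; t++) child[t] = other[t];
                if (RND.nextDouble() < 0.01)             // step 5: mutation with probability 0.01
                    child[RND.nextInt(child.length)] = RND.nextInt(VMS);
                next.add(child);
            }
            pop = next;
        }
        int[] best = pop.stream().min(Comparator.comparingDouble(GaLoopDemo::fitness)).orElseThrow();
        System.out.println("best load split fitness = " + fitness(best));
    }
}
```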
4.6.1 Solution encoding
A population is formed by a set of individuals I, where each of them represents a feasible solution to the workflow scheduling (or task assignment) problem. Each task assignment includes four elements: taskID, resourceID, startTime and endTime. taskID and resourceID indicate where each task is assigned; startTime and endTime indicate the lifetime of a task on a resource. We design a workflow as illustrated in Fig. 5a, composed of five tasks where $T_1$ and $T_2$ are the input tasks represented in parallel, i.e. they have no precedence link between them and can consequently be executed in parallel. In addition, $T_3$, $T_4$ and $T_5$ are grouped in sequential order; in other words, they depend on each other. A scheduling solution is presented in Fig. 5c, where the tasks $T_1$ and $T_2$ are assigned to $VM_1$ and $VM_2$ respectively, at the same time. Then, $T_3$ is scheduled on $VM_1$; $T_4$, which must be launched after $T_3$, is assigned to $VM_2$; and $T_5$, which represents the workflow output, is assigned to $VM_1$. The task assignment is based on the workflow design. Namely, a task $T_i$ is served if and only if all its predecessor tasks are served. Besides, many tasks have a parallel position in the graph, so they may compete for the same time slot on a service. For this reason, we use a solution representation based on series to show the order of task assignments on each resource, as illustrated in Fig. 5c.
4.6.2 Fitness function
In our proposed approach we aim to optimize the scheduling performance based on two factors: execution time and cost. The fitness function (Holland 1992) separates the evaluation into two parts: cost-fitness and time-fitness. Both functions use two percentage variables, $\alpha$ and $\beta$. If the Manager Agent specifies a budget constraint, then $\alpha$ has a higher ratio than $\beta$ (for instance, $\alpha = 80\%$ and $\beta = 20\%$). If the Manager Agent specifies a deadline constraint, then $\beta$ has a higher ratio than $\alpha$ (for instance, $\beta = 80\%$ and $\alpha = 20\%$):

$$fitness(I) = \begin{cases} cost\_fitness(I) & \text{if } \alpha > \beta \\ time\_fitness(I) & \text{if } \alpha < \beta \end{cases}$$

In the budget-constrained scheduling case, the cost-fitness function ensures the choice of a solution that satisfies the budget constraint, i.e. the genetic algorithm must choose a solution with the least cost. The cost-fitness of an individual I is calculated as follows:

$$cost\_fitness(I) = \frac{c(I)}{B \cdot \alpha + maxCost \cdot (1-\alpha)} \quad (6)$$

where:

- B is the pre-mentioned budget constraint;
- maxCost is the cost of the solution in the current population with the highest cost;
- c(I) is the sum of the service price and the data transmission cost for processing each $T_i$ on its resource $R_j$.

In the deadline-constrained scheduling case, the time-fitness function ensures the choice of a solution that satisfies the deadline constraint, i.e. the genetic algorithm chooses a solution with less execution time. The time-fitness of an individual I is calculated as follows:

$$time\_fitness(I) = \frac{t(I)}{D \cdot \beta + maxTime \cdot (1-\beta)} \quad (7)$$

where:

- D is the pre-mentioned deadline constraint;
- maxTime is the completion time of the solution in the current population with the largest completion time;
- t(I) is the sum of the processing time and the data transmission time for processing each $T_i$ on its resource $R_j$.

Fig. 6 Illustration of crossover operation
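The sketch below evaluates the two fitness components exactly as Eqs. (6) and (7) are reconstructed above. Since the original equations were recovered from a damaged layout, the normalisation by $B\alpha + maxCost(1-\alpha)$ and $D\beta + maxTime(1-\beta)$ should be read as our interpretation rather than as the paper's definitive formula; the numeric values in the example are arbitrary.

```java
/**
 * Sketch of the cost- and time-fitness evaluation of Sect. 4.6.2, following
 * the reconstructed Eqs. (6) and (7); the exact normalisation used in the
 * original paper may differ, so treat the two formulas as an assumption.
 */
public class FitnessFunctions {

    /** Eq. (6): cost fitness of an individual with total cost c(I). Lower is better. */
    static double costFitness(double cost, double budget, double maxCost, double alpha) {
        // denominator weights the budget B by alpha and the population's maxCost by (1 - alpha)
        return cost / (budget * alpha + maxCost * (1 - alpha));
    }

    /** Eq. (7): time fitness of an individual with total time t(I). Lower is better. */
    static double timeFitness(double time, double deadline, double maxTime, double beta) {
        return time / (deadline * beta + maxTime * (1 - beta));
    }

    /** The component actually optimised depends on which weight dominates (alpha vs beta). */
    static double fitness(double cost, double time, double budget, double deadline,
                          double maxCost, double maxTime, double alpha, double beta) {
        return alpha > beta
                ? costFitness(cost, budget, maxCost, alpha)
                : timeFitness(time, deadline, maxTime, beta);
    }

    public static void main(String[] args) {
        // Budget-constrained case: alpha = 0.8, beta = 0.2 (example weights from the text).
        double f = fitness(1200, 250, 1500, 300, 2000, 400, 0.8, 0.2);
        System.out.printf("fitness = %.4f%n", f);
    }
}
```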
4.6.3 Crossover
The objective of the crossover operator is to combine several parts of individuals in order to create new individuals in the current population. As illustrated in Fig. 6, in the crossover step we choose two random parents (parent 1, parent 2) in the current population. Then, we randomly select two points from the schedule order of the first parent. Finally, the locations of all tasks between these two points are exchanged. The result of this step is the generation of two children (Child 1, Child 2).
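A minimal sketch of this two-point exchange on a flat genome (one VM index per task) is shown below; the simplified representation is an assumption, since the paper exchanges task locations in the schedule order of Fig. 6.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Illustrative two-point crossover (Sect. 4.6.3): the task locations between
 * two randomly chosen points are exchanged between the parents. The flat
 * int[] genome (task index -> VM index) is a simplifying assumption.
 */
public class TwoPointCrossover {

    static List<int[]> crossover(int[] parent1, int[] parent2, Random rnd) {
        int a = rnd.nextInt(parent1.length);
        int b = rnd.nextInt(parent1.length);
        int from = Math.min(a, b), to = Math.max(a, b);

        int[] child1 = parent1.clone();
        int[] child2 = parent2.clone();
        for (int i = from; i <= to; i++) {      // swap the segment between the two points
            child1[i] = parent2[i];
            child2[i] = parent1[i];
        }
        List<int[]> children = new ArrayList<>();
        children.add(child1);
        children.add(child2);
        return children;
    }

    public static void main(String[] args) {
        int[] p1 = {1, 1, 2, 2, 1};             // VM chosen for tasks T1..T5 in parent 1
        int[] p2 = {2, 1, 1, 2, 2};
        List<int[]> kids = crossover(p1, p2, new Random(42));
        System.out.println(java.util.Arrays.toString(kids.get(0)));
        System.out.println(java.util.Arrays.toString(kids.get(1)));
    }
}
```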
4.6.4 Mutation

The role of the mutation operator is to randomly modify, with a certain probability, the value of an individual. In our approach, we adopted swapping mutation as the mutation operator. An example of swapping mutation is illustrated in Fig. 7. The main goal of this mutation method is to change the execution order of tasks in a solution (individual). Swapping mutation consists of two principal steps: (1) a task is randomly selected in the individual; (2) an alternative service is randomly selected to replace the current task allocation.

Fig. 7 Illustration of swapping mutation operation
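The corresponding sketch is given below; again, the flat genome and the fixed random seed are illustrative assumptions.

```java
import java.util.Random;

/**
 * Illustrative swapping mutation (Sect. 4.6.4): with a small probability, a
 * randomly selected task is reassigned to an alternative, randomly selected
 * service. The flat int[] genome (task index -> VM index) is an assumption.
 */
public class SwapMutation {

    static void mutate(int[] individual, int numResources, double mutationRate, Random rnd) {
        if (rnd.nextDouble() >= mutationRate) return;      // mutation applied with a small probability
        int task = rnd.nextInt(individual.length);         // (1) pick a task at random
        int newResource = rnd.nextInt(numResources) + 1;   // (2) pick an alternative service at random
        individual[task] = newResource;                    // replace the current task allocation
    }

    public static void main(String[] args) {
        int[] individual = {1, 2, 1, 2, 1};
        mutate(individual, 3, 1.0, new Random(7));          // rate forced to 1.0 to show the effect
        System.out.println(java.util.Arrays.toString(individual));
    }
}
```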
4.7 Complexity analysis
The complexity of our approach depends on:

- The complexity of the workflow partitioning algorithm: the authors in Lobo et al. (2000) noted that multi-path partitioning requires many iterations to find a list that includes a set of cuts, which increases the running time. They noted that the total complexity of the algorithm is equal to $O(n^4)$, where n is the number of nodes in the graph G.
- The complexity of the workflow scheduling algorithm (GA): our optimization method is based on the GA, which converges to the global optimum (the best solution). Therefore, we adopt the average convergence for measuring the complexity of the problem. However, by increasing the size of the population we will in many cases arrive at a different convergence time. Usually, genetic algorithm performance is determined by the number of fitness function evaluations done during an iteration of execution. The fitness function is given by the sum of m non-overlapping sub-functions, each of which is a function of k genes. Therefore, genetic algorithms with perfect mixing have time complexities of O(m) (Jiang and Wang 2007).
5 Experimental study and analysis of results

In this section, we present the evaluation experiments of our proposed workflow scheduling solution. The basic aim of our experiments is to assess the combination of Fog and Cloud computing and to demonstrate the effectiveness of MAS-GA in converging towards an optimal scheduling solution.
5.1 Experimental parameters
We choose WorkflowSim (Chen and Deelman 2012) as our simulation platform, which extends the existing CloudSim simulator by providing a higher layer of workflow management. During the evaluation experiments, the parameters we adopt are the user workflow, the Fog-Cloud resource model and the parameters of the genetic algorithm (GA).
5.1.1 Workflow parameters
Due to the lack or unavailability of real data and benchmarks of IoT workflows, we used scientific workflow benchmarks in the experiments, but also generated random workflows that could simulate the characteristics of IoT tasks. Therefore, in our experiments we created randomly generated workflows with sizes between 10 and 300 tasks. Each workflow consists of tasks that have to be performed to achieve a given goal. A task is described by a capacity which represents the task computing value, a duration (in ms) and a list of predecessors (precedence relationships). For a straightforward implementation, a task also stores information about the resources assigned to it and its start time in the workflow. In order to evaluate our approach, we also consider five workflow benchmarks (Goderis et al. 2008) often used in related works: CyberShake, Epigenomics, Inspiral, Montage and Sipht. Figure 8 and Table 2 illustrate the structure and characteristics (such as the number of nodes and edges as well as the size of the nodes) of each workflow type (Goderis et al. 2008).
5.1.2 Fog‑cloud resource parameters
The simulation scenario comprises three Fog regions and a Cloud data center; each Fog region possesses one Fog node with three virtual machines, and the Cloud data center is composed of 100 virtual machines. Each resource has a price as well as a capacity which is compared with each task's computing requirement. We adopt a pricing model similar to that of Amazon EC2. Table 3 illustrates the type, the characteristics (such as the memory and CPU average) and the price of the Fog computing and Cloud computing VM instances used.

Fig. 8 Structure of five types of workflow benchmarks

Table 2 Characteristics of used workflow benchmarks

Workflow      Nodes  Edges  Average data size (MB)
CyberShake     30    112    747.48
CyberShake     50    188    864.74
CyberShake    100    380    849.60
Epigenomics    24     75    116.20
Epigenomics    46    148    104.81
Epigenomics   100    322    395.10
Inspiral       30     95      9.00
Inspiral       50    160      9.16
Inspiral      100    319      8.93
Montage        25     95      3.43
Montage        50    206      3.46
Montage       100    433      3.23
Sipht          30     91      7.73
Sipht          60    198      6.95
Sipht         100    335      6.27
5.1.3 GA parameters
A basic difficulty of genetic algorithms lies not in the implementation of the algorithm itself, but rather in the choice of adequate values for the GA parameters. It is therefore a question of tuning the GA parameters until an acceptable solution is found. The genetic algorithm parameters depend on the population size. On the one hand, when adopting a large population size, its diversity will increase, which decreases the convergence towards a local optimum; however, the execution time of each generation will increase, thus affecting the efficiency of the algorithm. On the other hand, if the size of the population is small, the probability of converging towards a local optimum is high. The genetic algorithm parameters also depend on the crossover operator: the higher the crossover probability (Goldberg 1994), the more new individuals are introduced in the new generation; however, if this probability is too low, the population does not evolve fast enough. The last genetic algorithm parameter is the mutation operator. If this rate is high, the search becomes purely random; the population is diverse and the GA loses its effectiveness. If this rate is low, the population is less diversified and may be at risk of stagnating. Empirical studies recommend a low mutation rate of around 0.01 (Goldberg 1994) for optimal results. In order to guarantee optimal results generated by the genetic algorithm, we tested our scheduling algorithm with different GA parameters based on the details cited above. Figure 9 shows the progression of the fitness value according to the number of generations. Based on the results depicted in Fig. 9, we adopt 100 as the number of generations, 30 as the population size, 0.9 as the crossover probability and 0.01 as the mutation probability. Indeed, the first set of parameters selected allows our proposed MAS-GA algorithm to converge at the 17th generation, while with the second set of parameters the optimal solution is reached at the 60th generation.

Table 3 Characteristics of used VM instances

Environment      VM instance  Instance characteristics                                                                    Price (USD/h)
Fog computing    t3.small     Memory: 2 GB; CPU: 1 virtual core; Storage: EBS(a) only; Network: up to 5 Gigabit Ethernet  0.751
Fog computing    t3.xlarge    Memory: 16 GB; CPU: 4 virtual cores; Storage: EBS only; Network: up to 5 Gigabit Ethernet   1.016
Cloud computing  c4.large     Memory: 3.75 GB; CPU: 8 virtual cores; Storage: EBS only; Network: moderate                 0.83
Cloud computing  c4.xlarge    Memory: 7.5 GB; CPU: 16 virtual cores; Storage: EBS only; Network: high                     1.049

(a) Amazon Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2).

Fig. 9 GA convergence

Fig. 10 Random workflow execution time on Fog, Cloud and MAS-GA
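Gathering the adopted values in one place, a trivial configuration holder could look as follows; wrapping them in a constants class is purely an illustrative convention.

```java
/** The GA parameter values adopted after the tuning experiments of Fig. 9. */
public final class GaParameters {
    public static final int    GENERATIONS           = 100;
    public static final int    POPULATION_SIZE       = 30;
    public static final double CROSSOVER_PROBABILITY = 0.9;
    public static final double MUTATION_PROBABILITY  = 0.01;

    private GaParameters() {}

    public static void main(String[] args) {
        System.out.printf("generations=%d, population=%d, crossover=%.2f, mutation=%.2f%n",
                GENERATIONS, POPULATION_SIZE, CROSSOVER_PROBABILITY, MUTATION_PROBABILITY);
    }
}
```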
5.2 Experimental results
In order to evaluate the performance of our contribution, we have conducted several experiments. Firstly, we assess the performance of the collaboration between Fog and Cloud computing by applying our proposed approach on the Fog layer separately, then on the Cloud layer, and finally on both the Fog and Cloud layers. Secondly, we appraise the effectiveness of MAS-GA in converging towards an optimal scheduling solution.
5.2.1 Environment evaluation
This section presents the MAS-GA evaluation in terms of
execution time, response time, and makespan.
5.2.2 Comparison of workflow execution time on Fog, Cloud and MAS-GA
Figure10 demonstrates that when the workflow size is
small (i.e., size
=10
), the computation capability of the
Fog node can guarantee the lowest execution time. But
when the workflow size increases (i.e., size
>10
), the exe-
cution time on Fog nodes continue to grow because of the
limited Fog resources capacity. The Fog agent will schedule
the workflow partition that does not exceed their resource
capacity, and send the rest of the tasks to the nearest Fog
agent. This partitioning process and the collaboration
between Fog nodes to execute workflow tasks that require
an important computational capacity causes significant
queuing delay to drastically increase the execution time.
Although the Cloud data center has sufficient computation
Fig. 11 Workflow benchmarks
execution time on Fog, Cloud
and MAS-GA
4733Cooperative agents-based approach forworkflow scheduling onfog-cloud computing
1 3
capabilities to eliminate the queuing delay. Therefore, the
results in Fig.10 illustrates that the workflow execution
time on Cloud is lower than on Fog computing. Compared
with the Fog computing, although the Cloud achieves a
lower execution time, it ignores the latency sensibility of
tasks based on IoT application. Our proposed scheduling
algorithm based on MAS-GA can improve the lowest exe-
cution time compared to both scheduling algorithms based
on cloud and fog computing, separately. This is because
our algorithm not only enables the Fog agent to cooperate
with the Cloud data center; it also enables the Fog agent
to cooperate with its neighboring Fog agents. In Fig.11,
the execution time of the five workflow benchmarks on
Cloud computing is much higher than other environments.
The search of the suitable mapping task-resource in such
huge Cloud data centers can affect the workflow execution
time. Similar to random workflow execution time on Fog
computing, workflow benchmarks execution time on Fog
computing is important and that’s due to the significant
queuing delay. Furthermore, the results in Figs.10 and 11
validate that it is vital to enable the Fog agent to cooperate
with the Cloud agent in order to optimize the workflow
execution time.
5.2.3 Comparison of workflow scheduling response time on Fog, Cloud and MAS-GA
We also evaluate the efficiency of MAS-GA by measuring the response time when executing the workflow on Fog computing, on Cloud computing, and between Fog and Cloud computing. The different geographic positions of users, Fog nodes and the Cloud data center adopted in our experimental tests are shown in Fig. 13. The average response time depends on the distance between the scheduling actors (users, Fog nodes and Cloud data center). With this in mind, we adopted the Haversine formula (Robusto 1957), which determines the orthodromic distance (the shortest distance between two points on the surface of a sphere) between two points on the earth's surface from their longitudes and latitudes. Equation (8) presents the Haversine formula (Robusto 1957):

$$d = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{\Phi_2-\Phi_1}{2}\right) + \cos(\Phi_1)\cos(\Phi_2)\sin^2\left(\frac{\lambda_2-\lambda_1}{2}\right)}\right) \quad (8)$$

where d is the distance between the two points, $\Phi$ is the latitude, $\lambda$ is the longitude and R is the earth's radius (mean radius = 6371 km). Initially, we calculated the earth distance between the two agents that communicate first with each other (i.e. CTA and FA); it corresponds to the $RAR_i$ transmission from the Contractual Agent to the closest Fog agent in its region. Then, the second communication is established between FA and CA; this communication represents the cooperation between the Fog agent and the Cloud agent to create a workflow scheduling proposal. The two communications mentioned above are illustrated in Fig. 12.
Unsurprisingly, the Cloud data center and the end users may be physically far apart, which can significantly increase the response time, as shown by the results in Figs. 14 and 15. Otherwise, Fog computing is able to guarantee a reasonable response time, due to its close position to the end user.
Fig. 12 Scheduling actors distances
Fig. 13 GPS coordinates
Fig. 14 Random workflow response time on Fog, Cloud and MAS-
GA
4734 M.Mokni et al.
1 3
Fig. 15 Workflow benchmarks
response time on Fog, Cloud
and MAS-GA
Fig. 16 Workflow benchmarks
makespan on Fog, Cloud and
MAS-GA
Table 4 Mono-objective optimization values

Workflow        MAS-GA configuration  Makespan/s  Cost/$    Reliability/%  Availability/%
Epigenomics-24  MAS-GA(1,0,0,0)       192         1799      99,999,766     336,917
Epigenomics-24  MAS-GA(0,1,0,0)       251         1238      999,998,529    634,058
Epigenomics-24  MAS-GA(0,0,1,0)       210,891     1303      999,999,971    355,131
Epigenomics-24  MAS-GA(0,0,0,1)       232         1320      999,999,801    25,431
Epigenomics-24  Best result           192         1238      999,999,971    25,431
Montage-25      MAS-GA(1,0,0,0)       199         1763      999,997,766    406,917
Montage-25      MAS-GA(0,1,0,0)       299         15,046    999,998,529    634,058
Montage-25      MAS-GA(0,0,1,0)       224,891     152,564   99,999,9971    355,131
Montage-25      MAS-GA(0,0,0,1)       200,891     161,564   999,998,971    336,917
Montage-25      Best result           199         1504      999,999,971    336,917
The results in Figs. 14 and 15 also show that the overall response time of scheduling the workflow on Fog computing is almost the same as that of scheduling it with MAS-GA. Thus, the presence of Fog computing in a scheduling algorithm makes it possible to optimize the response time.
5.2.4 Comparison of workflow scheduling makespan on Fog, Cloud and MAS-GA
In this section, we execute five workflow benchmarks under Fog computing, Cloud computing and MAS-GA in order to evaluate the makespan in each environment. Figure 16 illustrates that scheduling the workflow with MAS-GA outperforms scheduling the workflow on Fog computing alone in terms of makespan. Our proposed algorithm enables the local Fog nodes to collaborate with the Cloud data center, and thus succeeds in lowering the makespan. Otherwise, the results in Fig. 16 show that Cloud computing yields the lowest makespan, due to the huge amount of high-performance resources residing in the Cloud data center.
Table 5 Multi-objective optimization values

Workflow        MAS-GA configuration            Makespan/s  Cost/$   Reliability/%  Availability/%
Epigenomics-24  MAS-GA(0.25, 0.25, 0.25, 0.25)  192         1344     99,9998986     30,782
Epigenomics-24  MAS-GA(0.14, 0.58, 0.14, 0.14)  271         1,238    999,993,874    73,5644
Epigenomics-24  MAS-GA(0.22, 0.56, 0.11, 0.11)  253         1,239    999,999,862    31,7793
Montage-25      MAS-GA(0.25, 0.25, 0.25, 0.25)  199         1104     999,998,995    33,6917
Montage-25      MAS-GA(0.14, 0.58, 0.14, 0.14)  250         1105     999,998,886    4136
Montage-25      MAS-GA(0.22, 0.56, 0.11, 0.11)  210         2321     999,996,588    781,886
Fig. 17 Comparison of multi-objective and mono-objective optimiza-
tion percentages of best results for the Montage-25 workflow
Fig. 18 Comparison of multi-objective and mono-objective optimiza-
tion percentages of best results for Epigenomics-24 workflow
Table 6 Multi-objective optimization on Cloud computing

Workflow        MAS-GA configuration  Makespan/s  Cost/$
Montage-25      MAS-GA(0.9, 0.1)      188         2,881
Montage-25      MAS-GA(0.5, 0.5)      227,33      2,738
Montage-25      MAS-GA(0.2, 0.8)      260,33      2.651
Epigenomics-24  MAS-GA(0.9, 0.1)      278         1.551
Epigenomics-24  MAS-GA(0.5, 0.5)      297,3       2,338
Epigenomics-24  MAS-GA(0.2, 0.8)      310         2,301
Table 7 Multi-objective optimization on Fog computing

Workflow        MAS-GA configuration  Makespan/s  Cost/$
Montage-25      MAS-GA(0.9, 0.1)      265         2428
Montage-25      MAS-GA(0.5, 0.5)      292,66      2.592
Montage-25      MAS-GA(0.2, 0.8)      313         2108
Epigenomics-24  MAS-GA(0.9, 0.1)      310,68      1301
Epigenomics-24  MAS-GA(0.5, 0.5)      352         2219
Epigenomics-24  MAS-GA(0.2, 0.8)      390,666     2154
4736 M.Mokni et al.
1 3
huge amount of performant resources that reside on Cloud
data center.
5.2.5 MAS‑GA evaluation
In order to evaluate the MAS-GA algorithm, we carry out several experiments: (1) testing the aggregated multi-objective fitness function when multiple QoS metrics are required, and (2) evaluating the efficiency of MAS-GA when realizing a multi-objective optimization.
5.2.6 Mono-objective and multi-objective optimization
We evaluate the proposed genetic algorithm firstly by optimizing each QoS metric individually, i.e., performing a mono-objective workflow optimization. Secondly, we vary the weight of each QoS metric aggregated by the fitness function. MAS-GA aims to optimize four basic QoS metrics, namely the makespan, cost, reliability and availability. Table 4 reports the mono-objective optimization values for the Epigenomics-24 and Montage-25 workflows, whereas Table 5 presents the multi-objective optimization results for the same workflows. Comparing the three MAS-GA configuration results with the mono-objective best values, we observe in Figs. 17 and 18 that the MAS-GA configuration (0.25, 0.25, 0.25, 0.25), i.e., assigning each QoS metric a weight of 0.25, gives good overall results. When we apply this configuration to the Epigenomics-24 workflow, each QoS metric reaches more than 80 percent of its best value; moreover, the makespan and the availability reach the best values obtained in the mono-objective optimization experiments. Likewise, applying this configuration to the Montage-25 workflow yields more than 70 percent of the best values obtained from the mono-objective optimizations. Regarding the second MAS-GA configuration (0.14, 0.58, 0.14, 0.14), it is notable that favoring the cost optimization can negatively affect the other QoS metrics, as shown in Figs. 17 and 18. The last MAS-GA configuration (0.22, 0.56, 0.11, 0.11) gives a balance between the QoS metrics: the makespan and cost reach more than 70 percent of their best values, as shown in Figs. 17 and 18.
5.2.7 Multi-objective optimization on MAS-GA
This section evaluates the multi-objective optimization capability of the proposed algorithm by varying the weight of each QoS metric aggregated by the fitness function. Here, MAS-GA aims to optimize two basic QoS metrics, namely the cost and the makespan. We carry out the experiments by executing the Montage-25 and Epigenomics-24 workflows on Fog computing, on Cloud computing and on MAS-GA. The results in Table 7 show that, regardless of the MAS-GA configuration, executing the Montage-25 and Epigenomics-24 workflows on Fog computing generates a higher cost and makespan compared with those obtained using the Cloud and MAS-GA.
Unsurprisingly, the results in Table 6 show that Cloud computing is always the best solution for executing the workflow with the lowest makespan, but it penalizes the cost and the response time, as mentioned in Sect. 5.2.1. Otherwise, the results in Table 8 illustrate that executing the Montage-25 and Epigenomics-24 workflows on MAS-GA generates an optimal solution in terms of makespan and cost. The MAS-GA configuration (0.5, 0.5), which weights the makespan at 50 percent and the cost at 50 percent, generates the lowest makespan compared with the other MAS-GA configurations, at a reasonable cost.
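As a usage illustration of how the two-objective configurations reported in Tables 6, 7 and 8 can be compared, the short sketch below scores candidate (makespan, cost) pairs under the three weight settings used in the experiments. The reference scaling values and the use of the Montage-25 rows of Table 8 as a candidate pool are illustrative assumptions, not the paper's actual selection procedure.

# Illustrative comparison of bi-objective weight configurations (makespan, cost)
# over a set of candidate schedules, mirroring MAS-GA(0.9, 0.1), (0.5, 0.5)
# and (0.2, 0.8). Reference values are assumed scales for comparability.

def bi_objective_score(makespan, cost, w_makespan, w_cost,
                       makespan_ref=400.0, cost_ref=3000.0):
    # Scale each metric by an assumed reference value so the two objectives
    # become comparable before the weights are applied.
    return w_makespan * (makespan / makespan_ref) + w_cost * (cost / cost_ref)

# Candidate (makespan in s, cost in $) pairs taken from the Montage-25 rows
# of Table 8, reused here purely as an example candidate pool.
candidates = [(224, 1992), (251, 1801), (283, 1712)]

for w_m, w_c in [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8)]:
    best = min(candidates,
               key=lambda mc: bi_objective_score(mc[0], mc[1], w_m, w_c))
    print(f"weights ({w_m}, {w_c}) -> selected (makespan, cost) = {best}")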
6 Discussion
As per the results demonstrated in Sect. 5, we can argue that building a workflow scheduling map with MAS-GA contributes to optimizing the different QoS metrics while keeping a response time almost equal to that obtained with Fog computing, which yields the shortest response time of all the evaluated environments. In terms of makespan, the collaboration with the Cloud provider allows the computation-intensive tasks to be scheduled on the powerful Cloud resources, which contributes to minimizing the makespan. In terms of response time, however, the Fog computing nodes, being close to the user, produce an optimal response time. Based on our proposed approach, we therefore recommend a productive partnership between Fog computing and Cloud computing in order to establish an improved scheduling plan.
Table 8 Multi-objective optimization on MAS-GA

Workflow         MAS-GA configuration   Makespan/s   Cost/$
Montage-25       MAS-GA(0.9, 0.1)       224          1992
Montage-25       MAS-GA(0.5, 0.5)       251          1801
Montage-25       MAS-GA(0.2, 0.8)       283          1712
Epigenomics-24   MAS-GA(0.9, 0.1)       280          1229
Epigenomics-24   MAS-GA(0.5, 0.5)       302.33       1981
Epigenomics-24   MAS-GA(0.2, 0.8)       363          1928
7 Conclusion and future work
In this work, we present an approach for scheduling IoT workflows in a Fog-Cloud computing environment based on a Multi-Agent System (MAS). These agents navigate the environment in order to execute the workflow tasks with minimum execution time and cost. To this end, we implement a genetic algorithm based on deadline and budget constraints, which simultaneously optimizes several objectives, namely the makespan, cost, reliability and availability. Applications generated by IoT devices are modeled as a workflow in which each task represents an IoT application. The workflow is partitioned into a set of partitions in order to minimize the communication between contractual agents and to optimize the overall execution time. The proposed architecture is modeled by a MAS intended to handle the strong dynamicity of the environment, where each layer is managed by a group of agents and the different layers communicate and collaborate through the MAS functions. All the studied related works that deal with the workflow scheduling problem have ignored the issue of environmental dynamics and its impact on the quality of the scheduling solution. In addition, the Multi-Agent System is an adapted response to the requirements of the scheduling approach, maintaining the independence and autonomy of its components while evolving in a distributed Fog-Cloud environment. Furthermore, the main idea of our work is to highlight the role of the Fog-Cloud computing environment in the optimization of all QoS metrics. Thus, we evaluate the collaboration of Fog and Cloud computing by comparing it with the Fog tier and the Cloud tier taken separately, in terms of makespan, cost and response time. The results show that enabling the local Fog nodes to collaborate with the Cloud data center succeeds in lowering the completion time, the response time and the cost. Furthermore, developing a scheduling approach based on MAS-GA provides a balanced solution across all QoS metrics: the selected fitness function configuration simultaneously reaches 100% of the makespan, 90% of the reliability and 70% of the cost and availability values obtained by the single-objective optimization of these QoS metrics. The results of the proposed approach also show a significant improvement in the overall response time, with a gain of 46.67% compared to Cloud, in the cost, with a gain of 21.38% compared to Fog, and in the makespan, with a gain of 14.13% compared to Fog.

Our future work has three directions. The first direction is to conduct a more in-depth comparative study between our approach and the main approaches studied in the literature, in order to give academics and practitioners more knowledge on workflow scheduling on Fog-Cloud computing. The second direction is to extend the current research by taking into account interactions between agents, which involve dynamic negotiation in order to make workflow solutions more flexible through interactions between supplier and consumer, together with the implementation of a real IoT workflow use case. The third direction is to propose a new solution using a Pareto approach for the multi-objective optimization of the makespan, cost, reliability and availability, thus offering more flexibility to users to express their preferences and choose a schedule that better meets their QoS requirements.
Edge computing, an extension of cloud computing, is introduced to provide sufficient computing and storage resources for mobile devices. Moreover, a series of computing tasks in a mobile device are set as structured computing processes and flows to achieve effective management by the workflow. However, the execution uncertainty caused by performance degradation, service failure, and new service additions remains a huge challenge to the user's service experience. In order to address the uncertainty, a software‐defined network (SDN)‐based edge computing framework and a dynamic resource provisioning (UARP) method are proposed in this paper. The UARP method is implemented in the proposed framework and addresses the uncertainty through the advantages of SDN. In addition, the nondominated sorting genetic algorithm‐III is employed to optimize two goals, that is, the energy consumption and the completion time, to obtain balanced scheduling strategies. The comparative experiments are performed and the results show that the UARP method is superior to other methods in addressing the uncertainty, while reducing energy consumption and shortening the completion time.