An Energy-Efficient Load Balancing
Approach for Fog Environment Using
Scientific Workflow Applications
Mandeep Kaur and Rajni Aron
Abstract Fog computing has attracted the attention of researchers by bringing a revolution in the Internet of Things (IoT). Fog computing emerged as a complement to cloud computing: it extends cloud services to the network edge and processes large and complex tasks near end users. Furthermore, fog computing can process workflow tasks on its own nodes rather than sending them to the cloud, which reduces the time consumed in requesting and processing at the cloud layer. Scientific workflows are used to represent data flow in scientific applications, which are very time-critical. This paper proposes an energy-efficient load balancing approach for fog computing to reduce energy consumption in scientific workflow applications. The proposed algorithm reduces energy consumption in fog nodes by distributing the workload equally across fog resources. The Genome and SIPHT workflow applications have been evaluated in iFogSim.
Keywords Energy-efficient · Fog computing · Load balancing · Resource utilization · Scientific workflows
1 Introduction
Fog computing contains sensors, actuators, gateways, and other computing devices in its layered structure, which helps to store and process end-user requests at the edge itself. CISCO introduced fog computing to support end users facing obstacles while accessing cloud data centers [1]. As its name indicates, fog is near the
M. Kaur (✉)
Lovely Professional University, Jalandhar, India
e-mail: k.mandeep@chitkara.edu.in
Present Address:
Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura,
Punjab, India
R. Aron
SVKM’s Narsee Monjee Institute of Management Studies (NMIMS) University, Mumbai,
Maharashtra, India
e-mail: rajni@nmims.edu
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Majhi et al. (eds.), Distributed Computing and Optimization Techniques, Lecture Notes in Electrical Engineering 903, https://doi.org/10.1007/978-981-19-2281-7_16
end surface where all the Internet of Things devices communicate and generate data. The amount of data increases daily and needs proper storage and processing. Hence, fog computing provides a layer near end devices to cope with the high-latency problem. Fog computing also helps to execute scientific workflow tasks. Workflow systems help manage different resources, and they can be used in fog computing to manage scientific workflow tasks. Workflows are also defined as Directed Acyclic Graphs (DAGs), which contain vertices and edges, where vertices denote the different tasks to be executed and edges show the relationships between these tasks [2].
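As a toy illustration of this DAG view, a workflow can be stored as an adjacency map and traversed in dependency order; the task names and edges below are made up for illustration and do not come from any particular scientific workflow:

```python
from collections import deque

def topological_order(tasks):
    """Return tasks in an order that respects the DAG's dependencies (Kahn's algorithm).

    `tasks` maps each task name to the list of tasks that depend on it (its outgoing edges).
    """
    # Count incoming edges for every vertex
    indegree = {t: 0 for t in tasks}
    for children in tasks.values():
        for c in children:
            indegree[c] += 1
    # Start from tasks with no unfinished predecessors
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in tasks[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order

# A tiny illustrative workflow: extract -> (align, filter) -> merge
workflow = {
    "extract": ["align", "filter"],
    "align": ["merge"],
    "filter": ["merge"],
    "merge": [],
}
print(topological_order(workflow))  # → ['extract', 'align', 'filter', 'merge']
```

A scheduler can hand tasks to fog nodes in this order, since every task appears only after all of its predecessors.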
Workflow applications such as scientific tasks, face recognition, and sentiment analysis are complex tasks that increase complexity in the fog environment. Workflows contain dependent tasks: first, available resources are found, and then tasks are assigned for execution. Due to the complexity of workflow tasks, resources can be wasted, resulting in more energy consumption [3]. Workflow scheduling is considered an NP-complete problem, which weighs time and cost parameters while running the tasks [4]. With the distributed nature of fog computing, fog nodes are deployed near the end devices. A few examples of scientific workflows are CyberShake, LIGO, SIPHT, Genome, and Montage. The SIPHT and Genome workflows have been considered for evaluating the proposed approach. SIPHT searches for sRNA-encoding genes across the bacterial replicons catalogued at the National Center for Biotechnology Information; it helps to collect biological information [5, 6]. The Genome workflow processes data related to antimicrobial resistance, pathogen identity, and genetic information [7, 8].
1.1 Load Balancing at Fog Layer
Implementing load balancing for workflow tasks also means conserving energy in fog resources. If workflow tasks are unevenly distributed among fog nodes, more resources may be required while others remain underutilized. So, to conserve the energy consumed by fog resources, load balancing is a must in the fog computing layer. The task of the load balancer at the fog layer is to distribute the workload equally across all fog resources. Workload distribution helps achieve efficient utilization of resources [3, 5, 9].
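As a minimal sketch of equal workload distribution (the task lengths and node count below are hypothetical, and this greedy heuristic is only one way to approximate it), each incoming task can be assigned to the currently least-loaded fog node:

```python
import heapq

def balance(tasks, num_nodes):
    """Greedily assign each task to the currently least-loaded fog node.

    `tasks` is a list of task lengths (e.g. instruction counts);
    returns the list of tasks placed on each node.
    """
    # Min-heap of (accumulated load, node id): the root is always the least-loaded node
    heap = [(0.0, n) for n in range(num_nodes)]
    assignments = {n: [] for n in range(num_nodes)}
    for t in sorted(tasks, reverse=True):  # placing the largest tasks first tightens the balance
        load, node = heapq.heappop(heap)
        assignments[node].append(t)
        heapq.heappush(heap, (load + t, node))
    return assignments

loads = balance([40, 10, 30, 20, 25, 15], num_nodes=3)
print({n: sum(ts) for n, ts in loads.items()})  # → {0: 50, 1: 45, 2: 45}
```

The per-node totals end up within a few units of each other, which is the property the fog-layer load balancer is after: no node overloaded, no node idle.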
1.2 Our Contribution
This article contributes the following:
1. It proposes a fog computing architecture for maximum resource utilization in scientific workflow applications.
2. It proposes a load balancing approach for a fog computing environment that aims to reduce energy consumption in fog resources while executing large and complex scientific workflow tasks.
The remainder of this article is organized as follows. Section 2 reviews the existing literature on fog computing. Section 3 proposes a fog computing architecture implementing load balancing for maximum resource utilization; this section also contains the proposed EE-LB algorithm, whose simulation results are presented in Sect. 4. Section 5 concludes the article and provides future scope.
2 Literature Review
This section reviews the literature on dynamic resource allocation and load balancing in workflows. Many research works have provided different techniques for scheduling workflows, but load balancing still needs to be explored. The literature review has been classified into the two categories described below:
2.1 Resource Allocation in Fog Computing
Li et al. [3] proposed a load-balancing-based workflow scheduling model for resource allocation in the cloud environment. Their system model reduces response time and energy consumption while executing scientific workflow applications. The proposed workflow scheduling approach is based on the shortest-path technique. The authors developed a social media application and considered a live video workflow application to implement their proposed scenario. Naha et al. [5] proposed a linear-regression method for energy-aware resource allocation. The authors minimize failures that occur due to energy constraints in the fog environment. Furthermore, they proposed an energy-aware framework to execute different applications in fog. The proposed approach has been compared with other existing techniques and reduces execution and processing time.
Rehman et al. [10] proposed a "Dynamic Energy Efficient Resource Allocation strategy (DEER)" for maximum resource utilization by implementing load balancing in the fog environment. The approach has been executed in a simulation environment, and the obtained results are compared with another approach in terms of cost and energy consumption: energy consumption improved by 8.67% and computational cost by 16.77%. Xu et al. [11] proposed a "Dynamic Resource Allocation Strategy (DRAM)" for fog computing to obtain maximum load balancing in the fog environment. The major steps involved in DRAM are fog operation partitioning, detecting available nodes, and dynamic resource allocation to local and global users. Maximum resource utilization has been obtained by implementing load balancing, but the energy consumption of nodes in the fog environment is not given much attention.
2.2 Load Balancing in Fog Computing
Rizvi et al. [4] reduced computational cost and execution time by executing a workflow scheduling policy, i.e., a fair budget policy. The proposed policy has been evaluated using different workflow applications, and its results compared with the execution time and energy consumption of other approaches. The authors also performed an ANOVA test on their proposed strategies. Kaur et al. [12] proposed an equal-distribution-of-workload-based load balancing approach for the fog computing environment. The authors used the Cloud Analyst tool to evaluate their proposed system and compared its results with the existing round-robin and throttled load balancing approaches. The main motive of the proposed algorithm is to enhance resource utilization and reduce implementation cost.
Kaur et al. [16] proposed an energy-aware approach for load balancing in the fog computing environment. The authors considered scientific workflow applications (Genome, CyberShake) for evaluating their approach, and simulation results were obtained using the iFogSim simulation environment. However, they considered only a few fog nodes; since energy consumption grows with the number of fog nodes, the approach may not scale to larger workloads. Table 1 shows the comparison of existing approaches.
3 Proposed Fog Computing Architecture for Maximum
Resource Utilization
Fog computing acts as a middle layer between the IoT and cloud layers. The traditional fog computing architecture brings cloud services from the core to the network edge [2]. The fog architecture provided in this article incorporates load balancing in the middle layer. This section presents a fog computing architecture for workload balancing in workflows. In a distributed fog environment, fog nodes are deployed near the end users. In the proposed architecture, load balancing has been applied in the fog computing layer so that workflow tasks are evenly distributed to all fog resources. Figure 1 shows the fog computing architecture.
Figure 1 shows a fog computing architecture containing three layers, i.e., the end-user, fog, and cloud layers. The end-user layer is connected to the nearby deployed fog nodes. Users submit their workflow tasks to the fog nodes. Fog nodes have nano data centers, which store and process user requests locally. Each fog node is connected to the central controller, which controls all fog nodes. The central controller first schedules the tasks into the fog nodes' local queues. The tasks can be executed in any order, e.g., First Come First Serve (FCFS), shortest path first, etc. Then tasks are forwarded to the load balancer, which keeps track of all available and utilized fog nodes. Energy is measured as the electric power consumed by the various resources. Idle VMs also consume energy along with the overloaded
Table 1 Comparison of various existing approaches

| Author | Year | Purpose of work | Type of network | Tool used | Application | Research gap |
|---|---|---|---|---|---|---|
| Naha et al. [5] | 2021 | Proposed an energy-aware approach based on multiple linear regression for load balancing in fog | Fog | CloudSim | Time-sensitive applications | Fog nodes can be clustered to enhance system performance |
| Mokni et al. [7] | 2021 | Proposed a hybrid multi-agent approach for the cloud-fog environment to schedule IoT task workflows | Cloud-Fog | CloudSim | IoT application | Does not consider energy consumption in cloud-fog |
| Davami et al. [15] | 2021 | Proposed a high-level architecture for scheduling multiple workflows | Fog | Architecture tradeoff analysis method (ATAM) | Scientific workflow | Does not consider load balancing to distribute workloads equally |
| Kaur et al. [16] | 2020 | Proposed an energy-aware load balancing approach | Fog-Cloud | iFogSim | Scientific workflows | A small number of fog nodes is considered |
| Hameed et al. [17] | 2021 | Proposed a dynamic clustering approach for a vehicular system | Vehicular ad hoc network (fog) | NS2 | Realistic vehicular network | Security of fog nodes could also be considered |
VMs. Hence, to optimize energy consumption in fog nodes, proper load balancing is required so that no VM remains underutilized and no VM becomes overloaded. The load balancer distributes these tasks equally among the VMs. Workflow tasks are executed by fog nodes, and load balancing helps to improve the utilization of all the resources in the fog layer. With the enhancement in resource utilization, the energy consumption of fog nodes can be reduced, which helps to reduce implementation cost. The fog layer is connected to the cloud layer above it. The cloud layer has large data centers with large storage and computing capacity that take data from the fog layer and store it for future use.
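The control flow just described, with local FCFS queues feeding a load balancer that fills the least-loaded VM, can be sketched as follows; the class, method, and VM names are illustrative and not part of iFogSim:

```python
from collections import deque

class CentralController:
    """Sketch of the fog-layer control flow: per-node FCFS queues plus a load balancer."""

    def __init__(self, num_nodes):
        self.queues = [deque() for _ in range(num_nodes)]  # one FCFS queue per fog node

    def submit(self, task, node):
        self.queues[node].append(task)  # schedule the task at the node's local queue

    def dispatch(self):
        """Load balancer step: drain the queues in FCFS order, handing each task
        (here just a length) to whichever VM currently carries the least load."""
        vm_loads = {"vm0": 0, "vm1": 0}
        placement = []
        for q in self.queues:
            while q:
                task = q.popleft()              # FCFS order within each node
                vm = min(vm_loads, key=vm_loads.get)
                vm_loads[vm] += task
                placement.append((task, vm))
        return placement, vm_loads

ctrl = CentralController(num_nodes=2)
ctrl.submit(5, node=0)
ctrl.submit(3, node=0)
ctrl.submit(4, node=1)
placement, vm_loads = ctrl.dispatch()
print(placement, vm_loads)  # → [(5, 'vm0'), (3, 'vm1'), (4, 'vm1')] {'vm0': 5, 'vm1': 7}
```

The point of the sketch is the separation of concerns: the controller only queues tasks per node, while the balancing decision happens in one place that sees the load on every VM.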
Fig. 1 Fog computing architecture implementing load balancing for maximum resource-utilization
3.1 Energy-Efficient Load Balancing Approach (EE-LB)
In a fog computing environment, when large computational workflow applications are executed, the demand for resources also increases, and with the increased resource requirement there is more energy consumption. For efficient energy consumption in the fog environment, an energy-efficient load balancing approach is needed so that no energy is wasted and resource utilization can be increased. This section provides the proposed energy-efficient load balancing approach for the fog computing environment: a hybrid load balancing approach based on the Simulated Annealing and Water Cycle optimization approaches. The optimization approaches used in this work are described as follows:
Simulated Annealing Algorithm (SAA). SAA has been used for intra-cluster mapping of tasks onto fog nodes. The energy consumption in fog clusters has been analyzed using SAA. SAA has a large margin for error control, so it has been used to find a global solution for scientific workflows.
Water Cycle Optimization (WCO). WCO works on the basis of the natural water cycle process and has been used in this work when an optimized solution has not been found with SAA. WCO reduces the energy consumption and cost of intra-cluster resource mapping.
Both optimization techniques work together in hybrid form to enhance performance and reduce the energy consumption and computational cost of fog nodes.
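Since the printed pseudocode did not survive extraction here, the following is a simplified, hypothetical sketch of how such an SAA-then-WCO hybrid could map tasks to fog nodes. The energy model (load imbalance as a proxy for energy), the parameters, and the fallback threshold are illustrative assumptions, not the authors' exact algorithm:

```python
import random

def energy(assign, task_len, num_nodes):
    """Proxy energy: squared imbalance of per-node load (lower is better)."""
    loads = [0.0] * num_nodes
    for t, n in enumerate(assign):
        loads[n] += task_len[t]
    mean = sum(loads) / num_nodes
    return sum((l - mean) ** 2 for l in loads)

def saa(task_len, num_nodes, iters=2000, temp=100.0, cool=0.995, seed=1):
    """Simulated annealing: move one task to a random node; accept worse
    moves with a probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    assign = [rng.randrange(num_nodes) for _ in task_len]
    e = energy(assign, task_len, num_nodes)
    best, best_e = list(assign), e
    for _ in range(iters):
        t = rng.randrange(len(task_len))
        old = assign[t]
        assign[t] = rng.randrange(num_nodes)
        new_e = energy(assign, task_len, num_nodes)
        if new_e <= e or rng.random() < 2.718281828 ** (-(new_e - e) / max(temp, 1e-9)):
            e = new_e
            if e < best_e:
                best, best_e = list(assign), e
        else:
            assign[t] = old  # reject the move
        temp *= cool
    return best, best_e

def wco_refine(task_len, num_nodes, streams=8, iters=200, seed=2):
    """Water-cycle-style refinement: candidate 'streams' flow toward the best
    solution (the 'sea'), with occasional evaporation/random restart."""
    rng = random.Random(seed)
    pop = [[rng.randrange(num_nodes) for _ in task_len] for _ in range(streams)]
    for _ in range(iters):
        pop.sort(key=lambda a: energy(a, task_len, num_nodes))
        sea = pop[0]
        for s in pop[1:]:
            for t in range(len(task_len)):
                if rng.random() < 0.5:   # flow: copy part of the sea's assignment
                    s[t] = sea[t]
            if rng.random() < 0.1:       # evaporation: restart a stream at random
                for t in range(len(task_len)):
                    s[t] = rng.randrange(num_nodes)
    pop.sort(key=lambda a: energy(a, task_len, num_nodes))
    return pop[0], energy(pop[0], task_len, num_nodes)

def ee_lb(task_len, num_nodes, threshold=25.0):
    """Hybrid EE-LB sketch: try SAA first; fall back to WCO if the mapping
    is not balanced enough (threshold is an illustrative cut-off)."""
    assign, e = saa(task_len, num_nodes)
    if e > threshold:
        assign, e = wco_refine(task_len, num_nodes)
    return assign, e

tasks = [12, 7, 25, 9, 14, 20, 5, 8]   # hypothetical task lengths
assign, imbalance = ee_lb(tasks, num_nodes=4)
print(assign, imbalance)
```

In this sketch the two stages share one objective function, so WCO genuinely refines what SAA left off rather than optimizing a different target.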
4 Result Analysis and Discussion
This section provides the results obtained by evaluating the proposed approach in the iFogSim environment. It is divided into subsections covering the parameters considered for comparing the obtained results, the experimental requirements, and the results, which are shown in graph form in the later subsection.
4.1 Parameters Considered
The proposed approach has been evaluated based on two performance parameters, i.e., computational cost and energy consumption. These parameters are explained as follows:
Computational Cost: Computational cost can be calculated in terms of the maintenance cost of fog nodes as well as cloud nodes. Sometimes only a few nodes are utilized and the others remain idle, but they also require maintenance. Hence, maintenance cost can be calculated using the following equation:

Cost = C_r^{fog} + C_r^{cloud} + R_{(c+f)}    (1)

Equation (1) calculates the computational cost in the fog environment as the sum of the total cost of fog-layer resources C_r^{fog}, the total cost of cloud-layer resources C_r^{cloud}, and the total available resources at the fog and cloud layers R_{(c+f)}.
Energy Consumption: When tasks are assigned to resources for processing, the resources consume energy while executing these tasks. Sometimes, in the case of large computational tasks, a few resources get more tasks to execute while the others remain idle. All resources consume energy, whether they are in execution mode or idle [7]. Hence, energy consumption can be calculated as follows:

EnergyConsumption (E_c) = E_{idle}^{fog,cloud} + E_{utilized}^{fog,cloud} + R_{max}    (2)

Equation (2) calculates the energy consumption (E_c) in the fog environment as the sum of the energy consumed by all idle and utilized fog and cloud resources, E_{idle}^{fog,cloud} and E_{utilized}^{fog,cloud}, plus the maximum number of available resources (R_{max}).
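Under hypothetical unit values, Eqs. (1) and (2) reduce to straightforward sums; the numbers below are placeholders for illustration only, not measurements from the paper:

```python
def computational_cost(c_fog, c_cloud, r_total):
    """Eq. (1): fog resource cost + cloud resource cost + available-resources term."""
    return c_fog + c_cloud + r_total

def energy_consumption(e_idle, e_utilized, r_max):
    """Eq. (2): idle-resource energy + utilized-resource energy + max-resources term."""
    return e_idle + e_utilized + r_max

# Placeholder values, purely illustrative
print(computational_cost(c_fog=1200.0, c_cloud=800.0, r_total=50))   # → 2050.0
print(energy_consumption(e_idle=300.0, e_utilized=900.0, r_max=20))  # → 1220.0
```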
4.2 Experimental Requirement
Table 2 describes the experimental requirements used for executing the proposed approach. The proposed EE-LB approach has been evaluated using the iFogSim simulation tool with the Genome and SIPHT workflow applications. All the requirements are listed in Table 2.
4.3 Experimental Results
The experimental results obtained after evaluating the proposed EE-LB approach are shown as graphs, compared with other existing approaches, i.e., DEER [10], DRAM [11], EA-LB [16], and SRFog [18], on the basis of the parameters described in Table 3.
Figure 2 shows the simulation results obtained by executing the proposed energy-efficient approach (EE-LB) and comparing it with the other existing approaches. The graphs show that EE-LB outperforms the other approaches, reducing cost and energy consumption while evaluating the considered workflows.
Table 2 Experimental requirements

| Requirement | Value |
|---|---|
| Simulator | iFogSim |
| Operating system | Windows 10, 64-bit |
| Fog nodes | 20–140 |
| Workload | 100–1000 tasks |
Table 3 Comparison of proposed approach (EE-LB) with other approaches

| Parameter | DEER | DRAM | EA-LB | SRFog | EE-LB |
|---|---|---|---|---|---|
| Simulation environment | CloudSim | CloudSim | iFogSim | Kubernetes | iFogSim |
| Number of nodes | 500–2000 resources | 20–140 | 20 fog nodes | 2–7 nodes | 140 fog nodes |
| Number of workloads | N numbers | 500–1000 | 100–200 | 6 user requests | 100–1000 |
| Network type | Fog | Fog | Fog | Fog | Fog |
| Energy consumption (kJ) | 1002.51397 | 756.345 | 935.321 | 1134.453 | 675.66 |
| Computational cost ($) | 7000–8000 | 8000–9000 | 5500–6000 | 4000–4500 | 2500–3000 |
Fig. 2 Cost analysis of EE-LB compared with other approaches (cost in dollars vs. number of resources per cluster, for EA-LB, DEER, SRFog, DRAM, and EE-LB)
Fig. 3. Energy Consumption analysis of EE-LB by comparing with other approaches
Figure 3 shows the simulation results obtained by evaluating the considered workflows in the proposed framework. The graphs show that energy consumption increases with the number of fog nodes, and that the proposed EE-LB approach reduces energy consumption compared to the other approaches.
5 Conclusion and Future Scope
Load balancing in scientific workflows is necessary to fully utilize the resources at the fog layer. This article provides an architecture for fog computing that implements load balancing in scientific workflows. Furthermore, it reviews the existing load balancing and scheduling techniques for workflows and provides a comparison between them. Different types of existing scientific workflows have been evaluated in iFogSim by applying the proposed EE-LB approach, and the results are compared with EA-LB, DEER, SRFog, and DRAM. It has been observed that EE-LB reduces computational cost by 28% and energy consumption by 35% compared to the other approaches. In the future, QoS parameters in the fog environment need to be explored further.
References
1. Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of
things. In: Proceedings of the first edition of the MCC workshop on Mobile cloud computing,
pp 13–16
2. Ding R, Li X, Liu X, Xu J (2018) A cost-effective time-constrained multi-workflow scheduling
strategy in fog computing. In: International conference on service-oriented computing.
Springer, Cham, pp 194–207
3. Li C, Tang J, Ma T, Yang X, Luo Y (2020) Load balance based workflow job scheduling
algorithm in distributed cloud. J Netw Comput Appl 152:102518
4. Rizvi N, Ramesh D (2020) Fair budget constrained workflow scheduling approach for
heterogeneous clouds. Clust Comput 23(4):3185–3201
5. Naha RK, Garg S, Battula SK, Amin MB, Georgakopoulos D (2021) Multiple linear regression-
based energy-aware resource allocation in the fog computing environment. arXiv preprint
arXiv:2103.06385
6. De Maio V, Kimovski D (2020) Multi-objective scheduling of extreme data scientific workflows
in Fog. Future Gener Comput Syst 106:171–184
7. Mokni M et al (2021) Cooperative agents-based approach for workflow scheduling on fog-cloud
computing. J Amb Intell Hum Comput:1–20
8. Ahmad Z et al (2021) Scientific workflows management and scheduling in cloud computing:
taxonomy, prospects, and challenges. IEEE Access 9:53491–53508
9. Singh SP (2021) An energy efficient hybrid priority assigned laxity algorithm for load balancing in fog computing. Sustain Comput Inform Syst:100566
10. Rehman AU et al (2020) Dynamic energy efficient resource allocation strategy for load
balancing in fog environment. IEEE Access 8:199829–199839
11. Xu X et al (2018) Dynamic resource allocation for load balancing in fog environment. Wirel
Commun Mob Comput 2018
12. Kaur M, Aron R (2020) Equal distribution based load balancing technique for fog-based cloud
computing. In: International conference on artificial intelligence: advances and applications
2019. Springer, Singapore, pp 189–198
13. Shahid MH, Hameed AR, Islam S, Khattak HA, Din IU, Rodrigues JJPC (2020) Energy and
delay efficient fog computing using caching mechanism. Comput Commun 154:534–541
14. Kaur A et al (2020) Deep-Q learning-based heterogeneous earliest finish time scheduling
algorithm for scientific workflows in cloud. Softw Pract Exp
15. Davami F et al (2021) Fog-based architecture for scheduling multiple workflows with high
availability requirement. Computing 1–40
16. Kaur M, Aron R (2020) Energy-aware load balancing in fog cloud computing. Mater Today
Proc
17. Hameed AR et al (2021) Energy- and performance-aware load-balancing in vehicular fog computing. Sustain Comput Inform Syst 30:100454
18. dos Santos P, Pedro J et al (2021) SRFog: a flexible architecture for virtual reality content
delivery through fog computing and segment routing. In: IM2021, the IFIP/IEEE symposium
on integrated network and service management
... Fog computing is an extension of cloud computing that brings computing and data storage closer to the network's edge. It enables data to be processed and stored closer to the devices that generate it, improving latency and reducing the need to move data over the network [5], [9]. IoT (Internet of Things) is a network of connected physical devices, sensors and other items embedded with electronics and software that enable these objects to collect and exchange data. ...
... This helps in reducing communication overhead as it uses group and broadcast access control. In [19,20], a novel architecture for content distribution has been proposed. Content search and delivery has been improved using this architecture. ...
Article
Full-text available
Content centric networking is one of the emerging paradigm for inter-vehicle communication that focuses on the contents being shared within the network. In this paper, we have proposed a content-centric vehicular network (CCVN) for faster access of contents. We have also proposed a hybrid encryption scheme that uses Advanced encryption standard (AES-128) and digital signature to secure the content of CCVN during inter-vehicle communication. In the results, the proposed encryption scheme minimises the computational cost for encryption and digital signature verification, and handles various security attacks like integrity, replay, and key-guessing efficiently. We have also performed the performance evaluation of CCVN with other techniques. Here, the obtained results represents that the proposed CCVN architecture reduces the overall time-delay in the network.
Chapter
Full-text available
Water scarcity and environmental concerns have become pressing issues in the modern world, necessitating innovative approaches to water management. Global issues including water scarcity and environmental concerns now require creative and sustainable approaches to managing water resources. This chapter will examine how the internet of things (IoT) and cutting-edge technologies like machine learning (ML) are revolutionizing the way that water management is done. In this chapter, the effective uses of machine learning in water resource analysis will be examined. Forecasting water demand requires the use of ML algorithms, which help water managers predict consumption trends with accuracy. Predictive analytics can also be used to evaluate the distribution and availability of water, providing information on how to allocate and optimize water resources. The chapter concluded with revolutionary potential of machine learning and the internet of things in modernizing water management practices globally.
Chapter
Full-text available
Fog computing originated from cloud computing. It provides a distributed environment for providing services to mobile users having heterogeneous configurations. Microservice is an efficient mechanism to subdivide the tasks of an application in a distributed environment. Load balancing in microservice environment is a challenge with heterogeneous fog devices and end devices. As it is important to make sure neither of the fog nodes should be overloaded nor underloaded to utilise the potential of the fog environment. In this paper, a novel approach for handling the load in a microservice-based healthcare application in fog environment has been presented. Load balancing is performed on the basis of parameters which are getting updated after the successful execution of requested microservice which will serve as a feedback mechanism to handle failures in microservices while ensuring load balancing in fog environment. The proposed technique, when Simulated on iFogSim2 simulator resulted in considerable better throughput and reduced execution time which are key parameters for determining the performance of load balancing algorithm.KeywordsLoad balancingFogCloudHealthcareMicroserviceiFogSim2Fog healthcare
Article
Full-text available
The fog-assisted cloud computing gives better quality of service (QoS) to Internet of things (IoT) applications. However, the large quantity of data transmitted by the IoT devices results in the overhead of bandwidth and increased delay. Moreover, large amounts of data transmission generate resource management issues and decrease the system’s throughput. This paper proposes the optimized task scheduling and preemption (OSCAR) model to overcome the limitations and improve the QoS. The dataset used for the study is a real-time crowd-based dataset which provides task information. The processes involved in this paper are as follows: (i) Initially, the tasks from the IoT devices are clustered based on the priority and deadline by implementing expectation–maximization (EM) clustering to decrease the computational complexity and bandwidth overhead. (ii) The clustered tasks are then scheduled by implementing a modified heap-based optimizer based on the QoS and service level agreement (SLA) constraints. (iii) Distributed resource management is performed by allocating resources to the tasks based on multiple constraints. The categorical deep Q network is the deep reinforcement learning model is implemented for this purpose. The dynamic nature of tasks from the IoT devices is addressed by performing preemption of tasks using the ranking method, where the tasks with higher priority, with a short deadline replaces less priority task by moving it into the waiting queue. The proposed model is experimented with in the iFogsim simulation tool and evaluated in terms of average response time, loss ratio, resource utilization, average makespan time, queuing waiting time, percentage of tasks satisfying the deadline and throughput. The proposed OSCAR model outperforms the existing model in achieving the QoS and SLA with maximal throughput and reduced response time.
Article
Full-text available
Cloud computing provides solutions to a large number of organizations in terms of hosting systems and services. The services provided by cloud computing are broadly used for business and scientific applications. Business applications are task oriented applications and structured into business workflows. Whereas, scientific applications are data oriented and compute intensive applications and structured into scientific workflows. Scientific workflows are managed through scientific workflows management and scheduling systems. Recently, a significant amount of research is carried out on management and scheduling of scientific workflow applications. This study presents a comprehensive review on scientific workflows management and scheduling in cloud computing. It provides an overview of existing surveys on scientific workflows management systems. It presents a taxonomy of scientific workflow applications and characteristics. It shows the working of existing scientific workflows management and scheduling techniques including resource scheduling, fault-tolerant scheduling and energy efficient scheduling. It provides discussion on various performance evaluation parameters along with definition and equation. It also provides discussion on various performance evaluation platforms used for evaluation of scientific workflows management and scheduling strategies. It finds evaluation platforms used for the evaluation of scientific workflows techniques based on various performance evaluation parameters. It also finds various design goals for presenting new scientific workflow management techniques. Finally, it explores the open research issues that require attention and high importance.
Article
Full-text available
Connected objects in the Internet of Things (IoT) domain are widespread everywhere. They interact with each other and cooperate with their neighbors to achieve a common goal. Most of these objects generate a huge amount of data, often requiring a process under strict time constraints. Being motivated by the question of optimizing the execution time of these IoT tasks, we remain aware of the sensitivity to latency and the volume of data generated. In this article, we propose a hybrid Cloud-Fog multi-agent approach to schedule a set of dependent IoT tasks modeled as a workflow. The major advantage of our approach is to allow to model IoT workflow planning as a multi-objetif optimization problem in order to create a compromise planning solution in terms of response time, cost and makespan. In addition to taking into account data communications between workflow tasks, during the planning process, our approach has two other advantages: (i) maximizing the use of Fog Computing in order to minimize response time, and (ii) the use of elastic cloud computing resources at minimum cost. The implementation of the MAS-GA (Multi-Agent System based Genetic Algorithm), which we have proposed in this context; the series of experiments carried out on different corpora, as well as the analysis of the found results confirm the feasibility of our approach and its performance in terms of cost which represents an average gain of 21.38% compared to Fog and 25.49% compared to Cloud, makespan which represents a gain of 14.13% compared to Fog and a slight increase of 5.24% compared to Cloud and in response time which represents an average gain of 46.66% compared to Cloud with a slight increase of 6.66% compared to Fog, while strengthening the collaboration between Fog computing and Cloud computing.
Article
Full-text available
Given the significant development of the Internet of Things (IoT) in recent years, as well as the growing need for data around the world, cloud computing alone is not able to manage this volume of data. Accordingly, fog computing has become a popular paradigm for further data analysis in close proximity to the devices generating the data, processing it instantly in order to solve various problems of existing cloud-only systems. With regard to the complexity and wide variety of computational resources, such as cloud servers and fog nodes, workflow scheduling is one of the most important challenges in fog computing environments. To address this problem, this paper presents a software architecture for scheduling multiple workflows in cloud-fog environments simultaneously. Within this scheduling, workflow clustering and workflow priority are also taken into account. Architecture layers, components, and their major interactions are represented using the 4+1 architectural view model. The architecture components are proposed to meet quality attributes such as availability, reliability, recoverability, interoperability, and performance. The proposed architecture is evaluated using the Architecture Tradeoff Analysis Method (ATAM), a scenario-based technique. Compared with previous works, more scenarios and quality attributes are discussed within this evaluation, in addition to the clustering and prioritizing of workflows.
Article
Full-text available
The Internet of Things is a flexible, emerging technology forming a large and complex network of devices, in which fog computing plays a growing role in handling the information flow. The activities of these devices have a considerable influence on carbon emissions and energy costs. Dynamic and efficient load-balancing technology can be used to improve overall performance and reduce energy consumption, since load can be transferred or shared between computing nodes. Therefore, the design of energy-efficient load-balancing solutions for edge and fog environments has become a main focus. In this research work, we propose a Dynamic Energy Efficient Resource Allocation (DEER) strategy for balancing the load in fog computing environments. In the presented strategy, the user first submits tasks for execution to the Task Manager, and the Resource Information Provider registers resources from cloud data centres. The information about the tasks and resources is then submitted to the Resource Scheduler, which arranges the available resources in descending order of their utilization. The resource engine, after receiving the information about tasks and resources from the resource scheduler, assigns tasks to the resources as per the ordered list. During execution, the status of the resources is also sent to the Resource Load Manager and the Resource Power Manager; the Resource Power Manager manages power consumption through a resource On/Off mechanism. After successful execution of the tasks, the resource engine returns the results to the user. Simulation results reveal that the presented strategy is an efficient resource allocation scheme for balancing load in fog environments, minimizing energy consumption and computation cost by 8.67% and 16.77%, respectively, compared with the existing DRAM scheme.
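The DEER flow described above (order resources by utilization, hand tasks out over the ordered list, power off idle resources) can be sketched in a few lines. The data model and round-robin dispatch here are illustrative assumptions, not the paper's exact algorithm:

```python
def deer_assign(tasks, resources):
    """Sketch of the DEER flow: sort resources by utilization (descending),
    assign tasks over the ordered list, and report resources that received
    no work so the power manager can switch them off."""
    ordered = sorted(resources, key=lambda r: r["utilization"], reverse=True)
    assignment = {r["id"]: [] for r in ordered}
    for i, task in enumerate(tasks):
        target = ordered[i % len(ordered)]  # round-robin over the ordered list
        assignment[target["id"]].append(task)
    # Idle resources are candidates for the On/Off mechanism.
    powered_off = [r["id"] for r in ordered if not assignment[r["id"]]]
    return assignment, powered_off

tasks = ["t1", "t2", "t3", "t4", "t5"]
resources = [
    {"id": "r1", "utilization": 0.2},
    {"id": "r2", "utilization": 0.7},
    {"id": "r3", "utilization": 0.5},
]
assignment, off = deer_assign(tasks, resources)
```

With five tasks and three resources, every resource receives work, so nothing is powered off in this toy run.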
Article
Full-text available
Fog computing has emerged as an extension to the existing cloud infrastructure for providing latency-aware and highly scalable services to geographically distributed end devices. The addition of the fog layer in the cloud computing paradigm helps to improve the quality of service (QoS) in time-critical and delay-sensitive applications. Due to the continuous large-scale deployment of fog networks, energy efficiency is a significant issue in the fog computing paradigm, both to reduce service cost and to protect the environment. A plethora of research has been conducted to reduce energy consumption in fog computing, focusing mainly on the scheduling of incoming jobs to improve energy efficiency; however, node-level mechanisms have largely been neglected. Cache placement is a critical issue in fog networks for efficient content distribution to clients, requiring simultaneous consideration of many factors, including the quality of the network connection, the demand for contents, and users' activities. In this paper, a popularity-based caching mechanism for content delivery fog networks is proposed. In this context, two energy-aware mechanisms, i.e., content filtration and load balancing, are applied. In the proposed approach, popular contents are found using random distribution and categorized into three classes. After finding the file popularity, an active fog node is selected based on the number of neighbors, energy level, and operational power, and the popular content is cached on the active node using a filtration mechanism. Moreover, a load-balancing algorithm is proposed to increase the overall system efficiency in the cached fog network. The evaluation of the proposed approach exhibits promising results in terms of energy consumption and latency: the proposed scheme consumes 92.6% and 82.7% less energy than the without-caching and simple-caching mechanisms, respectively. Similarly, an improvement of 85.29% and 67.4% in delay is also observed when using advanced caching against the without-caching and simple-caching techniques, respectively.
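Two steps of this abstract lend themselves to a compact sketch: bucketing content into three popularity classes and picking the active caching node from its neighbors, energy level, and operational power. The thresholds and the linear scoring rule below are illustrative assumptions, not the paper's formulas:

```python
def classify(requests):
    """Bucket a content item into one of three popularity classes
    (thresholds are illustrative)."""
    if requests >= 100:
        return "high"
    if requests >= 10:
        return "medium"
    return "low"

def select_active_node(nodes):
    """Choose the caching node: more neighbors and more residual energy
    are better, higher operational power is worse. The additive score
    is an assumed stand-in for the paper's selection criterion."""
    return max(nodes, key=lambda n: n["neighbors"] + n["energy"] - n["power"])

nodes = [
    {"id": "f1", "neighbors": 4, "energy": 0.9, "power": 0.5},  # score 4.4
    {"id": "f2", "neighbors": 6, "energy": 0.4, "power": 0.3},  # score 6.1
    {"id": "f3", "neighbors": 5, "energy": 0.8, "power": 0.9},  # score 4.9
]
active = select_active_node(nodes)  # f2 wins on the assumed score
```

Only "high" (and possibly "medium") content would then pass the filtration step and be cached on the active node.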
Article
Full-text available
The phenomenal advancement of technology has paved the way for the execution of complex scientific applications. The emergence of the cloud provides a distributed, heterogeneous environment for the execution of large and complex workflows. Due to the dynamic and heterogeneous nature of the cloud, scheduling workflows becomes a challenging problem: mapping and assigning heterogeneous instances for each task while minimizing execution time and cost is an NP-complete problem. Efficient scheduling requires considering various QoS parameters such as time, cost, security, and reliability; among these, computation time and cost are the two most notable. In order to preserve both of these parameters in heterogeneous cloud environments, this paper proposes a fair budget-constrained workflow scheduling algorithm (FBCWS). The novelty of the proposed algorithm is to minimize the makespan while satisfying budget constraints and providing a fair schedule for every task. FBCWS also provides a mechanism to save budget by adjusting the cost-time efficient factor of the minimization problem; including this factor in the algorithm provides the flexibility to minimize the makespan or to save budget. To validate the effectiveness of the proposed approach, several real scientific workflows are simulated, and experimental results are compared with other existing approaches, namely the Heterogeneous Budget Constrained Scheduling (HBCS), Minimizing Schedule Length using Budget Level (MSBL), and Pareto Optimal Scheduling Heuristic (POSH) algorithms. Experimental results show that the proposed algorithm performs exceptionally well for compute-intensive workflows such as Epigenomic and Sipht, and that FBCWS outperforms the existing HBCS in most cases. Moreover, FBCWS proves to be more time-efficient than POSH and more cost-efficient than MSBL. The effectiveness of the proposed algorithm is further illustrated through the popular ANOVA test.
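The core decision in a budget-constrained scheduler of this kind is picking, per task, an instance that fits the task's budget share while trading time against cost. The sketch below illustrates that idea with an assumed fair per-task budget and a weighted score; `alpha` stands in for the cost-time efficient factor the abstract mentions (1.0 = pure makespan, 0.0 = pure cost), and is not FBCWS's exact formulation:

```python
def pick_instance(instances, task_budget, alpha=0.5):
    """Among instances affordable within the task's fair budget share,
    choose the one minimizing a weighted cost-time score. Falls back to
    the cheapest instance when nothing is affordable."""
    affordable = [i for i in instances if i["cost"] <= task_budget]
    if not affordable:
        return min(instances, key=lambda i: i["cost"])
    return min(affordable,
               key=lambda i: alpha * i["time"] + (1 - alpha) * i["cost"])

# Hypothetical instance catalogue: execution time and price per task.
instances = [
    {"name": "small", "cost": 1.0, "time": 10.0},
    {"name": "large", "cost": 4.0, "time": 3.0},
]
```

With a generous budget the faster (costlier) instance wins the weighted score; with a tight budget only the cheap instance is affordable, which is how the budget constraint shapes the schedule.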
Article
This article has been withdrawn at the request of the author(s) and/or editor. The Publisher apologizes for any inconvenience this may cause. The full Elsevier Policy on Article Withdrawal can be found at https://www.elsevier.com/about/our-business/policies/article-withdrawal
Article
With the introduction of fog computing, storage, computing, and networking take place near the edge devices rather than in the centralized data centres of cloud computing. Fog computing works as an assistant to cloud computing because it cannot work alone; for efficient operation, fog computing is combined with cloud computing. The more fog nodes in the network, the greater the hardware requirement, which increases energy consumption at the fog layer. For efficient processing of tasks at the fog layer, tasks need to be assigned equally across all the nodes in the fog layer; load balancing thus helps to reduce energy consumption in the fog-cloud computing environment. This article provides an energy-aware load-balancing technique for scientific workflows in a fog-cloud computing environment, and a load-balancing algorithm is proposed for the fog environment. iFogSim is used for simulation, and the results are compared with existing methods. Load balancing at the fog layer helps in the proper utilization of resources, which reduces latency and enhances the quality of service. The article concludes by providing future directions.
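The equal-distribution idea in this abstract — keep every fog node's accumulated load as level as possible so no node burns energy disproportionately — is commonly realized with least-loaded-first dispatch. The following is a minimal sketch of that pattern (task lengths and node ids are made up; this is not the chapter's exact algorithm):

```python
import heapq

def balance(tasks, node_ids):
    """Least-loaded-first dispatch: each task goes to the fog node with
    the smallest accumulated load, keeping the workload nearly equal."""
    heap = [(0.0, n) for n in node_ids]  # (accumulated load, node id)
    heapq.heapify(heap)
    placement = {n: [] for n in node_ids}
    for name, length in tasks:
        load, node = heapq.heappop(heap)       # least-loaded node so far
        placement[node].append(name)
        heapq.heappush(heap, (load + length, node))
    return placement

# Hypothetical workflow tasks as (name, expected length in MI or seconds).
tasks = [("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1)]
placement = balance(tasks, ["f1", "f2"])  # both nodes end up with load 5
```

Because no node idles while another queues work, the busiest node finishes sooner and total active-time energy drops, which is the mechanism the abstract's energy savings rely on.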
Article
An IoT-enabled cluster of automobiles provides a rich source of computational resources, in addition to facilitating efficient collaboration via vehicle-to-vehicle and vehicle-to-infrastructure communication. This is enabled by vehicular fog computing, where vehicles are used as fog nodes providing cloud-like services to the Internet of Things (IoT), and are further integrated with the traditional cloud to collaboratively complete tasks. However, efficient load management in vehicular fog computing is challenging due to the dynamic nature of the vehicular ad-hoc network (VANET). In this context, we propose a cluster-enabled, capacity-based load-balancing approach to perform energy- and performance-aware vehicular fog distributed computing for efficiently processing IoT jobs. The paper proposes a dynamic clustering approach that takes into account the position, speed, and direction of vehicles to form clusters that act as pools of computing resources. The paper also proposes a mechanism for identifying a vehicle's departure time from the cluster, which allows predicting the future position of the vehicle within the dynamic network. Furthermore, the paper provides a capacity-based load-distribution mechanism for performing load balancing at both the intra- and inter-cluster levels of the vehicular fog network. The simulation results are obtained using the NS2 network simulation environment and show that the proposed scheme achieves balanced network energy consumption, reduced network delay, and improved network utilization.
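The departure-time prediction mentioned in this abstract can be illustrated with a 1-D simplification: model a vehicle's signed gap to its cluster head as d(t) = d0 + dv·t and solve for when it leaves the communication range [-R, R]. This straight-road model is an assumption for illustration, not the paper's full mechanism:

```python
def departure_time(d0, dv, comm_range):
    """Time until the gap d(t) = d0 + dv*t leaves [-comm_range, comm_range].

    d0: current signed gap to the cluster head (|d0| <= comm_range),
    dv: relative speed (vehicle speed minus head speed),
    returns float('inf') when both move at the same speed (never departs)."""
    if dv == 0:
        return float("inf")
    boundary = comm_range if dv > 0 else -comm_range  # the edge being approached
    return (boundary - d0) / dv

# A vehicle 50 m ahead of its head, pulling away at 5 m/s, 300 m range:
t_leave = departure_time(50.0, 5.0, 300.0)  # leaves in 50 s
```

A scheduler can then avoid assigning a job to a vehicle whose predicted departure time is shorter than the job's expected completion time, which is the intuition behind using departure prediction for stable load distribution.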