IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 1
Optimal Distribution of Workloads in Cloud-Fog
Architecture in Intelligent Vehicular Networks
Mahdi Abbasi, Mina Yaghoobikia, Milad Rafiee, Mohammad R. Khosravi,
and Varun G. Menon, Senior Member, IEEE
Abstract— With the fast growth in network-connected
vehicular devices, the Internet of Vehicles (IoV) has many
advances in terms of size and speed for Intelligent Transportation
System (ITS) applications. As a result, the amount of produced
data and computational loads has increased intensely. A solution
to handle the vast volume of workload has been traditionally
cloud computing such that a substantial delay is encountered in
the processing of workload, and this has made a serious challenge
in the ITS management and workload distribution. Processing a
part of workloads at the edge-systems of the vehicular network
can reduce the processing delay while striking energy restrictions
by migrating the mission of handling workloads from powerful
servers of the cloud to the edge systems with limited computing
resources at the same time. Therefore, a fair distribution method
is required that can evenly distribute the workloads between the
powerful data centers and the light computing systems at the
edge of the vehicular network. In this paper, a kind of Genetic
Algorithm (GA) is exploited to optimize the power consumption
of edge systems and reduce delays in the processing of work-
loads simultaneously. By considering the battery depreciation,
the supporting power supply, and the delay, the proposed method
can distribute the workloads more evenly between cloud and fog
servers so that the processing delay decreases significantly. Also,
in comparison with the existing methods, the proposed algorithm
performs significantly better in both using green energy for
recharging the fog server batteries and reducing the delay in
processing data.
Index Terms—Cloud, fog, genetic algorithm, Internet of
vehicles, workload allocation.
I. INTRODUCTION
As a result of the tremendous growth in the number of
smart vehicular devices, the Internet of Vehicles (IoV)
has experienced rapid expansion. The increase in the number
of devices has caused a multiplication of data and large-scale
computation loads [1], [2]. Cloud computing has been pro-
posed as a solution to manage these loads [3]. However,
Mahdi Abbasi, Mina Yaghoobikia, and Milad Rafiee are with the
Department of Computer Engineering, Faculty of Engineering, Bu-Ali
Sina University, Hamedan 65178-38695, Iran (e-mail: abbasi@basu.ac.ir;
m.yaghoobikia@eng.basu.ac.ir; m.rafiee@alumni.basu.ac.ir).
Mohammad R. Khosravi is with the Department of Computer Engineering,
Persian Gulf University, Bushehr 75169-13817, Iran, and also with the
Department of Electrical and Electronics Engineering, Shiraz University of
Technology, Shiraz 71557-13876, Iran (e-mail: m.khosravi@sutech.ac.ir).
Varun G. Menon is with the Department of Computer Science and Engi-
neering, SCMS School of Engineering and Technology, Kochi 683582, India
(e-mail: varunmenon@scmsgroup.org).
Digital Object Identifier 10.1109/TITS.2021.3071328
Fig. 1. IoV data processing layer stack.
the time-consuming nature of the processing of workloads in
clouds is still a major issue in the field of distributed vehicular
networks [4]. Processing the workloads at the edges of the
vehicular network can reduce the processing time, but the
transmission of workloads from the data centers (which are
equipped with sustainable electric power) to the edges leads
to serious limitations in terms of supplying the required power
for computing [5], [6]. Thus, we need to achieve a balance in
distributing the processing requests between the cloud and the
edge [7].
Fig. 1 shows the layers of data processing in a cloud-fog
architecture for IoV. As can be seen in the figure, the lowest
layer contains vehicular devices that produce the data. These
devices can use their own processing resources and process
the data in positions close to the user. Although the proximity
of edge devices to the end-user remarkably reduces the delay
in request transmission and increases the response rate, these
devices have a lower processing power than the cloud. In the
next layer lie powerful routers and servers which are close to
the edge and can process the workloads without transferring
them to the cloud [8]. By moving away from the edge of the
Internet toward the data centers, the transmission delay
will increase. In the highest layer, large data centers that
provide the enhanced capability of processing and storage are
distributed as clouds around the world. As they are very far
from the end-users, these resources usually impose long delays
in the processing of requests [9]. They also consume high
amounts of electric power, whereas most edge devices can
function with small amounts of power or even with batteries.
Fog computing is a kind of distributed computing that can
replace cloud computing by using several devices near the
edge of the vehicular network (see Fig. 1). Fog computing is
more efficient than edge computing in terms of processing,
while it is less potent than cloud computing. The chief issue
in fog computing is the high costs of the required electric
power. Nowadays, a more challenging problem is providing
sustainable energy resources that can afford the long-term
energy requirements of fog nodes in IoV [10], [11]. The
processing nodes chiefly receive their required power from
rechargeable batteries [12]. As this type of power source is
extremely limited and should be frequently recharged, the use
of renewable energies as a secondary or even the only power
supply at the network edge is necessary [13]–[15]. Thus,
we need to develop a method for striking a balance in
distributing the workloads among fog nodes and cloud data
centers so that both delay and power consumption could be
optimized. As a result, the energy resources of the IoV become
more sustainable.
To achieve this goal, the present study makes use of a
genetic algorithm in finding the best distribution for the
workloads. A review of the literature indicates that few studies
have addressed the issue of finding the best cost function and
the effect of the coefficients of this function on the algorithm’s
decision-making. Given this, this study first introduces a cost
function of distribution based on two parameters, i.e., power
and delay, and then attempts to modify the coefficients corre-
sponding to these parameters according to a genetic algorithm
in order to attain the best coefficients of workload distribution
in a way that the workloads could be processed with the least
delay and the least amount of power consumption.
The genetic algorithm is a method for finding approximate
solutions to search and optimization problems. This algorithm
is considered as a kind of evolutionary algorithm due to its
use of biological concepts such as inheritance and mutation.
Genetics addresses inheritance and the transfer of attributes
from one generation to the next. In living creatures, chro-
mosomes and genes are responsible for this transfer. This
mechanism acts in a way that superior and stronger chromo-
somes will survive. The final result is that stronger creatures
would be able to survive. Genetic programming is a technique
of programming that uses genetic evolution as a model for
problem-solving. Over time, the genetic algorithm has grown
in popularity in a diversity of problems such as optimization,
image processing, topology, artificial neural network training,
and decision-making systems [16].
A genetic algorithm begins with initializing a random
population, which is composed of the possible solutions to
the problem. Each solution is a chromosome, and the entire
chromosomes form the initial population. In the first step,
the value of each chromosome in the population is specified
by the fitness function. During the execution of the algorithm,
parents with more fitness are selected for reproduction, and
the next generation is generated using genetic operators.
Crossover, mutation, and selection are the three main operators
in genetic algorithms [17].
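The loop just described — initialize a population, evaluate fitness, reproduce with crossover and mutation, select — can be sketched as a short Python program. This is a generic illustration under assumed settings (a toy fitness function, elitist truncation selection rather than the tournament selection used later in the paper), not the authors' implementation:

```python
import random

def genetic_algorithm(fitness, init_population, n_generations=100,
                      p_crossover=0.8, p_mutation=0.01):
    """Generic GA loop: evaluate, reproduce with crossover/mutation, select."""
    population = init_population()
    size = len(population)
    for _ in range(n_generations):
        children = []
        for _ in range(size):
            p1, p2 = random.sample(population, 2)
            child = list(p1)
            if random.random() < p_crossover:      # single-point crossover
                point = random.randrange(1, len(child))
                child = list(p2[:point]) + list(p1[point:])
            if random.random() < p_mutation:       # random-reset mutation
                child[random.randrange(len(child))] = random.random()
            children.append(child)
        # keep the fittest individuals (elitist truncation, for brevity)
        population = sorted(population + children, key=fitness,
                            reverse=True)[:size]
    return max(population, key=fitness)

# toy usage: maximize the gene sum of a 5-gene chromosome with genes in [0, 1)
best = genetic_algorithm(
    fitness=sum,
    init_population=lambda: [[random.random() for _ in range(5)]
                             for _ in range(20)])
```

In a workload-distribution setting, each chromosome would instead encode a candidate split of the input load between edge and cloud, and the fitness function would be the cost model of Section III.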
By using a genetic algorithm, the present paper seeks to
obtain the best value function so that we could strike a balance
between power consumption at the edge of network and delay
in the transmission of workloads as well as minimize these two
parameters. Also, we use renewable energies in the processing
and transmission of workloads at the network edge due to
the significance of sustainable energy resources in computing
tasks. Another innovation of this study is its use of renewable
energy as an input parameter in the genetic algorithm. For
this purpose, green energy is used in our proposed method to
calculate the value function of the algorithm. The main reason
for using renewable energies is the limitation of edge devices
on power consumption. As a result, these devices need to be
regularly connected to a power source for being recharged,
which limits their mobility. Also, changing the battery in IoV
devices may impose high costs and is sometimes dangerous.
For this reason, IoV devices should be able to maintain their
independence and sustainability by using green energies and
wireless charging ability [18]. In this vein, we aim to utilize
renewable energies to minimize the number of batteries at the
network edge.
The structure of the paper is as follows. Section 2 is a
review of the related literature. In Section 3, the workload
allocation model is formulated. Next, after a brief description
of the structure of the genetic algorithm, the optimization
of delay and power consumption in a cloud-fog environment
is discussed. Section 5 describes the implementation of the
proposed method and evaluates it in terms of the parameters
of the algorithm. The method is also compared with other
existing methods. Finally, some concluding remarks are made
and ideas for further research are suggested.
II. REVIEW OF LITERATURE
In recent years, many researchers have studied the method-
ologies to orchestrate the distribution of workloads and reduce
the overall processing delay in IoV. We briefly review some
of the recent studies that have investigated the optimization of
energy usage and delay reduction in fog computing.
Pioneering work has been presented by Xu and Ren [19].
In this work, they inspect the possibility of using renewable
energies as backup energy sources in mobile edge networks.
Their method uses machine learning algorithms to manage the
energy resources and distribute the computation workloads.
Their method aims to minimize the prominent costs of
processing requests, which include the processing delay and
the consumed energy. A consequence of the slow learning
mechanism in their method is its weak performance in
controlling the power consumption of edge nodes.
Unfortunately, in many of the recently proposed methods,
only one aspect of the problem is considered. That is, some
of them sacrifice the processing time for optimizing the power
consumption at edge systems, and vice versa. Hence, in any
method to be developed, the processing delay and consuming
power should be optimized simultaneously.
Regarding the abovementioned challenge, Xu et al. presented
a reinforcement learning-based method [20]. Their algorithm
was able to learn and adapt itself to any system with unknown
modeling parameters. Despite the undeniable results of their
method in achieving acceptable performance in orchestrating
the edge computations and using renewable energy sources for
mobile edge nodes, their algorithm failed to fairly distribute
the workloads among the computing nodes.
The GLOBE method of Wu et al. [12] tries to optimize
the performance of processing nodes at the network edge by
geographically balancing the distribution of loads, and at the
same time, controlling the input load of any edge nodes. This
method can handle stochastic events concerning the battery
status and power limitations. Although GLOBE is somewhat
successful in optimizing the battery energy level, it is still
far from perfect.
In 2019, Dalvand and Zamanifar [21] proposed a new
model for processing data in the Internet of things (IoT)
and developed an IoT-Fog-Cloud architecture in which the fog layer
is geographically close to the IoT edge devices. In their
system, a multi-purpose dynamic service is created to achieve
a balance between delay and resource costs. This service
is formulated as a mixed-integer linear program (MILP) and solved through weighted goal
programming. The method controls and minimizes only one
goal as a compromise between time and power consumption.
None of the above studies have been able to strike a balance
between cost and power consumption in distributing workloads
among network nodes. Our work is aimed at developing a
mechanism for the balanced distribution of workloads between
the cloud and edge nodes. This mechanism is supposed to
attain a desirable tradeoff between power consumption and
workload delay. It will make use of renewable energy sources
as the power supply for edge computations. These sources are
expected to preserve the battery charge level.
III. THE PROPOSED METHOD
Our point of departure is the fact that none of the previous
works have offered an optimum solution to reduce the costs
of fog computing. To elaborate on our proposed method and
evaluate it, a cloud-fog environment is simulated as in [10].
This environment involves four main models: workload, delay,
power consumption, and battery status. In the following,
we shall first examine these models as described in the refer-
ence and then present our proposed algorithm. Next, we will
define a new scenario to study how the algorithm functions.
The proposed scenario will be simulated for evaluation.
A. Formulation of the Problem
For the chief scenario, the edge system includes a base
station and a set of edge servers that are set geographically
close to each other. A battery with a limited capacity is used
in each computing resource at the network edge (i.e., fog
servers). The shared power supply mechanism used in the
network lets the workloads be sent to the cloud especially
when the edge servers require battery charge. The workload
sent by the users to the edge is first received by the base
station. The base station manages and decides on the amount
of workload that must be allocated to the edge or transmitted
to the cloud. The definition of the formulation parameters is
presented in Table I.
The proposed system is modeled here by considering it from
four aspects.
1) Workload: Equal time intervals t = 0, 1, 2, ... are used to
model time. The computation capacity of the edge is specified
in each time interval in terms of the number of active servers.
λ(t) ∈ [0, λ_max] is the rate at which workloads are assigned
to the edge nodes, and μ(t) is the rate by which the edge
nodes process the assigned workload. Finally, λ(t) − μ(t)
denotes the remaining part of the workload, which is
transmitted to the cloud. The number of active servers in any
time interval is m(t) ∈ [0, M]. This number may change in
different time intervals.

TABLE I
THE MAIN SYMBOLS
2) Delay: We consider three different delays in the system
model:
2-1. The delay in communicating workloads on the wireless
network, denoted c_wi(t). This delay depends on the input
load of the network (i.e., λ(t)). In our model, it is assumed
to be 0 due to the physical closeness of the active nodes.
2-2. The delay of processing workloads at local subregions
of the network edge, denoted c_lo(t). The amount of this
delay directly depends on the number of active servers, their
processing rate, and the queue-management model of each of
them. In our experiments, the M/G/1 mechanism models the
queue management in any active server running on the edge
nodes. As a result, the delay in processing at the network
edge is estimated using the following equation [22]:

c_lo(μ(t), m(t)) = μ(t) / (m(t)·k − μ(t))   (1)

In this equation, k represents the processing capacity of
each active server.
2-3. The delay in communicating the residual workload to
the cloud, denoted c_off(t), is estimated based on the
congestion status of the network. This status is represented by
h(t), and is computed by adding the round-trip time (RTT)
delay and the processing delay of the cloud. As a result, this
delay is calculated based on h(t) according to the following
equation [22]:

c_off(h(t), λ(t), μ(t)) = (λ(t) − μ(t)) · h(t)   (2)
Finally, the cost of the overall delay of the aggregate input
workload is estimated by adding the three above delays [22]:

c_delay(h(t), λ(t), μ(t), m(t)) = c_wi(λ(t)) + c_lo(μ(t), m(t)) + c_off(h(t), λ(t), μ(t))   (3)

Note that c_wi(λ(t)) is negligible.
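A direct transcription of equations (1)-(3), with the negligible wireless term dropped, may clarify how the delay cost is computed; the numeric values in the usage line are illustrative only:

```python
def c_lo(mu, m, k):
    """Eq. (1): M/G/1 processing delay cost at the edge; requires m*k > mu."""
    return mu / (m * k - mu)

def c_off(h, lam, mu):
    """Eq. (2): cost of offloading the residual workload (lam - mu) to the cloud."""
    return (lam - mu) * h

def c_delay(h, lam, mu, m, k):
    """Eq. (3): total delay cost; the wireless term c_wi is taken as 0."""
    return c_lo(mu, m, k) + c_off(h, lam, mu)

# illustrative values: 50 req/s arrive, 40 req/s processed at the edge by
# 5 servers of capacity 20 req/s each, cloud congestion factor h = 0.1
cost = c_delay(h=0.1, lam=50, mu=40, m=5, k=20)  # 40/(100-40) + 10*0.1 = 5/3
```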
3) Power Consumption: The total consumed power is composed
of two parts:
3-1. A part of the power is used for basic operations and
communicating the loads. This part, denoted d_op(t), is
independent of any load-processing operations and depends
only on the input load of the network (λ(t)). In our model,
d_op(t) is composed of two power types [22]:

d_op(λ(t)) = d_sta + d_dyn(λ(t))   (4)

Here, d_sta and d_dyn(λ(t)) represent the static and dynamic
power consumption of the network edge, respectively. The
latter varies with the input load of the network and is set to 0
in our model due to the physical closeness of the computing
nodes.

Fig. 2. Two modes of battery status.

3-2. The power required for the processing of workloads at
the edge is denoted d_com(t). To estimate this parameter,
the amount of the workload allocated to the edge (μ(t))
and the number of active edge servers (m(t)) are required.
Finally, the total required power is obtained by the following
equation [22]:

d(λ(t), μ(t), m(t)) = d_op(λ(t)) + d_com(μ(t), m(t))   (5)

In this model, g(t) denotes the renewable energy that can be
used as a power supply.
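Equations (4)-(5) can be evaluated directly. In the sketch below, the static power and per-server power are taken from the simulation parameters of Section V, and d_com is modeled simply as the number of active servers times the per-server power; that form of d_com is our illustrative assumption, not the paper's exact definition:

```python
def d_op(lam, d_sta=300.0):
    """Eq. (4): base power; the dynamic part d_dyn(lam) is 0 in this model."""
    return d_sta + 0.0 * lam

def d_total(lam, mu, m, per_server_power=150.0):
    """Eq. (5): d = d_op + d_com. Here d_com(mu, m) is approximated as the
    number of active servers m times the per-server power (an assumption
    consistent with the 150 W/server figure in Section V)."""
    return d_op(lam) + m * per_server_power

power = d_total(lam=50, mu=40, m=5)  # 300 W + 5 * 150 W = 1050 W
```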
4) Battery Status: As formerly explained, one battery
with limited charge is used to supply each active edge
server. Overall, the total battery charge at the network edge is
b(t) ∈ [0, B], where B denotes a predefined maximum
capacity. The renewable energy sources can recharge these
batteries. The initial battery level is set to 0. To control
the battery level at the network edge, we should control
the rate of processing of workloads at the edge servers.
Hence, the state of the battery is determined by the following
conditions:
D-1. When b(t) ≤ d_op(λ(t)), no processing is allowed at
the network edge. In this state, since the battery charge is not
sufficient, the whole workload λ(t) is transmitted to the cloud
and the renewable energy sources recharge the battery.
The overall cost of communicating the workload to the cloud
is calculated by the following equation [22]:

c_bak(λ(t)) = ϕ · d_op(λ(t))   (6)

where ϕ is the coefficient reflecting the cost of consuming
the supporting power supply. In the next interval, the
renewable power source will charge the battery according to
equation (7) [22]:

b(t + 1) = b(t) + g(t)   (7)

The first state in Fig. 2 illustrates this case.
D-2. If the battery level is sufficiently greater than the
power required for processing a part of the workload, that part
of the workload (μ(t)) is processed at the edge, and the
remaining part (λ(t) − μ(t)) is transmitted to the cloud. Thus,
the following equation calculates the battery level in the next
interval:

b(t + 1) = b(t) + g(t) − d(λ(t), μ(t), m(t))   (8)

The operational cost of the battery in this state is:

c_battery(t) = ω · max{d(λ(t), μ(t), m(t)) − g(t), 0}   (9)

where ω > 0 is the operating cost of one battery unit.
This is shown as the second state in Fig. 2. Based on
the above four models, the architecture of the proposed system
can be illustrated as in Fig. 3.

Fig. 3. The cloud-fog architecture.
In this figure, the set of requests λ(t) sent by the users
enters the base station. The base station is responsible for
distributing the loads between edge servers (i.e., fog servers)
and the cloud. The base station uses the evolutionary algorithm
to calculate the amount of workload that can be processed by
edge servers (μ(t)). Then, the excessive requests are
transmitted to the cloud. The transmission of workloads to the
cloud creates congestion in the network and imposes longer
delays on the loads. Therefore, the congestion is measured in
every interval (h(t)) to be taken into account in subsequent
decisions. In the meantime, renewable energy sources (g(t))
provide the power required for edge computations in each
interval. If renewable sources produce more energy than is
needed by the servers, the surplus is stored in the network
batteries b(t). Conversely, if the produced renewable energy
is inadequate, the batteries will be used.
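The battery dynamics of states D-1 and D-2 (equations (6)-(9)) can be combined into one per-interval step. The sketch below clamps the battery level to [0, B] to reflect b(t) ∈ [0, B]; the ω value and the simplified per-server power model for d(·) are illustrative assumptions:

```python
def battery_step(b, g, lam, mu, m, B=2000.0, phi=0.15, omega=0.5,
                 d_sta=300.0, per_server=150.0):
    """One interval of the battery model (eqs. (6)-(9)).
    Returns (next battery level, power-related cost for this interval)."""
    d_op = d_sta                          # eq. (4) with the dynamic part = 0
    if b <= d_op:
        # D-1: battery too low -> whole load to the cloud; backup supply
        # pays eq. (6) while the renewables recharge the battery, eq. (7)
        cost = phi * d_op
        b_next = min(b + g, B)            # clamped to the capacity B
    else:
        # D-2: process mu at the edge, send lam - mu to the cloud
        d = d_op + m * per_server         # eq. (5), simplified d_com
        cost = omega * max(d - g, 0.0)    # eq. (9)
        b_next = min(max(b + g - d, 0.0), B)  # eq. (8), clamped to [0, B]
    return b_next, cost

b1, c1 = battery_step(b=0.0, g=520.0, lam=50, mu=0, m=0)      # D-1 branch
b2, c2 = battery_step(b=1000.0, g=520.0, lam=50, mu=40, m=5)  # D-2 branch
```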
IV. USING GENETIC ALGORITHM IN THE OPTIMIZATION
OF WORKLOAD DISTRIBUTION
This section describes how a genetic algorithm can be used
to distribute the workloads more efficiently. The aim of using
a genetic algorithm is to minimize system costs. The solution
to this problem using a genetic algorithm is presented in
Algorithm 1. Below is the description of the algorithm.
In the beginning, the battery level is checked. If the battery
level is not sufficient for the basic operation, the supporting
power supply is used, and the entire input workload is
transmitted to the cloud. In this case, μ(t) = 0, and the genetic
algorithm is not executed (lines 1 and 2). However, if the
battery level is high enough for the basic operation to run,
all or part of the input load can be processed at the network
edge. In this case, the genetic algorithm is used to calculate
μ(t)(lines 3 to 29). In the first step of the genetic algorithm,
the initial population is generated (line 4). This population
consists of a set of chromosomes. Each chromosome indicates
the amount of workload that can be computed at the edge.
Next, the fitness of the initial population is calculated (line 5).
The fitness function returns a non-negative value for each
chromosome which is indicative of the individual capacity of
that chromosome to reduce the costs. The cost function [20]
can be used to calculate the fitness of a chromosome. The
proposed algorithm attempts to reduce this amount in order to
minimize system costs. Given the battery status of the system,
the cost function can be calculated in two ways:
c(t) = c_delay(h(t), λ(t), 0, 0) + c_bak(λ(t)),   if b(t) ≤ d_op(λ(t))
c(t) = c_delay(h(t), λ(t), μ(t), m(t)) + c_battery(t),   otherwise   (10)
This equation is composed of two parts: delay cost and
power cost. The following two coefficients are used for the
power cost part:
1) Battery depreciation coefficient (ω)
2) Cost coefficient of the supporting power supply (ϕ)
As the effect of delay is directly involved in the cost
function, a new coefficient called delay cost coefficient (θ) is
introduced. The proposed algorithm modifies these coefficients
to examine their effect on power consumption and workload
delay and to find the optimum state on the network.
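As a sketch, equation (10) weighted by the delay cost coefficient θ can be evaluated per chromosome as below. The branch structure follows the battery condition; the server capacity (20 req/s), static power (300 W), per-server power (150 W), and ϕ = 0.15 come from Section V, while θ, ω, and the simplified d_com are illustrative assumptions:

```python
def chromosome_cost(h, lam, mu, m, b, g, theta=0.5, phi=0.15, omega=0.5,
                    k=20.0, d_sta=300.0, per_server=150.0):
    """Eq. (10), with the delay term weighted by the delay cost
    coefficient theta introduced in the text."""
    d_op = d_sta                                     # eq. (4), dynamic part = 0
    if b <= d_op:
        delay = lam * h                              # c_delay(h, lam, 0, 0): all offloaded
        power = phi * d_op                           # c_bak(lam), eq. (6)
    else:
        delay = mu / (m * k - mu) + (lam - mu) * h   # eqs. (1)-(3)
        power = omega * max(d_op + m * per_server - g, 0.0)  # c_battery, eq. (9)
    return theta * delay + power

low = chromosome_cost(h=0.1, lam=50, mu=0, m=0, b=100.0, g=520.0)
high = chromosome_cost(h=0.1, lam=50, mu=40, m=5, b=1000.0, g=520.0)
```

A GA would minimize this value, so the fitness of a chromosome can be taken as any decreasing function of `chromosome_cost`.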
Another important genetic operator is crossover. Crossover
is used to exchange information between two chromosomes,
which accelerates convergence in the genetic algorithm. The
probability of the effectiveness of this operator lies in the range
of 0.6-0.9. This value is called a crossover rate and denoted by
Pcrossover . In this problem, two parents and a random position
in the parents’ genes are selected. Next, the genes on the right
side of the random position of the first parent and those on
the left side of the random position of the second parent are
selected to produce a new chromosome (lines 9 to 15). Another
operator is the mutation, which is responsible for producing
new information. This operator randomly changes one of the
genes of the child with a low probability, such as 0.01. The
probability of mutating any chromosome is called the mutation
rate and is denoted by Pmutation. In the proposed algorithm,
one gene from the chromosome is randomly selected and
changed (lines 16-19). In this algorithm, the number of chil-
dren produced by crossover and mutation is set by the variable
Nc. In each step of this operation, a new child is added to the
set P(line 20). Then the fitness function of the generated
population is obtained by crossover and mutation operators as
was done for the initial population (line 22).
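The two operators just described can be sketched directly: single-point crossover takes the genes to the left of the random position from the second parent and those to the right from the first, and mutation randomly resets one gene. The gene value range is a placeholder assumption:

```python
import random

def crossover(parent1, parent2, point):
    """As described in the text: left part from parent2, right part from parent1."""
    return parent2[:point] + parent1[point:]

def mutate(chromosome, low=0.0, high=1.0):
    """Randomly reset one gene; the gene range [low, high] is an assumption."""
    child = list(chromosome)
    child[random.randrange(len(child))] = random.uniform(low, high)
    return child

child = crossover([1, 1, 1, 1], [2, 2, 2, 2], point=1)  # -> [2, 1, 1, 1]
```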
There are different methods in genetic algorithms to select
the superior chromosome and transfer it to the next generation.
One of the common methods is tournament selection [23].
In this method, two chromosomes are randomly selected from
the population. Next, a random number r between 0 and 1
Algorithm 1 Using a Genetic Algorithm in the
Optimization of the Workload Distribution
Input: λ, g, h, b, Nc, Ng, Pcrossover, Pmutation, Pselection
Output: μ
1:  if b(t) ≤ d_op(λ(t))
2:    μ(t) ← 0
3:  else
4:    P ← CreatePopulation()
5:    fitness(P)
6:    do
7:      for i ← 0, 1, ..., Nc   // crossover and mutation
8:        parent1 ← random(P)
9:        parent2 ← random(P)
10:       child ← parent1
11:       if (random() < Pcrossover)
12:         point ← random(length of chromosome)
13:         child ← crossover(parent1, parent2, point)
14:       end if
15:       if (random() < Pmutation)
16:         gene ← random(length of chromosome)
17:         child(gene) ← mutation()
18:       end if
19:       add child to P
20:     end for
21:     fitness(children created by crossover and mutation)
22:     P ← selection(P, Pselection)
23:   while fewer than Ng generations have been executed
24:   μ(t) ← best_chromosome(P)
25:   while (battery + green < PowerConsumption(μ(t)))
26:     μ(t) ← next_best_chromosome(P)
27:   end while
28: end if
is generated. If r < Pselection (where Pselection is a parameter,
e.g., 0.8), the fitter individual will be selected as the parent;
otherwise, the less fit individual will be selected. These two are
again returned to the population and involved in the selection
process. After the selection process, the selected chromosomes
are introduced as the new generation and sent to the next
iteration of the algorithm (line 23). In the proposed genetic
algorithm, the child-generation operators, such as crossover
and mutation, as well as fitness calculation and selection, are
executed Ng times, which corresponds to the number of
generations (line 24). When all generations have been
executed, the first element of the population will be put in
μ(t) as the final result (line 25).
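The tournament selection described above can be sketched as follows. `p_selection = 0.8` matches the example value in the text; the 5-gene chromosomes and `sum` fitness are placeholders:

```python
import random

def tournament_select(population, fitness, p_selection=0.8):
    """Binary tournament as described: pick two individuals at random; with
    probability p_selection the fitter wins, otherwise the weaker one does.
    Both remain in the population for later tournaments."""
    a, b = random.sample(population, 2)
    fitter, weaker = (a, b) if fitness(a) >= fitness(b) else (b, a)
    return fitter if random.random() < p_selection else weaker

# build the next generation by repeated tournaments (with replacement)
pop = [[random.random() for _ in range(5)] for _ in range(10)]
next_gen = [tournament_select(pop, fitness=sum) for _ in range(len(pop))]
```

Because losers are returned to the pool, tournament selection keeps diversity while still biasing the next generation toward fitter chromosomes.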
If the selected chromosome (which indicates the distribution
of processable load at the network edge, μ(t)) faces battery
limitations, the next chromosome in the population should be
selected. The process will continue until the power consumption
for μ(t) becomes proportional to the edge batteries (lines
26-28). At the end of the algorithm, the best value is selected
for μ(t), which, in addition to minimizing the cost of delay and
Algorithm 2 The Effect of ω and θ on the Proposed
Method
Input: λ, g, h
Output: average delay, average power consumption
1: for θ ← 0.01 to 1 step 0.01 do
2:   for ω ← 0.01 to 1 step 0.01 do
3:     GA_Algorithm(λ, g, h)
4:   end for
5: end for
power consumption, regulates power consumption according to
the level of the edge batteries.
V. IMPLEMENTATION AND EVALUATION
This section describes the implementation and evaluation of
the proposed method for optimum distribution of workloads
between the cloud and the fog. For this purpose, the evaluation
parameters of the problem, the parameters of the different
genetic operators, and the implementation environment are
examined. Next, the effect of the variations in ω and θ on the
distribution of workloads is studied and the optimum value of
these two parameters is obtained. Finally, the proposed method
with the optimum values of ω and θ is compared with other
existing methods.
A. Simulation Parameters
This section describes the simulation of a cloud-fog envi-
ronment in order to evaluate the proposed method. In this
environment, the genetic algorithm described above is used
in the base station as the distributor of workloads between the
cloud and fog servers. The simulation aims to examine the
effect of the delay cost coefficient and battery depreciation
coefficient on the fitness function as well as on the average
delay in workload transmission and the power consumption
at the network edge. To narrow down the search space in
the genetic algorithm, we assume the cost coefficient of
the supporting power supply (0.15) to be constant and only
study the variations in ω and θ. The process is shown in
Algorithm 2. According to this algorithm, with changing
values of ω and θ, the genetic algorithm runs 10000 times in
each experiment and the average energy consumption and the
delay are measured. In these experiments, 0.01 ≤ ω ≤ 1 and
0.01 ≤ θ ≤ 1, and their values are changed by 0.01 in each
experiment.
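Algorithm 2 amounts to a grid sweep over the two coefficients. A minimal sketch, with a hypothetical stub standing in for `GA_Algorithm(λ, g, h)` (the stub simply returns a (delay, power) pair so the sweep is runnable):

```python
def parameter_sweep(run_ga):
    """Algorithm 2 as a grid sweep: run the GA once for every (theta, omega)
    pair in {0.01, 0.02, ..., 1.00}, i.e., 100 x 100 = 10000 runs."""
    results = {}
    for i in range(1, 101):
        for j in range(1, 101):
            theta, omega = i / 100, j / 100
            results[(theta, omega)] = run_ga(theta, omega)
    return results

# hypothetical stub in place of GA_Algorithm(lambda, g, h);
# it returns a pair (average delay, average power consumption)
results = parameter_sweep(lambda theta, omega: (1.0 / theta, 1.0 / omega))
```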
The proposed method was examined on a system with an
8-core 1.8 GHz CPU and 12GB RAM. In the following,
we first initialize the parameters and then discuss the results.
The amount of input workload in each interval is specified
by a random number that varies uniformly between 10 and
100 requests per second. The renewable energy fluctuates
according to a normal distribution N(520 W, 150) [20].
The maximum capacity of each battery is B = 2 kWh. Also,
we assume that the initial battery charge is b(0) = 0.
The static power consumption of the base station is
d_sta = 300 W. We set the maximum number of edge servers
to M = 10. Also, each active server consumes 150 W of
electricity. The maximum processing rate of each server is
20 requests per second. We restrict the maximum number of
generations of our evolutionary algorithm to 100.
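The randomized inputs above can be drawn per interval as follows; clipping the Gaussian renewable power at 0 W is our assumption (generated power cannot be negative), and the seed is only for reproducibility:

```python
import random

def generate_inputs(n_intervals, seed=None):
    """Draw simulation inputs per the stated parameters: workload uniform in
    [10, 100] requests/s and renewable power ~ N(520, 150), clipped at 0 W."""
    rng = random.Random(seed)
    lam = [rng.uniform(10, 100) for _ in range(n_intervals)]
    green = [max(rng.gauss(520, 150), 0.0) for _ in range(n_intervals)]
    return lam, green

lam, green = generate_inputs(1000, seed=42)
```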
B. The Effect of ω and θ on Workload Distribution
In this section, the results of the experiments are presented
using graphs. Then the graphs are analyzed and, by normal-
izing the values of delay and power consumption, the best
coefficients of the fitness function to minimize the costs are
obtained.
Fig. 4 illustrates the average delay cost in different
experiments for ω and θ. As can be seen in Fig. 4(a), the increased
delay coefficient decreases the average delay cost. The rea-
son behind this decrease is the stronger effect of θon the
cost function, which the genetic algorithm seeks to reduce.
In fact, the system attempts to reduce the delay cost so that
more workloads could be processed locally. For example,
Fig. 4(b) shows the variations in the average delay depending
on the varying values of θ. In this figure, assuming a constant
coefficient of battery depreciation (ω=1), an increase in
the delay coefficient results in a decrease in the delay cost.
The most important reason behind the decrease in the delay
is the increased value of this parameter in the fitness function
as well as the processing of increased amounts of workloads
in the fog servers.
Fig. 4(c) depicts the average delay according to the variations
of ω for two constant values of θ. When θ = 0.01
(the minimum value), the majority of processes are conducted
in the cloud, and the average delay is maximized due to the
minimal effect of this parameter on the fitness function and the
decision-making. It can be seen that when the delay coefficient
is constant, increasing the battery depreciation coefficient
(ω) from 0.01 to 1 makes the power consumption part of the
cost function more significant. Therefore, the system
attempts to send more workloads to the cloud to reduce power
consumption. Consequently, with the transmission of the loads
to the cloud, the average delay begins to escalate. Also,
a comparison of the two lines depicted in the figure shows
that the average delay with θ = 0.08 is less than with
θ = 0.01, which can be explained by the increased effect of θ on
the fitness function. Given the above discussion, the smaller
the battery depreciation coefficient (ω) and the greater the
delay coefficient (θ), the lower the average delay.
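These trends follow directly from the weighted cost that the genetic algorithm minimizes. A minimal sketch, assuming the fitness is a linear combination of the three cost terms with the supporting-power coefficient fixed at 0.15 (function and argument names are illustrative, not the paper's notation):

```python
PSI = 0.15  # fixed supporting-power-supply coefficient (assumed name)

def fitness(omega, theta, battery_depreciation, delay_cost, grid_power_cost):
    """Weighted cost the GA minimizes for one candidate workload split.
    Larger theta penalizes delay more, pushing work to the fog servers;
    larger omega penalizes battery wear, pushing work to the cloud."""
    return (omega * battery_depreciation
            + theta * delay_cost
            + PSI * grid_power_cost)
```

Under this form, raising θ makes any candidate with high delay cost less fit, which is why the GA shifts workloads toward local processing as θ grows.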
Fig. 5(a) shows the average power consumption for ω and
θ in each experiment. In general, an increase in the battery
depreciation coefficient reduces power consumption. The
reason for this reduction is the increased effect of battery
depreciation on the cost function. In fact, the algorithm tries
to allocate most of the processes to the cloud to reduce
power consumption in the fog servers. Fig. 5(b) depicts the
average power consumption for three constant values of θ
according to the variations of ω. It can be observed that, as
the battery depreciation coefficient increases, the average
power consumption for θ = 0.15 and θ = 0.01 is reduced
from 550 W to 450 W. The reason for this reduction is
the system's attempt to send more workloads to the cloud
and decrease power consumption in the fog servers. As the
figure shows, at points where θ = 0.01 (i.e., the minimum
value), the majority of workloads are sent to the cloud, and
the average power consumption decreases at a higher rate to
ABBASI et al.: OPTIMAL DISTRIBUTION OF WORKLOADS IN CLOUD-FOG ARCHITECTURE 7
Fig. 4. The average delay cost based on the delay coefficient (θ) and the coefficient of battery depreciation (ω). (a) The average delay cost by changing
the coefficients ω and θ. (b) The effect of the delay coefficient (θ) on delay cost. (c) The effect of the coefficient of battery depreciation (ω) on delay cost.
Fig. 5. The average power consumption based on the delay coefficient (θ) and coefficient of battery depreciation (ω). (a) The average power consumption
by changing ω and θ. (b) The effect of the coefficient of battery depreciation (ω) on power consumption. (c) The effect of the delay coefficient (θ) on power
consumption.
achieve its final value (i.e., 450 W). It can also be concluded
that the rapid decrease in power consumption is due to the
minimal effect of delay and the stronger effect of battery
power consumption on the fitness function. Another point
to mention in this figure concerns the points at which the delay
coefficient is at its maximum, θ = 1. At these points, due to the
strong effect of delay on the cost function, the algorithm
sends the majority of processes to the fog servers so that
they are conducted locally and the power consumption
does not decrease. In this case, the battery level reaches its
maximum.
A comparison of the three lines in this graph indicates that
the average power consumption for θ = 1 is greater than for
θ = 0.15, and that for θ = 0.15 is greater than for θ = 0.01.
The high level of power consumption is due to the greater
weight of the delay part in the cost function.
Fig. 5(a) shows that, at points with a delay coefficient
greater than 0.7, the average power consumption reaches
its maximum and remains constant for every value of ω. Also,
for θ < 0.7, as the battery depreciation coefficient
increases, the system attempts to send the loads to the cloud
and decrease power consumption. As θ decreases, the power
consumption part becomes more significant, and the average
power consumption is reduced. Fig. 5(c) shows the power
consumption graph based on the variations of θ for two values
of ω. As can be seen in the figure, when the battery
depreciation coefficient is ω = 1 (its maximum), power consumption
increases as the delay coefficient increases and becomes
more significant in the cost function. In addition, when the
battery depreciation coefficient is ω = 0.01 (its minimum),
power consumption does not change with the increase in θ.
This is due to the minimal effect of the battery depreciation
coefficient on the cost function.
Given this, we seek a state in which the average
power consumption is minimized so that the least amount of
depreciation is incurred. As discussed earlier in the
formulation of the problem, green energy enters the system
according to the normal distribution N(520 W, 150).
According to Fig. 5(a), power consumption is almost equal to
the average green energy received by the system. It can thus be
concluded that this algorithm tries to distribute the workloads
in such a way that the power required for processing becomes
almost equal to the green energy, and both the battery
depreciation and the power consumption at the edge of
the network are minimized. Also, with the decrease in
power consumption at the network edge, more green energy
can be stored in the batteries.
Fig. 6(a) illustrates the network edge battery levels in
different experiments for ω and θ. With the increase in the
battery depreciation coefficient (at points where θ is less
than 0.7), more workloads are transmitted to the cloud, which
increases the battery level. Fig. 6(b) shows the battery level
for three constant values of θ. When θ = 0.15 and θ = 0.01,
with the increase in the battery depreciation coefficient (ω),
the battery level increases from 700 Wh to 2000 Wh (charging
mode). When θ = 0.01 (the minimum), due to the transmission of
all workloads to the cloud, the green energy not consumed is
stored in the batteries and the battery level rises more quickly.
However, when θ = 1 (the maximum), the loads are kept
at the network edge, leading to remarkably high
power consumption and keeping the battery level at 800 Wh.
It should be mentioned that the proposed algorithm
maintains a full battery in most cases.
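The charging behavior described here can be captured by a simple battery-evolution rule: unconsumed green energy is stored up to the battery capacity, and edge processing draws the battery down. This is an illustrative sketch under our own naming, not the paper's exact model:

```python
def next_battery_level(b, green_in, edge_power, capacity=2000):
    """Battery evolution at the network edge for one interval.
    b:          current battery level (Wh)
    green_in:   harvested green energy in this interval (Wh)
    edge_power: energy drawn by edge processing in this interval (Wh)
    capacity:   maximum battery capacity (2 kWh = 2000 Wh)."""
    surplus = green_in - edge_power   # > 0: charging, < 0: discharging
    return min(capacity, max(0.0, b + surplus))
```

When the edge power draw is kept close to the harvested green energy, the surplus is small, so the battery is barely depleted and can climb toward full charge over successive intervals.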
Fig. 6. The average battery level based on the delay coefficient (θ) and the coefficient of battery depreciation (ω). (a) The average battery level by changing
ω and θ. (b) The effect of the coefficient of battery depreciation (ω) on battery level. (c) The effect of the delay coefficient (θ) on the battery level.
Fig. 7. Normalized values of delay cost and average power consumption.
Comparing the corresponding graphs in Figs. 5 and 6 indicates
that, by sending more workloads to the cloud and
decreasing power consumption at the network edge, more
green energy can be stored in the batteries. This process
increases the battery levels at the network edge. To illustrate
this point, let us compare Figs. 5(c) and 6(c) in terms
of power consumption for the delay coefficient θ and two
values of ω. It can be observed that, when the battery
depreciation coefficient is ω = 1 (its maximum), power consumption
increases with the increase in the delay coefficient and
its effect on the cost function, thus reducing the average battery
level. Also, when the battery depreciation coefficient is
ω = 0.01 (its minimum), power consumption does not change
with the increase in θ, and the average battery level at the
network edge remains constant. This is not desirable,
because we seek circumstances in which the average
battery level is maximized. On the other hand, the corresponding
graphs in Figs. 4 and 6 indicate that the battery
level decreases with the reduction in the delay. The reason is
that, in order to reduce the delay costs, the system attempts to
process most of the workload in the fog servers, which leads to
more battery consumption. Given what was discussed above,
we need to reduce the average delay while maintaining the
maximum battery level.
C. The Optimum Point of ω and θ
To reach a balance between power consumption and delay in
workload distribution, the average power consumption and delay
cost were normalized to find the optimum state of ω and θ.
TABLE II
OPTIMUM POINTS BASED ON THE VALUES OF θ AND ω
Fig. 7 illustrates the normalized levels of average power
consumption and delay cost for every value of ω and θ.
It can be observed that these two parameters have a negative
relationship: an increased delay means decreased
power consumption, and vice versa. As a result, a balance
between ω and θ can be attained when the normalized values
of delay and power consumption are equal. In other words,
the intersection points of the two normalized surfaces are the
points of balance. The intersection of these surfaces in this figure
forms a line. The values of ω and θ that lie on this line
indicate a balanced state. Of these points, however, only
those at which the sum of the two normalized parameters is
minimal provide an optimum state.
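The selection of balanced points can be sketched as follows, assuming the two measured surfaces are min-max normalized and "intersection" is taken as the grid points where the normalized values are nearly equal (the tolerance and data layout are our assumptions, not the paper's procedure):

```python
def optimum_points(power, delay, k=3, tol=0.05):
    """Min-max normalize the two measured surfaces and return the k
    balanced grid points (normalized power ~= normalized delay) with
    the smallest normalized sum.
    `power` and `delay` map (theta, omega) -> measured value."""
    def normalize(surface):
        lo, hi = min(surface.values()), max(surface.values())
        return {key: (v - lo) / (hi - lo) for key, v in surface.items()}
    p_n, d_n = normalize(power), normalize(delay)
    # points near the intersection line of the two normalized surfaces
    balanced = [key for key in power if abs(p_n[key] - d_n[key]) < tol]
    # among balanced points, prefer the smallest combined cost
    return sorted(balanced, key=lambda key: p_n[key] + d_n[key])[:k]
```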
On this basis, the three optimum points from Fig. 7 are
described in Table II. This table lists the average power
consumption and the delay for each of the points in the
parametric space (θ, ω). For a better comparison of the three
points, six cross-sectional cuts have been made in the graph
in Fig. 7 (Fig. 8(a) to 8(f)). In Fig. 8(a), the normalized
values for θ = 0.05 can be observed. At the intersection point
where the sum of the two parameters is minimal, the cost
of battery depreciation should be ω = 0.23. For ω = 0.23,
Fig. 8(b) shows that the intersection point at which power
and delay are minimal is θ = 0.05. Similarly, for the second
optimum point (Fig. 8(c) and 8(d)), θ = 0.08 and ω = 0.35
achieve a balance, and their sum is the minimum amount.
Fig. 8(e) and 8(f) depict the third optimum point for ω
and θ. At this point, too, the sum of delay cost and power
consumption is minimal, with θ = 0.1 and ω = 0.47.
The following are the results of the execution of the
proposed workload distribution at the optimum point (θ = 0.05
and ω = 0.23). Fig. 9 illustrates the amount of data processed
in the fog, the battery consumption of the servers, and the amount
of workload sent to the cloud.
In this figure, the ratio of offloading in the cloud and
the fog to the total input workload is shown in intervals
of 1000. On average, in each interval, 64 percent of the total
input load has been processed in the fog servers, and the rest
Fig. 8. Normalized values of average delay cost and power consumption for the optimum points. (a) Variations of the coefficient of battery depreciation
(ω) for the first point. (b) Variations of the delay coefficient (θ) for the first point. (c) Variations of the coefficient of battery depreciation (ω) for the second point.
(d) Variations of the delay coefficient (θ) for the second point. (e) Variations of the coefficient of battery depreciation (ω) for the third point. (f) Variations of the delay
coefficient (θ) for the third point.
(around 35 percent) has been sent to the cloud. As most
of the loads have been processed locally, it might be expected
that battery consumption would be high. However, the battery level
graph shows that an average of only 12 percent of the battery has
been consumed in each interval. This can be explained by
the optimum use of renewable energy: the system distributes
the loads in such a way that the power consumed for processing
at the network edge is kept approximately equal to the renewable
energy. Also, in intervals where more load has been processed locally,
there is a rise in battery consumption. For example, battery
consumption in the interval 5000-6000 is 3 percent higher than
in the interval 4000-5000. On the other hand, local processing
reduces the delay in handling the workloads.
D. Comparison With Other Methods
In this section, the results of the proposed method at its
optimum point are compared with other methods to confirm
the decrease achieved in the delay in workload transmission.
These methods are briefly described below.
1) Fixed Power: In this method, a fixed amount of power is
considered for edge computations at each interval of time [24].
2) Post-Decision State (PDS) Algorithm [20]: The PDS
algorithm captures the state of the system immediately after
a decision is made at the end of each time interval. The state of the
system after a decision at the end of the interval is an
important piece of information known as the after-state variable. PDS
is mainly used as a decision-tree-based optimization algorithm.
In this algorithm, to find the optimum solution, the problem
is broken down into decision nodes and outcome nodes,
which correspond to pre-decision and post-decision
states, respectively. To find the optimum decision for the vector-valued
problem of workload allocation, PDS seeks a state
that minimizes the long-term costs of the system.
3) Q-Learning [25]: Q-learning is a model-free reinforcement
learning algorithm, independent of the type
of system model. In this agent-based algorithm, the agent tries
to learn a strategy that yields the best action for each
Fig. 9. The rate of usage of cloud and fog resources and power supply over
time.
state of the system. Since this algorithm does not need a model
of the environment, it can solve problems with stochastic
transitions and payoffs without any special adjustment.
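For illustration, one tabular Q-learning update with epsilon-greedy exploration might look like the following; the state, action, reward, and transition abstractions are placeholders rather than the cost model used in [25]:

```python
import random
from collections import defaultdict

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """One tabular Q-learning update for a workload-allocation agent.
    Q is a defaultdict mapping (state, action) -> estimated value."""
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward = reward_fn(state, action)           # e.g., negative cost
    next_state = next_state_fn(state, action)   # environment transition
    best_next = max(Q[(next_state, a)] for a in actions)
    # temporal-difference update toward the bootstrapped target
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])
    return next_state
```

Because the update uses only observed rewards and transitions, no explicit model of the workload or energy dynamics is required, which is the property the comparison above relies on.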
4) Myopic Optimization [26]: In this algorithm, regardless
of any relationship between the system states and the corresponding
decisions, the cost function of each state is minimized
by considering only the present input information of the
system. That is, in the myopic optimization model, the present
knowledge of the workload allocation is compactly represented by
a myopic window, which captures the knowledge of the system over
a limited number of time frames. The content of this window
may be repeated at different times; as a result, the outcome
of the system may be observed repeatedly.
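A myopic baseline can be sketched as a per-interval minimization with no look-ahead; `cost_fn` and the candidate workload splits are hypothetical placeholders for the per-state cost model of [26]:

```python
def myopic_allocate(intervals, candidate_splits, cost_fn):
    """Myopic baseline: each interval is optimized in isolation, using
    only the present input information (no look-ahead, no learned
    state).  Returns the chosen split for each interval."""
    return [min(candidate_splits, key=lambda split: cost_fn(x, split))
            for x in intervals]
```

Since each decision ignores future intervals, identical inputs always produce identical decisions, which is why the window contents, and hence the outcomes, can repeat over time.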
Fig. 10 shows the average delay cost for different methods.
As can be observed, learning-based methods perform better
and have a lower average delay when run on the battery
than when using the electricity network. On the other hand,
the proposed algorithm has a lower delay than the other
methods. In this figure, the delay cost of all the methods is
greater than five, whereas the genetic algorithm used in the
proposed method has reduced this cost down to 3.5.
The main point that is clear in both figures is the reduction
of the average energy consumption and the reduction of
Fig. 10. The average delay cost.
the average delay in successive intervals of time. The reduction
in the average processing latency for the proposed method
in Fig. 10 means that workloads are processed more on
the fog side and a smaller percentage of them are sent to
the cloud, as evidenced by Fig. 9. In Fig. 9, for the first
1000 time slots, a larger percentage of workloads is processed
in the fog, correspondingly reducing the average delay. One of the
strengths of the proposed method is that it does not exhibit large
fluctuations across time slots, especially in the first 2000 time slots.
VI. CONCLUSION
In this paper, we tried to achieve a balance between power
consumption at the intelligent vehicular network edge and the
delay in workload transmission to the cloud by using a
genetic algorithm and finding the optimum modes of workload
distribution. We also showed that workload distribution at the
edge of the vehicular network using renewable energy sources
is suitable for vehicular networks in which the processing
resources do not have access to the electrical grid and depend
on batteries for operation. By utilizing parameters such as the
input load and the proportion of green energy as the inputs
of the genetic algorithm, this paper calculated for the
first time the optimum amount of workload to be processed
locally. Also, by changing the coefficients of the parameters of
the cost function of the genetic algorithm, we determined the
optimum coefficients for processing the workloads with the
least amount of delay and the least power consumption.
The simulation results suggest that the proposed method
achieves a better balance in workload distribution than the other
existing methods. While reducing the workload delay by
40 percent and decreasing power consumption at the edge of
the vehicular network, this method also minimizes
battery consumption by making use of renewable energy.
In future work, other machine learning methods such as
neural networks can be used for selecting the optimum
parameters.
REFERENCES
[1] Z. E. Ahmed, R. A. Saeed, and A. Mukherjee, “Challenges and oppor-
tunities in vehicular cloud computing,” in Cloud Security: Concepts,
Methodologies, Tools, and Applications. Hershey, PA, USA: IGI Global,
2019, pp. 2168–2185.
[2] T. Islam and M. M. A. Hashem, “A big data management system for
providing real time services using fog infrastructure,” in Proc. IEEE
Symp. Comput. Appl. Ind. Electron. (ISCAIE), Apr. 2018, pp. 85–89.
[3] A. Yousefpour et al., "All one needs to know about fog computing and
related edge computing paradigms: A complete survey," J. Syst. Archit.,
vol. 98, pp. 289–330, Sep. 2019.
[4] M. Shojafar, N. Cordeschi, and E. Baccarelli, “Energy-efficient adaptive
resource management for real-time vehicular cloud services,” IEEE
Trans. Cloud Comput., vol. 7, no. 1, pp. 196–209, Jan. 2019.
[5] F. S. Abkenar and A. Jamalipour, “EBA: Energy balancing algo-
rithm for fog-IoT networks,” IEEE Internet Things J., vol. 6, no. 4,
pp. 6843–6849, Aug. 2019.
[6] W. Zhang, Z. Zhang, and H.-C. Chao, “Cooperative fog computing
for dealing with big data in the Internet of vehicles: Architecture
and hierarchical resource management,” IEEE Commun. Mag., vol. 55,
no. 12, pp. 60–67, Dec. 2017.
[7] R. Deng, R. Lu, C. Lai, T. H. Luan, and H. Liang, "Optimal workload
allocation in fog-cloud computing toward balanced delay and power
consumption," IEEE Internet Things J., vol. 3, no. 6, pp. 1171–1181,
Dec. 2016.
[8] M. Ghobaei-Arani, A. Souri, and A. A. Rahmanian, "Resource manage-
ment approaches in fog computing: A comprehensive review," J. Grid
Comput., vol. 18, no. 1, pp. 1–42, Mar. 2020.
[9] R. Basir et al., “Fog computing enabling industrial Internet of Things:
State-of-the-art and research challenges,” Sensors, vol. 19, no. 21,
p. 4807, Nov. 2019.
[10] S. Nižetić, N. Djilali, A. Papadopoulos, and J. J. P. C. Rodrigues,
"Smart technologies for promotion of energy efficiency, utilization
of sustainable resources and waste management," J. Cleaner Prod.,
vol. 231, pp. 565–591, Sep. 2019.
[11] M. Aloqaily, A. Boukerche, O. Bouachir, F. Khalid, and S. Jangsher,
“An energy trade framework using smart contracts: Overview and
challenges,” IEEE Netw., vol. 34, no. 4, pp. 119–125, Jul. 2020.
[12] H. Wu, L. Chen, C. Shen, W. Wen, and J. Xu, “Online geographical
load balancing for energy-harvesting mobile edge computing,” in Proc.
IEEE Int. Conf. Commun. (ICC), May 2018, pp. 1–6.
[13] Z. Ning, J. Huang, X. Wang, J. J. P. C. Rodrigues, and L. Guo, “Mobile
edge computing-enabled Internet of vehicles: Toward energy-efficient
scheduling,” IEEE Netw., vol. 33, no. 5, pp. 198–205, Sep. 2019.
[14] X. Wang et al., “Future communications and energy management in
the Internet of vehicles: Toward intelligent energy-harvesting,” IEEE
Wireless Commun., vol. 26, no. 6, pp. 87–93, Dec. 2019.
[15] H. Chen, T. Zhao, C. Li, and Y. Guo, “Green Internet of vehicles:
Architecture, enabling technologies, and applications,” IEEE Access,
vol. 7, pp. 179185–179198, 2019.
[16] F. Ahmadizar, K. Soltanian, F. AkhlaghianTab, and I. Tsoulos, “Artificial
neural network development by means of a novel combination of
grammatical evolution and genetic algorithm,” Eng. Appl. Artif. Intell.,
vol. 39, pp. 1–13, Mar. 2015.
[17] S. Verma, N. Sood, and A. K. Sharma, “Genetic algorithm-based
optimized cluster head selection for single and multiple data sinks in
heterogeneous wireless sensor network,” Appl. Soft Comput., vol. 85,
Dec. 2019, Art. no. 105788.
[18] X. Liu and N. Ansari, “Toward green IoT: Energy solutions and
key challenges,” IEEE Commun. Mag., vol. 57, no. 3, pp. 104–110,
Mar. 2019.
[19] J. Xu and S. Ren, “Online learning for offloading and autoscaling
in renewable-powered mobile edge computing,” in Proc. IEEE Global
Commun. Conf. (GLOBECOM), Dec. 2016, pp. 1–6.
[20] J. Xu, L. Chen, and S. Ren, “Online learning for offloading and
autoscaling in energy harvesting mobile edge computing,” IEEE Trans.
Cognit. Commun. Netw., vol. 3, no. 3, pp. 361–373, Sep. 2017.
[21] F. M. Dalvand and K. Zamanifar, "Multi-objective service provisioning
in fog: A trade-off between delay and cost using goal programming," in
Proc. 27th Iranian Conf. Electr. Eng. (ICEE), Apr. 2019, pp. 2050–2056.
[22] M. Abbasi, M. Yaghoobikia, M. Rafiee, A. Jolfaei, and M. R. Khosravi,
“Efficient resource management and workload allocation in fog–cloud
computing paradigm in IoT using learning classifier systems,” Comput.
Commun., vol. 153, pp. 217–228, Mar. 2020.
[23] C. N. Giap and D. T. Ha, “Parallel genetic algorithm for minimum dom-
inating set problem,” in Proc. Int. Conf. Comput., Manage. Telecommun.
(ComManTel), Apr. 2014, pp. 165–169.
[24] K. Kaur, S. Garg, G. S. Aujla, N. Kumar, J. J. P. C. Rodrigues, and
M. Guizani, “Edge computing in the industrial Internet of Things envi-
ronment: Software-defined-networks-based edge-cloud interplay,” IEEE
Commun. Mag., vol. 56, no. 2, pp. 44–51, Feb. 2018.
[25] R. S. Sutton and A. G. Barto, Introduction to Reinforcement Learning,
vol. 2. Cambridge, MA, USA: MIT Press, 1998.
[26] K. Poncelet, E. Delarue, D. Six, and W. D’haeseleer, “Myopic optimiza-
tion models for simulation of investment decisions in the electric power
sector,” in Proc. 13th Int. Conf. Eur. Energy Market (EEM), Jun. 2016,
pp. 1–9.
... However, GA consumes more energy during computation than other meta-heuristic approaches because it adopts more parameters and does not consider other state-of-the-art comparative algorithms for evaluation. Referring to [20] proposed the idea of distributing workloads in an IoT edgecloud computing system while guaranteeing energy efficiency and minimal delay via a delay-based workload allocation algorithm (DBWA). The algorithm uses the Lyapunov drift-plus-penalty theory to reduce the system's energy consumption with a granular delay for each job. ...
... Validating the proposed algorithm by performing 30 independent runs, for each run, the best obtained delay and energy consumption were recorded to get average result [31]. The main parameters for validating the performance of the algorithms are summarized in Table II VOLUME XX, 2022 referring to [20], and NPSO parameters referring to [32] that indicate and the represents the suitable settings to provide the best convergence rate. First, it is obvious from Figure 4 that the quantitative comparison in terms of delay illustrates a set of tasks scheduled by the FCFS, STML, LLF, and MLLF algorithms. ...
... This affects the task completion time and effectively allocates the workload. Figure 6 compares the proposed algorithm, NPSO, non-linear optimization [20], MOPSO-DC, and NSGA-II regarding the energy consumption when D_max= 100. Recall the average delay threshold of the MLLF in Figure 5; that is, the normal delay threshold is (-∞, 100). ...
Article
Full-text available
The Internet of Things (IoT) generates massive data from smart devices that demand responses from cloud servers. However, sending tasks to the cloud reduces the power consumed by the users’ devices, but increases the transmission delay of the tasks. In contrast, sending tasks to the fog server reduces the transmission delay due to the shorter distance between the user and the server. However, this occurs at the user end’s expense of higher energy consumption. Thus, this study proposes a mathematical framework for workload allocation to model the power consumption and delay functions for both fog and clouds. After that, a Modified Least Laxity First (MLLF) algorithm was proposed to reduce the maximum delay threshold. Furthermore, a new multi-objective approach, namely the Non-dominated Particle Swarm Optimization (NPSO), is proposed to reduce energy consumption and delay compared to the state-of-the-art algorithms. The simulation results show that NPSO outperforms the state-of-the-art algorithm in reducing energy consumption, while NGSA-II proves its effectiveness in reducing transmission delay compared to the other algorithms in the experimental simulation. In addition, the MLLF algorithm reduces the maximum delay threshold by approximately 11% compared with other related algorithms. Moreover, the results prove that metaheuristics are more appropriate for distributed computing.
... Recently, many articles have been discussing communication and computing resource allocation in edge computing for various demanding networks such as IoT, wireless and vehicular networks [8]- [10]. Different techniques based on ML and AI have been proposed, given that these paradigms have shown enhancements in resource allocation [11]. ...
... Different techniques based on ML and AI have been proposed, given that these paradigms have shown enhancements in resource allocation [11]. In [8], the authors introduced a solution to fairly distribute the workload between the fog/edge servers and the powerful cloud with its data centers in vehicular networks. This solution aims to optimize the power conception of edge systems that use renewable energy and to reduce delays in the transmission and processing of the workload. ...
... In [15], Abbasi et al. applied GA to reduce the energy consumption of edge systems along with the network delay in the processing of workloads. They also showed that the workload distribution, at the edge using renewable energy sources, is suitable for vehicular networks. ...
... In our study, we compared our algorithm with another GA-based algorithm [45] that presents fog computing for reducing energy consumption. The whole simulation is repeated ten times to be sure about the validity of random samples with a confidence level of 95 %. ...
Article
Full-text available
Connecting to the internet will increase our computing challenges as it becomes an integral part of our daily lives. Therefore, it is necessary to advance the service qualities of Internet of Things (IoT) applications because the data produced by all these devices will need to be processed quickly and sustainably. Previously, cloud data centers with large capacity interface IoT devices with support servers. While IoT devices proliferate and generate massive amounts of data, communicating between devices and the Cloud is becoming more complex and harder, resulting in high costs and inefficiencies. Fog computing emerges as an approach to address the growing demand for IoT solutions. In this article, an IoT-fog-cloud application's general framework is developed, followed by an algorithm for Energy efficiency through an integrated approach computation model. Fog-Enabled Smart Cities (FESC) are proposed to minimize service delay and response time by using a fog offloading policy for the fog-enabled IoTs. Also, we developed an analytical model evaluating the proposed framework's effectiveness in reducing the delay of IoT services. Comparing the proposed model and the Alternating Direction Method of Multipliers (ADMM-VS) algorithm, the proposed model performs significantly better. Thus, by optimizing response and processing times, fog-enabled smart grids determine whether computation will be performed autonomously or semi-autonomously on fog nodes or in the Cloud.
... An Overview has been provided to show the benefits of the involvement of AI technology in fog computing and how AI can solve the difficulty in fog computing challenges. In [76] Abbasi et al. have designed a cloud-fog framework in intelligent vehicular networks. Various works are given as experiments to enhance the magical capabilities of both fog computing and AI. ...
Article
Full-text available
In recent times, the Internet of Things (IoT) applications, including smart transportation, smart healthcare, smart grid, smart city, etc. generate a large volume of real-time data for decision making. In the past decades, real-time sensory data have been offloaded to centralized cloud servers for data analysis through a reliable communication channel. However, due to the long communication distance between end-users and centralized cloud servers, the chances of increasing network congestion, data loss, latency, and energy consumption are getting significantly higher. To address the challenges mentioned above, fog computing emerges in a distributed environment that extends the computation and storage facilities at the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimum latency and energy consumption while analyzing the data using a set of resource-constraint fog/edge devices. Thus our survey covers the layered IoT architecture, evaluation metrics, and applications aspects of fog computing and its progress in the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing computing resources of fog networks have been covered in this study. Moreover, we included an IoT use case scenario to demonstrate the fog data offloading and resource provisioning example in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions to establish interoperable communication and computation for next-generation IoT applications in fog networks.
... Some researchers investigated on different types of service architectures. [1,2] studied the cloud-based vehicular networks, where the computation-intensive tasks generated by vehicles can be wirelessly offloaded to remote cloud server for reducing computation time. However, frequent information exchange between terminal users and cloud server brings great backhaul traffic and causes excessive transmission delay. ...
Article
Full-text available
Mobile edge computing has been a promising solution to enable real-time service in vehicular networks. However, due to high dynamics of mobile environment and heterogeneous features of vehicular services, traditional expert-based or learning-based strategies has to update handcrafted parameters or retrain learning model, which leads to intolerant overhead. Therefore, this paper investigates the problem of multi-task offloading (MTO), where there exist multiple offloading scenarios with varying parameters, such as task topology, resource requirement and transmission/computation capability. The objective is to design a unified solution to minimize task execution time under different MTO scenarios. Accordingly, we develop a Seq2seq-based Meta Reinforcement Learning algorithm for MTO (SMRL-MTO). Specifically, a bidirectional gated recurrent units integrated with attention mechanism is designed to determine offloading action by encoding sequential offloading actions and showing different preferences to different parts of input sequence. Particularly, a meta reinforcement learning framework is designed based on model-agnostic meta learning, which trains a meta policy offline and fast adapts to new MTO scenario within a few training steps. Finally, we conduct performance evaluation based on task generator DAGGEN and realistic vehicular traces, which shows that the SMRL-MTO reduces task execution time by 11.36% on average compared with greedy algorithm.
... To optimize the execution time of power-hungry services, Wang et al. [28] jointly optimized the transmission power, computation speed, and task division ratio across different network nodes. Abbasi et al. [35] balanced power consumption and network latency by cooperatively distributing workloads among network nodes. Regarding the tradeoff between task-processing latency and the energy consumed by edge terminals, Bozorgchenani et al. [30] discussed the impact of task classification on task offloading, local computing energy efficiency, and service time. ...
Article
Full-text available
With the emergence of intelligent terminals, the Internet of Vehicles (IoV) has been drawing great attention by taking advantage of mobile communication technologies. However, high computation complexity, collaborative communication overhead, and limited network bandwidth bring severe challenges to the provision of latency-sensitive IoV services. To overcome these problems, we design a cloud-edge cooperative content-delivery strategy in asymmetrical IoV environments to minimize network latency by providing optimal computing, caching, and communication resource allocation. We abstract the joint allocation of heterogeneous resources as a queuing-theory-based latency minimization objective. Next, a new deep reinforcement learning (DRL) scheme works in each network node to achieve optimal content caching and request routing on the basis of the observed request history and network state. Extensive simulations show that our proposed strategy achieves lower network latency than current solutions in the cloud-edge collaboration system and converges quickly under different scenarios.
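The abstract casts joint resource allocation as a queuing-theoretic latency objective. As a hedged illustration of that modeling style (not the paper's actual model), the sketch below uses standard M/M/1 sojourn times to show why a higher edge cache-hit ratio lowers expected content-delivery latency; all arrival/service rates are illustrative.

```python
def mm1_sojourn(arrival_rate, service_rate):
    """Mean sojourn time in an M/M/1 queue: T = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def expected_latency(hit_ratio, lam, mu_edge, mu_cloud, rtt_cloud):
    """Cache hits are served at the edge; misses are routed to the
    cloud, paying an extra round-trip. The Poisson arrival stream is
    split between edge and cloud by the hit ratio."""
    t_edge = mm1_sojourn(hit_ratio * lam, mu_edge)
    t_cloud = rtt_cloud + mm1_sojourn((1.0 - hit_ratio) * lam, mu_cloud)
    return hit_ratio * t_edge + (1.0 - hit_ratio) * t_cloud

# A better caching policy (higher hit ratio) keeps more requests at
# the edge and avoids the cloud round-trip:
low = expected_latency(0.2, lam=8.0, mu_edge=20.0, mu_cloud=20.0, rtt_cloud=0.1)
high = expected_latency(0.8, lam=8.0, mu_edge=20.0, mu_cloud=20.0, rtt_cloud=0.1)
```

With equal service rates, the only structural difference between the two paths is the cloud round-trip, so raising the hit ratio from 0.2 to 0.8 reduces the expected latency.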
Article
Falls are one of the main causes of injuries and even fatalities among the elderly. However, little existing research on fall detection focuses on the supervision of the elderly. Furthermore, the face image of the target usually exhibits low resolution in surveillance videos, posing a challenge in balancing response latency against computational cost. To this end, a novel fall detection framework with age estimation based on a cloud-fog computing architecture is proposed in this paper. Specifically, an optimized Soft Stage Regression-Shallow (SSR-S) network is presented to achieve excellent age-estimation performance on the low-resolution images collected by the edge layer. A fall pre-judgment mechanism and a Shallow Convolutional Neural Network (S-CNN) are proposed to better judge fall behaviors among different age groups at the fog layer and the cloud layer, respectively. Besides, an age-estimation-based priority algorithm is presented to prioritize people aged 60 or older in fall detection at the cloud layer, with the aim of striking a trade-off between response latency and computational overhead. Finally, extensive simulations have been conducted to evaluate the performance of our proposal. Experimental results show that the minimum Mean Absolute Error (MAE) of SSR-S reaches 7.59, the fall pre-judgment mechanism achieves a 0% miss rate, and the accuracy of S-CNN reaches 90.5%. The detection speed of the overall framework is 17.0 frames per second (FPS).
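The age-estimation-based priority idea can be sketched with a standard heap: events from people estimated to be 60 or older are dequeued first. The key structure (elderly flag, then age, then timestamp) is a hypothetical illustration, not the paper's algorithm.

```python
import heapq

ELDERLY_AGE = 60

def priority_key(event):
    """Elderly subjects first; within each group, older first,
    then earlier timestamps. Events are (estimated_age, timestamp)."""
    age, timestamp = event
    elderly = 0 if age >= ELDERLY_AGE else 1
    return (elderly, -age, timestamp)

def schedule(events):
    """Pop events in priority order using a binary heap."""
    heap = [(priority_key(e), e) for e in events]
    heapq.heapify(heap)
    order = []
    while heap:
        _, e = heapq.heappop(heap)
        order.append(e)
    return order

events = [(34, 0), (72, 1), (65, 2), (50, 3)]
order = schedule(events)   # elderly subjects (72 and 65) come first
```

Under load, such a queue lets the cloud layer spend its detection budget on the highest-risk subjects first, which is exactly the latency/overhead trade-off the abstract describes.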
Article
Mobile crowdsourcing is a new computing paradigm that enables outsourcing computation tasks to mobile crowd nodes by offloading the tasks from the user to a mobile edge computing (MEC) server. This paper studies the problem of scheduling security-critical tasks of crowdsourcing applications in a multi-server MEC environment. We formulate this scheduling problem as an integer program and propose a family of convergent grey wolf optimizer (CGWO) metaheuristic algorithms to seek the best scheduling solutions. The proposed CGWO uses a task permutation to represent a candidate solution to the formulated scheduling problem and employs a probability-based mapping scheme to map each search agent in the grey wolf optimizer (GWO) onto a valid task permutation. We introduce a new position-update strategy for generating the next generation of the grey wolf population after each round of search. With this strategy, we prove that the proposed CGWO converges to the global best solution. More importantly, we provide a thorough analysis of the movement trajectories of grey wolves during the evolutionary procedure in order to determine appropriate parameter values such that CGWO is not trapped in local optima. Experimental results justify the superiority of the CGWO metaheuristics over the standard GWO in solving the crowdsourcing task scheduling problem.
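The core difficulty such metaheuristics face is that a search agent lives in a continuous space while a schedule is a discrete permutation. The sketch below shows a simplified random-key style decoder (the paper's actual probability-based mapping is more elaborate) plus a greedy makespan evaluator; both are illustrative assumptions.

```python
# A search agent's position is a real-valued vector; scheduling needs
# a task permutation. Random-key decoding ranks task i by its position
# component: smaller values are scheduled earlier.

def decode(position):
    """Map a continuous agent position to a valid task permutation."""
    return sorted(range(len(position)), key=lambda i: position[i])

def makespan(perm, durations, n_servers):
    """Greedy list scheduling: each task, in permutation order, goes
    to the currently least-loaded server; return the finish time."""
    loads = [0.0] * n_servers
    for t in perm:
        k = loads.index(min(loads))
        loads[k] += durations[t]
    return max(loads)

durations = [4.0, 2.0, 7.0, 1.0, 3.0]
perm = decode([0.9, 0.1, 0.5, 0.8, 0.2])    # -> [1, 4, 2, 3, 0]
cost = makespan(perm, durations, n_servers=2)
```

Because every real-valued position decodes to a valid permutation, the continuous position-update rules of the optimizer can be applied unchanged, which is the point of such mapping schemes.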
Article
In cloud-edge computing paradigms, the integration of edge servers and task offloading mechanisms has posed new challenges for developing task scheduling strategies. This paper proposes an efficient convergent firefly algorithm (ECFA) for scheduling security-critical tasks onto edge servers and the cloud datacenter. The proposed ECFA uses a probability-based mapping operator to convert an individual firefly into a scheduling solution, in order to associate the firefly space with the solution space. Distinct from the standard FA, ECFA employs a low-complexity position-update strategy to enhance computational efficiency in solution exploration. In addition, we provide a rigorous theoretical analysis to show that ECFA is capable of converging to the global best individual in the firefly space. Furthermore, we introduce the concept of boundary traps for analyzing firefly movement trajectories and investigate whether ECFA falls into boundary traps during the evolutionary procedure under different parameter settings. We create various testing instances to evaluate the performance of ECFA in solving the cloud-edge scheduling problem, demonstrating its superiority over FA-based and other competing metaheuristics. Evaluation results also validate that the parameter range derived from the theoretical analysis prevents our algorithm from falling into boundary traps.
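One way to read "low-complexity position update" is to drop the standard FA's pairwise attraction (quadratic in the population size) and attract every firefly only to the current global best. The sketch below is a guess in that spirit, not ECFA itself; the elitism rule and all parameter values are assumptions.

```python
import random

def update(pop, fitness, rng, beta=0.5, alpha=0.2):
    """Move every firefly toward the global best with a small random
    perturbation; keep the incumbent best unchanged (elitism). This
    is O(n) per generation instead of the standard FA's O(n^2)."""
    best = min(pop, key=fitness)
    nxt = []
    for x in pop:
        if x is best:
            nxt.append(x)                  # elitism: keep the incumbent
            continue
        nxt.append([xi + beta * (bi - xi) + alpha * (rng.random() - 0.5)
                    for xi, bi in zip(x, best)])
    return nxt

def sphere(x):                             # toy objective to minimize
    return sum(v * v for v in x)

rng = random.Random(7)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(200):
    pop = update(pop, sphere, rng)
best_val = min(sphere(x) for x in pop)     # close to 0 after convergence
```

The elitist rule makes the best-so-far fitness non-increasing, which is the intuition behind convergence guarantees of this kind, though the paper's formal analysis is over its own update rule.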
Article
Full-text available
With the development of the Internet of Vehicles (IoV) and the gradual maturity of 5th Generation Mobile Networks (5G) technology, the further development of the IoV relies heavily on network energy and resources. However, basic approaches such as developing new energy sources or upgrading equipment incur high costs. This article focuses on how to minimize energy consumption and maximize resource utilization under the constraints of the existing environment and equipment. We jointly discuss 5G technology, mobile edge computing, and deep reinforcement learning in green IoV. We also discuss how to make rational use of resources to realize the sustainable development of the IoV. By classifying and comparing the existing research according to different emphases, the energy consumption can be managed effectively with the above-mentioned technologies. Finally, we analyze possible research directions and challenges for the future.
Article
Full-text available
Industry is going through a transformation phase, enabling automation and data exchange in manufacturing technologies and processes; this transformation is called Industry 4.0. Industrial Internet-of-Things (IIoT) applications require real-time processing, nearby storage, ultra-low latency, reliability, and high data rates, all of which can be satisfied by a fog computing architecture. With smart devices expected to grow exponentially, the need for an optimized fog computing architecture and protocols is crucial. Therein, efficient, intelligent, and decentralized solutions are required to ensure real-time connectivity, reliability, and green communication. In this paper, we provide a comprehensive review of methods and techniques in fog computing. Our focus is on fog infrastructure and protocols in the context of IIoT applications. This article has two main research areas: in the first half, we discuss the history of the industrial revolutions and the application areas of IIoT, followed by the key enabling technologies that act as building blocks for industrial transformation. In the second half, we focus on fog computing, providing solutions to critical challenges and acting as an enabler for IIoT application domains. Finally, open research challenges are discussed to illuminate fog computing aspects in different fields and technologies.
Article
Full-text available
In recent years, the Internet of Things (IoT) has been one of the most popular technologies facilitating new interactions among things and humans to enhance quality of life. With the rapid development of IoT, the fog computing paradigm is emerging as an attractive solution for processing the data of IoT applications. In the fog environment, IoT applications are executed by intermediate computing nodes in the fog as well as by physical servers in cloud data centers. On the other hand, the resource limitations, resource heterogeneity, dynamic nature, and unpredictability of the fog environment make resource management one of the most challenging problems in the fog landscape. Despite its importance, to the best of our knowledge, there is no systematic, comprehensive, and detailed survey of resource management approaches in the fog computing context. In this paper, we provide a systematic literature review (SLR) of resource management approaches in the fog environment, in the form of a classical taxonomy, to recognize the state-of-the-art mechanisms on this important topic and to identify open issues. The presented taxonomy is classified into six main fields: application placement, resource scheduling, task offloading, load balancing, resource allocation, and resource provisioning. The resource management approaches are compared with each other according to important factors such as performance metrics, case studies, utilized techniques, and evaluation tools, and their advantages and disadvantages are discussed.
Article
As an emerging communication platform in the Internet of Things, IoV is promising to pave the way for the establishment of smart cities and to provide support for various kinds of applications and services. Energy management in IoV has been attracting an upsurge of interest in both academia and industry. Currently, green IoV mainly focuses on two aspects: energy management of battery-enabled RSUs and of EVs. However, these two issues are usually addressed separately, ignoring their interactions. This standalone design may cause energy underutilization, a mismatch between traffic demands and energy supplies, and high deployment and sustainability costs for RSUs. Therefore, the integration of energy management between battery-enabled RSUs and EVs calls for comprehensive investigation. This article first provides an overview of several promising research fields for energy management in green IoV systems. Given the significance of efficient communications and energy management, we construct an intelligent energy-harvesting framework based on V2I communications in green IoV communication systems. Specifically, we develop a three-stage Stackelberg game to maximize the utilities of both RSUs and EVs in V2I communications. After that, a real-world trajectory-based performance evaluation is provided to demonstrate the effectiveness of our scheme. Finally, we identify and discuss some research challenges and open issues for energy management in green IoV systems.
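The backward-induction logic of a Stackelberg game can be shown on a minimal two-stage example (the article's framework has three stages and V2I specifics; the quadratic utilities and the values of a and b below are illustrative assumptions). The leader posts a unit price p; the follower's best response is derived first, then the leader optimizes against it.

```python
# Follower utility: U_f(q) = a*q - 0.5*b*q^2 - p*q
#   -> best response q*(p) = (a - p) / b
# Leader revenue:  U_l(p) = p * q*(p), maximized at p* = a/2
#   by backward induction.

def follower_best_response(p, a, b):
    """Stage 2: the follower's utility-maximizing quantity."""
    return max((a - p) / b, 0.0)

def leader_revenue(p, a, b):
    """Stage 1 objective: revenue given the follower's reaction."""
    return p * follower_best_response(p, a, b)

def solve_leader(a, b, grid=1000):
    """Grid search over prices in [0, a] for the leader's optimum."""
    prices = [a * i / grid for i in range(grid + 1)]
    return max(prices, key=lambda p: leader_revenue(p, a, b))

a, b = 10.0, 2.0
p_star = solve_leader(a, b)                   # analytic optimum is a/2
q_star = follower_best_response(p_star, a, b) # (a - p*) / b
```

The grid search recovers the analytic equilibrium p* = a/2, q* = a/(2b); in the three-stage game of the article, the same reasoning is applied one more level deep.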
Article
Although modern transportation systems facilitate the daily life of citizens, ever-increasing energy consumption and air pollution challenge the establishment of green cities. Current studies on green IoV generally concentrate on the energy management of either battery-enabled RSUs or electric vehicles. However, computing tasks and load balancing among RSUs have not been fully investigated. In order to satisfy the heterogeneous requirements of communication, computation, and storage in IoVs, this article constructs an energy-efficient scheduling framework for MEC-enabled IoVs that minimizes the energy consumption of RSUs under task latency constraints. Specifically, a heuristic algorithm is put forward that jointly considers task scheduling among MEC servers and the downlink energy consumption of RSUs. To the best of our knowledge, this is one of the first works to focus on the energy consumption control of MEC-enabled RSUs. Performance evaluations demonstrate the effectiveness of our framework in terms of energy consumption, latency, and task blocking probability. Finally, this article elaborates on some major challenges and open issues toward energy-efficient scheduling in IoVs.
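A hypothetical greedy heuristic in the spirit of such a framework: assign each task to the server that meets the task's latency constraint (given its current queue) at the lowest energy cost, and count a task as blocked when no server qualifies. The server model and all numbers below are assumptions, not the article's algorithm.

```python
def assign(tasks, servers):
    """tasks: list of (cpu_cycles, deadline_s); servers: list of dicts
    with 'speed' (cycles/s), 'power' (W), 'queue' (s of pending work).
    Returns a server index per task, or None if the task is blocked."""
    placement = []
    for cycles, deadline in tasks:
        best = None
        for i, s in enumerate(servers):
            exec_t = cycles / s["speed"]
            if s["queue"] + exec_t > deadline:
                continue                       # latency constraint violated
            energy = s["power"] * exec_t
            if best is None or energy < best[1]:
                best = (i, energy, exec_t)
        if best is None:
            placement.append(None)             # task blocked
        else:
            i, _, exec_t = best
            servers[i]["queue"] += exec_t      # server accumulates work
            placement.append(i)
    return placement

servers = [{"speed": 2e9, "power": 10.0, "queue": 0.0},   # efficient, slower
           {"speed": 4e9, "power": 40.0, "queue": 0.0}]   # fast, power-hungry
tasks = [(2e9, 1.2), (2e9, 1.2), (2e9, 0.6)]
plan = assign(tasks, servers)   # first task on the efficient server,
                                # second spills to the fast one, third blocks
```

The example shows the trade-off the abstract targets: energy-minimal placement is preferred, but the latency constraint forces spill-over to the faster (more power-hungry) server, and a tight deadline can still be blocked.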
Article
In this paper, we propose a new energy-aware algorithm, called the energy balancing algorithm (EBA), for a three-tier Fog-IoT network. The EBA comprises two optimization models that reduce the energy consumption and delay of the network while guaranteeing energy balancing among all fog nodes (FNs). The first optimization model, called best transmission power and transmission rate (BTPR), finds the optimal transmission power and transmission rate of terminal nodes (TNs) such that request loss is prevented. Then, the topology potential between each TN and FN is defined in the best fog node (BFN) model to find the best FN for serving the TN while guaranteeing energy balancing among all FNs. Simulation results reveal that the proposed EBA can reduce the total energy consumption and delay of the network and yield efficient energy balancing among all FNs.
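The BFN idea of combining a topology potential with energy balancing can be sketched as follows. The sketch uses the common data-field form of topology potential (mass times a Gaussian decay in distance) and weights it by residual energy so that nearly depleted fog nodes are avoided; the exact potential function and weighting are assumptions, not the paper's definitions.

```python
import math

def topology_potential(distance, mass=1.0, sigma=50.0):
    """Data-field style potential: decays with squared distance."""
    return mass * math.exp(-((distance / sigma) ** 2))

def best_fog_node(tn_pos, fog_nodes):
    """Pick the FN index maximizing energy-weighted potential.
    fog_nodes: list of (x, y, residual_energy in [0, 1])."""
    def score(fn):
        x, y, energy = fn
        d = math.dist(tn_pos, (x, y))
        return topology_potential(d) * energy   # energy balancing term
    return max(range(len(fog_nodes)), key=lambda i: score(fog_nodes[i]))

tn = (0.0, 0.0)
fns = [(10.0, 0.0, 0.1),    # close but nearly depleted
       (30.0, 0.0, 0.9)]    # farther but well charged
choice = best_fog_node(tn, fns)   # the well-charged FN wins
```

With a plain distance criterion the nearby node would always be chosen and drained first; multiplying by residual energy spreads load across FNs, which is the energy-balancing behavior the abstract claims.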