DRAFT
1
Towards sustainable micro-level fog federated
load-sharing in internet of vehicles
Zeseya Sharmin1, Asad Waqar Malik2,3, Anis Ur Rahman2,3, Rafidah MD Noor1
1Department of Computer System and Technology, Faculty of Computer Science and Information
Technology, University Malaya, Malaysia
2Department of Information Systems, Faculty of Computer Science and Information Technology,
University Malaya, Malaysia
3School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and
Technology (NUST), Islamabad, Pakistan
Abstract—Advances in technology have enabled access to innovative applications for connected devices. To handle growing computation requirements, backend cloud data centers become an inefficient solution due to the network overhead they incur. This is generally alleviated using edge locations deployed to meet the increasing computing demands. This work proposes a micro-level fog unit deployment to facilitate delay-sensitive applications. To manage workloads imbalanced by traffic density, the framework establishes a fog federation acting as a consortium in which under-utilized resources are shared to maintain service quality. Moreover, we implement a price-based workload balancing algorithm to limit offloading among fog units relative to other consortium members. The experimental results show a balanced offload rate compared to traditional algorithms. Moreover, other measures like queue length, end-to-end delay, and workload balancing demonstrate performance gains under the federation. Overall, a 72% energy reduction is achieved through the proposed technique in comparison with the traditional non-federated model.
Index Terms—Fog computing, internet of vehicles, sustainable distribution, fog federation, task offloading
1 INTRODUCTION
Cloud computing and the internet of things (IoT) are key concepts expected to provide many innovative services and applications in the foreseeable future. This is primarily because the number of connected devices has surpassed the number of people living since 2011 [1]. In 2012, there were 8.7 billion connected devices, a figure expected to grow at a much faster rate, crossing 50 billion by 2020 [2]. Consequently, efficiently performing resource- and bandwidth-restricted tasks is becoming challenging when using subscription-based IoT services. One solution to this challenge is to expand the contemporary cloud onto proximity computing
Corresponding Author: Asad Waqar Malik, e-mail: asad.malik@um.edu.my
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation information: DOI 10.1109/JIOT.2020.2973420, IEEE Internet of Things Journal
resources within the connected infrastructure. The goal is to maximize overall utilization by reducing the amount of information sent over to the cloud for processing, analysis, and/or storage; here, fog computing plays an important role. In particular, delay-sensitive services can take advantage of its proximity to reduce latency and cost [3].
Traditionally, vehicular networks are limited by space and connectivity requirements, whereas more recent vehicular ad hoc networks (VANETs) depend on cloud computing. The latter provides vehicular cloud computing (VCC) services that are low latency and uninterrupted. But as these services become ubiquitous with every passing day, they demand high bandwidth to communicate with the cloud server, which is a challenge when satisfying the QoS. However, with the emergence of ever-growing numbers of connected vehicles, a new networking paradigm referred to as vehicular fog computing (VFC) is being used to increase computational efficiency [4]. Here, vehicles offload computation to the edge of the network using vehicle-to-everything (V2X) connectivity. Moreover, transient storage can be implemented at the roadside units (RSUs). Notably, such offloading mechanisms incur a communication overhead, thereby demanding new design approaches that allocate computation and communication resources to offset the energy-performance trade-off while providing a sustained user experience.
Undoubtedly, the concept of fog computing has been explored extensively for delay-sensitive applications. Moreover, it helps reduce the data transmitted across the backend network. However, the large number of devices in every region makes it difficult for an edge location to maintain the QoS. There is a need for micro-level fog placement managing the QoS within its coverage area. The recent concept of federated fog computing enables functionality at different locations rather than at a centralized location accessible via a single network link, as illustrated in Figure 1. Moreover, it provides the flexibility of horizontal expansion over dispersed locations. The direct communication and extendable framework minimize the request-response time with a reliable connection, hence achieving a better quality of service (QoS) since the functionality resides closer to the service request source.
Fig. 1: High-level fog consortium model.
In this study, we propose a fog collaboration framework where the fog devices are placed at the micro-level and collaborate based on a pricing model. The model fluctuates dynamically based on the current workload; therefore, while offloading, the fog agent located at every fog location collaborates to find a suitable fog resource to outsource the task. The main contributions of the proposed work are listed as follows:
• Propose a micro-level fog federation environment for vehicular networks, especially to handle delay-sensitive applications. A vehicle either executes tasks locally or offloads them to nearby fog locations. Within its range, a vehicle dynamically establishes a connection with nearby fog locations to offload tasks and balance its pending task queue.
• Design a pricing model for outsourcing tasks within the fog federation. The model fluctuates dynamically based on the current workload of a fog federate. This outsourcing scenario is transparent to the end-users.
• Evaluate the proposed technique against a traditional non-federated fog environment in terms of queue length, delay, and efficiency. Moreover, the fog location is benchmarked with varying compute resources.
Organization The rest of the paper is organized as follows. Section 2 covers recent contributions in the domain of fog computing. The system model is presented in Section 3. Section 4 covers the proposed system architecture design and its implementation. The experimental results and simulation framework setup are discussed in Section 5. Finally, we conclude the paper in Section 6.
2 RELATED WORK
The use of vehicular networks has increased tremendously due to their wide adoption for different innovative services like content sharing, information sharing, data caching, emergency message dissemination, etc. This section covers recent contributions towards the sustainability of such networks while providing the aforementioned services.
Federated vehicular fog networks With an increasing number of smart vehicles, it has become a challenge to establish secure and error-free communication among moving vehicles in a vehicular network. Initially, such networks were augmented using RSUs connected to the backend cloud data centers [5], the VFC paradigm, but this approach suffers from limited bandwidth and high deployment costs. Recently, vehicular fog computing has been used to make better use of the available communication and computation resources in vehicular networks through effective load distribution among nearby fog devices [6], [7]. To support delay-sensitive applications, in [8], the fog delivery network (FDN) architecture is extended to include federated fog devices, termed the federated fog delivery network (F-FDN). The architecture is composed of several connected FDNs that are further connected to the backend cloud data center. A relevant fog-based storage framework called Nebula specifically targets applications using region-aware, neighborhood-specific and information-intensive storage [9]. No doubt, efficient management of cloud storage and computing resources available at nearby fog devices is important for any delay-sensitive application.
Task offloading in vehicular networks With the widespread adoption of smart vehicles, task offloading is a viable solution to improve system performance. That is, vehicles can share their computation resources across the network to support other vehicles or users executing tasks at nearby compute nodes. Note that with the increasing complexity of fog networks, it is difficult to find an optimal allocation policy for task offloading. Nonetheless, there is substantial literature exploring different decision models for task offloading that subsequently improve system performance. A relevant study in [10] proposes a framework for offloading tasks from user equipment to nearby fog nodes in fog-enabled networks using a heuristic-based dynamic allocation index. Similar two-tier federated approaches for vehicular networks like the ones in [11], [12] use device characteristics to estimate completion deadlines along with the total cost of offloading in terms of energy and delay. The goal is to minimize cumulative latency for task offloading to various devices within a resource-constrained environment. However, most of the aforementioned models ignore typical network and mobility issues. More recent models like the one in [13] propose a resource selection service based on run-time predictions for the fog environment. Similar works use adaptive learning [14], multi-armed bandits (MAB) [15], and probabilistic techniques like ant colony optimization [16] to minimize the average offloading delay. Another category of techniques uses pricing models. For instance, in [17], [18] a resource-based pricing model is correlated with expenditure, thus providing better methods to access and allocate distributed resources efficiently.
Cloud federation for task offloading The concept of federation for cloud and edge computing is used to maximize profit and optimize the use of available resources. The term federation commonly refers to the integration of autonomous entities under a mutual agreement, establishing a collaborative environment to maximize profit. In [19], the authors propose a highly profitable cloud formation model using evolutionary game theory. The model maintains balance among its members to ensure the QoS, with members deallocating their resources to use ones available elsewhere to maximize profit. The evaluation shows that the model performs better than traditional genetic algorithms in terms of profit and QoS. Similarly, in [20], a hedonic coalition formation algorithm for cloud federation is proposed to reduce energy consumption and maximize overall profit. Another approach implements a load migration planning policy between cloud and fog infrastructure to reduce delay and network usage [21]. However, these works do not cover offloading among the members of a fog federation.
Summary of literature review Table 1 summarises the recent contributions on vehicular fog networks. Most of the work presented here utilizes the computing power of vehicles through resource sharing. In some cases, RSUs are used as a computing agent or a centralized coordinator for task or information sharing. However, limited work exists that covers fog federation to handle the peak workload on some devices compared to others. In this paper, an extensive micro-level fog federation framework is presented, which outsources tasks based on a buyer and seller model. Thus, the decision to outsource is not managed individually; it is relative to the workload of all the members of the fog consortium.
3 SYSTEM MODEL
Resource model Consider V as the set of vehicles and R as the set of RSUs. The two resources combined result in a resource set G ≜ V ∪ R. Each resource comprises on-board compute capability; for instance, F_i is the compute capacity available at resource i ∈ G.
Communication model is the time spent on transmission between connected nodes. Here, the offloading data rate B_ij between two nodes is either wireless or wired, computed as,

    B_ij = { X_ij   if i ∈ V, j ∈ R   (wireless)
           { Y_ij   if (i, j) ∈ R      (wired)                  (1)

The first case is the wireless data offloading rate X_ij, which, based on the Shannon–Hartley theorem, is given as,

    X_ij = W log2(1 + P / (N0 W))                               (2)

where W is the available bandwidth, P represents the transmission power, and N0 is the noise power spectral density. Note that we assume a symmetric data rate between vehicle-to-RSU and RSU-to-vehicle. The second case is the wired data offloading rate Y_ij, which is relatively higher and more robust compared to wireless communication.
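As an illustration, Eq. (2) can be evaluated directly; the sketch below assumes SI units (bandwidth in Hz, transmission power in W, noise density in W/Hz), and the function name is ours, not the paper's:

```python
import math

def wireless_rate(W, P, N0):
    """Wireless offloading data rate X_ij per the Shannon-Hartley
    theorem, Eq. (2): X_ij = W * log2(1 + P / (N0 * W))."""
    return W * math.log2(1.0 + P / (N0 * W))
```

For example, with W = 1 Hz and P/(N0 W) = 3 the rate is log2(4) = 2 bits/s, and the rate grows monotonically with the transmission power.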
Task model Each vehicle v ∈ V generates a computational task with a random requirement of M bits for a random data input in KBs, forming a task set S. Each computation request t can be formulated as t ≜ {s, c}, where s is the input size for the computation and c is the total number of CPU cycles used to complete the task.
Computation model The tasks are either executed locally or offloaded to the nearby RSU based on a computation policy d_ij. However, due to limited computation resource availability at the RSU, more resources are provisioned at the neighbouring RSUs; we refer to them as fog nodes.
Time consumption model defines the overhead caused when handling a task. There are two possible cases: the task is handled locally or offloaded to fog nodes.

Local computing The computation time T_i for the ith task is computed as,

    T_i = (1 / F_i) (c_i + Σ_{x∈Q} c_x)                          (3)

where c_i are the required cycles for the task, Σ_{x∈Q} c_x are the total CPU cycles pending in the vehicle's local task queue, and F_i is the compute capacity of the allocated resource.

Offload computing The computation time T_i for the ith task when offloaded to the neighboring RSU is computed as [26],

    T_i = s_i / B_ij + (1 / F_j) (c_i + Σ_{x∈Q} c_x)             (4)

where s_i is the task input size, c_i are the required cycles for the task, Σ_{x∈Q} c_x are the total CPU cycles pending in the RSU's task queue, B_ij is the offload data rate to and from the RSU, and F_j is the compute capacity of the allocated resource at the RSU.
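The two completion-time cases, Eqs. (3) and (4), can be sketched as follows; the helper names and the representation of a queue as a list of pending cycle counts are our own illustration:

```python
def local_time(c_i, pending_cycles, F_i):
    # Eq. (3): cycles of task i plus the backlog in the local queue,
    # divided by the compute capacity F_i of the vehicle.
    return (c_i + sum(pending_cycles)) / F_i

def offload_time(s_i, B_ij, c_i, pending_cycles, F_j):
    # Eq. (4): transmission time of the input s_i over rate B_ij,
    # plus execution at the RSU including its queued backlog.
    return s_i / B_ij + (c_i + sum(pending_cycles)) / F_j
```

With c_i = 10 cycles, 20 pending cycles, and F_i = 10 cycles/s, the local time is 3 s; offloading a 100-bit input at 50 bits/s to an equally loaded RSU adds a 2 s transmission time.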
Energy consumption model estimates the overhead of energy consumption; based on the time consumption model, it is defined for local and offloaded computing.

Local computing The energy consumption E_i of the ith task is computed as,

    E_i = υ_i (c_i + Σ_{x∈Q} c_x)                                (5)

where υ_i is the coefficient of energy consumption per unit CPU cycle. It is set to 10^-11 (F_i)^2 [27].

Offload computing The energy consumption E_i at the ith resource is computed as,

    E_i = P · s_i / B_ij + υ_j (c_i + Σ_{x∈Q} c_x)               (6)

where υ_j is the coefficient of energy consumption per unit CPU cycle at the RSU. It is set to 10^-11 (F_j)^2.
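A minimal sketch of Eqs. (5) and (6), assuming the per-cycle coefficient υ = 10^-11 F^2 as the model states; function names are illustrative:

```python
def energy_coefficient(F):
    # Energy consumption per unit CPU cycle, set to 10^-11 * F^2
    # in the model (Eqs. (5)-(6)).
    return 1e-11 * F ** 2

def local_energy(F_i, c_i, pending_cycles):
    # Eq. (5): per-cycle coefficient times all cycles spent locally.
    return energy_coefficient(F_i) * (c_i + sum(pending_cycles))

def offload_energy(P, s_i, B_ij, F_j, c_i, pending_cycles):
    # Eq. (6): transmission energy (power * airtime) plus the
    # execution energy at the RSU with capacity F_j.
    return P * s_i / B_ij + energy_coefficient(F_j) * (c_i + sum(pending_cycles))
```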
Overhead model defines the total overhead when the task is computed locally or offloaded to the fog node.

Local computing The total local overhead is calculated as γ_T T_i + γ_E E_i, where γ_T and γ_E are the time and energy coefficients for the weighted overhead, such that γ_T + γ_E = 1 and 0 ≤ γ_T, γ_E ≤ 1.

Offload computing The total offloading overhead is calculated as γ_T T_i + γ_E E_i, where γ_T and γ_E are the time and energy coefficients for the weighted overhead, such that γ_T + γ_E = 1 and 0 ≤ γ_T, γ_E ≤ 1.
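The weighted time-energy combination above is a one-liner; the equal default weights below are our assumption for illustration, not a value fixed by the model:

```python
def overhead(T, E, gamma_T=0.5, gamma_E=0.5):
    """Weighted overhead gamma_T * T + gamma_E * E with the
    constraint gamma_T + gamma_E = 1 enforced."""
    assert abs(gamma_T + gamma_E - 1.0) < 1e-12
    return gamma_T * T + gamma_E * E
```

Setting γ_T = 1 recovers a purely delay-driven overhead, while γ_E = 1 weighs only energy.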
Problem formulation The objective is to guarantee min-
imal delay when processing requests from vehicles. As
mentioned earlier in the time consumption model, the delay
includes the time to transmit the task to the fog node, the
time it waits in the fog node’s task queue, the task execution
TABLE 1: The comparison table summarises the recent contributions.

| Authors (year) | Fog-Node | Strategy/Model | Tools | Energy | Dataset | Network | Federation |
| Fan et al. (2019) [11] | End devices | Heuristic-based | DewSim | ✓ | Simulated | × | × |
| Lin et al. (2019) [12] | Vehicles | Iterated greedy | NA | ✓ | Simulated | × | Vehicles |
| Sun et al. (2018) [15] | Vehicles | Adaptive learning | MATLAB | × | Real | ✓ | V2V |
| Nguyen et al. (2019) [18] | Edge devices | Pricing-based scheme | MATLAB | × | Real/Simulated | × | Edge devices |
| Hammoud et al. (2019) [19] | NA | Evolutionary game theory | MATLAB | ✓ | Real | × | Cloud |
| Moghaddam et al. (2019) [20] | NA | Cooperative game theory | MATLAB | × | Simulated | × | V2I/V2V |
| Sarsawat et al. (2019) [22] | Edge devices | D/M/1 & M/M/1 queue | LightBlue Bean+ | ✓ | Real | × | × |
| Mashayekhy et al. (2019) [23] | NA | Coalitional graph game | MATLAB | × | Simulated | NA | Cloud |
| Al-khafajiy et al. (2019) [24] | Edge devices | Resource management | MATLAB | × | Simulated | ✓ | Edge devices |
| Zhou et al. (2019) [25] | End devices | Contract-matching | NA | × | Simulated | ✓ | V2I/V2V |
| Zeseya et al. (Proposed) | Micro-level fog units | Queuing-based pricing | AnyLogic | ✓ | Simulated | ✓ | Micro-units |
time, and the time to return the results. We assume that the overhead of returning the results is negligible. The challenge is to balance the workloads on fog nodes to ensure QoS; the problem can be defined as,

    P:   min[T_i], ∀ i ∈ S
    s.t. F_r^min ≤ ω_r ≤ F_r^max
         λ_s →_F min[D_i]
         T_i ≤ Λ                                                 (7)

where the workload ω_r at any fog node r ∈ R is bound by the capacity of the fog node [F_r^min, F_r^max]. D is the propagation delay between the task source and the offloaded fog node, with the task offloaded to the one with minimal transmission delay. Assume that task i ∈ S is computed with probability Pr(·) over the possible task computation space K = {local, fog}. The service deadline Λ is given as Σ_{κ∈K} Pr(κ) · T_i^κ with Σ_{κ∈K} Pr(κ) = 1. The T_i is bound by this deadline.

Computation offloading model The offloading decision at any node is based on the completion time T_i over the task computation space K = {local, fog}. The offloading decision model d can be stated as,

    d_ij = { 1   if T_i > Λ
           { 0   otherwise                                       (8)

where zero (0) means the task is executed locally and one (1) means the task is offloaded when it exceeds its service deadline Λ.
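The threshold rule of Eq. (8) translates directly into code; the function name is ours:

```python
def offload_decision(T_i, deadline):
    # Eq. (8): offload (1) only when local completion time T_i would
    # exceed the service deadline Lambda; otherwise execute locally (0).
    return 1 if T_i > deadline else 0
```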
4 PROPOSED FOG-FEDERATED DISTRIBUTION
A typical fog federation comprises a set of fog units collaborating to provide QoS and to achieve workload balancing. Each unit can communicate with end devices in its limited range. Often the units are connected through a high-speed wired connection. In this work, we introduce the concept of micro-level fog deployment, where connected fog units deployed at road intersections form a fog federation. The main objective is to distribute the workload effectively to avoid overloaded fog units within the federation.
The proposed system comprises smart vehicles and fog computing units. The vehicles have onboard computing units with limited capacity and storage; hence, for compute-intensive tasks, nearby fog locations are used. That is, the vehicle offloads tasks to nearby fog locations via a wireless communication link. Note that due to varying traffic density, the workload on each fog location is different. At peak times, some fog locations become overloaded whereas others remain underloaded. In the former case, the QoS is difficult to maintain owing to long end-to-end delays. In the proposed framework, we introduce a fog federation placed at the micro-level to maintain the QoS for IoT devices. Generally, the fog units are deployed at intersections where vehicles slow down or even stop for a while, thus helping the fog location execute a task and return its result directly to the source vehicle; otherwise, an ad hoc based mechanism is used to relay the results.
Local offloading The task offload decision model on a vehicle is based on the existing state of its computing unit. That is, it considers the number of pending tasks in the vehicle's task queue and whether the vehicle is within the communication range of any fog unit; if that is the case, then the task is offloaded to the nearby fog unit. Note that the current workload at the fog unit is unknown to the vehicle, which is true in real-world scenarios. Furthermore, the computing units installed at the fog locations have higher computing capacity compared to that of the vehicles, encouraging the offloading of delay-sensitive tasks. The execution of a typical smart vehicle is illustrated in Algorithm 1, where tasks are generated at regular intervals and offloaded to a nearby fog unit subject to the availability of a communication channel. Here, with input queues (for messages, tasks, and results) and the vehicle data transmission range, in line 2, tasks are generated and placed on the task queue. Later, in lines 5-12, a task is offloaded to the local RSU, one in the vehicle's range, or else executed locally; this is followed by its addition to the executing task queue. In lines 14-18, upon reception of the task completion message, the completed task is removed from the executing task queue and queued into the output queue.
Fog offloading Each fog unit is a multi-core device with an
input queue, output queue, workload manager, federation
manager, decision manager, and communication module, as
illustrated in Fig. 2. The workload manager (WM) keeps
track of the tasks received, processed and returned after
execution. It receives a task, places it in the input queue
Algorithm 1 Workflow for smart vehicle
Input
  M: message queue; Q: task queue; O: output queue;
  Γ: vehicle data transmission range
Output status message
 1: while true do
 2:   if (t ← GenerateTask()) ≠ φ then
 3:     Q.Enqueue(t)
 4:   end if
 5:   if (t ← Q.Dequeue()) ≠ φ then            ▷ get task
 6:     if r ∈ R within Γ and U{0, 1} then
 7:       Send(t, r, REQUEST)                  ▷ send task to local RSU
 8:     else
 9:       OnBoardCompute(t)
10:     end if
11:     P.Enqueue(t)                           ▷ add to executing task queue
12:   end if
13:   m ← M.Dequeue()                          ▷ get message from queue
14:   if m.type IS COMPLETED then              ▷ task completed by RSU
15:     t ← m.task
16:     P.Remove(t)                            ▷ remove completed task
17:     O.Enqueue(t)                           ▷ add to output queue
18:   end if
19: end while
and dispatches its results from the output queue to the source vehicle. The federation manager (FM) is responsible for sharing current workload information with other fog units and maintaining their information. The FM also periodically gathers all relevant information to update its local registers. The current status of all the fog units is used by the decision manager (DM) to select the outsourcing node. The selection criterion varies based on the algorithm implemented at the DM. We propose a pricing-based workload distributor algorithm to evenly balance the workload among all the fog units. Last, the communication module is responsible for maintaining a connection with other fog units and vehicles in the communication range. The proposed fog-based federation approach is illustrated in Algorithm 2. Here, with a message queue, task queue, and execution task list as input, in lines 2-9, the task is offloaded to the federation when a suitable federate is available and otherwise executed locally. In lines 11-17, the task request and completion messages are handled; that is, the results are returned to the requesting federation resources.
Algorithm 2 Workflow for RSU federate
Input
  M: message queue; Q: task queue; P: execution task list
Output status message
 1: while true do
 2:   if (t ← Q.Dequeue()) ≠ φ then            ▷ get task
 3:     if (f ← SelectFederate()) ≠ φ then
 4:       Send(t, f, OFFLOAD)                  ▷ send task to federate
 5:     else
 6:       OnBoardCompute(t)
 7:     end if
 8:     P.Enqueue(t)                           ▷ add to executing task queue
 9:   end if
10:   m ← M.Dequeue()                          ▷ get message from queue
11:   switch m.type do
12:     t ← m.task
13:     case REQUEST                           ▷ compute task received
14:       Q.Enqueue(t)
15:     case COMPLETED                         ▷ task completed by federate
16:       Send(t, t.vehicle, COMPLETED)        ▷ return result to vehicle
17:       P.Remove(t)                          ▷ remove completed task
18: end while
Fig. 2: Internal architecture and connectivity of fog units.
Pricing-based workload distributor (F) To achieve an optimal distribution of workload on fog nodes with minimal delay, the tasks are further outsourced to suitable fog nodes to meet the QoS and service deadline. In the proposed price-based utilization model, we assume that fog locations placed at the micro-level are managed by different providers forming a fog-federation consortium based on a pricing factor where the resources with the least price are shared. The consortium comprises two participating entities, buyers and sellers: fog locations with a significant workload buy resources from selling fog locations with less workload. Suppose there is a buyer with utility cost u, measured in terms of queuing time, as defined earlier. If a task is c Mbits, then the buyer bids b = c · u. Next, the buyer asks for bids from n potential sellers with utility costs v = v_1, ..., v_n, so the bids made are s = c · v. In this work, the pricing is dynamic, varying with the current workload at the fog locations, with the set of bids from all the auction participants p = {b} ∪ s. For instance, the higher the workload at a fog location, the higher its price. That is, the pricing factor ρ for a task is normalized within [0, 1], computed as,

    ρ = (p − p⁻) / (p⁺ − p⁻)                                     (9)

where (p⁻, p⁺) = (min(p), max(p)) are the queuing times at the fog locations with the least and the most workload, respectively. Based on ρ, the buyer's request for auction is rejected with "no deal" if ρ(b) < 0.5, the "reserve price". Otherwise, a unique bid strategy is followed where the winner is the one with the lowest pricing factor, the lowest bidder. Among multiple sellers, a buyer selects the one with the least price. The offload decision model d is given as,

    d(t) = { 1   if ρ_i ≤ ρ_x, x = min(j ∈ R | j ≠ i), i ∈ R
           { 0   otherwise                                       (10)
In general, there are always a few underloaded resource providers due to the limited number of local customers; the fog consortium encourages other providers to use these available resources.
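The auction described above can be sketched as follows, assuming queuing times stand in for the utility costs; `pricing_factor` and `select_seller` are hypothetical names for illustration, not the authors' implementation:

```python
def pricing_factor(p, p_all):
    """Normalized pricing factor rho in [0, 1], Eq. (9): a participant's
    queuing time p scaled by the extremes (p-, p+) over all bids."""
    p_min, p_max = min(p_all), max(p_all)
    if p_max == p_min:
        return 0.0  # all participants equally loaded
    return (p - p_min) / (p_max - p_min)

def select_seller(buyer_time, seller_times, reserve=0.5):
    """Buyer's auction round: 'no deal' (None) unless the buyer's pricing
    factor reaches the reserve price; otherwise the least-loaded
    (lowest-priced) seller wins."""
    p_all = [buyer_time] + seller_times
    if pricing_factor(buyer_time, p_all) < reserve:
        return None  # buyer is not loaded enough to justify outsourcing
    # among multiple sellers, pick the one with the least price
    return min(range(len(seller_times)), key=lambda j: seller_times[j])
```

A heavily loaded buyer (queuing time 10) among sellers with times [2, 6, 4] outsources to the first seller, whereas a lightly loaded buyer (time 3) is rejected with "no deal".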
5 EVALUATION
To evaluate the proposed federated task offloading scheme, denoted as F, we compare it to the trivial non-federated F0 with no offloading within the federation, and to two traditional offloading algorithms. The measures used are success rate, offload rate, queuing delay, and end-to-end delay.
Fig. 3: Network topology used to benchmark the proposed
work. The arrival rates for the entry points are defined as
(λ1, λ2, λ3).
Simulation setup For the implementation of the proposed scheme, we use AnyLogic 8 PLE 8.5.1¹ as the agent-based simulation platform with support for traffic simulation. We use a custom map inspired by New York City blocks with vehicular movements lasting for up to an hour. The simulation results are averaged over five random runs. Other simulation parameters used are listed in Table 2.
Network topology used to benchmark the proposed work is illustrated in Fig. 3. All roads are bi-directional, with nine multi-core fog units deployed and connected via a wired link. The units communicate with vehicles in range via a wireless link. The arrival rate, denoted as λ, is defined as the number of vehicles entering the simulation per entry point per hour. The entering vehicles take random paths until exiting the simulation through any exit point. To benchmark different algorithms, we define four workload scenarios based on the arrival rate as listed in Table 2. This is done to simulate imbalanced workload situations, for instance, with standalone RSUs failing to handle the incoming requests. Such imbalance is common in realistic scenarios, with regions of dense vehicular traffic easily overloading the nearby RSU. On the other hand, resources at the nearby RSU remain under-utilized in a less dense environment. Thus, RSU-based collaborative resource sharing facilitates handling resource requests in varying vehicular environments.
1. https://www.anylogic.com/
TABLE 2: Simulation configuration and system specification.

| Parameter | Value |
| Simulation area | 3 × 3 km |
| Total simulation time | 1 hr |
| Simulation repetition | 5 times |
| Vehicle speed | [2.78–16.67] m/s |
| Vehicle acceleration | 1.6 m/s² |
| Vehicle deceleration | 2.6 m/s² |
| Vehicle compute capacity | 50 MHz |
| Compute request size | [15–50] Mbits |
| Task generation interval | random |
| # of fog units | 9 (nine) |
| Fog-unit range | 100 m |
| Fog-unit compute capacity | [2.6–3.5] GHz |
| Fog-unit computing cores | 8 (eight) |
| Scenario S1 | (λ1=100, λ2=200, λ3=300) |
| Scenario S2 | (λ1=200, λ2=300, λ3=400) |
| Scenario S3 | (λ1=300, λ2=400, λ3=500) |
| Scenario S4 | (λ1=400, λ2=500, λ3=400) |
| CPU | 3.40 GHz Intel Core i7 |
| RAM | 4.00 GB |
| OS | Microsoft Windows 10 |
| Simulator | AnyLogic v8.4.0 |
Scenario In the simulation, the vehicular traffic is varied in terms of the arrival rate. Every vehicle entering the simulation is equipped with an onboard computing unit and storage. During its lifetime, each vehicle with compute and storage capabilities generates tasks. A task offload decision is made based on the pending tasks and/or direct connectivity with the micro-fog unit. As mentioned earlier, there are eight entry points where vehicles enter the simulation. We categorize these entry points into three groups with different arrival rates. That is, for evaluation, we define four scenarios with varying combinations of arrival rates, as listed in Table 2. Note that the fog nodes have heterogeneous computing capacities of up to eight cores in total; however, every vehicle is equipped with a similar four-core unit.
5.1 Result and Discussion
For evaluation, the measures used to benchmark the proposed pricing-based task offloading scheme are: queue length, queuing delay, end-to-end delay, offload rate, and workload deviation. We experiment with two variants of the proposed scheme, federated (F) and non-federated (F0). Moreover, we use two classical task offloading algorithms for comparison, the random walk algorithm (RWA) and the neighboring fogs algorithm (NFA), described as follows:
Random walk algorithm (RWA) In a random walk, tasks are outsourced to random fog locations to balance the workload [28]. Here, every fog unit uniformly offloads tasks among the fog nodes, the set of RSUs R; mathematically, U{r ∈ R}. There is no particular selection criterion.
Neighboring fogs algorithm (NFA) In NFA, only neighboring fog nodes are used to share the workload [29]. The source fog node uniformly offloads tasks among the nearest fog nodes R′ ⊂ R, the ones with less propagation delay; mathematically, U{r ∈ R′}.
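Under the stated uniform-selection models, the two baselines might be sketched as follows; the neighbor-set size k and the function names are our assumptions for illustration:

```python
import random

def rwa_select(fog_ids, rng=random):
    # Random walk: offload to a uniformly random fog unit, U{r in R}.
    return rng.choice(fog_ids)

def nfa_select(fog_ids, delays, k=2, rng=random):
    # Neighboring fogs: restrict the uniform choice to the k nodes
    # with the smallest propagation delay, U{r in R'}.
    neighbors = sorted(fog_ids, key=lambda r: delays[r])[:k]
    return rng.choice(neighbors)
```

With delays {1: 0.5, 2: 0.1, 3: 0.9} and k = 2, NFA only ever picks units 1 or 2, whereas RWA may pick any of the three.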
Queue length is defined as the number of tasks waiting in
the task queue at any time instant. It is often used in
decision models about the allocation of resources to provide
service. With equal-priority tasks, it represents the waiting
time for vehicles offloading tasks. Moreover, it represents
the workload at each RSU deployed under the federated
environment; for instance, the RSU with the least queuing
time is considered a suitable candidate for task offloading.
Fig. 4 shows the average queue length for the proposed
(F and F′), RWA, and NFA algorithms. We observe that
the queue lengths increase with increasing arrival rates.
However, F shows an advantage in terms of reduced queue
lengths over F′ and over algorithms using a simple selection
model such as RWA and NFA. The highest queue length is
observed in the F′ case due to the non-sharing of resources
in the micro-federation.
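The queue-length-based candidate choice described above can be sketched as follows; the function name and the dictionary layout are illustrative assumptions:

```python
def best_candidate(queue_lengths):
    """Pick the federate RSU with the shortest task queue, i.e. the
    least expected waiting time for equal-priority tasks."""
    return min(queue_lengths, key=queue_lengths.get)
```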
Fig. 4: RSU queue length with varying arrival rates per hour
per entry point.
Fig. 5: RSU end-to-end delay with varying arrival rates per
hour per entry point.
End-to-end delay is defined as the time taken to transmit
the task from its source to the RSU and then receive its
returned result at the source. That is, the delay includes
queuing and computation times. Fig. 5 shows the end-to-end
delay comparison among the proposed (F and F′), RWA,
and NFA. The highest delay is observed in F′ due to its
non-federated nature, primarily affected by the increasing
queue length. The proposed federated scheme F shows a
clear gain compared to F′, RWA, and NFA.
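Under this definition, the measured delay decomposes additively over the stages of a task's round trip; a sketch, where the component names are ours:

```python
def end_to_end_delay(t_up, t_queue, t_compute, t_down):
    """End-to-end delay: uplink transmission to the RSU, queuing
    at the RSU, computation, and downlink return of the result."""
    return t_up + t_queue + t_compute + t_down
```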
Total energy consumption The total energy is defined as
the sum of the energy consumed during task execution and
that spent maintaining the task queue. Fig. 6 shows the total
energy consumption comparison for all the scenarios. In S4,
the maximum number of tasks is generated; therefore, the
energy consumption shows an increasing trend in the non-federated
Fig. 6: RSU total energy consumption with varying arrival
rates per hour per entry point.
model; however, the proposed F consumes the least amount
of energy due to efficient workload distribution among the
federation members. In S4, the proposed F shows a 56% energy
reduction compared to the non-federated approach. Similarly,
F shows a significant energy reduction compared with NFA
and RWA. Further, at the maximum workload, RWA performs
better than NFA, whereas in scenario S3, NFA shows less
consumption. Thus, NFA and RWA remain close to each other
in terms of energy consumption. Moreover, in all scenarios,
the proposed F clearly outclasses all the other techniques.
Overall, a 72% energy reduction is achieved through the
proposed technique in comparison with the traditional
non-federated model.
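The energy measure above (execution plus queue maintenance) can be written as a simple sum; the per-task energy and queue-maintenance power constants here are assumed placeholders, not the paper's calibrated values:

```python
def total_energy(e_exec, n_tasks, p_queue, t_busy):
    """Total energy = per-task execution energy times the number of
    tasks executed, plus queue-maintenance power integrated over
    the busy period."""
    return e_exec * n_tasks + p_queue * t_busy
```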
Fig. 7: RSU offload rate with varying arrival rates per hour
per entry point.
Offload rate measures the number of tasks outsourced
from one federate (the local RSU) to another federate in
the federation. Fig. 7 shows that the proposed federated
scheme F offloads approximately 25% of the incoming tasks
from vehicles in the RSU's range, while the remaining tasks
are computed locally on the RSU. On the other hand, the
offload rate for RWA and NFA is approximately 50%, almost
twice that of F. This reduction in the offload rate in the
case of F is due to the pricing-based policy adopted, where
every RSU computes a local index to decide its role as a
buyer or seller. The nodes offload tasks across the federation
only if there is a seller among the set of neighboring
federates; otherwise, the task remains at the federate for
local computation.
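The buyer/seller decision can be sketched as follows: each RSU computes a local load index and takes the role of buyer (overloaded) or seller (under-loaded), and a task leaves the federate only when a seller exists nearby. The threshold value and the (queue length, capacity) data layout are our assumptions, not the paper's exact parameters:

```python
def role(queue_len, capacity, threshold=0.5):
    """Classify a fog unit from its local load index."""
    return "buyer" if queue_len / capacity > threshold else "seller"

def offload_target(local, neighbors):
    """Offload only when the local unit is a buyer and at least one
    seller exists among the neighboring federates; otherwise the
    task stays for local computation (returns None)."""
    if role(*local) != "buyer":
        return None
    sellers = [n for n in neighbors if role(*n) == "seller"]
    if not sellers:
        return None
    return min(sellers, key=lambda n: n[0] / n[1])  # least-loaded seller
```

Note how this differs from a fixed-threshold federation: a buyer surrounded only by other buyers keeps its tasks local rather than pushing them onto an already overloaded neighbor.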
Workload balancing Recall that the federated model is
implemented to balance the workload across under-utilized
computing resources, that is, to provide the agreed QoS to
end-users. To assess the workload across the federation, we
compute the workload deviation for the proposed schemes (F
and F′), RWA, and NFA, as illustrated by the error bars in
Fig. 8. In the case of F, the deviation becomes smaller with
increasing vehicle density, representing a balanced workload
across the federation. In contrast, the non-federated
scheme F′ demonstrates the maximum workload imbalance.
Moreover, RWA performs better than NFA, which uses a
locality-based federate selection model; that is, only
resources at neighboring federates are used for workload
balancing, relatively better than F′.
Fig. 8: Deviation of offloaded tasks per RSU. Note that the
mean deviation stands at 11%.
Similarly, Table 3 shows the workload deviation for each of
the evaluated techniques with varying arrival rates. The
proposed technique F shows a significant reduction in
workload imbalance compared to all other techniques. Thus,
fog units with evenly distributed workloads improve the QoS.
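The per-technique statistics reported in Table 3 (mean µ, deviation σ, and σ expressed as a percentage) can be reproduced from per-RSU task counts roughly as below; whether the paper uses the population or the sample deviation, and whether σ(%) is normalized by the federation-wide total, are our assumptions:

```python
import statistics

def workload_stats(tasks_per_rsu):
    """Mean and population standard deviation of tasks per RSU,
    with sigma also expressed as a percentage of the total
    number of tasks across the federation."""
    mu = statistics.mean(tasks_per_rsu)
    sigma = statistics.pstdev(tasks_per_rsu)
    return mu, sigma, 100.0 * sigma / sum(tasks_per_rsu)
```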
5.2 Discussion
The concept of fog computing is adopted to support delay-sensitive
applications. In most traditional systems, fog units work
independently, without any collaboration among them. Generally,
in the case of a heavy workload, the tasks are transferred to
the backend cloud data center. This use of the cloud
architecture incurs additional communication and scheduling
delays, making it the last possible solution to handle such
workloads. The concept of fog collaboration is still in its
initial stages, and many existing works cover fog collaboration
only at edge locations. In this paper, we proposed a micro-level
fog deployment model to handle heavy workloads through resource
sharing. The proposed model allows the outsourcing of tasks in a
balanced manner. At any instant of time, the fog units are
classified as buyers or sellers: the buyers are the ones facing
a heavy workload, whereas the sellers are underloaded. Once the
categorization is done, outsourcing is only allowed if the fog
units are relatively overloaded; otherwise, they compute tasks
locally. This contrasts with traditional federated environments,
where the federates outsource tasks as soon as they cross a
fixed threshold; such techniques can lead to a significant
number of tasks being outsourced even to already overloaded fog
units. The proposed work is evaluated against traditional random
walk and neighboring fog selection algorithms. The results show
a significant gain in terms of reduced delay, queue length, and
workload balance throughout the entire fog federation.
6 CONCLUSIONS
In this paper, we propose a novel micro-level fog federation
that enables resource sharing among fog units. The framework
allows the sharing of workload-related information for the
fog unit selection model, with the objective of meeting task
deadlines. The model proposes a pricing-based federate
selection scheme; that is, it allows the sharing of federated
resources from multiple possible sellers only if the source
is categorized as a buyer. The results demonstrate that the
proposed model, based on queuing time, distributes tasks
in a balanced manner across the federation compared to
traditional techniques.
AVAILABILITY
The proposed framework is an open-source project
anonymously available on GitHub at
https://github.com/AsadWaqarMalik/MicroFogFederation. We
expect researchers to add new ideas and models to take this
framework in interesting directions. Nevertheless, the authors
have an interest in developing new algorithms for task
offloading for federated vehicular fog resource provisioning.
ACKNOWLEDGEMENT
This work is supported by the Faculty Program, University
Malaya under Grant GPF019D-2019.
REFERENCES
[1] S. Vashi, J. Ram, J. Modi, S. Verma, and C. Prakash, “Internet of
things (iot): A vision, architectural elements, and security issues,”
in 2017 International Conference on I-SMAC (IoT in Social, Mobile,
Analytics and Cloud)(I-SMAC). IEEE, 2017, pp. 492–496.
[2] M. Burhan, R. Rehman, B. Khan, and B.-S. Kim, “Iot elements, lay-
ered architectures and security issues: A comprehensive survey,”
Sensors, vol. 18, no. 9, p. 2796, 2018.
[3] M. Aazam, S. Zeadally, and K. A. Harras, “Fog computing archi-
tecture, evaluation, and future research directions,” IEEE Commun.
Mag., vol. 56, no. 5, pp. 46–52, 2018.
[4] C. Huang, R. Lu, and K.-K. R. Choo, “Vehicular fog computing:
architecture, use case, and security and forensic challenges,” IEEE
Commun. Mag., vol. 55, no. 11, pp. 105–111, 2017.
[5] W.-H. Kuo, Y.-S. Tung, and S.-H. Fang, “A node management
scheme for r2v connections in rsu-supported vehicular adhoc net-
works,” in 2013 International Conference on Computing, Networking
and Communications (ICNC). IEEE, 2013, pp. 768–772.
[6] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin, and S. Chen, “Vehicular fog
computing: A viewpoint of vehicles as the infrastructures,” IEEE
Trans. Veh. Technol., vol. 65, no. 6, pp. 3860–3873, 2016.
[7] V. G. Menon and P. J. Prathap, “Moving from vehicular cloud
computing to vehicular fog computing: Issues and challenges,”
International Journal on Computer Science and Engineering, vol. 9,
no. 2, pp. 14–18, 2017.
[8] V. Veillon, C. Denninnart, and M. A. Salehi, “F-fdn: Federation
of fog computing systems for low latency video streaming,” in
2019 IEEE 3rd International Conference on Fog and Edge Computing
(ICFEC). IEEE, 2019, pp. 1–9.
[9] M. Ryden, K. Oh, A. Chandra, and J. Weissman, “Nebula: Dis-
tributed edge cloud for data intensive computing,” in 2014 IEEE
International Conference on Cloud Engineering. IEEE, 2014, pp. 57–
66.
[10] F. Yang, Z. Zhu, S. Zhao, Y. Yang, and X. Luo, “Optimal task of-
floading in fog-enabled networks via index policies,” in 2018 IEEE
Global Conference on Signal and Information Processing (GlobalSIP).
IEEE, 2018, pp. 688–692.
TABLE 3: Comparison showing workload distribution variation among different techniques (µ and σ in tasks).

Scenario | Non-federated (F′)      | RWA                    | NFA                     | Federated (F)
         | µ      σ      σ (%)     | µ      σ     σ (%)     | µ      σ      σ (%)     | µ      σ     σ (%)
1        | 15652  7625   5.413     | 16436  6280  4.245     | 17344  5786   3.706     | 16632  7497  5.009
2        | 25580  12985  5.64      | 26388  5340  2.249     | 25030  6473   2.874     | 26323  3795  1.602
3        | 35309  18189  5.724     | 35507  8136  2.546     | 34383  10345  3.343     | 35203  175   0.055
4        | 38367  18867  5.464     | 38536  8685  2.504     | 39392  12033  3.394     | 38327  222   0.064
[11] Y. Fan, L. Zhai, and H. Wang, “Cost-efficient dependent task
offloading for multiusers,” IEEE Access, vol. 7, pp. 115843–115856,
2019.
[12] Y.-D. Lin, J.-C. Hu, B. Kar, and L.-H. Yen, “Cost minimization with
offloading to vehicles in two-tier federated edge and vehicular-
fog systems,” in 2019 IEEE 90th Vehicular Technology Conference
(VTC2019-Fall). IEEE, 2019, pp. 1–6.
[13] N. Mostafa, “Cooperative fog communications using a multi-level
load balancing,” in 2019 Fourth International Conference on Fog and
Mobile Edge Computing (FMEC). IEEE, 2019, pp. 45–51.
[14] L. Xiao, W. Zhuang, S. Zhou, and C. Chen, “Learning while of-
floading: Task offloading in vehicular edge computing network,”
in Learning-based VANET Communication and Security Techniques.
Springer, 2019, pp. 49–77.
[15] Y. Sun, X. Guo, S. Zhou, Z. Jiang, X. Liu, and Z. Niu, “Learning-
based task offloading for vehicular cloud computing systems,” in
2018 IEEE International Conference on Communications (ICC). IEEE,
2018, pp. 1–7.
[16] M. Dorigo and T. Stützle, “Ant colony optimization: overview and
recent advances,” in Handbook of Metaheuristics. Springer, 2019,
pp. 311–351.
[17] C. Wu, R. Buyya, and K. Ramamohanarao, “Cloud pricing models:
Taxonomy, survey, and interdisciplinary challenges,” ACM Com-
put. Surv., vol. 52, no. 6, p. 108, 2019.
[18] D. T. Nguyen, L. B. Le, and V. K. Bhargava, “A market-
based framework for multi-resource allocation in fog computing,”
IEEE/ACM Trans. Networking, 2019.
[19] A. Hammoud, A. Mourad, H. Otrok, O. A. Wahab, and H. Har-
manani, “Cloud federation formation using genetic and evolution-
ary game theoretical models,” Future Gener. Comput. Syst., vol. 104,
pp. 92–104, 2020.
[20] M. M. Moghaddam, M. H. Manshaei, W. Saad, and M. Goudarzi,
“On data center demand response: A cloud federation approach,”
IEEE Access, vol. 7, pp. 101829–101843, 2019.
[21] B. Ottenwälder, B. Koldehofe, K. Rothermel, and U. Ramachandran,
“Migcep: operator migration for mobility driven distributed
complex event processing,” in Proceedings of the 7th ACM International
Conference on Distributed Event-based Systems. ACM, 2013, pp.
183–194.
[22] S. Saraswat, H. P. Gupta, T. Dutta, and S. K. Das, “Energy efficient
data forwarding scheme in fog based ubiquitous system with
deadline constraints,” IEEE Trans. Netw. Serv. Manage., 2019.
[23] L. Mashayekhy, M. M. Nejad, and D. Grosu, “A trust-aware
mechanism for cloud federation formation,” IEEE Trans. Cloud
Comput., 2019.
[24] M. Al-khafajiy, T. Baker, H. Al-Libawy, Z. Maamar, M. Aloqaily,
and Y. Jararweh, “Improving fog computing performance via fog-
2-fog collaboration,” Future Gener. Comput. Syst., vol. 100, pp. 266–
280, 2019.
[25] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez,
“Computation resource allocation and task assignment optimiza-
tion in vehicular fog computing: A contract-matching approach,”
IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3113–3125, 2019.
[26] Z. Yin, H. Chen, and F. Hu, “An advanced decision model enabling
two-way initiative offloading in edge computing,” Future Gener.
Comput. Syst., vol. 90, pp. 39–48, 2019.
[27] Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile applica-
tion execution: Taming resource-poor mobile devices with cloud
clones,” in 2012 Proceedings IEEE Infocom. IEEE, 2012, pp. 2716–
2720.
[28] Q. Zhu, B. Si, F. Yang, and Y. Ma, “Task offloading decision in
fog computing system,” China Commun., vol. 14, no. 11, pp. 59–68,
2017.
[29] A. Bozorgchenani, D. Tarchi, and G. E. Corazza, “An energy
and delay-efficient partial offloading technique for fog computing
architectures,” in GLOBECOM 2017-2017 IEEE Global Communica-
tions Conference. IEEE, 2017, pp. 1–6.
Zeseya Sharmin received BSc degree in Com-
puter Science and Engineering from Green Uni-
versity of Bangladesh, Bangladesh, in 2018.
Currently she is doing her Master’s in Applied
Computing, University Malaya, Malaysia. Be-
sides, she is working as a Research Assistant
at University Malaya. Her research interests include
cloud computing, the internet of things, and mobile
computing.
Asad W. Malik is an Assistant Professor at
NUST-SEECS, Pakistan. Besides, he is working
as Senior Lecturer at the Department of Infor-
mation Systems, Faculty of Computer Science
& Information Technology, University Malaya,
Malaysia. He finished his Ph.D. with majors in
parallel and distributed simulation/systems from
NUST, Pakistan in 2012. His primary area of
interest includes distributed simulation, cloud/fog
computing, and internet of things.
Anis U. Rahman received Master’s degree in
Parallel and Distributed Systems from Joseph
Fourier University, France, and Ph.D. in Com-
puter Science from Grenoble University, France,
in 2013. He is currently an Assistant Professor at
NUST-SEECS, Pakistan. Besides, he is working
as Research Fellow at the Faculty of Computer
Science & Information Technology, University
Malaya, Malaysia. His main research interests
include internet of things and machine learning.
Rafidah MD Noor received BIT from University
Utara Malaysia, in 1998, M.Sc. in Computer Sci-
ence from Universiti Teknologi Malaysia, in 2000,
and Ph.D. in Computing from Lancaster Univer-
sity, UK, in 2010. She is currently an Associate
Professor with the Dept. of Computer System
& Technology, Faculty of Computer Science &
Information Technology, University Malaya, and
the Director of the Centre of Mobile Cloud Com-
puting Research (C4MCCR), which focuses on
high impact research related to transportation
systems including vehicular networks, wireless networks, network mo-
bility, quality of service, and internet of things.
... Fog federation formation problem is classified as NP-hard [4], indicating that there is no known algorithm that can solve the problem in polynomial time. Therefore, researchers typically employ heuristics, meta-heuristics, or approximation algorithms to find near-optimal solutions to the problem [9,22]. The formation algorithm is required to have low complexity, low accuracy and respect the privacy of the user data, while the formed federations must be stable (i.e., providers would not want to deviate from their federations) and profitable to the providers. ...
Preprint
Full-text available
In this paper, we tackle the network delays in the Internet of Things (IoT) for an enhanced QoS through a stable and optimized federated fog computing infrastructure. Network delays contribute to a decline in the Quality-of-Service (QoS) for IoT applications and may even disrupt time-critical functions. Our paper addresses the challenge of establishing fog federations, which are designed to enhance QoS. However, instabilities within these federations can lead to the withdrawal of providers, thereby diminishing federation profitability and expected QoS. Additionally, the techniques used to form federations could potentially pose data leakage risks to end-users whose data is involved in the process. In response, we propose a stable and comprehensive federated fog architecture that considers federated network profiling of the environment to enhance the QoS for IoT applications. This paper introduces a decentralized evolutionary game theoretic algorithm built on top of a Genetic Algorithm mechanism that addresses the fog federation formation issue. Furthermore, we present a decentralized federated learning algorithm that predicts the QoS between fog servers without the need to expose users' location to external entities. Such a predictor module enhances the decision-making process when allocating resources during the federation formation phases without exposing the data privacy of the users/servers. Notably, our approach demonstrates superior stability and improved QoS when compared to other benchmark approaches.
... These gadgets are collecting data through a variety of sensors and applications. Consequently, organisations are consistently producing and retaining substantial volumes of data [1]. Following the widespread adoption of the IoT, there has been a significant surge in the volume of data generated by various sensors. ...
... As a result, people's demands for driving comfort and vehicle safety are becoming more and more critical. 1 IoV is a research hotspot that offers great demand to government agencies, research institutes, and manufacturing companies. [2][3][4] IoV technology is an integration of information technology and transport. ...
Article
Full-text available
Internet of vehicles (IoV) comprises connected vehicles and connected autonomous vehicles and offers numerous benefits for ensuring traffic and safety competence. Several IoV applications are delay‐sensitive and need resources for computation and data storage that are not provided by vehicles. Therefore, these tasks are always offloaded to highly powerful nodes, namely, fog, which can bring resources nearer to the networking edges, reducing both traffic congestion and load. Besides, the mechanism of offloading the tasks to the fog nodes in terms of delay, computing power, and completion time remains still as an open concern. Hence, an efficient task offloading strategy, named Aquila Student Psychology Optimization Algorithm (ASPOA), is developed for offloading the IoV tasks in a fog setting in terms of the objectives, such as delay, computing power, and completion time. The devised optimization algorithm, known as ASPOA, is the incorporation of Aquila Optimizer (AO) and Student Psychology Based Optimization (SPBO). Task offloading in the IoV‐fog system selects suitable resources for executing the tasks of the vehicles by considering several constraints and parameters to satisfy the user requirements. The simulation outcomes have shown that the devised ASPOA‐based task offloading method has achieved better performance by achieving a minimum delay of 0.0009 s, minimum computing power of 8.884 W, and minimum completion time of 0.441 s.
... The following papers are based on edge-fog federated systems. Sharmin et al. [30] proposed a micro-level fog federation environment for vehicular networks that handle delay-sensitive applications and derived a pricing-based workload distributor algorithm to balance the workload. Yen et al. [31] designed a two-tier edge and vehicular-fog federated architecture and proposed a decentralized offloading configuration protocol (DOCP) for low-cost offloading. ...
Article
Full-text available
Edge and fog computing technologies are akin to cloud computing but operate in closer proximity to users, offering similar services on a more widely distributed and localized scale. To enhance the computing environment and enable efficient offloading of computing requests, we propose a unified federation of these technologies, forming a federated cloud-edge-fog (CEF) system. Unlike current offloading models limited to single-hop and unidirectional vertical scenarios, our model facilitates two-hop, bidirectional (horizontal and vertical) offloading. The CEF model enables not only fog and edge devices to offload tasks to the cloud but also allows the cloud to offload tasks to the edges and fogs, creating a more dynamic and flexible computing ecosystem. To optimize this system, we formulate an optimization problem focused on minimizing the total cost while adhering to latency constraints. We employ simulated annealing as the solution approach. By adopting the proposed CEF model and optimization strategy, organizations can effectively leverage the strengths of cloud, edge, and fog computing while achieving significant cost reductions and improved task offloading efficiency. The findings from our study indicate that adopting a two-hop offloading approach can result in cost savings of 10–20% compared to the traditional one-hop method. Furthermore, when incorporating horizontal and bidirectional offloading, cost savings of approximately 12% and 20% can be achieved, respectively, in contrast to scenarios without horizontal offloading and only unidirectional vertical offloading. This advancement holds promise for optimizing computing resources and enhancing the overall performance of distributed systems in real-world applications.
... A micro-level fog device placement strategy [12] is proposed by Sharmin et al. whereby fog units participate in a consortium. Devices within the consortium assume the roles of buyers and sellers and collaborate using a pricing model allowing fog units with large workloads to buy resources from those with less workload. ...
Article
Smart vehicles are equipped with onboard computing units designed to run in-vehicle applications. However, due to limited computing power, the onboard units are unable to execute compute-intensive tasks and those that require near real-time processing. Therefore tasks are offloaded to nearby fog/ edge devices that have more powerful processors. However, the fog devices are static, placed at fixed locations such as intersections, and have a limited communication range. Therefore they can only facilitate vehicles in their immediate vicinity and only limited areas of the city can be covered to provide services on demand. In this paper, we propose a UAV-based computing framework design termed Skywalker to provide computing in regions where there are no static fog units thereby extending coverage. Skywalker’s contributions are three-fold: (1) It allows for load-aware UAV placement and provisions a swarm of UAVs to fly to areas experiencing a gap in service where the size of the swarm is proportional to the demand. (2) It implements multiple scheduling algorithms that the UAVs swarm employs to divide up the task processing responsibility for individual UAVs within the swarm. (3) A zone-based delivery mechanism is being proposed to facilitate the return of completed tasks, either through direct delivery or relay-based methods. The choice between these options depends on the distance covered by the requesting vehicle from the UAV swarm. The efficiency of the framework is compared with existing techniques and it is found that it can greatly extend coverage during peak traffic hours while providing low communication delay and consuming minimum energy.
... Hence, there is great demand for driving comfort and car safety in the vehicular network. 3 Therefore, IoV has evolved in the network environment, and it becomes a hot research topic in research institutions, vehicle monitoring companies, and the government. 4,5 A major important factor to be noticed in IoT is the exchange of messages among vehicles to increase road safety. ...
Article
Full-text available
For the rising number of vehicles, and the advancement of communication and computation technologies, the perception of Internet of Vehicles (IoV) is introduced. It is utilized to capture vehicle‐based information that includes road description, road congestion, vehicle speed, and location. However, this information is important, and it showed more benefits in different ways, like route selection and message dissemination. IoV is the self‐structured network composed with vehicles lies in the road and the road side units (RSUs). It offers Infrastructure‐to‐Vehicle (I2V), as well as Vehicle‐to‐Vehicle (V2V) data transmission mechanism for transmitting service messages. To reliably broadcast the service information to the intended recipient in the IoV network still faced issues. Hence, an efficient service message transmission protocol is developed using the proposed fractional mayfly optimization algorithm (FMA) for selecting the relay vehicle and cooperative vehicle for transmitting service messages to the destination vehicle from RSU through the process of I2V and V2V scheduling. The RSU selects the relay vehicle for every service message using the proposed algorithm and allocates the cooperative vehicle by RSU for scheduling V2V transmission. The simulation results showed that the proposed scheduling method obtains the best channel quality indicator (CQI), delay, distance, packet delivery ratio (PDR), and throughput value of 0.92 for 150 vehicles and 0.005891, 6.060731, 83.45%, and 171.50 Mbps for 100 vehicles.
... Their technique maintains the data on fog networks by using information replication technology and reducing the overall need for large data centers. More studies [245][246][247][248][249][250] proposed some essential methods to provide LB in fog contexts. ...
Article
Full-text available
The Internet of things (IoT) extends the Internet space by allowing smart things to sense and/or interact with the physical environment and communicate with other physical objects (or things) around us. In IoT, sensors, actuators, smart devices, cameras, protocols, and cloud services are used to support many intelligent applications such as environmental monitoring, traffic monitoring, remote monitoring of patients, security surveillance, and smart home automation. To optimize the usage of an IoT network, certain challenges must be addressed such as energy constraints, scalability, reliability, heterogeneity, security, privacy, routing, quality of service (QoS), and congestion. To avoid congestion in IoT, efficient load balancing (LB) is needed for distributing traffic loads among different routes. To this end, this survey presents the IoT architectures and the networking paradigms (i.e., edge–fog–cloud paradigms) adopted in these architectures. Then, it analyzes and compares previous related surveys on LB in the IoT. It reviews and classifies dynamic LB techniques in the IoT for cloud and edge/fog networks. Lastly, it presents some lessons learned and open research issues.
Article
Full-text available
With the rapid advance of the Internet of Things (IoT), technology has entered a new era. It is changing the way smart devices relate to such fields as healthcare, smart cities, and transport. However, such rapid expansion also challenges data processing, latency, and QoS. This paper aims to consider fog computing as a key solution for addressing these problems, with a special emphasis on the function of load balancing to improve the quality of service in IoT environments. In addition, we study the relationship between IoT devices and fog computing, highlighting why the latter acts as an intermediate layer that can not only reduce delays but also achieve efficient data processing by moving the computational resources closer to where they are needed. Its essence is to analyze various load balancing algorithms and their impact in fog computing environments on the performance of IoT applications. Static and dynamic load balancing strategies and algorithms have been tested in terms of their impact on throughput, energy efficiency, and overall system reliability. Ultimately, dynamic load balancing methods of this sort are better than static ones for managing load in fog computing scenarios since they are sensitive to changing workloads and changes in the system. The paper also discusses the state of the art of load balancing solutions, such as secure and sustainable techniques for Edge Data Centers (EDCs), It manages the allocation of resources for scheduling. We aim to provide a general overview of important recent developments in the literature while also pointing out limitation where improvements might be made. To this end, we set out to better understand and describe load balancing in fog computing and its importance for improving QoS. We thus hope that a better understanding of load balancing technologies can lead us towards more resilient and secure systems.
Article
Due to the increasing number of service requests from the vehicles, the load at the road side units (RSUs) increases, which affects the delay-sensitive vehicle services. In Internet of Vehicles (IoV), the vehicles can communicate directly with other vehicles and take help from the vehicles to cooperatively accomplish a task. However, it is very challenging to cooperatively execute a task in an IoV environment with high traffic and dynamic vehicle movements. Furthermore, it is difficult for a task vehicle to choose trustworthy and cooperative vehicles. In this paper, we propose algorithms for cooperative task execution by taking the help of trusted vehicles, when it is not possible to complete a deadline-specified task through the RSUs. We propose a hedonic coalition formation game-based approach to form distributed coalitions of cooperative vehicles. We consider the trust score of the vehicles along with their computational capabilities and journey routes. After each task execution, the service feedback is reflected in the trust score of each cooperative vehicle in the coalition. Our proposed algorithms allow the cooperative vehicles to autonomously choose the coalitions and select a vehicle task to maximize their payoffs. To satisfy the task deadlines in multiple coalitions, we design the merging of vehicle coalitions. We consider the simulation of urban mobility (SUMO) tool to generate the mobility traces of the vehicles in a real road network of Berlin city, which considers the traffic junctions and vehicle density on the roads. Through extensive simulations, we show that the proposed algorithms significantly increase the service rate of delay-sensitive task requests by at least $30.5 \%$ and the trust score by at least $20.61 \%$ , compared to the benchmark schemes.
Article
Full-text available
The extensive use of mobile intelligent devices, such as smart phones and tablets, induces new opportunity and challenge for computation offloading. Task offloading is an important issue in a system consisting of multiple types of devices, such as mobile intelligent devices, local edge hosts and a remote cloud server. In this paper, we study the offloading assignment of multiple applications, each one comprising several dependent tasks, in such a system. To evaluate the total cost in the offloading process, a new metric is introduced to take into account features of different devices. The remote server and local hosts are more concerned about their processors utilization, while mobile devices pay more attention to their energy. Therefore, this metric uses relative energy consumption to denote the cost of mobile devices, and evaluates the cost of the remote server and local hosts by the processor cycle number of task execution. We formulate the offloading problem to minimize the system cost of all applications within each application’s completed time deadline. Since this problem is NP-hard, the heuristic algorithm is proposed to offload these dependent tasks. At first, our algorithm arranges all tasks from different applications in a priority queue considering both completed time deadline and task-dependency requirements. Then, based on the priority queue, all tasks are initially assigned to devices to protect mobile devices with low energy and make them survive in the assignment process as long as possible. At last, to obtain a better schedule realizing lower system cost, based on the relative remaining energy of mobile devices, we reassign tasks from high-cost devices to low-cost devices to minimize the system cost. Simulation results show that our proposed algorithm increases the successfully completed probability of whole applications and reduces the system cost effectively under time and energy constraints.
Article
Full-text available
The significantly high energy consumption of data centers constitutes a major load on the smart power grid. Data center demand response is a promising solution to incentivize cloud providers to adapt their consumption to power grid conditions. These policies not only mitigate the operational stability issues of the smart grid but also potentially decrease the electricity bills of cloud providers. Cloud providers can improve their contribution and reduce their energy cost by collaboratively managing their workload. Through cooperation in the form of cloud federations, providers can spatially migrate their workload to better exploit the benefits provided by demand response schemes over multiple locations. To this end, this work considers an interaction system between independent cloud providers and the corresponding smart grid utilities in the context of a demand response program. Leveraging cooperative game theory, this paper presents a federation formation scheme among cloud providers in the presence of a location-dependent demand response program. A distributed algorithm, coupled with an optimal workload allocation problem, is applied. The effect of federation formation on the clouds' profits and on smart grid performance is analyzed through simulation. Simulation results show that cooperation increases both the clouds' profits and smart grid performance compared to the noncooperative case.
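The spatial workload migration underlying this federation scheme can be illustrated with a greedy allocation: place workload units at the location with the lowest current electricity price until its demand-response cap is reached, then move to the next cheapest. The prices, caps, and field names below are illustrative assumptions, not the paper's optimization model.

```python
# Sketch of spatial workload allocation across federated data-center
# locations under a demand-response program: fill the cheapest location
# first, respecting each location's demand-response capacity cap.
# Prices and caps are illustrative assumptions.

def allocate_workload(total, locations):
    """locations: list of {"name", "price", "cap"}; returns {name: units placed}."""
    plan = {loc["name"]: 0 for loc in locations}
    for loc in sorted(locations, key=lambda l: l["price"]):  # cheapest first
        take = min(total, loc["cap"])
        plan[loc["name"]] = take
        total -= take
        if total == 0:
            break
    return plan

plan = allocate_workload(120, [
    {"name": "dc_east", "price": 5.0, "cap": 100},
    {"name": "dc_west", "price": 3.0, "cap": 50},
])  # dc_west fills to its cap of 50; the remaining 70 go to dc_east
```

The paper's distributed algorithm solves an optimal allocation jointly with federation formation; this greedy fill is only the single-resource, price-sorted special case.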
Article
This paper proposes an approach based on genetic algorithms and evolutionary game theory to study the problem of forming highly profitable federated clouds while maintaining stability among the members in the presence of dynamic strategies (i.e., cloud providers joining and/or leaving federations) that might result in decreased Quality of Service (QoS). Cloud federation helps cloud providers take advantage of available unused virtual machines. It allows providers to combine their resources in order to serve a larger pool of requests that could not have been served otherwise. We tackle the problem of forming federations that maximize the total profit they yield using a genetic algorithm. However, a problem may arise after federation formation, where many cloud providers, due to this dynamicity, may be tempted to reallocate their resources to other federations in search of a better payoff. Such behavior may lead to a decrease in QoS and cause a drop in the profit earned by the federations. Thus, we extend the genetic model into an evolutionary game, which aims to improve profit while maintaining stability among federations. Experiments were conducted using the CloudHarmony real-world dataset and benchmarked against the Sky federation model previously introduced in the literature. Both the genetic and evolutionary game-theoretical models outperform the benchmark. The evolutionary game model gives better results in terms of profit and QoS due to its mechanism of reaching a stable state in which no provider has an incentive to reallocate its resources to different federations.
Article
This article provides a systematic review of cloud pricing in an interdisciplinary approach. It examines many historical cases of pricing in practice and tracks down multiple roots of pricing in research. The aim is to help both cloud service providers (CSPs) and cloud customers capture the essence of cloud pricing when they need to make a critical decision, either to achieve competitive advantage or to manage cloud resources effectively. Currently, the number of available pricing schemes in the cloud market is overwhelming. Understanding these schemes and the associated pricing models is an intricate issue because it involves several domains of knowledge, such as cloud technologies, microeconomics, operations research, and value theory. Some earlier studies have introduced this topic unsystematically; their approaches inevitably lead to confusion for many cloud decision-makers. To address these weaknesses, we present a comprehensive taxonomy of cloud pricing, driven by a framework of three fundamental pricing strategies built on nine cloud pricing categories. These categories can be further mapped onto a total of 60 pricing models. Many of the pricing models have already been adopted by CSPs; others are widespread in other industries. We describe these model categories and highlight both their advantages and disadvantages. Moreover, this article offers an extensive survey of cloud pricing models proposed by researchers during the past decade. Based on the survey, we identify four trends of cloud pricing and the general direction, which is moving from intrinsic value per physical box to extrinsic value per serverless sandbox. We conclude that hyper-converged cloud resource pools, supported by cloud orchestration, virtual machines, open Application Programming Interfaces, and serverless sandboxes, will drive the future of cloud pricing.
Article
Ubiquitous Computing (UbiComp) is a computational paradigm that enhances the use of computing devices by making them available to the user anywhere and anytime. From the energy perspective, it is often very important to compute an entire task within a specific deadline with minimum energy consumption of the UbiComp system. The literature on determining the energy consumption of such systems does not consider periodic tasks or different sampling rates of the sensors, which eliminates the deadline constraints from the analysis. Since the periods of the tasks are not fixed, the delay estimated without considering a fixed period is lower than the actual value. In this paper, we assume that a UbiComp system based on Edge, Fog, and Cloud layers computes a periodic task within a specific deadline. We estimate the fractions of the task computed at each layer so as to reduce the energy consumption while completing the task within the deadline. We refer to this as the (x1, x2, x3)-Energy-Delay problem, where x1, x2, and x3 are the fractions of the task computed at the Edge, Fog, and Cloud layers, respectively, with x1 + x2 + x3 = 1 and 0 ≤ x1, x2, x3 ≤ 1. Our numerical and prototype results demonstrate the impact of data size, network topology, deadline, and sensor characteristics on the energy consumption, delay, and accuracy of the system.
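The (x1, x2, x3) split can be illustrated with a small brute-force search over the simplex of fractions, keeping only splits that meet the deadline and picking the minimum-energy one. The per-layer energy and delay coefficients below, and the assumption that layers compute their fractions in parallel, are made-up values for demonstration, not the paper's model.

```python
# Illustrative brute-force search for the (x1, x2, x3) split of a task
# across Edge, Fog, and Cloud that minimizes energy subject to a deadline.
# Coefficients are assumed values: the Cloud is cheapest in energy but
# slowest end-to-end; layers are assumed to work on their fractions in parallel.

ENERGY = (3.0, 2.0, 1.0)   # energy units per task-unit at Edge, Fog, Cloud
DELAY  = (1.0, 2.0, 4.0)   # seconds per task-unit (network + compute)

def best_split(deadline, step=0.05):
    """Grid-search all (x1, x2, x3) with x1 + x2 + x3 = 1; return the
    minimum-energy split whose slowest layer still meets the deadline."""
    best, best_energy = None, float("inf")
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x = (i * step, j * step, 1 - (i + j) * step)  # sums to 1
            delay = max(xk * dk for xk, dk in zip(x, DELAY))
            if delay > deadline + 1e-9:  # epsilon guards float round-off
                continue
            energy = sum(xk * ek for xk, ek in zip(x, ENERGY))
            if energy < best_energy:
                best, best_energy = x, energy
    return best, best_energy
```

With these coefficients and a 1-second deadline, the deadline caps the Cloud fraction at 0.25 and the Fog fraction at 0.5, so the cheapest feasible split pushes as much work as allowed away from the energy-expensive Edge.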
Article
In the Internet of Things (IoT) era, a large volume of data is continuously emitted from a plethora of connected devices. The current network paradigm, which relies on centralized data centers (aka cloud computing), has become inefficient at addressing IoT latency concerns. To address this, fog computing allows data processing and storage "close" to IoT devices. However, fog is still not efficient due to the spatial and temporal distribution of these devices, which leads to unbalanced loads on fog nodes. This paper proposes a new Fog-2-Fog (F2F) collaboration model that promotes offloading incoming requests among fog nodes, according to their load and processing capabilities, via a novel load balancer known as the Fog Resource manAgeMEnt Scheme (FRAMES). A formal mathematical model of F2F and FRAMES has been formulated, and a set of experiments has been carried out demonstrating the technical feasibility of F2F collaboration. The performance of the proposed fog load balancing model is compared to other load balancing models.
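The F2F offloading decision — serve locally when load allows, otherwise forward to a capable neighbor — can be sketched as a threshold check followed by least-utilized selection. The threshold value, the cloud fallback, and the node fields below are illustrative assumptions, not FRAMES itself.

```python
# Sketch of a Fog-2-Fog offloading decision: a node over its utilization
# threshold forwards the request to the least-utilized capable neighbor,
# otherwise it serves the request locally. Threshold, cloud fallback, and
# node fields are illustrative assumptions.

def place_request(node, neighbors, demand, threshold=0.8):
    """Return the name of the fog node (or 'cloud') that should serve the request."""
    if (node["load"] + demand) / node["capacity"] <= threshold:
        return node["name"]  # serve locally, no offloading needed
    candidates = [n for n in neighbors
                  if (n["load"] + demand) / n["capacity"] <= threshold]
    if not candidates:
        return "cloud"  # all fog nodes saturated: fall back to the cloud
    return min(candidates, key=lambda n: n["load"] / n["capacity"])["name"]
```

Choosing the least-utilized neighbor (rather than, say, round-robin) is what keeps the spatial load distribution balanced as traffic density shifts.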
Article
Fog computing is transforming the network edge into an intelligent platform by bringing storage, computing, control, and networking functions closer to end users, things, and sensors. How to allocate multiple resource types (e.g., CPU, memory, bandwidth) of capacity-limited heterogeneous fog nodes to competing services with diverse requirements and preferences in a fair and efficient manner is a challenging task. To this end, we propose a novel market-based resource allocation framework in which the services act as buyers and fog resources act as divisible goods in the market. The proposed framework aims to compute a market equilibrium (ME) solution at which every service obtains its favorite resource bundle under the budget constraint, while the system achieves high resource utilization. This paper extends the general equilibrium literature by considering a practical case of satiated utility functions. In addition, we introduce the notions of non-wastefulness and frugality for equilibrium selection and rigorously demonstrate that all the non-wasteful and frugal ME are the optimal solutions to a convex program. Furthermore, the proposed equilibrium is shown to possess salient fairness properties, including envy-freeness, sharing-incentive, and proportionality. Another major contribution of this paper is to develop a privacy-preserving distributed algorithm, which is of independent interest, for computing an ME while allowing market participants to obfuscate their private information. Finally, extensive performance evaluation is conducted to verify our theoretical analyses.
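A very simple stand-in for the market-based allocation described above is budget-proportional sharing: each service receives a share of every fog resource proportional to its budget. This is only the proportional-sharing special case (which trivially satisfies sharing-incentive and proportionality), not the paper's market-equilibrium solver; service names and budgets below are made up.

```python
# Sketch of budget-proportional resource sharing as a stand-in for the
# market-equilibrium allocation: each service (buyer) gets a fraction of
# every resource equal to its share of the total budget. Names, budgets,
# and capacities are illustrative assumptions.

def proportional_allocation(budgets, capacities):
    """budgets: {service: budget}; capacities: {resource: amount}.
    Returns {service: {resource: allocated amount}}."""
    total = sum(budgets.values())
    return {svc: {res: cap * b / total for res, cap in capacities.items()}
            for svc, b in budgets.items()}

alloc = proportional_allocation({"s1": 30, "s2": 70},
                                {"cpu": 100, "mem": 50})
# s1's 30% budget share buys 30% of each resource: 30 cpu, 15 mem
```

Unlike this fixed-fraction split, a market equilibrium lets each service spend its budget unevenly across resources according to its utility, which is how the paper achieves envy-freeness with heterogeneous preferences.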