DRAFT
1
Towards sustainable micro-level fog federated
load-sharing in internet of vehicles
Zeseya Sharmin1, Asad Waqar Malik2,3, Anis Ur Rahman2,3, Rafidah MD Noor1
1Department of Computer System and Technology, Faculty of Computer Science and Information
Technology, University Malaya, Malaysia
2Department of Information Systems, Faculty of Computer Science and Information Technology,
University Malaya, Malaysia
3School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and
Technology (NUST), Islamabad, Pakistan
Abstract—Advancements in technology have enabled access to innovative applications for connected devices. To handle growing computation requirements, backend cloud data centers become an inefficient solution due to the network overhead they incur. This is generally alleviated using edge locations deployed to meet the increasing computing demands. This work proposes a micro-level fog unit deployment to facilitate delay-sensitive applications. To manage workloads imbalanced by traffic density, the framework establishes a fog federation acting as a consortium where under-utilized resources are shared to sustain service quality. Moreover, we implement a price-based workload balancing algorithm to limit offloading among fog units relative to other consortium members. The experimental results show a balanced offload rate compared to traditional algorithms. Moreover, other measures like queue length, end-to-end delay, and workload balancing demonstrate performance gains under the federation. Overall, a 72% energy reduction is achieved through the proposed technique in comparison with the traditional non-federated model.
Index Terms—Fog computing, internet of vehicles, sustainable distribu-
tion, fog federation, task offloading
1 INTRODUCTION
Cloud computing and the internet of things (IoT) are key concepts expected to provide many innovative services and applications in the foreseeable future. This is primarily because the number of connected devices has surpassed the number of people since 2011 [1]. In 2012, there were 8.7 billion connected devices, a number expected to grow at a much faster rate, crossing 50 billion by 2020 [2]. Consequently, efficiently performing resource- and bandwidth-restricted tasks is becoming challenging when using subscription-based IoT services. One solution to this challenge is to expand the contemporary cloud onto proximity computing
Corresponding Author: Asad Waqar Malik, e-mail: asad.malik@um.edu.my
© 2020 IEEE. Personal use of this material is permitted. Permission from
IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to servers
or lists, or reuse of any copyrighted component of this work in other works.
Citation information: DOI 10.1109/JIOT.2020.2973420, IEEE Internet of
Things Journal
resources within the connected infrastructure. The goal is to maximize overall utilization by reducing the amount of information sent to the cloud for processing, analysis, and/or storage; here, fog computing plays an important role. In particular, delay-sensitive services can take advantage of its proximity to reduce latency and cost [3].
Traditionally, vehicular networks are limited by space and connectivity requirements, whereas more recent vehicular ad hoc networks (VANETs) depend on cloud computing. The latter provides vehicular cloud computing (VCC) services that are low latency and uninterrupted. But as these services become ubiquitous, they demand high bandwidth to communicate with the cloud server, which is a challenge when satisfying the QoS. However, with the emergence of ever-growing numbers of connected vehicles, a new networking paradigm referred to as vehicular fog computing (VFC) is being used to increase computational efficiency [4]. Here, vehicles offload computing to the edge of the network using vehicle-to-everything (V2X) connectivity. Moreover, transient storage can be implemented at the roadside units (RSUs). Notably, such offloading mechanisms incur a communication overhead, thereby demanding new design approaches that allocate computation and communication resources to balance the energy-performance trade-off while providing a sustained user experience.
Undoubtedly, the concept of fog computing has been
explored extensively for delay-sensitive applications. More-
over, it helps reduce the data transmitted across the backend
network. However, the large number of devices in every region makes it difficult for an edge location to maintain the QoS. There is a need for micro-level fog placement that manages the QoS within its coverage area. The recent concept of federated fog computing enables functionality at different locations rather than at a centralized location accessible via a single network link, as illustrated in Figure 1.
Moreover, it provides the flexibility of horizontal expansion
over dispersed locations. Direct communication and an extendable framework minimize the request-response time over a reliable connection, hence achieving a better quality of service (QoS) since the functionality resides closer to the service request source.
Fig. 1: High-level fog consortium model.
In this study, we propose a fog collaboration framework where fog devices are placed at the micro-level and collaborate based on a pricing model. The model fluctuates dynamically with the current workload; therefore, while offloading, the fog agent located at each fog location collaborates to find a suitable fog resource to outsource the task. The main contributions of the proposed work are as follows:
• Propose a micro-level fog federation environment for vehicular networks, especially to handle delay-sensitive applications. A vehicle either executes tasks locally or offloads them to nearby fog locations. Within range, a vehicle dynamically establishes a connection with nearby fog locations to offload tasks and balance its pending-task queue.
• Design a pricing model for outsourcing tasks within the fog federation. The model fluctuates dynamically based on the current workload of a fog federate, and the outsourcing is transparent to end-users.
• Evaluate the proposed technique against a traditional non-federated fog environment in terms of queue length, delay, and efficiency. Moreover, the fog locations are benchmarked with varying compute resources.
Organization – The rest of the paper is organized as follows. Section 2 covers recent contributions in the domain of fog computing. The system model is presented in Section 3. Section 4 covers the proposed system architecture design and its implementation. The experimental results and simulation framework setup are discussed in Section 5. Finally, we conclude the paper in Section 6.
2 RELATED WORK
The use of vehicular networks has increased tremendously due to their wide adoption for different innovative services such as content sharing, information sharing, data caching, and emergency message dissemination. This section covers recent contributions towards the sustainability of such networks while providing these services.
Federated vehicular fog networks – With an increasing number of smart vehicles, it has become a challenge to establish secure and error-free communication among moving vehicles in a vehicular network. Initially, such networks were augmented using RSUs connected to backend cloud data centers [5], the VFC paradigm, but this suffers from limited bandwidth and high deployment costs. Recently, vehicular fog computing has been used to better utilize available communication and computation resources in vehicular networks through effective load distribution among nearby fog devices [6], [7].
To support delay-sensitive applications, the fog delivery network (FDN) architecture is extended in [8] to include federated fog devices, termed the federated fog delivery network (F-FDN). The architecture is composed of several
connected FDNs that are further connected to the back-
end cloud data center. A relevant fog-based storage frame-
work called Nebula specifically targets applications us-
ing region-aware, neighborhood-specific and information-
intensive storage [9]. No doubt, efficient management of
cloud storage and computing resources available at nearby
fog devices is important for any delay-sensitive application.
Task offloading in vehicular networks – With the
widespread adoption of smart vehicles, task offloading is
a viable solution to improve system performance. That is, vehicles can share their computation resources across the network to help other vehicles or users execute tasks at nearby compute nodes. Note that with the increasing
complexity of fog networks, it is difficult to find an opti-
mal allocation policy for the task offloading. Nonetheless,
there is substantial literature exploring different decision
models for task offloading, subsequently, improving the
system performance. A relevant study in [10] proposes
a task offloading framework from the user equipment to
nearby fog nodes in fog-enabled networks using a heuristic-
based dynamic allocation index. Similar two-tier federated approaches for vehicular networks, like those in [11], [12], use device characteristics to estimate completion deadlines along with the total cost of offloading in terms of energy and delay. The goal is to minimize cumulative latency for task offloading to various devices within a resource-constrained environment. However, most of the aforementioned models
ignore typical network and mobility issues. More recent
models like the one in [13] propose a resource selection
service based on run-time predictions for the fog environ-
ment. Similar works use adaptive learning [14], multi-armed
bandit (MAB) [15], and probabilistic techniques like ant
colony optimization [16] to minimize the average offloading
delay. Another category of techniques uses pricing models.
For instance, in [17], [18] a resource-based pricing model is
correlated with expenditure; thus, providing better methods
to access and allocate distributed resources efficiently.
Cloud federation for task offloading – The concept of federation for cloud and edge computing is used to maximize profit and optimize the use of available resources. The term federation commonly refers to the integration of autonomous entities under a mutual agreement, establishing a collaborative environment to maximize profit. In [19], the authors propose
a highly profitable cloud formation model using evolution-
ary game theory. The model maintains balance among its
members to ensure the QoS, with members deallocating their resources to use those available elsewhere to maximize profit. The evaluation shows that the model performs better than traditional genetic algorithms in terms of profit and QoS. Similarly, in [20], a hedonic coalition
formation algorithm for cloud federation is proposed to
reduce energy consumption and maximize overall profit.
Another approach implements a load migration planning
policy between cloud and fog infrastructure to reduce delay
and network usage [21]. However, these works do not cover offloading among the members of a fog federation.
Summary of literature review – Table 1 summarises recent contributions on vehicular fog networks. Most of the work presented here utilizes the computing power of vehicles through resource sharing. In some cases, RSUs are used as a computing agent or a centralized coordinator for task or information sharing. However, limited work exists covering fog federation to handle peak workloads on some devices compared to others. In this paper, an extensive micro-level fog federation framework is presented, which outsources tasks based on a buyer-and-seller model. Thus, the outsourcing decision is not managed individually; it is relative to the workload of all members of the fog consortium.
3 SYSTEM MODEL
Resource model – Consider V as the set of vehicles and R as the set of RSUs. The two combined give the resource set G ≜ V ∪ R. Each resource has on-board compute capability; for instance, F_i is the compute capacity available at resource i ∈ G.
Communication model – captures the time spent on transmission between connected nodes. The offloading data rate B_ij between two nodes is either wireless or wired, computed as

$$B_{ij} = \begin{cases} X_{ij}, & i \in V,\ j \in R \quad \text{(wireless)} \\ Y_{ij}, & i, j \in R \quad \text{(wired)} \end{cases} \qquad (1)$$

The first case is the wireless data offloading rate X_ij which, based on the Shannon-Hartley theorem, is given as

$$X_{ij} = W \log_2\!\left(1 + \frac{P}{N_0 W}\right) \qquad (2)$$

where W is the available bandwidth, P is the transmission power, and N_0 is the noise power spectral density. Note that we assume a symmetric data rate between vehicle-to-RSU and RSU-to-vehicle. The second case is the wired data offloading rate Y_ij, which is higher and more robust than wireless communication.
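For concreteness, the wireless rate in Eq. (2) can be sketched in code; the function name and sample parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def wireless_rate(W, P, N0):
    """Shannon-Hartley rate X_ij = W * log2(1 + P / (N0 * W)).

    W: channel bandwidth in Hz, P: transmission power in W,
    N0: noise power spectral density in W/Hz (illustrative units).
    """
    return W * math.log2(1 + P / (N0 * W))

# With an SNR of P / (N0 * W) = 3, the rate is W * log2(4) = 2 * W.
rate = wireless_rate(1e6, 3.0, 1e-6)
```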
Task model – Each vehicle v ∈ V generates computational tasks of random size in Mbits with random input data in KBs, forming a task set S. Each computation request t is formulated as t ≜ {s, c}, where s is the input size for the computation and c is the total number of CPU cycles needed to complete the task.
Computation model – Tasks are executed locally or offloaded to the nearby RSU based on an offloading policy d_ij. However, due to limited computation resources at the RSU, more resources are provisioned at neighbouring RSUs, which we refer to as fog nodes.
Time consumption model – defines the overhead incurred when handling a task. There are two possible cases: the task is handled locally or offloaded to a fog node.
• Local computing – The computation time T_i for the i-th task is computed as

$$T_i = \frac{1}{F_i}\left(c_i + \sum_{x \in Q} c_x\right) \qquad (3)$$

where c_i are the CPU cycles required by the task, Σ_{x∈Q} c_x are the total CPU cycles pending in the vehicle's local task queue, and F_i is the compute capacity of the allocated resource.
• Offload computing – The computation time T_i for the i-th task when offloaded to a neighboring RSU is computed as [26]

$$T_i = \frac{s_i}{B_{ij}} + \frac{1}{F_j}\left(c_i + \sum_{x \in Q} c_x\right) \qquad (4)$$

where s_i is the task input size, c_i are the CPU cycles required by the task, Σ_{x∈Q} c_x are the total CPU cycles pending in the RSU's task queue, B_ij is the offload data rate to and from the RSU, and F_j is the compute capacity of the allocated resource at the RSU.
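Equations (3) and (4) translate directly into code; the function names and sample numbers below are illustrative assumptions mirroring the paper's symbols.

```python
def local_time(c_i, pending_cycles, F_i):
    # Eq. (3): required cycles plus cycles pending in the local queue,
    # divided by the compute capacity of the on-board unit.
    return (c_i + sum(pending_cycles)) / F_i

def offload_time(s_i, B_ij, c_i, pending_cycles, F_j):
    # Eq. (4): transmission delay s_i / B_ij plus execution time at the RSU.
    return s_i / B_ij + (c_i + sum(pending_cycles)) / F_j
```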
Energy consumption model – estimates the energy overhead; based on the time consumption model, it is defined for local and offloaded computing.
• Local computing – The energy consumption E_i for the i-th task is computed as

$$E_i = \upsilon_i \left(c_i + \sum_{x \in Q} c_x\right) \qquad (5)$$

where υ_i is the coefficient of energy consumption per unit CPU cycle, set to 10^{-11}(F_i)^2 [27].
• Offload computing – The energy consumption E_i at the i-th resource is computed as

$$E_i = \frac{P \cdot s_i}{B_{ij}} + \upsilon_j \left(c_i + \sum_{x \in Q} c_x\right) \qquad (6)$$

where υ_j is the coefficient of energy consumption per unit CPU cycle at the RSU, set to 10^{-11}(F_j)^2.
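A matching sketch of Eqs. (5) and (6), using the per-cycle coefficient 10^{-11}F^2 from the paper; function names and sample values are illustrative.

```python
def local_energy(c_i, pending_cycles, F_i):
    # Eq. (5): v_i = 1e-11 * F_i^2 energy units per CPU cycle [27].
    v_i = 1e-11 * F_i ** 2
    return v_i * (c_i + sum(pending_cycles))

def offload_energy(P, s_i, B_ij, c_i, pending_cycles, F_j):
    # Eq. (6): transmission energy P * s_i / B_ij plus computation at the RSU.
    v_j = 1e-11 * F_j ** 2
    return P * s_i / B_ij + v_j * (c_i + sum(pending_cycles))
```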
Overhead model – defines the total overhead when the task is computed locally or offloaded to a fog node.
• Local computing – The total local overhead is $\Omega_i^{loc} = \gamma_T T_i + \gamma_E E_i$, where γ_T and γ_E are the time and energy coefficients for the weighted overhead, such that γ_T + γ_E = 1 and 0 ≤ γ_T, γ_E ≤ 1.
• Offload computing – The total offloading overhead is $\Omega_i^{off} = \gamma_T T_i + \gamma_E E_i$, with the same coefficient constraints.
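The weighted overhead is thus a convex combination of time and energy; the default coefficients below are an illustrative assumption.

```python
def weighted_overhead(T_i, E_i, gamma_T=0.5, gamma_E=0.5):
    # Omega_i = gamma_T * T_i + gamma_E * E_i with gamma_T + gamma_E = 1.
    assert abs(gamma_T + gamma_E - 1.0) < 1e-9
    return gamma_T * T_i + gamma_E * E_i
```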
Problem formulation – The objective is to guarantee min-
imal delay when processing requests from vehicles. As
mentioned earlier in the time consumption model, the delay
includes the time to transmit the task to the fog node, the
time it waits in the fog node’s task queue, the task execution
TABLE 1: Summary of recent contributions.
Authors (year) | Fog node | Strategy/Model | Tools | Energy | Dataset | Network | Federation
Fan et al. (2019) [11] | End devices | Heuristic-based | DewSim | ✓ | Simulated | × | ×
Lin et al. (2019) [12] | Vehicles | Iterated greedy | NA | ✓ | Simulated | × | Vehicles
Sun et al. (2018) [15] | Vehicles | Adaptive learning | MATLAB | × | Real | ✓ | V2V
Nguyen et al. (2019) [18] | Edge devices | Pricing-based scheme | MATLAB | × | Real/Simulated | × | Edge devices
Hammoud et al. (2019) [19] | NA | Evolutionary game theory | MATLAB | ✓ | Real | × | Cloud
Moghaddam et al. (2019) [20] | NA | Cooperative game theory | MATLAB | × | Simulated | × | V2I/V2V
Sarsawat et al. (2019) [22] | Edge devices | D/M/1 & M/M/1 queue | LightBlue Bean+ | ✓ | Real | × | ×
Mashayekhy et al. (2019) [23] | NA | Coalitional graph game | MATLAB | × | Simulated | NA | Cloud
Al-khafajiy et al. (2019) [24] | Edge devices | Resource management | MATLAB | × | Simulated | ✓ | Edge devices
Zhou et al. (2019) [25] | End devices | Contract-matching | NA | × | Simulated | ✓ | V2I/V2V
Zeseya et al. (Proposed) | Micro-level fog units | Queuing-based pricing | AnyLogic | ✓ | Simulated | ✓ | Micro-units
time, and the time to return the results. We assume that the
overhead of returning the results is negligible. The challenge
is to balance the workloads on fog nodes to ensure QoS; the problem can be defined as

$$\mathbf{P}:\ \min[T_i],\ \forall i \in S \quad \text{s.t.}\quad F_r^{\min} \le \omega_r \le F_r^{\max},\quad \lambda_s \xrightarrow{\min[D_i]} F,\quad T_i \le \Lambda \qquad (7)$$
where the workload ω_r at any fog node r ∈ R is bounded by the capacity of the fog node [F_r^min, F_r^max]. D is the propagation delay between the task source and the offloaded fog node, with the task offloaded to the node with minimal transmission delay. Assume that task i ∈ S is computed with probability Pr(·) over the task computation space K = {local, fog}. The service deadline Λ is given as $\Lambda = \sum_{\kappa \in K} \Pr(\kappa) \cdot T_i^{\kappa}$ with $\sum_{\kappa \in K} \Pr(\kappa) = 1$; T_i is bounded by this deadline.
Computation offloading model – The offloading decision at any node is based on the completion time T_i over the task computation space K = {local, fog}. The offloading decision model d can be stated as

$$d_{ij} = \begin{cases} 1, & T_i > \Lambda \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$

where zero (0) means the task is executed locally and one (1) means the task is offloaded because it exceeds its service deadline Λ.
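Eq. (8) reduces to a threshold test against the service deadline; a minimal sketch:

```python
def offload_decision(T_i, deadline):
    # d_ij = 1 (offload) when the estimated completion time exceeds
    # the service deadline, otherwise 0 (execute locally).
    return 1 if T_i > deadline else 0
```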
4 PROPOSED FOG-FEDERATED DISTRIBUTION
A typical fog federation comprises a set of fog units that collaborate to provide QoS and achieve workload balancing. Each unit can communicate with end devices in its limited range, and the units are often connected to each other through a high-speed wired connection. In this work, we introduce the concept of micro-level fog deployment, where connected fog units deployed at road intersections form a fog federation. The main objective is to distribute workload effectively to avoid overloaded fog units within the federation.
The proposed system comprises smart vehicles and fog computing units. The vehicles have onboard computing units with limited capacity and storage; hence, for compute-intensive tasks, nearby fog locations are used. That is, a vehicle offloads tasks to nearby fog locations via a wireless communication link. Note that due to varying traffic density, the workload on each fog location differs. At peak times, some fog locations become overloaded while others remain underloaded. In the former case, the QoS is difficult to maintain owing to long end-to-end delays. In the proposed framework, we introduce a fog federation placed at the micro-level to maintain the QoS for IoT devices. Generally, the fog units are deployed at intersections where vehicles slow down or even stop for a while, helping the fog location execute a task and return its result directly to the source vehicle; otherwise, an ad hoc mechanism is used to relay the results.
Local offloading – The task offload decision model on a vehicle is based on the existing state of its computing unit, that is, the number of pending tasks in the vehicle's task queue and whether the vehicle is within the communication range of any fog unit; if so, the task is offloaded to the nearby fog unit. Note that the current workload at the fog unit is unknown to the vehicle, as in real-world scenarios. Furthermore, the computing units installed at the fog locations have higher computing capacity than the vehicles, encouraging the offloading of delay-sensitive tasks. The execution of a typical smart vehicle is illustrated in Algorithm 1, where tasks are generated at regular intervals and offloaded to a nearby fog unit when a communication channel is available. Here, with input queues (for messages, tasks, and results) and the vehicle data transmission range, in line 2, tasks are generated and placed on the task queue. Later, in lines 5-12, a task is offloaded to the local RSU, one in the vehicle's range, or else executed locally, followed by its addition to the executing task queue. In lines 14-18, upon reception of the task completion message, the completed task is removed from the executing task queue and queued into the output queue.
Fog offloading – Each fog unit is a multi-core device with an
input queue, output queue, workload manager, federation
manager, decision manager, and communication module, as
illustrated in Fig. 2. The workload manager (WM) keeps
track of the tasks received, processed and returned after
execution. It receives a task, places it in the input queue
Algorithm 1 Workflow for smart vehicle
Input
M: message queue; Q: task queue; O: output queue;
Γ: vehicle data transmission range
Output status message
1: while true do
2:   if (t ← GenerateTask()) ≠ φ then
3:     Q.Enqueue(t)
4:   end if
5:   if (t ← Q.Dequeue()) ≠ φ then            ▷ get task
6:     if r ∈ R within Γ and U{0,1} then
7:       Send(t, r, REQUEST)                  ▷ send task to local RSU
8:     else
9:       OnBoardCompute(t)
10:    end if
11:    P.Enqueue(t)                           ▷ add to executing task queue
12:  end if
13:  m ← M.Dequeue()                          ▷ get message from queue
14:  if m.type IS COMPLETED then              ▷ task completed by RSU
15:    t ← m.task
16:    P.Remove(t)                            ▷ remove completed task
17:    O.Enqueue(t)                           ▷ add to output queue
18:  end if
19: end while
and dispatches its results from the output queue to the source vehicle. The federation manager (FM) is responsible for sharing current workload information with other fog units and maintaining their information. The FM also periodically gathers all relevant information to update its local registers. The current status of all fog units is used by the decision manager (DM) to select the outsourcing node. The selection criterion varies based on the algorithm implemented at the DM. We propose a pricing-based workload distributor algorithm to evenly balance the workload among all fog units. Last, the communication module is responsible for maintaining connections with other fog units and with vehicles in communication range. The proposed fog-based federation approach is illustrated in Algorithm 2. Here, with a message queue, task queue, and execution task list as input, in lines 2-9, a task is offloaded to the federation if a suitable federate is available, otherwise it is executed locally. In lines 11-17, the task request and completion messages are handled; that is, results are returned to the requesting federation resources.
Algorithm 2 Workflow for RSU federate
Input
M: message queue; Q: task queue; P: execution task list
Output status message
1: while true do
2:   if (t ← Q.Dequeue()) ≠ φ then            ▷ get task
3:     if (f ← SelectFederate()) ≠ φ then
4:       Send(t, f, OFFLOAD)                  ▷ send task to federate
5:     else
6:       OnBoardCompute(t)
7:     end if
8:     P.Enqueue(t)                           ▷ add to executing task queue
9:   end if
10:  m ← M.Dequeue()                          ▷ get message from queue
11:  switch m.type do
12:    t ← m.task
13:    case REQUEST                           ▷ compute task received
14:      Q.Enqueue(t)
15:    case COMPLETED                         ▷ task completed by federate
16:      Send(t, t.vehicle, COMPLETED)        ▷ return result to vehicle
17:      P.Remove(t)                          ▷ remove completed task
18: end while
Fig. 2: Internal architecture and connectivity of fog units.
Pricing-based workload distributor (F) – To achieve optimal distribution of workload on fog nodes with minimal delay, tasks are further outsourced to suitable fog nodes to meet the QoS and service deadline. In the proposed price-based utilization model, we assume that fog locations placed at the micro-level and managed by different providers form a fog-federation consortium based on a pricing factor, where the resources with the lowest price are shared. The consortium comprises two participating entities, buyers and sellers: fog locations with a significant workload buy resources from selling fog locations with less workload. Suppose there is a buyer with utility cost u, measured in terms of queuing time, as defined earlier. If a task is c Mbits, then the buyer bids b = c · u. Next, the buyer asks for bids from n potential sellers with utility costs v = v_1, ..., v_n, so the bids made are s = c · v. In this work, pricing is dynamic, varying with the current workload at the fog locations; the set of bids from all auction participants is p = {b} ∪ s. For instance, the higher the workload at a fog location, the higher its price. The pricing factor ρ for a task is normalized within [0, 1], computed as
$$\rho = \frac{p - p^-}{p^+ - p^-} \qquad (9)$$

where (p^-, p^+) = (min(p), max(p)) are the queuing times at the fog locations with the least and the most
workload, respectively. Based on ρ, the buyer's request for auction is rejected with "no deal" if ρ(b) < 0.5, the "reserve price". Otherwise, a unique-bid strategy is followed where the winner is the participant with the lowest pricing factor, i.e., the lowest bidder. Among multiple sellers, a buyer selects the one with the least price. The offload decision model d is given as

$$d(t) = \forall i \in R,\ \begin{cases} 1, & \rho_i \le \rho_x,\ x = \arg\min_{j \in R,\, j \ne i} \rho_j \\ 0, & \text{otherwise} \end{cases} \qquad (10)$$
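The auction above can be sketched as follows, using queuing times directly as utility costs; the function names, the single-winner selection, and the sample numbers are illustrative assumptions.

```python
def pricing_factor(p, all_prices):
    # Eq. (9): normalize a bid p into [0, 1] over all auction bids.
    lo, hi = min(all_prices), max(all_prices)
    if hi == lo:
        return 0.0            # all fog locations equally loaded
    return (p - lo) / (hi - lo)

def select_seller(buyer_cost, seller_costs, reserve=0.5):
    """Return the index of the winning seller, or None for "no deal"."""
    prices = [buyer_cost] + list(seller_costs)
    # Reject the auction if the buyer is not loaded enough (reserve price).
    if pricing_factor(buyer_cost, prices) < reserve:
        return None
    # Winner: the seller with the lowest bid, i.e. the lowest pricing factor.
    return min(range(len(seller_costs)), key=lambda i: seller_costs[i])

# A heavily loaded buyer (cost 10) buys from the cheapest of three sellers.
winner = select_seller(10.0, [2.0, 5.0, 8.0])   # index 0
```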
In general, there are always a few underloaded resource providers due to a limited number of local customers; the fog consortium encourages other providers to use these available resources.
5 EVALUATION
To evaluate the proposed federated task offloading, denoted as F, we compare it to the trivial non-federated variant F0, with no offloading within the federation, and to two traditional offloading algorithms. The measures used are success rate, offload rate, queuing delay, and end-to-end delay.
Fig. 3: Network topology used to benchmark the proposed
work. The arrival rates for the entry points are defined as
(λ1, λ2, λ3).
Simulation setup – For the implementation of the proposed scheme, we use AnyLogic 8 PLE 8.5.1¹ as the agent-based simulation platform with support for traffic simulation. We use a custom map inspired by New York City blocks with vehicular movements lasting up to an hour. The simulation results are averaged over five random runs. Other simulation parameters are listed in Table 2.
Network topology – used to benchmark the proposed work is illustrated in Fig. 3. All roads are bi-directional, with nine multi-core fog units deployed and connected via wired links. The units communicate with vehicles in range via a wireless link. The arrival rate, denoted λ, is defined as the number of vehicles entering the simulation per entry point per hour. Entering vehicles take random paths until exiting the simulation through any exit point. To benchmark the different algorithms, we define four workload scenarios based on the arrival rate, as listed in Table 2. This simulates imbalanced workload situations, for instance, standalone RSUs failing to handle incoming requests. Such imbalance is common in realistic scenarios, with regions of dense vehicular traffic easily overloading the nearby RSU. On the other hand, resources at a nearby RSU remain under-utilized in a less dense environment. Thus, RSU-based collaborative resource sharing facilitates handling resource requests in varying vehicular environments.
1. https://www.anylogic.com/
TABLE 2: Simulation configuration and system specification.
Parameter | Value
Simulation area | 3 × 3 km
Total simulation time | 1 hr
Simulation repetitions | 5 times
Vehicle speed | [2.78-16.67] m/s
Vehicle acceleration | 1.6 m/s²
Vehicle deceleration | 2.6 m/s²
Vehicle compute capacity | 50 MHz
Compute request size | [15-50] Mbits
Task generation interval | random
# of fog units | 9 (nine)
Fog-unit range | 100 m
Fog-unit compute capacity | [2.6-3.5] GHz
Fog-unit computing cores | 8 (eight)
Scenario S1 | (λ1=100, λ2=200, λ3=300)
Scenario S2 | (λ1=200, λ2=300, λ3=400)
Scenario S3 | (λ1=300, λ2=400, λ3=500)
Scenario S4 | (λ1=400, λ2=500, λ3=400)
CPU | 3.40 GHz Intel Core i7
RAM | 4.00 GB
OS | Microsoft Windows 10
Simulator | AnyLogic v8.4.0
Scenario – In the simulation, vehicular traffic is varied in terms of the arrival rate. Every vehicle entering the simulation is equipped with an onboard computing unit and storage, and generates tasks during its lifetime. A task offload decision is made based on the pending tasks and/or direct connectivity with a micro-fog unit. As mentioned earlier, there are eight entry points where vehicles enter the simulation. We categorize these entry points into three groups with different arrival rates; that is, for evaluation, we define four scenarios with varying combinations of arrival rates, as listed in Table 2. Note that the fog nodes have heterogeneous computing capacities of up to eight cores in total, whereas every vehicle is equipped with four similar cores.
5.1 Result and Discussion
For evaluation, the measures used to benchmark the proposed pricing-based task offloading scheme are queue length, queuing delay, end-to-end delay, offload rate, and workload deviation. We experiment with two variants of the proposed scheme, federated (F) and non-federated (F0). Moreover, we use two classical task offloading algorithms for comparison, the random walk algorithm (RWA) and the neighboring fogs algorithm (NFA), described as follows:
• Random walk algorithm (RWA) – In a random walk, tasks are outsourced to random fog locations to balance the workload [28]. Here, every fog unit uniformly offloads tasks among the fog nodes, the set of RSUs R; mathematically, U{r ∈ R}. There is no particular selection criterion.
• Neighboring fogs algorithm (NFA) – In NFA, only neighboring fog nodes are used to share workload [29]. The source fog node uniformly offloads tasks among the nearest fog nodes R′ ⊂ R, those with the lowest propagation delay; mathematically, U{r ∈ R′}.
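The two baselines can be sketched as uniform selections; the function names, the neighborhood size k, and the delay values are illustrative assumptions rather than details from [28], [29].

```python
import random

def rwa_select(fog_nodes, rng=random):
    # RWA: outsource to any fog node chosen uniformly at random, U{r in R}.
    return rng.choice(fog_nodes)

def nfa_select(fog_nodes, prop_delay, k=3, rng=random):
    # NFA: uniform choice among the k nodes with the lowest
    # propagation delay from the source, U{r in R'}.
    nearest = sorted(fog_nodes, key=lambda r: prop_delay[r])[:k]
    return rng.choice(nearest)
```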
Queue length – is defined as the number of tasks waiting in the task queue at any time instant. It is often used in decision models for allocating resources to provide service. With equal-priority tasks, it represents the waiting time for vehicles offloading tasks. Moreover, it represents the workload at each RSU deployed under the federated environment; for instance, the RSU with the least queuing time is considered a suitable candidate for task offloading. Fig. 4 shows the average queue length for the proposed (F and F0), RWA, and NFA algorithms. We observe that queue lengths increase with increasing arrival rates. However, F shows an advantage in terms of reduced queue lengths compared to F0 and to algorithms using a simple selection model such as RWA and NFA. The highest queue length is observed in the F0 case due to the non-sharing of resources in the micro-federation.
Fig. 4: RSU queue length with varying arrival rates per hour
per entry point.
Fig. 5: RSU end-to-end delay with varying arrival rates per hour per entry point.
End-to-end delay – is defined as the time taken to transmit a task from its source to the RSU and then receive the returned result at the source; the delay thus includes queuing and computation times. Fig. 5 shows the end-to-end delay comparison among the proposed (F and F0), RWA, and NFA. The highest delay is observed in F0 due to its non-federated nature, primarily driven by the increasing queue length. The proposed federated scheme F shows a clear gain compared to F0, RWA, and NFA.
Total energy consumption – The total energy is defined as the sum of the energy consumed during task execution and that consumed maintaining the task queue. Fig. 6 shows the total energy consumption comparison for all scenarios. In S4, the maximum number of tasks is generated; therefore, energy consumption shows an increasing trend in the non-federated
S1S2S3S4
0
1
2
·104
Scenarios
Energy consumption (µJ)
F0RWA
NFA F
Fig. 6: RSU total energy consumption with varying arrival
rates per hour per entry point.
model; however, the proposed Fconsumes the least amount
of energy due to efficient workload distribution among the
federation members. In S4, the proposed Fshows 56% en-
ergy reduction compared to the non-federate approach. Sim-
ilarly, the Fshows a significant energy reduction compared
with NFA and RWA. Further, at the maximum workload,
RWA performs better compared to NFA; whereas, at S3
scenario, NFA shows less consumption. Thus, both NFA and
RWA behavior remain close to each other in terms of energy
consumption. Moreover, in all scenarios, the proposed F
clearly outclasses all the other techniques. Overall 72% en-
ergy reduction is achieved through the proposed technique
in comparison with the traditional non-federated model.
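The two-component energy model above can be written as a short sketch (the parameter names and the per-task/per-slot granularity are our assumptions; the paper states only that total energy is the sum of execution and queue-maintenance energy):

```python
def total_energy(e_exec_per_task, tasks_executed,
                 e_queue_per_task_slot, queue_samples):
    """Total energy (e.g. in µJ) consumed by an RSU: energy spent
    executing tasks plus energy spent maintaining the task queue.

    `queue_samples` holds the queue length observed at each time
    slot; every waiting task is charged e_queue_per_task_slot."""
    e_exec = e_exec_per_task * tasks_executed
    e_queue = e_queue_per_task_slot * sum(queue_samples)
    return e_exec + e_queue
```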
Fig. 7: RSU offload rate with varying arrival rates per hour per entry point.
Offload rate – is the measure of the number of tasks outsourced from one federate (the local RSU) to another federate in the federation. Fig. 7 shows that the proposed federated scheme F offloads approximately 25% of the incoming tasks from vehicles in the RSU's range, while the remaining tasks are computed locally on the RSU. On the other hand, the offload rate for RWA and NFA is approximately 50%, almost twice that of F. This reduction in the offload rate in the case of F is due to the pricing-based policy adopted, where every RSU computes a local index to decide its role as a buyer or seller. The nodes offload tasks across the federation only if there is a seller among the set of neighboring federates; otherwise, the task remains at the federate for local computation.
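The buyer/seller decision can be sketched as below; the load-ratio index and the two threshold values are our assumptions, since the paper specifies only that each RSU computes a local index and offloads solely when a neighboring seller exists:

```python
def local_index(queue_length, capacity):
    """A node's local price index, taken here as its load ratio
    (an assumed formula; the paper does not give the exact index)."""
    return queue_length / capacity

def choose_seller(my_index, neighbor_indices,
                  buy_threshold=0.7, sell_threshold=0.4):
    """Return the federate to offload to, or None to compute locally.

    A node offloads only when it is a buyer (index above
    buy_threshold) AND some neighbor is a seller (index below
    sell_threshold); otherwise the task stays local."""
    if my_index < buy_threshold:
        return None  # not overloaded: no need to buy resources
    sellers = {n: i for n, i in neighbor_indices.items() if i < sell_threshold}
    if not sellers:
        return None  # no underloaded federate: keep the task local
    return min(sellers, key=sellers.get)  # least-loaded seller wins
```

This is how F avoids offloading onto already overloaded fog units, unlike fixed-threshold schemes.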
Workload balancing – Recall that the federated model is implemented to balance workload across under-utilized computing resources, that is, to provide the agreed QoS to end-users. To assess workload across the federation, we compute the workload deviation for the proposed schemes (F and F′), RWA, and NFA, as illustrated by the error bars in Fig. 8. In the case of F, the deviation becomes smaller with increasing vehicle density, representing a balanced workload across the federation. In contrast, the non-federated scheme F′ demonstrates the maximum workload imbalance. Moreover, RWA performs better than NFA, which uses a locality-based federate selection model; that is, only resources at neighboring federates are used for workload balancing, still relatively better than F′. Similarly, Table 3 shows the workload deviation for each of the evaluated techniques with varying arrival rates. The proposed technique F shows a significant reduction in workload imbalance compared to all other techniques. Thus, fog units with evenly distributed workloads improve the QoS.

Fig. 8: Deviation of offloaded tasks per RSU. Note that the mean deviation stands at 11%.
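The deviation figures of the kind reported in Table 3 can be computed with a short sketch; whether the population or sample deviation is used, and the normalization behind σ(%) (deviation relative to the federation's total workload), are our assumptions:

```python
import statistics

def workload_deviation(tasks_per_rsu):
    """Mean µ and deviation σ of offloaded tasks across RSUs, plus
    σ expressed as a percentage of the federation's total workload
    (an assumed normalization)."""
    mu = statistics.mean(tasks_per_rsu)
    sigma = statistics.pstdev(tasks_per_rsu)  # population deviation (assumed)
    sigma_pct = 100.0 * sigma / sum(tasks_per_rsu)
    return mu, sigma, sigma_pct
```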
5.2 Discussion
The concept of fog computing is adopted to support delay-sensitive applications. In most traditional systems, fog units work independently without any collaboration among them. Generally, in the case of a heavy workload, tasks are transferred to the backend cloud data center. This use of the cloud architecture incurs additional communication and scheduling delays, making it the last possible solution for handling such workloads. The concept of fog collaboration is still in its initial stages, and many existing works cover fog collaboration only at edge locations. In this paper, we proposed a micro-level fog deployment model to handle heavy workloads through resource sharing. The proposed model allows the outsourcing of tasks in a balanced manner. At any instant of time, the fog units are classified as buyers or sellers: the buyers are the ones facing a heavy workload, whereas the sellers are underloaded. Once the categorization is done, outsourcing is only allowed if a fog unit is relatively overloaded; otherwise, it computes the task locally. This contrasts with traditional federated environments, where the federates outsource tasks as soon as they cross a fixed threshold; such techniques can lead to a significant number of tasks being outsourced even to already overloaded fog units. The proposed work is evaluated against traditional random walk and neighboring fog selection algorithms. The results show a significant gain in terms of reduced delay, queue length, and workload balance throughout the entire fog federation.
6 CONCLUSIONS
In this paper, we propose a novel micro-level fog federation that enables resource sharing among fog units. The framework allows the sharing of workload-related information for the fog unit selection model, with the objective of meeting task deadlines. The model adopts a pricing-based federate selection scheme, that is, it allows the sharing of federated resources from multiple possible sellers only if the source is categorized as a buyer. The results demonstrate that the proposed queuing-time-based model distributes tasks in a balanced manner across the federation compared to traditional techniques.
AVAILABILITY
The proposed framework is an open-source project available on GitHub at https://github.com/AsadWaqarMalik/MicroFogFederation. We expect researchers to add new ideas and models to take this framework in interesting directions. The authors also have an interest in developing new algorithms for task offloading for federated vehicular fog resource provisioning.
ACKNOWLEDGEMENT
This work is supported by the Faculty Program, University
Malaya under Grant GPF019D-2019.
REFERENCES
[1] S. Vashi, J. Ram, J. Modi, S. Verma, and C. Prakash, “Internet of
things (iot): A vision, architectural elements, and security issues,”
in 2017 International Conference on I-SMAC (IoT in Social, Mobile,
Analytics and Cloud)(I-SMAC). IEEE, 2017, pp. 492–496.
[2] M. Burhan, R. Rehman, B. Khan, and B.-S. Kim, “Iot elements, lay-
ered architectures and security issues: A comprehensive survey,”
Sensors, vol. 18, no. 9, p. 2796, 2018.
[3] M. Aazam, S. Zeadally, and K. A. Harras, “Fog computing archi-
tecture, evaluation, and future research directions,” IEEE Commun.
Mag., vol. 56, no. 5, pp. 46–52, 2018.
[4] C. Huang, R. Lu, and K.-K. R. Choo, “Vehicular fog computing:
architecture, use case, and security and forensic challenges,” IEEE
Commun. Mag., vol. 55, no. 11, pp. 105–111, 2017.
[5] W.-H. Kuo, Y.-S. Tung, and S.-H. Fang, “A node management
scheme for r2v connections in rsu-supported vehicular adhoc net-
works,” in 2013 International Conference on Computing, Networking
and Communications (ICNC). IEEE, 2013, pp. 768–772.
[6] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin, and S. Chen, “Vehicular fog
computing: A viewpoint of vehicles as the infrastructures,” IEEE
Trans. Veh. Technol., vol. 65, no. 6, pp. 3860–3873, 2016.
[7] V. G. Menon and P. J. Prathap, “Moving from vehicular cloud
computing to vehicular fog computing: Issues and challenges,”
International Journal on Computer Science and Engineering, vol. 9,
no. 2, pp. 14–18, 2017.
[8] V. Veillon, C. Denninnart, and M. A. Salehi, “F-fdn: Federation
of fog computing systems for low latency video streaming,” in
2019 IEEE 3rd International Conference on Fog and Edge Computing
(ICFEC). IEEE, 2019, pp. 1–9.
[9] M. Ryden, K. Oh, A. Chandra, and J. Weissman, “Nebula: Dis-
tributed edge cloud for data intensive computing,” in 2014 IEEE
International Conference on Cloud Engineering. IEEE, 2014, pp. 57–
66.
[10] F. Yang, Z. Zhu, S. Zhao, Y. Yang, and X. Luo, “Optimal task of-
floading in fog-enabled networks via index policies,” in 2018 IEEE
Global Conference on Signal and Information Processing (GlobalSIP).
IEEE, 2018, pp. 688–692.
TABLE 3: Comparison showing workload distribution variation among different techniques (µ and σ in tasks).

Scenario | Non-federated (F′): µ / σ / σ(%) | RWA: µ / σ / σ(%) | NFA: µ / σ / σ(%) | Federated (F): µ / σ / σ(%)
S1 | 15652 / 7625 / 5.413 | 16436 / 6280 / 4.245 | 17344 / 5786 / 3.706 | 16632 / 7497 / 5.009
S2 | 25580 / 12985 / 5.640 | 26388 / 5340 / 2.249 | 25030 / 6473 / 2.874 | 26323 / 3795 / 1.602
S3 | 35309 / 18189 / 5.724 | 35507 / 8136 / 2.546 | 34383 / 10345 / 3.343 | 35203 / 175 / 0.055
S4 | 38367 / 18867 / 5.464 | 38536 / 8685 / 2.504 | 39392 / 12033 / 3.394 | 38327 / 222 / 0.064
[11] Y. Fan, L. Zhai, and H. Wang, “Cost-efficient dependent task
offloading for multiusers,” IEEE Access, vol. 7, pp. 115843–115856,
2019.
[12] Y.-D. Lin, J.-C. Hu, B. Kar, and L.-H. Yen, “Cost minimization with
offloading to vehicles in two-tier federated edge and vehicular-
fog systems,” in 2019 IEEE 90th Vehicular Technology Conference
(VTC2019-Fall). IEEE, 2019, pp. 1–6.
[13] N. Mostafa, “Cooperative fog communications using a multi-level
load balancing,” in 2019 Fourth International Conference on Fog and
Mobile Edge Computing (FMEC). IEEE, 2019, pp. 45–51.
[14] L. Xiao, W. Zhuang, S. Zhou, and C. Chen, “Learning while of-
floading: Task offloading in vehicular edge computing network,”
in Learning-based VANET Communication and Security Techniques.
Springer, 2019, pp. 49–77.
[15] Y. Sun, X. Guo, S. Zhou, Z. Jiang, X. Liu, and Z. Niu, “Learning-
based task offloading for vehicular cloud computing systems,” in
2018 IEEE International Conference on Communications (ICC). IEEE,
2018, pp. 1–7.
[16] M. Dorigo and T. Stützle, “Ant colony optimization: overview and
recent advances,” in Handbook of Metaheuristics. Springer, 2019,
pp. 311–351.
[17] C. Wu, R. Buyya, and K. Ramamohanarao, “Cloud pricing models:
Taxonomy, survey, and interdisciplinary challenges,” ACM Com-
put. Surv., vol. 52, no. 6, p. 108, 2019.
[18] D. T. Nguyen, L. B. Le, and V. K. Bhargava, “A market-
based framework for multi-resource allocation in fog computing,”
IEEE/ACM Trans. Networking, 2019.
[19] A. Hammoud, A. Mourad, H. Otrok, O. A. Wahab, and H. Har-
manani, “Cloud federation formation using genetic and evolution-
ary game theoretical models,” Future Gener. Comput. Syst., vol. 104,
pp. 92–104, 2020.
[20] M. M. Moghaddam, M. H. Manshaei, W. Saad, and M. Goudarzi,
“On data center demand response: A cloud federation approach,”
IEEE Access, vol. 7, pp. 101829–101843, 2019.
[21] B. Ottenwälder, B. Koldehofe, K. Rothermel, and U. Ramachan-
dran, “Migcep: operator migration for mobility driven distributed
complex event processing,” in Proceedings of the 7th ACM interna-
tional conference on Distributed event-based systems. ACM, 2013, pp.
183–194.
[22] S. Saraswat, H. P. Gupta, T. Dutta, and S. K. Das, “Energy efficient
data forwarding scheme in fog based ubiquitous system with
deadline constraints,” IEEE Trans. Netw. Serv. Manage., 2019.
[23] L. Mashayekhy, M. M. Nejad, and D. Grosu, “A trust-aware
mechanism for cloud federation formation,” IEEE Trans. Cloud
Comput., 2019.
[24] M. Al-khafajiy, T. Baker, H. Al-Libawy, Z. Maamar, M. Aloqaily,
and Y. Jararweh, “Improving fog computing performance via fog-
2-fog collaboration,” Future Gener. Comput. Syst., vol. 100, pp. 266–
280, 2019.
[25] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez,
“Computation resource allocation and task assignment optimiza-
tion in vehicular fog computing: A contract-matching approach,”
IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3113–3125, 2019.
[26] Z. Yin, H. Chen, and F. Hu, “An advanced decision model enabling
two-way initiative offloading in edge computing,” Future Gener.
Comput. Syst., vol. 90, pp. 39–48, 2019.
[27] Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile applica-
tion execution: Taming resource-poor mobile devices with cloud
clones,” in 2012 Proceedings IEEE Infocom. IEEE, 2012, pp. 2716–
2720.
[28] Q. Zhu, B. Si, F. Yang, and Y. Ma, “Task offloading decision in
fog computing system,” China Commun., vol. 14, no. 11, pp. 59–68,
2017.
[29] A. Bozorgchenani, D. Tarchi, and G. E. Corazza, “An energy
and delay-efficient partial offloading technique for fog computing
architectures,” in GLOBECOM 2017-2017 IEEE Global Communica-
tions Conference. IEEE, 2017, pp. 1–6.
Zeseya Sharmin received the BSc degree in Computer Science and Engineering from Green University of Bangladesh, Bangladesh, in 2018. She is currently pursuing her Master's in Applied Computing at University Malaya, Malaysia, where she also works as a Research Assistant. Her research interests include cloud computing, the internet of things, and mobile computing.
Asad W. Malik is an Assistant Professor at
NUST-SEECS, Pakistan. Besides, he is working
as Senior Lecturer at the Department of Infor-
mation Systems, Faculty of Computer Science
& Information Technology, University Malaya,
Malaysia. He received his Ph.D. in parallel and distributed simulation/systems from NUST, Pakistan, in 2012. His primary areas of interest include distributed simulation, cloud/fog
computing, and internet of things.
Anis U. Rahman received Master’s degree in
Parallel and Distributed Systems from Joseph
Fourier University, France, and Ph.D. in Com-
puter Science from Grenoble University, France,
in 2013. He is currently an Assistant Professor at NUST-SEECS, Pakistan. Besides, he is working
as Research Fellow at the Faculty of Computer
Science & Information Technology, University
Malaya, Malaysia. His main research interests
include internet of things and machine learning.
Rafidah MD Noor received BIT from University
Utara Malaysia, in 1998, M.Sc. in Computer Sci-
ence from Universiti Teknologi Malaysia, in 2000,
and Ph.D. in Computing from Lancaster Univer-
sity, UK, in 2010. She is currently an Associate
Professor with the Dept. of Computer System
& Technology, Faculty of Computer Science &
Information Technology, University Malaya, and
the Director of the Centre of Mobile Cloud Com-
puting Research (C4MCCR), which focuses on
high impact research related to transportation
systems including vehicular networks, wireless networks, network mo-
bility, quality of service, and internet of things.