Received: 26 December 2020 Revised: 8 April 2021 Accepted: 18 May 2021
DOI: 10.1002/cpe.6440
RESEARCH ARTICLE
Dynamic load balancing assisted optimized access control
mechanism for Edge-Fog-Cloud network in Internet of
Things environment
Neha Agrawal
Computer Science and Engineering Group,
Indian Institute of Information Technology Sri
City, Chittoor, India
Correspondence
Neha Agrawal, Computer Science and
Engineering Group, Indian Institute of
Information Technology Sri City, Chittoor,
Andhra Pradesh 517646, India.
Email: nehaiiitm345@gmail.com,
neha.a@iiits.in
Abstract
Modern-age networks are expected to be connected, agile, programmable, and load efficient to counter the side-effects of an imbalanced network, such as network congestion, higher transmission cost, and low reliability. The wide range of electronic devices around us has great potential to realize the concept of a connected world. The Internet of Things (IoT) is an effort of the research community to realize this concept, and cloud computing plays an important role in this realization. However, fog and edge computing are more suitable solutions for low-power devices and latency-sensitive applications. As IoT sensors generate a huge volume of data, and the IoT environment comprises multiple applications and traffic conditions, a network-traffic-based dynamic load balancing approach is needed to optimize the overall network performance. This work proposes a layer-based Edge-Fog-Cloud network architecture to distribute the network traffic load in an IoT environment. In addition, a load balancing assisted optimized access control mechanism is discussed to further improve the network load conditions. The proposed mechanism is tested on the Amazon Web Services platform, and the achieved results validate the effectiveness of the proposed approach. The simulation results show an average improved CPU utilization rate of 10.13%, 10.01%, 10.82%, 8.78%, and 11.91% in the five different experiments conducted.
KEYWORDS
access control mechanism, cloud computing, edge computing, fog computing, Internet of Things,
load balancing
1 INTRODUCTION
Future generation computer networks are expected to overcome the drawbacks of traditional networks. The Internet of Things (IoT) is a new generation technology,1 consisting of resource-constrained and battery-powered devices. These devices have the potential to change the architecture and functional behavior of future generation networks. As most IoT devices are resource-constrained in nature, data preprocessing and manipulation are not possible at the device level. Hence, the role of cloud computing becomes very important.2 Cloud is an established technology and serves as a platform for numerous applications, offering support from the software level to the infrastructure level. The numerous benefits of cloud computing, such as resource pooling, on-the-fly resource provisioning, rapid elasticity, and multitenancy, enable hassle-free application deployment.3 A cloud environment consists of a front-end (systems, APIs, and so forth) and a back-end (servers, databases, and so forth), offering full control and support for cloud-based application development.
Concurrency Computat Pract Exper. 2021;e6440. wileyonlinelibrary.com/journal/cpe © 2021 John Wiley & Sons, Ltd. 1of15
https://doi.org/10.1002/cpe.6440
Although the integration of cloud with IoT allows cost-effective off-premise application development/deployment with the additional benefits of scalability and flexibility, performance issues remain a genuine concern.4 Cloud-based IoT applications can avail all possible network benefits in the form of services, but latency remains an open issue.5 Most IoT applications and devices need low-latency services, but cloud environments do not guarantee them. Latency varies with the distance between the data centers and the IoT users. Fog computing solves this latency issue by offering cloud services at a lower level.6,7 In essence, fog computing is an extension of cloud computing with a decentralized architecture, in contrast to the centralized architecture of the cloud. It serves as a regulator between cloud servers and users. It controls the flow to the cloud servers by processing some of the flows locally, based on the network policy. This means the fog layer works as an intelligent gateway, offloading the cloud and enabling local data storage, processing, and analysis. In short, fog does not replace cloud but complements it by processing the essentials at the local level.
To further improve the latency of IoT applications, fog computing is assisted with edge computing.8,9 Both fog and edge computing architectures offer similar benefits, such as improved network latency and service response time. Reducing the amount of data sent to the cloud results in improved latency, which further helps mission-critical applications. Both fog and edge bring the network intelligence closer to the IoT sensor layer, but the basic difference lies in the level at which this intelligence is implemented.10 Fog computing implements this intelligence at the local area network (LAN) level by processing data at a fog node/IoT gateway, whereas edge computing implements it at the IoT device level by processing the data at edge programmable industrial controllers, programmable logic controllers, programmable automation controllers, and so forth. Fog computing has a complex architecture in comparison to edge computing. Its architecture involves multiple links in the communication chain that transfer data from the physical IoT devices to the digital cloud environment.11 As each of these links can be a potential point of failure in fog computing, edge computing helps simplify the fog architecture by offering some device-level services, which saves time and money by streamlining the IoT communication, decreasing the number of potential failure points, and so forth.12 As simplified architectures are the key to the success of industrial IoT applications, the role of edge computing becomes very important.13
As the number of IoT devices is growing at a rapid pace and the generated sensor data is bulky in nature, it is hard to process the data at the device or LAN level due to the resource-constrained nature of IoT devices; such data can overwhelm a traditional communication network. Hence, a cloud-supported multilayer network architecture is needed, with additional fog and edge layers, to counter the latency and response time issues. This work proposes a sensor-data-based dynamic load balancing assisted optimized access control mechanism for the Edge-Fog-Cloud network in an IoT environment. The respective motivation and major contributions are detailed in the following subsections.
1.1 Motivation
The majority of existing literary works focus on individual cloud, fog, or edge computing based solutions for IoT networks. Although some works14,15 discuss hybrid (Edge-Fog-Cloud based) solutions, none of them details the need for dynamic network flow adjustment to implement effective resource access control in an Edge-Fog-Cloud based network in an IoT environment. Thus, this work offers a load balancing assisted access control mechanism (LB-ACM) for the Edge-Fog-Cloud network in an IoT environment.
1.2 Major contributions of the paper
The major contributions of this work are as follows.
1. This article details the challenges of cloud, fog, and edge networks. In addition, it explores how these networks can merge to complement each other and together form a solid foundation for future generation networks.
2. The work compares the existing literature-based solutions for Edge-Fog-Cloud networks. The comparison also highlights the contributions of the solutions in the IoT domain.
3. The work proposes an architecture for a dynamic LB-ACM for the Edge-Fog-Cloud network, exploiting the benefits of a hybrid-technology-based design.
4. Subsequently, the work simulates the proposed approach for optimized load balancing in Edge-Fog-Cloud based IoT environment. The simulation
results help to validate the proposed approach.
5. Finally, the achieved results are compared based on different network performance metrics and the respective analysis is presented.
In the rest of the paper, Section 2 details the related Edge-Fog-Cloud based load balancing and access control solutions for IoT. The proposed architecture and working methodology are discussed in Section 3. Section 4 describes the implementation details of the proposed LB-ACM; the respective results and analysis are also provided in this section. Finally, Section 5 offers the concluding remarks with a highlight of the possible future work.
2 RELATED WORK
Multiple load balancing and access control works based on Edge-Fog-Cloud networks are explored in this section, and their relevance to the IoT environment is discussed. An efficient resource management and workload allocation approach is proposed in Reference 16 using a Fog-Cloud computing framework; in it, learning classifier methods are used to diminish the workload processing delay, communication delay, and power consumption at the network edge. Similarly, a pseudo-dynamic testing approach is discussed in Reference 17 to realize a realistic Edge-Fog-Cloud ecosystem. The approach explored in Reference 18 proposes a capillary computing architecture for a dynamic IoT environment, detailing microservices orchestration from the edge devices to the fog and cloud providers. In Reference 19, an Edge-Fog-Cloud environment is discussed that details a distributed cloud for IoT computations. In addition, the authors of Reference 20 discuss an offloading approach detailing simultaneous localization and mapping for indoor mobile robots with Edge-Fog-Cloud computing. To add a different orientation to the fog-assisted cloud environment, a trust management approach is highlighted in Reference 21, enabling a block-chain based fog computing platform.
In addition to the above works, a machine learning based secure offloading approach is proposed in Reference 22 for the Fog-Cloud of things, enabling smart city applications. A semantics-based IoT data description and discovery method is discussed in Reference 23 for the IoT-Edge-Fog-Cloud infrastructure. The authors of Reference 24 discuss a federated capability-based access control mechanism for IoT networks. Similarly, the authors of Reference 25 discuss fog/edge computing based IoT architecture, applications, and open research issues. The continuous processing of health data is discussed in Reference 26 using micro/nano service composition in an Edge-Fog-Cloud computing environment. In addition, an Edge-Fog-Cloud platform is detailed in Reference 27 for the anticipatory learning process, specifically designed for the internet of mobile things. In Reference 28, a container fog node based cloud orchestration mechanism is discussed for IoT. Further, a decentralized Edge-to-Cloud load-balancing mechanism is proposed in Reference 29 for service placement in IoT. Edge-AI based smart farming is detailed in Reference 30, focusing on CNNs at the edge and fog computing with LoRa in an IoT environment. Finally, an approach named DualFog-IoT is proposed in Reference 31, focused on solving the block-chain integration problem using an additional fog layer in IoT.
The comparative analysis of the above-mentioned approaches is provided in Table 1. The comparison is performed w.r.t. the objective, the approach referred, the outcome of the work, and the derived inference. It may be observed from Table 1 that access control for the Edge-Fog-Cloud network in an IoT environment is not well addressed in the literature. Thus, an LB-ACM for such an environment is proposed in this work.
3 THE PROPOSED ARCHITECTURE AND SOLUTION
In this section, the proposed architecture of the Edge-Fog-Cloud network, its working methodology, and the process workflow are described.
The respective details are provided in the following subsections.
3.1 Architecture of the proposed Edge-Fog-Cloud network
The proposed architecture of the Edge-Fog-Cloud network is shown in Figure 1. It consists of multiple layers, namely, the IoT sensor layer, edge layer, fog layer, and cloud environment layer. The first layer involves different IoT sensors that generate the sensed data. The edge layer preprocesses the data (received from the IoT device layer) and offers support for the subsequent fog layer. The fog layer receives the whole preprocessed data traffic and balances the load. It stores the traffic statistics in a database and updates the topology periodically with the help of a topology manager. Finally, the network traffic passes through the fog layer and arrives at the cloud layer, which involves multiple constituent components. The cloud layer includes various cloud network components, such as a firewall, access control list (ACL), servers, and a network monitoring framework.32
The requests originating from the IoT layer cross the edge and fog layers and arrive at the cloud firewall. The firewall filters out possible malicious requests and forwards only the authentic requests to the ACL. The ACL further filters the requests based on the access control rules defined by the network administrator. The traffic passing the ACL is distributed among the remote servers (cloud VMs) according to application specificity. The servers are scaled to satisfy the IoT traffic. This whole process remains under the constant supervision of the network monitoring agent. A monitoring agent is a cloud component that takes care of the network statistics. It alerts and/or sends feedback to the network administrator based on the specific thresholds and alarms programmed inside it, and the network administrator updates the ACL accordingly. Specifically, the monitoring agent is responsible for collecting the application and/or system metrics and helps observe the functional behavior of different virtual machine instances. The common metrics monitored by the cloud monitoring agent are the disk, CPU, network, and process metrics. Such monitoring agents can be configured to monitor the functional aspects of third-party applications using the cloud services.
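The threshold-and-alarm behavior of the monitoring agent can be sketched as follows; the metric names and threshold values are illustrative assumptions, not the CloudWatch configuration used in this work:

```python
# Illustrative sketch of the monitoring agent's alarm check: compare
# each collected metric against a programmed threshold and emit alerts
# for the administrator. Metric names and thresholds are examples only.
THRESHOLDS = {"cpu_percent": 80.0, "disk_percent": 90.0, "net_mbps": 900.0}

def check_alarms(metrics, thresholds=THRESHOLDS):
    """Return a list of alert strings, one for every breached threshold."""
    return [f"ALERT: {name}={value} exceeds {thresholds[name]}"
            for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

alerts = check_alarms({"cpu_percent": 91.5, "disk_percent": 40.0})
print(alerts)  # ['ALERT: cpu_percent=91.5 exceeds 80.0']
```

In a real deployment these alerts would feed the administrator's ACL updates described above.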
TABLE 1 Comparative analysis of the existing approaches for load balancing and access control using Edge-Fog-Cloud computing

1. Abbasi et al.16. Objective: resource management and workload allocation in IoT using Fog-Cloud. Approach: learning classifier. Outcome: processing delay is reduced by 42%. Inference: workload processing and communication delays are diminished; power consumption is also improved.
2. Ficco et al.17. Objective: resource management in Edge-Fog-Cloud systems. Approach: pseudo-dynamic testing. Outcome: latency is minimized and throughput is maximized. Inference: the approach is cost-effective, scalable, reliable, and stable.
3. Taherizadeh et al.18. Objective: novel architecture development for smart applications supporting varying IoT workloads in an Edge-Fog-Cloud environment. Approach: container-based orchestration. Outcome: four times faster service response time. Inference: successful offloading from an edge node to a fog or cloud resource; suitable for handling highly dynamic IoT environments.
4. Mohan and Kangasharju19. Objective: distributed task processing in an Edge-Fog-Cloud environment. Approach: least-processing-cost-first method for task assignment. Outcome: deployment time without sacrificing the associated cost. Inference: depiction of network and processing costs; comparison of Edge, Fog, and Edge-Fog approaches based on interdevice connection densities.
5. Alli and Alam22. Objective: secure offloading in the Fog-Cloud of things. Approach: machine learning strategies, neuro-fuzzy model, and PSO. Outcome: minimized latency. Inference: fog node availability is computed using available processing capacity and remaining node energy; cloud selection is based on reinforcement learning.
6. Zeng et al.23. Objective: IoT data management solution in the IoT-Edge-Fog-Cloud. Approach: semantic model for better specification of IoT data streams (time-series data). Outcome: enhanced IoT data stream specifications to enable semantics-based data retrieval. Inference: focus on the issues of data storage, specification, and discovery; data discovery protocols developed for the IoT infrastructure.
7. Xu et al.24. Objective: enable effective access control processes in large-scale IoT systems. Approach: identity-based capability token management strategy. Outcome: scalable, lightweight, and fine-grained access control solution for IoT systems. Inference: advantages such as load balance, decentralized authorization, fine granularity, and a lightweight approach are achieved.
8. Omoniwa et al.25. Objective: implementation of an architecture to continuously handle healthcare big data. Approach: online analysis and anomaly identification. Outcome: cost-efficient storage consumption and reduced data transportation. Inference: microservices and nanoservices are used to create Edge-Fog-Cloud processing structures; spirometry studies, ECGs, and tomography images are used.
9. Cao et al.27. Objective: architecture for anticipatory-learning-based data analytics in the internet of mobile things (IoMT). Approach: data analytics using machine learning. Outcome: data privacy, low-cost data transfer to data centers, and fast feedback. Inference: manages the huge data of IoMT devices and analyzes it at the edge, fog, and cloud levels.
10. Kim et al.28. Objective: high-performance and simple-to-manage architecture. Approach: container-fog-node-based cloud orchestration. Outcome: increased data throughput, improved performance, and quick attack detection. Inference: network management prototype with a container technology lighter than virtualization.
11. Nezami et al.29. Objective: decentralized multiagent system for collective learning. Approach: mathematical modeling. Outcome: improved scalability in various circumstances. Inference: modeling based on the costs of deadline violation, service deployment, and unhosted services.
FIGURE 1 Proposed conceptual architecture of Edge-Fog-Cloud network. [Figure: Type-1 and Type-2 IoT sensors transmit their data (1) and submit it to the edge layer (2), which preprocesses the data (3) and submits it to the fog layer (4). The fog layer, comprising a load balancer, database, and topology manager, load-balances the data and serves requests at local servers (5). Remaining requests are forwarded to the cloud firewall (6), the filtered requests to the ACL (7), and the ACL-filtered requests to the remote servers (8), where they are verified at the server level (9). The servers are scaled in the form of VMs and serve the requests at the remote level (10), while the monitoring agent monitors network performance and provides consistent feedback to the load balancer (11).]
3.2 The proposed solution
The proposed solution discusses some assumptions, the LB-ACM, and the respective process workflow. The conceived assumptions are listed below.
3.2.1 Assumptions
1. The traffic requests follow the Poisson distribution. As it is hard to generate real heavy cloud traffic and the Poisson distribution mimics actual cloud traffic, random artificial Poisson-distributed traffic is generated for simulation purposes.33,34
2. In addition, the interarrival and service times for the legitimate traffic follow the exponential distribution.33,34
3. All VMs are identical in nature (i.e., they have the same computing capacity) and serve as the servers for different services.33,34
4. All internal users are authentic. The fog layer is implemented at the local network level.
5. The cloud environment has a sufficient pool of resources and can autoscale the resources based on need.
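Under assumptions 1 and 2, the simulated traffic can be sketched as below; the arrival and service rates are illustrative placeholders, not values from the experiments:

```python
import random

# Illustrative rates (requests/second); these are assumptions for the
# sketch, not parameters reported in this work.
ARRIVAL_RATE = 50.0   # lambda for Poisson arrivals
SERVICE_RATE = 60.0   # mu for exponential service times

def generate_traffic(duration_s, rate=ARRIVAL_RATE, service=SERVICE_RATE, seed=0):
    """Generate (arrival_time, service_time) pairs.

    Poisson arrivals are produced by drawing exponential interarrival
    gaps with mean 1/rate (assumptions 1 and 2); service times are
    exponential with mean 1/service."""
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate)          # exponential interarrival gap
        if t > duration_s:
            break
        requests.append((t, rng.expovariate(service)))
    return requests

reqs = generate_traffic(duration_s=10.0)
print(len(reqs))  # on the order of ARRIVAL_RATE * duration_s arrivals
```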
3.2.2 Load balancing assisted access control mechanism
The proposed LB-ACM performs two main tasks, namely, (1) load balancing the IoT traffic and (2) access control maintenance. A multilayer network architecture is used for this purpose, as discussed in subsection 3.1. The IoT traffic is refined at multiple levels. Initially, the raw traffic is preprocessed at the edge layer, which filters out the outliers with the help of multiple edge nodes. Subsequently, the fog layer collects the preprocessed traffic and assigns it to the respective local servers based on the current state of the servers. If the local servers are overloaded and unable to service the requests, the load balancer forwards the requests to the cloud gateway, that is, the cloud firewall. Finally, the cloud firewall and ACL help serve the network requests and improve the network load condition by filtering out nonauthentic traffic sources. Thus, the cloud traffic is refined, and only authentic traffic reaches the cloud servers for actual processing.
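A minimal sketch of the fog-layer dispatch decision described above: assign the request to the least-loaded local server, and escalate to the cloud firewall when every local server is overloaded. The utilization threshold and server names are assumptions for illustration:

```python
# Hypothetical sketch of the fog-layer decision in LB-ACM: serve the
# request on the least-loaded local server, or forward it to the cloud
# gateway (firewall) when all local servers are overloaded.
OVERLOAD_THRESHOLD = 0.8  # assumed utilization cut-off, not from the paper

def dispatch(request, local_servers, cloud_queue):
    """local_servers: dict mapping server name -> utilization in [0, 1]."""
    name, load = min(local_servers.items(), key=lambda kv: kv[1])
    if load < OVERLOAD_THRESHOLD:
        return ("local", name)          # serviced at the fog layer
    cloud_queue.append(request)         # escalate to the cloud firewall
    return ("cloud", None)

servers = {"fog-1": 0.35, "fog-2": 0.90}
queue = []
print(dispatch("req-1", servers, queue))  # ('local', 'fog-1')
```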
The load-balanced traffic from the fog layer reaches the cloud layer and may overwhelm the cloud resources if mismanaged. To prevent the cloud resources from getting overwhelmed, a three-level access control is used. The firewall works at the first level and is used to reduce the false alarm rate. It receives the IoT traffic in its aggregated form and tries to reduce the load on the remote servers by filtering the traffic. At the second level, the ACL receives the filtered traffic from the cloud firewall and only allows the traffic entries that match the access control table. The authentic traffic is forwarded to the remote servers, that is, the cloud VMs. Finally, the remote servers have their own virtual logins, guaranteeing authorized access to the resources.
In addition to the fog layer based load balancing feature, this three-level access control mechanism assures IoT users of low-latency services and improved response time, while minimizing the risk of unauthorized access.
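The three levels described above can be sketched as a chain of admission checks; the predicate names and return labels are illustrative, not the paper's implementation:

```python
# Minimal sketch of the three-level cloud access control chain
# (firewall -> ACL -> VM login). The predicates are placeholders; the
# actual rules in this work are configured as described in Section 4.
def admit(request, firewall_ok, acl_ok, vm_auth_ok):
    """Return the level at which a request is dropped, or 'served'."""
    if not firewall_ok(request):
        return "dropped:firewall"   # level 1: coarse filtering
    if not acl_ok(request):
        return "dropped:acl"        # level 2: rule-table match
    if not vm_auth_ok(request):
        return "dropped:vm-login"   # level 3: per-server authorization
    return "served"

result = admit({"src": "10.0.0.5"},
               firewall_ok=lambda r: True,
               acl_ok=lambda r: r["src"].startswith("10."),
               vm_auth_ok=lambda r: True)
print(result)  # served
```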
3.3 Process workflow
The workflow of the proposed LB-ACM is shown in Figure 2. It spans multiple layers, including the IoT, edge, fog, and cloud layers. The service requests are verified at multiple locations, and the access control is optimized. The service requests originate at the IoT layer and pass through the edge and fog layers to reach the cloud layer. As it is assumed that all internal users are authentic (see subsection 3.2.1), there is a chance of unauthorized access only at the cloud layer. Hence, multilevel access control is applied at the cloud layer. The firewall, the ACL, and the VM-level authentication together eliminate the chances of unauthorized access. The detailed workflow is depicted in Figure 2.
4 EXPERIMENTAL RESULTS AND ANALYSIS
This section discusses the experimental setup details, including the considered use-case application, IoT network traffic generation, its preprocessing and load balancing in the edge and fog layers, respectively, cloud-based heavy flow accommodation, and dynamic autoscaling of the resources. The related experimental results and respective discussion are also provided.
4.1 Experimental setup
This section details the application use-case for the proposed architecture, IoT traffic generation in the IoT layer, data preprocessing in the edge layer, load balancing of the flows in the fog layer, and the autoscaling of the resources in the cloud layer to accommodate heavy flows. The experimental setup uses the Amazon Web Services (AWS) platform for effective and easy implementation of the proposed approach.
4.1.1 Use-case application
This work uses a health-care application as its use-case. Real-time applications such as healthcare systems call for standardized real-time dynamic scheduling algorithms that can determine the data criticality and service request importance. IoT sensors generate a huge amount of traffic and need an effective, optimized computing mechanism. As per the proposed architecture, IoT traffic is serviced by the fog and/or cloud layer servers. Thus, the IoT data use the network computing in the form of a service; hence, it is called "Computing-as-a-Service." Before the IoT traffic gets stored in the local servers in the fog layer, it passes through the edge layer and gets preprocessed. This layer filters out possible outliers and ensures the servicing of only optimized data, resulting in optimal use of the service. If the local servers do not suffice, the traffic is directed to the cloud-based remote servers.
4.1.2 Traffic generation in IoT sensor layer
Artificial random IoT traffic is generated for the IoT devices. The generation method for nonauthentic traffic is adopted from Reference 35.
4.1.3 Traffic preprocessing in edge layer
The edge is a layer in close proximity to the IoT sensors. It receives the service requests from the sensors and preprocesses them. After completing the preprocessing, the request is forwarded to the fog layer for further servicing. Edge computing is implemented using an IoT device acting as an IoT gateway.
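As a sketch of this preprocessing step, the filter below drops readings that lie far from the sample mean; the z-score criterion and the temperature-like values are assumptions for illustration, since the paper does not specify the exact filtering rule:

```python
import statistics

def preprocess(readings, k=3.0):
    """Drop readings more than k sample standard deviations from the mean.

    The edge layer is stated to filter outliers; this z-score criterion
    is an illustrative choice, not the method used in this work."""
    if len(readings) < 2:
        return list(readings)
    mean = statistics.fmean(readings)
    sd = statistics.stdev(readings)
    if sd == 0:
        return list(readings)
    return [x for x in readings if abs(x - mean) <= k * sd]

# Hypothetical body-temperature stream with one sensor glitch.
print(preprocess([36.5, 36.7, 36.6, 98.0, 36.4], k=1.5))
# [36.5, 36.7, 36.6, 36.4]
```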
4.1.4 Flow load-balancing in fog layer
In the fog layer, load balancing plays a crucial role. It is used to optimize the LAN resources, which further helps increase the throughput and decrease the response time. Thus, its objective is to balance the load among the fog nodes, resulting in a LAN performance improvement which
FIGURE 2 Workflow of the proposed load balancing assisted access control mechanism. [Flowchart: IoT sensors generate an application-specific service request, which is forwarded to the edge layer, preprocessed to improve the correctness of the data, and passed to the fog-layer load balancer. If not all local servers are overloaded, the request is serviced at the least-loaded application-specific local server; otherwise, the load balancer forwards it to the cloud-based remote servers (VMs), where it arrives at the cloud firewall. A request failing the firewall check, the ACL match, or the VM authorization is discarded; an admitted request is processed at the respective VM while the monitoring agent records the cloud performance. If a performance metric crosses the lower threshold and performance remains low for 't' seconds, the request is marked as suspicious and an alert is sent to the administrator; otherwise, request servicing continues.]
TABLE 2 Load balancer rules

Inbound rules (Source | Protocol | Port range | Comment):
192.168.38.0/0 | TCP | listener | Allows all inbound network traffic to the load balancer listener port

Outbound rules (Destination | Protocol | Port range | Comment):
instance security group | TCP | instance listener | Allows outbound traffic to the instances' listener port
instance security group | TCP | health check | Allows outbound traffic to the instances' health check port
further improves the response time of the IoT service requests. It assists in avoiding bottlenecks and implementing scalability at the fog layer. In LB-ACM, the load balancer in the fog layer performs dynamic scheduling. The fog layer is implemented in the form of local LAN servers. The load balancer rules are specified in Table 2.
4.1.5 Heavy flow accommodation in cloud layer
The aggregate traffic arriving at the cloud layer may consist of authentic as well as nonauthentic sources. Some EC2 instances are used to configure the application server and Internet Information Services. Different physical machines are deployed for authentic and nonauthentic traffic. The Amazon CloudWatch monitoring service is used to monitor the cloud network performance metrics. The configuration of the EC2 instances is detailed in Table 3.
4.1.6 Dynamic autoscaling and performance metrics selection
The AWS autoscaling service is used to dynamically provision the cloud instances. It is also responsible for maintaining the availability of the cloud resources and for offering a flexible VM scaling mechanism that lets the cloud service provider cope with the users' requirements. The autoscaling feature uses three parameters, namely, (1) a system performance metric, such as memory consumption or CPU usage, (2) a performance threshold,
TABLE 3 Configuration of AWS EC2 t2.xlarge and t2.micro instances

Parameter: t2.xlarge value, t2.micro value
Name: T2.ExtraLarge, T2.Micro
API name: t2.xlarge, t2.micro
Memory: 16 GiB, 1 GiB
Virtual CPU: 4, 1
CPU credits/hour: 54, 6
Storage: 90 GiB, 30 GiB
Network speed: 10 Gbps, 1 Gbps
Network performance: low to moderate
Operating system: Microsoft Windows Server 2012 Standard
Processor: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40 GHz
Amazon machine image: ami-f318de8b
Architecture: AMD64
Hardware: Xen HVM domU
TABLE 4 Launch configuration details of the Amazon web services instances

Configuration parameters: Instance 1, Instance 2, Instance m
Hostname: WIN-3s05EEJ6886, WIN-0GB90ER2S3J, WIN-0792ANSIVCA
Instance ID: i-024fdf71c42616f6a, i-090debd7fb94112eb, i-034988080ffb76062
Availability zone: Asia-Pacific-2a, Asia-Pacific-2a, Asia-Pacific-2a
IPv4 public IP: 51.149.128.247, 52.43.7.131, 52.27.92.163
Private IP: 172.31.21.217, 172.31.31.217, 172.31.30.38
VPC ID: vpc-039e4165, vpc-039e4165, vpc-039e4165
Subnet ID: subnet-f2bd5494, subnet-f2bd5494, subnet-f2bd5494
and (3) the autoscaling time period. The system performance metric is an indication of the overall system utilization. The second parameter, the threshold, defines the value that triggers the autoscaling process; the threshold value is a system-performance-dependent parameter. Finally, if the autoscaling process gets triggered, it continues for a specific time period; the third parameter is the measure of this time period. The configuration of the various EC2 instances is shown in Table 4. As CPU utilization is the most important system performance parameter and is used in most of the related works, it is considered the detection parameter in this work.
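The three autoscaling parameters above can be sketched as a simple trigger check; the 70% threshold and six-sample sustain window are illustrative defaults, not settings reported in this work:

```python
# Sketch of the three-parameter autoscaling rule: a performance metric
# (here, CPU utilization samples), a trigger threshold, and a sustain
# period. Values below are illustrative assumptions.
def should_scale_out(cpu_samples, threshold=70.0, period_samples=6):
    """Trigger only when the last `period_samples` readings all exceed
    the threshold, i.e. the breach is sustained rather than a spike."""
    recent = cpu_samples[-period_samples:]
    return len(recent) == period_samples and all(c > threshold for c in recent)

samples = [55, 62, 75, 78, 81, 79, 84, 90]
print(should_scale_out(samples))  # True: the last six samples exceed 70
```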
4.1.7 Three-level cloud access control design
Different access control mechanisms are adopted at different locations using the cloud firewall, the ACL, and server-level authentication. The firewall is configured at the cloud gateway and forwards the authentic traffic coming from the fog layer to the ACL. A subnet-level ACL configuration controls the inbound and outbound traffic through ALLOW/DENY rules. Initially, the ACL allows all incoming traffic forwarded by the network firewall; upon detection of malpractice, DENY rules are added for the offending sources, and their subsequent traffic is blocked at the ACL level. The respective rules are listed in Table 5. Similarly, the VM-based server authorization rules are specified in Table 6.
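The first-match evaluation that the ACL performs can be sketched in Python. The rule set mirrors Table 5's inbound rules (source CIDRs and the ephemeral-port rule are omitted for brevity), and the matching logic follows standard AWS network ACL semantics — ascending rule-number order, first match wins, final "*" rule denies everything else — rather than the paper's exact implementation.

```python
# Sketch of subnet-level ACL evaluation per Table 5 (inbound, simplified).
INBOUND_RULES = [  # (rule_no, protocol, port, action)
    (100, "TCP", 80, "DENY"),     # HTTP
    (110, "TCP", 443, "ALLOW"),   # HTTPS
    (120, "TCP", 22, "ALLOW"),    # SSH
    (130, "TCP", 3389, "DENY"),   # RDP
]

def evaluate(protocol, port):
    """Check rules in ascending rule-number order; first match decides."""
    for _, proto, p, action in sorted(INBOUND_RULES):
        if proto == protocol and p == port:
            return action
    return "DENY"  # the trailing "*" rule: deny all remaining traffic

print(evaluate("TCP", 443))   # ALLOW (HTTPS)
print(evaluate("TCP", 80))    # DENY  (HTTP)
print(evaluate("TCP", 8080))  # DENY  (no matching rule)
```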
4.2 Experimental results and discussions
Multiple experiments have been performed. Five different network scenarios are considered with a varying number of IoT devices, incremented in steps of 5 for each successive experiment starting from Experiment 1 (Exp1), as detailed in Table 7. All experiments are run for a 24-h period.
TABLE 5 Cloud access control list rules

Inbound rules
Rule no.  Type         Protocol  Port range    Source           Allow/Deny
100       HTTP         TCP       80            192.168.38.0/24  Deny
110       HTTPS        TCP       443           192.168.38.0/24  Allow
120       SSH          TCP       22            192.168.38.0/24  Allow
130       RDP          TCP       3389          192.168.38.0/24  Deny
140       Custom TCP   TCP       49152–65535   0.0.0.0/0        Allow
*         All Traffic  All       All           0.0.0.0/0        Deny

Outbound rules
Rule no.  Type         Protocol  Port range    Source           Allow/Deny
100       HTTP         TCP       80            192.168.38.0/24  Allow
110       HTTPS        TCP       443           192.168.38.0/24  Allow
120       Custom TCP   TCP       49152–65535   0.0.0.0/0        Allow
*         All Traffic  All       All           0.0.0.0/0        Deny
TA B L E 6 Cloud server (VM) authorization rules based on security groups
Inbound rules
Source Protocol Port range Comment
192.168.38.0/0 TCP 80 Denies all inbound HTTP traffic
192.168.38.0/0 TCP 443 Allows all inbound HTTPS traffic
192.168.38.0/0 TCP 3389 Denies RDP access to Windows instance
192.168.38.0/0 TCP 22 Allows SSH access to Linux instance
Outbound rules
Destination Protocol Port range Comment
Database server TCP 1433 Allows outbound Microsoft SQL traffic to database listener port
MySQL database server TCP 3306 Allows outbound MySQL traffic to database listener port
TABLE 7 Different Edge-Fog-Cloud network based IoT environment scenarios

Experiment no.  IoT sensors  Edge layer nodes  Fog layer motes  AWS EC2 instances
1 (Exp1)        5            2                 1                10
2 (Exp2)        10           3                 2                10
3 (Exp3)        15           5                 3                10
4 (Exp4)        20           7                 4                10
5 (Exp5)        25           10                5                10
These experiments are performed to study the behavior of the proposed Edge-Fog-Cloud network framework in an IoT environment. The IoT nodes generate sensor data, which, after the needed preprocessing, is stored either at the local LAN level or at the remote cloud level using AWS. The results obtained from the experimental setup for the different experiments are detailed in Figures 3–12.
Figure 3 depicts the number of packets entering the cloud environment in the different experiments. Evidently, the number of packets entering the cloud increases with each successive experiment starting from Exp1, due to the growing number of IoT sensors. Figure 4 shows the average number of packets entering the network: 1398, 2669, 4289, 6301, and 6878 for Exp1 to Exp5, respectively.
Figure 5 details the number of packets coming out of the cloud network in all experiments. Similar to Figure 3, Figure 5 shows an increase in the number of packets in successive experiments starting from Exp1. However, in every experiment, fewer packets leave the cloud network than enter it. This happens due to the network packet collision, network
FIGURE 3 Cloud network input packets statistics
FIGURE 4 Average count of cloud network input packets
FIGURE 5 Cloud network output packets statistics
FIGURE 6 Average count of cloud network output packets
FIGURE 7 CPU utilization statistics
FIGURE 8 Average CPU utilization statistics
FIGURE 9 Optimized CPU utilization statistics
FIGURE 10 Average optimized CPU utilization statistics
FIGURE 11 Improved CPU utilization statistics
FIGURE 12 Average improved CPU utilization statistics
packet drop, and so forth. Similar to Figure 4, Figure 6 shows the average number of packets processed successfully by the cloud network: 1286, 2588, 4178, 6189, and 6745 in the respective experiments.
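The gap between the packets-in and packets-out averages reported above can be expressed as a per-experiment loss rate; a quick computation from the figures quoted in the text:

```python
# Loss rate = (packets_in - packets_out) / packets_in, per experiment,
# using the average counts reported for Figures 4 and 6.
packets_in  = [1398, 2669, 4289, 6301, 6878]  # Exp1..Exp5 (Figure 4)
packets_out = [1286, 2588, 4178, 6189, 6745]  # Exp1..Exp5 (Figure 6)

loss_rate = [round(100 * (i - o) / i, 2) for i, o in zip(packets_in, packets_out)]
print(loss_rate)  # [8.01, 3.03, 2.59, 1.78, 1.93]  (percent)
```

Notably, the relative loss shrinks as the load grows, even though the absolute in/out gap stays roughly similar across experiments.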
Figure 7 depicts the CPU utilization in the cloud network for all experiments. Like the cloud packets-in and packets-out statistics, it covers the 24-h observation period with a sample duration of 1 h.
The average CPU utilization is shown in Figure 8 for all experiments. It increases gradually with each successive experiment due to the gradual increase in the packets-in count in the cloud network, as shown in Figure 3. Figure 9 shows the optimized CPU utilization in the cloud network over the total observation period. A comparison of Figure 9 with Figure 7 makes clear that the proposed architecture-based ACM helps to optimize the network performance. Similarly, Figure 10 shows the average optimized CPU utilization in all experiments. The achieved optimal performance of the proposed architecture helps to optimize the network resource utilization in terms of CPU utilization. Figure 11 shows the improved CPU utilization statistics, and Figure 12 the corresponding averages. It is clear from Figure 12 that Exp5 achieves the highest CPU utilization improvement, followed by Exp3, Exp1, Exp2, and Exp4.
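The ordering reported for Figure 12 can be cross-checked against the average improvement percentages given in the abstract (10.13%, 10.01%, 10.82%, 8.78%, and 11.91% for Exp1 to Exp5):

```python
# Average improved CPU utilization per experiment, from the abstract.
improvement = {"Exp1": 10.13, "Exp2": 10.01, "Exp3": 10.82,
               "Exp4": 8.78, "Exp5": 11.91}

# Rank experiments by improvement, highest first.
ranking = sorted(improvement, key=improvement.get, reverse=True)
print(ranking)  # ['Exp5', 'Exp3', 'Exp1', 'Exp2', 'Exp4']
```

The computed ranking matches the Exp5 > Exp3 > Exp1 > Exp2 > Exp4 ordering observed in Figure 12.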
4.3 Comparative analysis
The comparative analysis of the proposed approach against related existing works is detailed in Table 8. The comparison is based on features such as resource management, load balancing, and access control. Almost all of the works address resource management, except References 19 and 24. Some works employ load balancing, such as References 16, 17, 19, 25, and 29. None of the works discuss access control, except Reference 24. In addition, most works use the concepts of edge, fog, and cloud computing: edge computing is used in References 17-19 and 27-29, and all of the works use fog and cloud computing.
TABLE 8 Comparison of the proposed approach with the existing approaches

S. no.  Reference                 Resource management  Load balancing  Access control  Edge based  Fog based  Cloud based
1.      Abbasi et al.16           ✓                    ✓               ×               ×           ✓          ✓
2.      Ficco et al.17            ✓                    ✓               ×               ✓           ✓          ✓
3.      Taherizadeh et al.18      ✓                    ×               ×               ✓           ✓          ✓
4.      Mohan and Kangasharju19   ×                    ✓               ×               ✓           ✓          ✓
5.      Alli and Alam22           ✓                    ×               ×               ×           ✓          ✓
6.      Zeng et al.23             ✓                    ×               ×               ×           ✓          ✓
7.      Xu et al.24               ×                    ×               ✓               ×           ✓          ✓
8.      Omoniwa et al.25          ✓                    ✓               ×               ×           ✓          ✓
9.      Cao et al.27              ✓                    ×               ×               ✓           ✓          ✓
10.     Kim et al.28              ✓                    ×               ×               ✓           ✓          ✓
11.     Nezami et al.29           ✓                    ✓               ×               ✓           ✓          ✓
12.     Proposed approach         ✓                    ✓               ✓               ✓           ✓          ✓
5 CONCLUSION AND FUTURE WORK
The huge amount of data generated by IoT devices needs to be serviced in an optimized way. The data need to be preprocessed first to improve the overall network performance and service response time. The proposed multilayer architecture involves an efficient load balancing mechanism and allows optimized access control. The proposed mechanism ensures the efficient servicing of IoT requests coming from the IoT layer using the services offered by multiple architectural layers: edge, fog, and cloud. The achieved results verify the performance of the proposed approach and pave the way for related future research.
Related future work includes the implementation of the proposed architecture with mobile IoT devices offering real-time environment mobility. In addition, the concepts of packet loss rate control and secure access in a mobile environment with guaranteed-delivery QoS are to be explored.
ORCID
Neha Agrawal https://orcid.org/0000-0002-7254-079X
REFERENCES
1. Gubbi J, Buyya R, Marusic S, Palaniswami M. Internet of things (IoT): a vision, architectural elements, and future directions. Future Gener Comput Syst.
2013;29(7):1645-1660.
2. Cai H, Xu B, Jiang L, Vasilakos AV. IoT-based big data storage systems in cloud computing: perspectives and challenges. IEEE Internet Things J.
2016;4(1):75-87.
3. Agrawal N, Tapaswi S. Defense mechanisms against DDoS attacks in a cloud computing environment: state-of-the-art and research challenges. IEEE
Commun Surv Tutor . 2019;21(4):3769-3795.
4. Sobin CC. A survey on architecture, protocols and challenges in IoT. Wirel Personal Commun. 2020;112(3):1383-1429.
5. Wazid M, Das AK, Hussain R, Succi G, Rodrigues JJ. Authentication in cloud-driven IoT-based big data environment: survey and outlook. J Syst Archit. 2019;97:185-196.
6. Dastjerdi AV, Buyya R. Fog computing: helping the internet of things realize its potential. Computer. 2016;49(8):112-116.
7. Chiang M, Zhang T. Fog and IoT: an overview of research opportunities. IEEE Internet Things J. 2016;3(6):854-864.
8. Satyanarayanan M. The emergence of edge computing. Computer. 2017;50(1):30-39.
9. Shi W, Cao J, Zhang Q, Li Y, Xu L. Edge computing: vision and challenges. IEEE Internet Things J. 2016;3(5):637-646.
10. Li H, Ota K, Dong M. Learning IoT in edge: deep learning for the internet of things with edge computing. IEEE Netw. 2018;32(1):96-101.
11. Hu P, Dhelim S, Ning H, Qiu T. Survey on fog computing: architecture, key technologies, applications and open issues. J Netw Comput Appl. 2017;98:27-42.
12. Ren J, Guo H, Xu C, Zhang Y. Serving at the edge: a scalable IoT architecture based on transparent computing. IEEE Netw. 2017;31(5):96-105.
13. Wang T, Wang P, Cai S, Ma Y, Liu A, Xie M. A unified trustworthy environment establishment based on edge computing in industrial IoT. IEEE Trans Ind
Inform. 2019;16(9):6083-6091.
14. Aburukba RO, AliKarrar M, Landolsi T, El-Fakih K. Scheduling internet of things requests to minimize latency in hybrid Fog cloud computing. Future Gener Comput Syst. 2020;111:539-551.
15. Alli AA, Alam MM. The fog cloud of things: a survey on concepts, architecture, standards, tools, and applications. IoT. 2020;9:100177.
16. Abbasi M, Yaghoobikia M, Rafiee M, Jolfaei A, Khosravi MR. Efficient resource management and workload allocation in fog cloud computing paradigm in IoT using learning classifier systems. Comput Commun. 2020;153:217-228.
17. Ficco M, Esposito C, Xiang Y, Palmieri F. Pseudo-dynamic testing of realistic edge-fog cloud ecosystems. IEEE Commun Mag. 2017;55(11):98-104.
18. Taherizadeh S, Stankovski V, Grobelnik M. A capillary computing architecture for dynamic internet of things: orchestration of microservices from edge
devices to fog and cloud providers. Sensors. 2018;18(9):29-38.
19. Mohan N, Kangasharju J. Edge-Fog cloud: a distributed cloud for internet of things computations. Proceedings of the 2016 Cloudification of the Internet of
Things (CIoT). Paris, France: IEEE; 2016:1-6.
20. Sarker VK, Queralta JP, Gia TN, Tenhunen H, Westerlund T. Offloading SLAM for indoor mobile robots with edge-fog-cloud computing. Paper presented at: Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT); May 2019:1-6; Dhaka, Bangladesh: IEEE.
21. Kochovski P, Gec S, Stankovski V, Bajec M, Drobintsev PD. Trust management in a blockchain based fog computing platform with trustless smart oracles.
Future Gener Comput Syst. 2019;101:747-759.
22. Alli AA, Alam MM. SecOFF-FCIoT: machine learning based secure offloading in fog-cloud of things for smart city applications. IoT . 2019;7:100070.
23. Zeng W, Zhang S, Yen IL, Bastani F. Semantic IoT data description and discovery in the IoT-edge-fog-cloud infrastructure. Paper presented at: Proceedings of the 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE); April 2019:106-10609; San Francisco, CA, USA: IEEE.
24. Xu R, Chen Y, Blasch E, Chen G. A federated capability-based access control mechanism for internet of things (iots). Sensors and Systems for Space
Applications XI. Vol 10641. Orlando, FL, USA: International society for optics and photonics; 2018:106410U.
25. Omoniwa B, Hussain R, Javed MA, Bouk SH, Malik SA. Fog/edge computing-based IoT (FECIoT): architecture, applications, and research issues. IEEE
Internet Things J. 2018;6(3):4118-4149.
26. Sanchez-Gallegos DD, Galaviz-Mosqueda A, Gonzalez-Compean JL, et al. On the continuous processing of health data in edge-fog-cloud computing by
using micro/nanoservice composition. IEEE Access. 2020;8:120255-120281.
27. Cao H, Wachowicz M, Renso C, Carlini E. An edge-fog-cloud platform for anticipatory learning process designed for internet of mobile things; 2017. arXiv preprint arXiv:1711.09745.
28. Kim NY, Ryu JH, Kwon BW, Pan Y, Park JH. CFCloudOrch: container fog node-based cloud orchestration for IoT networks. J Supercomput.
2018;74(12):7024-7045.
29. Nezami Z, Zamanifar K, Djemame K, Pournaras E. Decentralized edge-to-cloud load-balancing: service placement for the internet of things; 2020. arXiv preprint arXiv:2005.00270.
30. Gia TN, Qingqing L, Queralta JP, Zou Z, Tenhunen H, Westerlund T. Edge AI in Smart Farming IoT: CNNs at the Edge and Fog Computing with LoRa.
Proceedings of the IEEE AFRICON. Accra, Ghana: IEEE; 2019:1-6.
31. Memon RA, Li JP, Nazeer MI, Khan AN, Ahmed J. DualFog-IoT: additional fog layer for solving blockchain integration problem in Internet of Things. IEEE
Access. 2019;7:169073-169093.
32. Agrawal N, Tapaswi S. A proactive defense method for the stealthy EDoS attacks in a cloud environment. Int J Netw Manag. 2020;30(2):e2094.
33. Al-Haidari F, Sqalli M, Salah K. Evaluation of the impact of EDoS attacks against cloud computing services. Arab J Sci Eng. 2015;40(3):773-785.
34. Al-Haidari F, Salah K, Sqalli M, Buhari SM. Performance modeling and analysis of the EDoS-shield mitigation. Arab J Sci Eng. 2017;42(2):793-804.
35. Ficco M. Could emerging fraudulent energy consumption attacks make the cloud infrastructure costs unsustainable? Inform Sci. 2019;476:474-490.
How to cite this article: Agrawal N. Dynamic load balancing assisted optimized access control mechanism for Edge-Fog-Cloud network in
Internet of Things environment. Concurrency Computat Pract Exper. 2021;e6440. https://doi.org/10.1002/cpe.6440
... LB is a process by which workloads are distributed across multiple computers within a distributed system [7]. This improves the speed of job processing, such as reducing response time (RT) and increasing resource utilization [3] [7]. ...
... LB is a process by which workloads are distributed across multiple computers within a distributed system [7]. This improves the speed of job processing, such as reducing response time (RT) and increasing resource utilization [3] [7]. The primary goal of a LBA is to optimize system performance while keeping costs reasonable [8]. ...
Research
Full-text available
In the field of information technology cloud computing is a recently developed technology. In such a complicated system, an effective load balancing scheme is critical in order to meet peak user demands and deliver high-quality services. Load balancing is a way of distributing workload among several nodes over network links in order to maximize resource utilization, decrease data processing and response time, and avoid overload. There have been a number of load balancing algorithms suggested that concentrate on important factors including processing time, response time, and processing costs. These techniques, however, ignore cloud computing scenarios. At the same time, there is a few research works that focuses on the subject of load balancing in cloud computing. Motivated by this issue, this study addresses the load balancing challenge in cloud computing by comparing natural inspired Load Balancing Algorithms based on the resource utilization metric. The chosen load balancing methods will next be tested and assessed using the CloudSim simulator to choose the proper natural inspired Load balancing algorithms that solves the problem of load balancing in cloud computing, according to the result of the simulation it can be concluded that the LBA_HB is better than the HBB_LB based on the results for the response time, MakeSpan, and the degree of imbalance.
... well as back-end infrastructure like servers and databases, offering comprehensive control and assistance for cloud-based application development [1]. ...
Article
Full-text available
The only thing that sets fog computing apart from the cloud is its proximity to end users, which allows them to process and respond to customers faster. Second, it helps the Internet of Things, sensor networks, and real-time streaming apps—all of which depend on dependable and fast internet access. This study develops the optimal technique for fog computing load balancing and authentication. The Improved Tasmanian devil optimization (ITDO) algorithm and blockchain technology are combined in the suggested strategy. Blockchain technology is applied to data security and user authentication. Fog computing load balancing is optimised by the TDO, which manages optimal load balancing. Blockchain network nodes called fog nodes and edge devices log and verify the load-blanking procedure. In fog computing, load balancing and user authentication are the main research objectives. For load balancing, novel multi-objective function is designed. The performance of presented technique is analysed based on various metrics and performance compared with different techniques. For experimental analysis proposed approach attained the minimum waiting time of 25 s.
... Through this process the latency and overhead generation among the user devices can get reduced and as well here the fog devices generate the data in the form of motion capabilities [9]. In the fog computing network, the resources allocation among the devices is not familiar and it is complex to attain proper resource management among the end devices [10]. In order to increase the interconnected networks communication quality and terms like heterogeneity, geographic dispersion, and the asymmetry are concentrated among them. ...
Preprint
Full-text available
Fog computing is one of the efficient technologies used for overcoming the issues related to handling large quantities of Heterogeneous Internet of Things (HIoT) data like distributed network and health monitoring by dispensing the applications which gets applied nearer to the network edge. In fog based network model, distributed fog server closeness to the terminal devices allows currently available data to be transmitted effectually. But the increasing size of fog enabled HIoT environments leads to the design of effective routing and path selection to accomplish minimum delay and power consumption. With this motivation, this paper presents Optimization based Data Aggregation and fault Tolerance with Energy Management (ODFTEM) technique for fog enabled HIoT. This idea is segmented into three categories; they are a quasi-oppositional chimp optimization based data aggregation for routing, fault tolerance amongst the fog nodes and a Markov-chain-based probabilistic model for energy management. Initially, data aggregation is employed to gather the data and accumulate it for removing repetitive data and conserving energy. The goal of data aggregation is to reduce quantity of data dissemination and extend lifetime. A quasi-oppositional chimp optimization based data aggregation in routing is performed to choose an optimal collection of routes and perform data aggregation and it follows two stages like relay node selection and data aggregation. Secondly, to boost the network performance the best cost path is chosen where it selects the best nearest neighbor so that the packet transmission time can get reduced. Finally to forecast the energy consumption a Markov chain based probabilistic approach is used and this method can provide a comfortable energy saving model to the fog network. 
A detailed experimental validation of the ODFTEM technique reported the better performance in terms of energy consumption, throughput and transmission delay of the devices in the Fog enabled HIoT environment which is compared with the baseline methods like IHCFC, DAIFC and ODIFC.
... Cameras, LiDAR, ultrasonic, and temperature sensors can be connected to robotic systems to capture real-time environmental data. This plethora of data helps robots better understand and adapt to changes in their operational environment [16]. ...
Article
Full-text available
Robotics has been transformed by machine learning (ML), enabling intelligent and adaptive autonomous systems. By delivering massive computational resources and real-time data, fog/cloud computing and the Internet of Things boost ML-based robotics. Intelligent and linked robotics have emerged from fog/cloud computing, IoT, and machine learning. Robots using distributed computing, real-time IoT data, and advanced machine learning algorithms could alter industries and improve automation. To maximize its potential, this revolutionary combination must overcome several obstacles. This paper discusses the benefits and drawbacks of integrating technologies. It offer rapid model training and deployment for robots ML algorithms like deep learning and reinforcement learning. Case studies demonstrate how this combination might enhance robotics across industries. This study discusses the benefits and drawbacks of fog/cloud computing, IoT, and machine learning in robots. We propose solutions for security and privacy, resource management, latency and bandwidth, interoperability, energy efficiency, data quality, and bias. By proactively addressing these difficulties, we can establish a secure, efficient, and privacy-conscious robotic ecosystem where robots seamlessly interact with the physical world, improving productivity, safety, and human-robot collaboration. As these technologies progress, appropriate integration and ethical principles are needed to maximize their benefits to society.
... The jobs run as a container task on AWS and execute spaCy training based on the provided configuration and the uploaded training corpus. Our NER training application is optimized to efficiently process large datasets through the use of AWS dynamic load balancing [1]. This allows multiple training jobs to be queued up and executed in parallel, without overwhelming the compute resources. ...
Chapter
This research introduces an extended Information Extraction system for Named Entity Recognition (NER) that allows machine learning (ML) practitioners and medical domain experts to customize and develop their own models using transformers and a range of Cloud resources. Our system provides support for the entire process of managing Cloud resources, including hardware, computing, storage, and training services for NER models. The paper discusses the design and development of two prototypes that target the AWS and Azure Cloud, which were evaluated by experts using the cognitive walkthrough methodology. Additionally, the paper presents quantitative evaluation results that showcase the promising performance of our NER model training approach in the medical domain, outperforming existing approaches.
... The jobs run as a container task on AWS and execute spaCy training based on the provided configuration and the uploaded training corpus. Our NER training application is optimized to efficiently process large datasets through the use of AWS dynamic load balancing [1]. This allows multiple training jobs to be queued up and executed in parallel, without overwhelming the compute resources. ...
Conference Paper
Full-text available
This research introduces an extended Information Extrac- tion system for Named Entity Recognition (NER) that allows machine learning (ML) practitioners and medical domain experts to customize and develop their own models using transformers and a range of Cloud resources. Our system provides support for the entire process of managing Cloud resources, including hardware, computing, storage, and training services for NER models. The paper discusses the design and development of two prototypes that target the AWS and Azure Cloud, which were evaluated by experts using the cognitive walkthrough methodology. Additionally, the paper presents quantitative evaluation results that showcase the promising performance of our NER model training approach in the medical domain, outperforming existing approaches target the AWS and Azure Cloud, which were evaluated by experts using the cognitive walkthrough methodology. Additionally, the paper presents quantitative evaluation results that showcase the promising performance of our NER model training approach in the medical domain, outperforming existing approaches.
... By introducing even the Cloud layer [18] we increase the computation capability, although the cloud is not used in this work, we can still refer to the load balancing strategies offered by different works. For example, [19] proposes an Edge-Fog-Cloud algorithm for distributing the traffic in all of the three layers but the focus is not the latency optimisation, [1] provides a model based on queuing theory, [20] studies a load balancing approach for the Fog-Cloud environment classifying requests in real-time, important and time-tolerant but again the approach is not focused on latency levelling, then [21] proposes a scheduling approach based on blockchain and [22] a strategy to cope with failures by using Software-Defined Networks (SDN). ...
Article
Full-text available
When dealing with distributed applications in Edge or Fog computing environments, the service latency that the user experiences at a given node can be considered an indicator of how much the node itself is loaded with respect to the others. Indeed, only considering the average CPU time or the RAM utilisation, for example, does not give a clear depiction of the load situation because these parameters are application- and hardware-agnostic. They do not give any information about how the application is performing from the user's perspective, and they cannot be used for a QoS-oriented load balancing. In this paper, we propose a load balancing algorithm that is focused on the service latency with the objective of levelling it across all the nodes in a fully decentralised manner. In this way, no user will experience a worse QoS than the other. By providing a differential model of the system and an adaptive heuristic to find the solution to the problem in real settings, we show both in simulation and in a real-world deployment, based on a cluster of Raspberry Pi boards, that our approach is able to level the service latency among a set of heterogeneous nodes organised in different topology.
Article
Full-text available
Internet of Things has been a popular technology in recent years for deploying a large‐scale, smart environment monitoring application across the country, using fog computing and the cloud. However, most locations of the developing countries suffer from power outages and limited network connectivity. Moreover, varied population in different locations, may lead to either frequent or rare changes in the state of the monitored environment. Due to these stochastic conditions, there may be substantial increase in service time of the application with unnecessary battery and resource consumption of the fog node. For efficient utilization of fog node resources in dynamic environment conditions, an event‐driven information fusion framework is proposed using docker containerization technology and MQTT (Message Queuing Telemetry Transport) as application layer protocol. The proposed framework provides resilience to the application in the presence of stochastic conditions and auto‐scales edge intelligence with minimum scaling operations. The performance of the framework is validated on a smart sanitation system use‐case application, proposed for autonomous monitoring of restrooms deployed in Indian rural and semi‐urban environment, and the experimental result show deviation of 2.8% in average response time of the application with average accuracy of 98.9%.
Article
Full-text available
The Internet of Things (IoT) requires a new processing paradigm that inherits the scalability of the cloud while minimizing network latency using resources closer to the network edge. On the one hand, building up such flexibility within the edge-to-cloud continuum consisting of a distributed networked ecosystem of heterogeneous computing resources is challenging. On the other hand, IoT traffic dynamics and the rising demand for low-latency services foster the need for minimizing the response time and a balanced service placement. Load-balancing for fog computing becomes a cornerstone for cost-effective system management and operations. This paper studies two optimization objectives and formulates a decentralized load-balancing problem for IoT service placement: (global) IoT workload balance and (local) quality of service (QoS), in terms of minimizing the cost of deadline violation, service deployment, and unhosted services. The proposed solution, EPOS Fog, introduces a decentralized multi-agent system for collective learning that utilizes edge-to-cloud nodes to jointly balance the input workload across the network and minimize the costs involved in service execution. The agents locally generate possible assignments of requests to resources and then cooperatively select an assignment such that their combination maximizes edge utilization while minimizes service execution cost. Extensive experimental evaluation with realistic Google cluster workloads on various networks demonstrates the superior performance of EPOS Fog in terms of workload balance and QoS, compared to approaches such as First Fit and exclusively Cloud-based. The results confirm that EPOS Fog reduces service execution delay up to 25% and the load-balance of network nodes up to 90%. The findings also demonstrate how distributed computational resources on the edge can be utilized more cost-effectively by harvesting collective intelligence.
Article
Full-text available
The edge, the fog, the cloud, and even the end-user’s devices play a key role in the management of the health sensitive content/data lifecycle. However, the creation and management of solutions including multiple applications executed by multiple users in multiple environments (edge, the fog, and the cloud) to process multiple health repositories that, at the same time, fulfilling non-functional requirements (NFRs) represents a complex challenge for health care organizations. This paper presents the design, development, and implementation of an architectural model to create, on-demand, edge-fog-cloud processing structures to continuously handle big health data and, at the same time, to execute services for fulfilling NFRs. In this model, constructive and modular blocks, implemented as microservices and nanoservices, are recursively interconnected to create edge-fog-cloud processing structures as infrastructure-agnostic services. Continuity schemes create dataflows through the blocks of edge-fog-cloud structures and enforce, in an implicit manner, the fulfillment of NFRs for data arriving and departing to/from each block of each edge-fog-cloud structure. To show the feasibility of this model, a prototype was built using this model, which was evaluated in a case study based on the processing of health data for supporting critical decision-making procedures in remote patient monitoring. This study considered scenarios where end-users and medical staff received insights discovered when processing electrocardiograms (ECGs) produced by sensors in wireless IoT devices as well as where physicians received patient records (spirometry studies, ECGs and tomography images) and warnings raised when online analyzing and identifying anomalies in the analyzed ECG data. A scenario where organizations manage multiple simultaneous each edge-fog-cloud structure for processing of health data and contents delivered to internal and external staff was also studied. 
The evaluation of these scenarios showed the feasibility of applying this model to the building of solutions interconnecting multiple services/applications managing big health data through different environments.
Article
Full-text available
Cloud computing technology gives the Cloud Service Provider (CSP) the flexibility to provide cloud resources based on users' requirements. In the on-demand pricing model, attackers exploit this feature and cause unwanted scaling-up of cloud resources without any intent to pay for them. The cost of this unpaid malicious usage burdens the CSP and, over a long period, results in economic losses at the CSP end. Thus, the resources and services offered by the CSP become unsustainable, and the attack is termed an Economic Denial-of-Sustainability (EDoS) attack. Existing defense approaches for EDoS attacks are reactive, so the associated attack detection/mitigation cost is high; consequently, these approaches are not suitable for Small and Medium Enterprises (SMEs). The aim of this paper is to proactively detect and mitigate internal and external stealthy EDoS attacks. The attack is detected using an average CPU utilization threshold and a utility function (in terms of cost for the utilized cloud computing resources) and mitigated using virtual firewalls. Amazon Elastic Compute Cloud (Amazon EC2) is used to evaluate the performance of the proposed approach. The proposed approach accurately detects the EDoS attack and mitigates its effect at reduced cost. It is observed that the approach provides competitive response time, victim service downtime, and attack reporting time, improving the overall performance. Key findings: (a) the attack is detected using average CPU utilization and mitigated using virtual cooperative firewalls; (b) the performance of the proposed approach is evaluated on Amazon Elastic Compute Cloud (Amazon EC2); (c) experimental results show that the approach accurately detects the EDoS attack and mitigates its effect at reduced cost with improved performance; (d) the approach provides competitive response time, attack reporting time, and victim service downtime.
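The detection signal described in this abstract combines a CPU utilization threshold with a cost-based utility function. A minimal sketch of that combination follows; the threshold values, the utility formula, and the variable names are assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch: flag a potential EDoS attack when average CPU
# utilization stays above a threshold while the cost-based utility of
# the consumed resources drops below a floor.
def utility(revenue: float, resource_cost: float) -> float:
    # Utility expressed as revenue minus the cost of utilized resources.
    return revenue - resource_cost

def detect_edos(cpu_samples, revenue, resource_cost,
                cpu_threshold=70.0, utility_floor=0.0) -> bool:
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    return avg_cpu > cpu_threshold and utility(revenue, resource_cost) < utility_floor

# Sustained high CPU with costs exceeding revenue -> suspected EDoS.
suspicious = detect_edos([85, 92, 88, 90], revenue=100.0, resource_cost=180.0)
# Moderate CPU with positive utility -> benign workload.
normal = detect_edos([40, 35, 50, 45], revenue=100.0, resource_cost=60.0)
```

The proactive aspect comes from evaluating this condition continuously on monitoring samples rather than after a billing anomaly is reported.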
Article
Full-text available
Internet of Things (IoT) is an emerging paradigm which aims to inter-connect all smart physical devices, so that the devices together can provide smart services to the users. Some of the IoT applications include smart homes, smart cities, smart grids, smart retail, etc. Since IoT systems are built up with heterogeneous hardware and networking technologies, connecting them to the software/application level to extract information from large amounts of data is a complex task. In this paper, we have surveyed various architectures and protocols used in IoT systems and proposed suitable taxonomies for classifying them. We have also discussed the technical challenges, such as security and privacy, interoperability, scalability, and energy efficiency, and provided an in-depth coverage of recent research works for every mentioned challenge. The objective of this survey is to help future researchers identify IoT-specific challenges and adopt appropriate technology depending on the application requirements.
Article
Full-text available
Integration of blockchain and the Internet of Things (IoT) to build a secure, trusted, and robust communication technology is currently of great interest to research communities and industry. The challenge, however, is to identify the appropriate position of blockchain in current IoT settings with minimal consequences. In this article we propose a blockchain-based DualFog-IoT architecture with three configuration filters for incoming requests at the access level, namely: Real-Time (RT), Non-Real-Time (NRT), and Delay Tolerant Blockchain applications. DualFog-IoT segregates the fog layer into two clusters: a Fog Cloud Cluster and a Fog Mining Cluster. The Fog Cloud Cluster and the main cloud datacenter work in tandem, similar to the existing IoT architecture, for real-time and non-real-time application requests, while the additional Fog Mining Cluster is dedicated to handling only Delay Tolerant Blockchain application requests. The proposed DualFog-IoT is compared with the existing centralized datacenter-based IoT architecture. Along with the inherited features of blockchain, the proposed model decreases the system drop rate and further offloads the cloud datacenter with minimal upgrading of the existing IoT ecosystem. The reduced computing load at the cloud datacenter not only saves capital and operational expenses, but also contributes substantially to saving energy resources and minimizing carbon emissions. Furthermore, the proposed DualFog-IoT is analyzed for the optimization of computing resources at the cloud level, and the presented results show the feasibility of the proposed architecture under various ratios of incoming RT and NRT requests. The integration of blockchain does leave its footprint in the form of latent responses for delay-tolerant blockchain applications, but real-time and non-real-time requests gracefully satisfy the service level agreement.
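The three-way access-level filtering described above amounts to a routing decision per request class. A hypothetical sketch, with the cluster names taken from the abstract and the routing function itself an assumption:

```python
from enum import Enum, auto

# The three request classes admitted by the DualFog-IoT access filter.
class RequestClass(Enum):
    REAL_TIME = auto()
    NON_REAL_TIME = auto()
    DELAY_TOLERANT_BLOCKCHAIN = auto()

def route(request_class: RequestClass) -> str:
    # RT requests are served near the edge by the Fog Cloud Cluster;
    # NRT requests follow the conventional path to the cloud datacenter;
    # delay-tolerant blockchain requests go to the Fog Mining Cluster.
    if request_class is RequestClass.REAL_TIME:
        return "fog-cloud-cluster"
    if request_class is RequestClass.NON_REAL_TIME:
        return "cloud-datacenter"
    return "fog-mining-cluster"
```

Keeping mining traffic on a separate cluster is what lets the latency-sensitive classes meet their service level agreement while blockchain work proceeds in the background.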
Article
Full-text available
Delivering services for Internet of Things (IoT) applications that demand real-time and predictable latency is a challenge. Several IoT applications have stringent latency requirements due to the interaction between the IoT devices and the physical environment through sensing and actuation. The limited capabilities of IoT devices require applications to be integrated with the Cloud and Fog computing paradigms. Fog computing significantly improves service latency as it brings resources closer to the edge. The characteristics of both Fog and Cloud computing will enable the integration and interoperation of a large number of IoT devices and services in different domains. This work models the scheduling of IoT service requests as an optimization problem using integer programming in order to minimize the overall service request latency. The scheduling problem is NP-hard by nature, and hence exact optimization solutions are inadequate for large problem sizes. This work introduces a customized implementation of the genetic algorithm (GA) as a heuristic approach to schedule the IoT requests with the objective of minimizing the overall latency. The GA is tested in a simulation environment that considers the dynamic nature of the environment. The performance of the GA is evaluated and compared to that of weighted fair queuing (WFQ), priority-strict queuing (PSQ), and round robin (RR) techniques. The results show that the overall latency for the proposed approach is 21.9% to 46.6% better than the other algorithms. The proposed approach also showed significant improvement in meeting request deadlines, by up to 31%.
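The GA formulation outlined here can be sketched compactly: a chromosome assigns each request to a node, and fitness is the total latency being minimized. This is an illustrative toy, not the paper's customized GA; population size, crossover, and mutation settings are assumptions.

```python
import random

def total_latency(assignment, latency):
    # latency[r][n] = latency of serving request r on node n.
    return sum(latency[r][n] for r, n in enumerate(assignment))

def ga_schedule(latency, pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    n_req, n_node = len(latency), len(latency[0])
    # Random initial population of request-to-node assignments.
    pop = [[rng.randrange(n_node) for _ in range(n_req)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: total_latency(c, latency))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_req)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(n_req)] = rng.randrange(n_node)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: total_latency(c, latency))
```

For a small instance such as `latency = [[5, 1], [1, 5], [2, 9]]` the GA recovers the optimal assignment (requests 0, 1, 2 on nodes 1, 0, 0) with total latency 4, which illustrates why a heuristic like this scales where exact integer programming does not.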
Article
Computation offloading is one of the important applications in the Internet of Things (IoT) ecosystem. Computational offloading provides an assisted means of processing the large amounts of data generated by abundant IoT devices, speeds up the processing of intensive tasks, and saves battery life. In this paper, we propose a secure computation offloading scheme in a Fog-Cloud-IoT environment (SecOFF-FCIoT). Using machine learning strategies, we accomplish efficient, secure offloading in the Fog-IoT setting. In particular, we employ a Neuro-Fuzzy Model to secure data at the smart gateway; the IoT device then selects an optimal fog node to which it can offload its workload using Particle Swarm Optimization (PSO) via the smart gateway. If the fog node is not capable of handling the workload, the workload is forwarded to the cloud after being classified as either sensitive or non-sensitive. Sensitive data is maintained in a private cloud, whereas non-sensitive data is offloaded using a dynamic offloading strategy. In PSO, the availability of a fog node is computed using two metrics: (i) Available Processing Capacity (APC) and (ii) Remaining Node Energy (RNE). Cloud selection is based on reinforcement learning. Our proposed approach is implemented for smart city applications using the NS-3 simulator with Java programming. We compare our proposed secure computation offloading model with previous approaches, including DTO-SO, FCFS, LOTEC, and CMS-ACO. Simulation results show that our proposed scheme minimizes latency compared to the selected benchmarks.
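The fog-node availability metric built from APC and RNE can be illustrated with a simple weighted score. The paper searches this space with PSO; a direct weighted sum over candidate nodes is used here for brevity, and the weights and node names are assumptions.

```python
# Illustrative sketch: score each fog node by Available Processing
# Capacity (APC) and Remaining Node Energy (RNE), both normalized to
# [0, 1], and offload to the best-scoring node.
def availability(apc: float, rne: float,
                 w_apc: float = 0.6, w_rne: float = 0.4) -> float:
    # Assumed weighting between processing capacity and residual energy.
    return w_apc * apc + w_rne * rne

def select_fog_node(nodes: dict) -> str:
    # nodes maps node_id -> (apc, rne).
    return max(nodes, key=lambda n: availability(*nodes[n]))

best = select_fog_node({
    "fog-1": (0.9, 0.2),   # fast but nearly drained
    "fog-2": (0.5, 0.9),   # balanced capacity and energy
    "fog-3": (0.3, 0.3),   # weak on both metrics
})
```

In the full scheme the trade-off between the two metrics would be explored by PSO particles rather than fixed weights, but the ranking idea is the same.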
Article
Fog computing paradigms are becoming a popular means of utilizing resources optimally for IoT devices, extending quality of service to the vicinity of the user, and achieving fast processing in IoT-cloud ecosystems. Fog models allow fast processing of data and easy-to-reach storage, and reduce bulky network transit. The inefficiencies of the cloud cause unnecessarily large volumes of data to be sent to the backhaul of the network, which overwhelms the cloud infrastructure. Fog computing addresses these limitations of cloud systems by improving the robustness, efficiency, and performance of the cloud infrastructure. The need to process some of the big data produced at the periphery of the network using intelligent techniques in fog-cloud ecosystems is key to the new and interesting architectures reported in the recent literature. These architectures provide new business opportunities that drive Internet of Things devices to function according to users' demands. In this paper, we provide an extensive survey on fog-edge computing to give a foundation for solutions proposed in studies involving IoT-Fog-Cloud ecosystems. This is done by providing insights into newly explored research aspects, the state of the art in fog computing architectures, standards, tools, and applications. We project future development trends and present open issues in the fog cloud of things. This will help developers build applications that work well in a cloud-based controlled ecosystem across a range of network terminals.
Article
With the increase in the number of IIoT devices in industrial environments, security threats and quality of service (QoS) issues increase drastically. The internal attack is one important type of security threat that makes the service environment worse and less reliable. However, there is no unified and fine-grained trust evaluation mechanism to deal with the threat of internal attacks and improve the QoS of IIoT. To this end, a unified trustworthy environment based on edge computing is established and maintained, which can detect malicious service providers and service consumers in a timely manner, filter unreal information, and recommend credible service providers. Moreover, a service selection method is designed to choose the corresponding trustworthy and reliable service providers based on the trust evaluation and the recording criterion, which has distinctive advantages in succinct trust management, convenient service searching, and accurate service matching. Experiments validated the feasibility of the proposed trustworthy environment.
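The trust evaluation and service selection loop described here can be sketched as edge-side bookkeeping: update a provider's trust score per interaction and recommend only providers above a threshold. The exponential-moving-average update, the threshold, and the provider names are assumptions for illustration, not the paper's exact mechanism.

```python
# Hypothetical sketch of edge-side trust bookkeeping.
def update_trust(current: float, feedback: float, alpha: float = 0.3) -> float:
    # feedback in [0, 1]: 1 = satisfactory service, 0 = malicious behavior.
    # Exponential moving average keeps the score responsive yet stable.
    return (1 - alpha) * current + alpha * feedback

def trustworthy_providers(scores: dict, threshold: float = 0.6) -> list:
    # Recommend providers above the trust threshold, best first.
    return sorted((p for p, s in scores.items() if s >= threshold),
                  key=lambda p: -scores[p])

scores = {"provider-a": 0.9, "provider-b": 0.4}
scores["provider-b"] = update_trust(scores["provider-b"], 0.0)  # misbehavior observed
recommended = trustworthy_providers(scores)
```

Keeping this state at the edge, as the abstract proposes, lets malicious providers be filtered before a service request ever reaches the cloud-side matching step.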