EXEGESIS: Extreme edge resources harvesting for a virtualized Fog
environment
Evangelos K. Markakis, Kimon Karras, Nikolaos Zotos, Anargyros Sideris, Theoharris Moysiadis, Angelo
Corsaro, George Alexiou, Charalabos Skianis, George Mastorakis, Constandinos X. Mavromoustakis,
Evangelos Pallis
Abstract
Currently there is an active debate about how the existing cloud paradigm can cope with the volume, variety and velocity of the data generated by end devices (e.g. Internet of Things sensors): over 50 billion of these devices are expected by 2020, creating more than two exabytes' worth of data each day. Additionally, the vast number of edge devices creates a huge ocean of digital resources close to the data source which, however, remains so far unexploited to its full extent. EXEGESIS proposes to
harness these unutilized resources via a three-layer architecture that encompasses the mist, fog and cloud.
The mist network is located at the very bottom, where interconnected objects (Internet of Things devices,
Small Servers, etc.) create neighborhoods of objects. This arrangement is enhanced by a virtual fog layer
which allows for dynamic, ad-hoc interconnections among the various neighborhoods. At the top layer
resides the cloud with its abundant resources that can also be included in one or more virtual fog
neighborhoods. Thus, this paper complements and leverages existing cloud architectures enabling them to
interact with this new edge-centric ecosystem of devices/resources, and benefit from the fact that critical
data are available where they can add the most value.
INTRODUCTION AND CONTEXT
Nowadays there is much discussion about how the cloud paradigm can cope with the volume, variety and velocity of the data generated by end devices (e.g. Internet of Things (IoT) sensors). Over 50 billion of these end-devices [1], commonly referred to as the "Things", are expected by 2020, creating more than two exabytes' worth of data each day. It is clear that shipping all of that data to the cloud, and processing and storing them there, as the current paradigm dictates, can run into significant bottlenecks in terms of latency and network capacity. On the other hand, it is hard to miss that the vast number of end-devices, most of them offering some form of processing power, storage space and network connectivity, could constitute a pristine "ocean" of digital resources, which could be harnessed and used to address the bottlenecks of the current cloud paradigm by processing and storing data close to where they were created.
Evangelos K. Markakis, George Mastorakis and Evangelos Pallis are with the Technological Educational Institute of Crete.
Kimon Karras, Nikolaos Zotos, Anargyros Sideris and Theoharris Moysiadis are with Future Intelligence Ltd.
Angelo Corsaro is with PrismTech Corp.
Charalabos Skianis is with the University of the Aegean.
Constandinos X. Mavromoustakis is with the University of Nicosia.
In this context, EXEGESIS, building upon and extending existing concepts [3], [8] such as micro datacenters, cloudlets, mobile edge computing (MEC) and fog computing [7], proposes a novel three-layered architecture that is able not only to reap the resources of end-users' devices but also to couple them to the cloud, by providing a cross-layer orchestration platform able to deploy services that have a cloud and a mist component, and a distributed marketplace where these resources can be traded by any EXEGESIS stakeholder: a local authority in Athens, a Small or Medium-sized Enterprise (SME) in Madrid, a corporation in Brussels. In this way, EXEGESIS envisages enabling new and innovative services and process efficiencies that are not possible with cloud computing alone.
Figure 1. High level view of EXEGESIS concept
The EXEGESIS high level architecture is composed of three layers (Figure 1). At the very bottom, the
mist network is located, where interconnected objects (probes, sensors, cell phones, home appliance devices,
small servers, small cell controllers, etc.) create a neighborhood. This arrangement is enhanced by the virtual
Fog (vFog) layer which allows for dynamic, ad-hoc interconnections among the various neighborhoods
allowing sub-groupings called “suburbs” to be formed. At the top layer resides the conventional cloud with
its abundant resources that can also be included in one or more “suburbs” in order to provide compute
resources and facilitate the interconnection of the various vFog elements. In this context, EXEGESIS
complements and even leverages existing cloud architectures as it enables them to interact with this new
edge-centric ecosystem of devices/resources and benefit from the fact that critical data are available where
they can add the most value.
The key idea and challenge here is to be able to partition the three-layer infrastructure consisting of the
mist, vFog and cloud layers into logical networks whose membership can partially overlap with that of other
such logical networks and to be able to dynamically remold this partitioning to ensure optimal performance
and utilization of the available resources.
Furthermore, EXEGESIS aims to enable business innovation via the deployment and use of suburb-based marketplaces named "AGORAs", after the Greek word for the place where all social and economic activity takes place. For EXEGESIS, the "AGORA" is the system through which every infrastructure/platform provider offers over-the-top (OTT) and on-demand accelerated service/network/connectivity applications to requesting entities.
In other words, EXEGESIS aims to radically reshape the mist, fog and cloud landscapes by merging
them into one coherent whole and then slicing and dicing that into logical entities in order to achieve optimal
performance and resource utilization.
BACKGROUND AND RELATED WORK
Concept
EXEGESIS, building on the concepts of "edge computing" [2], "frugality of resources" [5] and "democratization of the digital economy" [6], envisages a future where the processing, storage and networking resources of the devices residing at the edge of the network can be harnessed and integrated seamlessly and dynamically into a flexible system architecture. To make this a reality, EXEGESIS provides the means for establishing virtual fogs: overlays of interconnected end-devices, which can be intertwined with cloud resources, forming ad-hoc isles of connectivity and compute and setting the basis for a common marketplace where services can easily be deployed across all layers.
The following sections describe the methodology that EXEGESIS follows in order to reach its objectives, as well as the technological aspects utilized for realizing them.
Technical Approach
EXEGESIS proposes a new interaction ecosystem composed of three layers. At the very bottom, the
mist layer is located, where interconnected objects create a neighborhood. This arrangement is enhanced by
the vFog layer which allows for dynamic, ad-hoc interconnections among various mist elements allowing
sub-groupings called “suburbs” to be formed. Cloud layer resources can also be included in a suburb in
order to provide resources and facilitate the interconnection of the various elements. The key idea here is to
be able to partition the three-layer infrastructure consisting of the mist, fog and cloud layers into logical
virtual networks whose membership can partially overlap with that of other vFog networks and to be able
to dynamically remold this partitioning to ensure optimal utilization of the available resources.
Mist for EXEGESIS is the unified extreme edge playground where a variety of end-user devices (an end user can also be a company that utilizes EXEGESIS solutions) cooperate towards abstracting their available resources into a common virtual pool, thereby enabling any legitimate entity to use these resources for hosting a variety of compute and networking tasks. The EXEGESIS mist overlay adopts the hybrid P2P approach where a peer can be "primus inter pares" (first among equals). In this context, the EXEGESIS mist network has two classes of peers (see Figure 2): regular mist nodes (RMNs) and super mist nodes (SMNs).
An RMN can be any end device having at least some processing and communication capabilities that will
allow EXEGESIS to deploy its solution on it and thus transform the device to a fully operational EXEGESIS
mist node. An RMN is able to interact with its corresponding SMN, first to inform it about the device's
available resources and second to receive and carry out the assigned computational and/or networking tasks.
To that end, a special kind of software, called the vFog agent, runs on each RMN. An RMN can be any
physical or virtual entity having even a "pinch" of processing and communication capabilities.
An SMN plays two roles inside the EXEGESIS ecosystem, namely the role of the mist's intra-manager and the role of the mist's envoy to the vFog orchestrator. As an intra-manager, an SMN:
• oversees the formation of the mist network by performing operations such as the (de)registration of mist nodes;
• queries the registered mist nodes about their state and their available resources;
• creates a logical topology of the mist network along with a virtual pool of the RMNs' available resources.
As an envoy, an SMN interacts with the vFog orchestrator towards:
• (de)registering a mist network to the vFog overlay;
• providing a "copy" of the SMN's virtual pool of resources, thereby enabling the vFog orchestrator to have a clear image of the available resources across the whole vFog overlay;
• mediating between the vFog orchestrator and the RMNs for reserving resources, assigning computational tasks or even deploying network function virtualization infrastructure (NFVI) elements.
Following the hybrid P2P paradigm, an SMN is elected from the currently running RMNs, taking into account several attributes such as processing and memory capabilities, network capacity, and power level/type, among others. Acknowledging that the uncontrolled participation of mist nodes in the election process could pose security threats, EXEGESIS provides the means for "screening" the candidate list based on the EXEGESIS stakeholder's policies. The elected SMN manages the RMNs and is the point of contact to the vFog orchestrator.
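The paper does not prescribe a concrete election algorithm, so the following Python sketch only illustrates the idea of attribute-based scoring combined with policy screening; the attribute names, weights and allow-list check are hypothetical.

    # Illustrative sketch of electing an SMN among running RMNs; this is not the
    # EXEGESIS election protocol itself. Attributes, weights and the policy check
    # (a simple allow-list) are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class MistNode:
        node_id: str
        cpu_cores: int         # processing capability
        memory_mb: int         # memory capability
        bandwidth_mbps: float  # network capacity
        mains_powered: bool    # power type (mains vs. battery)

    def elect_smn(nodes, allowed_ids):
        """Screen candidates against a stakeholder policy, then pick the best-scoring RMN."""
        def score(n):
            return (2.0 * n.cpu_cores + 0.001 * n.memory_mb
                    + 0.5 * n.bandwidth_mbps + (10.0 if n.mains_powered else 0.0))
        candidates = [n for n in nodes if n.node_id in allowed_ids]
        return max(candidates, key=score)

    nodes = [MistNode("cam-1", 2, 512, 10.0, False), MistNode("gw-1", 4, 2048, 100.0, True)]
    print(elect_smn(nodes, {"cam-1", "gw-1"}).node_id)  # -> gw-1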
Figure 2. Two vFog neighborhoods accommodating two mist networks each
The tremendous number and the vast heterogeneity of the devices living on the edge of the network pose a significant challenge for EXEGESIS in forming manageable and efficiently operating mist networks. To handle this challenge, EXEGESIS proposes the development and exploitation of a middleware solution that sits on top of each device's operating system (OS). The middleware utilizes a southbound application programming interface (API) for interacting with the OS and acquiring access to the device's actual resources, and a northbound API for communicating with its vFog orchestrator. A hypervisor will be exploited for deploying, in containerized form (which reduces the system's footprint and increases service deployability), the RMN/SMN module and, if assigned by the vFog orchestrator, other software units that carry out computational tasks or realize a service.
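As a rough structural sketch (not the actual EXEGESIS middleware), such an agent could expose the two API directions as methods; the method names, payload fields and orchestrator endpoint below are assumptions.

    # Minimal structural sketch of the per-device middleware agent described above.
    # Method names, payload fields and the orchestrator endpoint are illustrative assumptions.
    import json
    import os
    import platform

    class VFogAgent:
        def __init__(self, node_id, orchestrator_url):
            self.node_id = node_id
            self.orchestrator_url = orchestrator_url  # hypothetical northbound endpoint

        # Southbound: query the local OS for the resources the device can contribute.
        def probe_resources(self):
            return {
                "node_id": self.node_id,
                "os": platform.system(),
                "cpu_count": os.cpu_count(),
                # Free memory, storage and battery state would come from OS-specific calls.
            }

        # Northbound: report status to the SMN / vFog orchestrator and accept assigned work.
        def report_status(self):
            payload = json.dumps(self.probe_resources())
            # A real agent would POST `payload` to self.orchestrator_url; here it is just returned.
            return payload

        def handle_task(self, task):
            # A real agent would hand the task to a container runtime managed by the local hypervisor.
            print(f"[{self.node_id}] received task {task.get('task_id')}")

    agent = VFogAgent("rmn-42", "https://smn.example/api")
    print(agent.report_status())
    agent.handle_task({"task_id": "filter-stream-7"})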
EXEGESIS proposes the idea of a vFog for managing the underlying mist networks and harnessing their available resources. As the name implies, a vFog assumes the operations of a conventional fog network (for example, coordination of the fog nodes, provisioning of the available resources to third parties, and management operations) but is not deployed over dedicated equipment pre-installed at specific places; a vFog lives on top of mist networks as an overlaid virtual entity (see Figure 2). In these configurations, the underlying SMNs will be the vFog nodes, utilizing an election protocol to select, based on a set of predefined criteria (e.g. processing capabilities, storage space, network capacity, power level, etc.), the SMN that will undertake the role of the vFog orchestrator: the mind and heart of the vFog overlay. In a nutshell, the vFog orchestrator will carry out the following key tasks (a minimal interface sketch follows the list):
• Perform the vertical managerial operations needed to form and maintain the vFog overlay network.
• Query the underlying mist nodes for available resources and create an abstract pool of them.
• Provide information about the available resources to any authorized third party (including the Agora).
• Handle horizontal communication operations (e.g. with other vFogs and/or conventional fogs).
• Exchange data with any clouds with which it belongs to the same suburb.
• Accept and forward requests for computational tasks, storage space and deployment of services to the vFog nodes based on the needed and the available resources.
• Deploy the Agora across the vFog network.
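The responsibilities above could be captured as a single interface; the Python sketch below merely mirrors the list, with assumed method names and signatures.

    # Minimal interface sketch mirroring the orchestrator tasks listed above; names are assumptions.
    from abc import ABC, abstractmethod

    class VFogOrchestrator(ABC):
        @abstractmethod
        def maintain_overlay(self): ...            # form and maintain the vFog overlay network
        @abstractmethod
        def collect_resources(self): ...           # query mist nodes and build the abstract resource pool
        @abstractmethod
        def expose_resources(self, requester): ... # serve authorized third parties, including the Agora
        @abstractmethod
        def peer_exchange(self, peer, data): ...   # talk to other (v)Fogs and clouds in the same suburb
        @abstractmethod
        def dispatch(self, request): ...           # accept and forward task/service requests to vFog nodes
        @abstractmethod
        def deploy_agora(self): ...                # deploy the Agora across the vFog network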
One of the key issues that EXEGESIS attempts to tackle is to stem the tide of data flowing into and out
of the cloud. This is done by injecting into the vFog network SMNs with increased processing capabilities. These nodes then expose their resources to the orchestration environment so that they can be used for pre-processing and filtering of data. That processing might lead to direct decision making or to a whittled-down version of the data being uploaded to the cloud for further elaboration. At the core of this process
are heterogeneous, programmable logic-based nodes, which are located in the vFog network and which will
be used both for processing and for vFog suburb management. Programmable logic was selected because it
offers the critical combination of high performance, low power and complete flexibility which is necessary
to successfully meet the challenges of this role.
A heterogeneous vFog node within the context of EXEGESIS will consist of a field programmable gate
array (FPGA) System-on-Chip (SoC), which is an integrated circuit that combines processors,
programmable fabric and, potentially, additional logic. This combination allows us to optimally balance the
task load by allowing the processors to handle control-dominated tasks, like managing a vFog network and
delegating all compute-intensive tasks to the programmable logic. To accomplish this, the programmable
fabric needs to be virtualized so that the orchestration environment can deploy the appropriate application
on it at any given time. This is accomplished by executing cloud software on the processors of the FPGA
SoC which, together with the specialized hardware, enables the deployment of hardware virtual machines
on the programmable logic.
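How work is actually split between the SoC's processors and the programmable fabric is an implementation detail; the sketch below only illustrates that division with a made-up task classification and deployment targets.

    # Illustrative dispatch logic for a heterogeneous FPGA-SoC vFog node.
    # The task classification and the deployment targets are hypothetical placeholders.
    def deploy_on_soc(task):
        if task.get("kind") == "control":
            # Control-dominated work (e.g. managing the vFog overlay) stays on the processors.
            return f"software container on processors: {task['name']}"
        if task.get("kind") == "compute":
            # Compute-intensive work is loaded as a 'hardware virtual machine' on the programmable logic.
            return f"hardware VM on programmable logic: {task['name']}"
        return f"rejected (unknown task kind): {task['name']}"

    for t in ({"name": "vfog-management", "kind": "control"},
              {"name": "video-motion-detection", "kind": "compute"}):
        print(deploy_on_soc(t))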
Abstraction of Resources in EXEGESIS
Starting at the mist layer (see Figure 3a), each RMN, during its registration process or upon a status
update, informs the SMN about the amount and type of physical resources it is willing to provide to the
EXEGESIS platform. The SMN in turn abstracts this information towards constructing a virtual resources
pool aggregating the physical resources of all the mist network nodes. Following the same paradigm, each SMN, after registering as a vFog node, conveys information about its virtual resource pool to the vFog orchestrator. At the same time, the vFog orchestrator can request and bind, if needed, more resources from
a conventional cloud. In this way, the vFog orchestrator forges a new virtual pool that holds in abstracted
form the physical resources across the whole vFog network.
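A minimal sketch of this aggregation chain (RMN reports, SMN pools, vFog pool, optionally extended with cloud resources) is given below; the report fields and the simple additive pooling are assumptions made for illustration.

    # Illustrative aggregation of resource reports into virtual pools, mirroring Figure 3a.
    # Report fields and the simple additive pooling are assumptions made for illustration.
    def merge(pools):
        total = {"cpu_cores": 0, "memory_mb": 0, "storage_gb": 0}
        for p in pools:
            for key in total:
                total[key] += p.get(key, 0)
        return total

    # Each RMN reports the resources it is willing to contribute to its SMN.
    mist_a = merge([{"cpu_cores": 2, "memory_mb": 512, "storage_gb": 8},
                    {"cpu_cores": 4, "memory_mb": 2048, "storage_gb": 32}])
    mist_b = merge([{"cpu_cores": 1, "memory_mb": 256, "storage_gb": 4}])

    # Each SMN, as a vFog node, forwards its pool to the vFog orchestrator...
    vfog_pool = merge([mist_a, mist_b])
    # ...which may additionally bind resources from a conventional cloud.
    suburb_pool = merge([vfog_pool, {"cpu_cores": 64, "memory_mb": 262144, "storage_gb": 10000}])

    print(mist_a, mist_b, vfog_pool, suburb_pool, sep="\n")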
Figure 3. a) Abstraction of resources in EXEGESIS b) Deployment framework for tasks and
services
Deployment of Services and Tasks in EXEGESIS
EXEGESIS will deploy services and perform computational tasks following a hybrid operational
scheme (see Figure 3b). In such a scheme, the vFog orchestrators can receive the requests for computational
tasks and service deployment. Upon receiving such a request, the orchestrator, based on the vFog's available resources and policies and taking into account the incoming task/service requirements, can assign each task or service to one or more vFog nodes (including itself if appropriate). In doing so, the orchestrator will utilize and extend existing work on optimizing task allocation [9], [10]. In turn, each vFog node passes the request to its SMN module and, based on the mist's resources and the assigned operation's requirements, forwards the tasks to itself and also, if needed, to the appropriate RMNs. It is noted here that if a task exceeds the
capacity of a vFog the orchestrator can forward the task to another vFog or assign it to cloud computing
resources. EXEGESIS’ deployment framework has segmentation of tasks and services at its core. In this
way, barring any security policies or specific task requirements, EXEGESIS can optimally fragment and
distribute tasks to resources as required to ensure that performance targets are met.
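A toy sketch of this hybrid assignment is shown below: a request is split into fragments that are placed first-fit on vFog nodes, with the remainder overflowing to the cloud. The greedy strategy merely stands in for the optimized allocation work referenced in [9], [10]; node names and the single CPU requirement are hypothetical.

    # Illustrative first-fit assignment of task fragments to vFog nodes, with cloud overflow.
    # The greedy strategy stands in for the optimized allocation referenced in [9], [10].
    def assign(fragments, node_capacity):
        """fragments: list of {'id', 'cpu'}; node_capacity: node name -> spare CPU units."""
        plan = {name: [] for name in node_capacity}
        plan["cloud"] = []  # fallback when no vFog node has enough capacity left
        spare = dict(node_capacity)
        for frag in fragments:
            target = next((n for n, cap in spare.items() if cap >= frag["cpu"]), "cloud")
            plan[target].append(frag["id"])
            if target != "cloud":
                spare[target] -= frag["cpu"]
        return plan

    fragments = [{"id": "f1", "cpu": 2}, {"id": "f2", "cpu": 3}, {"id": "f3", "cpu": 4}]
    print(assign(fragments, {"vfog-node-A": 4, "vfog-node-B": 2}))
    # -> {'vfog-node-A': ['f1'], 'vfog-node-B': [], 'cloud': ['f2', 'f3']}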
USE CASES
Security cameras, mobile phones, machine sensors, environmental sensors, and so on are just a few of
the items in daily use that create data that can be mined and analyzed. Add to it the data created at smart
cities, manufacturing plants, financial institutions, oil and gas drilling platforms, pipelines and processing
plants, and it’s not hard to understand that the deluge of streaming and IoT sensor data can and will
very quickly overwhelm today’s traditional data analytics tools. Organizations are beginning to look to edge
computing as the answer. Edge computing exploits vFog and mist and promotes data thinning at the edge
that can dramatically reduce the amount of data that needs to be transmitted to a data center or cloud
infrastructure. Without having to move unnecessary data to a central location, analytics or distributed
processes at the edge can simplify and drastically speed up analysis while also cutting costs. This drastic
shift in data processing paradigm propounded in EXEGESIS can be utilized in many, diverse use cases. The
proposed concept thus includes and investigates two concrete use cases where the proposed architecture can
prove to be a game changer compared to the currently available infrastructure. These use cases, among others, are illustrated in Figure 4, which demonstrates one possible example of an EXEGESIS architectural
configuration where the four scenarios presented in the following sections are served by three vFog suburbs,
each with its own mist node neighborhood. All three suburbs share a common cloud infrastructure, while
each use case runs different tasks that are executed on their respective suburbs.
Figure 4. EXEGESIS use cases playground
Enabling and Enhancing Services for Smart Cities
Cameras are ubiquitous in modern cities and they can be used for various purposes, among which is
traffic management and surveillance. Both of these applications can benefit from acceleration in the form
of advanced image processing but require that different algorithms be executed (e.g. traffic management
requires that the number of cars per lane or the number of cars violating traffic laws be counted, whereas surveillance demands that specific individuals be identified).
The smart city is going to be one of the major revolutions of the coming decades, with large urban areas,
under ever-increasing pressure to accommodate a busy, fast-paced life for their citizens, turning to the
Internet of Things to optimize the use of their infrastructure and thus save on cost and enable new services.
This entails everything from smart lighting and smart water supply to smart security and other services.
There are two issues where today’s architecture is lacking: The reuse of existing infrastructure and the
complexity in implementing data analysis solutions over that infrastructure. The former means that a set of
input devices, say cameras in this scenario, is installed in order to be used only for one function, for instance
traffic monitoring. That function can’t be changed unless the infrastructure itself is physically altered,
replaced or duplicated. The latter refers to the fact that retrieving the data from the input devices, analyzing,
reaching a decision and applying that decision is prohibitively slow and complex since all city infrastructure
today is purpose built.
The architecture proposed in this paper can solve both issues by creating two separate fog segments both
of which share the same FPGA-accelerated node, through which the data pass and which performs the
appropriate analysis. The orchestrator platform makes sure the accelerated node executes the required
functionality at any given time. The switch between the two tasks can be performed very swiftly which will
allow the node to perform both tasks seemingly at the same time much like a typical central processing unit
(CPU) appears to parallelize thread execution. The results of this analysis can then be either sent on for
further processing (e.g. after identifying suspicious activity) to the cloud or trigger automatic reactions in
other systems (e.g. manipulating traffic signals when detecting an accident and notifying emergency
services automatically).
Even within the narrower confines of smart traffic management, fog computing improves the
performance of the application in terms of response time and bandwidth consumption. A smart traffic
management system can be realized by a set of stream queries executing on data generated by sensors
deployed throughout the city. Typical examples of such queries are real time calculations of congestion (for
route planning), or detection of traffic incidents. One possible case study, further elaborated on later in this
paper, could compare the performance of a DETECT_TRAFFIC_INCIDENT query on fog infrastructure
[4] vs. the typical cloud implementation. In the query, the sensors deployed on roads send the speed of each
crossing vehicle to the query processing engine. The operator Average Speed Calculation calculates the
average speed of the vehicles from the sensor readings over a given time frame and sends this information
to the next operator. The operator Congestion Calculation calculates the level of congestion in each lane
based on the average speed of vehicles in that lane. The operator Incident Detection, based on the average
level of congestion, detects whether an incident has occurred or not. This process will be implemented and
executed on both fog- and cloud-based stream query processing engines, which will highlight the faster
response times and bandwidth savings offered by the fog-based alternative.
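To make the operator chain concrete, the sketch below strings the three operators together in Python; the window handling, the free-flow speed and the congestion threshold are hypothetical values, not those used in [4].

    # Illustrative pipeline for the DETECT_TRAFFIC_INCIDENT query described above.
    # The free-flow speed and the congestion threshold are hypothetical values.
    from statistics import mean

    def average_speed(readings):
        """Average Speed Calculation: mean vehicle speed (km/h) over one time window."""
        return mean(readings)

    def congestion_level(avg_speed, free_flow_speed=60.0):
        """Congestion Calculation: 0.0 = free flowing, 1.0 = fully congested."""
        return max(0.0, min(1.0, 1.0 - avg_speed / free_flow_speed))

    def incident_detected(congestion, threshold=0.8):
        """Incident Detection: flag an incident when congestion exceeds the threshold."""
        return congestion >= threshold

    window = [12.0, 8.0, 5.0, 10.0]  # per-vehicle speeds reported by the road sensors
    c = congestion_level(average_speed(window))
    print(f"congestion={c:.2f}, incident={incident_detected(c)}")  # congestion=0.85, incident=True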
Smart Industrial Automation
The new trend in automation is that of virtualizing as much as possible the operational technologies
(OT) side of the system over contemporary IT infrastructure. The idea is simple: just as virtual machines have virtualized hardware in IT, the automation industry is trying to virtualize OT hardware such as programmable logic controllers and run them over, more or less, traditional IT infrastructure.
The automation industry has been challenged for several years by the difference in innovation cycles
and obsolescence rates existing between OT and IT. The result of this divergence in change rates has
left the automation floor replete with obsolete IT technologies that have often introduced security breaches
and that in general reduce the productivity and usability of the entire system.
Fog and mist computing have been identified as the most natural approach to leverage the benefits of function virtualization while maintaining the performance constraints typical of OT systems. This, however, is only one side of the coin, as companies also want to leverage the advantages of the cloud, namely large storage and massive data analytics, to identify issues and bottlenecks in production and iron them out.
The EXEGESIS platform provides the ideal deployment target for software defined automation as it can
enable (1) mist computing to address the deployment and management of virtualized OT functions and
services over industrial hardware, and (2) fog computing to address the consolidation of higher level control
and analytics on more computationally capable hardware deployed on the edge of the system.
PRELIMINARY EVALUATION
This section provides an initial investigation into how the EXEGESIS edge compute paradigm
influences the amount of data flowing throughout a network. This is accomplished by simulating a simple
scenario similar to the traffic camera use case described in the previous section. In order to perform the
evaluation we use an open source fog environment simulator called iFogSim [11]. We tested three separate
scenarios, all of them comprising a camera which collects information, a programmable-logic accelerated
gateway device which connects the camera to the cloud, an actuator that receives commands after analysis
of the camera data and performs the appropriate actions, and finally the cloud itself, as shown in Figure 5:
• In the first scenario the camera input stream is forwarded through the gateway to the cloud, which performs the analysis and decision making and returns the decision to the actuator. This scenario is most akin to the current paradigm.
• The second scenario performs motion detection in the fog using the gateway device but sends the clip to the cloud for detailed analysis and decision making, representing a middle ground between a pure cloud and a pure edge approach.
• The third scenario implements all the processing, including motion detection, analysis and decision making, at the edge on the gateway device and only sends a notification of actions taken to the cloud (a rough data-volume model of the three flows is sketched after this list).
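The following back-of-the-envelope sketch only illustrates why the third scenario minimizes upstream traffic; all bitrates, clip sizes and event rates are hypothetical, and the paper's actual figures come from the iFogSim simulation.

    # Back-of-the-envelope model of the traffic each scenario pushes towards the cloud.
    # All sizes and rates are hypothetical; the paper's results come from the iFogSim simulation.
    STREAM_KBPS = 2000          # raw camera stream bitrate (kbit/s)
    CLIP_KB = 5000              # size of one motion-triggered clip (KB)
    NOTIFY_KB = 1               # size of one action report (KB)
    MOTION_EVENTS_PER_HOUR = 6  # assumed motion detections per hour

    def upstream_kb_per_hour(scenario):
        if scenario == 1:   # full camera stream shipped to the cloud
            return STREAM_KBPS / 8 * 3600
        if scenario == 2:   # motion detection in the fog, clips shipped to the cloud
            return MOTION_EVENTS_PER_HOUR * CLIP_KB
        if scenario == 3:   # all analysis at the edge, only notifications shipped
            return MOTION_EVENTS_PER_HOUR * NOTIFY_KB
        raise ValueError(scenario)

    for s in (1, 2, 3):
        print(f"scenario {s}: {upstream_kb_per_hour(s):,.0f} KB/h towards the cloud")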
Figure 5. Overview of the simulated scenario
We evaluate two important parameters for all three scenarios. The first is normalized network usage (Figure
6a) and the second is the energy consumption for the entire system (Figure 6b).
Figure 6. Simulation results showing: a) normalized network usage and b) system energy consumption
It is plainly evident that the edge compute variant (scenario 3) is clearly superior in both metrics. The reduction in energy usage is attributable to the advantages of using programmable logic to perform the computation at the edge, but also to the constrained network traffic, which likewise factors into energy use.
Network traffic is whittled down by performing all the processing close to the source and only sending a
small action report to the cloud instead of an entire camera stream. These results underpin the claim that the
EXEGESIS architecture can yield important potential benefits in multiple areas if realized at scale.
CONCLUSIONS
Future 5G networks are being viewed as the key technology that will allow for the realization of a "hyper-
connected society" where billions of IoT devices will be able to exchange data and offer/receive services at
a high quality of service (QoS) level. Towards this, 5G aims to support high data speed at the networks'
edges (1-10 Gb/s) and achieve ultra-low end-to-end latency (~1 ms); however, these alone may not be
enough, especially with highly heterogeneous and fragmented network environments, a vast number and
huge variety of the devices residing at the network edges and the colossal amount of generated data which
are slowly coming to the foreground. To overcome this, EXEGESIS exploits and advances the fog and mist
paradigms to propose a beyond 5G ecosystem where heterogeneous fixed and mobile edge nodes (e.g. home
gateways, small cells, smartphones, SME servers, IoT devices, vehicles) will form an archipelago of
interconnected islands of resources (e.g. storage, computing, network) where each island can be viewed as
the successor of a small-cell and the archipelago as the evolution of the macro-cell. A preliminary
simulation-based investigation hinted at the significant benefits that can be derived from moving to the
edge-centric EXEGESIS architecture. Future work will involve the implementation of a real-life prototype
and the validation of the EXEGESIS paradigm in real-life scenarios.
REFERENCES
[1] "Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are", http://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview.pdf. Retrieved July 2016.
[2] Chiang, Mung. "Fog networking: An overview on research opportunities." arXiv preprint arXiv:1601.00835
(2016).
[3] Klas, G.I., 2016. Edge Cloud to Cloud Integration for IoT.
[4] Y. Nikoloudakis, S. Panagiotakis, E. Markakis, E. Pallis, G. Mastorakis, C. X. Mavromoustakis and C. Dobre. "A
Fog-based Emergency System for Smart Enhanced Living Environments." IEEE Cloud Computing magazine to
be published Nov./Dec. 2016.
[5] Vaquero, L. M., and Rodero-Merino, L. (2014). Finding your way in the fog: Towards a comprehensive definition
of fog computing. ACM SIGCOMM Computer Communication Review, 44(5), 27-32.
[6] https://ec.europa.eu/digital-single-market/en/digital-single-market Retrieved July 2016
[7] http://www.openfogconsortium.org/news Retrieved July 2016
[8] A. Poenaru, R. Istrate and F. Pop, "AFT: Adaptive and fault tolerant peer-to-peer overlay: A user-centric solution for data sharing", Future Generation Computer Systems, in press, May 2016.
[9] A. Sfrent, F. Pop “Asymptotic scheduling for many task computing in Big Data platforms”, Information Sciences
Journal, Vol. 319, pp. 71-91, Oct. 2015
[10] J.F. Riera et al., "TeNOR: Steps towards an orchestration platform for multi-PoP NFV deployment," 2016 IEEE
NetSoft Conference and Workshops (NetSoft), Seoul, 2016, pp. 243-250.
doi: 10.1109/NETSOFT.2016.7502419
[11] H. Gupta, A.V. Dastjerdi, S. K. Ghosh, R. Buyya, “iFogSim: A Toolkit for Modelling and Simulation of Resource
Management Techniques in Internet of Things, Edge and Fog Computing Environments”, CoRR, vol.
abs/1606.02007, June 2016
