A Neural-Network-Based Realization of In-Network
Computation for the Internet of Things
Nicholas Kaminski, Irene Macaluso, Emanuele Di Pascale, Avishek Nag, John Brady, Mark Kelly, Keith Nolan,
Wael Guibene, Linda Doyle
Abstract—Ultra-dense Internet of Things (IoT) networks and
machine type communications herald an enormous opportunity
for new computing paradigms and are serving as a catalyst for
profound change in the evolution of the Internet. We explore
leveraging the communication within IoT to serve data processing
by appropriately shaping the aggregate behaviour of a network
to parallel more traditional computation methods. This paper
presents an element of this vision, whereby we map the operations
of an artificial neural network onto the communication of an
IoT network for simultaneous data processing and transfer.
That is, we provide a framework to treat a network holistically
as an artificial neural network, rather than placing neural
networks within the network. The operations of the components
of a neural network, i.e., neurons and the connections between
them, are performed by the various elements of the IoT network, i.e., the
devices and their connections. The proposed approach reduces
the latency in delivering processed information and supports the
locality of information inherent to IoT by removing the need for
transfer to remote data processing sites.
Index Terms—Internet of Things, Artificial Neural Networks,
Wireless Sensor Networks
I. INTRODUCTION
IoT technology is rapidly progressing due to manufacturing
advancements with respect to size, weight, power, and cost
of next-generation low-power radio frequency transceivers
and micro-controllers. These advancements, coupled with the
integration of such components into single packages, have resulted in
highly integrated System-on-a-Chip realisations. As a result,
IoT networks have significantly grown across a wide variety
of domains and the number of IoT devices is forecasted to
grow to about 20 billion by 2020 [1].
However, current IoT nodes are mostly passive producers
of raw data to be consumed elsewhere. Information typically
flows through the network to reach a Fog collector or the
Cloud, with no processing other than the pre-processing at
the sources [2]. Depending on the application, the remotely
processed and aggregated data is then forwarded back to
actuator devices often located in geographical proximity of
the originating sensor nodes. The collective capability of the
IoT network is only exploited by aggregating and analysing
information created by different devices. However, ultra-dense
IoT networks and machine type communications herald an
enormous opportunity for new computing paradigms and are
serving as a catalyst for profound change in the evolution of
the Internet. One such example is leveraging the network to
N. Kaminski, I. Macaluso, E. Di Pascale, A. Nag and L. Doyle are with
CONNECT Center, Trinity College Dublin, Ireland.
J. Brady, M. Kelly, K. Nolan and W. Guibene are with Intel Labs, Ireland.
perform traditional computing functions like processing and
storage to efficiently transform data into information.
In this paper, we utilise the collective behaviour of an
IoT network to perform computation simultaneously with
communication, and hence unlock the IoT network’s potential
to act as an integrated computation and communication
system, and not simply as the producer of information and/or
the end-consumer of processed information. In particular,
we propose a framework that maps computational processes
to communicative processes, allowing IoT nodes to collabora-
tively process the data while it flows through the network.
By treating the whole network as a computer, rather than
placing computational elements within the network, no further
processing is necessary once information reaches its intended
destination. As a first realisation of this concept, this paper
maps the computational approaches of an Artificial Neural
Network (ANN) to the communication within an IoT network.
By shaping the aggregate behaviour of the communicative
processes in parallel with the computational approach of a
typical ANN, we propose truly decentralised IoT networks
enabled with computational intelligence.
The merits of the proposed approach are twofold: (a)
it reduces the latency in delivering processed information;
(b) it supports the locality of information inherent to IoT.
Latency requirements are especially critical when the data
collected by IoT devices is used for automatic or semi-
automatic control applications. In such cases, the consumers
of processed information are likely located in proximity to the
sensing devices. By pushing processing into the IoT devices
the proposed approach avoids forwarding the information to
external systems and then back to the IoT network, thus sig-
nificantly reducing the overall latency. The same mechanism
also preserves locality of information which is important for
privacy and security.
We adopt feedforward neural networks as the reference
framework for the generic parallel processor implemented
by the IoT network. In fact, the computation performed
by a feedforward neural network, which can approximate
any measurable function arbitrarily well, can be partitioned
into different tasks for concurrent execution. The proposed
framework exploits these features to process the data while
optimising the use of resources in the IoT network. In our
model, all the computation performed within a neuron is treated
as an atomic computing task, and these tasks are distributed
across multiple IoT nodes subject to the resources available
at each node, the connectivity of the IoT network, and the
producers and consumers of information in the network.
II. DATA PROCESSING ARCHITECTURES FOR IOT
The goal of the IoT concept is to provide a mechanism
that connects physical things to other physical things in an
informational sense. This notion of interconnected objects
emanates from the use of Radio Frequency Identification
(RFID) technology and wireless sensor networks (WSNs).
Evolutions of these initial areas provide the springboard for
IoT by offering physical objects that are uniquely identifiable
with communication capabilities. Connecting such objects into
device-driven networks with sensor devices directly interacting
with actuators forms the core of the IoT concept.
As devices at the edge of networks directly interact with
each other and the environment, edge computing becomes in-
creasingly important to support IoT operation [3]. Buyya et al.
[4] note a continuing trend towards cloud-based approaches
to computing, but serving the needs of ubiquitous devices with
such methods is inherently limited by the bottleneck of com-
munication networks [5]. Avoiding such bottlenecks motivates
moving computational resources towards the edge of networks
into smaller and more local compute centers, as suggested
by fog-computing approaches [2]. In fact, proponents of fog
computing laud the architecture’s ability to deliver the lower
latency compute support demanded by typical IoT applications
[6], [7]. To this end, a new concept of mist computing [8]
evolved that pushes computation even beyond the boundaries of
fog computing, directly into the IoT devices. Our approach of
directly employing the communication between edge devices
to serve computation is the utmost realisation of this trend.
We note that IoT provides a powerful force for moving
computation into the farthest edges of networks.
In the context of fog computing, several works provide
a more detailed examination of how to achieve perimeter
computation. For example, Hong et al. [9] provide a
programming model for dynamically scaling fog resources to
suit IoT workloads. Other works consider the decomposition of
computing tasks for deployment across fog resources, with
possible extension into the mist [10], [11]. Each of these typi-
cal examples separates computational tasks into pieces suitable
for deployment upon various platforms within a network. In
these works the authors consider the communication network
as simply a tool for exchanging data between computing
platforms, without considering network topology or operation
to support computation, i.e., computation and communication
are considered as separate processes in current literature. Our
approach extends beyond prior work by jointly considering
communication and computation.
Edge computing also takes advantage of the locality of
information inherent to IoT, which demands consideration
of several additional aspects. The localisation of information
within a system with several participants has long been a topic
of interest in the area of multi-agent systems as a means to
improve overall system performance [12]. Within this area,
research focuses on appropriately structuring interactions be-
tween system elements based on the notion that each element
fundamentally accesses different information. For example,
[13] employs the notion of locality of information to capture
the local performance of elements within a network. Finally,
the work of Ziegeldorf et al. [14] reminds us that the locality
of information itself has implications about the privacy of
services associated with the information at hand. Indeed,
our work directly considers the local nature of information
available to devices in an IoT network by matching the
computational structure to the location of interest.
III. FRAMEWORK
The proposed framework harnesses the computation and
communication capabilities of an IoT network to process the
collected information by allotting the different components of
a neural network (neurons and connections between neurons)
to the various elements of the IoT network (the devices and
their connections). In other words, the basic idea is to map
neurons into IoT nodes and connections between neurons into
wireless links. It should be noted that an IoT device can
implement one or more neurons and the connection between
two neurons can be realized through multiple wireless links.
Our mapping framework is driven by the minimisation of the
cost to deliver information from the input nodes to the output
nodes through the hidden neurons, subject to the capabilities
of each device and the connectivity of the IoT network. The “cost”
is a generic term referring to any performance metric; we will
explore two possible choices in subsection III-C.
The first step in the framework is the identification of the
input and output nodes in the IoT network, which depend
on the specific application. It should be noted that the same
IoT network can implement multiple applications, for each of
which a separate instance of this framework can be run. The
second step requires the selection and training of the neural
network implementing the function of interest. This step is
performed offline; it is assumed throughout the paper that
an appropriate neural network is available as an input to the
optimisation framework. The third step involves generating a
representation of the physical IoT network, namely the IoT
topology and the resources available at each node (e.g., as
a function of the residual memory and/or power available).
The connectivity of the IoT network can be abstracted on the
basis of the proximity of the devices and the current power
levels of the devices in conjunction with standard path loss
and interference models. Relevant network-state information
also includes the cost associated with the use of a link (e.g.
transmit power, link delay, etc.). The final step is the optimal
mapping which is described in detail below.
A. Variables and Parameters
$G(N, E)$: IoT network topology, with $N$ the set of nodes and $E$ the set of edges.
$x_{i,j}$: binary variable that takes value 1 if node $i$ is selected to implement hidden neuron $j$.
$d_{k,j}$: cost of the optimal path between nodes $k$ and $j$.
$S$: set of source nodes for the trained neural network.
$H$: set of hidden neurons of the trained neural network.
$O$: set of output nodes for the trained neural network.
$T(i)$: upper bound on the number of hidden neurons that can be mapped onto node $i$.
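For concreteness, the path costs $d_{k,j}$ can be precomputed with any all-pairs shortest-path routine before the mapping is solved. The Python sketch below is an illustration of ours rather than the paper's implementation; it runs Floyd-Warshall over a dictionary-based graph, and the four-node topology with unit link costs is an assumption for the example.

```python
import math

def all_pairs_costs(nodes, edges):
    """Floyd-Warshall: cost d[k][j] of the cheapest path between every node pair.
    `edges` maps (u, v) to a link cost, e.g. transmit power or expected transmit time."""
    d = {u: {v: (0 if u == v else math.inf) for v in nodes} for u in nodes}
    for (u, v), w in edges.items():
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)  # wireless links treated as bidirectional here
    for m in nodes:
        for u in nodes:
            for v in nodes:
                if d[u][m] + d[m][v] < d[u][v]:
                    d[u][v] = d[u][m] + d[m][v]
    return d

# Illustrative 4-node topology with unit link costs (an assumption for the example).
nodes = [0, 1, 2, 3]
edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 2): 1}
d = all_pairs_costs(nodes, edges)
print(d[0][3])  # cost of the optimal path from node 0 to node 3 -> 2
```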
B. Constraints
The choice of IoT devices and wireless paths that implement
the neural network is constrained by the resource usage on
each IoT device, which shapes the search space of the optimal
mapping problem.

Fig. 1: The optimal mapping determines the IoT nodes and links that implement a given neural network. The search space is shaped by the constraints on the IoT nodes' connectivity and resources, e.g., some IoT nodes may implement multiple neurons.

The constraints imposed are such that each IoT node $i$ can
operate as at most $T(i)$ hidden neurons, where $T(i)$ is a
function of the resources available at the node and of the
requirements of other active processes on the node:

$$\sum_{j \in H} x_{i,j} \le T(i) \quad \forall i \in N \qquad (1)$$
Moreover, to ensure the integrity of the mapping, an additional
constraint ensures that exactly one IoT node is selected for each
hidden neuron:

$$\sum_{i \in N} x_{i,j} = 1 \quad \forall j \in H \qquad (2)$$
C. Objective Functions
We consider two objective functions for the optimal mapping
of neurons into IoT nodes: i) minimising the overall cost of
communication; ii) minimising the maximum cost of communication.
In the first case, if the weight of each edge in $G$ corresponds
to the transmit power between nodes, the overall cost of
communication corresponds to the total transmit power required
to deliver the processed information to the output nodes. In the
second case, if the weight of each edge in $G$ corresponds to the
expected transmit time between nodes, the objective function
corresponds to the maximum transmit time to deliver the
processed information to the output nodes.
The transmit power objective function is given by:

$$\min \sum_{j \in H} \sum_{i \in N} \left( \sum_{s \in S} d_{s,i} + \sum_{o \in O} d_{i,o} \right) x_{i,j} \qquad (3)$$
The transmit time objective function is given by:

$$\min \sum_{j \in H} \sum_{i \in N} \left( \max_{s \in S} d_{s,i} + \max_{o \in O} d_{i,o} \right) x_{i,j} \qquad (4)$$
A schematic diagram of the optimal mapping mechanism
is depicted in Fig. 1. The optimal mapping, which can be
formulated as the integer linear program described above,
identifies the IoT nodes that act as hidden neurons and the
optimal paths on the physical IoT topology from the inputs
to the outputs via the hidden neurons. The choice of the
objective function for the optimal mapping model depends on
the application of the IoT network and the specific scenario.
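To make the formulation concrete, the sketch below writes constraints (1)-(2) and the transmit power objective (3) with the open-source PuLP modeller and its bundled CBC solver. This is an illustrative stand-in for the CPLEX Concert implementation used in Section IV, and the five-node instance, capacities $T(i)$, and path costs are toy assumptions.

```python
import pulp

# Toy instance (illustrative assumptions): nodes, hidden neurons, sources,
# outputs, per-node capacities T(i), and precomputed optimal path costs d.
N = [0, 1, 2, 3, 4]
H = ["h1", "h2"]
S, O = [0, 1], [4]
T = {0: 0, 1: 0, 2: 1, 3: 1, 4: 0}              # inputs/outputs host no neurons
d = {k: {j: abs(k - j) for j in N} for k in N}  # stand-in for shortest-path costs

prob = pulp.LpProblem("neuron_mapping", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (N, H), cat="Binary")  # x[i][j] as in Section III-A

# Objective (3): source-to-neuron plus neuron-to-output delivery cost.
prob += pulp.lpSum((sum(d[s][i] for s in S) + sum(d[i][o] for o in O)) * x[i][j]
                   for j in H for i in N)

# Constraint (1): node i hosts at most T(i) hidden neurons.
for i in N:
    prob += pulp.lpSum(x[i][j] for j in H) <= T[i]

# Constraint (2): each hidden neuron is mapped onto exactly one node.
for j in H:
    prob += pulp.lpSum(x[i][j] for i in N) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
mapping = {j: next(i for i in N if x[i][j].value() > 0.5) for j in H}
print(mapping)  # e.g. {'h1': 2, 'h2': 3}: the capacities force distinct nodes
```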
IV. CASE STUDY
We now investigate the behavior of the proposed framework
over various IoT network topologies with respect to the two
objective functions defined in the previous section. In all cases,
it is assumed that input and output nodes cannot operate as
hidden neurons (i.e., $T(i) = 0 \;\; \forall i \in S \cup O$), whereas every
other IoT node can operate as at most one hidden neuron.
We used the CPLEX Concert library to find the optimal solution
to the integer linear program.
We first consider a square $N \times N$ lattice topology. Apart
from its simplicity and generality, we picked a lattice topology
as it represents a good match for a number of real world
scenarios, such as a grid of parking sensors for which we
would like to estimate the short-term occupancy probability.
Edges of the topology graph are weighted according to the
objective function of choice, although in our experiments all
edges have the same weight of 1. An illustrative example of the
application of our framework on such a topology is shown on
the left side of Fig. 2, where three sensor nodes were randomly
selected as inputs for the neural network, and another node was
marked as the output, e.g., as an actuator node which needs
to perform a certain operation depending on the results of the
neural network computation.
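For reference, a lattice instance matching this setup can be generated as follows; the grid size and the randomly drawn input/output nodes are illustrative, and edges carry the unit weight used in our experiments.

```python
import random

def lattice(n):
    """n x n grid; unit-weight edges between horizontal/vertical neighbours."""
    nodes = [(r, c) for r in range(n) for c in range(n)]
    edges = {}
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                edges[((r, c), (r, c + 1))] = 1
            if r + 1 < n:
                edges[((r, c), (r + 1, c))] = 1
    return nodes, edges

nodes, edges = lattice(9)
# Three random inputs and one output, as in the example of Fig. 2 (left);
# these nodes get T(i) = 0 so they cannot host hidden neurons (Section IV).
inputs = random.sample(nodes, 3)
output = random.choice([v for v in nodes if v not in inputs])
```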
Fig. 2: Neural network mapping on a lattice topology (left) and a topology of two sparsely interconnected sub-nets (right). Markers denote input nodes, output nodes, and hidden neurons under the transmit power and transmit time objectives.

Fig. 3: Longest path between input and output nodes for the two proposed objective functions and the centralised approach, with an increasing number of nodes $N$ in each row of the lattice and increasing ratios $h/i$ of hidden neurons to input nodes. In all cases there are $i = 4$ input nodes and 1 output node.

In order to investigate the benefits of our framework in terms
of locality of information, we further constrained input and
output nodes to only be picked from one quadrant of the lattice
(e.g., the bottom right quarter of the grid). Next, we compared
the length of the longest messaging path between input and
output nodes using either of the objective functions proposed
in subsection III-C with a baseline centralised approach. In
particular, in both the distributed scenarios, messages from
the input nodes need to reach the hidden neuron nodes for
intermediate processing before they can be forwarded to the
output nodes; whereas in the centralised case messages need
to reach a gateway placed in the middle of the grid (either to
be sent to the cloud for processing or, equivalently, to benefit
from fog computing processing) before the desired outcome
can be sent to the output nodes. Hence, in this experiment we
investigate the effect of scaling up the size of the lattice and
increasing the ratio of hidden neurons to input nodes on the
length of the longest input-output path and on the maximum
delay experienced by our messages of interest in the network
(neglecting the impact of collisions and re-transmissions).
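Under one plausible reading of this metric (an assumption on our part; the paper does not spell out a formula), the two longest-path quantities compared in Fig. 3 can be computed from the all-pairs costs $d$ as follows.

```python
def longest_path_distributed(d, inputs, hidden, outputs):
    """Longest input -> hidden neuron -> output messaging path for a mapping."""
    return max(d[s][h] + d[h][o] for s in inputs for h in hidden for o in outputs)

def longest_path_centralised(d, inputs, gateway, outputs):
    """All inputs reach a central gateway; the result then returns to the outputs."""
    return max(d[s][gateway] for s in inputs) + max(d[gateway][o] for o in outputs)
```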
We ran each scenario 500 times and averaged the results,
which are presented in Fig. 3. It is apparent that our approach
allows us to fully exploit the locality of information whenever
that is applicable, i.e., when the input and output nodes are
clustered together in the network, reducing the path traveled
by messages compared to a centralised solution even when
the number of hidden neurons required for the computation
of interest is as high as twice the number of input nodes.
Naturally, the price to pay for this extended decentralised
processing is an increase in the number of messages sent, which,
in the trivial case considered here of one hidden neuron per
device, equals $|H| \times (|S| + |O|)$; for instance, with $i = 4$ inputs,
one output, and $h/i = 2$, this amounts to $8 \times 5 = 40$ messages.
Note that, as expected, the Transmit Time objective function
produces solutions with slightly shorter longest paths than the
Transmit Power one, but at the expense of a slightly higher
number of total messages.
We also designed an experiment where two densely connected
sub-networks are linked by a limited number of connecting
edges, e.g., two floors of an industrial complex with
a mesh-like network of sensors on each floor interconnected
at some gateway points. More specifically, we created two
separate topologies where edges between any pair of nodes are
added with probability p, which hence determines the density
of the graph. The two sub-nets are then joined by creating a
small fixed number of additional connecting edges between
their respective nodes in order to obtain a single connected
topology with few gateway links between the two original sub-
nets. Three input nodes are selected randomly from one of the
sub-nets, and one output node is picked from the other, to
evaluate the effect of the connecting edge bottlenecks on the
mapping algorithms. The number of hidden neurons is set to
5. An illustrative example for a composite graph with $p = 0.5$
is shown on the right side of Fig. 2, from which it would
appear that the two objective functions differ in the way they
cluster hidden neurons over the topology.
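A composite topology of this kind can be generated along the following lines: each intra-subnet edge is added with probability $p$, and the two sub-nets are then joined by a handful of gateway edges. Subnet sizes are illustrative, and unlike the evaluated setup this toy version does not enforce that each subnet is connected.

```python
import random

def random_subnet(n, p, offset=0):
    """Erdos-Renyi-style subnet: each node pair is connected with probability p."""
    nodes = [offset + k for k in range(n)]
    edges = {}
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            if random.random() < p:
                edges[(nodes[a], nodes[b])] = 1
    return nodes, edges

def two_subnets(n, p, n_links=5):
    """Join two subnets with a small fixed number of gateway (connecting) edges."""
    n1, e1 = random_subnet(n, p)
    n2, e2 = random_subnet(n, p, offset=n)
    edges = {**e1, **e2}
    for _ in range(n_links):  # repeated picks may collide, so n_links is an upper bound
        edges[(random.choice(n1), random.choice(n2))] = 1
    return n1 + n2, edges

nodes, edges = two_subnets(20, p=0.5)  # subnet size and p are illustrative
```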
To better understand this, we ran a larger simulation campaign
for the two sub-nets scenario, tracking the number of hidden
nodes placed in the “input” sub-net (i.e., the sub-net from which
we picked the input sensor nodes) and in the “output” sub-net
respectively, as well as the number of connecting edges used by
each of the two optimisation algorithms, over 1000 iterations
and for increasing values of $p$,
i.e., for increasingly dense networks. The results are presented
in Fig. 4; error margins with 95% confidence were not included
in the figure to keep it readable but were always within at
most 1% of the results shown. When using the Transmit
Power objective function, hidden nodes are mostly placed
in the input network and the trend grows as the density of
the graph increases. On the other hand, in the case of the
Transmit Time objective function the hidden neurons are more
evenly shared between the input and output networks. As a
consequence of this, on average the Transmit-Time-based
mapping tends to use a larger number of connections between
the two networks. Finally, the use of connecting edges appears
to be mostly constant with the value of p, with only a slight
increase for the case of fully connected sub-graphs.
V. DISCUSSION
In this paper we propose a neural-network-based implementation
of the In-Network Computation concept, which exploits the
communications between IoT devices to perform general
computation, in this specific case the online processing of the
data transiting the network. The theoretical foundation for this
approach is given by the Universal Approximation Theorem which,
roughly stated, says that a neural network with a single hidden
layer is sufficient to approximate a generic continuous function
arbitrarily well on a bounded domain.
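To illustrate the atomic task that each mapped device would execute, the sketch below evaluates a single-hidden-layer feedforward network neuron by neuron; in the proposed framework each call to neuron() would run on the IoT node hosting that neuron, with its inputs arriving over wireless links. The weights, sizes, and sensor readings are illustrative, not a trained network from the paper.

```python
import math
import random

def neuron(inputs, weights, bias):
    """Atomic task mapped onto one IoT node: weighted sum plus sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative "trained" parameters: 3 inputs -> 4 hidden neurons -> 1 output.
random.seed(0)
W_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b_h = [0.0] * 4
W_o = [random.uniform(-1, 1) for _ in range(4)]
b_o = 0.0

readings = [0.2, 0.7, 0.4]                                   # sensor inputs
hidden = [neuron(readings, w, b) for w, b in zip(W_h, b_h)]  # one task per hosting node
output = neuron(hidden, W_o, b_o)                            # delivered at the actuator
print(output)
```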
This idea allows us to incorporate intelligence into a net-
work of cheap, low-power IoT sensors and devices. This is
particularly useful in the context of WSNs, as it means that
critical processing of the inputs measured by the sensor nodes
of the network can be performed locally and on-the-fly as the
data traverses the network to reach the actuator nodes, rather
than relying on remote servers or cloud-based solutions which
would increase latency and decrease information privacy. Our
approach also has the advantage of naturally scaling up
with the number of IoT nodes, unlike a solution based on
external processing that requires additional backhauling and
computational resources with increasing number of sensors.
As there are many ways to map a neural network to a physi-
cal topology of IoT devices, we have developed a framework to
optimise the placement of the hidden neurons on the available
physical nodes. Specifically, we have shown two examples
minimising respectively (a) the total transmission power or
(b) the maximum transmission time. Additional models can
be added to the framework in the future, e.g., minimising data
loss in the presence of interference etc. As a proof of concept,
we have shown how this idea could be fleshed out over either
a lattice or a composite topology, evaluating the effects of
choosing different metrics for the optimisation algorithm.
Many works address the problem of in-network computation
in WSNs. Most of the approaches in the literature focus on
determining the maximum achievable computation rate of
specific functions [15]–[17]. Typically these works consider
symmetric functions and assume a single collector node for
the processed data. More recently, approaches considering the
distributed implementation of a generic function have been
considered [18], [19]. The function is described as a weighted
directed acyclic graph in [18] and as a directed tree in [19].
Both works, unlike ours, neglect the resource availability at each
node, which shapes the search space of the optimal mapping.
While we believe that the concept of embedding a neural
network in an IoT network is radically novel in its entirety,
there are some previous related works targeting specific as-
pects of this problem, or sharing the same aim but with
a different approach. For example, in [11] the authors de-
velop neural network architectures that can be used to report
information from a sensor network in a more “cognitive”
manner. While there are certain similarities to the work we
are presenting here, their proposal largely comes down to a
method for data aggregation and filtering. Indeed the legacy
of this work is more closely related to cognitive networks than
to the support of general computing.
The motivations of the work presented in [10] are very
much in line with ours, but their work essentially consists of a
middleware layer to distribute neural network building blocks
over multiple heterogeneous devices, not unlike what can be
achieved by TensorFlow when multiple processing resources
are available. Most importantly, the “IoT network” used for
their experiments is composed of a laptop, an embedded
device with a specialized low power GPU, and a powerful
server – in other words, powerful machines that have little in
common with the typical lightweight IoT devices we consider.
Finally, the authors did not develop optimization algorithms
to distribute these functionalities over the available devices,
although this is mentioned as possible future work.
The authors in [20] present a similar vision with a
more specialized focus on self-adapting WSNs. They propose
using a Hopfield neural network as a heuristic solver for
determining the minimum weakly-connected dominating
set of nodes in the network – a problem which is of relevance
in many wireless communication protocols. Since every node
is mapped to a neuron, they do not introduce an optimization
framework in their proposal. General-purpose computation
over a WSN through neural networks is mentioned as a
possibility but not further investigated.
Lastly, the patent in [21] describes a method to
map a neural network to a network of embedded devices. How-
ever, their solution also lacks the mapping optimization aspect
of our approach – nodes are instead uniformly mapped to
available embedded devices. Their work also rigidly requires a
full mesh network between the embedded devices, which can
be a very restrictive assumption in many sparse deployment
scenarios. Our solution does not require full mesh connectivity,
using intermediate node relays instead whenever necessary.

Fig. 4: Distribution of hidden neurons and percentage of connecting edges used, with 5 connecting edges between the sub-nets, for the two considered objective functions and for increasing values of the edge creation probability $p$.
Future work will focus on refining the abstractions used to
model the physical IoT devices supporting the neural network,
allowing us to implement more sophisticated optimisation strategies
taking into account the capabilities of each device in terms of
processing, communications, storage, power etc. Furthermore,
given the unreliable nature of the low-cost devices and wireless
communication mechanisms that we are considering, we aim
to investigate strategies to recover from network or node faults.
Such events could be tackled either in the neural network
domain, i.e., through the re-deployment of a new neural
network configuration on the surviving nodes and links, or in
the actual IoT network domain, i.e., by adopting re-routing
mechanisms or re-transmission request protocols; however,
particular attention should be paid to the potential overlap of
the recovery functions of these two domains to avoid disruptive
feedback loops.
ACKNOWLEDGMENT
This publication has emanated from research supported in
part by a research grant from Science Foundation Ireland (SFI)
and is co-funded under the European Regional Development
Fund under Grant Number 13/RC/2077 (CONNECT).
REFERENCES
[1] http://www.gartner.com/newsroom/id/3165317
[2] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its
role in the Internet of Things,” in Proceedings of the first edition of the
MCC workshop on Mobile cloud computing, 2012, pp. 13–16.
[3] M. Yannuzzi, R. Milito, R. Serral-Gracia, D. Montero, and M. Ne-
mirovsky, “Key Ingredients in an IoT Recipe: Fog Computing, Cloud
Computing, and More Fog Computing,” in 2014 IEEE 19th International
Workshop on Computer Aided Modeling and Design of Communication
Links and Networks (CAMAD), dec 2014.
[4] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud
Computing and Emerging IT Platforms: Vision, Hype, and Reality for
Delivering Computing as the 5th Utility,” Future Generation Computer
Systems, vol. 25, no. 6, pp. 599–616, jun 2009.
[5] L. M. Vaquero and L. Rodero-Merino, “Finding your Way in the Fog,”
ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, pp.
27–32, oct 2014.
[6] S. Yi, C. Li, and Q. Li, “A Survey of Fog Computing,” in Proceedings
of the 2015 Workshop on Mobile Big Data - Mobidata’15, 2015.
[7] S. Sarkar and S. Misra, “Theoretical Modelling of Fog Computing: a
Green Computing Paradigm to Support IoT Applications,” IET Net-
works, vol. 5, no. 2, pp. 23–29, mar 2016.
[8] J. S. Preden, K. Tammemae, A. Jantsch, M. Leier, A. Riid, and
E. Calis, “The Benefits of Self-Awareness and Attention in Fog and
Mist Computing,” Computer, vol. 48, no. 7, pp. 37–45, jul 2015.
[9] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, and B. Kolde-
hofe, “Mobile Fog: a Programming Model for Large-Scale Applications
on the Internet of Things,” in Proceedings of the second ACM SIG-
COMM workshop on Mobile cloud computing - MCC’13, 2013.
[10] E. De Coninck, T. Verbelen, B. Vankeirsbilck, S. Bohez, S. Leroux,
and P. Simoens, “DIANNE,” in Proceedings of the 2nd Workshop on
Middleware for Context-Aware Applications in the IoT - M4IoT 2015,
New York, New York, USA, 2015, pp. 19–24.
[11] L. Reznik and G. Von Pless, “Neural Networks for Cognitive Sensor Net-
works,” in International Joint Conference on Neural Networks (IJCNN),
2008, pp. 1235–1241.
[12] H. Van Dyke Parunak, ““Go to the Ant”: Engineering Principles from
Natural Multi-Agent Systems,” Annals of Operations Research, vol. 75,
no. 0, pp. 69–101, 1997.
[13] F. Chinchilla, M. Lindsey, and M. Papadopouli, “Analysis of Wireless
Information Locality and Association Patterns in a Campus,” in IEEE
INFOCOM, 2004.
[14] J. H. Ziegeldorf, O. G. Morchon, and K. Wehrle, “Privacy in the Inter-
net of Things: Threats and Challenges,” Security and Communication
Networks, vol. 7, no. 12, pp. 2728–2742, jun 2013.
[15] A. Giridhar and P. Kumar, “Toward a theory of in-network computation
in wireless sensor networks,” IEEE Communications Magazine, vol. 44,
no. 4, pp. 98–107, 2006.
[16] N. Khude, A. Kumar, and A. Karnik, “Time and energy complexity of
distributed computation in wireless sensor networks,” in Proceedings
IEEE 24th Annual Joint Conference of the IEEE Computer and Com-
munications Societies, vol. 4, 2005, pp. 2625–2637.
[17] L. Ying, R. Srikant, and G. E. Dullerud, “Distributed symmetric function
computation in noisy wireless sensor networks,” IEEE Transactions on
Information Theory, vol. 53, no. 12, pp. 4826–4833, 2007.
[18] P. Vyavahare, N. Limaye, and D. Manjunath, “Optimal embedding of
functions for in-network computation: Complexity analysis and algo-
rithms,” IEEE/ACM Transactions on Networking, vol. 24, no. 4, pp.
2019–2032, 2015.
[19] V. Shah, B. K. Dey, and D. Manjunath, “Network flows for function
computation,” IEEE Journal on Selected Areas in Communications,
vol. 31, no. 4, pp. 714–730, 2013.
[20] J. Li and G. Serpen, “Adaptive and Intelligent Wireless Sensor Networks
through Neural Networks: an Illustration for Infrastructure Adaptation
through Hopfield Network,” Applied Intelligence, mar 2016.
[21] N. Kambhatla, D. Kanevsky, and W. W. Zadrozny, “Method and
Apparatus for Executing Neural Network Applications on a Network of
Embedded Devices,” US Patent 6,418,423 B1, 2002.
... Optimal route selection in Guo, Cao, and Liu (2017) is based on the data correlation dictated by the usage of singular value decomposition (SVD) for in-network processing to reduce the transmission cost. Similarly, the authors in Kaminski et al. (2017) propose an optimal route selection based on an ANN approach. In this case, ANN is employed to reduce delivery latency. ...
... Opportunistic flat routing RL 2020 T. Zhao et al. Guo et al. (2017) Opportunistic flat routing SVD 2017 P. Guo et al. Kaminski et al. (2017) Opportunistic flat routing ANN 2017 N. Kaminski et al. Kim and Kim (2016) Opportunistic flat routing DRL 2016 H. Kim et al. ...
... On the other hand, the paper in Guo et al. (2017) uses an SVD algorithm to construct the optimal routing tree to reduce energy consumption through an in-network processing approach. Alternatively, optimal routing selection is performed using ANN to reduce delivery latency (Kaminski et al., 2017). Therefore, the application of ML mechanisms in flat routing solutions is mainly directed at reducing energy consumption and transmission delay. ...
... Our focus in this work is to discuss the security requirements of IoT in light of approaches and modelling techniques provided by ANN. In first attempt the security criteria for IoT has been identified from various sources of literature then ANNs contributions towards the underlying security requirements like authentication, network monitoring [7], attack detection privacy [8], secure routing [9], encryption [10], access control [11,12], privacy [13,14], theft resistance [14,15] and authorization [16] have been completely discussed. ANNs not only contribute towards the security requirements but they also leverage the security of IoT to deliver a robust IDS for detecting attacks, threats and anomalies. ...
... ANNs work as add-ons to provide privacy and element of trustworthy to the IoT networks. Data related to IoT devices can be locally processed in IoT network by using ANN's components known as neurons and theirs's interconnectivity [8]. These component allow to minimize the latency and preserves the privacy without sending data to the remote sites for the purpose of processing. ...
Article
Full-text available
Internet of Things (IoT) driven systems have been sharply growing in the recent times but this evolution is hampered by cybersecurity threats like spoofing, denial of service (DoS), distributed denial of service (DDoS) attacks, intrusions, malwares, authentication problems or other fatal attacks. The impacts of these security threats can be diminished by providing protection towards the different IoT security features. Different technological solutions have been presented to cope with the vulnerabilities and providing overall security towards IoT systems operating in numerous environments. In order to attain the full-pledged security of any IoT-driven system the significant contribution presented by artificial neural networks (ANNs) is worthy to be highlighted. Therefore, a systematic approach is presented to unfold the efforts and approaches of ANNs towards the security challenges of IoT. This systematic literature review (SLR) is composed of three (3) research questions (RQs) such that in RQ1, the major focus is to identify security requirements or criteria that defines a full-pledge IoT system. This question also focusses on pinpointing the different types of ANNs approaches that are contributing towards IoT security. In RQ2, we highlighted and discussed the contributions of ANNs approaches for individual security requirement/feature in comprehensive and detailed fashion. In this question, we also determined the various models, frameworks, techniques and algorithms suggested by ANNs for the security advancements of IoT. In RQ3, different security mechanisms presented by ANNs especially towards intrusion detection system (IDS) in IoT along with their performances are comparatively discussed. In this research, 143 research papers have been used for analysis which are providing security solutions towards IoT security issues. A comprehensive and in-depth analysis of selected studies have been made to understand the current research gaps and future research works in this domain.
... The collection of these data also introduces some additional overhead, which even competes with the data transmission. While, latency requirements are especially critical when the data collected by IoT devices is used for automatic or semiautomatic control applications Kaminski et al. (2017). Therefore, applying AI technology to manage the IoT communication and networking seems a promising way but it still needs more efforts to tackle these challenges. ...
... However, the effective and efficient use of IoT technologies for distributed intelligence processing in 5G systems is still not up to the mark. Incorporating DL into IoT-enabled devices can enhance the collaborative processing ability of the system and can reduce latency, delay, and communication overhead and increase the bit rate [243]. ML applications are still in their infancy in user sensing [244], blockchain [245], mobile surveillance, and some other areas in 5G systems . ...
Article
Full-text available
The convenience of availing quality services at affordable costs anytime and anywhere makes mobile technology very popular among users. Due to this popularity, there has been a huge rise in mobile data volume, applications, types of services, and number of customers. Furthermore, due to the COVID‐19 pandemic, the worldwide lockdown has added fuel to this increase as most of our professional and commercial activities are being done online from home. This massive increase in demand for multi‐class services has posed numerous challenges to wireless network frameworks. The services offered through wireless networks are required to support this huge volume of data and multiple types of traffic, such as real‐time live streaming of videos, audios, text, images etc., at a very high bit rate with a negligible delay in transmission and permissible vehicular speed of the customers. Next‐generation wireless networks (NGWNs, i.e. 5G networks and beyond) are being developed to accommodate the service qualities mentioned above and many more. However, achieving all the desired service qualities to be incorporated into the design of the 5G network infrastructure imposes large challenges for designers and engineers. It requires the analysis of a huge volume of network data (structured and unstructured) received or collected from heterogeneous devices, applications, services, and customers and the effective and dynamic management of network parameters based on this analysis in real time. In the ever‐increasing network heterogeneity and complexity, machine learning (ML) techniques may become an efficient tool for effectively managing these issues. In recent days, the progress of artificial intelligence and ML techniques has grown interest in their application in the networking domain. This study discusses current wireless network research, brief discussions on ML methods that can be effectively applied to the wireless networking domain, some tools available to support and customise efficient mobile system design, and some unresolved issues for future research directions.
Chapter
This chapter explores the construction method of robust topology from the perspective of machine learning. Traditional optimization schemes based on genetic evolution or swarm intelligence have a long running time when the large-scale nodes are deployed in a scenario. In order to further reduce the time for topology construction, this chapter studies the application of artificial intelligence in topology to obtain a robust topology deployment model, which can guide the connection of subsequent nodes. The application of machine learning model in network topology is explored from three perspectives: malicious node identification, highly robust topology optimization and highly robust topology generation.
Article
The rapid and universal proliferation of wireless network technology and the considerable development of artificial intelligence technology together have brought forth the era of Internet of Things (IoT). Although each IoT device has limited computational capability, an IoT network as a whole, which realiges the role of data collection from devices, can be regarded as a rich computational resource. In this paper, we introduce several attempts to embed a neural network into a wireless network and conduct neural computing, which contribute to the realization of a high-performance, energy-efficient, and low-latency IoT system.
Article
With the continuous development of technologies, our society is approaching the next stage of industrialization. The Fourth Industrial Revolution also referred to as Industry 4.0, redefines the manufacturing system as a smart and connected machinery system with fully autonomous operation capability. Several advanced cutting-edge technologies, such as cyber-physical systems (CPS), internet of things (IoT), and artificial intelligence, are believed as the essential components to realize Industry 4.0. In this paper, we focus on a comprehensive review of how artificial intelligence benefits Industry 4.0, including potential challenges and possible solutions. A panoramic introduction of neuromorphic computing is provided, which is one of the most promising and attractive research directions in artificial intelligence. Subsequently, we introduce the vista of the neuromorphic-powered Industry 4.0 system and survey a few research activities on applications of artificial neural networks for IoT.
Conference Paper
In the near future, IoT based application services are anticipated to collect massive amounts of data on which complex and diverse tasks are expected to be performed. Machine learning algorithms such as Artificial Neural Networks (ANN) are increasingly used in smart environments to predict the output for a given problem based on a set of tuning parameters as the input. To this end, we present an energy efficient neural network (EE-NN) service embedding framework for IoT based smart homes. The developed framework considers the idea of Service Oriented Architecture (SOA) to provide service abstraction for multiple complex modules of a NN which can be used by a higher application layer. We utilize Mixed Integer Linear Programming (MILP) to formulate the embedding problem to minimize the total power consumption of networking and processing simultaneously. The results of the MILP model show that our optimized NN can save up to 86% by embedding processing modules in IoT devices and up to 72% in fog nodes due to the limited capacity of IoT devices.
Conference Paper
Full-text available
Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing.
Article
Full-text available
This paper proposes embedding an artificial neural network into a wireless sensor network in fully parallel and distributed computation mode. The goal is to equip the wireless sensor network with computational intelligence and adaptation capability for enhanced autonomous operation. The applicability and utility of the proposed concept is demonstrated through a case study whereby a Hopfield neural network configured as a static optimizer for the weakly-connected dominating set problem is embedded into a wireless sensor network to enable it to adapt its network infrastructure to potential changes on-the-fly and following deployment in the field. Minimum weakly-connected dominating set defined for the graph model of the wireless sensor network topology is employed to represent the network infrastructure and can be recomputed each time the sensor network topology changes. A simulation study using the TOSSIM emulator for TinyOS-Mica sensor network platform was performed for mote counts of up to 1000. Time complexity, message complexity and solution quality measures were assessed and evaluated for the case study. Simulation results indicated that the wireless sensor network embedded with Hopfield neural network as a static optimizer performed competitively with other local or distributed algorithms for the weakly connected dominating set problem to establish its feasibility.
Conference Paper
Full-text available
This paper examines some of the most promising and challenging scenarios in IoT, and shows why current compute and storage models confined to data centers will not be able to meet the requirements of many of the applications foreseen for those scenarios. Our analysis is particularly centered on three interrelated requirements: 1) mobility; 2) reliable control and actuation; and 3) scalability, especially, in IoT scenarios that span large geographical areas and require real-time decisions based on data analytics. Based on our analysis, we expose the reasons why Fog Computing is the natural platform for IoT, and discuss the unavoidable interplay of the Fog and the Cloud in the coming years. In the process, we review some of the technologies that will require considerable advances in order to support the applications that the IoT market will demand.
Article
Full-text available
The Internet of Things paradigm envisions the pervasive interconnection and cooperation of smart things over the current and future Internet infrastructure. The Internet of Things is, thus, the evolution of the Internet to cover the real world, enabling many new services that will improve people's everyday lives, spawn new businesses, and make buildings, cities, and transport smarter. Smart things allow indeed for ubiquitous data collection or tracking, but these useful features are also examples of privacy threats that are already now limiting the success of the Internet of Things vision when not implemented correctly. These threats involve new challenges such as the pervasive privacy-aware management of personal data or methods to control or avoid ubiquitous tracking and profiling. This paper analyzes the privacy issues in the Internet of Things in detail. To this end, we first discuss the evolving features and trends in the Internet of Things with the goal of scrutinizing their privacy implications. Second, we classify and examine privacy threats in this new setting, pointing out the challenges that need to be overcome to ensure that the Internet of Things becomes a reality. Copyright © 2013 John Wiley & Sons, Ltd.
Article
Full-text available
Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Article
In this study, the authors focus on theoretical modelling of the fog computing architecture and compare its performance with the traditional cloud computing model. Existing research works on fog computing have primarily focused on the principles and concepts of fog computing and its significance in the context of internet of things (IoT). This work, one of the first attempts in its domain, proposes a mathematical formulation for this new computational paradigm by defining its individual components and presents a comparative study with cloud computing in terms of service latency and energy consumption. From the performance analysis, the work establishes fog computing, in collaboration with the traditional cloud computing platform, as an efficient green computing platform to support the demands of the next generation IoT applications. Results show that for a scenario where 25% of the IoT applications demand real-time, low-latency services, the mean energy expenditure in fog computing is 40.48% less than the conventional cloud computing model.
Article
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
Article
We consider optimal distributed computation of a given function of distributed data. The input (data) nodes and the sink node that receives the function form a connected network that is described by an undirected weighted network graph. The algorithm to compute the given function is described by a weighted directed acyclic graph and is called the computation graph. An embedding defines the computation communication sequence that obtains the function at the sink. Two kinds of optimal embeddings are sought, the embedding that: 1) minimizes delay in obtaining function at sink, and 2) minimizes cost of one instance of computation of function. This abstraction is motivated by three applications—in-network computation over sensor networks, operator placement in distributed databases, and module placement in distributed computing. We first show that obtaining minimum-delay and minimum-cost embeddings are both NP-complete problems and that cost minimization is actually MAX SNP-hard. Next, we consider specific forms of the computation graph for which polynomial-time solutions are possible. When the computation graph is a tree, a polynomial-time algorithm to obtain the minimum-delay embedding is described. Next, for the case when the function is described by a layered graph, we describe an algorithm that obtains the minimum-cost embedding in polynomial time. This algorithm can also be used to obtain an approximation for delay minimization. We then consider bounded treewidth computation graphs and give an algorithm to obtain the minimum-cost embedding in polynomial time.
Article
Self-awareness facilitates a proper assessment of cost-constrained cyber-physical systems, allocating limited resources where they are most needed. Together, situation awareness and attention are key enablers for self-awareness in efficient distributed sensing and computing networks.
Article
We consider in-network computation of an arbitrary function over an arbitrary communication network. A network with capacity constraints on the links is given. Some nodes in the network generate data, e.g., like sensor nodes in a sensor network. An arbitrary function of this distributed data is to be obtained at a terminal node. The structure of the function is described by a given computation schema, which in turn is represented by a directed tree. We design computing and communicating schemes to obtain the function at the terminal at the maximum rate. For this, we formulate linear programs to determine network flows that maximize the computation rate. We then develop a fast combinatorial primal-dual algorithm to obtain near-optimal solutions to these linear programs. As a subroutine for this, we develop an algorithm for finding the minimum cost embedding of a tree in a network with any given set of link costs. We then briefly describe extensions of our techniques to the cases of multiple terminals wanting different functions, multiple computation schemas for a function, computation with a given desired precision, and to networks with energy constraints at nodes.