A Neural-Network-Based Realization of In-Network
Computation for the Internet of Things
Nicholas Kaminski, Irene Macaluso, Emanuele Di Pascale, Avishek Nag, John Brady, Mark Kelly, Keith Nolan,
Wael Guibene, Linda Doyle
Abstract—Ultra-dense Internet of Things (IoT) networks and
machine type communications herald an enormous opportunity
for new computing paradigms and are serving as a catalyst for
profound change in the evolution of the Internet. We explore
leveraging the communication within IoT to serve data processing
by appropriately shaping the aggregate behaviour of a network
to parallel more traditional computation methods. This paper
presents an element of this vision, whereby we map the operations
of an artificial neural network onto the communication of an
IoT network for simultaneous data processing and transfer.
That is, we provide a framework to treat a network holistically
as an artificial neural network, rather than placing neural
networks within the network. The operation of components of a
neural network, neurons and connections between neurons, are
performed by the various elements of the IoT network, i.e., the
devices and their connections. The proposed approach reduces
the latency in delivering processed information and supports the
locality of information inherent to IoT by removing the need for
transfer to remote data processing sites.
Index Terms—Internet of Things, Artificial Neural Networks,
Wireless Sensor Networks
I. INTRODUCTION
IoT technology is rapidly progressing due to manufacturing
advancements with respect to size, weight, power, and cost
of next-generation low-power radio frequency transceivers
and micro-controllers. These advancements, coupled with the
design and fabrication of these components into single packages,
have resulted in highly integrated System-on-a-Chip realisations. As a result,
IoT networks have grown significantly across a wide variety
of domains, and the number of IoT devices is forecast to
grow to about 20 billion by 2020 [1].
However, current IoT nodes are mostly passive producers
of raw data to be consumed elsewhere. Information typically
flows through the network to reach a Fog collector or the
Cloud, with no processing other than the pre-processing at
the sources [2]. Depending on the application, the remotely
processed and aggregated data is then forwarded back to
actuator devices often located in geographical proximity of
the originating sensor nodes. The collective capability of the
IoT network is only exploited by aggregating and analysing
information created by different devices. However, ultra-dense
IoT networks and machine type communications herald an
enormous opportunity for new computing paradigms and are
serving as a catalyst for profound change in the evolution of
the Internet. One such example is leveraging the network to
N. Kaminski, I. Macaluso, E. Di Pascale, A. Nag and L. Doyle are with
CONNECT Center, Trinity College Dublin, Ireland.
J. Brady, M. Kelly, K. Nolan and W. Guibene are with Intel Labs, Ireland.
perform traditional computing functions like processing and
storage to efficiently transform data into information.
In this paper, we utilise the collective behaviour of an
IoT network to perform computation simultaneously with
communication and hence unlock the IoT network’s poten-
tial to act as an integrated computation and communication
system, and not simply as the producer of information and/
or the end-consumer of processed information. In particular,
we propose a framework that maps computational processes
to communicative processes, allowing IoT nodes to collabora-
tively process the data while it flows through the network.
By treating the whole network as a computer, rather than
placing computational elements within the network, no further
processing is necessary once information reaches its intended
destination. As a first realisation of this concept, this paper
maps the computational approaches of an Artificial Neural
Network (ANN) to the communication within an IoT network.
By shaping the aggregate behaviour of the communicative
processes in parallel with the computational approach of a
typical ANN, we propose truly decentralised IoT networks
enabled with computational intelligence.
The merits of the proposed approach are twofold: (a)
it reduces the latency in delivering processed information;
(b) it supports the locality of information inherent to IoT.
Latency requirements are especially critical when the data
collected by IoT devices is used for automatic or semi-
automatic control applications. In such cases, the consumers
of processed information are likely located in proximity of the
sensing devices. By pushing processing into the IoT devices
the proposed approach avoids forwarding the information to
external systems and then back to the IoT network, thus sig-
nificantly reducing the overall latency. The same mechanism
also preserves locality of information which is important for
privacy and security.
We adopt feedforward neural networks as the reference
framework for the generic parallel processor, implemented
by the IoT network. In fact, the computation performed
by a feedforward neural network, which can approximate
any measurable function arbitrarily well, can be partitioned
into different tasks for concurrent execution. The proposed
framework exploits these features to process the data while
optimising the use of resources in the IoT network. In our
model, all computations performed in a neuron are considered
an atomic computing task, and these tasks are distributed
across multiple IoT nodes subject to the resources available
at each node, the connectivity of the IoT network, and the
producers and consumers of information in the network.
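As a minimal sketch of what "a neuron as an atomic computing task" means, the toy forward pass below computes each hidden neuron independently from the messages it receives, so each call could in principle run on a different IoT node. The weights, inputs, and node assignments in the comments are illustrative, not from the paper.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron's atomic task: weighted sum followed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Three sensor readings flow to two hidden neurons, then to one output.
sensor_data = [0.2, 0.7, 0.1]
h1 = neuron(sensor_data, [0.5, -0.3, 0.8], 0.1)    # could run on IoT node A
h2 = neuron(sensor_data, [-0.2, 0.9, 0.4], -0.1)   # could run on IoT node B
out = neuron([h1, h2], [1.2, -0.7], 0.05)          # could run at the actuator
print(round(out, 3))
```

Because each `neuron` call depends only on its own inputs and weights, the hidden-layer calls are mutually independent and can execute concurrently, which is the property the framework exploits.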
II. DATA PROCESSING ARCHITECTURES FOR IOT
The goal of the IoT concept is to provide a mechanism
that connects physical things to other physical things in an
informational sense. This notion of interconnected objects
emanates from the use of Radio Frequency Identification
(RFID) technology and wireless sensor networks (WSNs).
Evolutions of these initial areas provide the springboard for
IoT by offering physical objects that are uniquely identifiable
with communication capabilities. Connecting such objects into
device-driven networks with sensor devices directly interacting
with actuators forms the core of the IoT concept.
As devices at the edge of networks directly interact with
each other and the environment, edge computing becomes in-
creasingly important to support IoT operation [3]. Buyya et al.
[4] note a continuing trend towards cloud-based approaches
to computing, but serving the needs of ubiquitous devices with
such methods is inherently limited by the bottleneck of com-
munication networks [5]. Avoiding such bottlenecks motivates
moving computational resources towards the edge of networks
into smaller and more local compute centers, as suggested
by fog-computing approaches [2]. In fact, proponents of fog
computing laud the architecture’s ability to deliver the lower
latency compute support demanded by typical IoT applications
[6], [7]. To this end, the concept of mist computing [8]
evolved, which pushes computation even beyond the boundaries
of fog computing, directly into the IoT devices. Our approach of
directly employing the communication between edge devices
to serve computation is the utmost realisation of this trend.
We note that IoT provides a powerful force for moving
computation into the farthest edges of networks.
In the context of fog computing, several works provide
a more detailed examination of how to achieve perimeter
computation. For example, Hong et al. provide a programming
model for dynamically scaling fog resources to suit IoT
workloads in [9]. Other works consider the decomposition of
computing tasks for deployment across fog resources, with
possible extension into the mist [10], [11]. Each of these typi-
cal examples separates computational tasks into pieces suitable
for deployment upon various platforms within a network. In
these works the authors consider the communication network
as simply a tool for exchanging data between computing
platforms, without considering network topology or operation
to support computation, i.e., computation and communication
are considered as separate processes in current literature. Our
approach extends beyond prior work by jointly considering
communication and computation.
Edge computing also takes advantage of the locality of
information inherent to IoT, which demands consideration
of several additional aspects. The localisation of information
within a system with several participants has long been a topic
of interest in the area of multi-agent systems as a means to
improve overall system performance [12]. Within this area,
research focuses on appropriately structuring interactions be-
tween system elements based on the notion that each element
fundamentally accesses different information. For example,
[13] employs the notion of locality of information to capture
the local performance of elements within a network. Finally,
the work of Ziegeldorf et al. [14] reminds us that the locality
of information itself has implications about the privacy of
services associated with the information at hand. Indeed,
our work directly considers the local nature of information
available to devices in an IoT network by matching the
computational structure to the location of interest.
III. FRAMEWORK
The proposed framework harnesses the computation and
communication capabilities of an IoT network to process the
collected information by allotting the different components of
a neural network (neurons and connections between neurons)
to the various elements of the IoT network (the devices and
their connections). In other words, the basic idea is to map
neurons into IoT nodes and connections between neurons into
wireless links. It should be noted that an IoT device can
implement one or more neurons and the connection between
two neurons can be realized through multiple wireless links.
Our mapping framework is driven by the minimisation of the
cost to deliver information from the input nodes to the output
nodes through the hidden neurons, subject to the capabilities
of each device and the connectivity of the IoT network. The “cost”
is a generic term referring to any performance metric; we will
explore two possible choices in subsection III-C.
The first step in the framework is the identification of the
input and output nodes in the IoT network, which depend
on the specific application. It should be noted that the same
IoT network can implement multiple applications, for each of
which a separate instance of this framework can be run. The
second step requires the selection and training of the neural
network implementing the function of interest. This step is
performed offline; it is assumed throughout the paper that
an appropriate neural network is available as an input to the
optimisation framework. The third step involves generating a
representation of the physical IoT network, namely the IoT
topology and the resources available at each node (e.g., as
a function of the residual memory and/or power available).
The connectivity of the IoT network can be abstracted on the
basis of the proximity of the devices and the current power
levels of the devices in conjunction with standard path loss
and interference models. Relevant network-state information
also includes the cost associated with the use of a link (e.g.
transmit power, link delay, etc.). The final step is the optimal
mapping which is described in detail below.
A. Variables and Parameters
• G(N, E): IoT network topology, with N being the set of nodes and E the set of edges.
• x_{i,j}: binary variable that takes value 1 if node i is selected as hidden node j.
• d_{k,j}: cost of the optimal path between nodes k and j.
• S: set of source nodes for the trained neural network.
• H: set of hidden nodes for the trained neural network.
• O: set of output nodes for the trained neural network.
• T(i): upper bound on the number of hidden neurons that can be mapped onto node i.
B. Constraints
The choice of IoT devices and wireless paths that implement
the neural network is constrained by the resource usage on
each IoT device, which shapes the search space of the optimal
mapping problem.

Fig. 1: The optimal mapping determines the IoT nodes and links that implement a given neural network. The search space is
shaped by the constraints on the IoT nodes' connectivity and resources, e.g., some IoT nodes may implement multiple neurons.

The constraints imposed are such that each
IoT node i can operate as at most T(i) hidden neurons, where
T(i) is a function of the resources available at the node and
the requirements of other active processes in the node:
\sum_{j \in H} x_{i,j} \le T(i) \quad \forall i \in N \qquad (1)
Moreover, to ensure the integrity of the mapping model,
additional constraints ensure that exactly one IoT node is
selected for each hidden neuron:
\sum_{i \in N} x_{i,j} = 1 \quad \forall j \in H \qquad (2)
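The two constraints can be checked mechanically for any candidate assignment. The sketch below validates a mapping against constraints (1) and (2); the node names, capacities, and assignments are illustrative, not from the paper.

```python
def is_feasible(x, nodes, hidden, T):
    """x[(i, j)] = 1 if IoT node i hosts hidden neuron j."""
    # Constraint (1): node i hosts at most T(i) hidden neurons.
    cap_ok = all(sum(x.get((i, j), 0) for j in hidden) <= T[i] for i in nodes)
    # Constraint (2): each hidden neuron is hosted by exactly one node.
    map_ok = all(sum(x.get((i, j), 0) for i in nodes) == 1 for j in hidden)
    return cap_ok and map_ok

nodes, hidden = ["a", "b", "c"], ["h1", "h2"]
T = {"a": 1, "b": 1, "c": 0}
print(is_feasible({("a", "h1"): 1, ("b", "h2"): 1}, nodes, hidden, T))  # True
print(is_feasible({("c", "h1"): 1, ("c", "h2"): 1}, nodes, hidden, T))  # False
```

The second call fails because node c has capacity T(c) = 0 and so cannot host any hidden neuron.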
C. Objective Functions
We consider two objective functions for the optimal mapping
of neurons into IoT nodes: (i) minimising the overall
cost of communication, and (ii) minimising the maximum cost of
communication. In the first case, if the weight of each edge
in G corresponds to the transmit power between nodes, the
overall cost of communication corresponds to the total transmit
power required to deliver the processed information to the
output nodes. In the second case, if the weight of each edge
in G corresponds to the expected transmit time between nodes,
the objective function corresponds to the maximum transmit
time to deliver the processed information to the output nodes.
The transmit power objective function is given by:

\min \sum_{j \in H} \sum_{i \in N} \Big( \sum_{s \in S} d_{s,i} + \sum_{o \in O} d_{i,o} \Big) x_{i,j} \qquad (3)
The transmit time objective function is given by:

\min \sum_{j \in H} \sum_{i \in N} \Big( \max_{s \in S} d_{s,i} + \max_{o \in O} d_{i,o} \Big) x_{i,j} \qquad (4)
A schematic diagram of the optimal mapping mechanism
is depicted in Fig. 1. The optimal mapping, which can be
formulated as the integer linear program described above,
identifies the IoT nodes that act as hidden neurons and the
optimal paths on the physical IoT topology from the inputs
to the outputs via the hidden neurons. The choice of the
objective function for the optimal mapping model depends on
the application of the IoT network and the specific scenario.
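For intuition, the mapping problem can be solved exhaustively on a toy instance. The sketch below implements objective (3) by enumerating assignments under the assumption of at most one hidden neuron per node; all path costs and node names are illustrative. Larger instances require a proper ILP solver (the case study in the next section uses CPLEX).

```python
from itertools import permutations

def optimal_mapping(candidates, hidden, S, O, d):
    """Minimise objective (3): for each hosting node, the cost of reaching
    it from every source plus the cost of reaching every output from it."""
    def cost(i):
        return sum(d[(s, i)] for s in S) + sum(d[(i, o)] for o in O)
    best = min(permutations(candidates, len(hidden)),
               key=lambda assignment: sum(cost(i) for i in assignment))
    return dict(zip(hidden, best))

S, O, hidden = ["s1", "s2"], ["o1"], ["h1", "h2"]
candidates = ["n1", "n2", "n3"]   # nodes eligible to host hidden neurons
d = {("s1", "n1"): 1, ("s2", "n1"): 2, ("n1", "o1"): 1,
     ("s1", "n2"): 2, ("s2", "n2"): 1, ("n2", "o1"): 2,
     ("s1", "n3"): 3, ("s2", "n3"): 3, ("n3", "o1"): 3}
print(optimal_mapping(candidates, hidden, S, O, d))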
IV. CASE STUDY
We now investigate the behavior of the proposed framework
over various IoT network topologies with respect to the two
objective functions defined in the previous section. In all cases,
it is assumed that input and output nodes cannot operate
as hidden neurons (i.e., T(i) = 0 ∀ i ∈ S ∪ O), whereas every
other IoT node can operate as at most one hidden neuron.
We used the CPLEX Concert library to find the optimal solution
to the integer linear optimisation problem.
We first consider a square N×N lattice topology. Apart
from its simplicity and generality, we picked a lattice topology
as it represents a good match for a number of real world
scenarios, such as a grid of parking sensors for which we
would like to estimate the short-term occupancy probability.
Edges of the topology graph are weighted according to the
objective function of choice, although in our experiments all
edges have the same weight of 1. An illustrative example of the
application of our framework on such a topology is shown on
the left side of Fig. 2, where three sensor nodes were randomly
selected as inputs for the neural network, and another node was
marked as the output, e.g., as an actuator node which needs
to perform a certain operation depending on the results of the
neural network computation.
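Since all edges in this experiment have unit weight, the path costs d_{k,j} on the lattice reduce to hop counts, which a breadth-first search computes directly. The sketch below is illustrative (grid size and source node are arbitrary, not taken from the experiments).

```python
from collections import deque

def lattice_neighbours(node, n):
    """4-connected neighbours of a cell inside an n x n lattice."""
    r, c = node
    steps = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(x, y) for x, y in steps if 0 <= x < n and 0 <= y < n]

def bfs_costs(source, n):
    """Cost of the optimal (fewest-hop) path from `source` to every node."""
    dist, frontier = {source: 0}, deque([source])
    while frontier:
        u = frontier.popleft()
        for v in lattice_neighbours(u, n):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

d = bfs_costs((0, 0), 4)   # 4 x 4 lattice
print(d[(3, 3)])           # Manhattan distance: 6
```

On a unit-weight lattice the BFS distance equals the Manhattan distance, so precomputing one BFS per source and output node yields all the d_{s,i} and d_{i,o} terms the objective functions need.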
In order to investigate the benefits of our framework in terms
of locality of information, we further constrained input and
output nodes to only be picked from one quadrant of the lattice
(e.g., the bottom right quarter of the grid).

Fig. 2: Neural network mapping on a lattice topology (left) and a topology of two sparsely interconnected sub-nets (right).

Fig. 3: Longest path between input and output nodes for the two proposed objective functions and the centralised approach,
with increasing number of nodes N in each row of the lattice and increasing ratios h/i of hidden neurons to input nodes. In
all cases there are i = 4 input nodes and 1 output node.

Next, we compared
the length of the longest messaging path between input and
output nodes using either of the objective functions proposed
in subsection III-C with a baseline centralised approach. In
particular, in both the distributed scenarios, messages from
the input nodes need to reach the hidden neuron nodes for
intermediate processing before they can be forwarded to the
output nodes; whereas in the centralised case messages need
to reach a gateway placed in the middle of the grid (either to
be sent to the cloud for processing or, equivalently, to benefit
from fog computing processing) before the desired outcome
can be sent to the output nodes. Hence, in this experiment we
investigate the effect of scaling up the size of the lattice and
increasing the ratio of hidden neurons to input nodes on the
length of the longest input-output path and on the maximum
delay experienced by our messages of interest in the network
(neglecting the impact of collisions and re-transmissions).
We ran each scenario 500 times and averaged the results,
which are presented in Fig. 3. It is apparent that our approach
allows us to fully exploit the locality of information whenever
that is applicable, i.e., when the input and output nodes are
clustered together in the network, reducing the path traveled
by messages compared to a centralised solution even when
the number of hidden neurons required for the computation
of interest is as high as twice the number of input nodes.
Naturally the price to pay for this extended decentralised
processing is an increase in the number of messages sent,
which, in the trivial case that we are considering here of one
hidden neuron per device, will be equal to |H|×(|S|+|O|).
Note that, as expected, the Transmit Time objective function
produces solutions with slightly shorter longest paths than the
Transmit Power one, but at the expense of slightly higher
number of total messages.
We also designed an experiment where two densely con-
nected sub-networks are linked by a limited number of con-
necting edges, e.g., two floors of an industrial complex with
a mesh-like network of sensors on each floor interconnected
at some gateway points. More specifically, we created two
separate topologies where edges between any pair of nodes are
added with probability p, which hence determines the density
of the graph. The two sub-nets are then joined by creating a
small fixed number of additional connecting edges between
their respective nodes in order to obtain a single connected
topology with a few gateway links between the two original sub-
nets. Three input nodes are selected randomly from one of the
sub-nets, and one output node is picked from the other, to
evaluate the effect of the connecting edge bottlenecks on the
mapping algorithms. The number of hidden neurons is set to
5. An illustrative example for a composite graph with p = 0.5
is shown on the right side of Fig. 2, from which it would
appear that the two objective functions differ in the way they
cluster hidden neurons over the topology.
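The composite topology described above can be generated as follows: two Erdős–Rényi-style sub-graphs with edge probability p, joined by a small fixed number of gateway edges. This is an illustrative sketch of the construction; sizes, seed, and function names are assumptions, not taken from the experimental setup.

```python
import random

def random_subnet(nodes, p, rng):
    """Add each possible edge within `nodes` independently with probability p."""
    return {(u, v) for i, u in enumerate(nodes)
            for v in nodes[i + 1:] if rng.random() < p}

def two_subnets(n, p, n_connect, seed=42):
    rng = random.Random(seed)
    a, b = list(range(n)), list(range(n, 2 * n))
    edges = random_subnet(a, p, rng) | random_subnet(b, p, rng)
    # Join the two sub-nets with a few gateway edges.
    edges |= {(rng.choice(a), rng.choice(b)) for _ in range(n_connect)}
    return edges

edges = two_subnets(n=10, p=0.5, n_connect=3)
gateways = [(u, v) for u, v in edges if u < 10 <= v]
print(len(gateways))   # at most 3 (duplicate draws collapse in the set)
```

Note that the construction does not guarantee connectivity of each sub-graph for small p; the experiments in this section use p ≥ 0.4, where disconnection is unlikely, but a production version would retry until the composite graph is connected.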
To better understand this, we ran a larger simulation cam-
paign for the two sub-nets scenario, where we tracked the
number of hidden nodes placed in the “input” sub-net (i.e.,
the sub-net from which we picked the input sensor nodes)
and in the “output” sub-net respectively, as well as the number
of connecting edges used by each of the two optimisation
algorithms, over 1000 iterations and for increasing values of p,
i.e., for increasingly dense networks. The results are presented
in Fig. 4; error margins with 95% confidence were not included
in the figure to keep it readable but were always within at
most 1% of the results shown. When using the Transmit
Power objective function, hidden nodes are mostly placed
in the input network and the trend grows as the density of
the graph increases. On the other hand, in the case of the
Transmit Time objective function the hidden neurons are more
evenly shared between the input and output networks. As a
consequence of this, on average the Transmit-Time-based
mapping tends to use a larger number of connections between
the two networks. Finally, the use of connecting edges appears
to be mostly constant with the value of p, with only a slight
increase for the case of fully connected sub-graphs.
V. DISCUSSION
In this paper we propose a neural-network-based implementation
of the In-Network Computation concept, which exploits
the communication between IoT devices to perform general
computation, in this specific case the online processing
of the data transiting the network. The theoretical foundation
for this approach is given by the Universal Approximation
Theorem, which roughly states that a neural
network with a single hidden layer is sufficient to approximate
any continuous function arbitrarily well on a bounded domain.
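One common formal statement of this theorem (in the form due to Cybenko and Hornik, with σ a sigmoidal activation) reads:

```latex
% For any continuous f on a compact set K and any eps > 0, a single
% hidden layer of N units suffices:
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\, v_i,\, b_i,\, w_i :
\quad \Big| f(x) - \sum_{i=1}^{N} v_i \, \sigma(w_i^{\top} x + b_i) \Big|
< \varepsilon \quad \forall x \in K
```

In the mapping framework, the inner terms σ(wᵢᵀx + bᵢ) are exactly the hidden-neuron tasks assigned to IoT nodes, and the outer sum is computed at the output node.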
This idea allows us to incorporate intelligence into a net-
work of cheap, low-power IoT sensors and devices. This is
particularly useful in the context of WSNs, as it means that
critical processing of the inputs measured by the sensor nodes
of the network can be performed locally and on-the-fly as the
data traverses the network to reach the actuator nodes, rather
than relying on remote servers or cloud-based solutions which
would increase latency and decrease information privacy. Our
approach also has the advantage of naturally scaling with
the number of IoT nodes, unlike a solution based on
external processing, which requires additional backhauling and
computational resources as the number of sensors grows.
As there are many ways to map a neural network to a physi-
cal topology of IoT devices, we have developed a framework to
optimise the placement of the hidden neurons on the available
physical nodes. Specifically, we have shown two examples
minimising respectively (a) the total transmission power or
(b) the maximum transmission time. Additional models can
be added to the framework in the future, e.g., minimising data
loss in the presence of interference etc. As a proof of concept,
we have shown how this idea could be fleshed out over either
a lattice or a composite topology, evaluating the effects of
choosing different metrics for the optimisation algorithm.
Many works address the problem of in-network computation
in WSNs. Most of the approaches in the literature focus on
determining the maximum achievable computation rate of
specific functions [15]–[17]. Typically these works consider
symmetric functions and assume a single collector node for
the processed data. More recently, approaches considering the
distributed implementation of a generic function have been
considered [18], [19]. The function is described as a weighted
directed acyclic graph in [18] and as a directed tree in [19].
Both works, unlike ours, neglect the resource availability in each
node, which shapes the search space of the optimal mapping.
While we believe that the concept of embedding a neural
network in an IoT network is radically novel in its entirety,
there are some previous related works targeting specific as-
pects of this problem, or sharing the same aim but with
a different approach. For example, in [11] the authors de-
velop neural network architectures that can be used to report
information from a sensor network in a more “cognitive”
manner. While there are certain similarities to the work we
are presenting here, their proposal largely comes down to a
method for data aggregation and filtering. Indeed the legacy
of this work is more closely related to cognitive networks than
to the support of general computing.
The motivations of the work presented in [10] are very
much in line with ours, but their work essentially consists of a
middleware layer to distribute neural network building blocks
over multiple heterogeneous devices, not unlike what can be
achieved by TensorFlow when multiple processing resources
are available. Most importantly, the “IoT network” used for
their experiments is composed of a laptop, an embedded
device with a specialized low power GPU, and a powerful
server – in other words, powerful machines that have little in
common with the typical lightweight IoT devices we consider.
Finally, the authors did not develop optimization algorithms
to distribute these functionalities over the available devices,
although this is mentioned as possible future work.
In [20] the authors present a similar vision with a
more specialized focus on self-adapting WSNs. They propose
using a Hopfield neural network as a heuristic for
determining the minimum weakly-connected dominating
set of nodes in the network – a problem which is of relevance
in many wireless communication protocols. Since every node
is mapped to a neuron, they do not introduce an optimization
framework in their proposal. General-purpose computation
over a WSN through neural networks is mentioned as a
possibility but not further investigated.
Lastly, the patent in [21] describes a method to
map a neural network onto a network of embedded devices. How-
ever, their solution also lacks the mapping optimization aspect
of our approach – nodes are instead uniformly mapped to
available embedded devices. Their work also rigidly requires a
full mesh network between the embedded devices, which can
be a very restrictive assumption in many sparse deployment
scenarios. Our solution does not require full mesh connectivity,
using intermediate node relays instead whenever necessary.

Fig. 4: Distribution of hidden neurons and percentage of connecting edges used for the two considered objective functions and
for increasing values of the edge creation probability p.
Future work will focus on refining the abstractions used to
model the physical IoT devices supporting the neural network,
allowing us to implement more refined optimization strategies
taking into account the capabilities of each device in terms of
processing, communications, storage, power etc. Furthermore,
given the unreliable nature of the low-cost devices and wireless
communication mechanisms that we are considering, we aim
to investigate strategies to recover from network or node faults.
Such events could be tackled either in the neural network
domain, i.e., through the re-deployment of a new neural
network configuration on the surviving nodes and links, or in
the actual IoT network domain, i.e., by adopting re-routing
mechanisms or re-transmission request protocols; however,
particular attention should be paid to the potential overlap of
the recovery functions of these two domains to avoid disruptive
feedback loops.
ACKNOWLEDGMENT
This publication has emanated from research supported in
part by a research grant from Science Foundation Ireland (SFI)
and is co-funded under the European Regional Development
Fund under Grant Number 13/RC/2077 (CONNECT).
REFERENCES
[1] Gartner, http://www.gartner.com/newsroom/id/3165317.
[2] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its
role in the Internet of Things,” in Proceedings of the first edition of the
MCC workshop on Mobile cloud computing, 2012, pp. 13–16.
[3] M. Yannuzzi, R. Milito, R. Serral-Gracia, D. Montero, and M. Ne-
mirovsky, “Key Ingredients in an IoT Recipe: Fog Computing, Cloud
Computing, and More Fog Computing,” in 2014 IEEE 19th International
Workshop on Computer Aided Modeling and Design of Communication
Links and Networks (CAMAD), dec 2014.
[4] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud
Computing and Emerging IT Platforms: Vision, Hype, and Reality for
Delivering Computing as the 5th Utility,” Future Generation Computer
Systems, vol. 25, no. 6, pp. 599–616, jun 2009.
[5] L. M. Vaquero and L. Rodero-Merino, “Finding your Way in the Fog,”
ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, pp.
27–32, oct 2014.
[6] S. Yi, C. Li, and Q. Li, “A Survey of Fog Computing,” in Proceedings
of the 2015 Workshop on Mobile Big Data - Mobidata’15, 2015.
[7] S. Sarkar and S. Misra, “Theoretical Modelling of Fog Computing: a
Green Computing Paradigm to Support IoT Applications,” IET Net-
works, vol. 5, no. 2, pp. 23–29, mar 2016.
[8] J. S. Preden, K. Tammemae, A. Jantsch, M. Leier, A. Riid, and
E. Calis, “The Benefits of Self-Awareness and Attention in Fog and
Mist Computing,” Computer, vol. 48, no. 7, pp. 37–45, jul 2015.
[9] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwlder, and B. Kolde-
hofe, “Mobile Fog: a Programming Model for Large-Scale Applications
on the Internet of Things,” in Proceedings of the second ACM SIG-
COMM workshop on Mobile cloud computing - MCC’13, 2013.
[10] E. De Coninck, T. Verbelen, B. Vankeirsbilck, S. Bohez, S. Leroux,
and P. Simoens, “DIANNE,” in Proceedings of the 2nd Workshop on
Middleware for Context-Aware Applications in the IoT - M4IoT 2015,
New York, New York, USA, 2015, pp. 19–24.
[11] L. Reznik and G. Von Pless, “Neural Networks for Cognitive Sensor Net-
works,” in International Joint Conference on Neural Networks (IJCNN),
2008, pp. 1235–1241.
[12] H. Van Dyke Parunak, ““Go to the Ant”: Engineering Principles from
Natural Multi-Agent System,” Annals of Operations Research, vol. 75,
no. 0, pp. 69–101, 1997.
[13] F. Chinchilla, M. Lindsey, and M. Papadopouli, “Analysis of Wireless
Information Locality and Association Patterns in a Campus,” in IEEE
INFOCOM, 2004.
[14] J. H. Ziegeldorf, O. G. Morchon, and K. Wehrle, “Privacy in the Inter-
net of Things: Threats and Challenges,” Security and Communication
Networks, vol. 7, no. 12, pp. 2728–2742, jun 2013.
[15] A. Giridhar and P. Kumar, “Toward a theory of in-network computation
in wireless sensor networks,” IEEE Communications Magazine, vol. 44,
no. 4, pp. 98–107, 2006.
[16] N. Khude, A. Kumar, and A. Karnik, “Time and energy complexity of
distributed computation in wireless sensor networks,” in Proceedings
IEEE 24th Annual Joint Conference of the IEEE Computer and Com-
munications Societies., vol. 4, 2005, pp. 2625–2637.
[17] L. Ying, R. Srikant, and G. E. Dullerud, “Distributed symmetric function
computation in noisy wireless sensor networks,” IEEE Transactions on
Information Theory, vol. 53, no. 12, pp. 4826–4833, 2007.
[18] P. Vyavahare, N. Limaye, and D. Manjunath, “Optimal embedding of
functions for in-network computation: Complexity analysis and algo-
rithms,” IEEE/ACM Transactions on Networking, vol. 24, no. 4, pp.
2019 – 2032, 2015.
[19] V. Shah, B. K. Dey, and D. Manjunath, “Network flows for function
computation,” IEEE Journal on Selected Areas in Communications,
vol. 31, no. 4, pp. 714–730, 2013.
[20] J. Li and G. Serpen, “Adaptive and Intelligent Wireless Sensor Networks
through Neural Networks: an Illustration for Infrastructure Adaptation
through Hopfield Network,” Applied Intelligence, mar 2016.
[21] N. Kambhatla, D. Kanevsky, and W. W. Zadrozny, “US6418423 B1-
Method and Apparatus for Executing Neural Network Applications on
a Network of Embedded Devices,” 2002.