Deploying a Wireless Sensor Network on an Active Volcano


Abstract

Augmenting heavy and power-hungry data collection equipment with lighter, smaller wireless sensor network nodes leads to faster, larger deployments. Arrays comprising dozens of wireless sensor nodes are now possible, allowing scientific studies that aren't feasible with traditional instrumentation. Designing sensor networks to support volcanic studies requires addressing the high data rates and high data fidelity these studies demand. The authors' sensor-network application for volcanic data collection relies on triggered event detection and reliable data retrieval to meet bandwidth and data-quality demands.
Geoffrey Werner-Allen, Konrad Lorincz, and Matt Welsh, Harvard University
Omar Marcillo and Jeff Johnson, University of New Hampshire
Mario Ruiz and Jonathan Lees, University of North Carolina
Wireless sensor networks — in which numerous resource-limited nodes are linked via low-bandwidth wireless radios — have been the focus of intense research during the past few years. Since their conception, they've excited a range of scientific communities because of their potential to facilitate data acquisition and scientific studies. Collaborations between computer scientists and other domain scientists have produced networks that can record data at a scale and resolution not previously possible. Taking this progress one step further, wireless sensor networks can potentially advance the pursuit of geophysical studies of volcanic activity.
Two years ago, our team of computer scientists at Harvard University began collaborating with volcanologists at the University of North Carolina, the University of New Hampshire, and the Instituto Geofísico in Ecuador. Studying active volcanoes typically involves sensor arrays built to collect seismic and infrasonic (low-frequency acoustic) signals. Our group is among the first to study the use of tiny, low-power wireless sensor nodes for geophysical studies. In 2004, we deployed a small wireless sensor network on Volcán Tungurahua in central Ecuador as a proof of concept.[1] For three days, three nodes equipped with microphones collected continuous data from the erupting volcano.
In August 2005, we deployed a larger, more capable network on Volcán Reventador in northern Ecuador. The array consisted of 16 nodes equipped with seismoacoustic sensors deployed over 3 km.
The system routed the collected data through a multihop network and over a long-distance radio link to an observatory, where a laptop logged the collected data. Over three weeks, our network captured 230 volcanic events, producing useful data and letting us evaluate the performance of large-scale sensor networks for collecting high-resolution volcanic data.

In contrast with existing volcanic data-acquisition equipment, our nodes are smaller, lighter, and consume less power. The resulting spatial distribution greatly facilitates scientific studies of wave propagation phenomena and volcanic source mechanisms. Additionally — and extremely important for successful collaboration — this volcanic data-collection application presents numerous challenging computer science problems. Studying active volcanoes necessitates high data rates, high data fidelity, and sparse arrays with high spatial separation between nodes. The intersection of these scientific requirements with wireless sensor network nodes' current capabilities creates difficult computer science problems that require research and novel engineering.
Sensor Networks for Volcanic Monitoring
Wireless sensor networks can greatly assist the geophysics community. The increased scale promised by lighter, faster-to-deploy equipment will help address scientific questions beyond current equipment's practical reach. Today's typical volcanic data-collection station consists of a group of bulky, heavy, power-hungry components that are difficult to move and require car batteries for power. Remote deployments often require vehicle or helicopter assistance for equipment installation and maintenance. Local storage is also a limiting factor — stations typically log data to a Compact Flash card or hard drive, which researchers must periodically retrieve, requiring them to regularly return to each station. Although these limitations make it difficult to deploy large networks of existing equipment, such large-scale experiments could help us achieve important insights into volcanoes' inner workings. Volcanic tomography,[2] for example, is one approach to the study of volcanoes' interior structure; collecting and analyzing signals from multiple stations can produce precise mappings of the volcanic edifice. In general, such mappings' precision and accuracy increase as stations are added to the data-collection network. Studies such as these could help resolve debates over the physical processes at work within a volcano's interior.
The geophysics community has well-established tools and techniques it uses to process signals extracted by volcanic data-collection networks. These analytical methods require that our wireless sensor networks provide data of extremely high fidelity — a single missed or corrupted sample can invalidate an entire record. Small differences in sampling rates between two nodes can also frustrate analysis, so samples must be accurately time stamped to allow comparisons between nodes and between networks.
An important feature of volcanic signals is that much of the data analysis focuses on discrete events, such as eruptions, earthquakes, or tremor activity. Although volcanoes differ significantly in the nature of their activity, during our deployment, many interesting signals at Reventador spanned less than 60 seconds and occurred several dozen times per day. This let us design the network to capture time-limited events, rather than continuous signals.
Of course, recording individual events doesn't adequately answer all the scientific questions that volcanologists pose. Indeed, understanding long-term trends requires complete waveforms spanning long time intervals. However, wireless sensor nodes' low radio bandwidth makes them inappropriate for such studies; thus, we focused on triggered event collection when designing our network.
Volcanic studies also require large internode separations to obtain widely separated views of seismic and infrasonic signals as they propagate. Array configurations often comprise one or more possibly intersecting lines of sensors, and the resulting topologies raise new challenges for sensor-network design, given that much previous work has focused on dense networks in which each node has several neighbors. Linear configurations can also affect achievable network bandwidth, which degrades when data must be transmitted over multiple hops. Node failure poses a serious problem in sparse networks because a single failure can obscure a large portion of the network.
Sensor-Network Application Design
Given wireless sensor network nodes' current capabilities, we set out to design a data-collection network that would meet the scientific requirements we outlined in the previous section. Before describing our design in detail, let's take a high-level view of our sensor node hardware and give an overview of the network's operation. Figure 1 shows our sensor network architecture.
Network Hardware
Our sensor network on Reventador comprised 16 stations equipped with seismic and acoustic sensors. Each station consisted of a Moteiv TMote Sky wireless sensor network node (www.moteiv.com), an 8-dBi 2.4-GHz external omnidirectional antenna, a seismometer, a microphone, and a custom hardware interface board. We fitted each of 14 nodes with a Geospace Industrial GS-11 geophone — a single-axis seismometer with a corner frequency of 4.5 Hz — oriented vertically. We equipped the two remaining nodes with triaxial Geospace Industries GS-1 seismometers with corner frequencies of 1 Hz, yielding separate signals in each of the three axes.
The TMote Sky is a descendant of the University of California, Berkeley's Mica "mote" sensor node. It features a Texas Instruments MSP430 microcontroller, 48 Kbytes of program memory, 10 Kbytes of static RAM, 1 Mbyte of external flash memory, and a 2.4-GHz Chipcon CC2420 IEEE 802.15.4 radio. The TMote Sky was designed to run TinyOS,[3] and all of our software development used this environment. We chose the TMote Sky because the MSP430 microprocessor provides several configurable ports that easily support external devices, and the large amount of flash memory was useful for buffering collected data, as we describe later.
We built a custom hardware board to integrate the TMote Sky with the seismoacoustic sensors. The board features up to four Texas Instruments AD7710 analog-to-digital converters (ADCs), providing resolution of up to 24 bits per channel. The MSP430 microcontroller provides on-board ADCs, but they're unsuitable for our application. First, they provide only 16 bits of resolution, whereas we required at least 20 bits. Second, seismoacoustic signals require an aggressive filter centered around 50 Hz. Because implementing such a filter using analog components isn't feasible, it's usually approximated digitally, which requires several factors of oversampling. To perform this filtering, the AD7710 samples at more than 30 kHz, while presenting a programmable output word rate of 100 Hz. The high sample rate and computation that digital filtering requires are best delegated to a specialized device.
A pair of alkaline D cell batteries powered each sensor node — our network's remote location made it important to choose batteries maximizing node lifetime while keeping cost and weight low. D cells provided the best combination of low cost and high capacity, and they can power a node for more than a week. Roughly 75 percent of the power each node draws is consumed by the sensor interface board, primarily due to the ADCs' high power consumption. During our three-week deployment, we changed batteries on the entire sensor array only twice.
The network is monitored and controlled by a laptop base station, located at a makeshift volcano observatory roughly 4 km from the sensor network itself. FreeWave radio modems using 9-dBi directional Yagi antennae were used to establish a long-distance radio link between the sensor network and the observatory.
Typical Network Operation
Each node samples two or four channels of seismoacoustic data at 100 Hz, storing the data in local flash memory. Nodes also transmit periodic status messages and perform time synchronization, as described later. When a node detects an interesting event, it routes a message to the base-station laptop. If enough nodes report an event within a short time interval, the laptop initiates data collection, which proceeds in a round-robin fashion. The laptop downloads between 30 and 60 seconds of data from each node using a reliable data-collection protocol, ensuring that the system retrieves all buffered data from the event. When data collection completes, nodes return to sampling and storing sensor data.

Figure 1. The volcano monitoring sensor-network architecture. The network consists of 16 sensor nodes, each with a microphone and seismometer, collecting seismic and acoustic data on volcanic activity. Nodes relay data via a multihop network to a gateway node connected to a long-distance FreeWave modem, providing radio connectivity with a laptop at the observatory. A GPS receiver is used along with a multihop time-synchronization protocol to establish a network-wide timebase. (Figure labels: sensor nodes spaced 200-400 m apart; a GPS receiver for time sync; a FreeWave radio modem for long-distance communication to the base.)
Overcoming High Data Rates: Event Detection and Buffering
When designing high-data-rate sensing applications, we must remember an important limitation of current sensor-network nodes: low radio bandwidth. IEEE 802.15.4 radios, such as the Chipcon CC2420, have raw data rates of roughly 30 Kbytes per second. However, overheads caused by packet framing, medium access control (MAC), and multihop routing reduce the achievable data rate to less than 10 Kbytes per second, even in a single-hop network.

Consequently, nodes can acquire data faster than they can transmit it. Simply logging data to local storage for later retrieval is also infeasible for these applications. The TMote Sky's flash memory fills in roughly 20 minutes when recording two channels of data at 100 Hz. Fortunately, many interesting volcanic events will fit in this buffer. For a typical earthquake or explosion at Reventador, 60 seconds of data from each node is adequate.
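As a rough sanity check on the 20-minute figure, the sketch below computes how quickly a 1-Mbyte flash fills at 100 Hz on two channels. The assumed storage cost of 4 bytes per 24-bit sample (covering padding and per-block metadata) is our assumption, not a figure from the article.

```c
#include <stdio.h>

/* Back-of-envelope check of the flash-buffer lifetime quoted in the text.
 * Assumption (not stated in the article): each 24-bit sample costs roughly
 * 4 bytes of flash once padding and block headers are included. */
int main(void) {
    const double flash_bytes      = 1024.0 * 1024.0; /* 1 Mbyte external flash */
    const double sample_rate_hz   = 100.0;           /* per channel            */
    const double channels         = 2.0;             /* two-channel node       */
    const double bytes_per_sample = 4.0;             /* assumed storage cost   */

    double fill_rate = sample_rate_hz * channels * bytes_per_sample; /* bytes/s */
    double seconds   = flash_bytes / fill_rate;

    printf("fill rate: %.0f bytes/s\n", fill_rate);
    printf("buffer lifetime: %.1f minutes\n", seconds / 60.0);
    return 0;
}
```

With these assumptions the buffer lasts about 22 minutes, consistent with the roughly 20 minutes quoted above.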
Each sensor node stores sampled data in its local flash memory, which we treat as a circular buffer. Each block of data is time stamped using the local node time, which is later mapped to a global network time, as we explain in the next section. Each node runs an event detector on locally sampled data. Good event-detection algorithms produce high detection rates while maintaining small false-positive rates. The detection algorithm's sensitivity links these two metrics — a more sensitive detector correctly identifies more events at the expense of producing more false positives.
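The following is a minimal sketch of the circular flash buffer described above: fixed-size blocks tagged with a sequence number and the local timestamp of their first sample, with the oldest blocks overwritten once the flash wraps. The block layout and the flash_write_block hook are illustrative assumptions; the real nodes write through the TinyOS storage abstraction.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_DATA_BYTES 256
#define FLASH_BLOCKS     4096          /* approx. 1 Mbyte / 256-byte blocks */

typedef struct {
    uint32_t seqno;                    /* monotonically increasing          */
    uint32_t local_timestamp;          /* node clock at the first sample    */
    uint8_t  data[BLOCK_DATA_BYTES];   /* raw ADC samples                   */
} sample_block_t;

static uint32_t next_seqno;            /* also determines the write slot    */

/* Hypothetical flash-driver hook standing in for the platform storage API. */
extern void flash_write_block(uint32_t index, const sample_block_t *blk);

void buffer_append(const uint8_t *samples, uint32_t local_timestamp)
{
    sample_block_t blk;
    blk.seqno = next_seqno;
    blk.local_timestamp = local_timestamp;
    memcpy(blk.data, samples, BLOCK_DATA_BYTES);
    /* Oldest data is silently overwritten once the flash wraps around. */
    flash_write_block(next_seqno % FLASH_BLOCKS, &blk);
    next_seqno++;
}
```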
The data set that our previous deployment at Volcán Tungurahua[1] produced aided us in designing the event detector. We implemented a short-term average/long-term average threshold detector, which computes two exponentially weighted moving averages (EWMAs) with different gain constants. When the ratio between the short-term average and the long-term average exceeds a fixed threshold, the detector fires. The detector threshold lets nodes distinguish between low-amplitude signals, perhaps from distant earthquakes, and high-amplitude signals from nearby volcanic activity.
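A minimal sketch of such a short-term average/long-term average (STA/LTA) trigger appears below: two EWMAs of signal amplitude with different gains, compared as a ratio against a threshold. The gain constants and threshold shown are illustrative; the article does not give the values used at Reventador.

```c
#include <math.h>
#include <stdbool.h>

typedef struct {
    double sta, lta;     /* short- and long-term averages            */
    double sta_gain;     /* e.g. ~0.1: reacts within a second or so  */
    double lta_gain;     /* e.g. ~0.001: tracks the background level */
    double threshold;    /* fire when sta/lta exceeds this ratio     */
} detector_t;

/* Feed one sample; returns true when the detector fires. */
static bool detector_update(detector_t *d, double sample)
{
    double mag = fabs(sample);
    d->sta += d->sta_gain * (mag - d->sta);
    d->lta += d->lta_gain * (mag - d->lta);
    /* Avoid firing at startup before the long-term average settles. */
    if (d->lta < 1e-9)
        return false;
    return (d->sta / d->lta) > d->threshold;
}
```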
When the event detector on a node fires, it routes a small message to the base-station laptop. If enough nodes report events within a certain time window, the laptop initiates data collection from the entire network (including nodes that didn't report the event). This global filtering prevents spurious event detections from triggering a data-collection cycle. Fetching 60 seconds of data from all 16 nodes in the network takes roughly one hour. Because nodes can only buffer 20 minutes of eruption data locally, each node pauses sampling and reporting events until it has uploaded its data. Given that the latency associated with data collection prevents our network from capturing all events, optimizing the data-collection process is a focus of future work.
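The global filter at the base station can be summarized by the sketch below: a collection cycle starts only when enough distinct nodes report an event within a short window. The MIN_REPORTS and WINDOW_SEC constants are illustrative assumptions; the article does not publish the deployment's values.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_NODES    16
#define MIN_REPORTS   5      /* assumed vote threshold      */
#define WINDOW_SEC   10      /* assumed report time window  */

static uint32_t last_report[MAX_NODES];   /* last trigger time per node, 0 = never */

/* Called for each incoming event report; returns true when a network-wide
 * data-collection cycle should start. */
bool handle_event_report(uint8_t node_id, uint32_t now)
{
    int votes = 0;
    if (node_id >= MAX_NODES)
        return false;
    last_report[node_id] = now;
    for (int i = 0; i < MAX_NODES; i++)
        if (last_report[i] != 0 && now - last_report[i] <= WINDOW_SEC)
            votes++;
    return votes >= MIN_REPORTS;
}
```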
Reliable Data Transmission and Time Synchronization
Extracting high-fidelity data from a wireless sensor network is challenging for two primary reasons. First, the radio links are lossy and frequently asymmetrical. Second, the low-cost crystal oscillators on these nodes have low tolerances, causing clock rates to vary across the network. Much prior research has focused on addressing these challenges.[4,5]

We developed a reliable data-collection protocol, called Fetch, to retrieve buffered data from each node over a multihop network. Samples are buffered locally in blocks of 256 bytes, then tagged with sequence numbers and time stamps. During transmission, a sensor node fragments each requested block into several chunks, each of which is sent in a single radio message. The base-station laptop retrieves a block by flooding a request to the network using Drip, a variant of the TinyOS Trickle[6] data-dissemination protocol. The request contains the target node ID, the block sequence number, and a bitmap identifying missing chunks in the block.
The target node replies by sending the requested chunks over a multihop path to the base station. Our system constructs the routing tree using MultiHopLQI, a variant of the TinyOS MintRoute[4] routing protocol modified to select routes based on the CC2420 link quality indicator (LQI) metric. Link-layer acknowledgments and retransmissions at each hop improve reliability. Retrieving one minute of stored data from a two-channel sensor node requires fetching 206 blocks and can take several minutes to complete, depending on the multihop path's quality and the node's depth in the routing tree.
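The message layouts implied by the Fetch description might look like the sketch below: a 256-byte block split into radio-sized chunks, with the base station's request carrying a bitmap of the chunks it is still missing. Field widths and the chunk size are our assumptions, not the published packet format.

```c
#include <stdint.h>

enum {
    FETCH_BLOCK_SIZE = 256,   /* bytes of buffered samples per block       */
    FETCH_CHUNK_SIZE = 64,    /* assumed payload per 802.15.4 message      */
    FETCH_CHUNKS     = FETCH_BLOCK_SIZE / FETCH_CHUNK_SIZE
};

/* Disseminated to the whole network via Drip; only the target node replies. */
typedef struct {
    uint16_t target_node;     /* node holding the requested data           */
    uint32_t block_seqno;     /* which block of the node's event buffer    */
    uint8_t  missing_bitmap;  /* bit i set => chunk i still missing        */
} fetch_request_t;

/* One chunk of the reply, forwarded hop by hop toward the base station. */
typedef struct {
    uint16_t source_node;
    uint32_t block_seqno;
    uint8_t  chunk_index;               /* 0 .. FETCH_CHUNKS-1 */
    uint8_t  payload[FETCH_CHUNK_SIZE]; /* raw buffered samples */
} fetch_chunk_t;
```

The bitmap lets the base station re-request only the chunks lost in transit rather than the whole block, which matters when a block must cross several lossy hops.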
Scientific volcano studies require sampled data to be accurately time stamped; in our case, a global clock accuracy of ten milliseconds was sufficient. We chose to use the Flooding Time Synchronization Protocol (FTSP),[5] developed at Vanderbilt University, to establish a global clock across our network. FTSP's published accuracy is very high, and the TinyOS code was straightforward to integrate into our application. One of the nodes used a Garmin GPS receiver to map the FTSP global time to GMT. Unfortunately, FTSP occasionally exhibited unexpected behavior, in which nodes would report inaccurate global times, preventing some data from being correctly time stamped. We're currently developing techniques to correct our data set's time stamps based on the large number of status messages logged from each node, which provide a mapping from the local clock to the FTSP global time.
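One way to read the rectification idea is sketched below: each logged status message pairs a node's local clock with the FTSP global clock, and a least-squares line fitted through those pairs converts any buffered local timestamp to global time. This is our interpretation under that assumption, not the authors' published implementation.

```c
#include <stddef.h>

typedef struct { double local_time, global_time; } clock_pair_t;

/* Fit global_time = a * local_time + b over n status-message pairs. */
static void fit_clock_map(const clock_pair_t *p, size_t n, double *a, double *b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += p[i].local_time;
        sy  += p[i].global_time;
        sxx += p[i].local_time * p[i].local_time;
        sxy += p[i].local_time * p[i].global_time;
    }
    double denom = (double)n * sxx - sx * sx;   /* assumes n >= 2 distinct points */
    *a = ((double)n * sxy - sx * sy) / denom;
    *b = (sy - *a * sx) / (double)n;
}

/* Convert a buffered sample's local timestamp to the network-wide timebase. */
static double local_to_global(double local_ts, double a, double b)
{
    return a * local_ts + b;
}
```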
Command and Control
A feature missing from most traditional volcanic data-acquisition equipment is real-time network control and monitoring. The long-distance radio link between the observatory and the sensor network lets our laptop monitor and control the network's activity. We developed a Java-based GUI for monitoring the network's behavior and manually setting parameters, such as sampling rates and event-detection thresholds. In addition, the GUI was responsible for controlling data collection following a triggered event, moving significant complexity out of the sensor network. The laptop logged all packets received from the sensor network, facilitating later analysis of the network's operation.
The GUI also displayed a table summarizing network state, based on the periodic status messages that each node transmitted. Each table entry included the node ID; local and global time stamps; various status flags; the amount of locally stored data; depth, parent, and radio link quality in the routing tree; and the node's temperature and battery voltage. This functionality greatly aided sensor deployment by letting a team member rapidly determine whether a new node had joined the network as well as the quality of its radio connectivity.
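The per-node record behind that table could be represented as in the sketch below. The field names and widths are assumptions based on the quantities listed in the text, not the actual status-message format.

```c
#include <stdint.h>

typedef struct {
    uint16_t node_id;
    uint32_t local_time;       /* node's local clock                   */
    uint32_t global_time;      /* FTSP network time                    */
    uint16_t status_flags;     /* e.g. sampling, fetching, GPS lock    */
    uint32_t blocks_buffered;  /* amount of locally stored data        */
    uint8_t  tree_depth;       /* hops to the gateway                  */
    uint16_t parent_id;        /* current routing parent               */
    uint8_t  link_quality;     /* CC2420 LQI toward the parent         */
    int16_t  temperature_c;    /* on-board temperature reading         */
    uint16_t battery_mv;       /* battery voltage in millivolts        */
} node_status_t;
```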
Deploying on Volcán Reventador
Volcán Reventador is located in northern Ecuador, roughly three hours from the capital, Quito. Long dormant, Reventador reawakened suddenly in 2002, erupting with massive force. Ash thrown into the air blanketed Quito's streets 100 kilometers to the west, closing schools and the airport. Pyroclastic flows raced down the mountain, flattening forests, displacing an oil pipeline, and severing a major highway. After 18 months of quiescence, renewed activity began in November 2004. During our deployment, Reventador's activity consisted of discrete, relatively small explosive events that ejected incandescent blocks, gas, and ash several times a day. Corresponding seismic activity included explosion earthquakes, extended-duration shaking (tremors), and shallow-rock-fracturing earthquakes that might have been associated with magma migration within the volcano.
Several features of Volcán Reventador made it ideal for our experiment. Reaching 3,500 meters at its peak, Reventador sits at a low elevation compared to other Ecuadorean volcanoes, making deployment less strenuous. Its climate is moderate, with temperatures ranging between 10 and 30 degrees Celsius. The 2002 explosion produced pyroclastic flows that left large parts of the flanks denuded of vegetation. Given that our radio antennas' effectiveness can be severely degraded by obstacles to line-of-sight, the lack of vegetation greatly simplified sensor-node positioning.
Our base while working at Reventador was the Hosteria El Reventador, a small hotel located nearby on the highway from Quito to Lago Agrio. The hotel provided us with space to set up our equipment and ran an electric generator that powered our laptops and other equipment at the makeshift observatory.
Sensor-Network Device Enclosures and Physical Setup
A single sensor network node, interface board, and battery holder were all housed inside a small weatherproof and watertight Pelican case, as Figure 2 shows. We installed environmental connectors through the case, letting us attach cables to external sensors and antennae without opening the case and disturbing the equipment inside. For working in wet and gritty conditions, these external connectors became a tremendous asset.

Figure 2. A two-component station. The blue Pelican case contains the wireless sensor node and hardware interface board. The external antenna is mounted on the PVC pole to reduce ground effects. A microphone is taped to the PVC pole, and a single seismometer is buried nearby.
Installing a station involved covering the Pelican case with rocks to anchor it and shield the contents from direct sunlight. We elevated the antennae on 1.5-meter lengths of PVC piping to minimize ground effects, which can reduce radio range. We buried the seismometers nearby, but far enough away that they remained undisturbed by any wind-induced shaking of the antenna pole. Typically, we mounted the microphone on the antenna pole and shielded it from the wind and elements with plastic tape. Installation took several minutes per node, and the equipment was sufficiently light and small that an individual could carry six stations in a large pack. The PVC poles were light but bulky and proved the most awkward part of each station to cart around.
Network Location and Topology
We installed our stations in a roughly linear configuration that radiated away from the volcano's vent and produced an aperture of more than three kilometers. We attempted to position the stations as far apart as the radios on each node would allow. Although our antennae could maintain radio links of more than 400 meters, the geography at the deployment site occasionally required installing additional stations to maintain radio connectivity. Other times, we deployed a node expecting it to communicate with an immediate neighbor but later noticed that the node was bypassing its closest companion in favor of a node closer to the base station. Most nodes communicated with the base station over three or fewer hops, but a few were moving data over as many as six.
In addition to the sensor nodes, we used several other pieces of equipment. Three FreeWave (www.freewave.com) radio modems provided a long-distance, reliable radio link between the sensor network and the observatory laptop. Each FreeWave required a car battery for power, recharged by solar panels. A small number of Crossbow (www.xbow.com) MicaZ sensor network nodes served supporting roles. One interfaced between the network and the FreeWave modem and another was attached to a GPS receiver to provide a global timebase.
Early Results
We deployed our sensor network at Volcán Reventador for more than three weeks, during which time we collected seismoacoustic signals from several hundred events. We've only just begun rigorously analyzing our system's performance, but we've made some early observations.
In general, we were pleased with our system's performance. During the 19-day deployment, we retrieved data from the network 61 percent of the time. Many short outages occurred because — due to the volcano's remote location — powering the logging laptop around the clock was often impossible. By far the longest continuous network outage was due to a software component failure, which took the system offline for three days until researchers returned to the deployment site to reprogram nodes manually.
Our event-triggered model worked well. During the deployment, our network detected 230 eruptions and other volcanic events, and logged nearly 107 Mbytes of data. Figure 3 shows an example of a typical earthquake our network recorded. By examining the data downloaded from the network, we verified that the local and global event detectors were functioning properly. As we described, we disabled sampling during data collection, implying that the system was unable to record two back-to-back events. In some instances, this meant that a small seismic event would trigger data collection, and we'd miss a large explosion shortly thereafter. We plan to revisit our approach to event detection and data collection to take this into account.

Figure 3. An event captured by our network. The event shown was a volcano tectonic (VT) event and had no interesting acoustic component. The data shown has undergone several rounds of postprocessing, including timing rectification. We show only seismic signals. (The figure plots the vertical seismic traces of the 16 node IDs against UTC time, from 04:07:10 to 04:07:50.)
Our deployment raises many exciting directions for future work. We plan to continue improving our sensor-network design and pursuing additional deployments at active volcanoes. This work will focus on improving event detection and prioritization, as well as optimizing the data-collection path. We hope to deploy a much larger (100-node) array for several months, with continuous Internet connectivity via a satellite uplink. We're collaborating with the SensorWebs project at NASA and the Jet Propulsion Lab to allow our ground-based sensor network to trigger satellite imaging of the volcano after a large eruption. We assembled the equipment required to test this idea at Reventador, but were unable to establish a reliable Internet connection at the deployment site via satellite.
A more ambitious research goal involves sophisticated distributed data-processing within the sensor network itself. Sensor nodes, for example, can collaborate to perform calculations of energy release, signal correlation, source localization, and perhaps tomographic imaging of the volcanic edifice. By pushing this computation into the network, we can greatly reduce the radio bandwidth requirements and scale up to much larger arrays. We're excited by the opportunities that sensor networks have opened up for geophysical studies.
References
1. G. Werner-Allen et al., "Monitoring Volcanic Eruptions with a Wireless Sensor Network," Proc. 2nd European Workshop on Wireless Sensor Networks (EWSN 05), IEEE Press, 2005; www.eecs.harvard.edu/~mdw/papers/volcano-ewsn05.pdf.
2. J.M. Lees, "The Magma System of Mount St. Helens: Non-linear High Resolution P-Wave Tomography," J. Volcanology and Geothermal Research, vol. 53, nos. 1–4, 1992, pp. 103–116.
3. J. Hill et al., "System Architecture Directions for Networked Sensors," Proc. 9th Int'l Conf. Architectural Support for Programming Languages and Operating Systems, ACM Press, 2000, pp. 93–104.
4. A. Woo, T. Tong, and D. Culler, "Taming the Underlying Challenges of Reliable Multihop Routing in Sensor Networks," Proc. 1st ACM Conf. Embedded Networked Sensor Systems (SenSys 03), ACM Press, 2003, pp. 14–27.
5. M. Maroti et al., "The Flooding Time Synchronization Protocol," Proc. 2nd ACM Conf. Embedded Networked Sensor Systems (SenSys 04), ACM Press, 2004, pp. 39–49.
6. P. Levis et al., "Trickle: A Self-Regulating Algorithm for Code Propagation and Maintenance in Wireless Sensor Networks," Proc. 1st Usenix/ACM Symp. Networked Systems Design and Implementation (NSDI 04), Usenix Assoc., 2004, pp. 15–28.
Geoffrey Werner-Allen is a second-year PhD candidate in computer science in the Division of Engineering and Applied Science at Harvard University. His research interests include wireless sensor networks, biologically inspired and distributed algorithms, tool development, and software engineering. Werner-Allen has an AB in physics from Harvard University. He is a student member of the ACM. Contact him at werner@eecs.harvard.edu; www.eecs.harvard.edu/~werner.
Konrad Lorincz is a fourth-year PhD candidate in computer science at Harvard University. His research interests include distributed systems, networks, wireless sensor networks, location tracking, and software engineering. Lorincz has an MS in computer science from Harvard University. Contact him at konrad@eecs.harvard.edu; www.eecs.harvard.edu/~konrad.
Mario Ruiz is an associate professor at the Escuela Politécnica Nacional and a third-year PhD student at the University of North Carolina, Chapel Hill. His research field is volcano seismology. Ruiz has an MS in geophysics from the New Mexico Institute of Technology. Contact him at mruiz@email.unc.edu.
Omar Marcillo is a graduate student in the Department of Earth Sciences at the University of New Hampshire. His primary research interest is the development of instrumentation for volcano monitoring and geophysical studies. Contact him at oed4@unh.edu.
Jeff Johnson is a research assistant professor of geophysics and volcanology in the Department of Earth Sciences at the University of New Hampshire. His research interests include eruption dynamics and volcano monitoring. Johnson has an MS from Stanford and a PhD in geophysics from the University of Washington. He is an active member of the American Geophysical Union, the Seismological Society of America, and the International Association of Volcanology and Chemistry of the Earth's Interior. Contact him at jeff.johnson@unh.edu; http://earth.unh.edu/johnson/johnson.htm.
Jonathan Lees is an associate professor in the Department of Geological Sciences at the University of North Carolina, Chapel Hill, where he specializes in geophysics, seismology, and volcanology. His research is directed toward understanding the dynamics of volcanic explosions as they relate to the shallow conduit system as well as the deep plumbing structure of the volcano edifice. Lees has a PhD in geophysics from the University of Washington. He is an active member of the American Geophysical Union, the Society of Exploration Geophysicists, and the Seismological Society of America. Contact him at jonathan_lees@unc.edu; www.unc.edu/~leesj.
Matt Welsh is an assistant professor of computer science at Harvard University. His research interests include operating system, network, and programming-language support for massive-scale distributed systems, including Internet services and sensor networks. Welsh has a PhD in computer science from the University of California, Berkeley. He is a member of the ACM and the IEEE. Contact him at mdw@eecs.harvard.edu; www.eecs.harvard.edu/~mdw.