Quality-driven Volcanic Earthquake Detection using Wireless Sensor Networks
Rui Tan1, Guoliang Xing1, Jinzhu Chen1, Wenzhan Song2, Renjie Huang2
1Michigan State University, USA; 2Washington State University, USA
Abstract—Volcano monitoring is of great interest to public
safety and scientific explorations. However, traditional volcanic
instrumentation such as broadband seismometers are expen-
sive, power-hungry, bulky, and difficult to install. Wireless
sensor networks (WSNs) offer the potential to monitor volcanoes
at unprecedented spatial and temporal scales. However, current
volcanic WSN systems often yield poor monitoring quality
due to the limited sensing capability of low-cost sensors and
unpredictable dynamics of volcanic activities. Moreover, they
are designed only for short-term monitoring due to the high
energy consumption of centralized data collection. In this
paper, we propose a novel quality-driven approach to achieving
real-time, in-situ, and long-lived volcanic earthquake detection.
By employing novel in-network collaborative signal processing
algorithms, our approach can meet stringent requirements
on sensing quality (low missing/false alarm rates and pre-
cise earthquake onset time) at low power consumption. We
have implemented our algorithms in TinyOS and conducted
extensive evaluation on a testbed of 24 TelosB motes as well
as simulations based on real data traces collected during 5.5
months on an active volcano. We show that our approach
yields near-zero false alarm and missing rates and less than one
second of detection delay while achieving up to 6-fold energy
reduction over the current data collection approach.
I. INTRODUCTION
In the last two decades, volcanic eruptions have led to
a death toll of over 30,000 and damage of billions of
dollars [1]. The recent eruptions of the volcano Eyjafjallajökull
in Iceland caused the disruption of air traffic across Eu-
rope. Traditional volcano monitoring systems often employ
broadband seismometers which, although can yield high-
fidelity seismic monitoring signals, are expensive, power-
hungry, bulky, and difficult to install. These limitations
have largely prevented them from wide deployment, even
for many threatening volcanoes. For instance, Mount St.
Helens, an active volcano in northwestern U.S., is currently
monitored by fewer than 10 stations [2], providing limited coverage and coarse-grained monitoring.
The advances of wireless sensor networks (WSNs) have
made it possible to greatly improve volcanic monitor-
ing quality through numerous low-cost sensors. Moreover,
WSNs enable fast ad hoc system deployment that was largely impossible in the past. Recent pilot deployments on several
active volcanoes [2]–[4] have demonstrated the feasibility
and scientific value of WSNs to volcano monitoring. How-
ever, the current efforts of these projects are mostly focused
on communication and networking issues such as reliable
data delivery, time synchronization, and network manage-
ment. In order to detect earthquake events, sensory data are
transmitted to the base station for centralized processing.
However, due to the sheer amount of raw data gathered
at high sampling rates, such a data collection approach
leads to excessive energy consumption and short system
lifetime. Moreover, it has poor timeliness due to the limited
bandwidth of low-cost sensors. For instance, as shown in [4],
collecting one minute of seismic data over a multi-hop link
can take up to six minutes. Although data transmission can
be reduced by event-triggered data collection approaches [4],
the existing earthquake detection algorithms [5] are heuristic
in nature and often lead to excessive event misses. For
instance, only about 5% of seismic events were successfully
detected in a recent WSN deployment at Volcán Reventador
in northern Ecuador [4].
In this paper, we push the state of the art toward real-time, in-
situ, and long-lived volcano monitoring systems with assured
sensing performance. In particular, we aim to completely
avoid raw data transmission by developing advanced in-
network signal processing algorithms for volcanic earth-
quake detection. To this end, the following challenges must
be addressed. First, a volcanic earthquake is a complex physical process characterized by highly dynamic magnitude and variable source location. These unpredictable dynamics
must be properly dealt with in the sensing algorithms.
Second, compared with traditional expensive monitoring
instruments, low-cost wireless sensors often have limited
sensing capability such as low signal-to-noise ratio and
narrow responsive frequency band. Therefore, they must
efficiently collaborate in signal processing to achieve the
stringent sensing quality requirements. Third, the computa-
tion as well as inter-node communication overhead must be
minimized to improve timeliness and extend system lifetime.
We make the following major contributions in this paper:
We develop a novel quality-driven approach to detect-
ing volcanic earthquakes based on collaborative signal
processing algorithms. Our fundamental methodology
is to drive the system design based on user’s require-
ments on system sensing quality while minimizing
sensors’ energy consumption.
We develop new sensing algorithms based on the ex-
tensive analysis of real data traces collected on Mount
St. Helens [2]. First, we propose a Bayesian detection
algorithm based on a novel joint statistical model of
seismic signal energy and frequency spectrum. Second,
we develop a near-optimal sensor selection algorithm
that chooses the minimum subset of informative sensors to yield system detection results. The above two algorithms enable the system to achieve satisfactory sensing quality in the presence of unpredictable dynamics of volcanic earthquakes. Moreover, they only generate light traffic from sensors to the base station and completely avoid the transmission of raw seismic data.
[Figure 1. System architecture. White blocks are components at a sensor (signal amplitude, FFT, multi-scale Bayesian detector, P-phase picker); shadowed blocks are components at the base station (sensor selection, decision fusion, system-level onset time estimation). Solid lines represent data flow (energy scales and local decisions flow to the base station; the second-precision system-level onset time is fed back to sensors, which output millisecond-precision earthquake onset times); dotted lines represent control flow (monitoring quality specification, sensor selection).]
We have implemented our algorithms on a testbed of
24 TelosB motes. We conduct testbed experiments and
extensive simulations based on real data traces collected
by 12 nodes on Mount St. Helens [2] that contain more
than 128 significant earthquake events. Experimental
results show that our approach yields near-zero false
alarm and missing rates and less than one second of
detection delay while achieving up to 6-fold energy
reduction over the current data collection approach.
Moreover, our approach allows a system to configure
its sensing quality under different energy budgets.
The rest of this paper is organized as follows. Section II
reviews related work. Section III provides an overview of
our approach. Section IV presents the earthquake detection
algorithm run by sensors locally. Section V develops a near-
optimal sensor selection algorithm. Section VI discusses
earthquake onset time estimation. Section VII presents im-
plementation details and Section VIII evaluates our ap-
proach. Section IX concludes this paper.
II. RELATED WORK
In 2004, four MICA2 motes were deployed on Volcán Tungurahua in central Ecuador [3], which is the first mote-
based volcano monitoring system. The system lived for three
days and successfully collected the data of at least 9 large
explosions. In 2005, the same group deployed 16 Tmote
nodes equipped with seismic and acoustic sensors at Volcán Reventador in northern Ecuador for three weeks [4] [6]. The
main objective of the above two deployments is to collect
high-resolution/fidelity sensor data for domain scientists. A
simple event-triggered data collection approach based on
the STA/LTA (short-term average over long-term average)
[5] earthquake detection algorithm is developed to reduce
data transmission. However, this heuristic approach cannot
yield provable and satisfactory detection performance. For
instance, although the systems had zero false alarm rate,
they suffered very low detection probabilities (about 5%)
[4]. Moreover, collected data are processed in a centralized
fashion leading to significant bandwidth requirement and
energy consumption.
In the Optimized Autonomous Space In-situ Sensorweb
(OASIS) project [2], 15 iMote2-based nodes were air-
dropped on Mount St. Helens in State of Washington in
July 2009. Previous efforts of this project were focused
on fundamental communication and networking issues such
as reliable data delivery and time synchronization. Simi-
lar to earlier deployments [4] [6], the heuristic STA/LTA
earthquake detection algorithm is adopted, which does not
provide provable sensing quality. To the best of our knowledge, the issue of online, real-time, in-network earthquake detection has not been addressed.
There exists a vast body of well-established tools and techniques for processing sensor data in the seismology community [5] [7]
[8]. However, most of them are designed to centrally pro-
cess seismic signals collected from traditional seismological
stations. Specifically, seismic data must be logged at the
stations and then transmitted or manually fetched to a base
station for centralized processing [4] [6].
III. APPROACH OVERVIEW
In this section, we provide an overview of our approach
to detecting volcanic earthquakes using a WSN. Our ap-
proach is designed to meet two key objectives of volcano
monitoring. First, the system sensing quality must satisfy
the Neyman-Pearson (NP) requirement [9] including upper-
bounded false alarm rate and lower-bounded detection prob-
ability. For instance, seismologists may request that no more
than 1% of detection reports are false alarms and the system
can successfully detect at least 90% of earthquake events.
Second, the computation and communication overhead of
sensors must be minimized to improve timeliness and extend
system lifetime.
We assume that the network comprises a base station
and a number of sensors distributed on the volcano. In this
paper, we assume that all sensors are of seismic modality,
which is consistent with several first-generation volcano
monitoring WSNs [2] [4]. Our approach comprises a group
of detection algorithms that run at sensors and the base
station, respectively. They work together to achieve the
requirements on sensing quality. A system architecture of
our approach is shown in Figure 1. Each sensor performs earthquake detection every sampling period based on the seismic frequency spectrum. To handle earthquake dynamics such as highly dynamic magnitude and variable source location, each sensor maintains separate statistical models of the frequency spectrum for different scales of received seismic signal energy. Our study shows that the frequency-
based detector typically has better detection performance
when the sensor receives higher signal energy. Therefore,
in our approach, the base station first selects a minimum
subset of informative sensors based on the signal energies
received by sensors while satisfying system sensing quality
requirements. The selected sensors then compute seismic
frequency spectrum using fast Fourier transform (FFT) and
make local detection decisions which are then transmitted
to the base station for fusion. In addition to the detection of
earthquake occurrences, node-level earthquake onset time is
critical for localizing earthquake source. In our approach,
the base station first identifies an individual earthquake and
estimates a coarse onset time. The coarse onset time is then
fed back to sensors, which will pick the P-phase (i.e., the
arrival time of wavefront) from buffered raw seismic data
using an existing algorithm [7].
Our approach has the following advantages. First, differ-
ent from existing heuristic earthquake detection algorithms
such as STA/LTA, our model-driven approach can meet
various user requirements on sensing quality, including
bounded false alarm rate and detection probability. Second,
by employing novel in-network data fusion schemes, our
approach incurs low communication overhead. Specifically,
in each sampling period, only the signal energy, represented by a single integer, needs to be sent to the base station. Local decisions made by sensors are transmitted to the base station only when the system detection performance requirement can be met. Third, the sensor selection algorithm allows
a network to achieve desired trade-off between system
sensing quality and computational overhead at sensors. In
particular, based on the requirement on energy-efficiency,
only the minimum number of sensors are selected to execute
computation-intensive signal processing algorithms such as
FFT.
IV. LOCAL EARTHQUAKE DETECTION AT SENSORS
In this section, we design an earthquake detection algorithm that runs locally at each sensor. In order to achieve sat-
isfactory sensing performance, the following questions must
be addressed. First, what information does a sensor need to
sample? Due to the resource limitations of low-cost sensors, the amount of sampled information must be minimized while the critical features of earthquakes are preserved. Second,
how to represent the sampled information using a sensing
model? In particular, the overhead of computing and storing
the model should be affordable for low-cost sensors. Third,
how to accurately detect earthquakes based on the sensing
model and real-time measurements? In the following, we
first present a case study of sensors’ measurements in
earthquakes and then address the above questions.
[Figure 2. Seismic signal energy received by Node1 and Node9 versus time in (a) Event 1 (signal energy ×10^4) and (b) Event 2 (signal energy ×10^2).]
[Figure 3. Frequency spectrum (0–50 Hz) of Node1 versus time in (a) Event 1 and (b) Event 2.]
A. A Case Study of Earthquake Sensing
Detecting volcanic earthquakes using low-cost accelerom-
eters in WSN is challenging due to the dynamics of earth-
quake, e.g., significantly variable magnitude and source
location. Moreover, as seismic signal attenuates with the
propagation distance, the sensors far away from the earth-
quake source receive weak signals and hence have lower
detectabilities. Such a phenomenon is referred to as the
locality of earthquake in this paper. In this section, we
illustrate the locality of earthquake using a case study, which
motivates us to propose a novel sensing model for volcanic
earthquake detection.
The case study is based on the seismic data traces
collected by 12 nodes in the OASIS project on Mount
St. Helens [2]. We examine micro-scale signal energy and
frequency spectrum which are two basic statistics computed
from sensors’ raw data. Figures 2(a) and 2(b) plot the signal
energy received by Node1 and Node9 in two earthquake
events, respectively. From the figures, we can see that
Node9 receives higher signal energy than Node1 in Event
1, while Node1 receives significantly higher signal energy
than Node9 in Event 2. This example shows that the signal
energy received by a sensor varies significantly due to the
change of the earthquake source location as well as its
magnitude. Therefore, simple threshold detection approaches
based on signal energy [10]–[12] would not address the
dynamics of volcanic earthquakes. Figures 3(a) and 3(b) plot
the spectrum of Node1 in the two events, respectively. As
the signal energy of Event 1 is much stronger than that of
Event 2 (about 100 times), Node1 has significantly different
frequency spectra in the two events. Specifically, the received
seismic energy is mainly distributed within [0 Hz, 5 Hz] in Event 1 and within [5 Hz, 10 Hz] in Event 2.
[Figure 4. Frequency spectrum (0–50 Hz) of Node9 versus time in (a) Event 1 and (b) Event 2.]
Figures 4(a) and 4(b)
plot the spectrum of Node9. We can see that Node9 has
insignificant frequency feature in Event 2 due to very weak
signals. Moreover, from Figures 3(a) and 4(a), we can see
that Node1 and Node9 have different frequency spectra in
the absence of earthquake. We can make two important
observations from this case study for constructing earth-
quake sensing model. First, in order to achieve satisfactory
sensing quality, signal energy and frequency spectrum must
be jointly considered for detecting earthquakes. Second, the
frequency spectra for different scales of signal energy sensed
by a sensor vary considerably and hence require different
mathematical representations.
B. Feature Extraction
To capture the significant temporal dynamics of earthquakes, sensors have to perform detection at short intervals, e.g., every second. In the following, we discuss efficient sam-
pling schemes to obtain both frequency spectrum and signal
energy. The seismic waves emitted by an earthquake can be
classified as the primary wave (P-wave) and shear wave (S-
wave). The P-wave is faster than the S-wave, and its frequency is typically from 1 Hz to 10 Hz, while the slower S-wave often has a frequency below 1 Hz [8]. Different from
the high-cost broadband seismometers that are traditionally
used by the seismological community, low-cost accelerom-
eters in WSNs, e.g., 1221J-002 from Silicon Designs [2],
are only responsive to the P-wave. As a result, the seismic energy measured by these accelerometers in the presence of an earthquake is mainly distributed within [1 Hz, 10 Hz]. As
shown in Section IV-A, frequency spectrum is expected to
be a robust feature for detecting earthquakes using low-
cost accelerometers. Suppose the sampling rate is f Hz. By applying FFT to the raw seismic data received during one second, a sensor obtains the frequency spectrum ranging from 0 Hz to f/2 Hz. Each component of the spectrum
represents the percentage of signal energy that is located
in the corresponding frequency.
The sampling rate of accelerometers can be high (up
to 400 Hz). In order to reduce the computation overhead
of sensors, we construct a feature vector from the frequency spectrum as follows. The frequency spectrum is evenly divided into n bins. Let x denote the feature vector at a sensor. The i-th component of the feature vector, i.e., x[i], is the sum of the spectrum components in the i-th bin. Hence, x[i] is the percentage of signal energy distributed in $\left(\frac{i \cdot f}{2n}\,\mathrm{Hz},\ \frac{(i+1) \cdot f}{2n}\,\mathrm{Hz}\right]$, where $i = 0, 1, \ldots, n-1$. As the dimension of the feature vector, i.e., n, determines the computation complexity of the training and detection algorithms at sensors, n should be chosen to achieve a satisfactory trade-off between detection accuracy and computation overhead.
In addition to the frequency spectrum, the signal energy received by sensors is also an important feature that quantifies the earthquake magnitude. The signal energy at a sensor is often estimated by the mean square of the seismic intensities during a sampling period [13]. To be consistent with the above frequency analysis, we let the sampling period be one second in this work. Let y_i denote the i-th seismic intensity and e denote the signal energy. For a sampling rate of f Hz, the signal energy is computed by $e = \frac{1}{f}\sum_{i=1}^{f}(y_i - \bar{y})^2$, where $\bar{y}$ is the mean of the seismic intensities during a sampling period.
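To make the per-second feature extraction concrete, the following sketch computes both the n-bin feature vector and the signal energy from one second of raw samples. It is a minimal illustration assuming NumPy; the function name, the use of numpy.fft, and the even bin splitting via array_split are illustrative choices, not the motes' fixed-point KissFFT implementation described in Section VII.

```python
# Sketch of the per-second feature extraction of Section IV-B (illustrative only).
import numpy as np

def extract_features(samples, f, n):
    """samples: one second of raw seismic intensities (length f, sampling rate f Hz).
    Returns (x, e): the n-bin frequency feature vector and the signal energy."""
    y = np.asarray(samples, dtype=float)
    # Signal energy: mean square of the zero-mean intensities, e = (1/f) * sum (y_i - ybar)^2.
    e = np.mean((y - y.mean()) ** 2)
    # One-second FFT yields the spectrum from 0 Hz to f/2 Hz.
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2
    power = power / max(power.sum(), 1e-12)   # each component: fraction of signal energy
    # x[i] sums the spectrum components in the i-th of n evenly sized bins.
    x = np.array([b.sum() for b in np.array_split(power, n)])
    return x, e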
C. A Multi-scale Sensing Model
We now propose a multi-scale Bayesian model that jointly
accounts for signal energy and frequency spectrum received
by a sensor to deal with the dynamics and locality of
earthquakes discussed in Section IV-A. In the multi-scale Bayesian model, the range of signal energy is divided into K consecutive sub-ranges, denoted by $\{R_p \mid p \in [1, K]\}$. Each sensor maintains K+1 n-dimensional normal distributions, denoted by $\{\mathcal{N}_p \mid p \in [0, K]\}$, where n is the dimension of the frequency feature vector. The distribution $\mathcal{N}_0$ represents the model of the frequency feature vector in the absence of earthquakes, and $\{\mathcal{N}_p \mid p \in [1, K]\}$ correspond to the cases when an earthquake happens and the received signal energy falls into the p-th energy range, i.e., $e \in R_p$. Each normal distribution $\mathcal{N}_p$ is characterized by its mean vector $\mathbf{m}_p$ and covariance matrix $\mathbf{C}_p$. Specifically, $\mathbf{m}_p[i] = \mathbb{E}[\mathbf{x}[i] \mid e \in R_p]$ and $\mathbf{C}_p[i,j] = \mathrm{cov}(\mathbf{x}[i] \mid e \in R_p,\ \mathbf{x}[j] \mid e \in R_p)$, where $\mathbf{x}[i]$ is the i-th component of the frequency feature vector. With the above model, the frequency spectra for different scales of signal energy are characterized by separate normal distributions that carry sensing quality information. Such a model allows us to precisely describe a sensor's performance in the presence of earthquake dynamics and locality.
We now discuss how to divide the signal energy range.
The range of signal intensity measured by a sensor depends
on its bit-depth and calibration. Therefore, for different
sensor products, the range of signal intensity varies sig-
nificantly. However, through proper normalization to signal
intensity, we can develop a universal scale scheme for signal
energy. In this work, we employ a base-10 logarithmic scale
to represent the signal energy range, which is consistent
with many widely adopted earthquake magnitude scales
such as the Richter magnitude scale [14]. Specifically, we
let $p = \lfloor \log_{10} e \rfloor$, where e is the received signal energy. Therefore, the p-th energy scale range, $R_p$, is $[10^p, 10^{p+1})$, and p is referred to as the energy scale hereafter. For example, the signal energy ranges from 10 to 10^6 for the data traces collected in the OASIS project [2], and therefore the energy scale is from 1 to 6.
In order to build the multi-scale Bayesian model, we need to compute the mean vector m_p and covariance matrix C_p from enough samples. As both the mean and covariance can be updated efficiently with incremental algorithms when a new sample is available, the model learning can be performed on each sensor locally at low cost. Specifically, a sensor learns its sensing model as follows. When no earthquake occurs, the sensor updates the distribution N_0 using the currently extracted frequency feature vector; otherwise, it first computes the energy scale p and then updates the corresponding distribution N_p. This model learning process
can be conducted offline with data traces. Alternatively, it
can be conducted online with the ground truth informa-
tion from high-quality sensors. Seismological monitoring
infrastructures already deployed on active volcanoes can
be used to generate ground truth information for training
newly deployed low-cost sensors. As these infrastructures
are often power-hungry, they can be turned off when the
training completes.
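The incremental learning step can be sketched as follows. The paper only states that the mean and covariance admit efficient incremental updates; the Welford-style recurrence below is one standard possibility, not necessarily the scheme used on the motes, and all names are illustrative.

```python
# Minimal sketch of incremental model learning (Welford-style update, assumed).
import math
import numpy as np

class ScaleModel:
    """Running estimate of (m_p, C_p) for one energy scale p."""
    def __init__(self, n):
        self.k = 0                         # number of feature vectors seen so far
        self.m = np.zeros(n)               # running mean vector m_p
        self.M2 = np.zeros((n, n))         # running sum of centered outer products

    def update(self, x):
        self.k += 1
        delta = x - self.m
        self.m += delta / self.k           # updated mean after this sample
        self.M2 += np.outer(delta, x - self.m)

    def cov(self):
        return self.M2 / (self.k - 1)      # sample covariance C_p (needs k >= 2)

def train_step(models, x, e, earthquake):
    """Route the feature vector x to N_0 (no earthquake) or N_p, p = floor(log10 e)."""
    p = int(math.floor(math.log10(e))) if earthquake else 0
    models[p].update(x)
```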
D. Local Bayesian Detector
Based on the multi-scale Bayesian model presented in
Section IV-C, we design a Bayesian detector for each sensor
to achieve optimal local detection performance. The detector makes a decision based on both the energy scale p and the frequency feature vector x. The local decisions of sensors are then fused at the base station to improve system sensing quality, which will be discussed in Section V. A sensor decides between the hypotheses that there is an earthquake or not (denoted by H_p and H_0, respectively):
$$H_0: \mathbf{x} \sim \mathcal{N}(\mathbf{m}_0, \mathbf{C}_0); \qquad H_p: \mathbf{x} \sim \mathcal{N}(\mathbf{m}_p, \mathbf{C}_p).$$
Let I denote the local decision made by the sensor. Specifically, if the sensor accepts the null hypothesis H_0, then I = 0; otherwise, I = 1. The detection performance is usually characterized by two metrics, namely, the false alarm rate (denoted by P_F) and the detection probability (denoted by P_D). P_F is the probability that the sensor decides I = 1 when the ground truth is H_0; P_D is the probability that the sensor decides I = 1 when the ground truth is H_p. Among many existing decision criteria, the minimum error rate criterion is the most widely adopted one that jointly accounts for false alarms and misses. Moreover, in contrast to other complicated decision criteria, the minimum error rate criterion has a closed-form decision function, which largely reduces the computation overhead at sensors. Given the frequency feature x, the decision functions for minimum error rate are [9]
$$g_i(\mathbf{x}) = \ln P(H_i) - \frac{1}{2}\ln|\mathbf{C}_i| - \frac{1}{2}(\mathbf{x}-\mathbf{m}_i)^T \mathbf{C}_i^{-1} (\mathbf{x}-\mathbf{m}_i)$$
for $i \in \{0, p\}$, where $P(H_i)$ is the prior probability of the ground truth $H_i$ and $|\mathbf{C}_i|$ represents the determinant of $\mathbf{C}_i$. The local detection decision I is made by the test $g_0(\mathbf{x}) \underset{I=1}{\overset{I=0}{\gtrless}} g_p(\mathbf{x})$.
However, the matrix computations are too expensive for low-
cost sensors when the dimension is high (e.g., up to 10). In
our approach, if sensors are trained in an online fashion as
discussed in Section IV-C, each sensor transmits the mean
vectors and covariance matrices to the base station, which
computes the determinant and inverse of the covariance
matrices and then transmits them back to sensors.
Under the above decision rule, the false alarm rate and detection probability of the sensor are given by $P_F = \int_R \phi(\mathbf{x} \mid \mathbf{m}_0, \mathbf{C}_0)\,d\mathbf{x}$ and $P_D = \int_R \phi(\mathbf{x} \mid \mathbf{m}_p, \mathbf{C}_p)\,d\mathbf{x}$, where $R = \{\mathbf{x} \mid g_0(\mathbf{x}) < g_p(\mathbf{x})\}$ and $\phi(\mathbf{x} \mid \mathbf{m}_i, \mathbf{C}_i)$ is the probability density function (PDF) of the normal distribution $\mathcal{N}(\mathbf{m}_i, \mathbf{C}_i)$. Specifically,
$$\phi(\mathbf{x} \mid \mathbf{m}_i, \mathbf{C}_i) = \frac{1}{(2\pi)^{n/2}\,|\mathbf{C}_i|^{1/2}} \exp\!\left(-\frac{(\mathbf{x}-\mathbf{m}_i)^T \mathbf{C}_i^{-1} (\mathbf{x}-\mathbf{m}_i)}{2}\right),$$
where n is the dimension of x. We note that each pair $(H_0, H_p)$, where $p \in [1, K]$, gives a pair $(P_F, P_D)$. However, it is usually difficult to obtain a closed-form expression for the integration region R needed to compute P_F and P_D. In our approach, the base station computes the P_F and P_D for each pair $(H_0, H_p)$ through Monte Carlo simulation. The P_F's and P_D's for each sensor are stored at the base station and are used to select the most informative sensors to detect earthquakes, as discussed in Section V.
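The following sketch illustrates the decision rule and the Monte Carlo estimation of (P_F, P_D) at the base station. The equal priors, the seed, and all function names are illustrative assumptions, not the paper's code.

```python
# Sketch of the minimum-error-rate detector and Monte Carlo (P_F, P_D) estimation.
import numpy as np

def g(x, m, C, prior):
    """g_i(x) = ln P(H_i) - 0.5 ln|C_i| - 0.5 (x - m_i)^T C_i^{-1} (x - m_i)."""
    d = x - m
    _, logdet = np.linalg.slogdet(C)
    return np.log(prior) - 0.5 * logdet - 0.5 * d @ np.linalg.solve(C, d)

def local_decision(x, m0, C0, mp, Cp, prior0=0.5):
    """I = 1 (earthquake) iff g_p(x) > g_0(x); equal priors assumed here."""
    return int(g(x, mp, Cp, 1.0 - prior0) > g(x, m0, C0, prior0))

def monte_carlo_pf_pd(m0, C0, mp, Cp, trials=10000, seed=0):
    """Estimate P_F and P_D for the hypothesis pair (H_0, H_p) by sampling."""
    rng = np.random.default_rng(seed)
    pf = np.mean([local_decision(x, m0, C0, mp, Cp)
                  for x in rng.multivariate_normal(m0, C0, trials)])
    pd = np.mean([local_decision(x, m0, C0, mp, Cp)
                  for x in rng.multivariate_normal(mp, Cp, trials)])
    return pf, pd
```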
V. DYNAMIC SENSOR SELECTION FOR DECISION FUSION
As discussed in Section IV-D, sensors can yield local
detection decisions by a Bayesian detector. However, the
accuracy of these decisions may be poor due to the lim-
ited sensing capability of low-cost sensors. Therefore, a
system-wide detection consensus is often desired for high-
fidelity volcano monitoring. In our approach, the base station
generates system detection decision by fusing the local
decisions from sensors. As sensors yield different sensing
performances due to the dynamics and locality of volcanic
earthquake as discussed in Section IV, it is desirable for the
base station to select a subset of sensors with the best signal
quality to achieve maximum system detection performance.
Moreover, the sensor selection avoids unnecessary expensive
feature extraction at the sensors with low signal quality. In
this section, we first introduce the decision fusion model
and analyze its performance. We then formulate the sensor
selection as an optimization problem and develop a near-
optimal solution.
A. Decision Fusion Model
As one of the basic data fusion schemes [15], decision fusion is preferable for WSNs due to its low communication cost [11]. We use a widely adopted decision fusion model called equal gain combining (EGC) [10]–[12], which fuses sensors' local decisions with equal weight. Suppose there are n sensors taking part in the fusion and let I_i denote the local decision of sensor i. The EGC rule compares the test statistic $\Lambda = \sum_{i=1}^{n} I_i$ against a threshold denoted by η. If Λ exceeds η, the base station decides that an earthquake has occurred; otherwise, it makes a negative decision.
We now analyze the system detection performance of the EGC fusion model. In the absence of earthquakes, the local decision of sensor i, $I_i \mid H_0$, follows the Bernoulli distribution with success probability $\alpha_i$. As sensors have different false alarm rates, the test statistic $\Lambda \mid H_0$ follows a generalized binomial distribution. The probability mass function (PMF) of $\Lambda \mid H_0$ is given by
$$P(\Lambda = \lambda \mid H_0) = \sum_{\|S\|=\lambda}\ \prod_{i \in S} \alpha_i \prod_{j \in S^C} (1 - \alpha_j), \qquad (1)$$
where S is any subset of sensors with size λ and $S^C$ represents the complement of S. Hence, the cumulative distribution function (CDF), denoted by $F_{\Lambda|H_0}(x)$, is given by $F_{\Lambda|H_0}(x) = \sum_{\lambda=0}^{\lfloor x \rfloor} P(\Lambda = \lambda \mid H_0)$. Therefore, the system false alarm rate can be computed as $P_F = 1 - F_{\Lambda|H_0}(\eta)$. Similarly, the system detection probability can be computed as $P_D = 1 - F_{\Lambda|H_1}(\eta)$. Note that replacing $\alpha_i$ in (1) with $\beta_i$ yields the PMF of $\Lambda \mid H_1$. However, computing the CDF of Λ has a complexity of $O(2^n)$ and hence is infeasible when the number of fused sensors is large.
We now propose approximate formulae for the system detection performance of the EGC fusion model when the number of fused sensors is large. As sensors independently make local decisions, the mean and variance of $\Lambda \mid H_0$ are given by $\mathbb{E}[\Lambda \mid H_0] = \sum_{i=1}^{n} \mathbb{E}[I_i \mid H_0] = \sum_{i=1}^{n} \alpha_i$ and $\mathrm{Var}[\Lambda \mid H_0] = \sum_{i=1}^{n} \mathrm{Var}[I_i \mid H_0] = \sum_{i=1}^{n} (\alpha_i - \alpha_i^2)$. Lyapunov's central limit theorem (CLT) [16] is a CLT variant for independent but non-identically distributed variables. We have proved the Lyapunov condition for a sequence of Bernoulli random variables in [12]. Therefore, according to Lyapunov's CLT, $\Lambda \mid H_0$ follows a normal distribution when n is large, i.e., $\Lambda \mid H_0 \sim \mathcal{N}\!\left(\sum_{i=1}^{n} \alpha_i,\ \sum_{i=1}^{n} (\alpha_i - \alpha_i^2)\right)$. Similarly, $\Lambda \mid H_1 \sim \mathcal{N}\!\left(\sum_{i=1}^{n} \beta_i,\ \sum_{i=1}^{n} (\beta_i - \beta_i^2)\right)$. Hence, the system false alarm rate and detection probability can be approximated by
$$P_F \simeq Q\!\left(\frac{\eta - \sum_{i=1}^{n} \alpha_i}{\sqrt{\sum_{i=1}^{n} (\alpha_i - \alpha_i^2)}}\right), \qquad P_D \simeq Q\!\left(\frac{\eta - \sum_{i=1}^{n} \beta_i}{\sqrt{\sum_{i=1}^{n} (\beta_i - \beta_i^2)}}\right),$$
where $Q(\cdot)$ is the Q-function of the standard normal distribution, i.e., $Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-t^2/2}\,dt$.
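Both performance computations can be sketched as follows: an exact evaluation of (1) by subset enumeration (exponential in n, usable only for small n) and the Lyapunov-CLT normal approximation. This is a plain restatement of the formulas above, not the paper's base-station code.

```python
# Sketch of EGC fusion performance: exact generalized-binomial PMF (eq. (1))
# and the large-n normal approximation.
from itertools import combinations
from math import erf, sqrt, prod

def Q(x):
    """Q-function of the standard normal distribution."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def pmf_exact(rates, lam):
    """P(Lambda = lam) when sensor i fires with probability rates[i] (eq. (1))."""
    idx = range(len(rates))
    return sum(prod(rates[i] for i in S) * prod(1 - rates[j] for j in idx if j not in S)
               for S in combinations(idx, lam))

def system_pf_pd_exact(alphas, betas, eta):
    pf = 1 - sum(pmf_exact(alphas, lam) for lam in range(int(eta) + 1))
    pd = 1 - sum(pmf_exact(betas, lam) for lam in range(int(eta) + 1))
    return pf, pd

def system_pf_pd_normal(alphas, betas, eta):
    mu0, v0 = sum(alphas), sum(a - a * a for a in alphas)
    mu1, v1 = sum(betas), sum(b - b * b for b in betas)
    return Q((eta - mu0) / sqrt(v0)), Q((eta - mu1) / sqrt(v1))
```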
B. Dynamic Sensor Selection Problem
The case study in Section IV-A shows that a sensor exhibits different frequency patterns for different energy
scales. Moreover, sensors receive significantly different energy scales due to the locality of earthquakes. Our objective is to select a subset of sensors with the best signal quality to maximize system detection performance. To this end, we first examine the sensing performance diversity of sensors based on data traces collected in OASIS [2]. The result motivates us to formulate a dynamic sensor selection problem to achieve a satisfactory trade-off between system detection performance and computation overhead at sensors. For each sensor, we compute the Bhattacharyya distance [9], which is a widely adopted detectability measure, between the p-th distribution N_p and the noise distribution N_0 within its multi-scale Bayesian model. Figure 5 plots the error bars of the Bhattacharyya distance and the corresponding detection error rate versus the energy scale p. We can see that the frequency-based detector has better performance when a sensor receives stronger signal energy. Moreover, sensors show significant performance variance for the same energy scale. Figure 6 plots the maximum energy scale measured by sensors in 40 earthquake events.
[Figure 5. Bhattacharyya distance and corresponding detection error rate versus energy scale p. The error bars show the standard deviation over 12 sensors.]
[Figure 6. Dynamics of the energy scale received by 12 sensors across 40 earthquake events.]
We can make two important observations from Figures 5
and 6. First, for a particular event, sensors have different de-
tection performances due to different received energy scales.
As a result, sensors with poor sensing performances should
be excluded from participating in system decision fusion.
Moreover, if a sensor has sufficient sensing performance
for system decision fusion, it must make local decisions
by costly FFT to extract frequency features. Therefore, it
is desirable to select the minimum subset of informative
sensors to fuse their decisions. Second, each sensor has un-
predictable signal energy pattern due to the stochastic nature
of earthquake magnitude and source location. Although the
optimal sensor selection can be pre-computed for all possible
combinations of sensors’ energy scales, both the time and
storage complexities are exponential, i.e., $O(K^N)$, where K is the number of energy scales and N is the total number
of sensors. Therefore, the sensors that have the best sensing
performances must be dynamically selected in each sampling
period.
We now formally formulate the sensor selection problem.
We aim to select the minimum number of sensors to be
involved in the feature extraction and decision fusion pro-
cesses, subject to bounded system detection performance.
We adopt the Neyman-Pearson (NP) criterion [9] for char-
acterizing system detection performance, i.e., we allow users
to specify the upper and lower bounds on system false
alarm rate and detection probability, respectively. NP cri-
terion is useful when the two types of errors, i.e., false
alarms and misses, need separate considerations. There
exists a fundamental trade-off between the two metrics for
any detection system, i.e., higher detection probability is
always achieved at the price of higher false alarm rate
[15]. Depending on the characteristics of volcanoes to be
monitored, seismologists may have different requirements
on false alarm rate and detection probability. For instance,
for an active volcano with frequent tiny earthquakes, it is
desirable to reduce false alarms to avoid excessive sensor
energy consumption and prolong system lifetime. On the
other hand, for a dormant volcano, it is more critical to
detect every important earthquake event while a higher false
alarm rate can be tolerated. Note that our approach can
be easily extended to address other performance metrics
such as error rate that jointly accounts for false alarms and
misses. In Appendix A, we discuss the extension to address
the minimax error rate criterion [9]. Based on the decision
fusion model in Section V-A, the sensor selection problem
is formally formulated as follows:
Problem 1. Given the local false alarm rates and detection probabilities of all sensors, i.e., $\{\alpha_i, \beta_i \mid i \in [1, N]\}$, find a subset of sensors, S, and the decision fusion threshold at the base station, η, such that $\|S\|$ is minimized, subject to the constraints that the system false alarm rate is upper-bounded by α and the system detection probability is lower-bounded by β.
The brute-force solution, i.e., iterating over all possible subsets of sensors, has an exponential complexity of $O(2^N)$. As
the dynamic sensor selection is conducted every sampling
period (one second in our system), such a complexity would
impede system timeliness. In the rest of this section, we first
reduce the complexity of Problem 1 with approximations and
then develop a near-optimal sensor selection algorithm with
polynomial complexity.
C. Problem Reduction
We adopt a divide-and-conquer strategy to solve Problem 1. The sub-problem of Problem 1 is to select n sensors out of the total N sensors such that the system detection performance is optimized. By iterating n from 1 to N, Problem 1 is solved once the optimal solution of the sub-
problem satisfies the detection performance requirement. The brute-force search for the sub-problem has a complexity of $O\binom{N}{n}$. Our analysis shows that the sub-problem can be reduced to a sorting problem with polynomial complexity.
[Figure 7. Error of P_F due to the normal approximation versus the number of fused sensors n, for α = 0.05 and α = 0.01 (99.5% confidence level).]
We first analyze the condition for the NP criterion. We assume that n is large enough such that the detection performance approximations made in Section V-A are accurate. We will discuss how to deal with the inaccuracy caused by small n in Section V-D. Due to the fundamental trade-off between false alarm rate and detection probability [15], the detection probability is maximized when the false alarm rate is set to its upper bound. Therefore, by letting $P_F = \alpha$, the detection threshold at the base station is
$$\eta = \sum_{i=1}^{n} \alpha_i + Q^{-1}(\alpha) \cdot \sqrt{\sum_{i=1}^{n} (\alpha_i - \alpha_i^2)}, \qquad (2)$$
where $Q^{-1}(\cdot)$ is the inverse function of $Q(\cdot)$. Hence, the system detection probability is $P_D = Q(f)$, where
$$f = \frac{Q^{-1}(\alpha)\sqrt{\sum_{i=1}^{n} (\alpha_i - \alpha_i^2)} + \sum_{i=1}^{n} (\alpha_i - \beta_i)}{\sqrt{\sum_{i=1}^{n} (\beta_i - \beta_i^2)}}. \qquad (3)$$
As $Q(\cdot)$ is a decreasing function, P_D is maximized if f is minimized. Therefore, the sub-problem is equivalent to minimizing f.
However, the function f has a complex non-linear relationship with each sensor's detection performance, represented by $\alpha_i$ and $\beta_i$. We now propose a linear approximation to f. Monte Carlo simulations show that f increases with $\sum_{i=1}^{n} (\alpha_i - \beta_i)$ with high probability (95%). The details of the simulations can be found in Appendix B. Therefore, the sub-problem is reduced to selecting n sensors that minimize $\sum_{i=1}^{n} (\alpha_i - \beta_i)$, which can be easily solved by sorting the sensors in ascending order of $(\alpha_i - \beta_i)$.
D. Dynamic Sensor Selection Algorithm
In this section, we develop a dynamic sensor selection
algorithm to solve Problem 1 based on the analysis in Sec-
tion V-C. Before presenting the algorithm, we first discuss
how to handle the inaccuracy of the normal approximations
made in Section V-A. Regarding the divide-and-conquer
strategy proposed in Section V-C, when n is small, we compute the PMF of Λ (given by (1)) and then search for the optimal detection threshold. A question is when we should switch to the normal approximations. We propose
a numerical approach to determine the switching point for n, denoted by n_s. We investigate the impact of n on the accuracy of the system false alarm rate. Specifically, we first compute the detection threshold η using (2) with randomly generated local false alarm rates $\alpha_i$. We then compute the true P_F with the threshold η using (1). Figure 7 plots the error between the requested P_F (i.e., α) and the true P_F. For instance, if α = 0.05 and n = 15, the maximum error of P_F is about 0.05 and hence the true P_F is within [α − 0.05, α + 0.05]. The figure also shows that if a more stringent requirement is imposed, i.e., a smaller α, the error decreases accordingly. We can evaluate the impact of n on the accuracy of the detection probability as well. With such an approach, we can choose the switching point n_s to achieve the desired accuracy.
[Figure 8. Morphological processing of an earthquake event that occurred on October 31, 2009: (a) the original per-second decisions; (b) after applying the opening operator to (a); (c) after applying the closing operator to (b).]
The pseudocode of our near-optimal dynamic sensor
selection algorithm is listed in Algorithm 1. With the so-
lution given by Algorithm 1, the base station will request
the selected sensors to perform FFT and make their local
decisions. Finally, the base station compares the sum of
local decisions against the detection threshold ηto make
a system detection decision. In the absence of earthquake,
Algorithm 1 would terminate without a solution. As a
result, no sensor will be selected and hence costly seismic
processing algorithms such as FFT can be avoided.
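A compact sketch of the large-n branch of Algorithm 1 is given below. It sorts sensors by (α_i − β_i) per Section V-C and applies equations (2) and (3); the exact small-n branch based on (1) is omitted here for brevity, and n_s = 15 is only a placeholder for the empirically chosen switching point.

```python
# Sketch of the large-n (normal-approximation) branch of Algorithm 1.
from math import sqrt
from statistics import NormalDist

def select_sensors(alphas, betas, alpha, beta, ns=15):
    """Returns (S, eta) with a minimum number of sensors, or None if infeasible."""
    Qinv = NormalDist().inv_cdf                    # Q^{-1}(a) = inv_cdf(1 - a)
    order = sorted(range(len(alphas)), key=lambda i: alphas[i] - betas[i])
    for n in range(max(ns, 1), len(alphas) + 1):   # small-n exact branch omitted
        S = order[:n]                              # top n sensors by (alpha_i - beta_i)
        va = sum(alphas[i] - alphas[i] ** 2 for i in S)
        vb = sum(betas[i] - betas[i] ** 2 for i in S)
        # f from equation (3); the system detection probability is P_D = Q(f).
        f = (Qinv(1 - alpha) * sqrt(va) + sum(alphas[i] - betas[i] for i in S)) / sqrt(vb)
        if 1 - NormalDist().cdf(f) >= beta:        # Q(f) >= beta: requirement met
            eta = sum(alphas[i] for i in S) + Qinv(1 - alpha) * sqrt(va)  # eq. (2)
            return S, eta
    return None
```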
VI. EARTHQUAKE ONSET TIME ESTIMATION
In addition to the accurate detection of earthquake occur-
rences, another important requirement of volcano monitoring
is to identify individual earthquakes as well as estimate
their onset times and durations [4] [8] [7]. In particular,
the fine-grained per-node earthquake onset times are critical
for localizing earthquakes. In this section, we develop a
two-phase approach to estimating earthquake onset time
with millisecond precision. Specifically, in the first phase,
the base station correlates the per-second detection results
yielded by the decision fusion process to identify the on-
set time and duration of individual earthquakes, which is
referred to as system-level onset time estimation. In the
second phase, given the system-level onset time, each sensor
locally executes an existing P-phase (i.e., the arrival time
of P-wave front) picking algorithm [7] and outputs onset
time estimate with improved accuracy, which is referred to as node-level onset time estimation. By taking advantage of accurate occurrence detection results, such a two-phase approach avoids unnecessary execution of the computation-intensive P-phase picker at sensors.
Algorithm 1 Dynamic sensor selection algorithm
Input: local P_F's and P_D's $\{\alpha_i, \beta_i \mid i \in [1, N]\}$; system performance requirements $\{\alpha, \beta\}$
Output: minimum subset S, detection threshold η
1: sort sensors in ascending order of $(\alpha_i - \beta_i)$
2: for n = 1 to N do
3:   if n < n_s then
4:     for all subsets S of size n do
5:       compute the PMF of Λ using (1)
6:       if there exists η such that system P_F ≤ α and P_D ≥ β then
7:         return S and η
8:       end if
9:     end for
10:  else
11:    S = {top n sensors}
12:    compute f for S using (3)
13:    if Q(f) ≥ β then
14:      compute η for S using (2)
15:      return S and η
16:    end if
17:  end if
18: end for
19: exit with no solution
A. System-level Onset Time Estimation
In this section, we discuss how to temporally correlate the
per-second detection results yielded by the decision fusion
process to identify individual earthquake events. Although
the system detection performance can be improved by
fusing sensors’ local detection results, the possibilities of
false alarms and misses cannot be completely eliminated.
In particular, they must be properly dealt with in order
to precisely estimate the onset time of earthquake events.
Figure 8(a) shows the per-second decision sequence yielded
by the decision fusion process. A key observation from real
measurements is that a true earthquake event often generates
clustered detection results and hence isolated positive or
negative decisions are likely false alarms and misses. Based
on this observation, we propose to use mathematical mor-
phology [17] to identify individual events in the presence
of random detection errors. Mathematical morphology is
a widely adopted tool to identify geometrical structures in
image processing.
In the morphological processing, by applying the opening
operator [17] on the decision sequence, multiple continuous
false alarms can be eliminated if the number is less than
the operator diameter. Moreover, by applying the closing
operator [17], multiple continuous misses can be restored as
successful detections if the number is less than the operator
diameter. We have implemented the two morphological oper-
ators and applied the combination of them to the per-second
decision sequence. In this work, the diameter of the opening
operator is set to be 3. Under such a setting, we ignore
any detected event that lasts shorter than 3 seconds. As
shown in Figure 8(b), several isolated false alarms before the
earthquake event are eliminated by the opening operator. The
diameter of the closing operator should be set according to
the earthquake recurrence interval [8] of the volcano. In this
work, we set it to 11, and the closing operator shows satisfactory
performance. As shown in Figure 8(c), the closing operator
can join the fragmented positive decisions to yield a sin-
gle earthquake event. Given the morphologically processed
detection decisions, it is easy to determine the earthquake
onset time and duration.
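The opening and closing operators on a binary per-second decision sequence can be sketched as follows, using flat structuring elements; the boundary handling (window clipping) is an implementation choice not specified in the paper, and the example sequence is fabricated for illustration.

```python
# Sketch of the morphological cleanup of the per-second decision sequence.
def erode(seq, d):
    h = d // 2
    return [int(all(seq[max(0, i - h):i + h + 1])) for i in range(len(seq))]

def dilate(seq, d):
    h = d // 2
    return [int(any(seq[max(0, i - h):i + h + 1])) for i in range(len(seq))]

def opening(seq, d):   # erosion then dilation: removes positive runs shorter than d
    return dilate(erode(seq, d), d)

def closing(seq, d):   # dilation then erosion: fills negative gaps shorter than d
    return erode(dilate(seq, d), d)

# Per Section VI-A: opening with diameter 3, then closing with diameter 11.
decisions = [0]*5 + [1] + [0]*6 + [1]*4 + [0]*2 + [1]*3 + [0]*12
cleaned = closing(opening(decisions, 3), 11)
# The isolated false alarm is removed and the fragmented runs merge into one event.
```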
B. Node-level Onset Time Estimation
The granularity of the system-level onset time estimation
is equal to the sampling period, which is one second in
our system. However, localizing the earthquake source requires
the onset times estimated by spatially distributed nodes with
millisecond precision [8]. We now discuss our approach to
node-level onset time estimation with improved accuracy.
Once the morphological filter presented in Section VI-A
yields a system-level onset time estimate, the base station
notifies the sensors to perform node-level onset time esti-
mation by running the automatic P-phase picking algorithm
proposed in [7]. Given a coarse estimate of earthquake onset
time, the algorithm constructs two regression models for the
buffered seismic data before and after the coarse onset time,
respectively. The algorithm then picks a millisecond onset
time that has maximum likelihood to match the regression
models. It is important to note that the algorithm relies
on correct onset time estimate with the granularity of one
second, which is guaranteed by the other modules in our
approach.
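For intuition only, the sketch below is a heavily simplified stand-in for the picker of [7]: instead of the autoregressive models used there, it fits a constant-variance model before and after each candidate onset sample and picks the sample minimizing the two-segment AIC. It illustrates the "two models around a coarse onset" idea, not the actual algorithm.

```python
# Simplified two-segment AIC onset picker (illustrative stand-in for [7]).
import numpy as np

def pick_onset(y, lo, hi):
    """y: buffered raw samples around the coarse second-level onset; search the
    sample index k in [lo, hi) that best splits noise from signal (lo >= 2,
    hi <= len(y) - 1 assumed)."""
    y = np.asarray(y, dtype=float)
    best_k, best_aic = lo, np.inf
    for k in range(lo, hi):
        v1 = np.var(y[:k]) + 1e-12       # variance of the pre-onset segment
        v2 = np.var(y[k:]) + 1e-12       # variance of the post-onset segment
        aic = k * np.log(v1) + (len(y) - k) * np.log(v2)
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k                         # convert to time via the sampling rate
```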
VII. IMPLEMENTATION
We have implemented the proposed detection algorithms
in TinyOS 2.1.0 on the TelosB platform and conducted testbed experiments in the laboratory. In future work, we plan to
deploy our implementation on the OASIS system [2] that is
currently monitoring Mount St. Helens. Our implementation
uses 45.3KB ROM and 9.5KB RAM when a sensor buffers
8 seconds of raw data. Several important implementation
details are presented as follows.
Data acquisition: To improve the realism of testbed ex-
periments, we create a 320 KB volume on the mote's flash
and load it with the seismic data traces collected in OASIS
[2]. We implement a nesC module that provides the standard
ReadStream interface to read seismic data from flash to sim-
ulate data acquisition in real deployments. A node acquires
100 seismic intensities every sampling period. When the
sampling period is set to be one second, the sampling rate
is consistent with previous deployments [2] [4].
Seismic processing: We use the KissFFT [18] library to
compute the frequency spectrum of seismic signals. In par-
ticular, we use the fixed-point FFT routines that are suitable
for the 16-bit processor on TelosB mote. We modified the
P-phase picking software developed in [7] to run in TinyOS.
However, the picker requires 6.5 KB ROM and 12.5 KB RAM, which are not available on the TelosB platform. Hence, the picker is only evaluated in the TinyOS simulator (TOSSIM) [19]. In future work, we plan to evaluate it on a more powerful hardware platform such as the iMote2 nodes used in the OASIS system [2].
Networking: Sensors are organized into a multi-hop tree
rooted at the base station. In order to achieve timeliness,
sensors are scheduled in a TDMA fashion. Specifically, a
sensor reserves 250 ms for the FFT and Bayesian detector
in each sampling period. The remaining time is divided into
a number of slots, which are distributed among sensors for
transmitting energy scales and local decisions. In order to
reduce transmissions, the packets are aggregated along the
routing path to the base station. For instance, when a non-
leaf node has received all the energy scales from its children,
it aggregates them together with its own into a single packet
before forwarding. In our implementation, an energy scale
entry is 1 byte where node ID uses 5 bits and energy scale
uses 3 bits. Moreover, to improve reliability, a sensor buffers
energy scale or decision packets from its children for at most
8 sampling periods. When a sensor has received all packets
from its children for the current sampling period, it sends out
the aggregated packets for the current and previous sampling
periods. The sensor selection and decision fusion algorithms
presented in Section V are implemented in Java on a desktop
computer that serves as the base station. The sensor selection
algorithm typically takes 10 ms to 20 ms, and hence has little
impact on the timeliness of event detection.
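The 1-byte energy scale entry can be sketched as follows; the paper fixes the field widths (5-bit node ID, 3-bit energy scale) but not the bit order, so the high/low layout below is an assumption.

```python
# Sketch of the 1-byte energy-scale entry: node ID in the high 5 bits (assumed),
# energy scale in the low 3 bits.
def pack_entry(node_id, scale):
    assert 0 <= node_id < 32 and 0 <= scale < 8
    return (node_id << 3) | scale

def unpack_entry(byte):
    return byte >> 3, byte & 0x07        # (node_id, scale)

# A non-leaf node aggregates its children's entries with its own into one packet.
payload = bytes(pack_entry(nid, s) for nid, s in [(1, 4), (9, 2), (11, 5)])
```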
VIII. PERFORMANCE EVALUATION
We conduct testbed experiments as well as extensive
simulations based on real data traces collected by 12 nodes
in the OASIS project [2]. The data set used in our evaluation
spans 5.5 months (from October 1, 2009 to March 15,
2010) and comprises 128 manually selected segments. Each
segment is 10 minutes long and contains one or more signifi-
cant earthquake events. In Section VIII-A, we present the
experimental results on energy usage and communication
performance using a testbed of 24 TelosB motes. In Sec-
tion VIII-B, we present the simulation results on detection
performance in TOSSIM.
A. Testbed Experiments
1) Methodology: The multi-scale Gaussian model of each
sensor is trained offline using 64 randomly selected data seg-
ments. The ground truth information regarding the presence
of earthquake event is generated by the STA/LTA algorithm
using the data traces of Node01 in the deployment. The
STA/LTA threshold is set to be 2, which is suggested by the
volcanologists at U.S. Geological Survey [2]. We note that
the STA/LTA algorithm can yield detection errors.
[Figure 9. Total energy consumption (transmitting, receiving, and seismic processing) of 12 nodes for 10 minutes under DC (data collection), CV (Chair-Varshney), STA/LTA, and DFSS.]
[Figure 10. Reception ratio of energy scale information at the base station in a 3-hop network versus sampling period (600–1000 ms), for packet aggregation and naive forwarding.]
[Figure 11. Explosive event: distance from the vent (km) versus time (s, origin at 07:06:39, Oct 22, 2009) for Node04, Node01, Node06, and Node13. The system-level onset time is at the 3rd second.]
[Figure 12. Energy consumption of Node 11 over 10 minutes for data collection, Chair-Varshney, STA/LTA, and DFSS.]
In this section, our approach is referred to as decision
fusion with sensor selection (DFSS). We compare our ap-
proach with the following three baseline approaches. (1)
In the data collection approach, each node transmits com-
pressed raw data to the base station. We adopt incremental
encoding to compress raw data, which can achieve 4-fold
data volume reduction for 32-bit seismic signals in the absence of earthquakes. Note that the OASIS system [2]
currently adopts data collection and analyzes collected data
offline at servers. (2) In the STA/LTA approach, each node
makes a local detection decision using the STA/LTA algorithm. If more than 30% of nodes make positive decisions, the base
station first waits 30 seconds and then downloads one minute
of compressed raw data from all nodes. Note that these set-
tings are consistent with the detection approach in [4]. (3) In
the Chair-Varshney approach, each node performs FFT and
makes a local detection decision every sampling period. The
base station fuses the local decisions by the Chair-Varshney’s
rule [20], which is the optimal decision fusion model. Specifically, the test statistic is $\Lambda = \sum_{i=1}^{n} \log\frac{\beta_i(1-\alpha_i)}{\alpha_i(1-\beta_i)} \cdot I_i$. As the
Chair-Varshney’s rule inherently accounts for the diversity of
sensors’ sensing qualities by weighting their local decisions,
it is unnecessary to perform sensor selection. However,
the Chair-Varshney’s rule has no closed-form formula for
its detection performance. Hence, we use a brute-force approach to compute the CDF of Λ and find the detection threshold that satisfies the detection performance requirements. Note that the brute-force algorithm runs at the base station.
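For reference, the baseline's fusion statistic can be written directly from the formula above; this one-liner is illustrative, not the evaluation code.

```python
# Sketch of the Chair-Varshney fusion statistic:
# Lambda = sum_i log( beta_i (1 - alpha_i) / (alpha_i (1 - beta_i)) ) * I_i.
from math import log

def chair_varshney_statistic(decisions, alphas, betas):
    return sum(log(b * (1 - a) / (a * (1 - b))) * I
               for I, a, b in zip(decisions, alphas, betas))
```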
The following experiments are conducted in two network topologies: a one-hop network composed of 12 TelosB motes and a 3-hop network composed of 24 TelosB motes.
2) Timeliness and Energy Consumption: In this section,
12 TelosB motes are organized into a one-hop network
and each one corresponds to a node in OASIS [2]. We first
evaluate the timeliness of our approach. As discussed in
Section VI-B, accurate per-second detection is a prerequisite
of the millisecond onset time estimation. Hence, one second
is the delay bound in each sampling period. The average execution times of the components of our system are as follows: computing the energy scale for one second of seismic data takes 6.7 ms; transmitting a TinyOS message of default size takes 9 ms; the FFT and the local Bayesian detector together take 164.7 ms.
Therefore, our approach can achieve satisfactory timeliness
on low-cost sensors with limited computational capability.
We now evaluate the energy consumption of various
approaches. We measure the execution time of seismic
processing and count the transmitted and received packets.
The energy consumption is then estimated based on the
measured current usage of processor and transceiver [21].
Figure 12 shows the energy consumption trace of Node 11
for 10 minutes. There is a significant earthquake event from
the 245th to 265th second. As the byte length of encoded raw
data increases in the presence of an event, data collection has
a spike during the earthquake. STA/LTA yields a high spike
after the event, as it transmits compressed data at high speed
after a detection. Note that STA/LTA has a false alarm at
around the 550th second. Figure 9 shows the corresponding
breakdown of energy consumption. We can see that Chair-
Varshney consumes a significant amount of energy in seismic
processing, as it performs FFT on every node all the time.
Suppose two carbon-zinc AA batteries are used, which have
a total of 4680 J of energy storage [22]. The projected lifetime
of a node is 19 days and 3.9 months for data collection and
our approach, respectively.
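As a back-of-envelope check of these projections: with 4680 J available and a constant per-10-minute draw, the per-period energy figures below are inferred from the reported lifetimes, not measured values from the paper.

```python
# Lifetime projection check under a constant-draw assumption.
def lifetime_days(energy_per_10min_joules):
    periods_per_day = 24 * 6             # number of 10-minute periods per day
    return 4680 / (energy_per_10min_joules * periods_per_day)

print(lifetime_days(1.7))    # ~19 days: consistent with data collection
print(lifetime_days(0.28))   # ~116 days, i.e., about 3.9 months: consistent with DFSS
```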
3) Communication Performance: We now evaluate the
communication performance of our approach in a 3-hop
network composed of 24 TelosB motes. We adopt the naive
forwarding as the baseline approach, where an intermediate
node forwards a received packet immediately without aggre-
gation. Figure 10 plots the reception ratio of energy scale
information at the base station versus sampling period. Due
to limited wireless bandwidth, we observe low reception
ratios when the sampling period is shorter than 600 ms.
However, our approach can reach a reception ratio of 93.5% when the sampling period is one second, which is consistent with the setting in real deployments [2] [4]. In contrast, naive forwarding only achieves a reception ratio of 77%.
[Figure 13. System P_F versus requested P_F (α) for Chair-Varshney and DFSS.]
[Figure 14. The number of selected sensors versus requested P_F (α).]
[Figure 15. Detection errors after morphological processing: (a) false alarms versus log10 α and (b) misses versus log10(1 − β), for STA/LTA, Chair-Varshney, and DFSS. The test data lasts for 19 hours and includes 139 earthquake events.]
B. Trace-driven Simulations
In addition to the testbed experiments, we also conduct
simulations in TOSSIM [19] based on real data traces. The
trace-driven simulations allow us to extensively evaluate the
detection performance under various settings. Our evaluation
is mainly focused on two aspects. First, we examine the
detection performance of various approaches in a long period
of time (based on the data traces that span 5.5 months).
Second, we evaluate the configurability of our approach with
respect to system sensing qualities such as false alarm rate.
Figure 13 plots the false alarm rate of the per-second
system detection results without morphological processing
versus the requested false alarm rate. We can see that when
the requested P_F is greater than 5%, the measured P_F of our approach flattens out, as Algorithm 1 can usually find a solution with a minimum size of two sensors. Moreover, as all
sensors are always involved in the fusion process, Chair-
Varshney has poor configurability as shown in Figure 13.
Figure 14 plots the number of selected sensors versus the
requested false alarm rate. The error bar shows one standard
deviation over 139 earthquake events. When a lower performance requirement is imposed (i.e., a greater α), fewer sensors are selected, which means less energy consumption. This result shows that our approach yields an interesting trade-
off between energy consumption and detection performance.
Figure 15 plots the number of false alarms and misses of
earthquake events identified by the morphological
processing. We can conclude that our approach generates
fewer detection errors than STA/LTA and has detection
performance comparable with Chair-Varshney. However, as
shown in Figures 12 and 9, the latter consumes significantly
more energy. When the detection performance requirement
is extremely high (e.g., β = 1 − 10⁻⁴), the sensor selection
algorithm exits with no solution and hence no detection is
made, as discussed in Section V-D. As a result, the system
misses more events, as shown in Figure 15(b). Therefore, in
practice, the requirement on detection probability should be
set within the achievable range to avoid this saturation.
Alternatively, we can select all sensors to perform detection when
no solution is found by Algorithm 1. Although this strategy
achieves the maximum system detection performance, it
leads to significant energy consumption, as all the sensors
always perform the costly FFT even when no earthquake
occurs.
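For reference, the morphological post-processing used to turn the per-second decision stream into event-level detections can be sketched with 1-D binary opening (suppressing isolated positives) and closing (bridging short gaps), in the spirit of [17]; the structuring-element sizes below are illustrative assumptions, not the paper's tuned parameters.

```python
# Sketch of event extraction from per-second detection decisions via 1-D
# morphological opening (removes isolated false alarms) followed by
# closing (bridges short gaps within one event). Structuring-element
# sizes are illustrative assumptions.
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def extract_events(decisions, min_len=3, max_gap=2):
    """decisions: 1-D sequence of per-second 0/1 detection results."""
    d = binary_opening(np.asarray(decisions, bool), structure=np.ones(min_len))
    d = binary_closing(d, structure=np.ones(max_gap + 1))
    # Start/end indices of each contiguous run of positive decisions.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], d.astype(np.int8), [0]))))
    return list(zip(edges[::2], edges[1::2] - 1))

# The lone positive at index 1 is dropped; the 2-second gap is bridged:
print(extract_events([0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0]))  # [(3, 12)]
```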
Finally, we evaluate the accuracy of node-level onset
time estimation. Figure 11 shows a typical earthquake event
measured by four sensors, with the node-level onset time
estimates shown as vertical lines. The Y-axis of the figure
represents the slant distance from the vent. We can see that
the event starts with a sudden strong shake and that the
sensor closest to the vent receives the signal before the others.
This indicates that the event is a typical explosive earthquake
at the vent. More results on node-level onset time estimation
can be found in Appendix C.
IX. CONCLUSION
WSNs have been increasingly deployed for monitoring active
volcanoes. This paper presents a quality-driven approach
to detecting highly dynamic volcanic earthquakes based
on in-network collaborative signal processing. In particular,
we aim to minimize sensors’ energy consumption subject
to sensing quality requirements. Our approach is evaluated
through testbed experiments and extensive simulations based
on real data traces collected on Mount St. Helens. The results
show that our approach can significantly reduce energy
consumption compared with state-of-the-art approaches while
providing assured system sensing quality.
REFERENCES
[1] http://www.pbs.org/wnet/nature/forces/lava.html.
[2] W. Song, R. Huang, M. Xu, A. Ma, B. Shirazi, and
R. LaHusen, “Air-dropped sensor network for real-time
high-fidelity volcano monitoring,” in MobiSys, 2009.
[3] G. Werner-Allen, J. Johnson, M. Ruiz, J. Lees, and
M. Welsh, “Monitoring volcanic eruptions with a wire-
less sensor network,” in EWSN, 2005.
[4] G. Werner-Allen, K. Lorincz, J. Johnson, J. Lees, and
M. Welsh, “Fidelity and yield in a volcano monitoring
sensor network,” in OSDI, 2006.
[5] E. Endo and T. Murray, “Real-time seismic amplitude
measurement (RSAM): a volcano monitoring and pre-
diction tool,” Bulletin of Volcanology, vol. 53, no. 7,
1991.
[6] G. Werner-Allen, K. Lorincz, M. Ruiz, O. Marcillo,
J. Johnson, J. Lees, and M. Welsh, “Deploying a
wireless sensor network on an active volcano,” IEEE
Internet Computing, vol. 10, no. 2, 2006.
[7] R. Sleeman and T. Van Eck, “Robust automatic P-phase
picking: an on-line implementation in the analysis
of broadband seismogram recordings,” Physics of the
Earth and Planetary Interiors, vol. 113, 1999.
[8] K. Aki and P. Richards, Quantitative seismology. Uni-
versity Science Books, 2002.
[9] R. Duda, P. Hart, and D. Stork, Pattern Classification.
Wiley, 2001.
[10] R. Niu and P. K. Varshney, “Distributed detection and
fusion in a large wireless sensor network of random
size,” EURASIP J. Wireless Communications and Net-
working, no. 4, 2005.
[11] T. Clouqueur, K. K. Saluja, and P. Ramanathan, “Fault
tolerance in collaborative sensor networks for target
detection,” IEEE Trans. Comput., vol. 53, no. 3, 2004.
[12] R. Tan, G. Xing, J. Wang, and H. C. So, “Exploiting
reactive mobility for collaborative target detection in
wireless sensor networks,” IEEE Trans. Mobile Comput.,
vol. 9, no. 3, 2010.
[13] X. Sheng and Y. Hu, “Maximum likelihood multiple-source
localization using acoustic energy measurements
with wireless sensor networks,” IEEE Trans.
Signal Process., vol. 53, no. 1, 2005.
[14] B. Gutenberg and C. F. Richter, “Magnitude and energy
of earthquakes,” Science, vol. 83, no. 2147, 1936.
[15] P. K. Varshney, Distributed Detection and Data Fusion.
Springer, 1996.
[16] R. B. Ash and C. A. Doléans-Dade, Probability &
Measure Theory, 2nd ed. A Harcourt Science and
Technology Company, 1999.
[17] P. Soille, Morphological image analysis: principles and
applications. Springer-Verlag, 2003.
[18] “KissFFT,” 2010, http://sourceforge.net/projects/
kissfft/.
[19] P. Levis, N. Lee, M. Welsh, and D. Culler, “TOSSIM:
Accurate and scalable simulation of entire TinyOS
applications,” in SenSys, 2003.
[20] Z. Chair and P. Varshney, “Optimal data fusion in mul-
tiple sensor detection systems,” IEEE Trans. Aerospace
Electron. Syst., Jan. 1990.
[21] Moteiv, “Telos (rev b): Preliminary datasheet,” 2004.
[22] http://www.allaboutbatteries.com/Energy-tables.html.
APPENDIX
A. Minimax Error Rate Criterion
Under the minimax error criterion [9], the worst-case
detection error rate is minimized. It is useful when we need
to jointly consider the two types of errors. Moreover, its
performance is robust to unknown and changeable prior
probabilities, i.e., $P(H_0)$ and $P(H_1)$. The dynamic sensor
selection problem based on the minimax error criterion is as
follows.

Problem 2. Given the local false alarm rates and detection
probabilities of all sensors, i.e., $\{\alpha_i, \beta_i \mid i \in [1, N]\}$, find
a subset of sensors, S, and the detection threshold at the
base station, $\eta$, such that ||S|| is minimized, subject to the
constraint that the minimax error rate is upper bounded by the
requested error rate.
The minimax error rate condition is $P_F + P_D = 1$ [9].
Solving this condition gives the detection threshold,
$$
\eta = \frac{\left(\sum_{i=1}^{n}\alpha_i\right)\sqrt{\sum_{i=1}^{n}\left(\beta_i-\beta_i^2\right)} + \left(\sum_{i=1}^{n}\beta_i\right)\sqrt{\sum_{i=1}^{n}\left(\alpha_i-\alpha_i^2\right)}}{\sqrt{\sum_{i=1}^{n}\left(\alpha_i-\alpha_i^2\right)} + \sqrt{\sum_{i=1}^{n}\left(\beta_i-\beta_i^2\right)}}.
$$
The minimax error rate, denoted by $P_E$, is given by $P_E = P_F = Q(-f_2)$, where
$$
f_2 = \frac{\sum_{i=1}^{n}\left(\alpha_i-\beta_i\right)}{\sqrt{\sum_{i=1}^{n}\left(\alpha_i-\alpha_i^2\right)} + \sqrt{\sum_{i=1}^{n}\left(\beta_i-\beta_i^2\right)}}.
$$
The minimax error rate $P_E$ is minimized if $f_2$ is minimized.
Therefore, the sub-problem under the minimax error criterion
is equivalent to minimizing $f_2$.
The function $f_2$ also has a complex non-linear relationship
with each sensor’s detection performance, represented by $\alpha_i$
and $\beta_i$. The Monte Carlo simulations in Appendix B show
that $f_2$ increases with $\sum_{i=1}^{n}(\alpha_i - \beta_i)$ with high probability.
Therefore, by slightly changing the return condition in
Algorithm 1, we obtain the algorithm for the minimax error
criterion.
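To make these expressions concrete, the sketch below transcribes $\eta$ and $P_E = Q(-f_2)$ directly into code; the local rates in the example are illustrative values, not measurements from the deployment.

```python
# Direct transcription of the minimax fusion threshold eta and the
# minimax error rate P_E = Q(-f2) from the local rates {alpha_i, beta_i}.
from math import erfc, sqrt

def Q(x: float) -> float:
    """Gaussian tail probability Q(x) = Pr{N(0,1) > x}."""
    return 0.5 * erfc(x / sqrt(2.0))

def minimax_threshold_and_error(alphas, betas):
    sa, sb = sum(alphas), sum(betas)
    va = sqrt(sum(a - a * a for a in alphas))  # sqrt(sum alpha_i - alpha_i^2)
    vb = sqrt(sum(b - b * b for b in betas))   # sqrt(sum beta_i - beta_i^2)
    eta = (sa * vb + sb * va) / (va + vb)      # fusion threshold
    f2 = (sa - sb) / (va + vb)                 # statistic from above
    return eta, Q(-f2)                         # P_E = P_F = Q(-f2)

# Illustrative local rates, not values from the deployment:
eta, pe = minimax_threshold_and_error([0.05, 0.10, 0.08], [0.90, 0.80, 0.85])
print(f"eta = {eta:.3f}, minimax error rate = {pe:.4f}")
```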
B. Evaluation of Monotonicity
In this section, we evaluate the monotonicity of $f$ and $f_2$
by the Monte Carlo method. We denote $g = \sum_{i=1}^{n}(\alpha_i - \beta_i)$ and
evaluate the monotonicity of $f$ and $f_2$ with respect to $g$,
respectively. For each trial of the Monte Carlo simulation,
two points are randomly and uniformly sampled from the
$2n$-dimensional space $\{\alpha_i, \beta_i \mid i \in [1, n], \alpha_i \in (0,1), \beta_i \in (0,1)\}$
to calculate $f$, $f_2$, and $g$. We conduct a large number
of trials ($10^6$ in this work) to estimate the probabilities that
$f$ and $f_2$ increase with $g$. Figure 16 plots these probabilities
versus the number of sensors, $n$. The results show that both
$f$ and $f_2$ increase with the corresponding $g$ with a probability
of at least 95% when the number of sensors exceeds 12.
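The structure of this test is easy to reproduce; the sketch below estimates the probability that $f_2$ and $g$ move in the same direction ($f$, defined earlier in the paper for the NP criterion, would be checked identically). Ties count as disagreement, a minor simplification, and the default trial count is reduced from the paper's $10^6$ for brevity.

```python
# Monte Carlo sketch of the monotonicity check: sample two random points
# of {alpha_i, beta_i} in (0,1)^{2n}, and count how often f2 and
# g = sum(alpha_i - beta_i) move in the same direction.
import random
from math import sqrt

def f2_and_g(alphas, betas):
    g = sum(a - b for a, b in zip(alphas, betas))
    va = sqrt(sum(a - a * a for a in alphas))
    vb = sqrt(sum(b - b * b for b in betas))
    return g / (va + vb), g

def prob_monotone(n: int, trials: int = 100_000) -> float:
    agree = 0
    for _ in range(trials):
        p1 = ([random.random() for _ in range(n)],
              [random.random() for _ in range(n)])
        p2 = ([random.random() for _ in range(n)],
              [random.random() for _ in range(n)])
        f_a, g_a = f2_and_g(*p1)
        f_b, g_b = f2_and_g(*p2)
        agree += (f_a - f_b) * (g_a - g_b) > 0  # same ordering as g?
    return agree / trials

for n in (5, 10, 15, 20):
    print(n, prob_monotone(n))
```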
C. More Results on Node-level Onset Time Estimation
Different from the event in Figure 11, the event in Figure 17 has a
smooth start and is first measured by the sensors
deployed on the middle of the volcano. This type of event
often implies a deep earthquake source under the volcanic
edifice. Most of the events recorded on Mount St. Helens belong
to this type. Figure 18 plots the average delay of the P-phase
arrival with respect to Node 01, which is a sensor on the middle of
the volcano.
Figure 16. The probabilities that $f$ and $f_2$ increase with $g$ versus the number of sensors ($f$: NP criterion; $f_2$: minimax criterion).
Figure 17. A tectonic event measured by Node04, Node01, Node06, and Node13, plotted as distance from the vent (km) versus time (s, origin at 08:25:50, Oct 20, 2009). The system-level onset time is at the 3rd second.
We note that these results are consistent with
those in [4].
D. Discussion: Ill-conditioned Covariance Matrix
Although we propose a general Bayesian detection model
in Section IV that accounts for the correlations between different
frequency components, our experience shows
that the covariance matrix, i.e., C, is often ill-conditioned,
especially when its dimension is high (e.g., up
to 10). For an ill-conditioned covariance matrix, small numerical
errors, e.g., the accumulated error in the incremental
model learning algorithm, can lead to significant errors in
computing its determinant and inverse matrix, which are used
to construct the detector. In particular, we occasionally observed
a negative determinant of a covariance matrix, which
contradicts the positive semi-definiteness of covariance
matrices. Handling ill-conditioned covariance matrices
remains an open research issue. Currently, we only consider
the variance of each frequency component (i.e., the
diagonal elements of C) when we construct the detector.
The evaluation shows that such a lightweight version still yields
satisfactory detection performance. The general approach
Figure 18. Delay of P-phase arrival with respect to Node01 (in milliseconds) versus distance to the vent (in kilometers), for Node04, Node10, Nodes 01/05/12/14, Node02, Node06, and Node13. The results are averaged over 63 earthquake events.
that jointly accounts for the correlation between different
frequency components is left for our future work.
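As a concrete illustration of the diagonal-only workaround above, the sketch below evaluates the log-likelihood ratio with each frequency component treated as an independent Gaussian, so neither the determinant nor the inverse of a possibly ill-conditioned C is required; variable names and the model interface are illustrative, not the TinyOS implementation.

```python
# Sketch of the lightweight detector that keeps only the diagonal of the
# covariance matrix C: each frequency component is treated as an
# independent Gaussian, avoiding the determinant and inverse of a
# possibly ill-conditioned C. Names are illustrative assumptions.
import numpy as np

def diag_log_likelihood(x, mean, var, eps=1e-12):
    """Log-likelihood of feature vector x under N(mean, diag(var))."""
    var = np.maximum(var, eps)  # guard against zero variances
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def detect(x, noise_model, event_model, threshold=0.0):
    """Decide H1 (earthquake) if the log-likelihood ratio exceeds threshold."""
    llr = (diag_log_likelihood(x, *event_model)
           - diag_log_likelihood(x, *noise_model))
    return llr > threshold

# Here each model is a (mean, variance) pair over FFT magnitude
# components, e.g., learned incrementally from noise/event windows.
```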