The Next Big One: Detecting Earthquakes and other Rare
Events from Community-based Sensors

Matthew Faulkner Michael Olson Rishi Chandy Jonathan Krause


Caltech Caltech Caltech Caltech
K. Mani Chandy Andreas Krause
Caltech Caltech

ABSTRACT
Can one use cell phones for earthquake early warning? Detecting rare, disruptive events using community-held sensors is a promising opportunity, but also presents difficult challenges. Rare events are often difficult or impossible to model and characterize a priori, yet we wish to maximize detection performance. Further, heterogeneous, community-operated sensors may differ widely in quality and communication constraints.

In this paper, we present a principled approach towards detecting rare events that learns sensor-specific decision thresholds online, in a distributed way. It maximizes anomaly detection performance at a fusion center, under constraints on the false alarm rate and number of messages per sensor. We then present an implementation of our approach in the Community Seismic Network (CSN), a community sensing system with the goal of rapidly detecting earthquakes using cell phone accelerometers, consumer USB devices and cloud-computing based sensor fusion. We experimentally evaluate our approach based on a pilot deployment of the CSN system. Our results, including data from shake table experiments, indicate the effectiveness of our approach in distinguishing seismic motion from accelerations due to normal daily manipulation. They also provide evidence of the feasibility of earthquake early warning using a dense network of cell phones.

Categories and Subject Descriptors
C.2.1 [Computer-Communication Networks]: Network Architecture and Design; G.3 [Probability and Statistics]: Experimental Design; I.2.6 [AI]: Learning

General Terms
Algorithms, Measurement

Keywords
Sensor networks, community sensing, distributed anomaly detection

1. INTRODUCTION
Privately owned commercial devices equipped with sensors are emerging as a powerful resource for sensor networks. Community sensing projects are using smart phones to monitor traffic and detect accidents [13, 17, 15]; monitor and improve population health [5]; and map pollution [18, 29]. Detecting rare, disruptive events, such as earthquakes, using community-held sensors is a particularly promising opportunity [4], but also presents difficult challenges. Rare events are often difficult or impossible to model and characterize a priori, yet we wish to maximize detection performance. Further, heterogeneous, community-operated sensors may differ widely in quality and communication constraints, due to variability in hardware and software platforms, as well as differing environmental conditions.

In this paper, we present a principled approach towards detecting rare events from community-based sensors. Due to the unavailability of data characterizing the rare events, our approach is based on anomaly detection; sensors learn models of normal sensor data (e.g., acceleration patterns experienced by smartphones under typical manipulation). Each sensor then independently detects unusual observations (which are considered unlikely with respect to the model), and notifies a fusion center. The fusion center then decides whether a rare event has occurred or not, based on the received messages. Our approach is grounded in the theory of decentralized detection, and we characterize its performance accordingly. In particular, we show how sensors can learn decision rules that allow us to control system-level false positive rates and bound the amount of required communication in a principled manner, while simultaneously maximizing the detection performance.

As our second main contribution, we present an implementation of our approach in the Community Seismic Network (CSN). The goal of our community sensing system is to detect seismic motion using accelerometers in smartphones and other consumer devices (Figure 1(c)), and issue real-time early warning of seismic hazards (see Figure 1(a)). The duration of the warning is the time between a person or device receiving the alert and the onset of significant shaking (see Figure 1(b)); this duration depends on the distance between the location of initial shaking and the location of the receiving device, and on delays within the network and fusion center. Warnings of up to tens of seconds are possible [1], and even warnings of a few seconds help in stopping elevators, slowing trains, and closing gas valves. Since false alarms can have high costs, it is important to limit the false positive rate of the system.

Using community-based sensors for earthquake early warning is particularly challenging due to the large variety of sensor types, sensor locations, and ambient noise characteristics. For example, a sensor near a construction site will have different behavior than a sensor in a quiet zone. Moreover, sensor behavior may change over time; for example, construction may start in some places and stop in others. With thousands of sensors, one cannot expect to know the precise characteristics of each sensor at each point in time;
Figure 1: (a) Seismic waves (P- and S-waves) during an earthquake (Chino Hills, Magn. 5.4, July 2008). (b) Anticipated user interface for early warning using our Google Android application. (c) 16-bit USB MEMS accelerometers (Phidget sensor) with housing that we used in our initial deployment.

these characteristics have to be deduced by algorithms. A system that scales to tens of thousands or millions of sensors must limit the rate of message traffic so that it can be handled efficiently by the network and fusion center. For example, one million phones would produce approximately 30 Terabytes of accelerometer data each day. Another key challenge is to develop a system infrastructure that has low response time even under peak load (messages sent by millions of phones during an earthquake). Moreover, the Internet and computers in a quake zone are likely to fail with the onset of intensive shaking. So, data from sensors must be sent out to a distributed, resilient system that has data centers outside the quake zone. The CSN uses cloud-computing based sensor fusion to cope with these challenges. In this paper, we report our initial experience with the CSN, and experimentally evaluate our detection approach based on data from a pilot deployment. Our results, including data from shaketable experiments that allow us to mechanically play back past earthquakes, indicate the effectiveness of our approach in distinguishing seismic motion from accelerations due to normal daily manipulation. They also provide evidence for the feasibility of earthquake early warning using a dense network of cell phones.

In summary, our main contributions are
• a novel approach for online decentralized anomaly detection,
• a theoretical analysis, characterizing the performance of our detection approach,
• an implementation of our approach in the Community Seismic Network, involving smartphones, USB MEMS accelerometers and cloud-computing based sensor fusion, and
• a detailed empirical evaluation of our approach characterizing the achievable detection performance when using smartphones to detect earthquakes.

2. PROBLEM STATEMENT
We consider the problem of decentralized detection of rare events, such as earthquakes, under constraints on the number of messages sent by each sensor. Specifically, a set of N sensors make repeated observations X_t = (X_{1,t}, ..., X_{N,t}) from which we would like to detect the occurrence of an event E_t ∈ {0, 1}. Here, X_{s,t} is the measurement of sensor s at time t, and E_t = 1 iff there is an event (e.g., an earthquake) at time t. X_{s,t} may be a scalar (e.g., acceleration), or a vector of features containing information about Fourier frequencies, moments, etc. during a sliding window of data (see Section 4.2 for a discussion of features that we use in our system).

We are interested in the decentralized setting, where each sensor s analyzes its measurements X_{s,t}, and sends a message M_{s,t} to the fusion center. Here we will focus on binary messages (i.e., each sensor gets to vote on whether there is an event or not). In this case, M_{s,t} = 1 means that sensor s at time t estimates that an event happened; M_{s,t} = 0 means that sensor s at time t estimates that no event happened at that time. For large networks, we want to minimize the number of messages sent. Since the events are assumed to be rare, we only need to send messages (that we henceforth call picks) for M_{s,t} = 1; sending no message implies M_{s,t} = 0. Based on the received messages, the fusion center then decides how to respond: It produces an estimate Ê_t ∈ {0, 1}. If Ê_t = E_t, it makes the correct decision (true positive if E_t = 1 or true negative if E_t = 0). If Ê_t = 0 when E_t = 1, it missed an event and thus produced a false negative. Similarly, if Ê_t = 1 when E_t = 0, it produced a false positive. False positives and false negatives can have very different costs. In our earthquake example, a false positive could lead to incorrect warning messages sent out to the community and consequently lead to inappropriate execution of remedial measures. On the other hand, false negatives could lead to missed opportunities for protecting infrastructure and saving lives. In general, our goal will be to minimize the frequency of false negatives while constraining the (expected) frequency of false positives to a tolerable level (e.g., at most one false alarm per year).

Classical Decentralized Detection. How should each sensor, based on its measurements X_{s,t}, decide when to pick (send M_{s,t} = 1)? The traditional approach to decentralized detection assumes that we know how likely particular observations X_{s,t} are, in case of an event occurring or not occurring. Thus, it assumes we have access to the conditional probabilities P[X_{s,t} | E_t = 0] and P[X_{s,t} | E_t = 1]. In this case, under the common assumptions that the sensors' measurements are independent conditional on whether there is an event or not, it can be shown that the optimal strategy is to perform hierarchical hypothesis testing [27]: we define two thresholds τ and τ′, and let M_{s,t} = 1 iff

    P[X_{s,t} | E_t = 1] / P[X_{s,t} | E_t = 0] ≥ τ,    (1)

i.e., if the likelihood ratio exceeds τ. Similarly, the fusion center sets Ê_t = 1 iff

    Bin(S_t; p_1; N) / Bin(S_t; p_0; N) ≥ τ′,    (2)

where S_t = Σ_s M_{s,t} is the number of picks at time t; p_ℓ = P[M_{s,t} = 1 | E_t = ℓ] is the sensor-level true (ℓ = 1) and false (ℓ = 0) positive rate respectively; and Bin(·; p; N) is the probability mass function of the Binomial distribution. Asymptotically optimal decision performance in either a Bayesian or Neyman-Pearson framework can be obtained by using the decision rules (1) and (2) with proper choice of the thresholds τ and τ′ [27].
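To make the classical rules concrete, the following minimal sketch implements the sensor-level likelihood ratio test (1) and the fusion-center test (2), assuming the conditional densities are available as callables. The function and parameter names are illustrative and are not part of the CSN implementation.

```python
# Sketch of decision rules (1) and (2); assumes densities are given as callables.
from scipy.stats import binom

def sensor_pick(x, p1_density, p0_density, tau):
    """Rule (1): send a pick (M_{s,t} = 1) iff the likelihood ratio exceeds tau."""
    return p1_density(x) / p0_density(x) >= tau

def fusion_decision(picks, N, p0, p1, tau_prime):
    """Rule (2): declare an event iff the binomial likelihood ratio of the
    pick count S_t exceeds tau_prime."""
    S_t = sum(picks)  # number of sensors reporting M_{s,t} = 1
    ratio = binom.pmf(S_t, N, p1) / binom.pmf(S_t, N, p0)
    return ratio >= tau_prime
```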
There has also been work in quickest detection or change detection (cf., [22] for an overview), where the assumption is that there is some time point t_0 at which the event occurs; X_{s,t} will be distributed according to P[X_{s,t} | E_t = 0] for all t < t_0, and according to P[X_{s,t} | E_t = 1] for all t ≥ t_0. In change detection, the system trades off waiting (gathering more data) and improved detection performance. However, in case of rare transient events (such as earthquakes) that may occur repeatedly, the distributions P[X_{s,t} | E_t = 1] are expected to change with t for t ≥ t_0.

Challenges for the Classical Approach. Detecting rare events from community-based sensors has three main challenges:
(i) Sensors are highly heterogeneous (i.e., the distributions P[X_{s,t} | E_t] are different for each sensor s);
(ii) Since events are rare, we do not have sufficient data to obtain good models for P[X_{s,t} | E_t = 1];
(iii) Bandwidth limitations may limit the amount of communication (e.g., number of picks sent).

Challenge (i) alone would not be problematic – classical decentralized detection can be extended to heterogeneous sensors, as long as we know P[X_{s,t} | E_t]. For the case where we do not know P[X_{s,t} | E_t], but we have training examples (i.e., large collections of sensor data, annotated by whether an event is present or not), we can use techniques from nonparametric decentralized detection [20]. In the case of rare events, however, we may be able to collect large amounts of data for P[X_{s,t} | E_t = 0] (i.e., characterizing the sensors in the no-event case), while still collecting extremely little (if any) data for estimating P[X_{s,t} | E_t = 1]. In our case, we would need to collect data from cell phones experiencing seismic motion of earthquakes ranging in magnitude from three to ten on the Gutenberg-Richter scale, while resting on a table, being carried in a pocket, backpack, etc. Furthermore, even though we can collect much data for P[X_{s,t} | E_t = 0], due to challenge (iii) we may not be able to transmit all the data to the fusion center, but have to estimate this distribution locally, possibly with limited memory. We also want to choose decision rules (e.g., of the form (1)) that minimize the number of messages sent.

3. ONLINE DECENTRALIZED ANOMALY DETECTION
We now describe our approach to online, decentralized detection of anomalous events.

Assumptions. In the following, we adopt the assumption of classical decentralized detection that sensor observations are conditionally independent given E_t, and independent across time (i.e., the distributions P[X_{s,t} | E_t = 0] do not depend on t). For earthquake detection this assumption is reasonable (since most of the noise is explained through independent measurement noise and user activity). While spatial correlation may be present, e.g., due to mass events such as rock concerts, it is expected to be relatively rare. Furthermore, if context about such events is available in advance, it can be taken into account. We defer treatment of spatial correlation to future work. We do not assume that the sensors are homogeneous (i.e., P[X_{s,t} | E_t = 0] may depend on s). Our approach can be extended in a straightforward manner if the dependence on t is periodic (e.g., the background noise changes based on the time of day, or day within week). We defer details to a long version of this paper.

Overview. The key idea behind our approach is that since sensors obtain a massive amount of normal data, they can accurately estimate P[X_{s,t} | E_t = 0] purely based on their local observations. In our earthquake monitoring example, the cell phones can collect data of acceleration experienced under normal operation (lying on a table, being carried in a backpack, etc.). Further, if we have hope of detecting earthquakes, the signal X_{s,t} must be sufficiently different from normal data (thus P[X_{s,t} | E_t = 0] must be low when E_t = 1). This suggests that each sensor should decide whether to pick or not based on the likelihood L_0(x) = P[x | E_t = 0] only; sensor s will pick (M_{s,t} = 1) iff, for its current readings X_{s,t} = x, it holds that

    L_0(x) < τ_s    (3)

for some sensor specific threshold τ_s. See Figure 5(c) for an illustration. Note that using this decision rule, for a pick it holds that P[M_{s,t} = 1 | E_t = e] = P[L_0(X_{s,t}) < τ_s | E_t = e] = p_e. This anomaly detection approach hinges on the following fundamental anti-monotonicity assumption: that

    L_0(x) < L_0(x′)  ⇔  P[x | E_t = 1] / P[x | E_t = 0] > P[x′ | E_t = 1] / P[x′ | E_t = 0],    (4)

i.e., the less probable x is under normal data, the larger the likelihood ratio gets in favor of the anomaly. The latter is the assumption on which most anomaly detection approaches are implicitly based. Under this natural anti-monotonicity assumption, the decision rules (3) and (1) are equivalent, for an appropriate choice of thresholds.

PROPOSITION 1. Suppose Condition (4) holds. Then for any threshold τ for rule (1), there exists a threshold τ_s such that rule (3) makes identical decisions.

Since the sensors do not know the true distribution P[X_{s,t} | E_t = 0], they use an online density estimate P̂[X_{s,t} | E_t = 0] based on collected data. The fusion center will then perform hypothesis testing based on the received picks M_{s,t}. In order for this approach to succeed we have to specify:
(i) How can the sensors estimate the distribution P̂[X_{s,t} | E_t = 0] in an online manner, while using limited resources (CPU, battery, memory, I/O)?
(ii) How should the sensors choose the thresholds τ_s?
(iii) Which true positive and false positive rates p_1, p_0 and threshold τ′, cf. (2), should the fusion center use?
We now discuss how our approach addresses these questions.

Online Density Estimation. For each sensor s, we have to, over time, estimate the distribution of normal observations L̂_0(X_{s,t}) = P̂[X_{s,t} | E_t = 0], as well as the activation threshold τ_s. There are various techniques for online density estimation. Parametric approaches assume that the density P[X_{s,t} | E_t = 0] is in some parametric family of distributions:

    P[X_{s,t} | E_t = 0] = φ(X_{s,t}, θ).

The goal then is to update the parameters θ as more data is obtained. In particular, mixture distributions, such as mixtures of Gaussians, are a flexible parametric family for density estimation. If access to a batch of training data is available, algorithms such as Expectation Maximization can be used to obtain parameters that maximize the likelihood of the data. However, due to memory limitations, it is rarely possible to keep all data in memory; furthermore, model training would grow in complexity as more data is collected. Fortunately, several effective techniques have been proposed for incremental optimization of the parameters, based on Variational Bayesian techniques [24] and particle filtering [8]. Online nonparametric density estimators (whose complexity, such as the number of mixture components, can increase with the amount of observed data) have also been developed [10]. In this paper, we use Gaussian mixture models for density estimation.
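The following sketch illustrates the sensor-side detector built on such a density estimate. It uses scikit-learn's batch GaussianMixture as a stand-in for the incremental estimators cited above ([24], [8], [10]) and assumes feature vectors have already been computed (Section 4.2); class and variable names are illustrative.

```python
# Sketch of GMM-based anomaly picking (rule (3)), using a batch estimator as a
# stand-in for the online density estimators discussed in the text.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMAnomalyPicker:
    def __init__(self, n_components=6):
        self.gmm = GaussianMixture(n_components=n_components)
        self.log_tau = None  # log of the sensor-specific threshold tau_s

    def fit_normal_data(self, X_normal, p0):
        """Fit P̂[x | E=0] on normal feature vectors and set tau_s to the
        p0-th percentile of the training log-likelihoods (log preserves ordering)."""
        self.gmm.fit(X_normal)
        loglik = self.gmm.score_samples(X_normal)      # log L̂_0(x) per sample
        self.log_tau = np.percentile(loglik, 100.0 * p0)

    def pick(self, x):
        """Rule (3): send a pick iff L̂_0(x) < tau_s."""
        return self.gmm.score_samples(x.reshape(1, -1))[0] < self.log_tau
```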
Online Threshold Estimation. Online density estimators as introduced above allow us to estimate P̂[X_{s,t} | E_t = 0]. The remaining question is how the sensor-specific thresholds τ_s should be chosen. The key idea is the following. Suppose we would like to control the per-sensor false positive rate p_0 (as needed to perform hypothesis testing in the fusion center). Since the event is assumed to be extremely rare, with very high probability (close to 1) every pick M_{s,t} = 1 will be a false alarm. Thus, we would like to choose our threshold τ_s such that, if we obtain a measurement X_{s,t} = x at random, with probability 1 − p_0, it holds that L̂_0(x) ≥ τ_s.

This insight suggests a natural approach to choosing τ_s: For each training example x_{s,t}, we calculate its likelihood L̂_0(x_{s,t}) = P̂[x_{s,t} | E_t = 0]. We then choose τ_s to be the p_0-th percentile of the data set L = {L̂_0(x_{s,1}), ..., L̂_0(x_{s,t})}. As we gather an increasing amount of data, as long as we use a consistent density estimator, this procedure will converge to the correct decision rule.

In practice, due to memory and computation constraints, we cannot keep the entire data set L of likelihoods and reestimate τ_s at every time step. Unfortunately, percentiles do not have compact sufficient statistics the way the mean and other moments do. Moreover, Munro and Paterson [19] show that computing rank queries exactly requires Ω(n) space. Fortunately, several space-efficient online ε-approximation algorithms for rank queries have been developed. An algorithm that selects an element of rank r′ from N elements for a query rank r is said to be uniform ε-approximate if

    |r′ − r| / N ≤ ε.

One such algorithm, which requires logarithmic space, is given by [11]. We do not present details here due to space limitations. We summarize our analysis in the following proposition:

PROPOSITION 2. Suppose that we use a uniformly consistent density estimator (i.e., lim sup_x {P̂[x | E_t = 0] − P[x | E_t = 0]} → 0 a.s.). Further suppose that τ_{s,t} is an ε-accurate threshold obtained through percentile estimation for p_0. Then for any ε > 0, there exists a time t_0 such that for all t ≥ t_0, the false positive probability p̂_0 = P[L̂_0(x_{s,t}) < τ_s] satisfies |p̂_0 − p_0| ≤ 2ε.

The proof of Proposition 2, which we omit for space limitations, hinges on the fact that if the estimate L̂_0(x) converges uniformly to L_0(x), the p_0-th percentiles (for 0 < p_0 < 1) converge as well. Furthermore, the use of an ε-approximate percentile can change the false positive rate by at most ε. Uniform convergence rates for density estimation have been established as well [9], allowing us to quantify the time required until the system operates at ε-accurate false positive rates. Since we assume that communication is expensive, we may impose an upper bound on the expected number of messages sent by each sensor. This can be achieved by imposing an upper bound p̄ on p_0, again relying on the fact that events are extremely rare. We present more details in the next section.
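A minimal memory-bounded sketch of this threshold estimation step keeps a reservoir sample of past log-likelihoods and uses its empirical p_0-th percentile as τ_s. This is a simple stand-in for the logarithmic-space ε-approximate rank algorithms referenced above ([19], [11]), not an implementation of them.

```python
# Reservoir-sampling stand-in for online percentile (threshold) estimation.
import random
import numpy as np

class ReservoirThreshold:
    def __init__(self, p0, capacity=10_000, seed=0):
        self.p0 = p0
        self.capacity = capacity
        self.reservoir = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def update(self, loglik):
        """Feed one log-likelihood value log L̂_0(x_{s,t}) (standard reservoir sampling)."""
        self.n_seen += 1
        if len(self.reservoir) < self.capacity:
            self.reservoir.append(loglik)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.reservoir[j] = loglik

    def threshold(self):
        """Current estimate of tau_s: the p0-th percentile of normal log-likelihoods."""
        return np.percentile(self.reservoir, 100.0 * self.p0)
```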
Hypothesis Testing for Sensor Fusion. Above, we discussed how we can obtain local decision rules that allow us to control the sensor-level false positive rate p_0 in a principled manner, and in the following we assume that the sensors operate at this false positive rate. However, in order to perform hypothesis testing as in (2), it appears that we also need an estimate of the sensor-level true-positive rate p_1. Suppose that we would like to maximize the detection rate P_D at the fusion center while guaranteeing a false positive rate P_F that is bounded by P̄.

    Data: Estimated sensor ROC curve, N sensors, communication constraint p̄, bound on fusion-level false positives P̄
    Result: sensor operating point (p_0, p_1)
    1  for each candidate operating point (p^i_0, p^i_1) s.t. p^i_0 ≤ p̄ do
    2      // Do Neyman-Pearson hypothesis testing to evaluate p^i_0
    3      Compute N(p^i_0) = min{N′ : Σ_{S>N′} Bin(S; p^i_0; N) ≤ P̄}
    4      Compute P^i_D = Σ_{S>N(p^i_0)} Bin(S; p^i_1; N)
    5      Compute P^i_F = Σ_{S>N(p^i_0)} Bin(S; p^i_0; N)
    6  Choose ℓ = arg max_i P^i_D and set (p_0, p_1) = (p^ℓ_0, p^ℓ_1)
    Figure 2: Threshold Optimization procedure

It can be shown that the optimal decision rule (2) is equivalent to setting Ê_t = 1 iff S_t ≥ N(p_0), for some number N(p_0) that only depends on the total number N of sensors and the sensor false-positive rate p_0. Thus, to control the fusion-level false positive rate P_F we, perhaps surprisingly, do not need to know the value of p_1, since P_F does not depend on p_1:

    P_F = Σ_{S>N(p_0)} Bin(S; p_0; N)   and   P_D = Σ_{S>N(p_0)} Bin(S; p_1; N).

Thus, our online anomaly detection approach leads to decision rules that provide guarantees about the fusion-level false positive rate. Our goal is not just to bound the false positive rate, but also to maximize detection performance. The detection performance P_D above depends on the sensor-level true positive rate p_1. If we have an accurate estimate of p_1, all sensors are homogeneous, and the anti-monotonicity condition (4) holds, then the following result, which is a consequence of [27], holds:

THEOREM 3. Suppose condition (4) holds and the sensors are all homogeneous (i.e., P[X_{s,t} | E_t] is independent of s). Further suppose that for each sensor-level false-positive rate p_0 we know its true-positive rate p_1. Then one can choose an operating point (p*_0, p*_1) that is asymptotically optimal (as N → ∞).

Unfortunately, without access to training data for actual events (e.g., sensor recordings during many large earthquakes), we cannot obtain an accurate estimate for p_1. However, in Section 5, we show how we can obtain an empirical estimate p̂_1 of p_1 by performing shaketable experiments. Suppose now that we have an estimate p̂_1 of p_1. How does the detection performance degrade with the accuracy of p̂_1? Suppose we have access to an estimate of the sensors' Receiver Operating Characteristic (ROC) curve, i.e., the dependency of the achievable true positive rate p̂_1(p_0) as a function of the false positive rate (see Figure 7(a) for an example). Now we can view both the estimated rates P̂_D ≡ P̂_D(p̂_1(p_0)) ≡ P̂_D(p_0) and P̂_F = P̂_F(p_0) as functions of the sensor-level false positive rate p_0. Based on the argument above, we have that P̂_F(p_0) = P_F(p_0), i.e., the estimated false positive rate is exact, but in general P̂_D(p_0) ≠ P_D(p_0). Fortunately, it can be shown that if the estimated ROC curve is conservative (i.e., p̂_1(p_0) ≤ p_1(p_0) for all rates p_0), then P̂_D(p_0) ≤ P_D(p_0) is an underestimate of the true detection probability. Thus, if we are able to obtain a pessimistic estimate of the sensors' ROC curves, we can make guarantees about the performance of the decentralized anomaly detection system. We can now choose the optimal operating point by solving

    max_{p_0 ≤ p̄} P̂_D(p_0)   s.t.   P̂_F(p_0) ≤ P̄,

and we are guaranteed that the optimal value of this program is a pessimistic estimate of the true detection performance, while P̂_F is in fact the exact false alarm rate. Algorithm 2 formalizes this procedure. We summarize our analysis in the following theorem:

THEOREM 4. If we use decentralized anomaly detection to control the sensor false positive rate p_0, and if we use a conservative estimated ROC curve (p_0, p̂_1), then Algorithm 2 chooses an operating point p_0 to maximize a lower bound on the true detection performance, i.e., P̂_D(p_0) ≤ P_D(p_0).
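The procedure of Figure 2 reduces to a few binomial tail computations. The sketch below assumes the (conservative) ROC estimate is given as a list of candidate operating points; it returns the point that maximizes the estimated detection rate subject to the constraints.

```python
# Sketch of the threshold optimization procedure (Figure 2), using the binomial
# tail P(S > n) = binom.sf(n, N, p).
from scipy.stats import binom

def choose_operating_point(roc_points, N, p_bar, P_bar):
    """roc_points: iterable of (p0, p1) pairs from the estimated ROC curve."""
    best = None
    for p0, p1 in roc_points:
        if p0 > p_bar:
            continue  # violates the per-sensor communication constraint p0 <= p_bar
        # Smallest pick-count threshold N(p0) keeping the fusion-level false
        # positive rate below P_bar (Neyman-Pearson style).
        N_p0 = next(n for n in range(N + 1) if binom.sf(n, N, p0) <= P_bar)
        P_D = binom.sf(N_p0, N, p1)   # estimated detection probability at this point
        if best is None or P_D > best[0]:
            best = (P_D, p0, p1, N_p0)
    return best  # (estimated P_D, p0, p1, N(p0))
```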
4. THE COMMUNITY SEISMIC NETWORK

[Figure 3: Overview of the CSN system. Clients interact with the Cloud Fusion Center on Google App Engine (Associator, Memcache, Datastore) via Registration, Heartbeat and Pick messages; the system issues Early Warning Alerts.]

We are building a Community Seismic Network (CSN) to: (a) provide warning about impending shaking from earthquakes, (b) guide first responders to areas with the greatest damage after an earthquake, (c) obtain fine-granularity maps of subterranean structures in areas where geological events such as earthquakes or landslides occur, and (d) provide detailed analysis of deformations of internal structures of buildings (that may not be visible to inspectors) after geological events. The CSN is a challenging case study of community sense and response systems because the community has to be involved to obtain the sensor density required to meet these goals, the benefits of early warning of shaking are substantial, and frequent false warnings result in alerts being ignored.

The technical innovations of the CSN include the use of widely heterogeneous sensors managed by a cloud computing platform that also executes data fusion algorithms and sends alerts. Heterogeneous sensors include cell phones, stand-alone sensors that communicate to the cloud computing system in different ways, and accelerometers connected through USB to host computers which are then connected to the cloud through the Internet. Figure 3 presents an overview of the CSN. An advantage of the cloud computing system is that sensors anywhere in the world with Internet access, including areas such as India, China, and South America, can connect to the system easily merely by specifying a URL.

4.1 Community Sensors: Android and USB Accelerometers
The Community Seismic Network currently uses two types of sensors: 16-bit MEMS accelerometers manufactured by Phidgets, Inc., used as USB accessories to laptops and desktop computers (see Figure 1(c)), as well as accelerometers in Google Android smartphones (see Figure 1(b)) – other types of phones will be included in the future. Each of the sensors has unique advantages. The USB sensors provide higher fidelity measurements. By firmly affixing them to a non-carpeted floor (preferably) or a wall, background noise can be drastically reduced. However, their deployment relies on the community purchasing a separate piece of hardware (currently costing roughly USD 150 including custom housing). In contrast, the Google Android platform has a large market share, currently approximately 16.3%, and is projected to grow further [3]. Android smartphones contain built-in 3-axis accelerometers, and integration of an Android phone into the CSN only requires downloading a free application. On the other hand, the built-in accelerometers are of lower quality (our experiments showed a typical resolution of approximately 13 bits), and phones are naturally exposed to frequent acceleration during normal operation. We have also built early prototypes of stand-alone devices on top of Arduino boards that connect through USB or WiFi to computing systems with access to the cloud.

Are inexpensive accelerometers sensitive enough to detect seismic motion? We performed experiments to assess the fidelity of the Phidgets and a variety of Android phones (the HTC Dream, HTC Hero and Motorola Droid). We placed the sensors on a stationary surface and recorded for an hour. We found that when resting, the phones experienced noise with standard deviation ≈ 0.08 m/s², while the Phidgets experienced noise with standard deviation of ≈ 0.003 m/s². Earthquakes with magnitude 4 on the Gutenberg-Richter scale achieve an acceleration of approximately 0.12 m/s² close to the epicenter, which can be detected with the Phidgets, but barely exceeds the background noise level of the phones. However, earthquakes of magnitude 5 achieve acceleration of 0.5 m/s², increasing to roughly 1.5 m/s² for magnitude 6 events. These phones sample their accelerometers at between 50 Hz and 100 Hz, which is comparable to many high-fidelity seismic sensors. These numbers suggest that cell phone accelerometers should be sensitive enough to be able to detect large earthquakes.

So far, the CSN is a research prototype, and sensors have been deployed and are operated by the members of our research group. We anticipate opening the system to a larger community in the near future. The research reported in this paper presents a feasibility study that we performed in order to justify the deployment of the system. Figure 4(a) presents the locations where messages have been reported from in our network.

4.2 Android Client
Figure 4(b) presents an overview of our Android client application. It consists of several components, which we explain in the following. The client for the Phidget sensors follows a similar implementation.

A central policy decision of the system was that the only manner in which information is exchanged between a client computer and the cloud computing system is for the client to send a message to the cloud and await a reply: in effect, to execute a remote procedure call. All information exchanges are initiated by a client, never by the cloud. This helps ensure that participants in the CSN are only sent information at points of their choosing.

Registration. Upon the first startup, the application registers with the Cloud Fusion Center (CFC). The CFC responds with a unique identifier for the client, which will be used in all subsequent communications with the CFC.

Picking Algorithm. A background process runs continuously, collecting data from the sensor. The picking algorithm generates "pick" messages by analyzing raw accelerometer data to determine if the data in the recent past is anomalous.
Figure 4: (a) Map of locations where measurements have been reported from during our pilot deployment of CSN. (b) Architecture of the phone client software (a background service comprising the picking algorithm, pick sender, heartbeat, parameter updates, location estimation, registration, alert listener, and user interface). (c) Experimental setup for playing back historic earthquakes on a shaketable, and testing their effect on the sensors of the CSN system.

The algorithm executes in the background without a user being aware of its existence. It implements the approach discussed in Section 3.

For density estimation, we use a Gaussian mixture model for P[X_{s,t} | E]. The most important design choice is the representation X_{s,t} of the sensor data. Our approach is to compute various features from short time windows (similar to phonemes in speech recognition). The idea is that normal acceleration, e.g., due to walking or manual phone operation, leads to similar signatures of features.

A first challenge is that phones frequently change their orientation. Since accelerometers measure gravity, we first determine (using a decaying average) and subtract out the gravity component from the [X, Y, Z] components of the signal. We then rotate the centered signal so that the estimated gravity component points in the negative Z direction [0, 0, −1]. Figures 5(a) and 5(b) illustrate this process. Since we cannot consistently orient the other axes, we use features that are invariant under rotation around the vertical (Z) axis, by replacing the [X, Y] component by its Euclidean norm ||[X, Y]||_2. We consider time windows of 2.5 seconds length and, for both the Z and ||[X, Y]||_2 components, calculate 16 Fourier coefficients, the second moment, and the maximum absolute acceleration. This procedure results in a 36-dimensional feature vector. To avoid the curse of dimensionality, we perform linear dimensionality reduction by projection on the top 16 principal components. These principal components can be computed using online algorithms [30]. While PCA captures most variance in the training data (normal acceleration patterns), it is expected that unusual events may carry energy in directions not spanned by the principal components. We therefore add the projection error (the amount of variance not captured by the projection) as an additional feature. We arrived at this choice of features, as well as the number k = 6 of mixture components, through cross-validation experiments, using our experimental setup discussed in Section 5.

Our threshold for picking is obtained using online percentile estimation, as detailed in Section 3. In order to bootstrap the deployment of Gaussian mixture models to new phones, our phone client has the capability of updating its statistical model via messages from the CFC. The threshold by which the algorithm on a client computer determines whether an anomaly is present can also be changed by a message from the cloud computer. This allows the CFC to throttle the rate at which a given sensor generates pick messages.
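A compact sketch of the feature computation described above follows (gravity removal with a decaying average, rotation-invariant vertical/horizontal channels, and per-window spectral features). The smoothing constant and FFT binning are illustrative choices, and the PCA projection with the projection-error feature is omitted for brevity.

```python
# Sketch of the per-window feature extraction described in Section 4.2.
import numpy as np

def remove_gravity(accel, alpha=0.999):
    """accel: (T, 3) raw accelerometer samples. Returns (vertical, horizontal)
    1-D signals, invariant to rotations about the gravity axis."""
    gravity = np.array(accel[0], dtype=float)
    vert, horiz = [], []
    for a in accel:
        gravity = alpha * gravity + (1.0 - alpha) * a   # decaying-average gravity estimate
        g_hat = gravity / (np.linalg.norm(gravity) + 1e-9)
        c = a - gravity                                  # centered (gravity-free) sample
        v = np.dot(c, g_hat)                             # component along gravity ("Z")
        h = np.linalg.norm(c - v * g_hat)                # norm of the horizontal part ||[X, Y]||
        vert.append(v)
        horiz.append(h)
    return np.array(vert), np.array(horiz)

def window_features(vert, horiz, n_fourier=16):
    """36-dimensional feature vector for one 2.5 s window: for each channel,
    16 Fourier magnitudes, the second moment, and the maximum absolute value."""
    feats = []
    for x in (vert, horiz):
        spectrum = np.abs(np.fft.rfft(x))[:n_fourier]
        feats.extend(spectrum)
        feats.append(np.mean(x ** 2))       # second moment
        feats.append(np.max(np.abs(x)))     # peak absolute acceleration
    return np.array(feats)
```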
Pick Reporting. Whenever the picking algorithm declares a pick, a message is sent to the CFC, which includes the time, location, and estimated amplitude of the data which triggered the pick. Including the location is important for two reasons. First, for mobile clients, it is more efficient than receiving regular location updates. Second, sending the location is helpful in order to facilitate faster association by avoiding database lookups for every stationary sensor pick. While it should be possible to improve detection performance at the CFC by sending more information or additional rounds of messages, it is unclear if the cost of this communication is acceptable. Electricity and the Internet may be lost shortly after a large quake, and so our system is designed to use minimal messages to report crucial information as quickly as possible.

Heartbeats. At some prespecified interval, "heartbeat" messages are sent to the CFC, allowing the CFC to keep track of which phones are currently running the program. The heartbeats contain the time, location, waveform logs, and a parameter version number. Using the parameter version number, the CFC can determine whether to send updated parameters to each phone or not. This mechanism allows modifications to the picking algorithm without requiring changes to the underlying client software.

User interface. While the main application runs in the background using Android's multitasking capability, the application provides a user interface to display the recorded waveforms. We are currently collaborating with a USGS led effort in earthquake early warning. Our application will connect to the early warning system and be able to issue warnings about when shaking will occur, as well as the estimated epicenter of the event (see Figure 1(b)). While the application is currently a research prototype and not yet deployed in public use, we anticipate that the capability of real-time early warning may convince users to download and install the application.

Power Usage. Battery drain is an important factor in users' decisions to install and run our application. In our experiments on the Motorola Droid, the battery life exceeded 25 hours while continuously running the client (but without any further operation of the phone). This running time would not inconvenience a user who charges their phone each night. However, we are planning to perform further power optimizations and possibly implement duty cycling prior to public release of the client.
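For illustration, the message contents described above could be represented as follows; the field names and wire format are assumptions made for this sketch and are not specified in the paper.

```python
# Illustrative client-to-CFC message payloads (field names are hypothetical).
from dataclasses import dataclass

@dataclass
class PickMessage:            # sent when the picking algorithm declares a pick
    client_id: str            # identifier assigned by the CFC at registration
    time: float               # time of the anomalous window
    latitude: float
    longitude: float
    amplitude: float          # estimated amplitude of the triggering data

@dataclass
class HeartbeatMessage:       # sent at a prespecified interval
    client_id: str
    time: float
    latitude: float
    longitude: float
    parameter_version: int    # lets the CFC decide whether to push updated parameters
```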
4.3 Cloud Fusion Center
The Cloud Fusion Center (CFC) performs the fusion-level hypothesis test defined in (2) and administers the network. In devising a system to serve as a logically central location, we evaluated building our own server network, using virtualized remote servers, having collocated servers, and building our application to work on Google App Engine. App Engine was chosen as the platform for the CFC for several reasons: easy scalability, built in data security, and ease of maintenance.
Figure 5: (a) 5 hours of three-axis accelerometer data recorded during normal cell phone use (before gravity correction). (b) Data from (a) after removing gravity and appropriate signal rotation. (c) Illustration of the density estimation (GMM) based picking algorithm. The red plane shows an operating threshold. Acceleration patterns for which the density does not exceed the threshold result in a pick.

The App Engine platform is designed from the ground up to be scalable to an arbitrary number of clients. As we expect to grow our sensor network to a very high sensor density, this element of the platform's design is very important. What makes the scalability of the platform easily accessible is the fact that incoming requests are automatically load-balanced between instances that are created and destroyed based on current demand levels. This reduces algorithmic complexity, as the load balancing of the network is handled by the platform rather than by code written for our CSN system.

A second consideration in our selection was data security. With the other solutions available to us, if the data we collected was stored on the server network we were using, then, without redundant servers in separate geographies, we risked losing all of our data to the very earthquakes we hoped to record. App Engine solves this problem for us by automatically replicating datastore writes to geographically separate data centers as the writes are processed. This achieves the level of data redundancy and geographical separation we require, without forcing us to update our algorithms. Other network storage solutions would have been possible as well, but having it built into the platform meant that latency for code accessing the storage would be lower.

A final compelling reason to select App Engine was its ease of maintenance. Rather than spending time building server images and designing them to coordinate with each other, we were able to immediately begin working on the algorithms that were most important to us. Server maintenance, security, and monitoring are all handled by App Engine and do not take away from the time of the research team members.

App Engine also includes a number of other benefits we considered. First, it utilizes the same front ends that drive Google's search platform and, consequently, greatly reduces latency to the platform from any point in the world. Since we plan to expand this project beyond Southern California, this is very useful. Second, the platform supports live updates to running applications. Rather than devising server shutdown, update, and restart mechanisms as is commonly required, we can simply redeploy the application that serves our sensors, and all new requests to the CFC will see the new code instead of the old code with no loss of availability.

All of these features do not come without a price, however. We will discuss what we perceive as the two largest drawbacks of the platform: loading requests and design implications.

Loading Requests. Because App Engine dynamically scales the number of instances available to serve a given application as the volume of requests per unit time changes, it creates a phenomenon known as a loading request. This request is named in this manner because it is the first request to a new instance of the application. That is, when App Engine allocates a new instance to serve increasing traffic demands, it sends an initial live request to that instance. In Java, this results in the initialization of the Java Virtual Machine, including loading all of the appropriate libraries. Over the last three months, we experienced loading requests with a median frequency of 9.52% of all requests. While this means that 90.48% of requests did not experience increased latency as a result of the platform, the remaining requests experienced a median increased processing duration of 5,400 ms. Because of the extreme penalty paid by loading requests, their presence dominates the figures when examining average request duration. This results in an unusual property of App Engine, which is that the system performs much better at higher constant request loads.

Fig. 6(a) shows that, as the request volume increases, the average duration of each request decreases. This is a result of a reduced impact of loading requests. This data leads to the conclusion that if we avoid potential bottleneck points such as datastore writes, we can expect cloud performance to stay the same or get better for any increased future load imposed on the system (e.g., as the number of sensors scales up).

Design Implications. When designing an algorithm to run on App Engine, the algorithm has to fit inside the constraints imposed by the architecture. There are a few factors to consider. First, as a result of the automatic scaling done by App Engine, every request to the system exists in isolation. That is, the running requests maintain no shared state, nor do they have any inter-process communication channels. Additionally, since there are no long-running background processes, maintaining any form of state generated as a result of successive calls is more difficult. In order to accurately ascertain the number of incoming picks in a unit time over a specified geography, we had to surmount these hurdles.

The only common read/write data sources provided are memcache (a fast key-value store) and the datastore. The datastore is a persistent object store used for permanent data archiving for future analysis or retrieval. Long-term state which changes infrequently, such as the number of active sensors in a given region, is stored and updated in the datastore, but cached in memcache for quick access. Due to its slower performance, particularly in aggregating writes for immediate retrieval by other processes, the datastore is unsuitable for short-term state aggregation.

Short-term state, such as the number of picks arriving in an interval of time in a particular region, is stored in memcache.
'#"
!"#$%&#'()*+',-$%./0'123'
'!"
&"
%"
$"
#"
!"
!" (!" '!!" '(!" #!!" #(!" )!!" )(!"
4-56#$'/7'()*+'8#9-#2:2')0'%0';/-$'

(a) Pick duration (b) Associator (c) Android Client


Figure 6: (a) Average duration of a pick request as a function of system load. (b) Model of dispersed sensors using a hash to a uniform
grid to establish proximity. (c) A picture of the CSN android client in debug mode, capturing picks.

While memcache is not persistent, as objects can be ejected from the cache due to memory constraints, operations that utilize the memcache are much faster. Memcache is ideal for computations that need to occur quickly, and, because memcache allows values to set an expiry time, it is also perfect for data whose usefulness expires after a period of time. That is, after a long enough period of time has passed since a pick arrived, it can no longer be used in detecting an event; therefore, its contributed value to the memcache can be safely expired.

Memcache operates as a key value store, effectively a distributed hash table. In order to determine how many sensors sent picks in a given period of time, we devised a system of keys which could be predictably queried to ascertain the number of reporting sensors. We used a geography hashing scheme to ascribe an integer value to every latitude/longitude pair, which generates a uniform grid of cells whose size we can control, with each sensor fitting into one cell in the grid (see Fig. 6(b)). Incoming picks then update the key corresponding to a string representation of the geographical hash and a time bucket derived by rounding the arrival time of the pick to the nearest second.

In this manner, independent processes aggregate their state, and each process runs the hypothesis testing algorithm of Section 3 in the cell whose state it updated to determine the value of Ê_t. If Ê_t = 0, then no action needs to be taken. If Ê_t = 1, a task queue task is launched to initiate the alert process; the task is named using the hash values that generated the alert. Each named task creates a 'tombstone' (a marker in the system) on execution which prevents additional tasks with the same name from being created, so even if successive picks also arrive at the Ê_t = 1 conclusion, we know that only one alert will be sent out for a given set of inputs.
3. A simplified GMM based approach, which uses features from
5. EXPERIMENTS a sliding window of 2.5s length
Could a network of cheap community sensors detect the next 4. Our full GMM approach, which combines combines features
large earthquake? We obtain accurate estimates of the distribution of the last 2.5s with features from the previous 5s (to better
of normal sensor data by collecting records from volunteers’ phones detect the onset of transient events).
and and USB accelerometers. Using an earthquake shaketable and
Notice that implementing the hypothesis testing baseline in an actual
records of ground acceleration gathered by the Southern California
system would require waiting until the sensors experienced such a
Seismic Network (SCSN) during moderately large earthquakes, we
number of earthquakes, carefully annotating the data, and then train-
obtain estimates of each sensor’s ROC curves. These estimates of
ing a density estimator. On the other hand, our anomaly detection
sensor performance allow us to evaluate the effect of network density
approach can be used as soon as the sensors have gathered enough
and composition on the detection rate. Finally, we apply the learned
data for an estimate of P [Xs,t | Et = 0]. We applied each of these
detection models to data from the 2010 Baja California M7.2 quake.
four algorithms to test data created by superimposing historic earth-
Data Sets. While earthquakes are rare, data gathered from com- quake recordings of magnitude M5-5.5 on phone and Phidget data
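A sketch of this simulation step, assuming the SCSN record and a community-sensor background segment are available as arrays; scipy.signal.resample is used here for the rate conversion, although the paper does not specify the resampling method.

```python
# Sketch of producing a simulated sensor observation from a seismic record.
import numpy as np
from scipy.signal import resample

def simulate_observation(scsn_record, scsn_rate_hz, background, target_rate_hz=50.0):
    """scsn_record: 1-D ground acceleration from a high-fidelity station;
    background: 1-D community-sensor noise sampled at target_rate_hz."""
    n_out = int(round(len(scsn_record) * target_rate_hz / scsn_rate_hz))
    quake = resample(scsn_record, n_out)          # subsample to 50 samples per second
    n = min(len(quake), len(background))
    return background[:n] + quake[:n]             # superimpose the quake onto normal data
```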
Picking Algorithm Evaluation. In our first experiment, we evaluate the sensor-level effectiveness of our density-based anomaly detector. We compare four approaches: two baselines and two versions of our algorithm.

1. A hypothesis-testing based approach (as used by classical decentralized detection), which uses a GMM-based density estimate both for P[X_{s,t} | E_t = 0] and for P[X_{s,t} | E_t = 1]. For training data, we use 80 historic earthquake examples of magnitude M4.5-5, superimposed on the sensor data.
2. A domain-specific baseline algorithm, STA/LTA, which exploits the fact that the energy in earthquakes is broadband in 0-10 Hz. It compares the energy in those frequencies in the last 2.5 s to the energy at those frequencies in the previous 5 s; a sharp rise in this ratio is interpreted as a quake (a sketch follows this list).
3. A simplified GMM based approach, which uses features from a sliding window of 2.5 s length.
4. Our full GMM approach, which combines features of the last 2.5 s with features from the previous 5 s (to better detect the onset of transient events).

Notice that implementing the hypothesis testing baseline in an actual system would require waiting until the sensors experienced such a number of earthquakes, carefully annotating the data, and then training a density estimator. On the other hand, our anomaly detection approach can be used as soon as the sensors have gathered enough data for an estimate of P[X_{s,t} | E_t = 0]. We applied each of these four algorithms to test data created by superimposing historic earthquake recordings of magnitude M5-5.5 on phone and Phidget data that was not used for training. The resulting estimated sensor ROC curves are shown in Fig. 7(a) and Fig. 7(b), respectively.

First note that in general the performance for the Phidgets is much better than for the phones. This is expected, as phones are subject to much more background noise, and the quality of the accelerometers in the Phidgets is better than those in the phones.
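For reference, here is a minimal sketch of an STA/LTA-style trigger as described in item 2 of the list above; the trigger threshold and the band-power computation are illustrative assumptions.

```python
# Sketch of an STA/LTA-style baseline trigger comparing recent vs. preceding 0-10 Hz energy.
import numpy as np

def band_power(x, fs_hz, f_lo=0.0, f_hi=10.0):
    """Average spectral power of x in the [f_lo, f_hi] band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    return spectrum[(freqs >= f_lo) & (freqs <= f_hi)].sum() / len(x)

def sta_lta_trigger(signal, fs_hz=50.0, threshold=3.0):
    """signal: recent 1-D acceleration samples covering at least 7.5 s.
    Compares the last 2.5 s ("short-term") to the previous 5 s ("long-term")."""
    n_short, n_long = int(2.5 * fs_hz), int(5.0 * fs_hz)
    short = signal[-n_short:]
    long_ = signal[-(n_short + n_long):-n_short]
    ratio = band_power(short, fs_hz) / (band_power(long_, fs_hz) + 1e-12)
    return ratio >= threshold   # a sharp rise in the ratio is interpreted as a quake
```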
Figure 7: In all plots, the system-level false positive rate is constrained to 1 per year and the achievable detection performance is shown. (a,b) Sensor-level ROC curves on magnitude M5-5.5 events, for Android (a) and Phidget (b) sensors. (c) Detection rate as a function of the number of sensors in a 20 km × 20 km cell. We show the achievable performance guaranteeing one false positive per year, while varying the number of cells covered. (d,e) Detection performance for one cell, depending on the number of phones and Phidgets (one pick per minute, LtSt features; two views). (f) Actual detection performance for the Baja M7.2 event (100 iterations averaged). Note that our approach outperforms classical hypothesis testing, and closely matches the predicted performance.

For example, while the STA/LTA baseline provides good performance for the Phidgets (achieving up to 90% detection performance with minimal false positives), it performs extremely poorly for the phone client (where it barely outperforms random guessing). The other techniques achieve close to 100% true positive rate even for very small false positive rates. For the phone data, both our anomaly detection approaches outperform the hypothesis testing baseline, even though they use less training data (no data about historic earthquakes). In particular, for low false positive rates (less than 5%), the full GMM LtSt model outperforms the simpler model (that only considers 2.5 s sliding windows). Overall, we find that both for the phones and the Phidgets, we can achieve detection performance far better than random guessing, even for very small false positive rates, and even for lower magnitude (M5-5.5) events. We expect even better detection performance for stronger events.

Sensor Fusion. Based on the estimated sensor-level ROC curves, we can now estimate the fusion-level detection performance. To avoid overestimating the detection performance, we reduce the estimated true positive rates, assuming that a certain fraction of the time (10% in our case) the sensors produce pure random noise. We now need to specify communication constraints p̄ on how frequently a message can be sent from each sensor, as well as a bound P̄ on the fusion-level false positive rate. We choose p̄ to be at most one message per minute, and P̄ to be at most one fusion-level false positive per year. This fusion-level false positive rate was chosen as representative of the time scale the CSN must operate on; in practice it would depend on the cost of taking unnecessary response measures.

We consider sensors located in geospatial areas of size 20 km × 20 km, called cells. The choice of this area is such that, due to the speed of seismic waves (≈ 5-10 km/s), most sensors within one cell would likely detect the earthquake when computing features based on a sliding window of length 2.5 s. However, in order to achieve larger spatial coverage we will need many spatial cells of 20 km × 20 km. For example, roughly 200 such cells would be needed to cover the Greater Los Angeles area. Increasing the number of cells additively increases the number of false positives due to the fact that multiple hypotheses (one per cell) are tested simultaneously. Consequently, to maintain our objective of one system-wide false positive per year, we must decrease the rate of annual false positives per cell. The effect on detection rates from this compensation as a function of the total number of cells is shown in Figure 7(c). Notice that even for 200 cells, approximately 60 phones per cell suffice to achieve close to 100% detection performance, as long as they are located close to the epicenter.

Sensor Type Tradeoffs. A natural question is: what is the tradeoff between the different sensor types? Figures 7(d) and 7(e) show the estimated detection performance as a function of the number of Phidgets and number of phones in the area, when constrained to one false alarm per year. Our results indicate that approximately 50 phones or 10 Phidgets should be enough to detect a magnitude 5 and above event with close to 100% success.

The results in Figures 7(d) and 7(e) also allow us to estimate how we could ensure sufficient detection performance if a given area contains only a limited number of active phone clients. For example, if only 25 phones are active in a cell, we could manually deploy 5 additional Phidgets to boost the detection performance from close to 70% to almost 100%.
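One simple way to implement the compensation across cells is a Bonferroni-style split of the system-wide false positive budget, recomputing the per-cell pick-count threshold N(p_0) for the tighter budget. The even split, and the conversion of the one-per-year objective into a per-decision bound, are assumptions; the paper does not specify the exact allocation.

```python
# Sketch: tighten the per-cell decision threshold when many cells are tested.
from scipy.stats import binom

def per_cell_threshold(num_cells, sensors_per_cell, p0, system_fp_budget):
    """system_fp_budget: per-decision false positive bound for the whole system,
    split evenly (Bonferroni-style) across num_cells independently tested cells."""
    per_cell_budget = system_fp_budget / num_cells
    # Smallest pick-count threshold keeping this cell within its budget.
    return next(n for n in range(sensors_per_cell + 1)
                if binom.sf(n, sensors_per_cell, p0) <= per_cell_budget)
```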
Figure 8: Shake table comparison of a 24-bit EpiSensor (a), an Android on the table (b), and an Android in a backpack (c); each panel shows acceleration (m/s²) over roughly 20 seconds. Notice that the phone recordings closely match those of the affixed high-fidelity EpiSensor.

Shaketable Validation. Our previous experiments have used synthetically produced data (recorded seismic events superimposed on phone recordings) to simulate how different detection algorithms may respond to a moderately large earthquake. Is such a simulation-based approach valid? Would these sensors actually detect an earthquake from their own recordings? To answer these questions, we take recordings of three large historical earthquakes and play them back on a shaketable (see Figure 4(c) for an illustration).

First, we test the ability of Android phones to accurately capture seismic events, relative to one of the sensors used in the SCSN. We reproduce records of three large M6-8 earthquakes on the shaketable, and record the motion using one Android placed on the table and another in a backpack on the table. Ground truth acceleration is provided by a 24-bit EpiSensor accelerometer mounted to the table. A sample record from each sensor is shown in Figure 8. Unlike the EpiSensor, the phones are not affixed and are free to slide. The backpack also introduces an unpredictable source of error. Despite these significant challenges, after properly resampling both signals and aligning them temporally, we obtain an average correlation coefficient of 0.745, with a standard deviation of 0.0168. This result suggests that the phones reproduce the waveforms rather faithfully.
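The comparison just described can be reproduced with generic signal processing: resample both records onto a common time grid, align them by the lag that maximizes their cross-correlation, and compute the Pearson correlation coefficient over the overlapping samples. The sketch below is an illustration of that procedure under these assumptions (a fixed common sampling rate, a single best lag), not the exact analysis code used for the experiments.

    import numpy as np

    def aligned_correlation(ref, fs_ref, test, fs_test, fs_common=50.0):
        """Resample two accelerometer traces onto a common time grid, align
        them by the lag maximizing their cross-correlation, and return the
        Pearson correlation coefficient over the overlapping samples."""
        def to_common(x, fs):
            t_old = np.arange(len(x)) / fs
            t_new = np.arange(0.0, t_old[-1], 1.0 / fs_common)
            return np.interp(t_new, t_old, np.asarray(x, dtype=float))

        a = to_common(ref, fs_ref)
        b = to_common(test, fs_test)
        a = a - a.mean()
        b = b - b.mean()
        # Lag (in samples) at which the test trace best matches the reference.
        lag = np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1)
        if lag >= 0:
            a = a[lag:]      # reference recording started earlier: drop its head
        else:
            b = b[-lag:]     # test recording started earlier: drop its head
        n = min(len(a), len(b))
        return np.corrcoef(a[:n], b[:n])[0, 1]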
A more important question than faithful reproduction of waveforms is whether the sensors can detect an earthquake played back on the shaketable. To assess this, we use the model trained on background noise data, as described above. We further use percentile estimation to choose the operating point that we experimentally determined above to lead to high system-level detection performance. All six of the recordings (three from the phone on the table and three from the phone in the backpack) were successfully detected.
The Previous Big One. To perform an end-to-end test of the entire system, we carried out an experiment with the goal of finding out whether our CSN would have been able to detect the last big event. A recent major earthquake in Southern California occurred on April 4, 2010. This M7.2 quake in Baja, California was recorded by the SCSN, although the nearest station was more than 60 km from the event epicenter. Using 8 recordings of this event, at distances of 63 km to 162 km, we produce simulated Android data and evaluate how many phones would have been needed to detect this event. Specifically, we constrain the system as before to one false alarm per year and one message per minute in order to determine detection thresholds, sensor operating points and sensor thresholds for both the GMM anomaly detector and the hypothesis testing detector, for each deployment size. We then simulate observations for each sensor in a deployment ranging from 1 sensor to 100 sensors. The models and thresholds are then applied to these observations to produce picks; the fusion-center hypothesis test is then performed, and a decision is made as to whether an event has occurred. The average detection rates for each deployment size (averaged over 100 iterations, using different Android data to simulate each observation) are shown in Figure 7(f), along with the estimated detection rates for the GMM-based anomaly detection. The latter estimate is based on the ROC that we estimated using a different collection of seismic events, as explained in our Picking Algorithm Evaluation section. Notice that the actual detection performance matches the predicted detection performance well. As a baseline, we compare against the hypothesis testing detector (trained on 80 smaller-magnitude earthquakes). Anomaly detection significantly outperforms hypothesis testing, and suggests that a deployment of 60 phones in a cell 60 km from the epicenter would have been quite likely to detect the Baja, California M7.2 event.
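The structure of this end-to-end simulation can be summarized by a short Monte Carlo loop: for each deployment size, repeatedly draw per-sensor pick outcomes, apply the fusion threshold, and record how often the event is declared. In the sketch below, pick_prob_fn and fusion_threshold_fn are placeholders for running the calibrated pickers on the simulated Android records and for the thresholds derived from the one-false-alarm-per-year constraint; only the averaging loop is meant to reflect the procedure described above.

    import numpy as np

    rng = np.random.default_rng(0)

    def detection_rate(deployment_sizes, pick_prob_fn, fusion_threshold_fn,
                       n_trials=100):
        """Fraction of trials in which at least K of the N deployed sensors pick.

        pick_prob_fn(n): per-sensor pick probabilities for a deployment of
            size n (placeholder for running the picker on simulated records).
        fusion_threshold_fn(n): pick count K required by the fusion center
            (placeholder for the calibration enforcing the false alarm budget).
        """
        rates = {}
        for n in deployment_sizes:
            k = fusion_threshold_fn(n)
            p = pick_prob_fn(n)                      # shape (n,)
            picks = rng.random((n_trials, n)) < p    # Bernoulli pick outcomes
            rates[n] = float(np.mean(picks.sum(axis=1) >= k))
        return rates

    # Illustrative inputs: a flat 40% pick probability and a simple threshold rule.
    print(detection_rate(range(1, 101, 10),
                         pick_prob_fn=lambda n: np.full(n, 0.4),
                         fusion_threshold_fn=lambda n: max(1, n // 10)))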
6. RELATED WORK
In the following, we review prior work that is related to various aspects of this paper and that has not been discussed yet.

Distributed and Decentralized Detection. There has been a great deal of work in decentralized detection. The classical hierarchical hypothesis testing approach has been analyzed by Tsitsiklis [27]. Chamberland et al. [2] study classical hierarchical hypothesis testing under bandwidth constraints. Their goal is to minimize the probability of error under a constraint on the total network bandwidth (similar to our constraint p̄ on the number of messages sent per sensor). Both of these approaches require models for P[Xs,t | Et = 1], which are not available in our case. Wittenburg et al. [31] study distributed event detection in WSNs. In contrast to the work above, their approach is distributed rather than decentralized: nearby nodes collaborate by exchanging feature vectors with neighbors before making a decision. Their approach requires a training phase, providing examples of events that should be detected. Martincic et al. [16] also study distributed detection in multi-hop networks. Nodes are clustered into cells, and the observations within a cell are compared against a user-supplied "event signature" (a general query on the cell's values) at the cell's leader node (cluster head). The communication requirements of the last two approaches are difficult to meet in community sensing applications, since sensors may not be able to communicate with their neighbors due to privacy and security restrictions. Both approaches require prior models (training data providing examples of events that should be detected, or appropriately formed queries) that may not be available in the seismic monitoring domain.

Anomaly Detection. There has also been a significant amount of prior work on anomaly detection. Yamanishi et al. [32] develop the SmartSifter approach, which uses Gaussian or kernel mixture models to efficiently learn anomaly detection models in an online manner. While their results apply only in the centralized setting, they support the idea of using GMMs for anomaly detection; this approach could be extended to learn, for each phone, a GMM that adapts to non-stationary sources of data. Davy et al. [7] develop an online approach for anomaly detection using online Support Vector Machines. One of their experiments is to detect anomalies in accelerometer recordings of industrial equipment. They produce frequency-based (spectrogram) features, similar to the features we use. However, their approach assumes the centralized setting.

Subramaniam et al. [25] develop an approach for online outlier detection in hierarchical sensor network topologies. Sensors learn models of their observations in an online way using kernel density estimators, and these models are folded together up the hierarchy to characterize the distribution of all sensors in the network. Rajasegarar et al. [23] study distributed anomaly detection using one-class SVMs in wireless sensor networks. They assume a tree topology. Each sensor clusters its (recent) data, and reports the cluster descriptions to its parent. Clusters are merged, and propagated towards the root. The root then decides if the aggregate clusters are anomalous. Neither of these approaches is suitable for the community sensing communication model, where each sensor has to make independent decisions. Zhang et al. [33] use online SVMs to detect anomalies in process system calls in the context of intrusion detection. Onat et al. [21] develop a system for detecting anomalies based on sliding-window statistics in mobile ad hoc networks (MANETs). However, their approach requires nodes to share observations with their neighbors.
Seismic Networks. Perhaps the most closely related system is the Quake-Catcher Network [4]. While the Quake-Catcher Network shares the use of MEMS accelerometers in USB devices and laptops, our system differs in its use of algorithms designed to execute efficiently on cloud computing systems and of statistical algorithms for detecting rare events, particularly with heterogeneous sensors including mobile phones (which create far more complex statistical challenges). Kapoor et al. [14] analyze the increase in call volume after or during an event to detect earthquakes. Another related effort is the NetQuakes project [28], which deploys expensive stand-alone seismographs with the help of community participation. Our CSN Phidget sensors achieve a different tradeoff between cost and accuracy. Several Earthquake Early Warning (EEW) systems have been developed to process data from existing sparse networks of high-fidelity seismic sensors (such as the SCSN). The Virtual Seismologist [6] applies a Bayesian approach to EEW, using prior information and seismic models to estimate the magnitude and location of an earthquake as sources of information arrive. ElarmS [1] uses the frequency content of initial P-wave measurements from the sensors closest to the epicenter, and applies an attenuation function to estimate ground acceleration at further locations. We view our approach of community seismic networking as fully complementary to these efforts, as it provides a higher density of sensors and a greater chance of measurements near the epicenter. Our experiments provide encouraging results on the performance improvements that can be obtained by adding community sensors to an existing deployment of sparse but high-quality sensors.

Community and Participatory Sensing. Community sensing has been used effectively in a variety of problem domains. Several researchers [13, 26, 17, 12, 15] have used mobile phones to monitor traffic and road conditions. Community sensors offer great potential in environmental monitoring [18, 29] by obtaining up-to-date measurements of the conditions participants are exposed to. Mobile phones and body sensors are used to encourage physical activity by categorizing body motion and comparing activities to exercise goals [5]. Like our CSN, these applications stand to benefit from the high density of existing community sensors, but they are fundamentally different in their aim of monitoring phenomena rather than detecting infrequent events.

7. CONCLUSIONS
We studied the problem of detecting rare, disruptive events using community-held sensors. Our approach learns local statistical models characterizing normal data (e.g., acceleration due to normal manipulation of a cellphone) in an online manner. Using local online percentile estimation, it can choose operating points that guarantee bounds on the sensor-level false positive frequency, as well as on the number of messages sent per sensor. We then showed how a conservative estimate of the sensors' ROC curves can be used to make detection decisions at a fusion center that guarantee bounds on the false positive rate for the entire system while maximizing a lower bound on the detection performance. The pessimistic predicted true positive rates allow us to assess whether a given density of sensors is sufficient for the intended detection task. This online decentralized anomaly detection approach allows us to cope with the fundamental challenge that rare events are very difficult or impossible to model and characterize a priori. It also allows the use of heterogeneous, community-operated sensors that may differ widely in quality and communication constraints.

We then presented an implementation of our approach in the Community Seismic Network (CSN), a novel community sensing project with the goal of rapidly detecting earthquakes using cell phone accelerometers and consumer USB devices. We presented empirical evidence suggesting that cloud computing is an appropriate platform for real-time detection of rare, disruptive events, as it naturally copes with peak loads and is designed for redundancy and replication. We furthermore experimentally assessed the sensitivity of our sensors, estimating and evaluating ROC curves using experiments involving data obtained through the playback of historical earthquakes on shaketables. These assessments provide evidence of the likely detection performance of the CSN as a function of the sensor density. For example, we found that approximately 100 Android clients, or 20 Phidgets, per 20 km × 20 km area may be sufficient to achieve close to 100% detection probability for events of magnitude 5 and above, while bounding the false positive rate by one per year.

While these results are very promising, they have to be taken with some care. In particular, the results are based on the assumption that the phones are located very close to the epicenter of the quake (so they experience maximum acceleration). To enable coverage of the Greater Los Angeles area, this would require a uniformly high density of sensors (tens of thousands of sensors) across the entire domain. We defer the detailed study of spatial effects, and of the number of sensors needed to achieve spatial coverage, to future work.

We believe that our results provide an important step towards harnessing community-held sensors to provide strong benefits for our society.

Acknowledgments. The authors wish to thank the CSN Team: Robert Clayton, Thomas Heaton, Monica Kohler, Julian Bunn, Annie Liu, Leif Strand, Minghei Cheng and Daniel Rosenberg. This research was partially supported by ONR grant N00014-09-1-1044, NSF grants CNS-0932392 and IIS-0953413, a grant from the Moore Foundation and a gift from Microsoft Corporation.
8. REFERENCES
[1] R. Allen, H. Brown, M. Hellweg, O. Khainovski, P. Lombard, and D. Neuhauser. Real-time earthquake detection and hazard assessment by ElarmS across California. Geophysical Research Letters, 36(5), 2009.
[2] J. Chamberland and V. Veeravalli. Decentralized detection in sensor networks. Signal Processing, IEEE Transactions on, 51(2):407–416, 2003.
[3] cnet news. Android market share to surge over next four years. http://news.cnet.com/8301-1035_3-20015799-94.html, September 2010.
[4] E. Cochran, J. Lawrence, C. Christensen, and R. Jakka. The Quake-Catcher Network: Citizen Science Expanding Seismic Horizons. Seismological Research Letters, 80(1):26, 2009.
[5] S. Consolvo, D. McDonald, T. Toscos, M. Chen, J. Froehlich, B. Harrison, P. Klasnja, A. LaMarca, L. LeGrand, R. Libby, et al. Activity sensing in the wild: a field trial of ubifit garden. In Proc. of CHI, pages 1797–1806. ACM, 2008.
[6] G. Cua, M. Fischer, T. Heaton, and S. Wiemer. Real-time performance of the Virtual Seismologist earthquake early warning algorithm in southern California. Seismological Research Letters, 80(5):740, 2009.
[7] M. Davy, F. Desobry, A. Gretton, and C. Doncarli. An online support vector machine for abnormal events detection. Signal Processing, 86(8):2009–2025, 2006.
[8] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14(1):11–21, 2004.
[9] E. Giné-Masdéu and A. Guillou. Rates of strong uniform consistency for multivariate kernel density estimators. Ann. Inst. H. Poincaré, 38:907–921, 2002.
[10] R. Gomes, M. Welling, and P. Perona. Incremental learning of nonparametric Bayesian mixture models. In Proc. of CVPR 2008. IEEE, 2008.
[11] M. Greenwald and S. Khanna. Space-efficient online computation of quantile summaries. In Proceedings of the 2001 ACM SIGMOD international conference on Management of data, pages 58–66. ACM, 2001.
[12] R. Herring, A. Hofleitner, S. Amin, T. Nasr, A. Khalek, P. Abbeel, and A. Bayen. Using mobile phones to forecast arterial traffic through statistical learning. Submitted to Transportation Research Board, 2009.
[13] B. Hoh, M. Gruteser, R. Herring, J. Ban, D. Work, J. Herrera, A. Bayen, M. Annavaram, and Q. Jacobson. Virtual trip lines for distributed privacy-preserving traffic monitoring. In Proc. of the International conference on Mobile systems, applications, and services, pages 17–20. Citeseer, 2008.
[14] A. Kapoor, N. Eagle, and E. Horvitz. People, Quakes, and Communications: Inferences from Call Dynamics about a Seismic Event and its Influences on a Population. In Proceedings of AAAI Symposium on Artificial Intelligence for Development, 2010.
[15] A. Krause, E. Horvitz, A. Kansal, and F. Zhao. Toward community sensing. In Proceedings of the 7th international conference on Information processing in sensor networks, pages 481–492. IEEE Computer Society, 2008.
[16] F. Martincic and L. Schwiebert. Distributed event detection in sensor networks. In Systems and Networks Communications, 2006. ICSNC'06. International Conference on, page 43. IEEE, 2006.
[17] P. Mohan, V. Padmanabhan, and R. Ramjee. Nericell: rich monitoring of road and traffic conditions using mobile smartphones. In Proceedings of the 6th ACM conference on Embedded network sensor systems, pages 323–336. ACM, 2008.
[18] M. Mun, S. Reddy, K. Shilton, N. Yau, J. Burke, D. Estrin, M. Hansen, E. Howard, R. West, and P. Boda. Peir, the personal environmental impact report, as a platform for participatory sensing systems research. In Proceedings of the 7th international conference on Mobile systems, applications, and services, pages 55–68. ACM, 2009.
[19] J. I. Munro and M. S. Paterson. Selection and sorting with limited storage. Theoretical Computer Science, 12(3):315–323, 1980.
[20] X. Nguyen, M. Wainwright, and M. Jordan. Nonparametric decentralized detection using kernel methods. Signal Processing, IEEE Transactions on, 53(11):4053–4066, 2005.
[21] I. Onat and A. Miri. An intrusion detection system for wireless sensor networks. In Wireless And Mobile Computing, Networking And Communications, 2005 (WiMob'2005), IEEE International Conference on, volume 3, pages 253–259. IEEE, 2005.
[22] H. V. Poor and O. Hadjiliadis. Quickest Detection. Cambridge University Press, 2009.
[23] S. Rajasegarar, C. Leckie, J. Bezdek, and M. Palaniswami. Centered hyperspherical and hyperellipsoidal one-class support vector machines for anomaly detection in sensor networks. Information Forensics and Security, IEEE Transactions on, 5(3):518–533, 2010.
[24] M. Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[25] S. Subramaniam, T. Palpanas, D. Papadopoulos, V. Kalogeraki, and D. Gunopulos. Online outlier detection in sensor data using non-parametric models. In Proceedings of the 32nd international conference on Very large data bases, pages 187–198. VLDB Endowment, 2006.
[26] A. Thiagarajan, L. Ravindranath, K. LaCurts, S. Madden, H. Balakrishnan, S. Toledo, and J. Eriksson. VTrack: accurate, energy-aware road traffic delay estimation using mobile phones. In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, pages 85–98. ACM, 2009.
[27] J. Tsitsiklis. Decentralized detection by a large number of sensors. Mathematics of Control, Signals, and Systems (MCSS), 1(2):167–182, 1988.
[28] USGS. NetQuakes. http://earthquake.usgs.gov/monitoring/netquakes, 2010.
[29] P. Völgyesi, A. Nádas, X. Koutsoukos, and Á. Lédeczi. Air quality monitoring with sensormap. In Proceedings of the 7th international conference on Information processing in sensor networks, pages 529–530. IEEE Computer Society, 2008.
[30] M. K. Warmuth and D. Kuzmin. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287–2320, 2008.
[31] G. Wittenburg, N. Dziengel, C. Wartenburger, and J. Schiller. A system for distributed event detection in wireless sensor networks. In Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks, pages 94–104. ACM, 2010.
[32] K. Yamanishi, J. Takeuchi, G. Williams, and P. Milne. On-line unsupervised outlier detection using finite mixtures with discounting learning algorithms. Data Mining and Knowledge Discovery, 8(3):275–300, 2004.
[33] Y. Zhang, W. Lee, and Y. Huang. Intrusion detection techniques for mobile wireless networks. Wireless Networks, 9(5):545–556, 2003.
