Understanding Events for Wide-Area Situational Awareness

Chumki Basu, Ankit Agrawal, Jagabondhu Hazra, Ashok Kumar, Deva P. Seetharam
Smarter Energy Systems
IBM India Research Laboratory, Bangalore, India
{chumbasu, anagraw7, jahazra1, ashokponkumar, dseetharam}@in.ibm.com

Jean Béland (1), Sébastien Guillon (2), Innocent Kamwa (1), Claude Lafond (1)
(1) Hydro-Québec Research Institute (IREQ), Varennes (Québec), Canada
(2) Hydro-Québec TransÉnergie, Montréal (Québec), Canada
(1) {beland.jean, kamwa.innocent, lafond.claude}@ireq.ca, (2) guillon.sebastien@hydro.qc.ca
Abstract—With synchrophasor-based wide area situational
awareness systems, the number of data signals that an operator
must process at any given time, especially during disturbances,
can be overwhelming. To assist both the operations team as well as
other teams monitoring and studying the state of the power
system, we propose an event understanding framework that
processes raw PMU data, generates and represents pertinent
event metadata that can be searched and browsed, and derives
inferences that can be used to automatically generate reports on
important grid behaviors. In this paper, we describe how we
detect basic events on the grid and describe an event ontology
that provides a vocabulary to categorize these events. We extend
this ontology by introducing spatial and temporal relations. As a
first use case from post-mortem analysis, we demonstrate how an
end user can search for and retrieve event episodes as part of
“what if” scenario analysis. As a second use case, we show how to
“screen” fault locations with voltage profiles, a base model of the
domain, an inference model, and application of a rule-based
reasoner. Based on our initial results, we conclude that this is a
promising step towards fault localization, and consequently,
automatic, post-disturbance report generation.
Keywords—wide area situational awareness, PMU data
analytics, representing and reasoning about grid events.
I. INTRODUCTION
Wide-area situational awareness systems enable very low
latency and high throughput monitoring, archiving, reporting,
and querying of the state of the power grid. To achieve these
goals, enhanced data handling techniques must be developed
that enrich the functionality provided by current PDC (phasor
data concentrator) technology. For example, event detection
and monitoring are among the functions supported by some PDCs.
If we itemize sample event metadata, these would include
event detection criteria and event triggers with threshold levels
[1]. In this paper, we show how a rich set of event metadata can
be formally represented in an ontology of event types. We
encode in this ontology both temporal and spatial relations. The
ontology is then used to support advanced querying (search and
retrieval) and automatic report generation. We expect that the
combination of this knowledge representation and the methods
that use it would enhance how grid events are understood by
application end users.
II. STATE OF THE ART
Hydro-Québec has a long history in the development of
wide-area monitoring systems [2]. The system currently in use
is SMDA (ver. 5.0) [2][3], which is our baseline. The
framework we propose in this paper complements the existing
framework we propose in this paper complements the existing
technology, integrating elements of both real-time processing
(event detection) and offline processing (historical queries and
report generation). Though the scope of the work described in
the paper is limited to PMU data, we do not foresee any
limitations in the approach that would preclude incorporating
data from multiple data sources (e.g., SCADA, EMS, etc). We
have designed a system architecture that supports very low
latency data ingestion, long-term data storage and retrieval,
and automated reporting. Since there are different user groups
that would benefit from our system, we start by interpreting
raw measurements, and eventually, generate high-level
inferences about events and behaviors on the grid.
We believe wide-area situational awareness systems
should be robust to different sources of data. In addition, there
are common principles that should generalize well across
domains. Towards this end, we build on prior art in the areas
of event detection [4][5] as well as situational awareness
applications in other domains [6][7]. We are also aware of the
common information model (CIM) [8] and ontology
development work in the power systems domain [9][10].
III. BASIC EVENT DETECTION AND CATEGORIZATION
Our system aggregates and records, in real time, voltage
magnitude and phase angle measurements from PMU data files
(currently in Macrodyne format). Frequency values are
computed according to guidelines provided in [3].
A. Basic Events
Basic events are occurrences of fundamental interest in the
power system. They are detected by the system in real-time and
are characterized by the following configurable parameters: 1)
percentage of limit violations, 2) fixed time window when the
violations occur, and 3) threshold or limit for each measured
value. For example, a basic event may be detected if 60% of
the data samples for voltage magnitude within a 200
millisecond time window exceed an upper limit of 1.05 p.u.
For each basic event, we maintain continuous records of PMU
data (as event data records) at the highest sampling rate (60 Hz)
for the time span given by an event analysis window (10
minutes). Event metadata as well as continuous event data records
are stored in a time-series database.
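As a rough illustration, the window-based detection criterion can be sketched as follows; the 1.05 p.u. limit, 60% violation percentage, and 200 ms (12-sample) window are the configurable example values from the text, and the function itself is only a sketch, not the system's implementation:

```python
from collections import deque

def detect_basic_event(samples, limit=1.05, violation_pct=0.60,
                       window_samples=12):
    """Flag a basic event when a configurable fraction of the samples
    in a fixed window violates a threshold.  At 60 Hz, a 200 ms window
    holds 12 samples; all parameter defaults are illustrative."""
    window = deque(maxlen=window_samples)
    events = []
    for i, v in enumerate(samples):
        window.append(v)
        if len(window) == window_samples:
            violations = sum(1 for x in window if x > limit)
            if violations / window_samples >= violation_pct:
                events.append(i)  # index of last sample in offending window
    return events
```

In an operational setting the same test would run incrementally over the live stream rather than over a list, with one event record opened per contiguous run of flagged windows.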
To specialize our event detection approach to the power
system domain, we discuss several filters required to interpret
the data. First, we consider how to handle noise. One option is
to apply a “low pass filter” – in other words, we filter each
phase angle measurement using a low pass filter having a
specific “cutoff frequency”. The value of the cutoff frequency
should be configurable (initially, we assume a default value of
.2Hz). We note that this filter serves a dual purpose – by
computing the difference of the output value of this filter with
each new phasor and comparing it to a pre-defined threshold,
we define a criteria for angle recording. Next, we propose a
conceptualization and organization of power system events.
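The filtering step and the angle-recording criterion above can be sketched as follows, assuming a first-order IIR (exponential smoothing) realization of the low-pass filter; only the 0.2 Hz default cutoff and 60 Hz sampling rate come from the text, while the filter order and the 1-degree recording threshold are placeholder assumptions:

```python
import math

def lowpass_step(prev_out, new_sample, cutoff_hz=0.2, fs=60.0):
    """One step of a first-order IIR low-pass filter (exponential
    smoothing).  The 0.2 Hz cutoff and 60 Hz sampling rate are the
    defaults assumed in the text; the filter order is an assumption."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    return prev_out + alpha * (new_sample - prev_out)

def should_record_angle(filtered_angle, new_phasor_angle, threshold_deg=1.0):
    """Angle-recording criterion: record when the new phasor deviates
    from the filtered trend by more than a pre-defined threshold
    (the 1-degree default is a hypothetical placeholder)."""
    return abs(new_phasor_angle - filtered_angle) > threshold_deg
```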
B. Event Categorization
For event categorization, we adopt a hierarchical approach
shown in Figure 1: at the top-level, we start with a “basic (grid)
event”, which is specialized as we move down the hierarchy.
Our system recognizes three types of common basic events:
“voltage”, “frequency”, and “phase angle” events. (Note that
there are many other types of power system events and the
ontology can be extended to include these). Each event type
can be further subcategorized. For example, a voltage event
may be subcategorized as “voltage dip”, “voltage swell”, etc.
The system determines the subcategory of the violation by
computing an attribute function. For example, a (binary)
attribute function for a voltage dip checks for a reduction in
nominal voltage in excess of 10% for a period ranging from 8
msec to 1 min and assigns the value “true” if this condition is
satisfied [11]. We encode this information in an event ontology
– a formal conceptualization of a specific domain (vocabulary)
along with associated constraints (relations) [12].
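The voltage-dip attribute function can be illustrated as follows; this is a sketch assuming per-unit samples at 60 Hz, with the 10% threshold and 8 ms to 1 min duration bounds taken from the sample values cited from [11]:

```python
def voltage_dip_attribute(samples, nominal=1.0, fs=60.0):
    """Binary attribute function for a 'voltage dip': true when voltage
    drops more than 10% below nominal for a contiguous run lasting
    between 8 ms and 1 min.  Thresholds are sample values from [11]."""
    below = [v < 0.9 * nominal for v in samples]
    # find the longest contiguous run of violating samples
    longest = run = 0
    for b in below:
        run = run + 1 if b else 0
        longest = max(longest, run)
    duration = longest / fs  # run length in seconds
    return 0.008 <= duration <= 60.0
```

Analogous attribute functions for the other leaf concepts in Table 1 would differ only in the measured quantity, direction of the violation, and duration bounds.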
We adopt the convention used by [13] to define our
ontology structure or schema as a 6-tuple: {C, R, A, H_C, prop,
attr}. This schema consists of two disjoint sets, C, representing
the set of concepts or the vocabulary terms, and R, representing
the relations. Our concept hierarchy, H_C, is depicted in Figure
1; we show the taxonomic link – known as “IS-A”, class-
subclass, or inheritance relation – between concepts.
The functions, prop and attr, relate concepts non-
taxonomically and relate concepts to literals, respectively. Prop
functions express either spatial or temporal relations between
concepts. Spatial relations are commonly organized into three
classes: topological, spatial order, and metric [14]. An example
of a topological relation is adjacent to. An example of a spatial
order relation is east/west/north/south of. An example of a
metric relation that exploits explicit measurements is near.
Spatial relations are typically defined between locations, which
in our case would be the locations of PMUs or specific
substations. We take advantage of this feature, and pre-
compute the values of these (binary) relations for the
substations in our network topology. During event
categorization, using the “location” field value stored in an
event record, we automatically map these binary relations and
their associated values to the event concept hierarchy.
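A sketch of this pre-computation of binary spatial relations is given below; the substation names, coordinates, and the "near" radius are hypothetical stand-ins, since the real network topology is not given here:

```python
import math

# Hypothetical substation coordinates (latitude, longitude).
SUBSTATIONS = {"A": (49.0, -68.0), "B": (48.5, -68.1), "C": (45.0, -74.0)}

def euclid(p, q):
    """Euclidean distance between two coordinate pairs."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def precompute_relations(subs, near_radius=1.0):
    """Pre-compute binary spatial relations for every substation pair:
    a spatial-order relation (north_of, by latitude) and a metric
    relation (near, by Euclidean distance under an assumed radius)."""
    rels = []
    for a, pa in subs.items():
        for b, pb in subs.items():
            if a == b:
                continue
            if pa[0] > pb[0]:
                rels.append((a, "north_of", b))
            if euclid(pa, pb) <= near_radius:
                rels.append((a, "near", b))
    return rels
```

During event categorization, the "location" field of an event record is then used to look up these pre-computed pairs.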
Figure 1. Event hierarchy
Event Type | Functional Specification
Voltage sag/dip | Reduction of more than 10% (up to 30%) of nominal voltage lasting from 8 msecs to 1 min
Voltage swell | Increase of more than 10% (up to 30%) above nominal voltage lasting from 8 msecs to 1 min
Voltage interruption | Measured voltage close to 0 pu lasting from several secs to more than 1 min
Undervoltage | Reduction of more than 10% (up to 20%) of nominal voltage lasting for more than 1 min
Overvoltage | Increase of 10% (up to 20%) above nominal voltage lasting for more than 1 min
Underfrequency | Decrease in frequency of -0.20 Hz to -1 Hz from fundamental frequency (60 Hz) in less than 10 sec (time window can be adjusted up to 5 minutes)
Overfrequency | Increase in frequency of +0.2 Hz to +0.5 Hz from fundamental frequency (60 Hz) in less than 10 sec (time window can be adjusted up to 5 minutes)
Steady oscillation | Rate of change of phase angle greater than 40-60 degrees every few seconds
Transient oscillation | Rate of change of phase angle greater than 40-60 degrees every few cycles (cycle is 20 msecs)
Table 1. Event detector functions (attributes)
Quality Metric | Computed Value
Relationship Richness | 0.90
Attribute Richness | 1.3
Inheritance Richness | 1.6
Table 2. Ontology schema quality metrics
Figure 1 (node labels): Basic (Grid) Event specializes into Voltage Event, Frequency Event, and Phase Angle Event; Voltage Event into Voltage Sag/Dip, Voltage Swell, Voltage Interruption, Undervoltage, and Overvoltage; Frequency Event into Underfrequency and Overfrequency; Phase Angle Event into Steady Oscillation and Transient Oscillation.
An example of a temporal relation is followed-by(C1, C2),
which is interpreted as event C1 happens before event C2.
Another example of a metric, temporal relation is within. We
discuss these temporal relations in greater detail in Section IV
and spatial relations in Section V.
The function, attr, maps to a special kind of relation, A,
representing an attribute of a concept. We represent a set of
“event detector” functions associated with leaf nodes in the
concept hierarchy as attributes of the leaf nodes; a specification
of these detector functions is given in Table 1. For the sake of
discussion, we have assigned sample threshold values to
variables that are defined in the functional specifications [11].
In an operational setting, these values could be learned from
historical data, operational studies, and experience.
Ontology evaluation is an evolving area of research. There
are multiple approaches that are structural [15] (comparison to
existing domain ontologies), data driven [16] (comparison to a
domain corpus), or application driven (task-based evaluation).
While there has been significant progress on standardizing
terminology and classes for power system stability [17] (and
while the upper level of our concept hierarchy is structurally
similar to this classification), gold-standard domain ontologies
that describe power system events are yet to be developed. For
this reason and since our ontology was derived largely from a
domain corpus, we rely on task-based evaluation for the
applications discussed in Sections IV and V.
In addition, we use quality metrics introduced in [13] to
evaluate three properties of our ontology schema: relationship
richness, attribute richness, and inheritance richness. Relationship
richness is the fraction of non-inheritance relations compared
to the total number of relations in the schema. The closer this
value is to 1, the higher the percentage of non-inheritance
relations and the richer the taxonomy. Attribute richness is the
average number of attributes per concept. Higher values of this
metric indicate greater knowledge encoded per concept.
Inheritance richness is the average number of subcategories per
concept. Higher values indicate a horizontal ontology (fewer
levels, larger fan-out of concepts). In Table 2, we present the
value of each metric for our ontology.
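The three metrics reduce to simple ratios over schema counts, as sketched below; the input counts are illustrative values chosen so the ratios match Table 2, not an enumeration of the actual ontology:

```python
def ontology_metrics(num_inheritance, num_other_relations,
                     num_attributes, num_concepts):
    """OntoQA-style schema metrics [13].  Relationship richness:
    fraction of non-inheritance relations among all relations.
    Attribute richness: average attributes per concept.  Inheritance
    richness: average subclass links per concept."""
    rr = num_other_relations / (num_inheritance + num_other_relations)
    ar = num_attributes / num_concepts
    ir = num_inheritance / num_concepts
    return rr, ar, ir
```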
IV. SEARCH QUERIES AND “WHAT IF” SCENARIOS FOR POST-MORTEM ANALYSIS
In “what if” analysis, an end user synthesizes new scenarios
by changing the values of variables and observing the
consequences. Since we continuously record PMU data during
events, we have a rich, historical database of event data that
can be queried. During post-mortem analysis following a
disturbance, the end user may be interested in finding
(infrequent) co-occurrences between events in a “free play”
mode and observing the outcome.
An event episode is a sequence of basic events observed in
temporal proximity for a given time duration. An example of
an event episode is A followed-by B. Event episodes were
introduced as part of complex event processing of time series
data to handle sequences of events [5]. We provide a search
interface to retrieve episodes, where a sample query could be:
“Retrieve all voltage dips followed by a frequency dip within
500 msecs”. (We assume there may be multiple intervening
events between a starting event and the consequent event.) As
the ontology evolves, additional language primitives may be
added to make the query language more expressive.
We analyze episodes to find unusual occurrences or
patterns in the historical data. We can define triggers, e.g., “If
the frequency of an event episode is unusually high, then this
episode should be flagged for further study”. This is formalized
in [5] by defining the frequency of an episode, α, in an event
sequence s as the fraction of windows, w, of width win, in
which the episode occurs (1) and “unusually high” as the
frequency exceeding a minimum threshold (2):
fr(α, s, win) = |{w ∈ W(s, win) : α occurs in w}| / |W(s, win)|    (1)

fr(α, s, win) ≥ min_fr    (2)
Additionally, we are interested in triggers based on
(maximum) amplitude: “If the amplitude observed in an event
episode is unusually high, then this episode should be flagged
for further study”, which we compute as the ratio of the
maximum amplitude observed in α to the historical average of
the amplitude observed for α. While not making any claim in
terms of root cause, we hypothesize some correlation between
the initial event of an episode and the events that follow. We
create correlation triggers such as “If we observe events A and
B in a window, then we can also expect that event C should
follow in the same window”. To do this, given an episode, β,
in a window, win, we compute the conditional probability (3)
of observing a subsuming episode, γ, in the same window [5]:
fr(γ, s, win) / fr(β, s, win)    (3)
V. SUMMARIZING VOLTAGE PROFILES TO SCREEN FAULT LOCATIONS
Generating reports is an important part of post-mortem
analysis, especially in situations where there is missing data
(e.g., no data from a disturbance recorder). When recorded data
is available from multiple sources, we face the challenge of
sifting through and summarizing the data. While broadly
tackling automatic report generation is outside the scope of this
paper, we propose summarizing voltage profiles to screen fault
locations, which may be included in disturbance reports.
A. Background and Motivation
A voltage profile often incorporates two modalities: a
voltage chart and a text description. We use as a template,
voltage profiles provided in the final NERC report on the grid
disturbance of August 14, 2003 [18], specifically in the “North
to South” and “West to East” directions. As in the report, our
profiles should convey how observed voltages are distributed
spatially for some time unit. As a use case, we show how,
building on the spatial relations (topological, spatial order, and
metric) in the ontology, we reason about the general location of
a fault, deriving high-level interpretations of grid behavior.
We analyze PMU data for a three-minute interval collected
from 8 substations in early 2013. A voltage event was observed
during the first minute starting at 15:44:16. We select a
representative measurement for the time domain – from voltage
measurements (in p.u.), we choose the peak negative amplitude
for each location during the first minute of data recording.
B. Reasoning about Grid Behavior
In Figures 2 and 3, we show sample North-to-South and
West-to-East voltage profiles for a day in 2013 when an event
was observed starting at 15:44:16. As a pre-processing step, we
built adjacency graphs bottom-up starting with an initial
location, the northernmost (westernmost) substation in the network.
Using the adjacent to relation in the ontology, we found all
adjacent locations and selected the one that was the shortest
distance away longitudinally (south of) and latitudinally (east
of), respectively. We built the adjacency graphs by iteratively
applying this procedure to the closest adjacent substation for
each location. From the voltage measurements reported by
SMDA, we identify the voltage for each location just before
the event (“init curve”) and the peak negative amplitude of the
voltage during the event (“final curve”) and plot these two
curves, respectively, in Figures 2 and 3.
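The greedy ordering procedure above can be sketched as follows; the substation names, coordinates, and adjacency lists are hypothetical, and the real procedure would use the ontology's adjacent-to relation rather than an in-memory dictionary:

```python
def build_profile_order(start, adjacency, coord, axis=0):
    """Greedily order substations for a voltage profile: starting from
    `start`, repeatedly move to the unvisited adjacent substation that
    is closest along the chosen axis (axis 0 = latitude, giving a
    north-to-south ordering)."""
    order, current, visited = [start], start, {start}
    while True:
        candidates = [n for n in adjacency.get(current, [])
                      if n not in visited]
        if not candidates:
            break
        current = min(candidates,
                      key=lambda n: abs(coord[n][axis] - coord[current][axis]))
        order.append(current)
        visited.add(current)
    return order
```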
While we can consider the “init” curves in both figures as
signatures of the normal behavior of the grid (just prior to an
event), how do we interpret the change in behavior shown
across curves? We can write a simple procedure to compute the
difference in voltage between corresponding values in the
“init” and “final” curves. If we rely only on this metric, then
the suggested fault location would be OUT (i). However, we
cannot determine the significance of a deviation simply by
looking at the absolute value of the difference. We must also
consider power system characteristics that are both static – is
the substation near a load area/far from a generation area, etc. –
as well as dynamic (resulting from control actions such as load
shedding). Below, we discuss how to reason about changes by
generating an inference model from a “base (domain) model”.
First, we use Resource Description Framework (RDF) [19]
as a language to represent facts about power system behavior.
Then, we build a base model of the domain that describes
general truths or assertions and conditional assertions. A fact-
based representation of the “init” curve in Figure 2 using RDF
triples is given in Table 3 under “General facts”. In Table 3, we
also present a set of facts (“Event-specific facts”) describing
the “final” curve. A comparison of the two sets of facts reveals
(in bold) where the observed behavior during an event differs
from the pattern just before the event. However, given this
knowledge, we only increase the set of possible fault locations
to include LOC, CHI, MIC, RIM, and NIC (ii). Further
reasoning is required to reduce this set to a single location.
We are interested in entailments of new assertions from an
initial set of facts using rule-based reasoning. Based on
analysis of PMU data collected over a period of months, a
domain expert identified a set of rules for power system
characteristics that influence voltage “behavior” in a service
area. Re-writing this domain knowledge as rules in RDF, we
have the following conjunctions (commas represent “and”) of
RDF triples in Table 4. We list rules derived by the domain
expert that lead to state changes (increase/decrease) in voltage.
The right-hand side of each rule is an RDF triple representing
an entailed assertion.
We apply a rule-based reasoner to derive additional
information about conditions at the time of the fault and
generate our inference model. We are interested in the “net”
state change in voltage per location. We note that there was
load shedding across the power network, which affected all of
the locations. However, for locations in the south (and closer to
the load as well as the line that was lost), the “net” change
tends to be a “decrease” in voltage (as determined by the
number of rules that entail this assertion). This explains the
behavior observed at LOC, MIC, NIC, and OUT. Even though
located in the southern block, the sustained level of voltage
observed at RIM can be explained by our inference model from
its proximity to a (wind) generation source. Similarly, the net
change in voltage for CHI, which is in the north, tends to be an
“increase” in voltage. However, this differs from the observed
behavior during the event, where we saw a significant dip in
voltage levels at CHI. This anomaly suggests the fault is
located in close proximity to CHI (iii).
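A minimal forward-chaining sketch over RDF-style triples is shown below, using Table 4's Rule 3 (load shedding raises voltage) as the example; the triple encoding and rule representation are simplified assumptions, not the reasoner actually used in the system:

```python
def apply_rules(facts, rules):
    """Forward-chain a set of (subject, predicate, object) triples to a
    fixed point.  Each rule is a (condition, consequent) pair of
    functions over a single triple; chaining repeats until no rule
    produces a new triple."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, consequent in rules:
            for fact in list(inferred):
                if cond(fact):
                    new = consequent(fact)
                    if new not in inferred:
                        inferred.add(new)
                        changed = True
    return inferred

# Rule 3: (?a shedLoad true) -> (voltage higherAt ?a)
rule3 = (lambda f: f[1] == "shedLoad" and f[2] == "true",
         lambda f: ("voltage", "higherAt", f[0]))

facts = {("CHI", "shedLoad", "true"), ("RIM", "near", "Gen")}
model = apply_rules(facts, [rule3])
```

The "net" state change per location then falls out of counting how many entailed triples assert an increase versus a decrease at each substation.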
For the West-to-East analysis, as before, we encode general
and event-specific facts describing the curves in Figure 3.
Adding event-specific facts does not contradict the normal
behavior pattern except for the pair, NIC and MIC (iv).
However, in analyzing this case, we realized that there was key
domain knowledge missing from the base model. We had not
specified what can be inferred for substations that are near one
another. (We use Euclidean distance as the metric to calculate
“nearness” between two substations.) We also encode a new
domain rule (Rule 6): Substations that are near each other
should report similar measurements. Two voltage
measurements are defined to be similar if their difference is
less than or equal to 0.05 p.u.
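Rule 6's similarity test is a simple tolerance check, with the 0.05 p.u. bound taken from the text:

```python
def similar_voltage(v1, v2, tol=0.05):
    """Rule 6: two voltage measurements (in p.u.) are similar when
    their absolute difference is at most `tol` (0.05 p.u.)."""
    return abs(v1 - v2) <= tol
```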
RDF Representation of Facts
General facts: (LG2 hasVoltage V1), (LOC hasVoltage V2), (CHI hasVoltage V3), (MIC hasVoltage V4), (RIM hasVoltage V5), (NIC hasVoltage V6), (BOU hasVoltage V7), (OUT hasVoltage V8), (LG2 near Gen), (LOC near Gen), (CHI near Gen), (MIC near Gen), (RIM near Load), (NIC near Load), (BOU near Load), (OUT near Load), (V1 greaterThan V2), (V2 lessThan V3), (V3 greaterThan V4), (V4 greaterThan V5), (V5 lessThan V6), (V6 greaterThan V7), (V7 greaterThan V8)
Event-specific facts: (V1 greaterThan V2), (V2 greaterThan V3), (V3 lessThan V4), (V4 greaterThan V5), (V5 greaterThan V6), (V6 greaterThan V7), (V7 greaterThan V8)
Table 3. Facts describing the grid both before and during an event
Domain Rules | RDF Representation
Rule 1: If A is near a generation source, then the voltage is higher at A | (?a near ?b), (?b hasType Gen), (?a hasVoltage ?v1) -> (?v1 higherAt ?a)
Rule 2: If A is near a load, then the voltage is lower at A | (?a near ?b), (?b hasType Load), (?a hasVoltage ?v1) -> (?v1 lowerAt ?a)
Rule 3: If there is load shedding at A, then the voltage is higher at A | (?a shedLoad true), (?a hasVoltage ?v1) -> (?v1 higherAt ?a)
Rule 4: If there is generation rejection at A, then the voltage is lower at A | (?a rejectGen true), (?a hasVoltage ?v1) -> (?v1 lowerAt ?a)
Rule 5: If a line is lost near A, then the voltage is lower at A | (?l lineLost true), (?a near ?l), (?a hasVoltage ?v1) -> (?v1 lowerAt ?a)
Table 4. Domain-specific rules for reasoning
Figure 2. North-to-south voltage profile (“Init” and “Final” curves, voltage in pu, for substations LG2, LOC, CHI, MIC, RIM, NIC, BOU, OUT)

Figure 3. West-to-east voltage profile (“Init” and “Final” curves, voltage in pu, for substations LG2, OUT, CHI, BOU, NIC, MIC, RIM, LOC)
Generating the inference model, we find that for the pair,
LG2 and CHI (v), the voltage readings differ by 0.06-0.07 p.u.,
and therefore, are not similar. As a final step in summarizing
the results from our “screening” analysis, we use backed-off
MLE estimates [20] to calculate the probability of a fault
location given zero or more other locations as fault indicators
(we refer to these as the relevant “context” of a fault), which
we derived in (i)-(v). We conclude that the fault occurred near
CHI, which we validated with the domain expert. Currently,
fault localization for a single event using SMDA takes several
minutes. These initial results suggest that our method could be
used to “screen” the data and recommend a pruned set of
candidate fault locations to the operator with low latency.
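A sketch of a backed-off estimate in the spirit of [20] is given below: when a (location, context) pair has been observed, use its discounted conditional relative frequency; otherwise back off to the marginal estimate. The discount value and the count structures are assumptions for illustration, not the paper's actual estimator:

```python
def fault_probability(counts_joint, counts_context, counts_marginal,
                      total, location, context, discount=0.5):
    """Backed-off estimate of P(fault at `location` | `context`):
    discounted conditional relative frequency when the joint count is
    non-zero, else the marginal relative frequency."""
    c = counts_joint.get((location, context), 0)
    if c > 0 and counts_context.get(context, 0) > 0:
        return (c - discount) / counts_context[context]
    return counts_marginal.get(location, 0) / total
```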
VI. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented an event understanding
framework that processes PMU data and derives high-level
interpretations of the data with low latency to provide grid
operators with increased situational awareness. We have
discussed two use cases from post-mortem analysis. In future
work, we will refine and extend our knowledge-based methods,
e.g., providing real-time information to operators so that they
can respond proactively to events as they evolve on the grid.
REFERENCES
[1] “IEEE PC37.244 Draft Guide for Phasor Data Concentrator
Requirements for Power System Protection, Control and
Monitoring”, IEEE Power and Energy Society, June 19, 2012.
[2] I. Kamwa, J. Beland, G. Trudel, R. Grondin, C. Lafond, and L. McNabb.
“Wide-Area Monitoring and Control at Hydro-Québec: Past, Present and
Future”, In Proceedings of IEEE Power Engineering Society General
Meeting, 2006.
[3] J. Goulet and J. Beland. Système de Mesure du Décalage Angulaire –
SMDA, March, 2005.
[4] V. Guralnik and J. Srivastava, “Event Detection from Time Series
Data”. In Proceedings of ACM SigKDD Conference on Knowledge
Discovery and Data Mining, San Diego, CA, 1999, pp. 33-42.
[5] H. Mannila, H. Toivonen, and A. I. Verkamo, “Discovery of Frequent
Episodes in Event Sequences”, Data Mining and Knowledge Discovery,
1(3), November, 1997, pp. 259-289.
[6] H. Cheng, D. Butler, and C. Basu, “ViTex: Video to Text and its
Application in Aerial Video Surveillance,” In Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), June,
2006, pp. 586-593.
[7] A. Sadagic, G. Welch, C. Basu, C. Darken, R. Kumar, H. Fuchs, H.
Cheng, J. M. Frahm, M. Kolsch, N. Rowe, H. Towles, J. Wachs, and A.
Lastra, “New Generation of Instrumented Ranges: Enabling Automated
Performance Analysis,” In Proceedings of 2009 Interservice/Industry
Training, Simulation, and Education Conference (I/ITSEC-2009),
Orlando, FL.
[8] A. W. McMorran, “An Introduction to IEC 61970-301 & 61968-11: The
Common Information Model”, January 2007.
[9] Y. Pradeep, S. Khaparde, and R. K. Joshi. “High Level Event Ontology
for Multiarea Power System”, In IEEE Transactions on Smart Grid, Vol.
3, No. 1, March 2012.
[10] http://sites.ieee.org/pes-mas/upper-ontology/
[11] “Characteristics and Target Values of the Voltage Supplied by Hydro-
Quebec transmission system”, Report by TransEnergie, July 5, 2001
(translation).
[12] Guarino, N., & Poli, R. (1995). “The Role of Formal Ontology in the
Information Technology,” International Journal of Human-Computer
Studies, 43, 623-624.
[13] S. Tartir, I. B. Arpinar, M. Moore, A. P. Sheth, and B. Aleman-Beza.
“OntoQA: Metric-Based Ontology Quality Analysis”, In IEEE
Workshop on Knowledge Acquisition
from Distributed, Autonomous, Semantically Heterogeneous Data and
Knowledge Sources, Nov, 2005.
[14] M. J. Egenhofer, “A Formal Definition of Binary Topological
Relations,” In Proceedings of the Third International Conference on
Foundations of Data Organization and Algorithms, June, 1989, pp.
457-472.
[15] A. Maedche and S. Staab. “Measuring Similarity between Ontologies,”
In Proceedings of the European Conference on Knowledge Acquisition
and Management (EKAW) 2002, Madrid, Spain, October 1-4, 2002.
[16] C. Brewster, H. Alani, S. Dasmahapatra and Y. Wilks. “Data driven
ontology evaluation,” In Proceedings of the International Conference on
Language Resources and Evaluation (LREC 2004), May 2004, Lisbon,
Portugal, pp. 24-30.
[17] P. Kundur, J. Paserba, V. Ajjarapu, G. Andersson, A. Bose, C.
Canizares, N. Hatziargyriou, D. Hill, A. Stankovic, C. Taylor, T. V.
Cutsem, and V. Vittal, “Definition and Classification of Power System
Stability”, In IEEE Transactions on Power Systems, Vol 19, No. 2, May
2004.
[18] “Technical Analysis of the August 14, 2003 Blackout: What Happened,
Why, and What Did We Learn?”, Report to the NERC Board of Trustees
by the NERC Steering Group, July, 2004.
[19] http://www.w3.org/standards/techs/rdf
[20] S. Katz. “Estimation of Probabilities from Sparse Data for the Language
Model Component of a Speech Recogniser,” IEEE Transactions on
Acoustics, Speech, and Signal Processing, Vol. ASSP-35, No. 3, 1987.
... WASA has been realized by the combination of sophisticated technologies such as IoT into situational awareness, which enables the expansion of system monitoring at any time and from any location [146]. This paradigm aims to enable low latency and high throughput monitoring, archiving, reporting, and querying of the state of the power grid [147]. Additionally, WASA can aid power providers in responding quickly to network events, reducing the likelihood of catastrophic failures such as large-scale blackouts. ...
Article
Full-text available
The rapid development of new information and communication technologies (ICTs) and the deployment of advanced Internet of Things (IoT)-based devices have led to the study and implementation of edge computing technologies in smart grid (SG) systems. In addition, substantial effort has been devoted in the literature to incorporating artificial intelligence (AI) techniques into edge computing, resulting in the promising concept of edge intelligence (EI). Consequently, in this article, we provide an overview of the current state of the art in EI-based SG adoption from a range of angles, including architectures, computation offloading, and cybersecurity concerns. The objectives of this article are fourfold. First, we discuss EI and SGs separately: we highlight contemporary concepts closely related to edge computing, their fundamental characteristics, and essential enabling technologies from an EI perspective; discuss how AI has helped optimize the performance of edge computing; and emphasize the key enabling technologies and applications of SGs from the perspective of EI-based SGs. Second, we explore both general edge computing architectures and EI-based architectures from the perspective of SGs. Third, we address two basic questions about computation offloading: what is it, and why is it needed? We also divide the primary articles into two categories based on the number of users in the model: single-user and multi-user instances. Finally, we review the cybersecurity threats associated with edge computing and the methods used to mitigate them in SGs. This survey concludes that most viable architectures for EI in smart grids consist of three layers: device, edge, and cloud. Moreover, computation offloading techniques should be framed as optimization problems and addressed effectively in order to increase system performance.
This article is intended to serve as a primer for emerging scholars interested in the study of EI in SGs.
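The survey's point that computation offloading is best framed as an optimization problem can be illustrated with a toy single-user binary offloading rule — a minimal sketch under assumed parameters (the function name and timing model are illustrative, not from the article):

```python
# Toy single-user binary offloading decision (illustrative sketch; the
# function name and parameter model are assumptions, not from the article).
# Offload when uplink transmission plus edge execution beats local execution.

def should_offload(task_bits, cpu_cycles, f_local_hz, f_edge_hz, uplink_bps):
    t_local = cpu_cycles / f_local_hz                          # local execution time (s)
    t_edge = task_bits / uplink_bps + cpu_cycles / f_edge_hz   # transmit + edge time (s)
    return t_edge < t_local
```

Real formulations typically add energy consumption and multi-user channel contention, which turns this threshold test into a constrained optimization problem.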
... Rule-based reasoning offers a natural way of handling and inferring knowledge. A rule-based knowledge system features modular structure, can easily be extended with additional rules, and provides a uniform representation of knowledge (Basu et al., 2014). However, it provides limited expressiveness to describe certain complex features. ...
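The modular, easily extended character of rule-based reasoning noted in this snippet can be illustrated with a minimal forward-chaining sketch (the facts and rules below are hypothetical examples, not the actual system of Basu et al., 2014):

```python
# Minimal forward-chaining rule engine sketch (hypothetical facts and rules,
# not the system described in Basu et al., 2014). Each rule is a pair
# (set_of_premises, conclusion); rules fire until no new facts are derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example: inferring a candidate fault location from grid event observations.
rules = [
    (frozenset({"voltage_sag(bus7)", "breaker_open(line7_8)"}), "fault_near(bus7)"),
    (frozenset({"fault_near(bus7)"}), "notify_operator(bus7)"),
]
derived = forward_chain({"voltage_sag(bus7)", "breaker_open(line7_8)"}, rules)
```

Extending the knowledge base is just a matter of appending a new `(premises, conclusion)` pair, which is the modularity property the snippet highlights.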
Article
Smart CPSs (S-CPSs) have been evolving beyond what was identified by the traditional definitions of CPSs. The objective of our research is to investigate the concepts and implementations of reasoning processes for S-CPSs, and more specifically, the frameworks proposed for the fuzzy front end of their reasoning mechanisms. The objectives of the paper are: (i) to analyze the framework concepts and implementations of CPSs, and (ii) to review the literature concerning system-level reasoning and its enablers from the points of view of the processed knowledge, building awareness, reasoning mechanisms, decision making, and adaptation. Our findings are: (i) awareness and adaptation behaviors are considered system-level smartness of S-CPSs that is not achieved by traditional design approaches; (ii) model-based and composability approaches insufficiently support the development of reasoning mechanisms for S-CPSs; (iii) frameworks for the development of reasoning in S-CPSs should support compositional design. Based on these conclusions, we argue that coping with the challenges of compositionality requires both software-level integration and holistic fusion of knowledge by means of semantic transformations. This entails the need for a multi-aspect framework that is able to capture at least the conceptual, functional, architectural, informational, interoperation, and behavioral aspects. It needs further investigation whether a compositionality-enabling framework should take the form of a meta-framework (abstract) or of a semantically integrated (concrete) framework. Highlights: Smartness in CPSs is a holistic and synergistic behavioral characteristic. Complex mental representations are compositional. Compositionality is necessary for smart CPSs. Without a rigorous unifying framework, designing synthesis reasoning remains ad hoc.
... Several reasoning methods were applied in the context of smart systems, intelligent systems, and autonomous systems. For example, rule-based reasoning offers a natural way of handling and reasoning about knowledge, it has a modular nature offering easy extendibility of rules, and uniform representation of knowledge [36]. Probabilistic reasoning, such as Bayesian Networks (BNs), and Hidden Markov Models (HMM), is appropriate for reasoning with uncertainty [37]. ...
Conference Paper
Full-text available
Smart CPSs (S-CPSs) have been evolving beyond what was identified by the traditional definitions of CPSs. The objective of our research is to investigate the concepts and implementations of S-CPSs, and more specifically, the frameworks proposed for the fuzzy front end of their reasoning processes. The objectives of the paper are: (i) to overview the various framework concepts and implementations in the context of S-CPSs, and (ii) to analyze the presented frameworks from the points of view of structuring knowledge, building awareness, context-based reasoning, decision making, and functional and architectural adaptation. Our major findings are: (i) model-based and composability approaches do not support the development of S-CPSs; (ii) awareness and adaptation behaviors are considered system-level characteristics of S-CPSs that are not achieved by traditional design approaches; (iii) a new framework should support compositional design for reasoning in S-CPSs. Based on these findings, we argue that the development of S-CPSs should be supported by a proper framework for compositional design of smart reasoning, and that coping with the challenges of compositionality requires both software-level integration and holistic fusion of knowledge by means of semantic transformations. This entails the need for a multi-aspect framework that can capture at least the conceptual, functional, architectural, informational, interoperation, and behavioral aspects of designing smart reasoning platforms. It needs further investigation whether a compositionality-enabling framework should take the form of a meta-framework (abstract) or of a semantically integrated (concrete) framework.
Chapter
The smart grid surpasses the traditional grid in terms of the type, scale, and speed of data generated during the transition. In addition to monitoring grid operations, the smart grid also focuses on gathering power consumption data from various user appliances. This necessitates the implementation of big data technology to efficiently manage, analyze, and even schedule grid operations. By doing so, the smart grid can operate with enhanced precision and efficiency while swiftly responding to user demands.
Article
Most recent cascading outages have resulted from zone-3 mal-operations of distance relays. This paper introduces new sensitivity factors to identify and monitor vulnerable relays operating in the power transmission network. This information helps enhance the back-up protection of the power transmission system. To accomplish this task, two indices are defined: the line-outage-induced relay margin shift factor and the generation-outage-induced relay margin shift factor. An offline analysis of the power system is performed to calculate these two factors, which are then used to rank the relays of the power system by vulnerability. Once the vulnerable relays are identified, their zone-3 operation is supervised by a new wide-area-information-based event detection logic. IEEE 9-bus and IEEE 39-bus test systems are used to validate the proposed scheme. The test results indicate that the proposed scheme can enhance the back-up protection of the transmission network, which is essential to mitigating blackouts.
Article
Full-text available
This paper introduces various concepts relating information technology to the development of energy transmission and distribution. Key challenges in energy consumption, such as the need to respond to current demand, have been addressed through information technology systems. With the increased connectedness of energy systems, there has also been an increased need to ensure their information security. The Internet of Things (IoT) concept is reviewed in relation to the connection of objects in energy systems, along with the concepts of Big Data and Cloud Computing. The former has developed in response to the need to predict energy usage more accurately, and the latter offers the advantages of increased failover potential as well as much faster provisioning of enhanced IT capacity to meet consumer demand.
Article
There are both economic and environmental imperatives for the transition from the current outdated power grid to a sensor-embedded smart grid that monitors system stability, integrates distributed energy, and schedules energy consumption for household users. Especially with the proliferation of intelligent measurement devices, the exponential growth of data empowers this transition and brings new tools for the development of different applications in the power system. In this context, this paper presents a holistic overview of the state of the art of big data technology in smart grid integration. First, the features of the smart grid and the multiple sources of energy data are discussed. Then, the paper comprehensively summarizes the applications leveraged by big data in the smart grid, including some brand-new applications enabled by the latest big data technologies. Furthermore, some mainstream platforms and knowledge-extraction techniques are examined to promote big data insights. Finally, challenges and opportunities are pointed out.
Conference Paper
Full-text available
As the Semantic Web gains importance for sharing knowledge on the Internet, many ontologies have been developed and published in different domains. When trying to reuse existing ontologies in their applications, users are faced with the problem of determining whether an ontology is suitable for their needs. In this paper, we introduce OntoQA, an approach that analyzes ontology schemas and their populations (i.e., knowledge bases) and describes them through a well-defined set of metrics. These metrics can highlight key characteristics of an ontology schema as well as its population and enable users to make an informed decision quickly. We present an evaluation of several ontologies using these metrics to demonstrate their applicability.
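One schema metric commonly attributed to OntoQA is relationship richness: the fraction of an ontology's connections that are non-inheritance relations rather than subclass links. A minimal sketch, assuming that formulation:

```python
# OntoQA-style relationship richness (sketch, assuming the commonly stated
# formulation RR = |P| / (|P| + |SC|), where P is the set of non-inheritance
# relations and SC the set of subclass links in the schema).

def relationship_richness(num_relations, num_subclass_links):
    total = num_relations + num_subclass_links
    return num_relations / total if total else 0.0
```

A schema with three domain relations and one subclass link would score 0.75, indicating a schema that conveys more than a bare taxonomy.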
Conference Paper
Full-text available
Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany the different elements. Training organized on ranges for urban warfighting brings an additional level of complexity—the high level of occlusion typical for these environments multiplies the number of evaluators needed. While the units have great need for such training opportunities, they may not have the necessary human resources to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges, and the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that need to be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically-controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze, recognize events, evaluate performances, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources, while having a more quantitative account of their performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). The instructors will have immediate feedback on some elements of the unit’s performance. Having data sets for multiple units will enable historical trend analysis, thus providing new insights and benefits for the entire service.
Conference Paper
Full-text available
Ontologies now play an important role in many knowledge-intensive applications, for which they provide a source of precisely defined terms. However, their widespread usage brings problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with a large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.
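One of the simplest lexical-level similarity measures in the spirit of those evaluated here is a Jaccard coefficient over the two ontologies' concept labels — an illustrative sketch, not one of the paper's specific measures:

```python
# Illustrative lexical ontology similarity: Jaccard coefficient over concept
# label sets (a simple baseline, not one of the paper's specific measures;
# real measures also compare relation structure and hierarchies).

def jaccard_label_similarity(labels_a, labels_b):
    a, b = set(labels_a), set(labels_b)
    if not a and not b:
        return 1.0  # two empty ontologies are trivially identical
    return len(a & b) / len(a | b)

sim = jaccard_label_similarity(
    {"Event", "Time", "Location"},
    {"Event", "Place", "Time", "Level"},
)
```

Two of five distinct labels are shared here, so the measure returns 0.4; structural measures would additionally reward agreement on how the shared concepts are related.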
Conference Paper
Full-text available
This paper recalls the 30-year history of wide-area measurements at Hydro-Québec. At present, a state-of-the-art, eight-PMU-based wide-area monitoring system commissioned in 2004 is online, feeding the EMS with GPS-synchronized angles, frequencies, and harmonic distortion measurements from key 735-kV buses. Specific to Hydro-Québec is the current use of one such system for frequency regulation reporting and control-room implementation of preventive measures against geomagnetic-storm-induced contingencies. Building on this experience, the Hydro-Québec research institute, together with TransÉnergie, is developing advanced applications which, in the long term, will go beyond active monitoring to safely initiate targeted control actions aimed at extending system power transfer limits with respect to both transient and long-term stability. In addition to the major improvement in inter-area mode damping shown previously by the authors, this paper demonstrates that, surprisingly, wide-area control of static VAr compensators (SVCs) will also vastly extend the first-swing stability margins. Finally, initial results on a wide-area-measurement-based secondary voltage control of the extensive park of dynamic shunt compensators in our grid, using a single pilot voltage from the load center, are very encouraging.
Conference Paper
Full-text available
In the past few years there has been increased interest in using data-mining techniques to extract interesting patterns from time series data generated by sensors monitoring temporally varying phenomena. Most work has assumed that raw data is somehow processed to generate a sequence of events, which is then mined for interesting episodes. In some cases the rule for determining when a sensor reading should generate an event is well known. However, if the phenomenon is ill-understood, stating such a rule is difficult. Detection of events in such an environment is the focus of this paper. Consider a dynamic phenomenon whose behavior changes enough over time to be considered a qualitatively significant change. The problem we investigate is that of identifying the time points at which the behavior change occurs. In the statistics literature this has been called the change-point detection problem. The standard approach has been to (a) determine a priori the number of change-points to be discovered, and (b) decide the function that will be used for curve fitting in the interval between successive change-points. In this paper we generalize along both of these dimensions. We propose an iterative algorithm that fits a model to a time segment and uses a likelihood criterion to determine if the segment should be partitioned further, i.e., if it contains a new change-point. We present algorithms for both the batch and incremental versions of the problem and evaluate their behavior on synthetic and real data. Finally, we present initial results comparing the change-points detected by the batch algorithm with those detected by people using visual inspection.
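The split-and-test idea described in this abstract — fit a model to a segment, then split where a likelihood criterion says the fit improves enough — can be sketched as recursive binary segmentation under a Gaussian segment model. This is a simplified illustration with assumed penalty and minimum-segment parameters, not the authors' algorithm:

```python
# Recursive binary segmentation sketch (illustrative; not the paper's
# algorithm). Each segment is modeled as Gaussian; the segment cost is
# n * log(variance), so a split is accepted when it reduces total cost
# (i.e., raises the likelihood) by more than a fixed penalty.
import math

def seg_cost(xs):
    n = len(xs)
    if n < 2:
        return 0.0
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return n * math.log(var + 1e-12)  # epsilon guards constant segments

def change_points(xs, lo=0, hi=None, penalty=10.0, min_size=5):
    if hi is None:
        hi = len(xs)
    whole = seg_cost(xs[lo:hi])
    best_gain, best_t = 0.0, None
    for t in range(lo + min_size, hi - min_size):
        gain = whole - seg_cost(xs[lo:t]) - seg_cost(xs[t:hi])
        if gain > best_gain:
            best_gain, best_t = gain, t
    if best_t is None or best_gain <= penalty:
        return []  # no split improves the likelihood enough
    return (change_points(xs, lo, best_t, penalty, min_size)
            + [best_t]
            + change_points(xs, best_t, hi, penalty, min_size))

# A clean step change at index 50 should be recovered.
data = [0.0] * 50 + [5.0] * 50
points = change_points(data)
```

Unlike the classical formulation, no change-point count is fixed in advance: recursion stops wherever the likelihood gain falls below the penalty, which is the generalization the abstract argues for.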
Conference Paper
The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus- or data-driven. A corpus is the most accessible form of knowledge, and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.
Article
Exchange of event information among multiple interconnected power system operators has become imperative for integrated and secure operation of the system. Considerable literature is reported on the standardization of power system data, while limited work has been done on the standardization of events. In this paper, we define a high level event ontology for power systems comprising seven concepts, namely, event, event extractor, event consumer, time, measurement, location, and level. These concepts are then developed with details extracted from the operating procedures followed by national and regional load dispatch centers in the Indian national grid. The methodology adopted for designing concrete sub-ontologies from the high level event ontology is also reported. The proposed event ontology has wide applications in the areas of i) event driven architecture (EDA) which facilitates the integration of event driven applications within and across the utilities, and ii) complex event processing (CEP) which facilitates development of sense-and-respond software capable of processing events extracted from large volumes of real-time data streams.
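The seven high-level concepts named in this abstract (event, event extractor, event consumer, time, measurement, location, and level) can be sketched as simple data classes; the attribute names and example values below are illustrative assumptions, not the published ontology:

```python
# Illustrative sketch of the paper's seven high-level event-ontology concepts
# as Python dataclasses (attribute names and example values are assumptions,
# not the published ontology, which would be expressed in OWL/RDF).
from dataclasses import dataclass, field

@dataclass
class Measurement:          # Measurement concept
    signal: str             # e.g. "voltage_magnitude", "frequency"
    value: float

@dataclass
class Event:                # Event concept, linked to the other six
    kind: str               # event category, e.g. "voltage_sag"
    time: str               # Time concept (ISO-8601 timestamp)
    location: str           # Location concept (substation or bus)
    level: str              # Level concept (e.g. regional vs. national)
    measurement: Measurement
    extractor: str          # Event Extractor that produced the event
    consumers: list = field(default_factory=list)  # Event Consumers

m = Measurement("voltage_magnitude", 0.92)
e = Event("voltage_sag", "2014-01-01T00:00:00Z", "bus7",
          "regional", m, "pmu_stream_detector", ["operator_console"])
```

In an event-driven architecture, instances like `e` would be published on a bus so that each registered consumer can subscribe by concept rather than by raw signal.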
Article
Abstract—The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report, developed by a Task Force set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security. Index Terms—Frequency stability, Lyapunov stability, oscilla-