Article

Making Sense of Cloud-Sensor Data Streams via Fuzzy Cognitive Maps and Temporal Fuzzy Concept Analysis

Abstract

Understanding situations occurring in the physical world by analyzing streams of sensor data is a complex task for both human and software agents. In the area of situation awareness, the observer is typically overwhelmed by information overload and by the intrinsic difficulty of making sense of spatially distributed and temporally ordered sensor observations. It is therefore desirable to design effective decision-support systems and efficient methods for handling sensor data streams. The proposed work addresses the comprehension of situations evolving along the timeline and the projection of recognized situations into the near future. The system analyzes semantic sensor streams, extracts temporal patterns describing the flow of events, and provides useful insights with respect to the operators' goals. We implement a hybrid solution for situation comprehension and projection that combines a data-driven approach, based on a temporal extension of Fuzzy Formal Concept Analysis, with a goal-driven approach based on Fuzzy Cognitive Maps. The cloud-based architecture integrates a distributed algorithm for Fuzzy Formal Concept Analysis, enabling it to cope with the deluge of sensor data acquired through a sensor-cloud architecture. We discuss the results in terms of prediction accuracy by simulating sensor data streams to recognize daily-life activities inside an apartment at an early stage.
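The goal-driven side of such a hybrid can be illustrated with the standard fuzzy-cognitive-map inference rule, in which each concept's activation is a thresholded weighted sum of its neighbours' activations. The sketch below is a minimal, self-contained illustration; the three concepts and the weight values are hypothetical examples, not taken from the paper.

```python
import math

def sigmoid(x, lam=1.0):
    """Standard FCM threshold function squashing activations into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(state, weights):
    """One FCM inference step: each concept aggregates its neighbours'
    activations through the signed weight matrix (weights[j][i] is the
    influence of concept j on concept i)."""
    n = len(state)
    return [
        sigmoid(sum(weights[j][i] * state[j] for j in range(n)))
        for i in range(n)
    ]

def fcm_run(state, weights, steps=20, eps=1e-6):
    """Iterate until the map settles into a fixed point (or steps run out)."""
    for _ in range(steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < eps:
            return nxt
        state = nxt
    return state

# Hypothetical 3-concept map: "motion in kitchen" reinforces "having
# breakfast", which in turn inhibits "sleeping".
W = [[0.0, 0.7, 0.0],
     [0.0, 0.0, -0.8],
     [0.0, 0.0, 0.0]]
print(fcm_run([0.9, 0.1, 0.6], W))
```

A real situation-projection system would learn the weight matrix from data or elicit it from domain experts; this sketch only shows the inference mechanics.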


... In this article, we consider DA to be a collection of autonomous computing elements that appears to its users as a single coherent system [86]. In addition, a DA is often referred to as a distributed system [15,16], which, in turn, consists of multiple software components that could be on multiple computers, but run as a single system. These computers can either be located in close proximity, connected through a local network, or they can be situated far apart, linked by a wide area network. ...
... The quality assessment consisted of comparing the final 69 primary studies with an independent set of papers that we knew ought to be in the final set [4,15,16,19]. In addition, the following questions were answered to assess the quality of the selected articles: ...
... Moreover, some IoT architectures are designed to run indefinitely, creating the need to handle tasks such as hot swapping [80]. Along those lines, among CA methods there are some that can process data streams, usually called incremental [15,16,87], by updating only the necessary parts of the output as each element of the stream arrives. The main problem these methods have is that if the stream never stops producing elements, at some point the updates become too costly. ...
Article
Full-text available
The massive adoption of the Internet of Things in many industrial areas, in addition to the requirements of modern services, is posing huge challenges to the field of data mining. Moreover, the semantic interoperability of systems and enterprises requires operating across many different formats, such as ontologies, knowledge graphs, or relational databases, as well as different contexts such as static, dynamic, or real time. Consequently, supporting this semantic interoperability requires a wide range of knowledge discovery methods with different capabilities that answer to the context of distributed architectures (DA). However, to the best of our knowledge there has been no recent general review of the state of the art of Concept Analysis (CA) and multi-relational data mining (MRDM) methods regarding knowledge discovery in DA considering semantic interoperability. In this work, a systematic literature review on CA and MRDM is conducted, providing a discussion of the characteristics they have according to the papers reviewed, supported by a clustering technique based on association rules. Moreover, the review allowed the identification of three research gaps toward a more scalable set of methods in the context of DA and heterogeneous sources.
... Another approach is to apply temporal patterns, which can be used to describe and provide more information on the events that the user performs (De Maio et al. 2017). The temporal paths are used to recognize or predict future events that the user may perform (De Maio et al. 2017). In De Maio et al. (2017), the system combines the temporal extension of Fuzzy Formal Concept Analysis (data-driven) and Fuzzy Cognitive Maps (goal-driven) for better decision making. The system recognizes the following events: tiredness, sleeping, having breakfast, and having dinner (De Maio et al. 2017). ...
Article
Full-text available
The increase in the elderly population, especially in developed countries, and in the number of elderly people living alone can result in increased healthcare costs, which can place a huge burden on society. Falls are one of the biggest risks among the elderly population, resulting in serious injuries if not treated quickly. Advancements in technology over the years have resulted in an increase in research on different fall detection systems. Fall detection systems can be grouped into the following categories: camera-based, ambient sensors, and wearable sensors. Both the detection algorithm and the sensors used can affect the accuracy of the system. The detection algorithm can be either a decision tree or a machine learning algorithm. In this paper, we study the different fall detection systems and the problems associated with them, and analyse the fall detection models that most recent studies implement. From the study, it is found that personalized models are the key to creating an accurate model without limiting users to performing specific activities.
... Like CMs, FCMs have gained considerable research interest due to their ability to represent structured knowledge and model systems in a number of fields, including business [4], ecological engineering [5], management [6], system modeling [7], risk assessment [8] and machine learning [9]. For more information on FCMs, please refer to [10][11][12][13][14]. This growing interest leads to the demand for more effective models which can better describe complex real situations. ...
... 0.4] be the initial state values of three concepts. When t = 0, the concept values θ_1(1), θ_2(1) and θ_3(1) are calculated as in equations (12)-(14). ...
Article
Full-text available
As a novel generalization of the fuzzy cognitive map (FCM), the interval-valued fuzzy cognitive map (IVFCM) provides more flexibility in modeling increasingly complex systems with uncertainty. However, the problem of aggregating IVFCMs has not been considered to date. Concerning this key point, we propose ensembling IVFCMs via the evidential reasoning (ER) approach. Firstly, we give a detailed analysis of IVIFS in terms of evidence theory and introduce the concept of the augmented connection matrix within the framework of IVFCMs. Secondly, we present a theory of ensemble IVFCMs building on the former work and the ER approach, with particular emphasis on assessing the weights of different IVFCMs and aggregating them. Both theoretical analysis and practical examples show that the ensemble of IVFCMs not only reflects the importance levels of different maps but also achieves the goal of merging information from different maps in system modeling.
... Several detection techniques to identify attacks include: signature-based detection, anomaly detection, artificial neural network (ANN) based IDS, fuzzy logic [18] with temporal [19] based IDS, association rule based IDS, support vector machine (SVM) based IDS, genetic algorithm (GA) based IDS, and other hybrid techniques. In cloud systems, these detection techniques are utilized in host-based IDS, network-based IDS, hypervisor-based IDS and distributed IDS [20]. ...
... The basics of ECC relevant to the present study are given in this section. The ECC algorithm performs better cryptography with improved security [12][13][14][15][16][17][18][19][20][21] when compared with conventional cryptographic methods such as DH, RSA and DSA. ...
Article
Full-text available
At present, the sensor-cloud infrastructure is gaining popularity, since it offers a flexible, open and reconfigurable platform for monitoring and control applications. It mainly handles user data, which is quite sensitive; hence data protection, in terms of integrity and authenticity, is of great concern. Security is thus a major issue in such systems, including defence against intruders who try to access the infrastructure. In this paper, an improved encryption protocol for secure session keying between users via a trusted service is proposed over a sensor-cloud architecture. The technique uses a modified Elliptic Curve Cryptography (ECC) algorithm to improve the authentication of sensor nodes in the network. Further, Abelian group theory is used to convert the intruder deduction problem into a linear deduction problem, resolving the complexity associated with finding intruders in the network. This helps to reduce the computational complexity of generating a secured message transmission and increases the possibility of finding intruders in the network. Experimental validation of the proposed ECC in terms of computational cost shows that the proposed method attains lower computational cost and improved detection of intruders in the network. The technique is also efficient and can be applied in practical cases where other ECC algorithms fail when implemented in real time. © 2018 Springer Science+Business Media, LLC, part of Springer Nature
... It treats such maps as directed graphs in which the edges (and sometimes the nodes) are characterized by weighted factors (Kulinich 2014). In a cognitive map, concepts correspond to the nodes and cause-effect relationships between those concepts to the edges (Maio et al. 2017; Özesmi and Özesmi 2004). Like any graph, such a cognitive map is described by an adjacency matrix W, the elements w_ij of which represent the weights of the edges connecting the corresponding nodes u_1, u_2, ..., u_n (Axelrod 1976). ...
Article
Full-text available
We propose an algorithm for computing the influence matrix and rank distribution of nodes of a weighted directed graph by calculating the nodes' mutual impact. The algorithm of accumulative impact solves problems of dimension and computational complexity arising in the analysis of large complex systems. The algorithm calculates the mutual impact of each pair of vertices, making it possible to rank the nodes according to their importance within the system and to determine the most influential components. It produces results similar to those of the commonly used impulse method when applied to graphs that are impulse-stable in an impulse process, while overcoming the disadvantages of the impulse method in other situations. Results are always obtained regardless of impulse stability; they do not depend on the initial impulse, so only the initial values of the weights affect the calculation results. When elements in the adjacency matrix of the weighted directed graph are multiplied by a constant factor, scale invariance is not violated, and the full effect for each of the nodes scales proportionally. Several examples of analyses of weighted directed graphs, including one related to the practical problem of urban solid waste removal, are provided to demonstrate the advantages of the proposed algorithm.
... When w_ij < 0, the state value of the ith node decreases as the state value of the jth node increases. When w_ij = 0, there is no correlation between the jth node and the ith node (De Maio et al. 2017). ...
Article
Full-text available
Time series exist widely in both nature and society, so research on the analysis of time series has great significance. However, given their nonlinearity and uncertainty, the prediction of time series is still an open problem. In this paper, by means of intuitionistic fuzzy set theory, we propose a novel time series prediction scheme based on the intuitionistic fuzzy cognitive map. In previous research, the intuitionistic fuzzy cognitive map, as a kind of knowledge-based modeling tool, has mainly been used in the decision-making field, where the concept structure and weight matrix are usually obtained from the experience of experts. To tackle the diversity of time series, the proposed algorithm constructs the conceptual structure of the cognitive map and the weight matrix directly from raw sequential data, which effectively enlarges the application range by reducing human participation. Moreover, in order to appropriately calculate the hesitation degree, which plays a key role in the application of intuitionistic fuzzy sets, we propose a real-time adjustable hesitation degree calculation scheme. With this method, the hesitation degree can be adaptively adjusted by combining the Fermi formula with a dynamic membership degree. A number of experiments demonstrate the feasibility and effectiveness of the proposed schemes.
... Since much related work addressing specific activities of emergency systems has involved influencing factors, extracting influencing factors from this literature is a feasible approach (Cavaliere et al. 2018; Loia et al. 2018). Based on an intensive literature review (Maio et al. 2017; Rathore et al. 2017), influencing factors have been derived from summaries of factors in prior research. However, given the limited resources of the emergency system, it is unrealistic to improve all influencing factors simultaneously. ...
Article
Full-text available
High-risk emergency systems are emerging as a new generation of technology to prevent disasters. Recent research points out that these systems can protect property and lives efficiently. Given limited resources, a feasible way to improve the performance of such a system is to identify critical success factors (CSFs) and then optimize them. In this paper, a multi-criteria decision-making (MCDM) approach integrating the Affinity Diagram, Decision Making Trial and Evaluation Laboratory (DEMATEL), fuzzy cognitive map (FCM) and Dempster-Shafer evidence theory is proposed to identify critical success factors in high-risk emergency systems. DEMATEL and FCM are combined to tackle the decision-making problem in theory and practice. The model has the ability to fuse technical, economic, political and social attributes. The proposed method is applied to select CSFs for Chongqing city.
... With the development of the age of big data, one can obtain more data objects than ever. Some methods [41,42,43,44] have been proposed to deal with stream data, such as data obtained from all kinds of sensors and from social media, which grow dynamically. However, these models generally use labeled objects; unlabeled objects are not used to construct concept approximations for rough set-based supervised learning, and such algorithms require a large amount of labeled data, which is expensive and laborious to label. ...
Article
Full-text available
With the advent of the age of big data, a typical kind of big data set, called limited labeled big data, has appeared. It includes a small amount of labeled data and a large amount of unlabeled data. Some existing neighborhood-based rough set algorithms work well in analyzing rough data with numerical features, but they face three challenges when dealing with limited labeled data: the limited-label property of big data, computational inefficiency, and over-fitting in attribute reduction. In order to address these three issues, a combination of the neighborhood rough set and the local rough set, called the local neighborhood rough set (LNRS), is proposed in this paper. The corresponding concept approximation and attribute reduction algorithms, designed with linear time complexity, can efficiently and effectively deal with limited labeled big data. The experimental results show that the proposed local neighborhood rough set and corresponding algorithms significantly outperform their counterparts based on the classical neighborhood rough set. These results will enrich local rough set theory and enlarge its application scope.
... As reported in [20], concepts are primarily cognitive structures used to reconstruct and represent objects, segments, and events of the surrounding world. If objects in formal contexts come with information related to time, it is possible to apply FCA to realize time-related knowledge discovery and representation [7,21,22]. The resulting formal concepts (A, B) are information granules. ...
Article
Studying aspects related to the occurrences and co-occurrences of events enables many interesting applications in several domains, such as Public Safety and Security. In particular, in Digital Forensics it is useful to construct the timeline of a suspect, reconstructed by analysing social networking applications like Facebook and Twitter. One of the main limitations of existing data analysis techniques addressing the above issues is that they work only on a single view of the data and thus may miss the elicitation of interesting knowledge. This limitation can be overcome by considering more views and applying methods to assess such views, allowing human operators to move from one view to a more suitable one. This paper focuses on the temporal aspects of data and proposes an approach based on Granular Computing to build multiple time-related views in order to interpret the extracted knowledge concerning the periodic occurrences of events. The proposed approach adopts Formal Concept Analysis (with time-related attributes) as an algorithm to realize granulations of data, and defines a set of Granular Computing measures to interpret the formal concepts, whose extensional parts are formed by co-occurring events, in the lattices constructed by this algorithm. The applicability of the approach is demonstrated through a case study concerning a public dataset on forest fires that occurred in the Montesinho natural park in Portugal.
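The derivation operators at the heart of Formal Concept Analysis, on which such granulations rest, can be sketched in a few lines: deriving the shared attributes of a set of objects and then deriving the objects sharing those attributes yields a formal concept (extent, intent). The event names and time-of-day attributes below are hypothetical placeholders, not data from the case study.

```python
def derive_attrs(objs, context):
    """Attributes shared by every object in objs (the ' operator on objects)."""
    attrs = None
    for o in objs:
        attrs = context[o] if attrs is None else attrs & context[o]
    return attrs if attrs is not None else set()

def derive_objs(attrs, context):
    """Objects having every attribute in attrs (the ' operator on attributes)."""
    return {o for o, a in context.items() if attrs <= a}

# Hypothetical formal context: events described by coarse time-of-day attributes.
context = {
    "breakfast": {"morning", "weekday"},
    "commute":   {"morning", "weekday"},
    "dinner":    {"evening", "weekday"},
    "movie":     {"evening", "weekend"},
}

# Closing {"breakfast"} yields the formal concept of morning-weekday events:
# the intent is the shared attributes, the extent the events carrying them all.
intent = derive_attrs({"breakfast"}, context)
extent = derive_objs(intent, context)
print(extent, intent)
```

In the time-related setting described above, the attributes would be temporal granules (e.g. hours, day types), so each concept's extent collects co-occurring events.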
Article
Full-text available
Situation awareness is the cognitive capability of human and artificial agents to perceive, understand and predict the status of the situation in an environment. Situation awareness systems aim at supporting the situation awareness of human and artificial agents using computational techniques, models, and approaches for the assessment, tracking, and prediction of critical situations. Fuzzy logic formalisms have been extensively used in situation awareness systems thanks to their capability of dealing with uncertainties while providing agents with easily understandable models of situations and decisions. This paper proposes a systematic, unbiased, and updated review of the literature on fuzzy logic for situation awareness from 2010 to 2021, conducted using the PRISMA methodology, analyzing 139 articles. An in-depth discussion of the main open challenges and future research directions is provided.
Article
Fuzzy cognitive maps (FCMs) have been successfully applied to time series forecasting. However, it remains challenging to handle multivariate long nonstationary time series, such as EEG data, which may change rapidly and exhibit trend patterns. To overcome this limitation, in this article we propose a fast prediction model for multivariate long nonstationary time series based on the combination of elastic net regression and a high-order fuzzy cognitive map (HFCM), termed ElasticNet-HFCM. The designed FCM models each variable with one node, and the high-order FCM helps to capture trend patterns. A case study on predicting human actions from electroencephalogram (EEG) data, in the form of multichannel long nonstationary time series, is investigated based on the proposed prediction model. Specifically, we first predict EEG signals based on historical data; then a 1D convolutional neural network (1D-CNN) is developed to classify the predicted time series. The experimental results on the Grasp-and-Lift dataset show that the proposal can predict the EEG data with lower prediction error than other regression methods. The area-under-the-curve scores obtained on the Grasp-and-Lift dataset by the 1D-CNN are higher than those obtained by state-of-the-art classification methods for EEG data in most cases. These results illustrate that the proposal can predict and classify multivariate long nonstationary time series with high accuracy and efficiency.
Thesis
The work presented in this manuscript applies to French crisis management, in particular to the response phase that follows a major event such as a flood or an industrial accident. Following the event, crisis cells are activated to prevent and deal with the consequences of the crisis. They face, under time pressure, numerous difficulties: the stakeholders are numerous, autonomous and heterogeneous; the coexistence of emergency plans creates contradictions; and cascading effects feed on the interconnections between networks. These observations come at a time when the data available on computer networks keep multiplying. They are emitted, for example, by measurement sensors, on social networks, or by volunteers. These data are an opportunity to design an information system capable of collecting them and interpreting them into a formalized set of information usable in the crisis cell. To succeed, the challenges linked to the 4Vs of Big Data must be met by limiting the Volume, unifying the Variety, and improving the Veracity of the data and information handled, while keeping up with the dynamics (Velocity) of the ongoing crisis. Our surveys of the state of the art on the different parts of the targeted architecture allowed us to define such an information system. It is now able to (i) receive several types of events emitted by known or unknown data sources, (ii) use interpretation rules directly deduced from real business rules, and (iii) formalize all the information useful to the stakeholders. Its architecture is event-driven and coexists with the service-oriented architecture of the software developed by the Centre de Génie Industriel (CGI) laboratory.
The information system thus implemented has been tested on a scenario of a major flood of the Middle Loire, developed by two French flood forecasting services (Services de Prévision des Crues, SPC). The model describing the current crisis situation, obtained with the proposed information system, can be used to (i) deduce a crisis response process, (ii) detect unforeseen events, or (iii) update a representation of the situation in the crisis cell.
Chapter
In this chapter, we investigate smart maintenance for cyberware capacity management from the applications layer viewpoint. We describe the general knowledge of embedded software applications layer in Sect. 8.1. Then, in order to demonstrate how smart maintenance strategy can be better implemented in such intangible asset management, in particular, applications layer aspect, one representative research avenue is introduced in Sect. 8.2. Section 8.3 summarises this chapter.
Article
Full-text available
Nowadays, human activity recognition (HAR) is an important component of many ambient intelligent solutions, where accelerometer and gyroscope signals provide information about the physical activity of an observed person. It has gained increasing attention owing to the availability of commercial wearable devices such as smartphones and smartwatches. Previous studies have shown that HAR can be treated as a general machine learning problem with a particular data pre-processing stage. In the last decade, several researchers measured high recognition rates on public data sets with numerous “shallow” machine learning techniques. In some cases artificial neural networks (ANNs) produced better performance than other shallow techniques, while in other cases they were less effective. After the appearance of deep learning, a significant part of HAR research turned toward more complex solutions such as convolutional neural networks (CNNs), claiming that CNNs can substitute for the feature extraction stage of shallow techniques and can outperform them. Therefore, in the current state of the art, the efficiency of ANNs relative to CNNs and other machine learning techniques is unclear. The aim of this study is to investigate the performance of several ANN structures with different hyper-parameters and inputs on two public databases. The results show that the two key factors in ANN design are data pre-processing and hyper-parameter setup, because the accuracy difference between a well and a badly parameterized ANN is huge. A well-tuned ANN with extracted features can outperform other machine learning methods on the HAR problem, including CNN classifiers.
Conference Paper
Full-text available
Big Data Stream processing is among the most important computing trends nowadays. The growing interest in Big Data Stream processing comes from the needs of many Internet-based applications that generate huge data streams, whose processing can serve to extract useful analytics and inform decision-making systems. For instance, an IoT-based monitoring system for a supply chain can provide real-time data analytics on business delivery performance. The challenge of processing Big Data Streams resides in coping with real-time processing of an unbounded stream of data: the computing system should achieve a throughput high enough to accommodate the rate at which the input data stream is generated. Clearly, the higher the data stream rate, the higher the throughput must be to preserve the consistency of the processing results (e.g. preserving the order of events in the data stream). In this paper we show how to map the data stream processing phases (from data generation to final results) to a software chain architecture comprising five main components: sensor, extractor, parser, formatter and outputter. We exemplify the approach using Yahoo!S4 to process the Big Data Stream from the FlightRadar24 global flight monitoring system.
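The five-component chain named above (sensor, extractor, parser, formatter, outputter) can be sketched as a pipeline of generators, each stage consuming the previous one's output. The flight records below are invented placeholders, not FlightRadar24 data, and a real deployment would run each stage as a distributed processing element rather than an in-process generator.

```python
def sensor(lines):
    """Source: emits raw records (here, simulated CSV lines)."""
    yield from lines

def extractor(stream):
    """Pull the fields of interest out of each raw record."""
    for line in stream:
        yield line.strip().split(",")

def parser(stream):
    """Type the fields: (flight_id, altitude in metres)."""
    for fid, alt in stream:
        yield fid, int(alt)

def formatter(stream):
    """Shape each event for the sink."""
    for fid, alt in stream:
        yield {"flight": fid, "altitude_m": alt}

def outputter(stream):
    """Sink: collect results (a real system would publish downstream)."""
    return list(stream)

# Two hypothetical records flowing through the whole chain.
raw = ["AB123,11000", "CD456,10400"]
events = outputter(formatter(parser(extractor(sensor(raw)))))
print(events)
```

Because every stage is lazy, records flow through one at a time, mirroring how a stream processing engine keeps memory bounded on an unbounded input.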
Conference Paper
Full-text available
Many Internet-based applications generate huge data streams, which are known as Big Data Streams. Such applications comprise IoT-based monitoring systems, data analytics from monitoring online learning workspaces and MOOCs, global flight monitoring systems, etc. Unlike Big Data processing, in which the data is available in databases, file systems, etc. before processing, in Big Data Streams the data stream is unbounded and must be processed as it becomes available. Besides the challenges of processing huge amounts of data, Big Data Stream processing adds the further challenges of coping with scalability and high throughput to enable real-time decision making. While the MapReduce framework has proved successful for Big Data processing, its batch-mode processing shows limitations for Big Data Streams. Therefore, alternative frameworks such as Yahoo!S4 and TwitterStorm have been proposed for Big Data Stream processing. In this paper we implement and evaluate Yahoo!S4 for Big Data Stream processing and exemplify it through the Big Data Stream from a global flight monitoring system.
Article
Full-text available
Smart environments rely on sensor data to provide the business intelligence necessary to support decision-making. An efficient decision support model in such a context requires that sensor data be provided correctly and in a timely manner. Given the dynamicity of sensor environments and the diversity of their features and of user requirements, finding appropriate sensors with the required capabilities, or replacing faulty ones, constitutes a challenging task. Efficiently describing and organizing sensors in smart environments is essential to deliver rapid adaptation to errors and to the availability of data. In this paper, we present an approach for organizing and indexing sensor services based on their capabilities. We introduce a feature-oriented capability model that puts forward the functional aspects of the actions carried out and models them as Resource Description Framework (RDF) properties, rather than focusing on the change in the state of the world. Using this model for describing sensor capabilities, we apply Formal Concept Analysis to organize and index sensor services. We have experimented with and evaluated our approach in the Digital Enterprise Research Institute, which has been retrofitted with various sensors to monitor temperature, motion, light and consumption of power within a building.
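Independently of the RDF and FCA machinery described above, the core lookup task, finding sensors that offer every required capability, can be sketched with a simple inverted index from capabilities to sensor ids. The sensor ids and capability names below are hypothetical.

```python
def build_index(sensors):
    """Inverted index: capability -> set of sensor ids offering it."""
    index = {}
    for sid, caps in sensors.items():
        for cap in caps:
            index.setdefault(cap, set()).add(sid)
    return index

def find_sensors(index, required):
    """Sensors offering every required capability, e.g. candidates
    for replacing a faulty sensor with the same capabilities."""
    sets = [index.get(cap, set()) for cap in required]
    return set.intersection(*sets) if sets else set()

# Hypothetical sensor registry.
sensors = {
    "s1": {"temperature", "battery-powered"},
    "s2": {"temperature", "motion"},
    "s3": {"light"},
}
idx = build_index(sensors)
print(find_sensors(idx, {"temperature"}))
print(find_sensors(idx, {"temperature", "motion"}))
```

An FCA-based index refines this idea: the concept lattice groups sensors by maximal shared capability sets, so navigation between broader and narrower capability queries follows lattice edges.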
Article
Full-text available
The W3C Semantic Sensor Network Incubator group (the SSN-XG) produced an OWL 2 ontology to describe sensors and observations: the SSN ontology, available at http://purl.oclc.org/NET/ssnx/ssn. The SSN ontology can describe sensors in terms of capabilities, measurement processes, observations and deployments. This article describes the SSN ontology. It further gives an example and describes the use of the ontology in recent research projects.
Conference Paper
Full-text available
While many existing formal concept analysis algorithms are efficient, they are typically unsuitable for distributed implementation. Taking the MapReduce (MR) framework as our inspiration, we introduce a distributed approach to formal concept mining. The novelty of our method lies in the use of a light-weight MapReduce runtime called Twister, which is better suited to iterative algorithms than recent distributed approaches. First, we describe the theoretical foundations underpinning our distributed formal concept analysis approach. Second, we provide a representative exemplar of how a classic centralized algorithm can be implemented in a distributed fashion using our methodology: we modify Ganter's classic algorithm by introducing a family of MR* algorithms, namely MRGanter and MRGanter+, where the prefix denotes the algorithm's lineage. To evaluate the factors that impact distributed algorithm performance, we compare our MR* algorithms with the state of the art. Experiments conducted on real datasets demonstrate that MRGanter+ is efficient, scalable and appealing for distributed problems.
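A map/reduce-style split of concept mining can be illustrated with a standard FCA fact: every intent of a formal context is an intersection of object intents. Each worker emits the object intents of its partition, and the reduce step closes their union under intersection. This is a didactic sketch of the general idea, not the MRGanter algorithm itself; the two partitions and their contents are invented.

```python
from itertools import combinations

def map_phase(partition):
    """Map: each worker emits the object intents of its data partition."""
    return [frozenset(attrs) for attrs in partition.values()]

def reduce_phase(all_intents, full_attrs):
    """Reduce: close the emitted intents under intersection. Together
    with the full attribute set, this yields every intent of the
    global context."""
    closed = set(all_intents) | {frozenset(full_attrs)}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            inter = a & b
            if inter not in closed:
                closed.add(inter)
                changed = True
    return closed

# Hypothetical context split across two workers.
part1 = {"e1": {"morning", "weekday"}, "e2": {"evening", "weekday"}}
part2 = {"e3": {"evening", "weekend"}}
intents = reduce_phase(map_phase(part1) + map_phase(part2),
                       {"morning", "evening", "weekday", "weekend"})
print(sorted(sorted(i) for i in intents))
```

The closure loop is the expensive part; distributed algorithms like the MR* family aim precisely at spreading such iterative closure work across workers.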
Article
Full-text available
Context-aware pervasive systems are emerging as an important class of applications. Such systems can respond intelligently to contextual information about the physical world acquired via sensors and information about the computational environment. A declarative approach to building context-aware pervasive systems is presented, and the notion of the situation program is introduced, which highlights the primacy of the situation abstraction for building context-aware pervasive systems. There is also a demonstration of how to manipulate situation programs using meta-programming within an extension of the Prolog logic programming language which is called LogicCAP. Such meta-reasoning enables complex situations to be described in terms of other situations. Furthermore, a discussion is given on how the design of situation programs can affect the properties of a context-aware system. The approach encourages a high-level of abstraction for representing and reasoning with situations, and supports building context-aware systems incrementally by providing modularity and separation of concerns.
Conference Paper
Full-text available
A problem in performing activity recognition on a large scale (i.e. in many homes) is that a labelled data set needs to be recorded for each house in which activity recognition is performed. This is because most models for activity recognition require labelled data to learn their parameters. In this paper we introduce a transfer learning method for activity recognition which allows the use of existing labelled data sets from various homes to learn the parameters of a model applied in a new home. We evaluate our method using three large real-world data sets and show that our approach achieves good classification performance in a home for which little or no labelled data is available.
Conference Paper
Full-text available
In previous work, we have defined conceptual foundations that can be beneficially used in context modeling. These conceptual foundations include the separation of entity and context, and the characterization of context as either Intrinsic or Relational. This paper aims at extending this approach by introducing the ontological concept of Situation as a means of composing the elements of our ontology (entities, intrinsic and relational contexts) to model particular states of affairs of interest. Our concepts have been inspired by and aligned with conceptual theories from the fields of philosophy and cognitive sciences.
Article
Full-text available
This paper describes work on the development of an actionable model of situation awareness for Army infantry platoon leaders using fuzzy cognitive mapping techniques. Developing this model based on the formal representation of the platoon leader provided by the Goal-Directed Task Analysis (GDTA) methodology advances current cognitive models because it provides valuable insight on how to effectively support human cognition within the decision-making process. We describe the modeling design approach and discuss validating the model using the VBS2 simulation environment.
Conference Paper
Full-text available
C-SPARQL is an extension of SPARQL to support continuous queries over RDF data streams. Supporting streams in RDF format guarantees interoperability and opens up important applications, in which reasoners can deal with knowledge that evolves over time. We present C-SPARQL by means of examples in Urban Computing.
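C-SPARQL expresses windows over RDF streams declaratively (roughly, `FROM STREAM <uri> [RANGE ... STEP ...]`). The sketch below is not C-SPARQL but an illustrative Python re-implementation of the sliding-window evaluation such a clause implies; the triples and window parameters are invented for illustration:

```python
# Hypothetical timestamped triples: (subject, predicate, object, time_in_seconds).
stream = [
    ("sensor1", "observes", "motion", 1),
    ("sensor2", "observes", "temperature", 4),
    ("sensor1", "observes", "motion", 12),
    ("sensor3", "observes", "motion", 14),
]

def sliding_window_count(triples, predicate, obj, range_s, step_s, horizon):
    """Counts matching triples in the sliding window (t - range_s, t],
    re-evaluated every step_s seconds, mimicking the semantics of a
    C-SPARQL RANGE/STEP window clause over a continuous query."""
    results = []
    for t in range(0, horizon + 1, step_s):
        window = [x for x in triples
                  if t - range_s < x[3] <= t and x[1] == predicate and x[2] == obj]
        results.append((t, len(window)))
    return results

print(sliding_window_count(stream, "observes", "motion", range_s=10, step_s=5, horizon=15))
```

A real C-SPARQL engine would register the query once and push results as the stream advances; the loop over `t` here simply materializes the successive window evaluations.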
Article
Full-text available
This paper presents a theoretical model of situation awareness based on its role in dynamic human decision making in a variety of domains. Situation awareness is presented as a predominant concern in system operation, based on a descriptive view of decision making. The relationship between situation awareness and numerous individual and environmental factors is explored. Among these factors, attention and working memory are presented as critical factors limiting operators from acquiring and interpreting information from the environment to form situation awareness, and mental models and goal-directed behavior are hypothesized as important mechanisms for overcoming these limits. The impact of design features, workload, stress, system complexity, and automation on operator situation awareness is addressed, and a taxonomy of errors in situation awareness is introduced, based on the model presented. The model is used to generate design implications for enhancing operator situation awareness and future directions for situation awareness research.
Article
Full-text available
Medical decision making can be regarded as a process combining both analytical cognition and intuition. It involves reasoning within complex causal models of multiple concepts, usually described by uncertain, imprecise, and/or incomplete information. Aiming to model medical decision making, we propose a novel approach based on cognitive maps and intuitionistic fuzzy logic. The new model, called intuitionistic fuzzy cognitive map (iFCM), extends the existing fuzzy cognitive map (FCM) by considering the expert's hesitancy in the determination of the causal relations between the concepts of a domain. Furthermore, a modification in the formulation of the new model makes it even less sensitive than the original model to missing input data. To validate its effectiveness, an iFCM with 34 concepts representing fuzzy, linguistically expressed patient-specific data, symptoms, and multimodal measurements was constructed for pneumonia severity assessment. The results obtained reveal its comparative advantage over the respective FCM model by providing decisions that better match those made by the experts. The generality of the proposed approach suggests its suitability for a variety of medical decision-making tasks.
Article
Full-text available
This research deals with the soft computing methodology of the fuzzy cognitive map (FCM). Here a mathematical description of the FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of the FCM in modeling complex systems and its application to modeling and describing the behavior of a heat exchanger system are presented. The applicability of the FCM to modeling the supervisor of complex systems is discussed and an FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.
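The dynamics described above follow the standard FCM inference rule: each concept's activation is repeatedly recomputed from its weighted causes through a squashing function, A_j(t+1) = f(Σ_i w_ij · A_i(t)). A minimal sketch with a hypothetical three-concept map (the weights are invented for illustration, not taken from the heat-exchanger study):

```python
import math

# Hypothetical 3-concept map: weights[i][j] is the causal influence
# of concept i on concept j.
weights = [
    [0.0, 0.8, 0.0],
    [0.0, 0.0, 0.6],
    [-0.5, 0.0, 0.0],
]

def sigmoid(x, lam=1.0):
    """Squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(activations):
    """One synchronous update: A_j(t+1) = f(sum_i w_ij * A_i(t))."""
    n = len(activations)
    return [sigmoid(sum(weights[i][j] * activations[i] for i in range(n)))
            for j in range(n)]

def fcm_run(activations, max_iter=100, eps=1e-6):
    """Iterate the map until it settles into a fixed point (or max_iter)."""
    for _ in range(max_iter):
        nxt = fcm_step(activations)
        if max(abs(a - b) for a, b in zip(activations, nxt)) < eps:
            return nxt
        activations = nxt
    return activations

state = fcm_run([1.0, 0.0, 0.0])
print([round(a, 3) for a in state])
```

With moderate weights the sigmoid makes each update a contraction, so the map converges to a fixed point that can be read as the system's inferred steady state.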
Article
This work presents the Transition-Aware Human Activity Recognition (TAHAR) system architecture for the recognition of physical activities using smartphones. It targets real-time classification with a collection of inertial sensors while addressing issues regarding transitions between activities and activities unknown to the learning algorithm. We propose two implementations of the architecture which differ in their prediction technique, as they deal with transitions either by directly learning them or by considering them as unknown activities. This is accomplished by combining the probabilistic output of consecutive activity predictions of a Support Vector Machine (SVM) with a heuristic filtering approach. The architecture is validated over three case studies that involve data from people performing a broad spectrum of activities (up to 33), while carrying smartphones or wearable sensors. Results show that TAHAR outperforms state-of-the-art baseline works and reveal the main advantages of the architecture.
Article
Situation awareness (SA) has become a widely used construct within the human factors community, the focus of considerable research over the past 25 years. This research has been used to drive the development of advanced information displays, the design of automated systems, information fusion algorithms, and new training approaches for improving SA in individuals and teams. In recent years, a number of papers criticized the Endsley model of SA on various grounds. I review those criticisms here and show them to be based on misunderstandings of the model. I also review several new models of SA, including situated SA, distributed SA, and sensemaking, in light of this discussion and show how they compare to existing models of SA in individuals and teams.
Article
In order to define systems enabling the automatic identification of occurring situations, numerous approaches employing intelligent software agents to analyse data coming from deployed sensors have been proposed. Thus, multiple agents may be committed to monitoring the same phenomenon in the same environment. Redundancy of sensors and agents is needed, for instance, in real-world applications in order to mitigate the risk of faults and threats. One possible side effect of redundancy is that agents observing the same phenomenon could provide discordant opinions. Solid mechanisms for reaching an agreement among these agents and producing a shared consensus on the same observations are therefore needed. This paper proposes an approach to integrate a fuzzy-based consensus model into a Situation Awareness Framework. The main idea is to consider intelligent agents as experts claiming their opinions (preferences) on a phenomenon of interest.
Article
Microblogging services like Twitter and Facebook collect millions of items of user-generated content every moment about trending news, occurring events, and so on. Nevertheless, it is very difficult to find information of interest among the huge number of available posts, which are often noisy and redundant. In general, social media analytics services have attracted increasing attention from both research and industry. Specifically, the dynamic context of microblogging requires managing not only the meaning of information but also the evolution of knowledge over the timeline. This work defines the Time Aware Knowledge Extraction (briefly, TAKE) methodology, which relies on a temporal extension of Fuzzy Formal Concept Analysis. In particular, a microblog summarization algorithm has been defined that filters the concepts organized by TAKE in a time-dependent hierarchy. The algorithm addresses topic-based summarization on Twitter. Besides considering the timing of the concepts, another distinguishing feature of the proposed microblog summarization framework is the possibility of producing a more or less detailed summary, according to the user's needs, with good levels of quality and completeness, as highlighted in the experimental results.
Article
Data stream classification has drawn increasing attention from the data mining community in recent years. Relevant applications include network traffic monitoring, sensor network data analysis, Web click stream mining, power consumption measurement, dynamic tracing of stock fluctuations, to name a few. Data stream classification in such real-world applications is typically subject to three major challenges: concept drifting, large volumes, and partial labeling. As a result, training examples in data streams can be very diverse and it is very hard to learn accurate models with efficiency. In this paper, we propose a novel framework that first categorizes diverse training examples into four types and assign learning priorities to them. Then, we derive four learning cases based on the proportion and priority of the different types of training examples. Finally, for each learning case, we employ one of the four SVM-based training models: classical SVM, semi-supervised SVM, transfer semi-supervised SVM, and relational k-means transfer semi-supervised SVM. We perform comprehensive experiments on real-world data streams that validate the utility of our approach.
Conference Paper
Searching for interesting patterns in binary matrices plays an important role in data mining and, in particular, in formal concept analysis and related disciplines. Several algorithms for computing particular patterns represented by maximal rectangles in binary matrices have been proposed, but their major drawback is their computational complexity, limiting their application to relatively small datasets. In this paper we introduce a scalable distributed algorithm for computing maximal rectangles that uses the map-reduce approach to data processing.
Conference Paper
In recent years, the success of the Semantic Web has been strongly related to the diffusion of numerous distributed ontologies enabling shared machine-readable content. Ontologies vary in size, semantics and application domain, but often do not provide for the representation and manipulation of uncertain information. Here we describe an approach for automatic fuzzy ontology elicitation through the analysis of a collection of web resources. The approach exploits a fuzzy extension of Formal Concept Analysis theory and defines a methodological process to generate an OWL-based representation of concepts, properties and individuals. A simple case study in the Web domain validates the applicability and the flexibility of this approach.
Article
An innovative and flexible model based on Grey Systems Theory and Fuzzy Cognitive Maps, called Fuzzy Grey Cognitive Maps (FGCM), is proposed. It can be adapted to a wide range of problems, especially in multiple-meaning environments. FGCMs offer some advantages in comparison with other similar tools. First, the FGCM model is defined specifically for multiple-meaning (grey) environments. Second, the FGCM technique allows the definition of relationships between concepts; through this characteristic, decisional models that are more reliable for interrelated environments can be defined. Third, the FGCM technique is able to quantify the grey influence of the relationships between concepts, providing better support in grey environments. Finally, with the FGCM model it is possible to develop a what-if analysis in order to describe possible grey scenarios. IT project risks are modelled to illustrate the proposed technique.
Conference Paper
This article presents a method for analyzing the evolution of concepts represented by concept lattices in a time stamped database, showing how the concepts that evolve with time induce a change in the concept lattice. The purpose of this work is to extend formal concept analysis to handle temporal properties and represent temporally evolving attributes.
Article
Recognizing human activities from sensor readings has recently attracted much research interest in pervasive computing due to its potential in many applications, such as assistive living and healthcare. This task is particularly challenging because human activities are often performed not only in a simple (i.e., sequential) but also in a complex (i.e., interleaved or concurrent) manner in real life. Little work has been done to address these complex cases. The existing models of interleaved and concurrent activities are typically learning-based. Such models lack flexibility in real life because activities can be interleaved and performed concurrently in many different ways. In this paper, we propose a novel pattern mining approach to recognize sequential, interleaved, and concurrent activities in a unified framework. We exploit the Emerging Pattern—a discriminative pattern that describes significant changes between classes of data—to identify sensor features for classifying activities. Unlike existing learning-based approaches, which require different training data sets for building activity models, our activity models are built upon the sequential activity trace only and can be applied to recognize both simple and complex activities. We conduct our empirical studies by collecting real-world traces, evaluating the performance of our algorithm, and comparing our algorithm with static and temporal models. Our results demonstrate that, with a time slice of 15 seconds, we achieve an accuracy of 90.96 percent for sequential activity, 88.1 percent for interleaved activity, and 82.53 percent for concurrent activity.
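An Emerging Pattern is commonly defined through its growth rate: the ratio of a pattern's support in a target class to its support in a background class. A small sketch under that definition (the sensor-event traces and the growth threshold are hypothetical, and this brute-force enumeration is not the authors' mining algorithm):

```python
from itertools import combinations

# Hypothetical sensor-event traces, one list of event sets per activity class.
traces = {
    "cooking":  [{"stove", "fridge"}, {"stove", "cupboard"}, {"stove", "fridge"}],
    "cleaning": [{"tap", "cupboard"}, {"tap"}, {"fridge"}],
}

def support(pattern, transactions):
    """Fraction of transactions containing the pattern."""
    return sum(1 for t in transactions if pattern <= t) / len(transactions)

def emerging_patterns(target, background, min_growth=1.5):
    """Patterns (up to size 2) whose support grows by at least
    min_growth from the background class to the target class."""
    items = set().union(*traces[target])
    found = {}
    for r in (1, 2):
        for combo in combinations(sorted(items), r):
            p = set(combo)
            s_t = support(p, traces[target])
            s_b = support(p, traces[background])
            if s_b == 0:
                growth = float("inf") if s_t > 0 else 0.0
            else:
                growth = s_t / s_b
            if growth >= min_growth:
                found[frozenset(p)] = growth
    return found

eps = emerging_patterns("cooking", "cleaning")
for p, g in sorted(eps.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(p), "growth:", g)
```

Patterns with infinite growth (present only in the target class) are the most discriminative sensor features for that activity; a classifier can then score an incoming window by the emerging patterns it matches.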
Conference Paper
The Fuzzy Cognitive Maps (FCM) of B. Kosko (1986) are useful tools for exploring the impacts of inputs to fuzzy dynamical systems. The development of an FCM often occurs within a group context because it is felt that the variety of perspectives on the given dynamical system improves the effort to identify the relevant concepts and the causal relationships between them. The assumption is that combining the incomplete, conflicting opinions of different experts may cancel out the effects of oversight, ignorance and prejudice. There is also the need to accommodate the inherent fuzziness of the problem. We present an integrated process for rating the intensity of causal relationships, generating mean FCMs, assessing group consensus, and supporting the building of group consensus.