Using Rough Sets to Improve Activity Recognition Based on Sensor Data

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1779 ◽  
Author(s):  
Hans W. Guesgen

Activity recognition plays a central role in many sensor-based applications, such as smart homes. Given a stream of sensor data, the goal is to determine the activities that triggered it. This article shows how spatial information can be used to improve the process of recognizing activities in smart homes. The sensors used in smart homes are in most cases installed in fixed locations, which means that when a particular sensor is triggered, we know approximately where the activity takes place. However, since different sensors may be involved in different occurrences of the same type of activity, the set of sensors associated with a particular activity is not precisely defined. In this article, we use rough sets rather than standard sets to model the sensors involved in an activity, which enables us to deal with this imprecision. Using publicly available data sets, we demonstrate that rough sets can adequately capture useful information to assist with the activity recognition process. We also show that rough sets lend themselves to creating Explainable Artificial Intelligence (XAI).
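The rough-set idea in the abstract can be sketched in a few lines: the lower approximation collects the sensors triggered in every observed occurrence of an activity, the upper approximation those triggered in at least one, and the boundary between them captures the imprecision. A minimal Python sketch, with hypothetical sensor IDs (not from the paper's data sets):

```python
# Illustrative sketch, not the paper's exact formulation: approximating the
# set of sensors associated with an activity by a rough set. Each occurrence
# of the activity triggers a (possibly different) set of sensors.

def rough_sensor_set(occurrences):
    """Return (lower, upper) approximations of an activity's sensor set.

    lower: sensors triggered in *every* occurrence (certainly involved)
    upper: sensors triggered in *at least one* occurrence (possibly involved)
    """
    occ = [set(o) for o in occurrences]
    lower = set.intersection(*occ)
    upper = set.union(*occ)
    return lower, upper

# Three hypothetical occurrences of a "making tea" activity:
occurrences = [
    {"kitchen_motion", "kettle_power", "cupboard_door"},
    {"kitchen_motion", "kettle_power", "fridge_door"},
    {"kitchen_motion", "kettle_power"},
]
lower, upper = rough_sensor_set(occurrences)
boundary = upper - lower  # sensors whose involvement is uncertain
```

The boundary region is what makes the representation explainable: it names exactly which sensors the system is unsure about for a given activity.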

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 879 ◽  
Author(s):  
Uwe Köckemann ◽  
Marjan Alirezaie ◽  
Jennifer Renoux ◽  
Nicolas Tsiftes ◽  
Mobyen Uddin Ahmed ◽  
...  

As research in smart homes and activity recognition increases, it is ever more important to have benchmark systems and data upon which researchers can compare methods. While synthetic data can be useful for certain method developments, real data sets that are open and shared are equally important. This paper presents the E-care@home system, its installation in a real home setting, and a series of data sets that were collected using the E-care@home system. Our first contribution, the E-care@home system, is a collection of software modules for data collection, labeling, and various reasoning tasks such as activity recognition, person counting, and configuration planning. It supports a heterogeneous set of sensors that can be extended easily and connects collected sensor data to higher-level Artificial Intelligence (AI) reasoning modules. Our second contribution is a series of open data sets that can be used to recognize activities of daily living. In addition to these data sets, we describe the technical infrastructure that we have developed to collect the data and the physical environment. Each data set is annotated with ground-truth information, making it relevant for researchers interested in benchmarking different algorithms for activity recognition.


2020 ◽  
Vol 12 (23) ◽  
pp. 4007 ◽  
Author(s):  
Kasra Rafiezadeh Shahi ◽  
Pedram Ghamisi ◽  
Behnood Rasti ◽  
Robert Jackisch ◽  
Paul Scheunders ◽  
...  

The increasing amount of information acquired by imaging sensors in Earth Sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring of the Earth's surface. Many studies have investigated the use of multi-sensor data sets to improve the performance of supervised learning-based approaches at various tasks (i.e., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high-spatial-resolution data to derive the spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. In order to fuse multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed. More specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. In order to evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets, which consist of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, including hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm can provide an accurate clustering map compared to state-of-the-art sparse subspace-based clustering algorithms.
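A much-simplified sketch of the sparse subspace clustering step described above: each fused feature vector (standing in for stacked spectral and spatial feature columns) is regressed on all the others with a lasso penalty, and the coefficient magnitudes serve as an affinity matrix for spectral clustering. All names, data, and parameter values here are illustrative assumptions, not the paper's Multi-SSC implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_cluster(X, n_clusters, alpha=0.01):
    """Cluster rows of X via sparse self-representation (simplified SSC):
    each sample is regressed on all the others with a lasso penalty, and
    the coefficient magnitudes define an affinity for spectral clustering."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)  # exclude the sample itself
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[others].T, X[i])
        C[i, others] = model.coef_
    affinity = np.abs(C) + np.abs(C).T  # symmetrize
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    return labels, affinity

# Synthetic "fused" features: two groups of samples lying in two different
# subspaces, a stand-in for concatenated spectral + spatial features.
rng = np.random.default_rng(0)
t = rng.uniform(1.0, 2.0, size=10)
A = np.outer(t, np.array([1.0, 1.0, 0.0, 0.0, 0.0]))
B = np.outer(t, np.array([0.0, 0.0, 1.0, 1.0, 0.0]))
X = np.vstack([A, B]) + 0.01 * rng.normal(size=(20, 5))
labels, affinity = sparse_subspace_cluster(X, n_clusters=2)
```

Because samples from the same subspace can represent each other sparsely while samples from different subspaces cannot, the affinity matrix is close to block-diagonal, which is what makes the subsequent spectral clustering reliable.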


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 825 ◽  
Author(s):  
Fadi Al Machot ◽  
Mohammed R. Elkobaisi ◽  
Kyandoghere Kyamakya

Due to significant advances in sensor technology, studies towards activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results by classifying activities whose instances have already been seen during training. Activity recognition methods based on real-life settings should cover a growing number of activities in various domains, whereby a significant part of instances will not be present in the training data set. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge regarding sensor readings about those previously unseen activities. In this paper, we introduce an approach to leverage sensor data in discovering new unseen activities that were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge can be transferred from seen to unseen activities by using semantic similarity. The evaluation conducted on two data sets extracted from the well-known CASAS datasets shows that the proposed zero-shot learning approach achieves high performance in recognizing unseen (i.e., not present in the training dataset) new activities.
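The semantic-similarity transfer described above can be illustrated with a toy sketch: each activity label is given a semantic vector, and an unseen activity is recognized by matching an attribute estimate derived from sensor readings against candidate labels. The attribute dimensions, vectors, and labels below are hypothetical, not taken from the CASAS data or the paper's embedding:

```python
import numpy as np

# Hand-crafted semantic attribute vectors (hypothetical; a real system would
# use word embeddings or an ontology). Dimensions: [kitchen, water, motion, door]
ATTRIBUTES = {
    "cooking":        np.array([1.0, 0.2, 0.8, 0.3]),
    "washing_dishes": np.array([1.0, 1.0, 0.5, 0.1]),
    "showering":      np.array([0.0, 1.0, 0.4, 0.2]),  # unseen at training time
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_predict(attribute_estimate, candidate_labels):
    """Pick the candidate activity whose semantic vector is closest
    (by cosine similarity) to the sensor-derived attribute estimate."""
    return max(candidate_labels,
               key=lambda lbl: cosine(attribute_estimate, ATTRIBUTES[lbl]))

# Sensor-derived attribute estimate: heavy water use, no kitchen sensors.
estimate = np.array([0.05, 0.9, 0.5, 0.3])
pred = zero_shot_predict(estimate, ["cooking", "washing_dishes", "showering"])
```

The point of the sketch is that "showering" can be recognized without any training instances, purely because its semantic vector is closest to what the sensors suggest.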


Author(s):  
Christopher MacDonald ◽  
Michael Yang ◽  
Shawn Learn ◽  
Ron Hugo ◽  
Simon Park

Abstract There are several challenges associated with existing rupture detection systems, such as their inability to detect accurately during transient conditions (such as pump dynamics), delayed responses, and their inability to transfer models easily to different pipeline configurations. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of the rule-based AI system. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set. AI algorithms undergo testing and training using two data sets: laboratory-collected data that mimics transient pump-station operations and real operator data that includes Real Time Transient Model (RTTM) simulated ruptures. The use of non-dimensional sensor data enables the system to detect ruptures from pipeline data not used in the training process.
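As a rough illustration of the non-dimensionalization and 2D-input steps (the paper does not spell out its exact scheme), one might divide each sensor channel by a steady-state reference value and stack the channels into channel-by-time windows for a CNN. The channel names, reference values, and window size below are invented for the sketch:

```python
import numpy as np

def nondimensionalize(x, ref):
    """Divide a sensor channel by a steady-state reference value so the same
    model can transfer across pipelines (hypothetical normalization scheme)."""
    return np.asarray(x, dtype=float) / ref

def to_cnn_windows(channels, window):
    """Stack non-dimensionalized channels into 2D (channels x time) windows,
    the kind of image-like input a 2D CNN classifier could consume."""
    data = np.stack(channels)            # shape: (n_channels, n_samples)
    n = data.shape[1] // window          # drop any trailing partial window
    return [data[:, i * window:(i + 1) * window] for i in range(n)]

# Invented pump-station readings: pressure drops and flow rises mid-series,
# the kind of signature a rupture might produce.
pressure = nondimensionalize([5.0, 5.1, 4.9, 5.0, 2.0, 1.9], ref=5.0)
flow     = nondimensionalize([2.0, 2.0, 2.1, 2.0, 3.5, 3.6], ref=2.0)
windows  = to_cnn_windows([pressure, flow], window=3)
```

After this scaling, a window from any pipeline expresses deviations from its own steady state, which is what allows a classifier trained on one configuration to be applied to another.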


2009 ◽  
Vol 5 (3) ◽  
pp. 236-252 ◽  
Author(s):  
Xin Hong ◽  
Chris Nugent ◽  
Maurice Mulvenna ◽  
Sally McClean ◽  
Bryan Scotney ◽  
...  

2019 ◽  
Vol 16 (2) ◽  
pp. 678-690 ◽  
Author(s):  
Chao-Lin Wu ◽  
Ya-Hung Chen ◽  
Yi-Wei Chien ◽  
Ming-Je Tsai ◽  
Ting-Ying Li ◽  
...  

Author(s):  
Christopher Macdonald ◽  
Jaehyun Yang ◽  
Shawn Learn ◽  
Simon S. Park ◽  
Ronald J. Hugo

Abstract There are several challenges associated with existing pipeline rupture detection systems, including an inability to accurately detect during transient conditions (such as changes in pump operating points), an inability to easily transfer from one pipeline configuration to another, and relatively slow response times. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of rule-based AI. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set, independent of geometry, length, and medium. AI algorithms undergo testing and training using two data sets: laboratory-collected flow loop data that mimics transient pump-station operations and real operator data that includes simulated ruptures using the Real Time Transient Model (RTTM). The results of the multiple AI classifiers are fused together to provide higher reliability, especially in detecting ruptures from pipeline data not used in the training process.
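The classifier-fusion step can be sketched as a weighted average of per-classifier rupture probabilities followed by a threshold; the paper does not specify its exact fusion rule, so the scheme, weights, and threshold below are assumptions for illustration:

```python
def fuse_rupture_scores(scores, weights=None, threshold=0.5):
    """Fuse per-classifier rupture probabilities by a weighted average
    (a simple fusion rule; the paper does not specify its exact scheme).

    scores:  dict mapping classifier name -> probability of rupture in [0, 1]
    weights: optional dict of per-classifier weights (defaults to equal)
    Returns (fused probability, alarm flag).
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    fused = sum(weights[name] * scores[name] for name in scores) / total
    return fused, fused >= threshold

# Hypothetical outputs from the two classifier families named in the abstract:
fused, alarm = fuse_rupture_scores({"cnn": 0.9, "anfis": 0.7})
```

Averaging over dissimilar classifiers is what buys the reliability claimed above: a spurious spike in one model's score is damped unless the other model agrees.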

