A Multiscale Residual Attention Network for Multitask Learning of Human Activity Using Radar Micro-Doppler Signatures

2019 ◽  
Vol 11 (21) ◽  
pp. 2584 ◽  
Author(s):  
Yuan He ◽  
Xinyu Li ◽  
Xiaojun Jing

Short-range radar has become one of the latest sensor technologies for the Internet of Things (IoT), and it plays an increasingly vital role in IoT applications. As essential tasks for various smart-sensing applications, radar-based human activity recognition and person identification have received growing attention due to radar’s robustness to the environment and low power consumption. Activity recognition and person identification are generally treated as separate problems. However, designing different networks for these two tasks brings high computational complexity and, to some extent, wastes resources. Furthermore, there are correlations between the activity recognition and person identification tasks. In this work, we propose a multiscale residual attention network (MRA-Net) for joint activity recognition and person identification with radar micro-Doppler signatures. A fine-grained loss weight learning (FLWL) mechanism is presented to elaborate a multitask loss for optimizing MRA-Net. In addition, we construct a new radar micro-Doppler dataset with dual labels of activity and identity. With the proposed model trained on this dataset, we demonstrate that our method achieves state-of-the-art performance on both radar-based activity recognition and person identification tasks. The impact of the FLWL mechanism was further investigated, and ablation studies of the efficacy of each component in MRA-Net were also conducted.
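The abstract does not spell out how the FLWL mechanism weights the two task losses. A common way to combine losses for joint tasks like these is uncertainty-based weighting with learnable log-variance terms; the sketch below illustrates that general idea only, and the paper's actual FLWL formulation may differ.

```python
import math

def multitask_loss(loss_act, loss_id, log_var_act, log_var_id):
    """Combine an activity-recognition loss and a person-identification
    loss using learnable log-variance weights (uncertainty weighting).

    This is an illustrative stand-in for a learned multitask weighting;
    it is NOT the paper's FLWL mechanism, whose details are not given here.
    """
    w_act = math.exp(-log_var_act)  # task weight shrinks as uncertainty grows
    w_id = math.exp(-log_var_id)
    # The log-variance terms act as regularizers that keep the weights
    # from collapsing to zero during training.
    return w_act * loss_act + w_id * loss_id + log_var_act + log_var_id

# With both log-variances at 0, the weights are 1 and the combined
# loss reduces to the plain sum of the two task losses.
total = multitask_loss(0.7, 0.3, 0.0, 0.0)
```

In practice the two log-variance scalars would be trainable parameters updated by the same optimizer as the network weights, so the balance between the tasks is learned rather than hand-tuned.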

Sensors ◽  
2012 ◽  
Vol 12 (6) ◽  
pp. 8039-8054 ◽  
Author(s):  
Oresti Banos ◽  
Miguel Damas ◽  
Hector Pomares ◽  
Ignacio Rojas

2020 ◽  
Author(s):  
Nooshin Bahador

BACKGROUND Multimodal wearable technologies have opened up wide possibilities in human activity recognition, and more specifically in the personalized monitoring of eating habits. The emerging challenge is the selection of the most discriminative information from high-dimensional data collected from multiple sources. The available fusion algorithms, with their complex structures, are poorly adapted to computationally constrained environments that require integrating information directly at the source; a simpler low-level fusion method is therefore needed. OBJECTIVE In the absence of a data-combining process, the cost of feeding high-dimensional raw data directly to a deep classifier would be computationally expensive in terms of response time, energy consumption, and memory requirements. Considering this, the current study aimed to develop a computationally efficient data fusion technique that provides a more comprehensive insight into human activity dynamics in a lower dimension. The major objective was to account for the statistical dependency of multisensory data and to explore inter-modality correlation patterns for different activities. METHODS In this technique, the information in time (regardless of the number of sources) is transformed into a 2D space that facilitates separating eating episodes from other activities. This is based on the hypothesis that data captured by various sensors are statistically associated with each other, and that the covariance matrix of all these signals has a unique distribution correlated with each activity, which can be encoded in a contour representation. These representations are then used as the input of a deep model that learns the patterns associated with each specific activity. RESULTS To show the generalizability of the proposed fusion algorithm, two different scenarios were considered. These scenarios differed in temporal segment size, type of activity, wearable device, subjects, and deep learning architecture. The first scenario used a dataset in which a single participant performed a limited number of activities while wearing an Empatica E4 wristband. In the second scenario, a dataset related to activities of daily living was used, in which 10 different participants wore Inertial Measurement Units while performing a more complex set of activities. The precision obtained from leave-one-subject-out cross-validation for the second scenario reached 0.803. The impact of missing data on performance degradation was also evaluated. CONCLUSIONS In conclusion, the proposed fusion technique makes it possible to embed joint variability information across different modalities in a single 2D representation, yielding a more global view of the different aspects of the daily human activities at hand while preserving the desired performance level in activity recognition.
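The core of the method above is encoding the covariance matrix of all sensor channels as a fixed-size 2D map that a deep classifier can consume. A minimal sketch of that low-level fusion step, assuming a simple min-max normalization and nearest-neighbour resizing (the paper's exact contour encoding is not specified here):

```python
import numpy as np

def covariance_image(signals, size=32):
    """Encode the covariance matrix of multichannel sensor signals as a
    fixed-size 2D map suitable as CNN input.

    signals: array of shape (n_channels, n_samples), one row per sensor
    channel. This is a hypothetical sketch of the fusion idea; `size`
    and the normalization scheme are illustrative choices.
    """
    cov = np.cov(signals)  # (n_channels, n_channels), rows are variables
    # Min-max normalize to [0, 1] so maps are comparable across windows.
    cov = (cov - cov.min()) / (cov.max() - cov.min() + 1e-12)
    # Nearest-neighbour upsample to a fixed image size, independent of
    # how many channels the wearable provides.
    idx = np.linspace(0, cov.shape[0] - 1, size).round().astype(int)
    return cov[np.ix_(idx, idx)]

rng = np.random.default_rng(0)
img = covariance_image(rng.standard_normal((6, 200)))  # 6 channels, 200 samples
```

Note the output size is fixed regardless of the number of source channels, which is what lets one classifier serve windows fused from different sensor configurations.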


2021 ◽  
Author(s):  
Mohammed hashim B.A ◽  
Amutha R

Abstract Human Activity Recognition has been the most popular research area in the pervasive computing field in recent years. Sensor data plays a vital role in identifying several human actions. Convolutional Neural Networks (CNNs) have become the most recent technique in computer vision, but it is still premature to use CNNs on sensor data, particularly in ubiquitous and wearable computing. In this paper, we propose the idea of transforming raw accelerometer and gyroscope sensor data into the visual domain using our novel activity image creation method (NAICM). A pre-trained CNN (AlexNet) has been used on the converted image-domain information. The proposed method is evaluated on several publicly available human activity recognition datasets. The results show that the proposed novel activity image creation method (NAICM) successfully creates the activity images, with a classification accuracy of 98.36% using the pre-trained CNN.
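The abstract does not describe how NAICM builds its images. A generic way to move inertial windows into the visual domain is to stack the per-axis accelerometer and gyroscope channels row-wise and rescale them to pixel intensities; the sketch below shows only that generic construction, not the paper's NAICM.

```python
import numpy as np

def activity_image(accel, gyro):
    """Stack per-axis accelerometer and gyroscope windows row-wise into
    a single 2D 'activity image'.

    accel, gyro: arrays of shape (3, window_len) -- x, y, z axes over one
    time window. A generic sketch; NAICM's actual construction is not
    specified in the abstract.
    """
    stacked = np.vstack([accel, gyro]).astype(float)  # (6, window_len)
    # Per-row min-max scaling to the 8-bit pixel range, so each channel
    # uses the full intensity range regardless of its physical units.
    lo = stacked.min(axis=1, keepdims=True)
    hi = stacked.max(axis=1, keepdims=True)
    img = (stacked - lo) / (hi - lo + 1e-12) * 255.0
    return np.rint(img).astype(np.uint8)

win = np.linspace(0.0, 1.0, 128).reshape(1, -1).repeat(3, axis=0)
img = activity_image(win, win * 2.0)  # one synthetic 128-sample window
```

A pre-trained image network such as AlexNet expects a larger 3-channel input, so in practice such an image would still be resized and replicated across channels before fine-tuning.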


Author(s):  
Alireza Abedin ◽  
S. Hamid Rezatofighi ◽  
Qinfeng Shi ◽  
Damith C. Ranasinghe

Batteryless, or so-called passive, wearables are providing new and innovative methods for human activity recognition (HAR), especially in healthcare applications for older people. Passive sensors are low-cost, lightweight, unobtrusive and desirably disposable; attractive attributes for healthcare applications in hospitals and nursing homes. Despite the compelling propositions for sensing applications, the data streams from these sensors are characterised by high sparsity---the time intervals between sensor readings are irregular, while the number of readings per unit time is often limited. In this paper, we rigorously explore the problem of learning activity recognition models from temporally sparse data. We describe how to learn directly from sparse data using a deep learning paradigm in an end-to-end manner. We demonstrate significant classification performance improvements on real-world passive sensor datasets from older people over the state-of-the-art deep learning human activity recognition models. Further, we provide insights into the model's behaviour through complementary experiments on a benchmark dataset and visualisation of the learned activity feature spaces.
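One simple way to let a sequence model "see" the irregular sampling the abstract describes is to pair each sparse reading with the time gap since the previous one, rather than resampling onto a fixed grid. This is a minimal illustrative preprocessing step, not the paper's end-to-end architecture:

```python
import numpy as np

def with_time_deltas(timestamps, values):
    """Augment sparse sensor readings with inter-reading time gaps.

    timestamps: strictly increasing reading times in seconds.
    values: one sensor value per reading.
    Returns an (n, 2) array of [value, gap_since_previous_reading] rows,
    which a downstream sequence model can consume directly. A simple
    sketch; the paper learns from sparse data end-to-end instead.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    values = np.asarray(values, dtype=float)
    # Prepending the first timestamp makes the first gap exactly 0.
    deltas = np.diff(timestamps, prepend=timestamps[0])
    return np.stack([values, deltas], axis=1)

feat = with_time_deltas([0.0, 0.4, 1.5], [1.0, 2.0, 3.0])
```

The appeal of this representation is that no readings are invented or dropped: the irregularity itself becomes an input feature instead of a nuisance to be interpolated away.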


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Jin Lee ◽  
Jungsun Kim

Nowadays, human activity recognition (HAR) plays an important role in wellness-care and context-aware systems. Human activities can be recognized in real time using sensory data collected from the various sensors built into smart mobile devices. Recent studies have focused on HAR based solely on triaxial accelerometers, which is the most energy-efficient approach. However, such HAR approaches are still energy-inefficient because the accelerometer must run continuously so that the physical activity of a user can be recognized in real time. In this paper, we propose a novel HAR approach that controls the activity recognition duration for energy-efficient HAR. We investigated the impact of varying the acceleration-sampling frequency and window size for HAR using the variable activity recognition duration (VARD) strategy. We implemented our approach on the Android platform and evaluated its performance in terms of energy efficiency and accuracy. The experimental results showed that our approach reduced energy consumption by a minimum of about 44.23% and a maximum of about 78.85% compared to conventional HAR, without sacrificing accuracy.
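The energy argument above can be made concrete with a back-of-the-envelope duty-cycle model: if the accelerometer pipeline is active only part of each recognition period, average power falls roughly linearly with the duty cycle. The power figures below are illustrative assumptions, and a fixed duty cycle is a simplification of VARD, which adapts the recognition duration, sampling frequency, and window size instead.

```python
def avg_power_mw(active_s, period_s, mw_active=4.0, mw_idle=0.1):
    """Average power draw when the sensing pipeline runs only `active_s`
    seconds out of every `period_s`-second recognition period.

    mw_active / mw_idle are hypothetical milliwatt figures for the
    active and idle states, chosen purely for illustration.
    """
    duty = active_s / period_s
    return duty * mw_active + (1 - duty) * mw_idle

always_on = avg_power_mw(10.0, 10.0)  # conventional HAR: sense continuously
adaptive = avg_power_mw(2.0, 10.0)    # sense only 2 s out of every 10 s
saving = 1 - adaptive / always_on     # fraction of energy saved
```

Under these assumed numbers, sensing 2 s out of every 10 s cuts average power by 78%, which shows why controlling the recognition duration is such an effective lever; the paper's measured savings come from real Android experiments, not this model.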

