Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs

Author(s):  
Farzan Majeed Noori ◽  
Michael Riegler ◽  
Md Zia Uddin ◽  
Jim Torresen


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 288-306 ◽  
Author(s):  
Guan Yuan ◽  
Zhaohui Wang ◽  
Fanrong Meng ◽  
Qiuyan Yan ◽  
Shixiong Xia

Purpose: Ubiquitous smartphones embedded with various sensors provide a convenient way to collect raw sequence data. These data bridge the gap between human activity and multiple sensors. Human activity recognition is widely used in many aspects of daily life, such as medical security, personal safety and living assistance. Design/methodology/approach: To provide an overview, the authors survey and summarize important technologies and key issues of human activity recognition, including activity categorization, feature engineering and typical algorithms presented in recent years. The authors first introduce the characteristics of embedded sensors, discuss their features and survey several data-labeling strategies for obtaining ground-truth labels. Then, following the human activity recognition process, they discuss methods and techniques for raw-data preprocessing and feature extraction, and summarize popular algorithms used in model training and activity recognition. Third, they introduce interesting application scenarios of human activity recognition and list available data sets that can serve as ground truth for validating proposed algorithms. Findings: The authors summarize their viewpoints on human activity recognition, discuss the main challenges and point out potential research directions. Originality/value: It is hoped that this work will serve as a stepping stone for those interested in advancing human activity recognition.
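The pipeline the survey describes (raw sensor stream, windowing, feature extraction, classification) can be sketched minimally. The window parameters, the mean/standard-deviation features and the nearest-centroid classifier below are illustrative assumptions, not the survey's own algorithms.

```python
# A minimal sketch of the HAR pipeline: raw sensor stream -> sliding
# windows -> hand-crafted features -> classifier. All names and the
# nearest-centroid classifier are illustrative, not from the survey.
from statistics import mean, stdev

def windows(signal, size, step):
    """Segment a 1-D sensor stream into fixed-size sliding windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(window):
    """Simple time-domain features: mean and standard deviation."""
    return (mean(window), stdev(window))

def nearest_centroid(train, query):
    """Classify a feature vector by its closest class centroid."""
    centroids = {
        label: tuple(mean(v[i] for v in vecs) for i in range(len(vecs[0])))
        for label, vecs in train.items()
    }
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], query))

# Toy usage: a "still" stream has low variance, a "walking" stream high variance.
still = [0.0, 0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 0.1]
walking = [0.0, 1.0, -1.0, 1.2, -0.9, 1.1, -1.0, 0.9]
train = {
    "still": [features(w) for w in windows(still, 4, 2)],
    "walking": [features(w) for w in windows(walking, 4, 2)],
}
print(nearest_centroid(train, features([0.1, 0.0, 0.1, 0.0])))  # → still
```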


2018 ◽  
Vol 2018 ◽  
pp. 1-11
Author(s):  
Hongjin Ding ◽  
Faming Gong ◽  
Wenjuan Gong ◽  
Xiangbing Yuan ◽  
Yuhui Ma

Current methods of human activity recognition face many challenges, such as the need for multiple sensors, poor implementation, unreliable real-time performance, and lack of temporal localization. In this research, we developed a method for recognizing and locating human activities based on temporal action recognition. For this work, we used a multilayer convolutional neural network (CNN) to extract features. In addition, we used refined actionness grouping to generate precise region proposals. Then, we classified the candidate regions by employing an activity classifier based on a structured segmented network and a cascade design for end-to-end training. Compared with previous methods of action classification, the proposed method adds a time boundary and effectively improves detection accuracy. To test this method empirically, we conducted experiments on surveillance video from an offshore oil production plant. Three activities were recognized and located in the untrimmed long video: standing, walking, and falling. The accuracy of the results demonstrated the effectiveness and real-time performance of the proposed method, showing that this approach has great potential for practical application.
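The actionness-grouping step described above can be illustrated with a small sketch: frame-level actionness scores are thresholded, and contiguous runs of "active" frames are grouped into region proposals with explicit time boundaries. The scores, the threshold and the function name are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of actionness grouping: threshold per-frame scores,
# then merge contiguous above-threshold frames into (start, end) proposals.
# Scores and threshold are made up for demonstration.

def group_actionness(scores, threshold=0.5):
    """Group contiguous frames whose actionness score is at least
    `threshold` into (start_frame, end_frame) proposals (end exclusive)."""
    proposals, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a proposal begins here
        elif s < threshold and start is not None:
            proposals.append((start, i))   # the proposal ends here
            start = None
    if start is not None:                  # action runs to the final frame
        proposals.append((start, len(scores)))
    return proposals

scores = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.1, 0.6, 0.9, 0.2]
print(group_actionness(scores))  # → [(2, 5), (7, 9)]
```

Each proposal carries a time boundary, which is what distinguishes temporal localization from plain clip-level classification; a downstream classifier (the structured segmented network in the paper) would then label each candidate region.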


Author(s):  
Lidia Bajenaru ◽  
Ciprian Dobre ◽  
Radu-Ioan Ciobanu ◽  
Georgiana Dedu ◽  
Silviu-George Pantelimon ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1715
Author(s):  
Michele Alessandrini ◽  
Giorgio Biagetti ◽  
Paolo Crippa ◽  
Laura Falaschetti ◽  
Claudio Turchetti

Photoplethysmography (PPG) is a common and practical technique for detecting human activity and other physiological parameters and is widely implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) on a low-cost, low-power microcontroller while ensuring the required accuracy and low complexity. To reach this goal, (i) we first develop an RNN that integrates PPG and tri-axial accelerometer data, where the accelerometer data can compensate for motion artifacts in PPG in order to accurately detect human activity; (ii) we then port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a resource-constrained system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging.
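A minimal sketch of the sensor-fusion idea: each PPG sample is concatenated with the matching tri-axial accelerometer sample into a per-timestep feature vector, which is then passed through an Elman-style RNN update. The weights, dimensions and function names are illustrative placeholders; the paper's actual network architecture and its Cloud-JAM L4 port are not reproduced here.

```python
# Illustrative fusion of PPG + tri-axial accelerometer data feeding a
# single Elman-style RNN cell. Weights are placeholder values, not the
# trained network from the paper.
from math import tanh

def fuse(ppg, accel):
    """Concatenate one PPG channel with 3 accelerometer axes per timestep."""
    return [[p, *a] for p, a in zip(ppg, accel)]

def rnn_step(x, h, W_xh, W_hh):
    """One Elman update: h' = tanh(W_xh @ x + W_hh @ h)."""
    return [
        tanh(sum(wx * xi for wx, xi in zip(W_xh[j], x)) +
             sum(wh * hi for wh, hi in zip(W_hh[j], h)))
        for j in range(len(h))
    ]

ppg = [0.5, 0.6]                              # two PPG samples
accel = [(0.1, 0.0, 0.9), (0.2, -0.1, 0.9)]   # matching tri-axial samples
seq = fuse(ppg, accel)                        # 4 features per timestep

h = [0.0, 0.0]                                # 2 hidden units (toy size)
W_xh = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]
W_hh = [[0.5, 0.0], [0.0, 0.5]]
for x in seq:
    h = rnn_step(x, h, W_xh, W_hh)
print(h)  # final hidden state, which a classifier head would map to an activity
```

On the microcontroller side, such a cell would typically be quantized and unrolled with fixed-size buffers; those optimizations are beyond this sketch.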

