A concise review on sensor signal acquisition and transformation applied to human activity recognition and human–robot interaction

2019, Vol 15 (6), pp. 155014771985398
Author(s):  
Lourdes Martínez-Villaseñor ◽  
Hiram Ponce

Human activity recognition deals with the integration of sensing and reasoning to better understand people's actions. Moreover, it plays an important role in human interaction, human–robot interaction, and brain–computer interaction. Developing these approaches requires combined efforts from signal processing and artificial intelligence. In that sense, this article presents a concise review of signal processing in human activity recognition systems and describes two examples and applications spanning human activity recognition and robotics: human–robot interaction and socialization, and imitation learning in robotics. In addition, it presents ideas and trends in the context of human activity recognition for human–robot interaction that are important when processing signals within such systems.

2021, Vol 11 (5), pp. 2188
Author(s):  
Athanasios Anagnostis ◽  
Lefteris Benos ◽  
Dimitrios Tsaopoulos ◽  
Aristotelis Tagarakis ◽  
Naoum Tsolakis ◽  
...  

The present study deals with human awareness, a very important aspect of human–robot interaction. This feature is particularly essential in agricultural environments, owing to the information-rich setup they provide. The objective of this investigation was to recognize human activities associated with an envisioned synergistic task. To attain this goal, a data collection field experiment was designed in which twenty healthy participants wore five sensors (embedded with tri-axial accelerometers, gyroscopes, and magnetometers). The task involved several sub-activities, concerning load lifting and carrying, which were carried out by agricultural workers in real field conditions. Subsequently, the signals obtained from the on-body sensors were processed for noise removal and fed into a Long Short-Term Memory neural network, which is widely used in deep learning for feature recognition in time-dependent data sequences. The proposed methodology demonstrated considerable efficacy in predicting the defined sub-activities, with an average accuracy of 85.6%. Moreover, the trained model classified the defined sub-activities with precision in the range of 74.1–90.4% and recall in the range of 71.0–96.9%. A comparative analysis of each sensor's impact on the model's performance showed that combining all sensors achieves the highest accuracy in human activity recognition. These results confirm the applicability of the proposed methodology for human awareness purposes in agricultural environments, and the dataset has been made publicly available for future research.
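The preprocessing pipeline this abstract describes (noise removal on raw IMU streams, then fixed-length sequences fed to an LSTM) can be sketched as below. The moving-average filter, window size, and step are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def smooth(signal, k=5):
    """Moving-average noise removal along the time axis.
    signal: (T, C) array of raw sensor samples, C channels."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, signal
    )

def window(signal, size=128, step=64):
    """Segment a (T, C) stream into overlapping (N, size, C) windows,
    the usual input shape for an LSTM sequence classifier."""
    starts = range(0, signal.shape[0] - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

# 10 s of synthetic 9-axis IMU data at 100 Hz
# (tri-axial accelerometer + gyroscope + magnetometer)
raw = np.random.randn(1000, 9)
x = window(smooth(raw), size=128, step=64)
print(x.shape)  # (14, 128, 9)
```

Each of the resulting windows would then receive one sub-activity label for supervised training.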


Sensors, 2019, Vol 19 (15), pp. 3434
Author(s):  
Nattaya Mairittha ◽  
Tittaya Mairittha ◽  
Sozo Inoue

Labeling activity data is a central part of the design and evaluation of human activity recognition systems. The performance of such systems depends greatly on the quantity and quality of annotations; it is therefore necessary to rely on users and to keep them motivated to provide activity labels. As mobile and embedded devices increasingly use deep learning models to infer user context, we propose to exploit on-device deep learning inference with a long short-term memory (LSTM)-based method to alleviate the labeling effort and ground-truth data collection in activity recognition systems using smartphone sensors. The novel idea is that estimated activities are used as feedback to motivate users to collect accurate activity labels. To enable evaluation, we conducted experiments under two conditions, comparing the proposed method, which shows estimated activities obtained via on-device deep learning inference, with the traditional method, which shows sentences without estimated activities through smartphone notifications. Evaluation on the gathered dataset shows that the proposed method improves both data quality (i.e., the performance of a classification model) and data quantity (i.e., the number of data points collected), indicating that it can improve activity data collection and thereby enhance human activity recognition systems. We discuss the results, limitations, challenges, and implications for on-device deep learning inference that supports activity data collection. We also publish the preliminary dataset to the research community for activity recognition.
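At inference time, the on-device LSTM this abstract mentions amounts to repeatedly applying one recurrent cell to each incoming sensor sample. A minimal NumPy sketch of that step, with random weights and hypothetical dimensions rather than the authors' trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. Shapes: x (d,), h and c (n,),
    W (4n, d), U (4n, n), b (4n,) hold the four gates stacked."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)          # input, forget, candidate, output
    sigma = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sigma(f) * c + sigma(i) * np.tanh(g)
    h_new = sigma(o) * np.tanh(c_new)
    return h_new, c_new

# tiny demo: 9-channel smartphone sensor input, 8 hidden units
rng = np.random.default_rng(0)
d, n = 9, 8
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h = c = np.zeros(n)
for _ in range(16):                      # stream of 16 sensor samples
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)  # (8,)
```

The final hidden state `h` would be passed through a small classification head to produce the estimated activity shown back to the user.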


Sensors, 2015, Vol 15 (4), pp. 8192-8213
Author(s):  
Gorka Azkune ◽  
Aitor Almeida ◽  
Diego López-de-Ipiña ◽  
Liming Chen
