Multimodal Database for Human Activity Recognition and Fall Detection

Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1237 ◽  
Author(s):  
Lourdes Martínez-Villaseñor ◽  
Hiram Ponce ◽  
Ricardo Abel Espinosa-Loera

Fall detection can improve the security and safety of older people by raising an alert when a fall occurs. Fall detection systems are mainly based on wearable sensors, ambient sensors, and vision, and each method has well-known advantages and limitations. Multimodal and data-fusion approaches combine data sources in order to describe falls better. Publicly available multimodal datasets are needed to allow comparison between systems, algorithms, and modality combinations. To address this issue, we present a publicly available dataset for fall detection comprising Inertial Measurement Units (IMUs), ambient infrared presence/absence sensors, and an electroencephalogram (EEG) helmet. It allows human activity recognition researchers to run experiments with different combinations of sensors.
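The IMU modality described above lends itself to a simple illustration. Below is a minimal sketch of the classic threshold-based fall-detection heuristic (a free-fall dip followed by an impact spike), assuming a single tri-axial accelerometer stream in units of g; the thresholds and gap window are illustrative, not values from the paper.

```python
import math

# Illustrative thresholds (in g) -- not taken from the dataset paper.
FREE_FALL_G = 0.4   # magnitude drops toward 0 g during free fall
IMPACT_G = 2.5      # sharp spike at impact with the ground

def accel_magnitude(ax, ay, az):
    """Euclidean norm of the acceleration vector, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, max_gap=20):
    """Flag a fall when a free-fall dip is followed by an impact spike
    within `max_gap` samples."""
    dip_index = None
    for i, (ax, ay, az) in enumerate(samples):
        m = accel_magnitude(ax, ay, az)
        if m < FREE_FALL_G:
            dip_index = i
        elif m > IMPACT_G and dip_index is not None and i - dip_index <= max_gap:
            return True
    return False
```

In practice, multimodal systems fuse such wearable cues with the ambient and EEG channels rather than relying on fixed thresholds alone.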

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4083
Author(s):  
Friedrich Niemann ◽  
Christopher Reining ◽  
Fernando Moya Rueda ◽  
Nilah Ravi Nair ◽  
Janine Anika Steffens ◽  
...  

Optimizations in logistics require the recognition and analysis of human activities. The potential of sensor-based human activity recognition (HAR) in logistics is not yet well explored. Despite a significant increase in HAR datasets over the past twenty years, no available dataset depicts activities in logistics. This contribution presents the first freely accessible logistics dataset. In the ‘Innovationlab Hybrid Services in Logistics’ at TU Dortmund University, two picking scenarios and one packing scenario were recreated. Fourteen subjects were recorded individually while performing warehousing activities using optical marker-based motion capture (OMoCap), inertial measurement units (IMUs), and an RGB camera. A total of 758 min of recordings were labeled by 12 annotators in 474 person-hours. All of the data have been labeled and categorized into 8 activity classes and 19 binary coarse semantic descriptions, also called attributes. The dataset is deployed for solving HAR using deep networks.


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2501 ◽  
Author(s):  
Mohammad Mokhlespour Esfahani ◽  
Maury Nussbaum

Wearable sensors and systems have become increasingly popular in recent years. Two prominent wearable technologies for human activity monitoring are smart textile systems (STSs) and inertial measurement units (IMUs). Despite ongoing advances in both, the usability aspects of these devices require further investigation, especially to facilitate future use. In this study, 18 participants evaluated the preferred placement and usability of two STSs, along with a comparison to a commercial IMU system. These evaluations were completed after participants engaged in a range of activities (e.g., sitting, standing, walking, and running), during which they wore (1) two representative smart textile systems, a custom smart undershirt (SUS) and commercial smart socks, and (2) a commercial whole-body IMU system. We first analyzed responses regarding the usability of the STSs and subsequently compared these results to those for the IMU system. Participants identified a short-sleeved shirt as their preferred activity monitor. In addition, the SUS in combination with the smart socks was rated superior to the IMU system in several aspects of usability. As reported herein, STSs show promise for future applications in human activity monitoring in terms of usability.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 692
Author(s):  
Jingcheng Chen ◽  
Yining Sun ◽  
Shaoming Sun

Human activity recognition (HAR) is essential in many health-related fields. A variety of technologies based on different sensors have been developed for HAR. Among them, fusion of heterogeneous wearable sensors has been developed because it is portable, non-interventional, and accurate for HAR. To be applied in real time with limited resources, the activity recognition system must be compact and reliable. This requirement can be achieved by feature selection (FS): by eliminating irrelevant and redundant features, the system burden is reduced while good classification performance (CP) is maintained. This manuscript proposes a two-stage genetic-algorithm-based feature selection algorithm with a fixed activation number (GFSFAN), which is implemented on datasets with a variety of time-, frequency-, and time-frequency-domain features extracted from the collected raw time series of nine activities of daily living (ADL). Six classifiers are used to evaluate the effects of the feature subsets selected by different FS algorithms on HAR performance. The results indicate that GFSFAN can achieve good CP with a small feature-subset size. A sensor-to-segment coordinate calibration algorithm and a lower-limb joint angle estimation algorithm are also introduced. Experiments on the effects of the calibration and of the introduced joint angles on HAR show that both can improve the CP.
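As a rough illustration of the fixed-activation-number idea (not the paper's GFSFAN implementation), the sketch below evolves feature subsets of exactly K indices, so the number of selected features never drifts; the fitness function, operators, and hyperparameters are all assumptions.

```python
import random

def evolve(fitness, n_features, k, pop_size=20, generations=30, seed=0):
    """Toy genetic feature selection: every individual is a set of exactly
    `k` feature indices; crossover and mutation both preserve `k`."""
    rng = random.Random(seed)
    pop = [frozenset(rng.sample(range(n_features), k)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = list(a | b)                     # crossover: draw k genes from both parents
            child = set(rng.sample(pool, k))
            if rng.random() < 0.2:                 # mutation: swap one gene, |child| stays k
                child.remove(rng.choice(sorted(child)))
                child.add(rng.choice([f for f in range(n_features) if f not in child]))
            children.append(frozenset(child))
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: pretend features 0..3 are the informative ones.
best = evolve(lambda s: sum(1 for f in s if f < 4), n_features=30, k=4)
```

In the actual two-stage setting, the fitness would instead be a classifier's cross-validated performance on the candidate subset.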


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6927
Author(s):  
Xiaojuan Wang ◽  
Xinlei Wang ◽  
Tianqi Lv ◽  
Lei Jin ◽  
Mingshu He

Human activity recognition (HAR) based on wearable sensors is a promising research direction. The resources of handheld terminals and wearable devices limit recognition performance and require lightweight architectures. With the development of deep learning, neural architecture search (NAS) has emerged in an attempt to minimize human intervention. We propose an approach for using NAS to search for models suitable for HAR tasks, namely, HARNAS. The multi-objective search algorithm NSGA-II is used as the search strategy of HARNAS. To trade off the performance and computation speed of a model, the F1 score and the number of floating-point operations (FLOPs) are selected, resulting in a bi-objective problem. However, the computation speed of a model depends not only on its complexity but also on the memory access cost (MAC). Therefore, we expand the bi-objective search into a tri-objective strategy. We use the Opportunity dataset as the basis for most experiments and also evaluate the portability of the model on the UniMiB-SHAR dataset. The experimental results show that HARNAS, designed without manual adjustments, can achieve better performance than the best model tweaked by humans. HARNAS obtained an F1 score of 92.16% with a parameter size of 0.32 MB on the Opportunity dataset.
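The tri-objective selection at the heart of NSGA-II rests on Pareto dominance over (F1, FLOPs, MAC). A minimal sketch of filtering a candidate population down to its non-dominated front, with purely illustrative objective values:

```python
def dominates(a, b):
    """`a` dominates `b` if it is no worse in every objective and strictly
    better in at least one (F1 higher; FLOPs and MAC lower)."""
    f1_a, flops_a, mac_a = a
    f1_b, flops_b, mac_b = b
    no_worse = f1_a >= f1_b and flops_a <= flops_b and mac_a <= mac_b
    better = f1_a > f1_b or flops_a < flops_b or mac_a < mac_b
    return no_worse and better

def pareto_front(candidates):
    """Keep only architectures not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

models = [
    (0.92, 120e6, 3.1),   # strong F1 but computationally heavy
    (0.90, 40e6, 1.2),    # lighter trade-off, kept alongside the first
    (0.85, 45e6, 1.5),    # dominated by the second model on all objectives
]
front = pareto_front(models)
```

NSGA-II adds crowding-distance sorting on top of this dominance test to keep the front diverse across generations.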


2021 ◽  
Author(s):  
Gábor Csizmadia ◽  
Krisztina Liszkai-Peres ◽  
Bence Ferdinandy ◽  
Ádám Miklósi ◽  
Veronika Konok

Abstract Human activity recognition (HAR) using machine learning (ML) methods is a relatively new approach for collecting and analyzing large amounts of human behavioral data with special wearable sensors. Our main goal was to find a reliable method that could automatically detect various playful and daily routine activities in children. We defined 40 activities for ML recognition and collected activity motion data by means of wearable smartwatches running the dedicated SensKid software. We analyzed the data of 34 children (19 girls, 15 boys; age range: 6.59–8.38 years; median age = 7.47 years). All children were typically developing first graders from three elementary schools. The activity recognition was a binary classification task evaluated with a Light Gradient Boosted Machine (LGBM) learning algorithm, a decision-tree-based method, using 3-fold cross-validation. We used the sliding window technique during signal processing and aimed to find the best window size for the analysis of each behavior element to achieve the most effective settings. Seventeen activities out of 40 were successfully recognized with AUC values above 0.8. The window size had no significant effect. The overall accuracy was 0.95, which is in the top segment of previously published comparable HAR data. In summary, the LGBM is a very promising solution for HAR. In line with previous findings, our results provide a firm basis for a more precise and effective recognition system that can make human behavioral analysis faster and more objective.
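The sliding-window segmentation step mentioned above can be sketched in a few lines; the window size and stride here are illustrative, not the study's settings.

```python
def sliding_windows(signal, size, stride):
    """Cut a signal into fixed-size, possibly overlapping windows of
    `size` samples, advancing the start index by `stride` each time."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, stride)]

# Ten samples, window of 4, 50% overlap:
windows = sliding_windows(list(range(10)), size=4, stride=2)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each window is then featurized and scored independently, which is why the choice of window size per behavior element matters.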


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4189 ◽  
Author(s):  
Samanta Rosati ◽  
Gabriella Balestra ◽  
Marco Knaflitz

Human Activity Recognition (HAR) is an emerging area of interest for medical, military, and security applications. However, the identification of the features to be used for activity classification and recognition is still an open point. The aim of this study was to compare two different feature sets for HAR. Specifically, we compared a set including time-, frequency-, and time-frequency-domain features widely used in the literature (FeatSet_A) with a set of time-domain features derived by considering the physical meaning of the acquired signals (FeatSet_B). The comparison of the two sets was based on the performances obtained using four machine learning classifiers. Sixty-one healthy subjects were asked to perform seven different daily activities wearing a MIMU-based device. Each signal was segmented using a 5-s window, and for each window, 222 and 221 variables were extracted for FeatSet_A and FeatSet_B, respectively. Each set was reduced using a Genetic Algorithm (GA) that simultaneously performs feature selection and classifier optimization. Our results showed that the Support Vector Machine achieved the highest performance with both sets (97.1% and 96.7% for FeatSet_A and FeatSet_B, respectively). However, FeatSet_B allows a better understanding of alterations in biomechanical behavior in more complex situations, such as when applied to pathological subjects.
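As a hint of what per-window time-domain features look like (the study's actual 221/222-variable sets are far richer), a minimal sketch computing a few classic descriptors for one signal window:

```python
import math

def time_domain_features(window):
    """A handful of classic time-domain descriptors for one signal window.
    Illustrative only -- not the feature definitions from the study."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    return {
        "mean": mean,
        "std": math.sqrt(var),
        "rms": rms,
        "range": max(window) - min(window),
    }

feats = time_domain_features([0.0, 1.0, 0.0, -1.0])
```

Concatenating such descriptors across all sensor channels in each 5-s window yields the high-dimensional vectors that the GA then prunes.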

