Large-Scale Personalized Human Activity Recognition Using Online Multitask Learning

2013 ◽  
Vol 25 (11) ◽  
pp. 2551-2563 ◽  
Author(s):  
Xu Sun ◽  
Hisashi Kashima ◽  
Naonori Ueda

Sensors ◽
2021 ◽  
Vol 21 (24) ◽  
pp. 8337
Author(s):  
Hyeokhyen Kwon ◽  
Gregory D. Abowd ◽  
Thomas Plötz

Supervised training of human activity recognition (HAR) systems based on body-worn inertial measurement units (IMUs) is often constrained by the typically small amounts of labeled sample data. Systems such as IMUTube employ cross-modality transfer to convert videos of activities of interest into virtual IMU data. We demonstrate for the first time how such large-scale virtual IMU datasets can be used to train HAR systems that are substantially more complex than the state of the art, where complexity is measured by the number of model parameters that can be trained robustly. Our models contain components dedicated to capturing the essentials of IMU data as they are relevant for activity recognition, increasing the number of trainable parameters by a factor of 1100 compared to state-of-the-art model architectures. We evaluate the new model architecture on the challenging task of analyzing free-weight gym exercises, specifically classifying 13 dumbbell exercises. Using IMUTube, we collected around 41 h of virtual IMU data from exercise videos available on YouTube. The proposed model is trained with this large amount of virtual IMU data and calibrated with a mere 36 min of real IMU data. Evaluated on a real IMU dataset, the trained model achieves a substantial improvement of 20% absolute F1 score over state-of-the-art convolutional HAR models.
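The two-stage regime this abstract describes (pre-training on plentiful virtual IMU data, then calibrating on a small amount of real IMU data) can be sketched as follows. This is a minimal illustrative sketch using a plain softmax classifier on synthetic stand-in data; the feature dimension, the 13-class setup, and the learning rates are assumptions for illustration, not the paper's actual architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "virtual" IMU feature windows are plentiful,
# "real" ones are scarce. Shapes and values are illustrative only.
X_virtual = rng.normal(size=(2000, 6))
y_virtual = rng.integers(0, 13, size=2000)
X_real = rng.normal(size=(60, 6))
y_real = rng.integers(0, 13, size=60)

def one_hot(y, k=13):
    out = np.zeros((len(y), k))
    out[np.arange(len(y)), y] = 1.0
    return out

def train(X, y, W, lr, steps):
    """Gradient descent on a plain softmax (multinomial logistic) classifier."""
    Y = one_hot(y)
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W = W - lr * X.T @ (p - Y) / len(X)
    return W

# Phase 1: pre-train on the large virtual-IMU corpus.
W = train(X_virtual, y_virtual, np.zeros((6, 13)), lr=0.5, steps=200)
# Phase 2: calibrate the pre-trained weights on the small real-IMU set.
W = train(X_real, y_real, W, lr=0.1, steps=50)
```

The key design point is that phase 2 starts from the phase-1 weights with a smaller learning rate, so the scarce real data only adjusts, rather than relearns, the model.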


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 451-473
Author(s):  
Liliana I. Carvalho ◽  
Rute C. Sofia

Mobile sensing has been gaining ground due to the increasing capabilities of the mobile and personal devices that citizens carry around, giving access to a large variety of data and services derived from the way humans interact. Mobile sensing brings several advantages in terms of the richness of available data, particularly for human activity recognition. Nevertheless, the infrastructure required to support large-scale mobile sensing demands an interoperable design, which is still hard to achieve today. This review paper contributes to raising awareness of the challenges faced by today's mobile sensing platforms that perform learning and behavior inference with respect to human routines: how current solutions perform activity recognition, which classification models they consider, and which types of behavior inference can be seamlessly provided. The paper provides a set of guidelines that contribute to a better functional design of mobile sensing infrastructures, keeping both scalability and interoperability in mind.


Proceedings ◽  
2018 ◽  
Vol 2 (19) ◽  
pp. 1242 ◽  
Author(s):  
Macarena Espinilla ◽  
Javier Medina ◽  
Alberto Salguero ◽  
Naomi Irvine ◽  
Mark Donnelly ◽  
...  

Data-driven approaches for human activity recognition learn from pre-existing large-scale datasets to generate a classification algorithm that can recognize target activities. Typically, several activities are represented within such datasets, characterized by multiple features computed from sensor devices. Often, some features are more relevant to particular activities, which can lead to lower classification accuracy for activities where those features are less relevant. This work presents an experimental study of human activity recognition with features derived from the acceleration data of a wearable device. Specifically, it analyzes which features are most relevant for each activity and investigates which classifier provides the best accuracy with those features. The results indicate that the k-nearest neighbor classifier performs best, and confirm that redundant features exist that generally introduce noise into the classification, decreasing accuracy.
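The redundant-feature finding can be illustrated with a leave-one-out k-nearest neighbor experiment on synthetic features: dropping a noisy, class-irrelevant feature should not hurt accuracy, and typically helps. The feature construction and the choice of k below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic acceleration-derived features for a two-activity toy problem:
# columns 0-1 carry class information, column 2 is pure noise.
n = 200
y = rng.integers(0, 2, size=n)
X = np.column_stack([
    y + 0.3 * rng.normal(size=n),      # relevant feature
    2 * y + 0.3 * rng.normal(size=n),  # relevant feature
    rng.normal(size=n),                # redundant, noisy feature
])

def knn_accuracy(X, y, k=5):
    """Leave-one-out accuracy of a Euclidean k-nearest neighbor classifier."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the query point itself
        neighbors = y[np.argsort(d)[:k]]
        correct += int(np.bincount(neighbors).argmax() == y[i])
    return correct / len(X)

acc_all = knn_accuracy(X, y)              # all three features
acc_selected = knn_accuracy(X[:, :2], y)  # relevant features only
```

On this toy data the two relevant features alone separate the classes cleanly, so the selected-feature accuracy matches or exceeds the all-feature accuracy.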


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2654
Author(s):  
Xue Ding ◽  
Ting Jiang ◽  
Yi Zhong ◽  
Yan Huang ◽  
Zhiwei Li

Wi-Fi-based device-free human activity recognition has recently become a vital underpinning for various emerging applications, ranging from the Internet of Things (IoT) to Human–Computer Interaction (HCI). Although this technology has been successfully demonstrated for location-dependent sensing, it relies on sufficient data samples for large-scale sensing, which are enormously labor-intensive and time-consuming to collect. In real-world applications, however, location-independent sensing is crucial and indispensable. How to alleviate the adverse effects of location variations on recognition accuracy with a limited dataset therefore remains an open question. To address this concern, we present WiLiMetaSensing, a location-independent Wi-Fi-based human activity recognition system. Specifically, we first leverage a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) feature representation to focus on location-independent characteristics. Then, to transfer the model effectively across different positions with limited data samples, we propose a metric learning-based activity recognition method. Consequently, both the generalization ability and the transferable capability of the model are significantly improved. To validate the feasibility of the presented approach, extensive experiments were conducted in an office with 24 testing locations. The evaluation results demonstrate that our method achieves more than 90% location-independent human activity recognition accuracy. More importantly, it adapts well to data samples with a small number of subcarriers and a low sampling rate.
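One common metric-learning inference scheme consistent with this description is prototype-based nearest-class matching: embed a few labeled support samples from a new location, average them into per-class prototypes, and classify a query by its nearest prototype. The sketch below fakes the embeddings with synthetic vectors; a real system would use the learned CNN-LSTM encoder, and this specific scheme is an assumption for illustration rather than WiLiMetaSensing's exact method.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(x):
    """Placeholder for the learned CNN-LSTM encoder (identity here)."""
    return x

# A few labeled samples per activity class from a *new* location (the support
# set) and one unlabeled query. Embedding dimension and class count are made up.
support = {c: rng.normal(loc=c, scale=0.5, size=(5, 8)) for c in range(3)}
query = rng.normal(loc=1.0, scale=0.2, size=8)  # truly belongs to class 1

# Metric-learning inference: average each class's support embeddings into a
# prototype, then classify the query by its nearest prototype in embedding space.
prototypes = {c: embed(xs).mean(axis=0) for c, xs in support.items()}
pred = min(prototypes, key=lambda c: np.linalg.norm(embed(query) - prototypes[c]))
```

Because only the small support set from the new location is needed at inference time, this style of matching transfers across positions without retraining the encoder.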


Author(s):  
Wenqiang Chen ◽  
Shupei Lin ◽  
Elizabeth Thompson ◽  
John Stankovic

On-body sensor-based human activity recognition (HAR) lags behind other fields because it lacks large-scale labeled datasets; this shortfall impedes progress in developing robust and generalized predictive models. To help researchers collect more extensive datasets quickly and efficiently, we developed SenseCollect. We surveyed and interviewed student researchers in this area to identify the barriers that make it difficult to collect on-body sensor-based HAR data from human subjects. Every interviewee identified data collection as the hardest part of their research, describing it as laborious, time-consuming, and error-prone. Improving HAR data resources requires addressing that barrier, which in turn requires a better understanding of the complicating factors. To that end, we conducted a series of controlled-variable experiments that tested several protocols to ascertain their impact on data collection. In total, SenseCollect studied more than 240 human subjects, and we used the findings to develop a data collection guideline. We also implemented a data collection system, created the two largest on-body sensor-based human activity datasets, and made them publicly available.

