Deep Learning-Based Multi-Modal Data Fusion: A Case Study in Food Intake Episodes Detection Using Wearable Sensors (Preprint)

2020 ◽  
Author(s):  
Nooshin Bahador

BACKGROUND Multimodal wearable technologies have opened wide possibilities in human activity recognition, and more specifically in personalized monitoring of eating habits. The emerging challenge is selecting the most discriminative information from the high-dimensional data collected from multiple sources. Available fusion algorithms, with their complex structures, are poorly suited to computationally constrained environments that require integrating information directly at the source; a simpler low-level fusion method is therefore needed.

OBJECTIVE In the absence of a data-combining step, directly feeding high-dimensional raw data to a deep classifier would be computationally expensive in terms of response time, energy consumption, and memory requirements. The current study therefore aimed to develop a computationally efficient data fusion technique that yields a more comprehensive view of human activity dynamics in a lower dimension. The major objective was to account for the statistical dependency of multisensory data and to explore inter-modality correlation patterns for different activities.

METHODS In this technique, the information in time (regardless of the number of sources) is transformed into a 2D space that facilitates separating eating episodes from other activities. This builds on the hypothesis that the data captured by the various sensors are statistically associated, and that the covariance matrix of these signals has a distinct distribution for each activity, which can be encoded in a contour representation. These representations are then used as inputs to a deep model that learns the patterns associated with each activity.

RESULTS To show the generalizability of the proposed fusion algorithm, two scenarios were considered, differing in temporal segment size, type of activity, wearable device, subjects, and deep learning architecture. In the first scenario, a single participant performed a limited number of activities while wearing an Empatica E4 wristband. In the second scenario, a dataset of activities of daily living was used, in which 10 participants wore Inertial Measurement Units while performing a more complex set of activities. The precision obtained from leave-one-subject-out cross-validation in the second scenario reached 0.803. The impact of missing data on performance degradation was also evaluated.

CONCLUSIONS The proposed fusion technique embeds the joint variability information of the different modalities in a single 2D representation, giving a more global view of different aspects of daily human activities while preserving the desired level of activity recognition performance.
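The abstract does not include code, but the core of the method, using the covariance matrix of a windowed multi-sensor segment as a low-dimensional 2D input for a deep classifier, can be sketched in a few lines of NumPy. The function name `covariance_image` and the min-max normalization are illustrative assumptions; the paper additionally renders these matrices as contour plots before classification.

```python
import numpy as np

def covariance_image(window: np.ndarray) -> np.ndarray:
    """Encode the joint variability of a multi-sensor window as a 2D map.

    window: array of shape (n_sensors, n_samples), one temporal segment of
    synchronized signals from all wearable channels.
    Returns an (n_sensors, n_sensors) covariance matrix, min-max normalized
    to [0, 1] so it can be treated as a single-channel image for a CNN.
    """
    cov = np.cov(window)                 # rows = variables (sensor channels)
    lo, hi = cov.min(), cov.max()
    if hi > lo:
        cov = (cov - lo) / (hi - lo)     # normalize for image-like input
    return cov

# Toy example: 6 sensor channels, 128 samples per temporal segment.
rng = np.random.default_rng(0)
segment = rng.standard_normal((6, 128))
img = covariance_image(segment)
print(img.shape)   # (6, 6)
```

Note how the output size depends only on the number of sensor channels, not on the segment length or the number of sources, which is what keeps the representation low-dimensional.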

2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Huaijun Wang ◽  
Jing Zhao ◽  
Junhuai Li ◽  
Ling Tian ◽  
Pengjia Tu ◽  
...  

Human activity recognition (HAR) can be exploited to great benefit in many applications, including elder care, health care, rehabilitation, entertainment, and monitoring. Many existing techniques, such as deep learning, have been developed for recognizing specific activities, but few address the transitions between activities. This work proposes a deep-learning-based scheme that can recognize both specific activities and the transitions between two different activities of short duration and low frequency, for health care applications. First, a deep convolutional neural network (CNN) is built to extract features from the data collected by sensors. Then, a long short-term memory (LSTM) network captures long-term dependencies between two actions to further improve the HAR identification rate. By combining the CNN and LSTM, a wearable-sensor-based model is proposed that can accurately recognize activities and their transitions. Experimental results on the open HAPT dataset show that the proposed approach improves the recognition rate to up to 95.87% for activities and above 80% for transitions, better than those of most existing similar models.
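The paper's exact architecture and hyperparameters are not given in the abstract, so the following is only a minimal NumPy sketch of the forward pass of the CNN-then-LSTM pipeline it describes: a temporal convolution extracts local features from a sensor window, an LSTM summarizes the feature sequence, and a softmax maps the final hidden state to activity classes. All shapes (6 channels, 8 filters, 16 LSTM units, 12 classes) are illustrative; 12 matches HAPT's 6 basic activities plus 6 postural transitions.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv1d_relu(x, kernels):
    """Temporal convolution over a sensor window.
    x: (T, C_in) samples; kernels: (C_out, k, C_in). Returns (T-k+1, C_out)."""
    c_out, k, _ = kernels.shape
    T = x.shape[0]
    out = np.zeros((T - k + 1, c_out))
    for t in range(T - k + 1):
        patch = x[t:t + k]                              # (k, C_in)
        out[t] = np.maximum(0.0, np.einsum('okc,kc->o', kernels, patch))
    return out

def lstm_last_hidden(seq, Wx, Wh, b):
    """Run a single-layer LSTM over seq (T, D); gates stacked as [i, f, g, o].
    Wx: (D, 4H), Wh: (H, 4H), b: (4H,). Returns the final hidden state (H,)."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = x @ Wx + h @ Wh + b                         # all gates at once
        i, f, g, o = np.split(z, 4)
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        i, f, o = sig(i), sig(f), sig(o)
        c = f * c + i * np.tanh(g)                      # cell state update
        h = o * np.tanh(c)
    return h

# Toy forward pass: 128-sample window, 6 sensor channels -> 8 conv filters
# -> LSTM with 16 units -> softmax over 12 activity classes.
window = rng.standard_normal((128, 6))
kernels = rng.standard_normal((8, 5, 6)) * 0.1          # 8 filters, width 5
feats = conv1d_relu(window, kernels)                    # (124, 8)
Wx = rng.standard_normal((8, 64)) * 0.1                 # 4 gates x 16 units
Wh = rng.standard_normal((16, 64)) * 0.1
b = np.zeros(64)
h = lstm_last_hidden(feats, Wx, Wh, b)                  # (16,)
W_out = rng.standard_normal((16, 12)) * 0.1
logits = h @ W_out
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)  # (12,)
```

In practice one would of course train such a model with a framework like TensorFlow or PyTorch; the sketch only shows how the CNN's feature sequence becomes the LSTM's input sequence.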


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 19799-19809 ◽  
Author(s):  
Senem Tanberk ◽  
Zeynep Hilal Kilimci ◽  
Dilek Bilgin Tukel ◽  
Mitat Uysal ◽  
Selim Akyokus

Sensors ◽  
2012 ◽  
Vol 12 (6) ◽  
pp. 8039-8054 ◽  
Author(s):  
Oresti Banos ◽  
Miguel Damas ◽  
Hector Pomares ◽  
Ignacio Rojas

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models for human activity recognition (HAR). However, development of such models has lagged because training them requires large amounts of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high costs and effort. Existing methods rely heavily on manual data collection and on proper labeling by human administrators, which makes the data gathering process slow and prone to human-biased labeling. To address these problems, we propose a new solution that reduces the labeling required for new data by reusing what has already been learned, through a semi-supervised active transfer learning method. This method achieved 95.9% performance while requiring less labeling than the random sampling or plain active transfer learning methods.
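The abstract does not state which acquisition function drives the active learning loop, so the sketch below uses least-confidence sampling, a common choice, purely for illustration: the current model's predictions on the unlabeled pool are ranked by their top-class probability, the most uncertain windows are sent for manual labeling, and the confident rest keep pseudo-labels. The function name `least_confidence_query` is hypothetical.

```python
import numpy as np

def least_confidence_query(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the unlabeled samples the current model is least sure about.

    probs: (n_samples, n_classes) predicted class probabilities on the
    unlabeled pool. Returns indices of the `budget` samples whose top-class
    probability is lowest; these are sent to a human for labeling, while
    the rest keep their model-assigned pseudo-labels.
    """
    confidence = probs.max(axis=1)        # top-class probability per sample
    return np.argsort(confidence)[:budget]

# Toy pool: 5 unlabeled sensor windows, 3 activity classes.
pool = np.array([
    [0.98, 0.01, 0.01],   # confident -> pseudo-label
    [0.40, 0.35, 0.25],   # uncertain -> query a human
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],   # most uncertain -> query a human
    [0.70, 0.20, 0.10],
])
print(least_confidence_query(pool, budget=2))  # [3 1]
```

Each round, only `budget` windows need manual labels, which is how this family of methods cuts the labeling effort the abstract describes.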

