Estimating Biomechanical Time-Series with Wearable Sensors: A Systematic Review of Machine Learning Techniques

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5227 ◽  
Author(s):  
Reed D. Gurchiek ◽  
Nick Cheney ◽  
Ryan S. McGinnis

Wearable sensors have the potential to enable comprehensive patient characterization and optimized clinical intervention. Critical to realizing this vision is accurate estimation of biomechanical time-series in daily life, including joint, segment, and muscle kinetics and kinematics, from wearable sensor data. The use of physical models to estimate these quantities often requires many wearable devices, making practical implementation more difficult. However, regression techniques may provide a viable alternative by allowing a reduced number of sensors to be used for estimating biomechanical time-series. Herein, we review 46 articles that used regression algorithms to estimate joint, segment, and muscle kinematics and kinetics. We present a high-level comparison of the many techniques identified and discuss the implications of our findings for practical implementation and for further improving estimation accuracy. In particular, we found that several studies reported that incorporating domain knowledge often yielded superior performance. Further, most models were trained on small datasets, in which case nonparametric regression often performed best. No models were open-sourced, and most were subject-specific and not validated on impaired populations. Future research should focus on developing open-source algorithms that combine complementary physics-based and machine learning techniques and are validated in clinically impaired populations. This approach may further improve estimation performance and reduce barriers to clinical adoption.
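As a rough illustration of the review's finding that nonparametric regression often performs best on small datasets, the sketch below fits a k-nearest-neighbors regressor mapping features from a single hypothetical sensor to a target time-series sample. This is synthetic data and not any reviewed study's pipeline; the "joint angle" target and feature layout are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for features from a single shank-mounted IMU (3 channels)
X = rng.normal(size=(500, 3))
# Hypothetical target: a joint-angle sample as a smooth function of the features
y = X @ np.array([1.0, 0.5, -0.5]) + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# Nonparametric regression (k-NN): no parametric form is assumed, which is why
# such models can do well on small training sets
model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
```

With a richer sensor set, the same interface would simply take a wider feature matrix; the review's point is that the regressor substitutes for sensors a physical model would require.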



Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often utilizes wearable sensors and machine learning techniques to identify the actions of a subject. This paper considers the recognition of walking and running using a support vector machine (SVM) trained on principal components derived from wearable sensor data. An ablation analysis is performed to select the subset of sensors that yields the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and data were recorded using surface electromyography (sEMG) sensors, inertial measurement units (IMUs), and force plates. When all sensors were included, the SVM achieved over 90% classification accuracy using only the first three principal components of the data with the classes stand, walk, and run/sprint (combined run and sprint class). Sensors placed only on the lower leg produced higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMG was shown to decrease the accuracy of the SVM.
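A minimal sketch of the classification setup described above (PCA down to three components feeding an SVM), on synthetic data rather than the study's sEMG/IMU/force-plate recordings; the class shift used to make the toy data separable is an assumption for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-in for windowed sensor features,
# three classes: stand (0), walk (1), run/sprint (2)
n, d = 300, 10
y = np.repeat([0, 1, 2], n // 3)
X = rng.normal(size=(n, d)) + y[:, None] * 2.0   # class-dependent shift

# Standardize, keep only the first three principal components, then classify
clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=5).mean()
```

The ablation analysis in the paper amounts to re-running this pipeline with columns of `X` (sensor groups) removed and comparing the resulting accuracies.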


Author(s):  
Xianda Chen ◽  
Yifei Xiao ◽  
Yeming Tang ◽  
Julio Fernandez-Mendoza ◽  
Guohong Cao

Sleep apnea is a sleep disorder in which breathing is briefly and repeatedly interrupted. Polysomnography (PSG) is the standard clinical test for diagnosing sleep apnea. However, it is expensive and time-consuming, requiring hospital visits, specialized wearable sensors, professional installation, and long waiting lists. To address this problem, we design a smartwatch-based system called ApneaDetector, which exploits the built-in sensors in smartwatches to detect sleep apnea. Through a clinical study, we identify features of sleep apnea captured by the smartwatch, which can be leveraged by machine learning techniques for sleep apnea detection. However, there are many technical challenges, such as how to extract various special patterns from the noisy, multi-axis sensing data. To address these challenges, we propose signal denoising and data calibration techniques that process the noisy data while preserving the peaks and troughs that reflect possible apnea events. We identify characteristics of sleep apnea, such as signal spikes, that can be captured by the smartwatch, and propose methods to extract proper features for training machine learning models for apnea detection. Through extensive experimental evaluations, we demonstrate that our system can detect apnea events with high precision (0.9674), recall (0.9625), and F1-score (0.9649).
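The abstract does not specify the denoising filter, so the sketch below uses Savitzky-Golay smoothing as a stand-in: it illustrates the stated requirement of suppressing noise while preserving the peaks that mark candidate apnea events. The signal, spike shape, and thresholds are all synthetic assumptions, not ApneaDetector's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

rng = np.random.default_rng(2)
# One minute of synthetic 50 Hz wrist-accelerometer magnitude (not clinical data)
signal = 0.05 * rng.normal(size=3000)
for start in range(100, 3000, 500):        # six hypothetical apnea-related surges
    signal[start:start + 15] += 1.0

# Savitzky-Golay smoothing fits a local polynomial, so it suppresses noise while
# preserving sharp peaks and troughs better than a plain moving average
denoised = savgol_filter(signal, window_length=11, polyorder=3)
# Peak detection recovers the candidate events; features around each peak would
# then be fed to the classifier
peaks, _ = find_peaks(denoised, height=0.5, distance=100)
```

A moving-average filter of the same width would flatten these short surges, which is why a shape-preserving smoother matters for this kind of event detection.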


AI Magazine ◽  
2012 ◽  
Vol 33 (2) ◽  
pp. 55 ◽  
Author(s):  
Nisarg Vyas ◽  
Jonathan Farringdon ◽  
David Andre ◽  
John Ivo Stivoric

In this article we provide insight into the BodyMedia FIT armband system, a wearable multi-sensor technology that continuously monitors physiological events related to energy expenditure for weight management using machine learning and data modeling methods. Since becoming commercially available in 2001, more than half a million users have used the system to track their physiological parameters and to achieve their individual health goals, including weight loss. We describe several challenges that arise in applying machine learning techniques to the health care domain and present the various solutions utilized in the armband system. We demonstrate how machine learning and multi-sensor data fusion techniques are critical to the system's success.


2021 ◽  
Author(s):  
Hugo Abreu Mendes ◽  
João Fausto Lorenzato Oliveira ◽  
Paulo Salgado Gomes Mattos Neto ◽  
Alex Coutinho Pereira ◽  
Eduardo Boudoux Jatoba ◽  
...  

Within the context of clean energy generation, solar radiation forecasting is applied at photovoltaic plants to increase maintainability and reliability. Statistical time-series models such as ARIMA, as well as machine learning techniques, help improve the results, and hybrid statistical + ML approaches are found in all sorts of time-series forecasting applications. This work presents a new way to automate SARIMAX modeling by nesting PSO and ACO optimization algorithms; unlike R's AutoARIMA, it searches for the optimal seasonality parameter and the best combination of the available exogenous variables. The work also presents two distinct hybrid models that have MLPs as their main elements, with architectures optimized by a genetic algorithm. A common methodology was used to obtain the results, which were compared to LSTM, CLSTM, MMFF, and NARNN-ARMAX topologies from recent works. The results obtained for the presented models are promising for use in automatic radiation forecasting systems, since they outperformed the compared models on at least two metrics.


2020 ◽  
Author(s):  
Yosoon Choi ◽  
Jieun Baek ◽  
Jangwon Suh ◽  
Sung-Min Kim

In this study, we proposed a method to utilize a multi-sensor Unmanned Aerial System (UAS) for exploration of hydrothermal alteration zones. We selected an area (10 m × 20 m) composed mainly of andesite and located on the coast, with wide outcrops and well-developed structural and mineralization elements. Multi-sensor (visible, multispectral, thermal, magnetic) data were acquired over the study area using the UAS and analyzed using machine learning techniques. We applied stratified random sampling to draw 1000 training samples from the hydrothermal zone and 1000 from the non-hydrothermal zone identified through the field survey. The 2000 samples created for supervised learning were first split into 1500 for training and 500 for testing; the 1500 training samples were then split into 1200 for training and 300 for validation. Five such training/validation splits were generated to enable cross-validation. Five machine learning techniques were applied to the training sets: k-Nearest Neighbors (k-NN), Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Deep Neural Network (DNN). In the integrated analysis of the multi-sensor data, the RF and SVM techniques showed high classification accuracy of about 90%. Moreover, integrated analysis of the multi-sensor data yielded higher classification accuracy for all five techniques than analysis of magnetic data or any single optical sensor alone.
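The five-classifier comparison described above can be sketched as follows, with random synthetic features standing in for the per-pixel multi-sensor data (a DNN is approximated here by scikit-learn's MLP; the study's actual architectures and accuracies are not reproduced).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
# Synthetic stand-in for fused visible/multispectral/thermal/magnetic features:
# 1000 altered-zone samples and 1000 non-altered samples
X = np.vstack([rng.normal(0.0, 1, (1000, 8)), rng.normal(1.5, 1, (1000, 8))])
y = np.repeat([0, 1], 1000)

models = {
    "k-NN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "DNN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
# Five-fold cross-validated accuracy for each technique
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

Dropping columns of `X` (e.g., keeping only the magnetic channels) and re-running the loop mirrors the paper's single-sensor versus integrated-analysis comparison.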


Author(s):  
Afshin Rahimi ◽  
Mofiyinoluwa O. Folami

As the number of satellite launches increases each year, it is only natural that interest in the safety and monitoring of these systems increases as well. However, as a system becomes more complex, generating a high-fidelity model that accurately describes it becomes complicated, so employing a data-driven method can prove more beneficial for such applications. This research proposes a novel data-driven machine learning approach to fault detection and isolation in nonlinear systems, with a case study of an in-orbit, closed-loop-controlled satellite with reaction wheels as actuators. High-fidelity models of the three-axis-controlled satellite are employed to generate data for both nominal and faulty conditions of the reaction wheels. The generated simulation data are used as input for the isolation method, after which the data are pre-processed through feature extraction from the temporal, statistical, and spectral domains. The pre-processed features are then fed into various machine learning classifiers. Isolation results are validated with cross-validation, and model parameters are tuned using hyperparameter optimization. To validate the robustness of the proposed method, it is tested on three characterized datasets and three reaction wheel configurations: standard four-wheel, three-orthogonal, and pyramid. The results demonstrate superior isolation accuracy for the system under study compared to previous studies using alternative methods (Rahimi & Saadat, 2019, 2020).
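The temporal/statistical/spectral feature-extraction step can be sketched as below; the specific features chosen are illustrative assumptions, not the paper's exact feature set, and the input is a synthetic signal rather than reaction-wheel telemetry.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_features(window, fs=100.0):
    """Temporal, statistical, and spectral features from one sensor window
    (a simplified stand-in for the paper's pre-processing stage)."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    return {
        # temporal domain
        "mean_abs_diff": float(np.mean(np.abs(np.diff(window)))),
        "zero_crossings": int(np.sum(np.diff(np.sign(window)) != 0)),
        # statistical domain
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "skew": float(skew(window)),
        "kurtosis": float(kurtosis(window)),
        # spectral domain (skip the DC bin when locating the dominant frequency)
        "dominant_freq": float(freqs[np.argmax(spectrum[1:]) + 1]),
        "spectral_energy": float(np.sum(spectrum ** 2) / window.size),
    }

rng = np.random.default_rng(5)
# A noisy 5 Hz oscillation sampled at 100 Hz, e.g., a vibration-like signature
feats = extract_features(np.sin(2 * np.pi * 5 * np.arange(200) / 100.0)
                         + 0.1 * rng.normal(size=200))
```

Stacking such feature dictionaries across windows yields the matrix that the downstream classifiers and hyperparameter search operate on.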


Air passenger prediction is said to be the centre of gravity of growth in the industry. With people constantly on the move, some dissatisfaction among customers is inevitable, arising from various causes ranging from overbooking of flights to ground operations. This dissatisfaction can be controlled to a limited extent using rough estimates. In the past, such prediction has been done using various machine learning techniques. In this project, ARIMA modeling, a time-series forecasting method, is used for the prediction. The stationarity of the data is first tested using the Dickey-Fuller test. If the data are stationary, they are fit to the ARIMA model directly; if not, they are made stationary by differencing or by logarithmic transformation, and the logarithmic method is used here. Once the data are stationary, the partial autocorrelation and autocorrelation functions are used to find the values of p and q required by the time-series method. These values are then fit into the ARIMA model and the results are predicted. Upon fitting various models, ARIMA(2,1,2) proved the best fit, having the lowest RMS and RMSE values.


Author(s):  
Anna Ferrari ◽  
Daniela Micucci ◽  
Marco Mobilio ◽  
Paolo Napoletano

Human activity recognition (HAR) is a line of research whose goal is to design and develop automatic techniques for recognizing activities of daily living (ADLs) using signals from sensors. HAR is an active research field in response to the ever-increasing need to collect information on ADLs remotely for diagnostic and therapeutic purposes. Traditionally, HAR used environmental or wearable sensors to acquire signals and relied on traditional machine-learning techniques to classify ADLs. In recent years, HAR has been moving towards the use of both wearable devices (such as smartphones or fitness trackers, since they are used daily by people and include reliable inertial sensors) and deep learning techniques (given the encouraging results obtained in computer vision). One of the major challenges in HAR is population diversity, which makes it difficult for traditional machine-learning algorithms to generalize. Recently, researchers successfully attempted to address the problem by proposing techniques that combine personalization with traditional machine learning. To date, no effort has been directed at investigating the benefits that personalization can bring to deep learning techniques in the HAR domain. The goal of our research is to verify whether personalization applied to both traditional and deep learning techniques can lead to better performance than classical approaches (i.e., without personalization). The experiments were conducted on three datasets that are extensively used in the literature and that contain metadata related to the subjects. AdaBoost was chosen for traditional machine learning, and a convolutional neural network for deep learning; both techniques have been shown to offer good performance. Personalization considers both the physical characteristics of the subjects and the inertial signals they generate.
Results suggest that personalization is most effective when applied to traditional machine-learning techniques rather than to deep learning ones. Moreover, results show that deep learning without personalization performs better than any other method evaluated in the paper when the number of training samples is high and the samples are heterogeneous (i.e., they represent a wider spectrum of the population). This suggests that traditional deep learning can be more effective, provided a large and heterogeneous dataset is available, since such a dataset intrinsically models population diversity in the training process.

