Feature Selection for Machine Learning Based Step Length Estimation Algorithms

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 778
Author(s):  
Stef Vandermeeren ◽  
Herwig Bruneel ◽  
Heidi Steendam

Accurate step length estimation can provide valuable information to applications such as indoor positioning systems, and it can be helpful when analyzing the gait of a user, which can then be used to detect various gait impairments that lead to a reduced step length (caused by, e.g., Parkinson’s disease or multiple sclerosis). In this paper, we focus on estimating the step length using machine learning techniques that could be used in an indoor positioning system. Previous step length algorithms tried to model the length of a step based on measurements from the accelerometer and some tuneable (user-specific) parameters. Machine-learning-based step length estimation algorithms eliminate these parameters to be tuned. Instead, to adapt these algorithms to different users, it suffices to provide the machine learning algorithm with examples of the length of multiple steps for different persons, so that in the training phase the algorithm can learn to predict the step length for different users. Until now, these machine learning algorithms were trained with features that were chosen intuitively. In this paper, we apply a systematic feature selection algorithm to determine, from a large collection of candidate features, the subset resulting in the best performance. This yielded a step length estimator with a mean absolute error of 3.48 cm for a known test person and 4.19 cm for an unknown test person, while current state-of-the-art machine-learning-based step length estimators achieve a mean absolute error of 4.94 cm and 6.27 cm for a known and an unknown test person, respectively.
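The systematic feature selection described in this abstract can be sketched as greedy forward selection with mean absolute error as the criterion. This is an illustrative stand-in only (the paper's exact selector, feature set, and regressor are not reproduced here), using a plain least-squares fit on synthetic data:

```python
import numpy as np

def forward_select(X, y, n_keep):
    """Greedy forward selection: repeatedly add the feature that most
    reduces the mean absolute error (MAE) of a least-squares fit."""
    selected, remaining = [], list(range(X.shape[1]))
    best_mae = np.inf
    for _ in range(n_keep):
        best_f, best_mae = None, np.inf
        for f in remaining:
            cols = selected + [f]
            w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            mae = np.mean(np.abs(X[:, cols] @ w - y))
            if mae < best_mae:
                best_f, best_mae = f, mae
        selected.append(best_f)
        remaining.remove(best_f)
    return selected, best_mae

# Synthetic demo: only features 0 and 2 actually drive the target,
# so the selector should pick exactly those two out of five.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - X[:, 2]
selected, mae = forward_select(X, y, 2)
```

In the real setting the candidate pool would be the large feature collection computed from accelerometer signals, and the fitted model would be the paper's ML estimator rather than least squares.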

Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of rail combined defects. The combined defects in the study are settlement and dipped joint. The features used to detect and evaluate the severity of the combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For the simplified data, features are extracted from the raw data: the weight of the rolling stock, the speed of the rolling stock, and three peak and three bottom accelerations from the two wheels of the rolling stock. In total, 14 features are used as simplified data for developing the DNN model. For the raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning is performed, using grid search, to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is good enough to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of the defects. From the study, the CNN model is suitable for evaluating dipped joint severity, with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and an MAE of 1.58 mm.
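The grid search used for hyperparameter tuning can be sketched in a few lines. This is a generic illustration (the study's actual search spaces and scoring are not reproduced): every combination in the grid is evaluated and the best-scoring one is kept.

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: evaluate every hyperparameter
    combination and return the best-scoring one."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective peaking at lr=0.01 with 3 layers (hypothetical values);
# in practice score_fn would train and validate a DNN/CNN/RNN.
grid = {"lr": [0.001, 0.01, 0.1], "layers": [1, 2, 3]}
objective = lambda p: -100 * abs(p["lr"] - 0.01) - abs(p["layers"] - 3)
best, _ = grid_search(grid, objective)
```

The cost is the product of the grid sizes (here 3 × 3 = 9 evaluations), which is why grid search is usually reserved for small, coarse hyperparameter grids.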


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3527
Author(s):  
Melanija Vezočnik ◽  
Roman Kamnik ◽  
Matjaz B. Juric

Inertial sensor-based step length estimation has become increasingly important with the emergence of pedestrian-dead-reckoning-based (PDR-based) indoor positioning. So far, many refined step length estimation models have been proposed to overcome the inaccuracy in estimating the distance walked, but the kinematics associated with the human body during walking and actual step lengths are rarely used in their derivation. Our paper presents a new step length estimation model that utilizes the acceleration magnitude. To the best of our knowledge, we are the first to employ principal component analysis (PCA) to characterize the experimental data for the derivation of the model. These data were collected from anatomical landmarks on the human body during walking using a highly accurate optical measurement system. We evaluated the performance of the proposed model for four typical smartphone positions during long-term human walking and obtained promising results: the proposed model outperformed all acceleration-based models selected for the comparison, producing an overall mean absolute stride length estimation error of 6.44 cm. The proposed model was also the least affected by walking speed and smartphone position among the acceleration-based models and is unaffected by smartphone orientation. Therefore, the proposed model can be used in PDR-based indoor positioning, with the important advantage that no special care regarding orientation is needed when attaching the smartphone to a particular body segment. All the sensory data acquired by smartphones that we utilized for evaluation are publicly available and include more than 10 h of walking measurements.
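As a minimal illustration of the PCA step (the actual optical-marker data and model derivation are far richer than this), the principal axes and explained-variance ratios can be obtained from an SVD of the centered data matrix:

```python
import numpy as np

def pca(data, n_components):
    """PCA via SVD: center the samples (rows), return the leading
    principal axes and the fraction of variance each explains."""
    centered = data - data.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(data) - 1)
    ratio = var / var.sum()
    return vt[:n_components], ratio[:n_components]

# Synthetic stand-in for acceleration-derived features: two columns
# move together, so one principal component captures nearly everything.
rng = np.random.default_rng(0)
t = rng.normal(size=300)
data = np.column_stack([t, 2 * t, 0.01 * rng.normal(size=300)])
axes, ratio = pca(data, 2)
```

Characterizing the walking data this way reduces many correlated measurements to a few dominant directions, which is what makes it useful for deriving a compact step length model.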


2019 ◽  
Vol 9 (6) ◽  
pp. 1048 ◽  
Author(s):  
Huy Tran ◽  
Cheolkeun Ha

Recently, indoor positioning systems have attracted a great deal of research attention, as they have a variety of applications in the fields of science and industry. In this study, we propose an innovative and easily implemented solution for indoor positioning. The solution is based on an indoor visible light positioning system and dual-function machine learning (ML) algorithms. Our solution increases positioning accuracy under the negative effect of multipath reflections and decreases the computational time for ML algorithms. Initially, we perform a noise reduction process to eliminate low-intensity reflective signals and minimize noise. Then, we divide the floor of the room into two separate areas using the ML classification function. This significantly reduces the computational time and partially improves the positioning accuracy of our system. Finally, the regression function of those ML algorithms is applied to predict the location of the optical receiver. By using extensive computer simulations, we have demonstrated that the execution time required by certain dual-function algorithms to determine indoor positioning is decreased after area division and noise reduction have been applied. In the best case, the proposed solution took 78.26% less time and provided a 52.55% improvement in positioning accuracy.
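The dual-function idea (classify the area first, then regress the position within it) can be sketched as follows. Everything here is a toy stand-in: a hypothetical one-dimensional signal feature, a hand-set area threshold, and least-squares regressors in place of the paper's ML algorithms and visible-light signals.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_area(x_lo, x_hi, n=100):
    """Hypothetical training data for one floor area: a noisy signal
    feature plus a bias term, with the receiver x-coordinate as target."""
    x = rng.uniform(x_lo, x_hi, n)
    feats = np.column_stack([x + 0.05 * rng.normal(size=n), np.ones(n)])
    return feats, x

Xa, ya = make_area(0.0, 1.0)   # area 0: one half of the room
Xb, yb = make_area(1.0, 2.0)   # area 1: the other half

def area_of(feat):
    """Classification function: assign a sample to one of the two areas."""
    return 0 if feat[0] < 1.0 else 1

# Regression function: one model per area, so each model only has to
# cover a smaller region (this is what cuts the computation time).
models = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in [(Xa, ya), (Xb, yb)]]

def locate(feat):
    """Predict the receiver position: classify the area, then regress."""
    return feat @ models[area_of(feat)]
```

Splitting the floor before regressing is also where the partial accuracy gain comes from: each regressor only has to fit the signal behavior of its own area.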


2021 ◽  
Author(s):  
Hangsik Shin

BACKGROUND Arterial stiffness due to vascular aging is a major indicator for evaluating cardiovascular risk. OBJECTIVE In this study, we propose a method of estimating age by applying machine learning to the photoplethysmogram for non-invasive vascular age assessment. METHODS The machine-learning-based age estimation model, which consists of three convolutional layers and two fully connected layers, was developed using photoplethysmograms segmented by pulse from a total of 752 adults aged 19–87 years. The performance of the developed model was quantitatively evaluated using the mean absolute error, root mean squared error, Pearson’s correlation coefficient, and coefficient of determination. Grad-CAM was used to explain the contribution of photoplethysmogram waveform characteristics to vascular age estimation. RESULTS A mean absolute error of 8.03, a root mean squared error of 9.96, a correlation coefficient of 0.62, and a coefficient of determination of 0.38 were obtained through 10-fold cross-validation. Grad-CAM, used to determine the weight that the input signal contributes to the result, confirmed that the contribution of the photoplethysmogram segment to age estimation was high around the systolic peak. CONCLUSIONS The machine-learning-based vascular aging analysis method using the PPG waveform showed comparable or superior performance compared to previous studies, without complex feature detection, in evaluating vascular aging. CLINICALTRIAL 2015-0104
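The four evaluation metrics named in this abstract are standard and easy to compute; a plain-Python sketch, independent of the paper's model and data:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, Pearson's r, and the coefficient of determination
    (R^2) for a set of true values and model predictions."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    r = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true) *
                        sum((p - mp) ** 2 for p in y_pred))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mt) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return mae, rmse, r, r2
```

Note that r measures only linear association between predictions and targets, while R^2 penalizes any deviation from the targets; this is why the two can differ markedly (0.62 vs. 0.38 above).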


Author(s):  
Mohammed Al Zobbi ◽  
Belal Alsinglawi ◽  
Omar Mubin ◽  
Fady Alnajjar

Coronavirus Disease 2019 (COVID-19) has affected day-to-day life and slowed down the global economy. Most countries are enforcing strict quarantine to control the havoc of this highly contagious disease. Since the outbreak of COVID-19, many data analyses have been done to provide close support to decision-makers. We propose a method comprising data analytics and machine learning classification for evaluating the effectiveness of lockdown regulations, which should be reviewed regularly by governments to enable reasonable control over the outbreak. The model aims to measure the efficiency of lockdown procedures for various countries, and it shows a direct correlation between lockdown procedures and the infection rate. Lockdown efficiency is measured by finding a correlation coefficient between the lockdown attributes and the infection rate. The lockdown attributes include retail and recreation, grocery and pharmacy, parks, transit stations, workplaces, residential, and schools. Our results show that combining all the independent attributes in our study resulted in a higher correlation (0.68) with the dependent value, Interquartile 3 (Q3). The Mean Absolute Error (MAE) was lowest when combining all attributes.


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5343
Author(s):  
Miroslav Opiela ◽  
František Galčík

Indoor positioning systems for smartphones are often based on Pedestrian Dead Reckoning, which computes the current position from the previously estimated location. Noisy sensor measurements, inaccurate step length estimations, faulty direction detections, and the demand for real-time calculation introduce errors, which are suppressed using a map model and Bayesian filtering. The main focus of this paper is on grid-based implementations of Bayes filters as an alternative to the commonly used Kalman and particle filters. Our previous work on grid-based filters is elaborated and enriched with convolution mask calculations. More advanced implementations, the centroid grid filter and the advanced point-mass filter, are introduced. These implementations are analyzed and compared using different configurations on the same raw sensor recordings. The evaluation is performed on three sets of experiments: a simple custom path in a faculty building in Slovakia, and datasets from IPIN competitions recorded in a shopping mall in France (2018) and a research institute in Italy (2019). The evaluation results suggest that the proposed methods are qualified alternatives to the particle filter. Advantages, drawbacks, and proper configurations of these filters are discussed in this paper.
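A grid-based Bayes filter of the kind compared here reduces to two steps per update: a prediction that applies a convolution mask to the position-probability grid, and a correction that uses the map model. A minimal numpy sketch (the uniform motion kernel and binary walkable mask are assumptions; the paper's centroid and advanced point-mass variants are more involved):

```python
import numpy as np

def predict_step(belief, kernel):
    """Prediction: spread the position probabilities with a motion-noise
    convolution mask (naive 'same'-size 2-D convolution)."""
    kh, kw = kernel.shape
    padded = np.pad(belief, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(belief)
    for i in range(belief.shape[0]):
        for j in range(belief.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def update_step(belief, walkable):
    """Correction with the map model: zero out probability in
    non-walkable cells (walls), then renormalize."""
    masked = belief * walkable
    return masked / masked.sum()

belief = np.zeros((5, 5))
belief[2, 2] = 1.0                    # position known exactly at start
kernel = np.full((3, 3), 1.0 / 9.0)   # assumed uniform motion noise
prior = predict_step(belief, kernel)
walkable = np.ones((5, 5))
walkable[1, :] = 0.0                  # assumed wall across one grid row
posterior = update_step(prior, walkable)
```

Unlike a particle filter, the cost here is fixed by the grid resolution and kernel size rather than by a particle count, which is the trade-off the paper evaluates.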


2019 ◽  
Vol 9 (18) ◽  
pp. 3665 ◽  
Author(s):  
Ahmet Çağdaş Seçkin ◽  
Aysun Coşkun

Wi-Fi-based indoor positioning offers significant opportunities for numerous applications. Examining existing Wi-Fi positioning systems, we observed that hundreds of variables were used even when variable reduction was applied. This reveals a structure that is difficult to reproduce and is far from producing a common solution for real-life applications. This study aims to create a common and standardized dataset for indoor positioning and localization and to present a system that can perform estimations using this dataset. To that end, machine learning (ML) methods are compared, and the results of successful methods with hierarchical inclusion are then investigated. Further, new features are generated according to the measurement point obtained from the dataset. Subsequently, learning models are selected according to the performance metrics for the estimation of location and position. These learning models are then fused hierarchically using deductive reasoning. Using the proposed method, estimation of location and position proved more successful while using fewer variables than in current studies. This paper thus identifies a gap in applicability in the research community and addresses it with the proposed method, which yields a significant improvement in the estimation of floor and longitude.


2020 ◽  
Author(s):  
Huiyi Su ◽  
Wenjuan Shen ◽  
Jingrui Wang ◽  
Arshad Ali ◽  
Mingshi Li

Abstract. Background: Aboveground biomass (AGB) is a fundamental indicator of forest ecosystem productivity and health and hence plays an essential role in evaluating forest carbon reserves and supporting the development of targeted forest management plans. Methods: Here, we propose a random forest/co-kriging framework that integrates the strengths of machine learning and geostatistical approaches to improve the mapping accuracy of AGB in northern Guangdong province of China. We used Landsat time-series observations, Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) data, and National Forest Inventory (NFI) plot measurements to generate forest AGB maps at three time points (1992, 2002, and 2010), showing the spatio-temporal dynamics of AGB in the subtropical forests of Guangdong, China. Results: The proposed model provided excellent performance for mapping AGB using spectral, textural, and topographical variables, and the radar backscatter coefficients. The root mean square error of the plot-level AGB validation was between 15.62 and 53.78 t/ha, the mean absolute error ranged from 6.54 to 32.32 t/ha, and the relative improvement over the random forest algorithm was between 3.8% and 17.7%. The highest coefficient of determination (0.81) and the lowest mean absolute error (6.54 t/ha) were observed in the 1992 AGB map. The spectral saturation effect was minimized by adding the PALSAR data to the modeling variable set in 2010. By adding elevation as a covariable, co-kriging outperformed the ordinary kriging method for the prediction of the AGB residuals, because co-kriging produced better interpolation results in the valleys and plains of the study area.
Conclusions: Validation of the three AGB maps with an independent dataset indicated that the random forest/co-kriging performed best for AGB prediction, followed by random forest coupled with ordinary kriging (random forest/ordinary kriging), and the random forest model. The proposed random forest/co-kriging framework provides an accurate and reliable method for AGB mapping in subtropical forest regions with complex topography. The resulting AGB maps are suitable for the targeted development of forest management actions to promote carbon sequestration and sustainable forest management in the context of climate change.
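The hybrid idea, a machine-learning regression plus geostatistical interpolation of its residuals, can be illustrated compactly. Inverse-distance weighting stands in here for co-kriging (actual co-kriging fits a variogram and uses elevation as a covariable, which this sketch omits):

```python
import numpy as np

def idw_residuals(coords, residuals, query, power=2.0):
    """Interpolate plot-level model residuals at a query location by
    inverse-distance weighting (a simple stand-in for kriging)."""
    d = np.linalg.norm(coords - query, axis=1)
    if np.any(d == 0):                 # query coincides with a plot
        return residuals[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * residuals) / np.sum(w)

def hybrid_predict(model_estimate, coords, residuals, query):
    """Hybrid AGB prediction: the regression estimate plus the
    spatially interpolated residual at the query location."""
    return model_estimate + idw_residuals(coords, residuals, query)
```

The correction exploits spatial autocorrelation of the regression errors: plots near the query with known residuals pull the map value up or down, which is how the kriging step improves on the random forest estimate alone.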


2021 ◽  
Author(s):  
Sheena Agarwal ◽  
Kavita Joshi

Abstract. Identifying factors that influence interactions at the surface is still an active area of research. In this study, we present the importance of analyzing bond-length activation, while interpreting Density Functional Theory (DFT) results, as yet another crucial indicator for catalytic activity. We studied the adsorption of small molecules, such as O₂, N₂, CO, and CO₂, on seven face-centered cubic (fcc) transition metal surfaces (M = Ag, Au, Cu, Ir, Rh, Pt, and Pd) and their commonly studied facets (100, 110, and 111). Through our DFT investigations, we highlight the absence of a linear correlation between adsorption energies (E_ads) and bond-length activation (BL_act). Our study indicates the importance of evaluating both to develop a better understanding of adsorption at surfaces. We also developed a Machine Learning (ML) model trained on simple periodic-table properties to predict both E_ads and BL_act. Our ML model achieves a Mean Absolute Error (MAE) of ~0.2 eV for E_ads predictions and 0.02 Å for BL_act predictions. The systematic study of the ML features that affect E_ads and BL_act further reinforces the importance of looking beyond adsorption energies to get a full picture of surface interactions with DFT.

