Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries

PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4817 ◽  
Author(s):  
Quan Liu ◽  
Li Ma ◽  
Shou-Zen Fan ◽  
Maysam F. Abbod ◽  
Jiann-Shing Shieh

Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to extract the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised to develop learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved from an initial value of 0.51 ± 0.17 to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression. Similarly, the final mean absolute error declined dramatically to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA indicates that each of the four groups of different anaesthetic levels differed significantly from the adjacent levels. Furthermore, the Random Forest output was largely linear in relation to BIS, yielding better DoA prediction accuracy.
In conclusion, the proposed method provides a concrete basis for monitoring patients’ anaesthetic level during surgeries.
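As an illustration of the feature-extraction step described above, the SampEn statistic can be sketched in plain Python. The template length m = 2 and tolerance r = 0.2 × SD are common defaults for SampEn, not necessarily the parameters used in the paper:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D sequence.

    Counts template matches of length m and m + 1 (excluding
    self-matches) under the Chebyshev distance, then returns
    -ln(A / B). The tolerance r is a fraction of the series' SD.
    """
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    tol = r * sd

    def count_matches(mm):
        # Pairwise comparison of all length-mm templates.
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    count += 1
        return count

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly periodic signal yields a SampEn near zero, while an irregular signal yields a larger value, which is why the statistic tracks the loss of EEG complexity under anaesthesia.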

2020 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Eric Järpe ◽  
Mattias Weckstén

A new method of musical steganography for the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. To the authors' knowledge, there is no fully implemented, rigorously specified, publicly available method for MIDI steganography. The goal of this study is therefore to investigate how a novel MIDI steganography algorithm can be implemented by manipulation of the velocity attribute, subject to restrictions of capacity and security. Many of today's MIDI steganography methods, less rigorously described in the literature, fail to be resilient to steganalysis: their side-effects are traces that could catch the eye of a scrutinizing steganalyst, such as artefacts in the MIDI code which would not occur in ordinarily generated MIDI music (MIDI file size inflation, radical changes in mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file). Resilience to steganalysis is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than the cutting-edge alternative methods regarding capacity and inflation while still possessing better resilience against steganalysis. An audibility test was conducted to check that there are no audible traces in the stego MIDI files.
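To illustrate the general idea of velocity-based embedding (not the actual Velody 2 algorithm, which is more elaborate and steganalysis-resistant), a minimal least-significant-bit sketch over a list of note-on velocities:

```python
def embed_bits(velocities, bits):
    """Hide one payload bit in the least-significant bit of each MIDI
    note-on velocity (valid range 1-127). Returns a new list."""
    if len(bits) > len(velocities):
        raise ValueError("payload larger than carrier capacity")
    stego = list(velocities)
    for i, bit in enumerate(bits):
        v = (stego[i] & ~1) | bit
        # Velocity 0 means note-off in MIDI; real schemes must skip
        # carriers where embedding would produce it.
        stego[i] = v if v >= 1 else 1
    return stego

def extract_bits(velocities, n):
    """Recover the first n embedded bits from the stego velocities."""
    return [v & 1 for v in velocities[:n]]
```

Each embedded bit changes a velocity by at most 1, which is inaudible in most contexts; the paper's point is that a naive scheme like this still leaves statistical traces that a careful steganalyst can detect.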


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Hai-Bang Ly ◽  
Thuy-Anh Nguyen ◽  
Binh Thai Pham

Soil cohesion (C) is one of the critical soil properties and is closely related to basic soil properties such as particle size distribution, pore size, and shear strength. Hence, it is mainly determined by experimental methods. However, the experimental methods are often time-consuming and costly. Therefore, developing an alternative approach based on machine learning (ML) techniques to solve this problem is highly desirable. In this study, machine learning models, namely, support vector machine (SVM), Gaussian process regression (GPR), and random forest (RF), were built based on a data set of 145 soil samples collected from the Da Nang-Quang Ngai expressway project, Vietnam. The database includes six input parameters, that is, clay content, moisture content, liquid limit, plastic limit, specific gravity, and void ratio. The performance of the models was assessed by three statistical criteria, namely, the correlation coefficient (R), mean absolute error (MAE), and root mean square error (RMSE). The results demonstrated that the proposed RF model could predict soil cohesion with high accuracy (R = 0.891) and low error (RMSE = 3.323 and MAE = 2.511), and that its predictive capability is better than that of SVM and GPR. Therefore, the RF model can be used as a cost-effective approach for predicting the soil cohesion forces used in the design and inspection of constructions.
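The three evaluation criteria used here (R, MAE, RMSE) are standard and can be written out directly; this is a generic sketch rather than the authors' code:

```python
import math

def mae(obs, pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(obs, pred):
    """Pearson correlation coefficient R."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)
```

Note that R measures linear association only, which is why the abstracts above pair it with MAE and RMSE: a model can have R near 1 while being systematically biased.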


2021 ◽  
Author(s):  
Hangsik Shin

BACKGROUND Arterial stiffness due to vascular aging is a major indicator for evaluating cardiovascular risk. OBJECTIVE In this study, we propose a method of estimating age by applying machine learning to the photoplethysmogram for non-invasive vascular age assessment. METHODS The machine learning-based age estimation model, which consists of three convolutional layers and two fully connected layers, was developed using photoplethysmograms segmented by pulse from a total of 752 adults aged 19–87 years. The performance of the developed model was quantitatively evaluated using mean absolute error, root mean squared error, Pearson's correlation coefficient, and the coefficient of determination. Grad-CAM was used to explain the contribution of photoplethysmogram waveform characteristics to vascular age estimation. RESULTS A mean absolute error of 8.03, a root mean squared error of 9.96, a correlation coefficient of 0.62, and a coefficient of determination of 0.38 were obtained through 10-fold cross-validation. Grad-CAM, used to determine the weight that the input signal contributes to the result, confirmed that the contribution of the photoplethysmogram segment to age estimation was high around the systolic peak. CONCLUSIONS The machine learning-based vascular aging analysis method using the PPG waveform showed comparable or superior performance to previous studies in evaluating vascular aging, without complex feature detection. CLINICALTRIAL 2015-0104
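The per-pulse segmentation that feeds such a model can be sketched with a naive local-maximum peak detector. Real PPG pipelines use more robust peak detection, and the amplitude threshold here is an illustrative assumption:

```python
def segment_by_pulse(ppg, threshold):
    """Split a PPG waveform into per-pulse segments by slicing between
    consecutive systolic peaks (local maxima above an amplitude threshold)."""
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i - 1] < ppg[i] >= ppg[i + 1] and ppg[i] > threshold]
    # Each segment runs from one peak up to (but not including) the next.
    return [ppg[a:b] for a, b in zip(peaks, peaks[1:])]
```

Segmenting by pulse lets every heartbeat become one training sample, which is how a dataset of 752 subjects yields enough examples to train a convolutional model.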


2020 ◽  
Vol 12 (5) ◽  
pp. 2022 ◽  
Author(s):  
Kieu Anh Nguyen ◽  
Walter Chen ◽  
Bor-Shiun Lin ◽  
Uma Seeboonruang

This study continues a previous study with further analysis of watershed-scale erosion pin measurements. Three machine learning (ML) algorithms, Support Vector Machine (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Artificial Neural Network (ANN), were used to analyze the depth of erosion of a watershed (Shihmen reservoir) in northern Taiwan. In addition to the three previously used statistical indexes (Mean Absolute Error, Root Mean Square Error, and R-squared), the Nash–Sutcliffe Efficiency (NSE) was calculated to compare the predictive performances of the three models. To see if there was a statistical difference between the three models, the Wilcoxon signed-rank test was used. The research utilized 14 environmental attributes as the input predictors of the ML algorithms: distance to river, distance to road, type of slope, sub-watershed, slope direction, elevation, slope class, rainfall, epoch, lithology, and the amount of organic content, clay, sand, and silt in the soil. Additionally, measurements of a total of 550 erosion pins installed on 55 slopes were used as the target variable of the model prediction. The dataset was divided into a training set (70%) and a testing set (30%) using stratified random sampling with sub-watershed as the stratification variable. The results showed that the ANFIS model outperforms the other two algorithms in predicting the erosion rates of the study area. The average RMSE on the test data is 2.05 mm/yr for ANFIS, compared to 2.36 mm/yr and 2.61 mm/yr for ANN and SVM, respectively. Finally, the results of this study (ANN, ANFIS, and SVM) were compared with the previous study (Random Forest, Decision Tree, and multiple regression). It was found that Random Forest remains the best predictive model, and ANFIS is the second best among the six ML algorithms.
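The stratified 70/30 split described above can be sketched in plain Python; the `stratum_of` accessor, the per-stratum rounding rule, and the seed are illustrative assumptions:

```python
import random

def stratified_split(rows, stratum_of, test_frac=0.3, seed=1):
    """Split rows into train/test sets, sampling test_frac within each
    stratum so every stratum (e.g. sub-watershed) appears in both sets."""
    groups = {}
    for row in rows:
        groups.setdefault(stratum_of(row), []).append(row)
    rng = random.Random(seed)
    train, test = [], []
    for members in groups.values():
        members = members[:]          # avoid mutating the caller's data
        rng.shuffle(members)
        k = max(1, round(len(members) * test_frac))
        test.extend(members[:k])
        train.extend(members[k:])
    return train, test
```

Stratifying by sub-watershed prevents the test set from being dominated by one part of the study area, which would make the reported RMSE values unrepresentative.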


2020 ◽  
Vol 12 (7) ◽  
pp. 2749 ◽  
Author(s):  
Bojia Ye ◽  
Bo Liu ◽  
Yong Tian ◽  
Lili Wan

This paper proposes a new methodology for predicting aggregate flight departure delays at airports by exploring supervised learning methods. Individual flight data and meteorological information were processed to obtain four types of airport-related aggregate characteristics for prediction modeling. The expected departure delay at the airport is selected as the prediction target, while four popular supervised learning methods (multiple linear regression, support vector machine, extremely randomized trees, and LightGBM) are investigated to improve the predictability and accuracy of the model. The proposed model is trained and validated using operational data from March 2017 to February 2018 for Nanjing Lukou International Airport in China. The results show that for a 1-h forecast horizon, the LightGBM model provides the best result, giving a 0.8655 accuracy rate with a 6.65 min mean absolute error, which is 1.83 min less than results from previous research. The importance of the aggregate characteristics and an example validation are also studied.
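The step of turning individual flight records into airport-level aggregate characteristics can be illustrated with a minimal grouping sketch; the `(hour, delay)` record format and the two features shown are simplifying assumptions, not the paper's full feature set:

```python
from collections import defaultdict

def hourly_aggregates(flights):
    """Aggregate individual flight records into per-hour airport features:
    flight count and mean departure delay in minutes.

    flights: iterable of (departure_hour, delay_minutes) pairs.
    """
    bucket = defaultdict(list)
    for hour, delay in flights:
        bucket[hour].append(delay)
    return {h: {"count": len(d), "mean_delay": sum(d) / len(d)}
            for h, d in bucket.items()}
```

Aggregating first and predicting the aggregate (rather than each flight) is what makes the 1-h forecast horizon tractable with tabular learners such as LightGBM.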


2015 ◽  
Vol 76 (13) ◽  
Author(s):  
Siraj Muhammed Pandhiani ◽  
Ani Shabri

In this study, a new hybrid model is developed by integrating the discrete wavelet transform and the least squares support vector machine (LSSVM) into a combined WLSSVM model. The hybrid model is then used for monthly streamflow forecasting for two major rivers in Pakistan. The monthly streamflow forecasting results are obtained by applying this model individually to the flow data of the Indus and Neelum rivers. The root mean square error (RMSE), mean absolute error (MAE), and correlation (R) statistics are used to evaluate the accuracy of the proposed WLSSVM model. The results are compared with those obtained through LSSVM, and the comparison shows that the WLSSVM model is more accurate and efficient than LSSVM.
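The wavelet preprocessing in such a hybrid can be illustrated with a one-level Haar transform, the simplest discrete wavelet; the abstract does not specify which wavelet was used, so treat this as a sketch of the decomposition idea only:

```python
import math

def haar_dwt(x):
    """One level of the discrete Haar wavelet transform.

    Returns (approximation, detail) coefficient lists; the approximation
    carries the smooth trend fed to the learner, the detail the fluctuations.
    Assumes an even-length input.
    """
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform, reconstructing the original signal exactly."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

In a WLSSVM-style pipeline, each coefficient series is forecast separately (e.g. by LSSVM) and the forecasts are recombined, which is why the decomposition must be exactly invertible.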


2019 ◽  
Vol 20 (S2) ◽  
Author(s):  
Varun Khanna ◽  
Lei Li ◽  
Johnson Fung ◽  
Shoba Ranganathan ◽  
Nikolai Petrovsky

Abstract Background Toll-like receptor 9 (TLR9) is a key innate immune receptor involved in detecting infectious diseases and cancer. TLR9 activates the innate immune system following the recognition of single-stranded DNA oligonucleotides (ODN) containing unmethylated cytosine-guanine (CpG) motifs. Due to the considerable number of rotatable bonds in ODNs, high-throughput in silico screening of CpG ODNs for potential TLR9 activity via traditional structure-based virtual screening approaches is challenging. In the current study, we present a machine learning based method for predicting novel mouse TLR9 (mTLR9) agonists based on features including the count and position of motifs, the distance between the motifs, and graphically derived features such as the radius of gyration and moment of inertia. We employed an in-house experimentally validated dataset of 396 single-stranded synthetic ODNs to compare the results of five machine learning algorithms. Since the dataset was highly imbalanced, we used an ensemble learning approach based on repeated random down-sampling. Results Using in-house experimental TLR9 activity data, we found that the random forest algorithm outperformed the other algorithms for TLR9 activity prediction on our dataset. Therefore, we developed a cross-validated ensemble classifier of 20 random forest models. The average Matthews correlation coefficient and balanced accuracy of our ensemble classifier on test samples were 0.61 and 80.0%, respectively, with a maximum balanced accuracy and Matthews correlation coefficient of 87.0% and 0.75, respectively. We confirmed that common sequence motifs including 'CC', 'GG', 'AG', 'CCCG' and 'CGGC' were overrepresented in mTLR9 agonists. Predictions on 6000 randomly generated ODNs were ranked and the top 100 ODNs were synthesized and experimentally tested for activity in an mTLR9 reporter cell assay, with 91 of the 100 selected ODNs showing high activity, confirming the accuracy of the model in predicting mTLR9 activity.
Conclusion We combined repeated random down-sampling with random forest to overcome the class imbalance problem and achieved promising results. Overall, we showed that the random forest algorithm outperformed other machine learning algorithms including support vector machines, shrinkage discriminant analysis, gradient boosting machine and neural networks. Due to its predictive performance and simplicity, the random forest technique is a useful method for prediction of mTLR9 ODN agonists.
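The repeated random down-sampling used to handle the class imbalance can be sketched as follows; the 0/1 labels, accessor, subset count, and seed are illustrative, and each balanced subset would train one member of the random forest ensemble:

```python
import random

def balanced_subsets(samples, label_of, n_subsets=20, seed=3):
    """Repeated random down-sampling: each subset keeps every
    minority-class sample plus an equal-size random draw (without
    replacement) from the majority class."""
    pos = [s for s in samples if label_of(s) == 1]
    neg = [s for s in samples if label_of(s) == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    rng = random.Random(seed)
    return [minority + rng.sample(majority, len(minority))
            for _ in range(n_subsets)]
```

Because each subset draws a different majority sample, the ensemble still sees most of the majority class overall, while every individual model trains on balanced data; predictions are then combined, e.g. by majority vote.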


2020 ◽  
Author(s):  
Huiyi Su ◽  
Wenjuan Shen ◽  
Jingrui Wang ◽  
Arshad Ali ◽  
Mingshi Li

Abstract Background: Aboveground biomass (AGB) is a fundamental indicator of forest ecosystem productivity and health and hence plays an essential role in evaluating forest carbon reserves and supporting the development of targeted forest management plans. Methods: Here, we proposed a random forest/co-kriging framework that integrates the strengths of machine learning and geostatistical approaches to improve the mapping accuracies of AGB in northern Guangdong province of China. We used Landsat time-series observations, Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) data, and National Forest Inventory (NFI) plot measurements to generate forest AGB maps at three time points (1992, 2002, and 2010) showing the spatio-temporal dynamics of AGB in the subtropical forests of Guangdong, China. Results: The proposed model provided excellent performance for mapping AGB using spectral, textural, and topographical variables, and the radar backscatter coefficients. The root mean square error of the plot-level AGB validation was between 15.62 and 53.78 t/ha, the mean absolute error ranged from 6.54 to 32.32 t/ha, and the relative improvement over the random forest algorithm was between 3.8% and 17.7%. The highest coefficient of determination (0.81) and the lowest mean absolute error (6.54 t/ha) were observed in the 1992 AGB map. The spectral saturation effect was minimized by adding the PALSAR data to the modeling variable set in 2010. By adding elevation as a covariable, co-kriging outperformed the ordinary kriging method for the prediction of the AGB residuals, because co-kriging produced better interpolation results in the valleys and plains of the study area.
Conclusions: Validation of the three AGB maps with an independent dataset indicated that the random forest/co-kriging performed best for AGB prediction, followed by random forest coupled with ordinary kriging (random forest/ordinary kriging), and the random forest model. The proposed random forest/co-kriging framework provides an accurate and reliable method for AGB mapping in subtropical forest regions with complex topography. The resulting AGB maps are suitable for the targeted development of forest management actions to promote carbon sequestration and sustainable forest management in the context of climate change.
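The residual-correction idea behind the random forest/co-kriging framework can be illustrated with a simplified sketch in which inverse-distance weighting stands in for the kriging interpolation of the residuals (actual co-kriging additionally uses the elevation covariable and a fitted variogram, which are beyond a few lines of code):

```python
def idw_residual(known, x, y, power=2.0):
    """Inverse-distance-weighted interpolation of model residuals at (x, y).

    known: list of (xi, yi, residual) at plot locations. A simplified
    stand-in for the (co-)kriging step of the framework.
    """
    num = den = 0.0
    for xi, yi, r in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return r                      # exactly at a sample plot
        w = d2 ** (-power / 2.0)
        num += w * r
        den += w
    return num / den

def corrected_prediction(trend, known, x, y):
    """Final AGB estimate = machine-learning trend + interpolated residual."""
    return trend + idw_residual(known, x, y)
```

The key design point survives the simplification: the machine learning model captures the broad trend from remote-sensing predictors, and spatial interpolation of its residuals recovers locally structured error that the trend model misses.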


2022 ◽  
pp. 1-20
Author(s):  
Salim Moudache ◽  
◽  
Mourad Badri

This work aims to investigate the potential, from different perspectives, of a risk model to support Cross-Version Fault and Severity Prediction (CVFSP) in object-oriented software. The risk of a class is addressed from the perspective of two particular factors: the number of faults it can contain and their severity. We used various object-oriented metrics to capture the two risk factors. The risk of a class is modeled using the concept of Euclidean distance. We used a dataset collected from five successive versions of an open-source Java software system (ANT). We investigated different variants of the considered risk model, based on various combinations of object-oriented metric pairs. We used different machine learning algorithms for building the prediction models: Naive Bayes (NB), J48, Random Forest (RF), Support Vector Machines (SVM) and Multilayer Perceptron (ANN). We investigated the effectiveness of the prediction models for CVFSP using data from prior versions of the considered system. We also investigated whether the considered risk model can give as output the Empirical Risk (ER) of a class, a continuous value considering both the number of faults and their different levels of severity. We used different techniques for building the prediction models: Linear Regression (LR), Gaussian Process (GP), Random Forest (RF) and M5P (two decision tree algorithms), SMOreg and Artificial Neural Network (ANN). The considered risk model achieves acceptable results for both cross-version binary fault prediction (a g-mean of 0.714, an AUC of 0.725) and cross-version multi-classification of levels of severity (a g-mean of 0.758, an AUC of 0.771). The model also achieves good results in the estimation of the empirical risk of a class considering both the number of faults and their levels of severity (intra-version analysis with a correlation coefficient of 0.659, cross-version analysis with a correlation coefficient of 0.486).
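The Euclidean-distance risk idea can be made concrete with a small sketch; the normalization by dataset maxima and the generic "fault-proneness/severity" metric pair are illustrative assumptions, not the paper's exact formulation:

```python
import math

def class_risk(fault_metric, severity_metric, max_fault, max_severity):
    """Risk of a class as the Euclidean distance from the origin in the
    normalized (fault-proneness, severity) plane; both axes scaled to [0, 1].
    A class scoring high on either factor lands far from the origin."""
    f = fault_metric / max_fault
    s = severity_metric / max_severity
    return math.hypot(f, s)
```

Using a distance combines the two risk factors into one continuous score, which is what lets the same model feed both the binary fault classifiers and the continuous empirical-risk regressors mentioned above.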


Earthquakes throughout the globe have been a major cause of destruction and of loss of life and property. The present work aims to recognize earthquakes at an early stage using machine learning, which would make the task of people and rescue teams easier. The data consist of seismic acoustic signals and the time to failure. Models are trained using the CatBoost algorithm and Support Vector Machines to help predict the time at which a seismic tremor may occur. The CatBoost regression algorithm gives a mean absolute error of about 1.860, and the cross-validation (CV) score for the Support Vector Machine (SVM) approach is -2.1651. The dataset's metrics do not depend on any external parameter, so the variation in accuracy is limited and high accuracy is achieved.
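The cross-validation scoring mentioned for the SVM rests on a k-fold split, which can be grounded in a plain index generator; this is a generic sketch not tied to any particular library:

```python
def kfold_indices(n, k):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation, distributing the remainder over the first folds."""
    base, extra = divmod(n, k)
    start = 0
    for fold in range(k):
        size = base + (1 if fold < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size
```

Each sample appears in exactly one test fold, so the averaged fold scores (the "CV score" above) estimate out-of-sample error without holding data back permanently.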

