Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma

2017 ◽  
Vol 20 (3) ◽  
pp. 985-994 ◽  
Author(s):  
Leili Shahriyari

Abstract Motivation: One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on the analysis of FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Results: Regardless of the normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy while minimizing fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%.
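The six normalization variants compared above (three methods, each applied to samples or to features) can be sketched with scikit-learn; the toy matrix below is an illustrative stand-in for an expression matrix, not TCGA-COAD data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer

# Toy expression matrix: rows = samples (files), columns = genes (features).
X = np.array([[1.0, 200.0, 3.0],
              [4.0,  50.0, 6.0],
              [7.0,  80.0, 9.0]])

# Normalizing features (genes): these transformers operate column-wise.
scaled_features = MinMaxScaler().fit_transform(X)          # scaling to [0, 1] per gene
standardized_features = StandardScaler().fit_transform(X)  # z-score per gene

# Normalizing samples (files): Normalizer operates row-wise,
# giving each file unit L2 norm ("normalized to unit length").
unit_length_samples = Normalizer(norm="l2").fit_transform(X)

# To normalize features to unit length instead, transpose first:
unit_length_features = Normalizer(norm="l2").fit_transform(X.T).T
```

The sample-vs-feature distinction in the abstract comes down to whether the transformer is applied row-wise (`Normalizer`) or column-wise (`MinMaxScaler`, `StandardScaler`), which is why the transpose trick is needed to mix the two.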

The present research explores the loyalty prediction problem for a brand through supervised classification algorithms: logistic regression, decision tree, support vector machine, naive Bayes, and K-nearest neighbours (KNN). Loyalty data were sampled from 265 FMCG customers; the variables of the data set include loyalty status, gender, family size, age, frequency of purchase, and FMCG purchase. Data were analyzed with the help of Python packages such as Pandas (data analysis), NumPy (numerical calculation), Matplotlib (visualization), and scikit-learn (modeling). Among the supervised classification algorithms, logistic regression outperformed the other techniques.
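The Pandas-plus-scikit-learn workflow described can be sketched as follows; the column names and the tiny synthetic table are hypothetical stand-ins for the 265-customer sample:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical loyalty data; column names are illustrative only.
df = pd.DataFrame({
    "gender":             [0, 1, 0, 1, 1, 0, 1, 0],
    "family_size":        [3, 4, 2, 5, 3, 2, 4, 3],
    "age":                [25, 34, 29, 41, 38, 22, 30, 27],
    "purchase_frequency": [2, 5, 1, 6, 4, 1, 5, 2],
    "loyal":              [0, 1, 0, 1, 1, 0, 1, 0],  # target: loyalty status
})

X = df.drop(columns="loyal")
y = df["loyal"]

# Hold out a test split, fit logistic regression, and score it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Swapping `LogisticRegression` for `DecisionTreeClassifier`, `SVC`, `GaussianNB`, or `KNeighborsClassifier` reproduces the comparison across the five algorithms the study lists.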


2021 ◽  
Vol 15 (4) ◽  
pp. 18-30
Author(s):  
Om Prakash Samantray ◽  
Satya Narayan Tripathy

There are several malware detection techniques available that are based on a signature-based approach. This approach can detect known malware very effectively but sometimes fails to detect unknown or zero-day attacks. In this article, the authors propose a malware detection model that uses the operation codes of malicious and benign executables as features. The proposed model uses the opcode extract and count (OPEC) algorithm to prepare the opcode feature vector for the experiment. The most relevant features are selected using the extra-trees classifier feature selection technique and then passed through several supervised learning algorithms, such as support vector machine, naive Bayes, decision tree, random forest, logistic regression, and k-nearest neighbour, to build classification models for malware detection. The proposed model achieved a detection accuracy of 98.7%, which makes it better than many of the similar works discussed in the literature.
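The feature-selection step described, pruning an opcode-count matrix with an extra-trees classifier before training the downstream learners, can be sketched as below; the random matrix is a synthetic stand-in for real OPEC output:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in for an opcode-count feature matrix:
# rows = executables, columns = opcode frequencies; labels: 1 = malicious.
rng = np.random.default_rng(0)
X = rng.integers(0, 50, size=(200, 30)).astype(float)
y = rng.integers(0, 2, size=200)

# Rank opcode features by extra-trees importance and keep the most relevant
# ones (SelectFromModel's default threshold is the mean importance).
selector = SelectFromModel(
    ExtraTreesClassifier(n_estimators=100, random_state=0)).fit(X, y)
X_reduced = selector.transform(X)

# Any of the listed supervised learners can then be fit on the reduced matrix.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_reduced, y)
```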


2014 ◽  
Vol 26 (3) ◽  
pp. 557-591 ◽  
Author(s):  
Stefano Baccianella ◽  
Andrea Esuli ◽  
Fabrizio Sebastiani

Ordinal classification (also known as ordinal regression) is a supervised learning task that consists of estimating the rating of a data item on a fixed, discrete rating scale. This problem is receiving increased attention from the sentiment analysis and opinion mining community due to the importance of automatically rating large amounts of product review data in digital form. As in other supervised learning tasks such as binary or multiclass classification, feature selection is often needed in order to improve efficiency and avoid overfitting. However, although feature selection has been extensively studied for other classification tasks, it has not been for ordinal classification. In this letter, we present six novel feature selection methods that we have specifically devised for ordinal classification and test them on two data sets of product review data against three methods previously known from the literature, using two learning algorithms from the support vector regression tradition. The experimental results show that all six proposed metrics largely outperform all three baseline techniques (and are more stable than them by an order of magnitude), on both data sets and for both learning algorithms.


2017 ◽  
Vol 10 (2) ◽  
pp. 695-708 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Paul H. Kaye ◽  
Warren R. Stanley ◽  
...  

Abstract. Characterisation of bioaerosols has important implications within the environment and public health sectors. Recent developments in ultraviolet light-induced fluorescence (UV-LIF) detectors such as the Wideband Integrated Bioaerosol Spectrometer (WIBS) and the newly introduced Multiparameter Bioaerosol Spectrometer (MBS) have allowed for the real-time collection of fluorescence, size and morphology measurements for the purpose of discriminating between bacteria, fungal spores and pollen. This new generation of instruments has enabled ever larger data sets to be compiled with the aim of studying more complex environments. In real-world data sets, particularly those from an urban environment, the population may be dominated by non-biological fluorescent interferents, bringing into question the accuracy of measurements of quantities such as concentrations. It is therefore imperative that we validate the performance of different algorithms which can be used for the task of classification. For unsupervised learning we tested hierarchical agglomerative clustering with various different linkages. For supervised learning, 11 methods were tested, including decision trees, ensemble methods (random forests, gradient boosting and AdaBoost), two implementations of support vector machines (libsvm and liblinear), Gaussian methods (Gaussian naïve Bayes, quadratic and linear discriminant analysis), the k-nearest neighbours algorithm and artificial neural networks. The methods were applied to two different data sets produced using the new MBS, which provides multichannel UV-LIF fluorescence signatures for single airborne biological particles. The first data set contained mixed PSLs and the second contained a variety of laboratory-generated aerosol. Clustering in general performs slightly worse than the supervised learning methods, correctly classifying, at best, only 67.6 and 91.1 % of the two data sets respectively.
For supervised learning the gradient boosting algorithm was found to be the most effective, on average correctly classifying 82.8 and 98.27 % of the testing data, respectively, across the two data sets. A possible alternative to gradient boosting is neural networks. We do, however, note that this method requires much more user input than the other methods, and we suggest that further research should be conducted using it, especially on parallelised hardware such as the GPU, which would allow larger networks to be trained and could possibly yield better results. We also saw that some methods, such as clustering, failed to utilise the additional shape information provided by the instrument, whilst for others, such as the decision trees, ensemble methods and neural networks, improved performance could be attained with the inclusion of such information.
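The best-performing supervised setup above, gradient boosting on multichannel fluorescence plus size and shape features, can be sketched as follows; the random matrix and three-way labels are synthetic stand-ins for the MBS data sets:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for single-particle measurements:
# columns = fluorescence channels plus size and morphology features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)  # e.g. bacteria / fungal spore / pollen

# Hold out a test split and evaluate classification accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
accuracy = gb.score(X_test, y_test)
```

Dropping the shape columns from `X` before fitting would mimic the paper's test of whether a method exploits the additional morphology information.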


2015 ◽  
Vol 28 (6) ◽  
pp. 570-600 ◽  
Author(s):  
Grant Duwe ◽  
KiDeuk Kim

Recent research has produced mixed results as to whether newer machine learning algorithms outperform older, more traditional methods such as logistic regression in predicting recidivism. In this study, we compared the performance of 12 supervised learning algorithms in predicting recidivism among offenders released from Minnesota prisons. Using multiple predictive validity metrics, we assessed the performance of these algorithms across varying sample sizes, recidivism base rates, and numbers of predictors in the data set. The newer machine learning algorithms generally yielded better predictive validity results. LogitBoost had the best overall performance, followed by random forests, MultiBoosting, bagged trees, and logistic model trees. Still, the gap between the best and worst algorithms was relatively modest, and none of the methods performed best in each of the 10 scenarios we examined. The results suggest that multiple methods, including machine learning algorithms, should be considered in the development of recidivism risk assessment instruments.


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Max Hahn-Klimroth ◽  
Philipp Loick ◽  
Soo-Zin Kim-Wanner ◽  
Erhard Seifried ◽  
Halvard Bonig

Abstract Background The ability to approximate intra-operative hemoglobin loss with reasonable precision and linearity is a prerequisite for determination of a relevant surgical outcome parameter: this information enables comparison of surgical procedures between different techniques, surgeons or hospitals, and supports anticipation of transfusion needs. Different formulas have been proposed, but none of them were validated for accuracy, precision and linearity against a cohort with precisely measured hemoglobin loss and, possibly for that reason, none has established itself as the gold standard. We sought to identify the minimal data set needed to generate reasonably precise and accurate hemoglobin loss prediction tools, and to derive and validate an estimation formula. Methods Routinely available clinical and laboratory data from a cohort of 401 healthy individuals with controlled hemoglobin loss between 29 and 233 g were extracted from medical charts. Supervised learning algorithms were applied to identify a minimal data set and to generate and validate a formula for calculation of hemoglobin loss. Results Of the classical supervised learning algorithms applied, the linear and ridge regression models performed at least as well as the more complex models. Being the most straightforward to analyze and check for robustness, linear regression was chosen. Weight, height, sex and hemoglobin concentration before and on the morning after the intervention were sufficient to generate a formula for estimation of hemoglobin loss. The resulting model yields an outstanding R2 of 53.2% with similar precision throughout the entire range of volumes and donor sizes, thereby meaningfully outperforming previously proposed medical models. Conclusions The resulting formula will allow objective benchmarking of surgical blood loss, enabling informed decision making as to the need for pre-operative type-and-cross only vs. reservation of packed red cell units, depending on a patient's anemia tolerance, and thus contributing to resource management.
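The estimation approach above amounts to an ordinary linear regression on the five retained predictors. A minimal sketch, using simulated patients rather than the study cohort (the generating relationship below is illustrative, not the published formula):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated cohort: weight (kg), height (cm), sex (0/1), and hemoglobin
# concentration before and on the morning after the intervention (g/L).
rng = np.random.default_rng(1)
n = 400
weight = rng.uniform(50, 110, n)
height = rng.uniform(150, 200, n)
sex = rng.integers(0, 2, n).astype(float)
hb_pre = rng.uniform(120, 170, n)
hb_post = hb_pre - rng.uniform(5, 30, n)

# Simulated target: hemoglobin loss in grams (illustrative relationship only).
hb_loss = 0.07 * weight * (hb_pre - hb_post) + rng.normal(0, 5, n)

# Fit the five-predictor linear model and report R^2 on the cohort.
X = np.column_stack([weight, height, sex, hb_pre, hb_post])
model = LinearRegression().fit(X, hb_loss)
r2 = model.score(X, hb_loss)
```

The fitted `model.coef_` and `model.intercept_` are exactly the kind of closed-form estimation formula the study derives, which is why a linear model is easy to check for robustness and to hand to clinicians.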


2014 ◽  
Vol 14 (3) ◽  
pp. 5535-5542
Author(s):  
Sagri Sharma ◽  
Sanjay Kadam ◽  
Hemant Darbari

Analysis of diseases integrating multiple factors increases the complexity of the problem; therefore, the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, traditional methodologies have not been very effective, and newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing prediction on previously unseen data. These algorithms are used in fields ranging from image analysis to protein structure and function prediction; they are trained on a known data set to produce a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. We therefore apply a well-known machine learning algorithm, the support vector machine (SVM), to analyze the expression levels of thousands of genes simultaneously in a timely, automated and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to hepatocellular carcinoma (HCC) from a gene expression data set using supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
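The core step, training an SVM on expression levels of thousands of genes and evaluating it on held-out samples, can be sketched as below; the random matrix is a synthetic stand-in for a real HCC expression data set:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a gene expression matrix:
# rows = tissue samples, columns = expression levels of individual genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)  # labels: 1 = HCC, 0 = control

# Cross-validation gives an accuracy estimate on unseen samples,
# the statistical evaluation the abstract calls for.
svm = SVC(kernel="rbf")
scores = cross_val_score(svm, X, y, cv=5)
mean_accuracy = scores.mean()
```

With far more genes than samples, as here, cross-validation (rather than a single split) is the usual guard against an optimistic accuracy estimate.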


Author(s):  
M. Govindarajan

Big data mining involves knowledge discovery from very large data sets. The purpose of this chapter is to provide an analysis of the different machine learning algorithms available for performing big data analytics. The machine learning algorithms are categorized into three key categories, namely supervised, unsupervised, and semi-supervised machine learning algorithms. Supervised learning algorithms are trained with a completely labeled set of data, and thus they are used to predict/forecast; example algorithms include logistic regression and the back-propagation neural network. Unsupervised learning algorithms start learning from scratch, and therefore they are used for clustering; example algorithms include the Apriori algorithm and K-means. Semi-supervised learning combines both supervised and unsupervised learning: the algorithms are trained on labeled data but also learn from unlabeled data.
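The supervised-versus-unsupervised distinction drawn above can be illustrated on a toy two-cluster data set (synthetic points; the setup is illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Two well-separated synthetic groups of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: trained with a fully labeled set, then used to predict.
clf = LogisticRegression().fit(X, y)
pred = clf.predict([[5.0, 5.0]])

# Unsupervised: no labels used; the algorithm discovers the clusters itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

The supervised model needs `y` at training time; `KMeans` never sees it, which is exactly the predict/forecast versus clustering split the chapter describes.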

