High-Dimensional Additive Hazards Regression for Oral Squamous Cell Carcinoma Using Microarray Data: A Comparative Study

2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Omid Hamidi ◽  
Lily Tapak ◽  
Aarefeh Jafarzadeh Kohneloo ◽  
Majid Sadeghifar

Microarray technology yields high-dimensional, low-sample-size data sets, so fitting sparse models is essential: only a small number of influential genes can be identified reliably. A number of variable selection approaches have been proposed for high-dimensional time-to-event data with censoring, based on the Cox proportional hazards model. The present study applied three sparse variable selection techniques, the Lasso, the smoothly clipped absolute deviation (SCAD), and the smooth integration of counting and absolute deviation (SICA), to gene expression survival data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performance of these techniques was evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P < 0.001). The Lasso showed the highest median area under the ROC curve over time (0.95), and SCAD showed the lowest prediction error (0.105). The genes selected by all methods improved the prediction of a purely clinical model, indicating the valuable information contained in the microarray features. It was therefore concluded that these approaches can satisfactorily predict survival from selected gene expression measurements.
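The sparse-selection step the abstract describes can be illustrated with a minimal sketch. This is not the authors' additive hazards fit (no standard Python library implements it); ordinary Lasso regression on a simulated linear signal stands in for the penalized survival model, and all data and parameter values below are invented for illustration:

```python
# Hedged sketch: Lasso-type sparse selection on p >> n data.
# A simulated linear outcome stands in for the additive hazards model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 80, 500                      # low sample size, high dimension
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                      # only 5 "influential genes"
y = X @ beta + rng.standard_normal(n)

model = Lasso(alpha=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)   # a small subset of the 500 features
```

The L1 penalty zeroes out most coefficients, leaving a short gene list that can then be assessed with time-dependent ROC or prediction error curves.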

2013 ◽  
Vol 12 ◽  
pp. CIN.S10212 ◽  
Author(s):  
Lingkang Huang ◽  
Hao Helen Zhang ◽  
Zhao-Bang Zeng ◽  
Pierre R. Bushel

Background: Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables relative to the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high-dimensional, low-sample-size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity.

Results: The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding-type penalties to incorporate variable selection into multi-class classification for high-dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among the different variable selection schemes.

Conclusions: High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and for defining targets of therapeutic intervention.

Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html.
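The idea of adding a soft-thresholding penalty to a multi-class SVM can be sketched with scikit-learn's L1-penalized linear SVM. This is not the authors' penalized Crammer-Singer formulation (scikit-learn's L1 path uses one-vs-rest), but it shows the same effect, sparsity across genes while classifying multiple tumor types; the simulated data and constants are illustrative only:

```python
# Sketch: L1-penalized linear SVM as a soft-thresholding route to
# sparse multi-class classification (one-vs-rest, not Crammer-Singer).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n, p, k = 120, 300, 3               # n samples, p genes, k tumor classes
X = rng.standard_normal((n, p))
y = rng.integers(0, k, n)
X[:, :3] += 2.0 * np.eye(k)[y]      # classes differ only in 3 genes

clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
used = np.flatnonzero(np.abs(clf.coef_).sum(axis=0))  # genes with any nonzero weight
acc = clf.score(X, y)
```

Most of the 300 genes receive exactly zero weight in every class, so the fitted rule depends on only a handful of features.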


2021 ◽  
Vol 29 (2) ◽  
Author(s):  
Ishaq Abdullahi Baba ◽  
Habshah Midi ◽  
Leong Wah June ◽  
Gafurjan Ibragimove

The widely used least absolute deviation (LAD) estimator with the smoothly clipped absolute deviation (SCAD) penalty function (abbreviated LAD-SCAD) is known to produce corrupted estimates in the presence of outlying observations. The problem becomes more complicated when the number of predictors diverges. To overcome these problems, the LAD-SCAD estimator based on the sure independence screening (SIS) technique has been put forward. The SIS method uses the rank correlation screening (RCS) algorithm in the pre-screening step and the traditional pathwise coordinate descent algorithm for computing the sequence of regularization parameters in the post-screening step for onward model selection. It is now evident, however, that rank correlation is not fully robust against outliers. Motivated by these inadequacies, we propose to improve the LAD-SCAD estimator using a robust wrapped correlation screening (WCS) method, replacing the rank correlation in the SIS step with the robust wrapped correlation. The proposed estimator, denoted WCS+LAD-SCAD, is employed for variable selection. Simulation studies and real-life data examples show that the proposed procedure produces more efficient results than the existing methods.
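The pre-screening step underlying both RCS and the proposed WCS can be sketched in a few lines: score every predictor by a (robust) correlation with the response and keep the top d survivors for the penalized fit. The sketch below uses Spearman rank correlation, i.e. the RCS variant the abstract starts from, on invented data; the wrapped correlation itself is not in standard libraries:

```python
# Minimal sketch of the SIS pre-screening step: rank each predictor
# by |Spearman correlation| with the response and keep the top d.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n, p, d = 100, 1000, 20
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

score = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(p)])
keep = np.argsort(score)[::-1][:d]   # survivors passed to the LAD-SCAD fit
```

Screening reduces the problem from 1000 predictors to 20 before any penalized optimization runs, which is what makes the diverging-dimension case tractable.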


2020 ◽  
Author(s):  
Insha Ullah ◽  
Kerrie Mengersen ◽  
Anthony Pettitt ◽  
Benoit Liquet

High-dimensional datasets, where the number of variables p is much larger than the number of samples n, are ubiquitous and often render standard classification and regression techniques unreliable due to overfitting. An important research problem is feature selection: ranking candidate variables by their relevance to the outcome variable and retaining those that satisfy a chosen criterion. In this article, we propose a computationally efficient variable selection method based on principal component analysis. The method is very simple, accessible, and suitable for the analysis of high-dimensional datasets. It allows correction for population structure in genome-wide association studies (GWAS), which would otherwise induce spurious associations, and it is less likely to overfit. We expect our method to accurately identify important features while reducing the false discovery rate (FDR, the expected proportion of erroneously rejected null hypotheses) by accounting for the correlation between variables and by de-noising the data in the training phase, which also makes it robust to outliers in the training data. Being almost as fast as univariate filters, our method allows for valid statistical inference. The ability to make such inferences sets this method apart from most of the current multivariate statistical tools designed for today's high-dimensional data. We demonstrate the superior performance of our method through extensive simulations. A semi-real gene-expression dataset, a challenging childhood acute lymphoblastic leukemia (CALL) gene expression study, and a GWAS that attempts to identify single-nucleotide polymorphisms (SNPs) associated with rice grain length further demonstrate the usefulness of our method in genomic applications.

Author summary: An integral part of modern statistical research is feature selection, which has driven various scientific discoveries, especially in emerging genomics applications such as gene expression and proteomics studies, where data sets have thousands or tens of thousands of features but a limited number of samples. In practice, however, due to the unavailability of suitable multivariate methods, researchers often resort to univariate filters to deal with a large number of variables. These univariate filters do not take the dependencies between variables into account because they assess variables one by one. This leads to loss of information, loss of statistical power (the probability of correctly rejecting the null hypothesis), and potentially biased estimates. In this paper, we propose a new variable selection method. Being computationally efficient, our method allows for valid inference. The ability to make such inferences sets this method apart from most of the current multivariate statistical tools designed for today's high-dimensional data.
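The general PCA-based de-noising idea the abstract appeals to can be sketched as follows. This is a hedged illustration of the principle, not the authors' algorithm: reconstruct the data matrix from its leading principal components, then rank features by association with the outcome on the de-noised copy. The low-rank simulated data and all constants are assumptions for the sketch:

```python
# Sketch: de-noise X via a low-rank PCA reconstruction, then rank
# features by correlation with y on the de-noised data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n, p, k = 100, 400, 5
Z = rng.standard_normal((n, k))                  # latent factors
W = rng.standard_normal((k, p))                  # factor loadings
X = Z @ W + 0.5 * rng.standard_normal((n, p))    # noisy high-dim data
y = Z[:, 0] + 0.3 * rng.standard_normal(n)       # outcome driven by factor 0

pca = PCA(n_components=k).fit(X)
X_dn = pca.inverse_transform(pca.transform(X))   # de-noised reconstruction

r = np.abs([np.corrcoef(X_dn[:, j], y)[0, 1] for j in range(p)])
top = np.argsort(r)[::-1][:10]                   # top-ranked features
```

Because the reconstruction keeps only shared variation, feature-level noise (and with it some outlier influence) is suppressed before any feature is scored.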


2013 ◽  
Vol 2013 ◽  
pp. 1-5 ◽  
Author(s):  
Xiao-Ying Liu ◽  
Yong Liang ◽  
Zong-Ben Xu ◽  
Hai Zhang ◽  
Kwong-Sak Leung

A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm is obtained by optimizing a reweighted iterative series of L1 penalties together with a shooting strategy for the L1/2 penalty. Simulation results on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results on a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
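The reweighting idea behind this algorithm, solving a sequence of weighted L1 problems whose weights mimic the L1/2 penalty, can be sketched with a toy example. Plain least squares stands in for the Cox partial likelihood here, sklearn's Lasso stands in for the shooting (coordinate descent) solver, and the weights and data are illustrative assumptions:

```python
# Toy sketch of iteratively reweighted L1 approximating an L1/2
# penalty: weights w_j = 1 / (sqrt(|beta_j|) + eps), applied via
# column scaling so each step is a standard Lasso solve.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0
y = X @ beta_true + rng.standard_normal(n)

beta = np.ones(p)
for _ in range(5):
    w = 1.0 / (np.sqrt(np.abs(beta)) + 1e-3)  # L1/2-style weights
    fit = Lasso(alpha=0.3).fit(X / w, y)      # weighted Lasso via scaling
    beta = fit.coef_ / w                      # undo the column scaling

sel = np.flatnonzero(np.abs(beta) > 1e-6)
```

Coefficients driven to zero receive huge weights on the next pass and stay at zero, while large coefficients are penalized ever more lightly, which is what makes the method less biased than a single Lasso fit.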


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Hadi Raeisi Shahraki ◽  
Saeedeh Pourahmad ◽  
Najaf Zare

K nearest neighbors (KNN) is known as one of the simplest nonparametric classifiers, but in high-dimensional settings the accuracy of KNN is degraded by nuisance features. In this study, we propose K important neighbors (KIN) as a novel approach for binary classification in high-dimensional problems. To avoid the curse of dimensionality, we fit a smoothly clipped absolute deviation (SCAD) logistic regression at the initial stage and incorporate the importance of each feature into the dissimilarity measure by imposing feature contributions, as a function of the SCAD coefficients, on the Euclidean distance. This hybrid dissimilarity measure, which combines information from both features and distances, enjoys the good properties of SCAD-penalized regression and KNN simultaneously. Simulation studies showed that, compared with KNN, KIN performs well in terms of both accuracy and dimension reduction. The proposed approach eliminated nearly all of the noninformative features by exploiting the oracle property of SCAD-penalized regression in the construction of the dissimilarity measure. In very sparse settings, KIN also outperformed the support vector machine (SVM) and random forest (RF), regarded as among the best classifiers.
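The KIN construction, a penalized fit whose coefficients weight the Euclidean distance, can be sketched as below. An L1-penalized logistic regression stands in for SCAD (scikit-learn has no SCAD penalty), and the data and constants are invented for illustration:

```python
# Sketch of the KIN idea: weight each feature in the Euclidean
# distance by |coefficient| from a sparse logistic fit, so
# noninformative features drop out of the neighbor search.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
n, p = 200, 100
X = rng.standard_normal((n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only 2 informative features

lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
w = np.abs(lr.coef_.ravel())               # sparse feature importances

# scaling columns by w is equivalent to a w-weighted Euclidean distance
knn = KNeighborsClassifier(n_neighbors=5).fit(X * w, y)
acc = knn.score(X * w, y)
```

Features with zero coefficient contribute nothing to the distance, so the neighbor search effectively happens in the low-dimensional informative subspace.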


2020 ◽  
Vol 17 (2) ◽  
pp. 0550
Author(s):  
Ali Hameed Yousef ◽  
Omar Abdulmohsin Ali

Penalized regression models have received considerable attention for variable selection and play an essential role in dealing with high-dimensional data. The arctangent (Atan) penalty has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable or to heavy-tailed error distributions, whereas least absolute deviation (LAD) is a good way to achieve robustness in regression estimation. The specific objective of this research is to propose a robust Atan estimator that combines these two ideas. Simulation experiments and real-data applications show that the proposed LAD-Atan estimator has superior performance compared with other estimators.

