Robust Variable Selection and Estimation Based on Kernel Modal Regression

Entropy ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. 403 ◽  
Author(s):  
Changying Guo ◽  
Biqin Song ◽  
Yingjie Wang ◽  
Hong Chen ◽  
Huijuan Xiong

Model-free variable selection has attracted increasing interest recently due to its flexibility in algorithmic design and outstanding performance in real-world applications. However, most existing statistical methods are formulated under the mean squared error (MSE) criterion and are therefore susceptible to non-Gaussian noise and outliers. Because the MSE criterion implicitly requires the data to satisfy a Gaussian noise condition, it can hamper the effectiveness of model-free methods in complex settings. To circumvent this issue, we present a new model-free variable selection algorithm that integrates kernel modal regression with gradient-based variable identification. The derived modal regression estimator is closely related to information-theoretic learning under the maximum correntropy criterion, and ensures algorithmic robustness to complex noise by replacing learning of the conditional mean with learning of the conditional mode. The gradient information of the estimator offers a model-free metric for screening the key variables. In theory, we establish the foundations of the new model in terms of generalization bounds and variable selection consistency. In applications, the effectiveness of the proposed method is verified by data experiments.
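The mode-seeking idea above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: a linear model fit under the maximum correntropy criterion via half-quadratic (iteratively reweighted least squares) optimization, with the coefficient magnitudes standing in for the gradient-based screening metric. All function names and parameter values are illustrative.

```python
import numpy as np

def fit_mcc(X, y, sigma=1.0, iters=50):
    """Half-quadratic optimization of the maximum correntropy criterion:
    maximize mean exp(-(y - Xw)^2 / (2 sigma^2)), which targets the
    conditional mode rather than the conditional mean."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]      # warm start from OLS
    for _ in range(iters):
        r = y - X @ w
        s = np.exp(-r**2 / (2 * sigma**2))        # per-sample weights: outliers -> ~0
        Xs = X * s[:, None]
        w = np.linalg.solve(X.T @ Xs, Xs.T @ y)   # weighted least-squares update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.1, size=200)
y[:10] += 15.0  # heavy non-Gaussian outliers that would bias an MSE fit

w_hat = fit_mcc(X, y)
# For a linear model the gradient of the fitted function is w_hat itself,
# so variables are screened by |w_hat[j]|.
```

A plain least-squares fit on the same data is pulled toward the outliers; the reweighting step drives their weights to essentially zero, so the mode-seeking fit recovers the clean coefficients.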

Statistics ◽  
2018 ◽  
Vol 52 (6) ◽  
pp. 1212-1248
Author(s):  
Anchao Song ◽  
Tiefeng Ma ◽  
Shaogao Lv ◽  
Changsheng Lin

2018 ◽  
Vol 167 ◽  
pp. 366-377
Author(s):  
Ahmad Alothman ◽  
Yuexiao Dong ◽  
Andreas Artemiou

Author(s):  
Tao Jiang ◽  
Yuanyuan Li ◽  
Alison A Motsinger-Reif

Abstract

Motivation: The recently proposed knockoff filter is a general framework for controlling the false discovery rate (FDR) when performing variable selection. This powerful new approach generates a ‘knockoff’ of each variable tested for exact FDR control. Imitation variables that mimic the correlation structure found within the original variables serve as negative controls for statistical inference. Current applications of knockoff methods use linear regression models and conduct variable selection only for variables existing in model functions. Here, we extend the use of knockoffs to machine learning with boosted trees, which are successful and widely used in problems where no prior knowledge of the model function is required. However, currently available importance scores in tree models are insufficient for variable selection with FDR control.

Results: We propose a novel strategy for conducting variable selection without prior model topology knowledge using the knockoff method with boosted tree models. We extend the current knockoff method to model-free variable selection through the use of tree-based models. Additionally, we propose and evaluate two new sampling methods for generating knockoffs, namely the sparse covariance and principal component knockoff methods. We test and compare these methods with the original knockoff method regarding their ability to control type I errors and power. In simulation tests, we compare the properties and performance of importance test statistics of tree models. The results cover different combinations of knockoffs and importance test statistics. We consider scenarios that include main-effect, interaction, exponential and second-order models while assuming the true model structures are unknown. We apply our algorithm to tumor purity estimation and tumor classification using The Cancer Genome Atlas (TCGA) gene expression data. Our results show improved discrimination between difficult-to-discriminate cancer types.

Availability and implementation: The proposed algorithm is included in the KOBT package, which is available at https://cran.r-project.org/web/packages/KOBT/index.html.

Supplementary information: Supplementary data are available at Bioinformatics online.
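As a rough sketch of the workflow (assuming i.i.d. standard Gaussian features, so an independent resample is a valid knockoff; the paper's sparse covariance and principal component constructions are what handle general designs), one can pair a small boosted-stump model with the knockoff+ threshold. Everything below, including the toy boosting routine and its importance score, is illustrative rather than the KOBT implementation.

```python
import numpy as np

def boost_stumps(X, y, n_rounds=150, lr=0.1):
    """Toy gradient boosting with depth-1 trees under squared loss.
    Importance of feature j = total squared error explained by its splits."""
    n, p = X.shape
    cuts = np.quantile(X, [0.25, 0.5, 0.75], axis=0)  # candidate thresholds
    pred = np.full(n, y.mean())
    imp = np.zeros(p)
    for _ in range(n_rounds):
        r = y - pred                                  # residuals (negative gradient)
        best = (-1.0, 0, 0.0, 0.0, 0.0)
        for j in range(p):
            for t in cuts[:, j]:
                left = X[:, j] <= t
                nl = int(left.sum())
                if nl == 0 or nl == n:
                    continue
                ml, mr = r[left].mean(), r[~left].mean()
                gain = nl * ml**2 + (n - nl) * mr**2  # explained sum of squares
                if gain > best[0]:
                    best = (gain, j, t, ml, mr)
        gain, j, t, ml, mr = best
        imp[j] += gain
        pred += lr * np.where(X[:, j] <= t, ml, mr)
    return imp

def knockoff_select(X, y, q=0.2, rng=None):
    """Knockoff+ selection with tree importances: W_j = imp(X_j) - imp(knockoff_j);
    threshold = smallest t with (1 + #{W <= -t}) / #{W >= t} <= q."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, p = X.shape
    Xk = rng.normal(size=(n, p))   # valid knockoffs only for i.i.d. N(0,1) features
    imp = boost_stumps(np.hstack([X, Xk]), y)
    W = imp[:p] - imp[p:]
    for t in np.sort(np.abs(W[W != 0])):
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = X[:, :5] @ np.full(5, 2.0) + rng.normal(scale=0.5, size=400)  # 5 true signals
selected = knockoff_select(X, y, q=0.2, rng=rng)
```

The knockoff columns receive importance only by chance, so the signed statistic W separates true signals (large positive) from nulls (small, roughly symmetric), which is what makes the data-dependent threshold control the FDR.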


Author(s):  
Lexin Li ◽  
R. Dennis Cook ◽  
Christopher J. Nachtsheim

Biometrics ◽  
2011 ◽  
Vol 68 (1) ◽  
pp. 12-22
Author(s):  
Wei Sun ◽  
Lexin Li

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 651
Author(s):  
Hao Deng ◽  
Jianghong Chen ◽  
Biqin Song ◽  
Zhibin Pan

Due to their flexibility and interpretability, additive models are powerful tools for high-dimensional mean regression and variable selection. However, least-squares-based mean regression models are sensitive to non-Gaussian noise, so there is a need to improve their robustness. This paper considers estimation and variable selection via modal regression in reproducing kernel Hilbert spaces (RKHSs). Based on the mode-induced metric and a two-fold Lasso-type regularizer, we propose a sparse modal regression algorithm and derive its excess generalization error. Experimental results demonstrate the effectiveness of the proposed model.
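The additive scheme can be sketched as follows. This is a hypothetical proximal-gradient variant, not the paper's RKHS algorithm: each coordinate gets a small Gaussian basis expansion, the mode-induced (correntropy-type) objective supplies robust sample weights, and a group-Lasso prox step can zero out whole components. The basis, step size, and penalty level lam are all assumptions.

```python
import numpy as np

def sparse_modal_additive(X, y, lam=0.05, sigma=1.0, lr=0.2, iters=400):
    """Proximal-gradient sketch of sparse additive modal regression:
    maximize mean exp(-(y - f(x))^2 / (2 sigma^2)) over the additive model
    f(x) = sum_j Phi_j(x_j) @ B[j], penalized by lam * sum_j ||B[j]||
    (a group-Lasso term that removes whole components)."""
    n, p = X.shape
    centers = np.linspace(-2, 2, 5)
    # per-coordinate Gaussian bases, column-centred so each component has mean ~0
    Phi = []
    for j in range(p):
        F = np.exp(-(X[:, j, None] - centers[None, :])**2 / (2 * 0.5**2))
        Phi.append(F - F.mean(axis=0))
    y = y - y.mean()
    B = [np.zeros(5) for _ in range(p)]
    for _ in range(iters):
        r = y - sum(Phi[j] @ B[j] for j in range(p))
        w = np.exp(-r**2 / (2 * sigma**2))       # mode-seeking weights: outliers -> ~0
        for j in range(p):
            b = B[j] + lr * Phi[j].T @ (w * r) / (n * sigma**2)  # ascent step
            norm = np.linalg.norm(b)
            shrink = max(0.0, 1.0 - lr * lam / norm) if norm > 0 else 0.0
            B[j] = shrink * b                    # group soft-threshold
    return B

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(300, 6))
y = 2.0 * np.sin(X[:, 0]) + X[:, 1] + rng.normal(scale=0.2, size=300)
y[:15] += 10.0  # outliers that a least-squares additive fit would chase
B = sparse_modal_additive(X, y)
norms = [float(np.linalg.norm(b)) for b in B]  # per-component sizes
```

Components 0 and 1 carry the signal and keep large coefficient groups, while the irrelevant coordinates are shrunk toward zero by the group penalty; the Gaussian weights keep the outlying responses from distorting either part of the fit.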

