Variable Selection and Parameter Estimation with the Atan Regularization Method

2016 ◽  
Vol 2016 ◽  
pp. 1-12
Author(s):  
Yanxin Wang ◽  
Li Zhu

Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by penalized least squares using various penalty functions. In this paper, an arctangent-type penalty that very closely resembles the L0 penalty is proposed; we call it the Atan penalty. The Atan-penalized least squares procedure is shown to consistently select the correct model and to be asymptotically normal, provided the number of variables grows more slowly than the number of observations. The Atan procedure is efficiently implemented using an iteratively reweighted Lasso algorithm. Simulation results and a data example show that the Atan procedure with a BIC-type criterion performs very well in a variety of settings.
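The abstract does not reproduce the algorithm, but the iteratively reweighted Lasso idea can be sketched in a few lines. The sketch below assumes the Atan penalty has the form λ(γ + 2/π)·arctan(|β|/γ); the exact constants, the tuning values, and the toy data are illustrative assumptions, and an orthonormal design is used so each weighted-Lasso step reduces to componentwise soft-thresholding:

```python
import math

def atan_weight(b, lam, gamma):
    # derivative of the assumed Atan penalty lam*(gamma + 2/pi)*arctan(|b|/gamma)
    return lam * (gamma + 2.0 / math.pi) * gamma / (gamma ** 2 + b ** 2)

def soft_threshold(z, t):
    return math.copysign(max(abs(z) - t, 0.0), z)

def atan_reweighted_lasso(z, lam=0.5, gamma=0.01, iters=20):
    # z: ordinary least-squares estimates under an orthonormal design, so each
    # weighted-Lasso subproblem is solved exactly by soft-thresholding with a
    # per-coefficient threshold taken from the current Atan penalty slope
    beta = list(z)
    for _ in range(iters):
        beta = [soft_threshold(zj, atan_weight(bj, lam, gamma))
                for zj, bj in zip(z, beta)]
    return beta

# large coefficients are barely shrunk; small ones are driven exactly to zero,
# mimicking the near-L0 behavior the abstract describes
est = atan_reweighted_lasso([3.0, 0.05, -2.0, 0.02])
```

Because the penalty's slope is nearly zero for large coefficients and very steep near zero, the reweighting leaves strong signals almost unbiased while zeroing out weak ones.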

2021 ◽  
Author(s):  
Mu Yue

In high-dimensional data, penalized regression is often used for variable selection and parameter estimation. However, these methods typically require time-consuming cross-validation to select tuning parameters, and they retain more false positives under high dimensionality. This chapter discusses sparse-boosting-based machine learning methods for the following high-dimensional problems. First, a sparse boosting method to select important biomarkers is studied for right-censored survival data with high-dimensional biomarkers. Then, a two-step sparse boosting method to carry out variable selection and model-based prediction is studied for high-dimensional longitudinal observations measured repeatedly over time. Finally, a multi-step sparse boosting method to identify patient subgroups that exhibit different treatment effects is studied for high-dimensional dense longitudinal observations. This chapter aims to improve the accuracy and computational speed of variable selection and parameter estimation in high-dimensional data. It also aims to expand the application scope of sparse boosting and to develop new methods for high-dimensional survival analysis, longitudinal data analysis, and subgroup analysis, which have broad application prospects.
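The chapter's specific procedures are not given in the abstract; as a minimal sketch of the building block it refers to, the following implements generic componentwise (sparse) L2 boosting. The toy data, step size `nu`, and step count are illustrative choices, not taken from the chapter:

```python
def componentwise_boost(X, y, steps=50, nu=0.1):
    # sparse (componentwise) L2 boosting: each step refits only the single
    # predictor that most reduces the residual sum of squares, so unhelpful
    # predictors are simply never selected
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best_j, best_coef, best_gain = 0, 0.0, -1.0
        for j in range(p):
            num = sum(X[i][j] * resid[i] for i in range(n))
            den = sum(X[i][j] ** 2 for i in range(n)) or 1.0
            coef = num / den
            gain = coef * num  # RSS reduction from this univariate fit
            if gain > best_gain:
                best_j, best_coef, best_gain = j, coef, gain
        beta[best_j] += nu * best_coef  # shrunken update -> sparse path
        resid = [resid[i] - nu * best_coef * X[i][best_j] for i in range(n)]
    return beta

# y depends only on the first column; boosting never touches the others
X = [[1, 0, 0.1], [2, 0.5, 0], [3, -0.5, 0.2], [4, 1, 0.1]]
y = [2, 4, 6, 8]
beta = componentwise_boost(X, y)
```

Unlike penalized regression, selection here is controlled by the stopping step count rather than a cross-validated penalty level, which is the computational advantage the abstract emphasizes.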


2013 ◽  
Vol 2013 ◽  
pp. 1-5 ◽  
Author(s):  
Xiao-Ying Liu ◽  
Yong Liang ◽  
Zong-Ben Xu ◽  
Hai Zhang ◽  
Kwong-Sak Leung

A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm can be obtained easily by optimizing a reweighted iterative series of L1 penalties with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
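As a hedged illustration of the general idea only (not the paper's Cox-model algorithm), the sketch below combines a shooting (cyclic coordinate-descent) solver for a weighted Lasso with an outer reweighting step derived from the L1/2 penalty. The weight formula λ/(2√(|β|+ε)), the toy linear-regression data, and all settings are assumptions:

```python
import math

def soft_threshold(z, t):
    return math.copysign(max(abs(z) - t, 0.0), z)

def shooting(X, y, w, sweeps=50):
    # "shooting": cyclic coordinate descent for a weighted-L1 least-squares fit
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            num = sum(X[i][j] * r[i] for i in range(n))
            den = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(num / den, w[j] / den)
    return beta

def adaptive_l_half(X, y, lam=0.5, outer=5, eps=1e-4):
    # reweighted-L1 approximation of the L1/2 penalty:
    # weight_j = lam / (2*sqrt(|beta_j| + eps)); eps avoids division by zero
    p = len(X[0])
    w = [lam] * p
    beta = [0.0] * p
    for _ in range(outer):
        beta = shooting(X, y, w)
        w = [lam / (2.0 * math.sqrt(abs(b) + eps)) for b in beta]
    return beta

X = [[1, 0], [0, 1], [1, 1], [1, -1]]
y = [3, 0, 3, 3]   # generated from beta = (3, 0)
beta = adaptive_l_half(X, y)
```

Coefficients shrunk toward zero receive ever larger weights on the next pass, which is what makes the reweighted L1 scheme behave like the nonconvex L1/2 penalty.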


Algorithms ◽  
2018 ◽  
Vol 11 (11) ◽  
pp. 169
Author(s):  
Xuyang Lou ◽  
Xu Cai ◽  
Baotong Cui

This work addresses parameter estimation for a class of neural systems with limit cycles. An identification model is formulated based on the discretized neural model. To estimate the parameter vector in the identification model, recursive least-squares and stochastic gradient algorithms are proposed, including their multi-innovation versions obtained by introducing an innovation vector. Simulation results on the FitzHugh–Nagumo model indicate that the proposed algorithms achieve the expected effectiveness.
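The abstract's algorithms target a discretized neural model; as a generic illustration only, the following implements standard recursive least squares for a two-parameter linear-in-parameters model. The data and initial covariance are illustrative, and the multi-innovation extension (stacking several past innovations per update) is omitted:

```python
def rls(data):
    # standard recursive least squares for y = theta1*phi1 + theta2*phi2
    theta = [0.0, 0.0]
    P = [[1e6, 0.0], [0.0, 1e6]]  # large initial covariance = weak prior
    for phi, y in data:
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]               # gain vector
        innov = y - (theta[0] * phi[0] + theta[1] * phi[1])  # innovation
        theta = [theta[0] + K[0] * innov, theta[1] + K[1] * innov]
        # covariance update P <- (I - K*phi^T) P
        P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    return theta

# noise-free regressor/output pairs generated from theta = (2, -1)
data = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3), ((1, 2), 0)]
theta = rls(data)
```

A multi-innovation variant would replace the scalar `innov` with a vector of the most recent innovations, which is the extension the abstract credits with improved accuracy.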


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Cheng Wang ◽  
Tao Tang ◽  
Dewang Chen

The identification of a class of linear-in-parameters multiple-input single-output systems is considered. Using an iterative search, a least-squares-based iterative algorithm and a gradient-based iterative algorithm are proposed. A nonlinear example is used to verify the effectiveness of the algorithms, and the simulation results show that the least-squares-based iterative algorithm produces more accurate parameter estimates than the gradient-based iterative algorithm.
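For a toy linear-in-parameters example (not the paper's system), the sketch below contrasts a batch least-squares estimate computed from the normal equations with a gradient-based iterative estimate; the data, step size, and iteration count are illustrative assumptions:

```python
def ls_estimate(data):
    # batch least squares via the 2x2 normal equations
    s00 = s01 = s11 = b0 = b1 = 0.0
    for phi, y in data:
        s00 += phi[0] ** 2
        s01 += phi[0] * phi[1]
        s11 += phi[1] ** 2
        b0 += phi[0] * y
        b1 += phi[1] * y
    det = s00 * s11 - s01 * s01
    return [(s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det]

def gradient_iterative(data, mu=0.05, iters=300):
    # batch gradient iterations on the squared-error cost; the step size mu
    # must stay below 2/lambda_max(Phi^T Phi) for the iteration to converge
    theta = [0.0, 0.0]
    for _ in range(iters):
        g = [0.0, 0.0]
        for phi, y in data:
            err = y - (theta[0] * phi[0] + theta[1] * phi[1])
            g[0] += phi[0] * err
            g[1] += phi[1] * err
        theta = [theta[0] + mu * g[0], theta[1] + mu * g[1]]
    return theta

data = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3), ((1, 2), 0)]
theta_ls = ls_estimate(data)     # exact on noise-free data
theta_g = gradient_iterative(data)  # approaches the same answer iteratively
```

On noise-free data both recover the true parameters; the practical difference the abstract reports (least-squares iterations being more accurate) shows up when the gradient method is stopped early or the step size is conservative.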


Author(s):  
Galina Vasil’evna Troshina ◽  
Alexander Aleksandrovich Voevoda

It is suggested to use a system model running in real time for an iterative parameter-estimation method. This makes it possible to select a suitable input signal and to tune the object parameters. The object was modeled in the MATLAB environment both for the case in which the system is unaffected by measurement noise and for the case in which the object is subject to Gaussian noise. A superposition of two meanders (square waves) with different periods and unit amplitude is used as the input signal. The model has a three-layer structure in the MATLAB environment. The top layer contains the units that simulate the input signal, the object itself, the noise, and the parameter estimation. The second and third layers implement the iterative least-squares method. Diagrams of the input and output signals with and without noise are shown, and parameter-estimation results for a static object are given. According to the modeling results, the algorithm works well even in the presence of significant measurement noise. To verify the correctness of the algorithm, auxiliary computations were performed, and diagrams of the behavior of the gain used in the parameter-estimation procedure were constructed. The initial conditions necessary for the iterative least-squares method are specified. Understanding how this algorithm functions is the basis for its subsequent use in parameter estimation for multi-channel dynamic objects.
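As a rough illustration of the setup described (a static object driven by a superposition of two meanders, with Gaussian measurement noise), the following scalar recursive least-squares sketch uses illustrative values throughout; it is not the authors' MATLAB model:

```python
import random

def meander(t, period):
    # unit-amplitude square wave ("meander") that flips sign every `period` samples
    return 1.0 if (t // period) % 2 == 0 else -1.0

def scalar_rls(us, ys):
    # recursive least squares for the static object y = theta*u
    theta, P = 0.0, 1e6
    for u, y in zip(us, ys):
        K = P * u / (1.0 + u * u * P)   # gain shrinks as data accumulates
        theta += K * (y - theta * u)    # correct by the innovation
        P = (1.0 - K * u) * P
    return theta

random.seed(0)
us = [meander(t, 8) + meander(t, 3) for t in range(200)]  # input signal
ys = [2.5 * u + random.gauss(0.0, 0.5) for u in us]       # noisy static object
est = scalar_rls(us, ys)  # recovers the gain 2.5 despite the noise
```

The two different periods keep the input persistently exciting, and the shrinking gain `K` is the quantity whose behavior the abstract says was plotted to verify the algorithm.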

