Bregman Proximal Gradient Algorithm With Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 126515-126529
Author(s):  
Xiaoya Zhang ◽  
Roberto Barrio ◽  
M. Angeles Martinez ◽  
Hao Jiang ◽  
Lizhi Cheng
2019 ◽  
Vol 35 (3) ◽  
pp. 371-378
Author(s):  
Porntip Promsinchai ◽  
Narin Petrot

In this paper, we consider convex constrained optimization problems with composite objective functions over the set of minimizers of another function. The main aim is to numerically test a new algorithm, a stochastic block coordinate proximal-gradient algorithm with penalization, by comparing its number of iterations and CPU time against well-known block coordinate descent algorithms on randomly generated optimization problems with a regularization term.
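The abstract does not spell out the update rule, so the following is a minimal Python sketch of what one stochastic block coordinate proximal-gradient step with penalization could look like, assuming a least-squares smooth part 0.5*||Ax - b||^2, an l1 regularization term, and a growing quadratic penalty beta_k * 0.5*||Cx - d||^2 standing in for the inner constraint set; the block partition, step size, and penalty schedule are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_bc_prox_grad(A, b, C, d, lam=0.1, n_blocks=4,
                            n_iter=2000, beta0=1.0, seed=0):
    """Update one randomly chosen coordinate block per iteration."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    normA2 = np.linalg.norm(A, 2) ** 2
    normC2 = np.linalg.norm(C, 2) ** 2
    for k in range(n_iter):
        beta = beta0 * np.sqrt(k + 1)            # growing penalty parameter
        J = blocks[rng.integers(n_blocks)]       # pick one block at random
        # partial gradient (smooth part + penalty) restricted to block J
        grad_J = A[:, J].T @ (A @ x - b) + beta * (C[:, J].T @ (C @ x - d))
        step = 1.0 / (normA2 + beta * normC2)    # conservative Lipschitz step
        x[J] = soft_threshold(x[J] - step * grad_J, step * lam)
    return x
```

Only one block of coordinates is read and written per iteration, which is what makes block coordinate schemes attractive when the number of variables is large.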


2012 ◽  
Vol 60 (3) ◽  
pp. 481-489 ◽  
Author(s):  
J.M. Łęski ◽  
N. Henzel

Linear regression analysis has become a fundamental tool in experimental sciences. We propose a new method for parameter estimation in linear models. The "Generalized Ordered Linear Regression with Regularization" (GOLRR) uses various loss functions (including the ε-insensitive ones), ordered weighted averaging of the residuals, and regularization. The algorithm consists of solving a sequence of weighted quadratic minimization problems in which the weights used for the next iteration depend not only on the values but also on the order of the model residuals obtained in the current iteration. Such a regression problem may be transformed into an iteratively reweighted least squares scenario. The conjugate gradient algorithm is used to minimize the proposed criterion function. Finally, numerical examples are given to demonstrate the validity of the proposed method.
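To make the iteratively reweighted least squares reading concrete, here is a hedged Python sketch under simplifying assumptions: a squared loss on the residuals, an ordered weight vector that downweights the largest residuals (a trimmed-mean style OWA vector), ridge-type regularization eta*||beta||^2, and SciPy's conjugate gradient solver for the weighted normal equations; the actual loss functions, weight vectors, and stopping rule used in GOLRR may differ.

```python
import numpy as np
from scipy.sparse.linalg import cg

def golrr_sketch(X, y, owa_weights, eta=1e-2, n_outer=20):
    """Iteratively reweighted least squares with rank-ordered weights.

    owa_weights must have length n; its last entries are assigned to the
    samples with the largest residual magnitudes (so trailing zeros trim them).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_outer):
        r = y - X @ beta
        order = np.argsort(np.abs(r))      # rank residuals by magnitude
        w = np.empty(n)
        w[order] = owa_weights             # weight depends on the rank
        # weighted, regularized normal equations: (X^T W X + eta I) beta = X^T W y
        Amat = X.T @ (w[:, None] * X) + eta * np.eye(p)
        rhs = X.T @ (w * y)
        beta, _ = cg(Amat, rhs, x0=beta)   # inner conjugate gradient solve
    return beta

# usage sketch: downweight (here, trim) the 10% largest residuals of 100 samples
# w_owa = np.concatenate([np.ones(90), np.zeros(10)]) / 90
# beta_hat = golrr_sketch(X, y, w_owa)
```

Because the ridge term keeps the system positive definite even when some weights are zero, the inner conjugate gradient solve is well posed at every outer iteration.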


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Maryam A. Alghamdi ◽  
Mohammad Ali Alghamdi ◽  
Naseer Shahzad ◽  
Hong-Kun Xu

We introduce the Q-lasso, which generalizes the well-known lasso of Tibshirani (1996), with Q a closed convex subset of a Euclidean m-space for some integer m ≥ 1. This set Q can be interpreted as the set of errors within a given tolerance level when linear measurements are taken to recover a signal/image via the lasso. Solutions of the Q-lasso depend on a tuning parameter γ. In this paper, we obtain basic properties of the solutions as a function of γ. Because of ill-posedness, we also apply l1-l2 regularization to the Q-lasso. In addition, we discuss iterative methods for solving the Q-lasso, which include the proximal-gradient algorithm and the projection-gradient algorithm.
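As an illustration of the proximal-gradient approach mentioned above, the sketch below applies it to a Q-lasso-type objective 0.5*||Ax - P_Q(Ax)||^2 + γ||x||_1, where P_Q is the projection onto the tolerance set Q. Here Q is assumed to be the box {z : |z - b| ≤ ε}, so P_Q is a componentwise clip; this choice of Q, the step size, and the iteration count are illustrative, not the paper's exact setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def q_lasso_prox_grad(A, b, eps, gamma, n_iter=500):
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1/L with L = ||A||_2^2
    for _ in range(n_iter):
        z = A @ x
        proj = np.clip(z, b - eps, b + eps)    # P_Q(Ax) for the box Q
        grad = A.T @ (z - proj)                # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * gamma)
    return x
```

The gradient step uses the standard fact that the squared distance to a closed convex set has gradient z - P_Q(z); the l1 term is then handled by soft-thresholding, exactly as in the ordinary lasso case Q = {b}.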

