Sparse regularized learning in the reproducing kernel Banach spaces with the ℓ1 norm

2020 ◽  
Vol 3 (3) ◽  
pp. 205-218
Author(s):  
Ying Lin ◽  
Rongrong Lin ◽  
Qi Ye ◽  
Author(s):  
Bahram Dastourian ◽  
Mohammad Janfada

In this paper, the concept of a family of local atoms in a Banach space is introduced by means of a semi-inner product (s.i.p.), and this concept is then generalized to an atomic system for operators in Banach spaces. We give several characterizations of atomic systems that lead to new frames for operators, and we obtain a reconstruction formula. These characterizations also allow us to state results on sampling theory in s.i.p. reproducing kernel Banach spaces. Finally, we define the frame operator for these kinds of frames in Banach spaces and establish a perturbation result in this framework.
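To make the reconstruction formula concrete, here is a minimal numerical sketch of frame reconstruction in the finite-dimensional Hilbert-space special case, where the semi-inner product reduces to the ordinary inner product; the frame, data, and variable names are illustrative, not taken from the paper.

# Frame reconstruction in R^3 (Hilbert-space special case, where the
# s.i.p. [f, g] reduces to the inner product <f, g>).
import numpy as np

rng = np.random.default_rng(0)

# A frame {f_k} for R^3: five spanning vectors, stored as the rows of F.
F = rng.standard_normal((5, 3))

# Frame operator S f = sum_k <f, f_k> f_k, i.e. the matrix F^T F.
S = F.T @ F

f = rng.standard_normal(3)
coeffs = F @ f                      # frame coefficients <f, f_k>

# Reconstruction via the canonical dual frame: f = sum_k <f, f_k> S^{-1} f_k.
f_rec = np.linalg.solve(S, F.T @ coeffs)
assert np.allclose(f, f_rec)        # exact recovery since {f_k} spans R^3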


2017 ◽  
Vol 29 (11) ◽  
pp. 3078-3093 ◽  
Author(s):  
Liangzhi Chen ◽  
Haizhang Zhang

Support vector machines, which maximize the margin from patterns to the separating hyperplane subject to correct classification, have achieved remarkable success in machine learning. Margin error bounds based on Hilbert spaces have been introduced in the literature to justify the strategy of maximizing the margin in SVMs. Recently, there has been much interest in developing Banach-space methods for machine learning, with large-margin classification in Banach spaces a focus of such attempts. In this letter we establish a margin error bound for the SVM on reproducing kernel Banach spaces, thus supplying statistical justification for large-margin classification in Banach spaces.
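The geometric margin of a linear classifier x ↦ sign(<w, x> + b) is 1/||w||, so maximizing the margin subject to correct classification amounts to minimizing ||w||. Below is a small sketch of this in the familiar Euclidean (Hilbert) setting using scikit-learn; the data and the choice of a large C to approximate the hard-margin solution are illustrative assumptions, since the Banach-space setting of the letter has no off-the-shelf solver.

# Large-margin linear SVM in the Euclidean setting; the margin from the
# patterns to the separating hyperplane is 1 / ||w||.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Two linearly separable clouds of patterns.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# A large C penalizes misclassification heavily, approximating the
# hard-margin problem: minimize ||w|| subject to correct classification.
clf = LinearSVC(C=1e4, max_iter=100_000).fit(X, y)

w = clf.coef_.ravel()
print(f"geometric margin ~ {1.0 / np.linalg.norm(w):.3f}")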


2011 ◽  
Vol 23 (10) ◽  
pp. 2713-2729 ◽  
Author(s):  
Guohui Song ◽  
Haizhang Zhang

A typical approach to estimating the learning rate of a regularized learning scheme is to bound the total error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem has the advantage of discarding the hypothesis error from the sum automatically. Following this direction, we illustrate how reproducing kernel Banach spaces with the ℓ1 norm can be applied to improve the learning rate estimate of ℓ1-regularization in machine learning.
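Concretely, with the square loss the ℓ1-RKBS scheme reduces, by the linear representer theorem, to a lasso over the representer coefficients: the minimizer has the form f = Σ_j c_j K(·, x_j), so one fits c with an ℓ1 penalty. The following is a sketch under assumed choices of kernel, data, and regularization parameter, not the paper's own experiment.

# l1-regularized learning over a kernel dictionary: by the linear
# representer theorem the problem reduces to a lasso on the Gram matrix.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (100, 1))
y = np.sin(3.0 * X).ravel() + 0.1 * rng.standard_normal(100)

# Gaussian Gram matrix K[i, j] = exp(-(x_i - x_j)^2 / (2 * 0.2^2)).
K = np.exp(-((X - X.T) ** 2) / (2 * 0.2**2))

# alpha plays the role of the regularization parameter; the l1 penalty
# drives most representer coefficients c_j to zero.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(K, y)
print(f"{np.count_nonzero(model.coef_)} of {model.coef_.size} coefficients nonzero")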


2019 ◽  
Vol 19 (01) ◽  
pp. 125-146
Author(s):  
Liren Huang ◽  
Chunguang Liu ◽  
Lulin Tan ◽  
Qi Ye

In this paper, we generalize the representer theorems in Banach spaces by means of nonsmooth analysis. The generalized representer theorems guarantee that regularized learning models can be constructed from nonconvex loss functions, generalized training data, and general Banach spaces that may be nonreflexive, non-strictly convex, and nonsmooth. In particular, the generalized representer theorems yield the sparse representations of regularized learning in 1-norm reproducing kernel Banach spaces.
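The sparsity asserted by such theorems can be seen mechanically in a small handwritten solver for the representer-coefficient problem min_c ½||Kc − y||² + λ||c||₁: the soft-thresholding step of proximal gradient descent (ISTA) is what zeroes coefficients. The kernel, data, and parameter choices below are illustrative assumptions.

# ISTA for min_c 0.5 * ||K c - y||^2 + lam * ||c||_1; soft thresholding
# produces the sparse representer coefficients promised by the l1 theory.
import numpy as np

def ista(K, y, lam, n_iter=2000):
    L = np.linalg.norm(K, 2) ** 2      # Lipschitz constant of the gradient
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        z = c - K.T @ (K @ c - y) / L  # gradient step on the square loss
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of l1
    return c

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 80)
y = np.sign(x) + 0.05 * rng.standard_normal(80)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)  # Gaussian Gram matrix

c = ista(K, y, lam=0.5)
print(f"sparse representer: {np.count_nonzero(c)} of {c.size} nonzero")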


2012 ◽  
Vol 10 (3) ◽  
pp. 1401-1417 ◽  
Author(s):  
Antonio G. García ◽  
Alberto Portal
