Learning Rates for ℓ1-Regularized Kernel Classifiers
We consider a family of classification algorithms generated by a kernel-based regularization scheme with an ℓ1-regularizer and a convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition consists of the approximation error, the hypothesis error, and the sample error. We apply novel techniques to estimate the hypothesis error and the sample error. Learning rates are then derived under assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
2011, Vol. 09 (04), pp. 395-408
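For readers unfamiliar with this setting, the scheme is typically of the coefficient-based form sketched below. The notation is a standard formulation from the ℓ1 coefficient-regularization literature and is our assumption, not quoted from the paper. Given a sample $\mathbf z = \{(x_i, y_i)\}_{i=1}^{m}$, a kernel $K$, a convex loss $\phi$, and a regularization parameter $\lambda > 0$,
\[
f_{\mathbf z} \;=\; \operatorname*{arg\,min}_{f \in \mathcal H_{K,\mathbf z}} \Bigl\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr) \;+\; \lambda \sum_{i=1}^{m} |\alpha_i| \Bigr\},
\qquad
\mathcal H_{K,\mathbf z} = \Bigl\{ \textstyle\sum_{i=1}^{m} \alpha_i K(\cdot, x_i) : \alpha \in \mathbb R^m \Bigr\},
\]
and the produced classifier is $\operatorname{sgn}(f_{\mathbf z})$. Because the hypothesis space $\mathcal H_{K,\mathbf z}$ depends on the sample, the excess $\phi$-risk of $f_{\mathbf z}$ is usually bounded by a three-term decomposition of the form
\[
\mathcal E(f_{\mathbf z}) - \mathcal E(f_\phi) \;\le\; \underbrace{\mathcal S(\mathbf z, \lambda)}_{\text{sample error}} \;+\; \underbrace{\mathcal P(\mathbf z, \lambda)}_{\text{hypothesis error}} \;+\; \underbrace{\mathcal D(\lambda)}_{\text{approximation error}},
\]
where the hypothesis error $\mathcal P(\mathbf z, \lambda)$ accounts for replacing a fixed hypothesis space by the sample-dependent one; a comparison theorem then converts the excess $\phi$-risk bound into a bound on the excess misclassification error.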