Kernel Machines
Recently Published Documents

Total documents: 185 (five years: 5)
H-index: 22 (five years: 0)

2021, Vol. 68, pp. 54-66. Author(s): Lynn Houthuys, Johan A.K. Suykens.

2021, Vol. 135, pp. 177-191. Author(s): Arun Pandey, Joachim Schreurs, Johan A.K. Suykens.

2021, Vol. 26, pp. 100721. Author(s): Laurent Chanel Djoufack Nkengfack, Daniel Tchiotsop, Romain Atangana, Beaudelaire Saha Tchinda, Valérie Louis-Door, ...

2021, pp. 487-496. Author(s): Daniel Winter, Ang Bian, Xiaoyi Jiang.

Author(s): Nair K. Nikhitha, A. L. Afzal, S. Asharaf.

2020, Vol. 32(1), pp. 97-135. Author(s): Shiyu Duan, Shujian Yu, Yunmei Chen, Jose C. Principe.

We propose a novel family of connectionist models based on kernel machines and consider the problem of learning, layer by layer, a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to “kernelize” (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons. In terms of learning, when a feedforward deep architecture is learned in a supervised setting, all components normally must be trained simultaneously using backpropagation (BP), since there are no explicit targets for the hidden layers (Rumelhart, Hinton, & Williams, 1986). We consider, without loss of generality, the two-layer case and present a general framework that explicitly characterizes a target for the hidden layer that is optimal for minimizing the objective function of the network. This characterization then makes possible a purely greedy training scheme that learns one layer at a time, starting from the input layer. We provide instantiations of the abstract framework under certain architectures and objective functions. Based on these instantiations, we present a layer-wise training algorithm for an ℓ-layer feedforward network for classification, where ℓ can be arbitrary. This algorithm can be given an intuitive geometric interpretation that makes the learning dynamics transparent. Empirical results are provided to complement our theory. We show that the kernelized networks, trained layer-wise, compare favorably with classical kernel machines as well as other connectionist models trained by BP. We also visualize the inner workings of the greedy kernelized models to validate our claim about the transparency of the layer-wise algorithm.
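To make the idea concrete, the sketch below is one minimal, illustrative reading of the abstract in Python: each layer is a bank of kernel machines fit by kernel ridge regression, and the two layers are trained greedily, one at a time, with the first layer frozen before the second is fit. The kernel choice (Gaussian), the ridge solver, and the stand-in hidden-layer target (one-hot label codes) are assumptions made purely for illustration; the paper instead derives an optimal target for the hidden layer, which the abstract does not spell out, so this is not the authors' algorithm.

# Minimal sketch of a "kernelized" two-layer network trained greedily,
# layer by layer. Kernel, solver, and hidden-layer target are assumptions.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Pairwise Gaussian (RBF) kernel between rows of A and rows of B."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

class KernelLayer:
    """A layer whose units are kernel machines: each output dimension is
    f_j(x) = sum_i alpha_ij * k(x, x_i), fit by kernel ridge regression."""
    def __init__(self, gamma=1.0, ridge=1e-3):
        self.gamma, self.ridge = gamma, ridge

    def fit(self, X, T):
        # Solve (K + ridge * I) alpha = T for the dual coefficients.
        self.X_train = X
        K = gaussian_kernel(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + self.ridge * np.eye(len(X)), T)
        return self

    def transform(self, X):
        return gaussian_kernel(X, self.X_train, self.gamma) @ self.alpha

# Greedy layer-wise training on a toy two-class problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # XOR-like labels
T = np.eye(2)[y]                          # one-hot codes as a stand-in target

# Layer 1: fit against the stand-in target, then frozen.
layer1 = KernelLayer(gamma=0.5).fit(X, T)
H = layer1.transform(X)

# Layer 2: fit on the frozen hidden representation H.
layer2 = KernelLayer(gamma=0.5).fit(H, T)
pred = layer2.transform(H).argmax(1)
print("training accuracy:", (pred == y).mean())

The greedy schedule is the point of the sketch: the first layer is fit and frozen before the second layer is trained on its output, so no error signal is ever propagated backward between layers, in contrast to end-to-end BP.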

