hypothesis class
Recently Published Documents


TOTAL DOCUMENTS: 3 (five years: 0)

H-INDEX: 1 (five years: 0)

2020, Vol 32 (1), pp. 97-135
Author(s): Shiyu Duan, Shujian Yu, Yunmei Chen, Jose C. Principe

We propose a novel family of connectionist models based on kernel machines and consider the problem of learning layer by layer a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to “kernelize” (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons. In terms of learning, when learning a feedforward deep architecture in a supervised setting, one needs to train all the components simultaneously using backpropagation (BP) since there are no explicit targets for the hidden layers (Rumelhart, Hinton, & Williams, 1986). We consider without loss of generality the two-layer case and present a general framework that explicitly characterizes a target for the hidden layer that is optimal for minimizing the objective function of the network. This characterization then makes possible a purely greedy training scheme that learns one layer at a time, starting from the input layer. We provide instantiations of the abstract framework under certain architectures and objective functions. Based on these instantiations, we present a layer-wise training algorithm for an [Formula: see text]-layer feedforward network for classification, where [Formula: see text] can be arbitrary. This algorithm can be given an intuitive geometric interpretation that makes the learning dynamics transparent. Empirical results are provided to complement our theory. We show that the kernelized networks, trained layer-wise, compare favorably with classical kernel machines as well as other connectionist models trained by BP. We also visualize the inner workings of the greedy kernelized models to validate our claim on the transparency of the layer-wise algorithm.
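The two ingredients of the abstract — a "kernelized" layer (kernel machines in place of neurons) and greedy layer-wise fitting against per-layer targets — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the RBF kernel, ridge-regression fit, random targets, and all names (`KernelLayer`, `rbf_kernel`, the `centers`/`gamma`/`reg` parameters) are assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(X, centers, gamma=1.0):
    # Pairwise squared distances between inputs and centers, then RBF.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelLayer:
    """One 'kernelized' layer: each output unit is a kernel machine
    over the layer's input, f_j(x) = sum_i W[i, j] * k(x, c_i)."""
    def __init__(self, centers, gamma=1.0):
        self.centers, self.gamma, self.W = centers, gamma, None

    def fit(self, X, T, reg=1e-3):
        # Ridge regression toward this layer's target representation T
        # (the paper characterizes an optimal hidden-layer target; here
        # T is just a placeholder array standing in for such a target).
        K = rbf_kernel(X, self.centers, self.gamma)
        self.W = np.linalg.solve(K.T @ K + reg * np.eye(K.shape[1]), K.T @ T)
        return self

    def forward(self, X):
        return rbf_kernel(X, self.centers, self.gamma) @ self.W

# Greedy layer-wise training: fit a layer against its target, freeze
# it, then train the next layer on the frozen layer's outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
T1 = rng.normal(size=(100, 8))   # hypothetical hidden-layer target
T2 = rng.normal(size=(100, 2))   # output target (e.g., class scores)

layer1 = KernelLayer(X[:20]).fit(X, T1)   # first 20 points as centers
H = layer1.forward(X)
layer2 = KernelLayer(H[:20]).fit(H, T2)
Y = layer2.forward(H)
```

Because each layer is fit in closed form against an explicit target, no error signal ever needs to be backpropagated through earlier layers, which is the sense in which training is "purely greedy."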


2019, Vol 5 (2), pp. 83-89
Author(s): Muhammad Athoillah

Handwritten text recognition is the ability of a system to recognize human handwriting and convert it into digital text. It is a form of classification problem, so a classification algorithm such as Nearest Neighbor (NN) is needed to solve it. The NN algorithm is simple yet provides good results. In contrast with other algorithms, which are usually defined by some hypothesis class, the NN algorithm assigns a label to any test point without searching for a predictor within some predefined class of functions. Arabic is one of the most important languages in the world. Recognizing Arabic characters is an interesting research problem, not only because Arabic is the primary language used in Islam but also because research on it still lags far behind research on recognizing handwritten Latin or Chinese characters. Against this background, this work built a system to recognize handwritten Arabic characters from an image dataset using the NN algorithm. The results showed that the proposed method recognizes the characters very well, as confirmed by its average precision, recall, and accuracy.
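The NN idea the abstract describes — label each test point with the label of its closest training point, with no predefined class of predictor functions — can be sketched in a few lines. This is a generic 1-NN illustration under assumed toy data (the 2-D vectors stand in for flattened character images); the function name and distance choice (Euclidean) are assumptions, not details from the paper.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, X_test):
    """1-NN: give each test point the label of its nearest
    training point under Euclidean distance."""
    # Pairwise squared distances, shape (n_test, n_train).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

# Toy stand-ins for flattened character images and their labels.
X_train = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y_train = np.array([0, 1, 0, 1])
X_test = np.array([[0.1, 0.2], [0.9, 0.8]])

preds = nearest_neighbor_predict(X_train, y_train, X_test)
print(preds)  # [0 1]
```

Note that "training" here is just memorizing the data: all the work happens at prediction time, which is why NN needs no search over a hypothesis class.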

