Input space versus feature space in kernel-based methods

1999 ◽  
Vol 10 (5) ◽  
pp. 1000-1017 ◽  
Author(s):  
B. Schölkopf ◽  
S. Mika ◽  
C.J.C. Burges ◽  
P. Knirsch ◽  
K.-R. Müller ◽  
...  
2016 ◽  
Vol 25 (3) ◽  
pp. 417-429
Author(s):  
Chong Wu ◽  
Lu Wang ◽  
Zhe Shi

Abstract
For financial distress prediction models based on support vector machines, there is no established theory for choosing a proper kernel function in a data-dependent way. This paper proposes a modified kernel function that effectively enhances classification accuracy. We apply an information-geometric method, modifying a kernel based on the structure of the Riemannian geometry it induces in the input space. A conformal transformation of the kernel from the input space to a higher-dimensional feature space enlarges volume elements locally near the support vectors situated around the classification boundary and reduces the number of support vectors. This paper takes the Gaussian radial basis function as the internal kernel. Additionally, it combines this method with the theories of standard regularization and non-dimensionalization to construct the new model. In the empirical analysis, the paper adopts financial data of Chinese listed companies and uses five groups of experiments with different parameters to compare classification accuracy. We conclude that the modified kernel function model effectively reduces the number of support vectors and improves classification accuracy.
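A conformal kernel transformation of this kind can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the factor function c(x), the parameter tau, and the RBF width gamma are assumed, illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conformal_factor(X, support_vectors, tau=1.0):
    # c(x) = sum_i exp(-||x - sv_i||^2 / (2 tau^2)): largest near the
    # support vectors, i.e. near the classification boundary.
    d2 = ((X[:, None, :] - support_vectors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * tau ** 2)).sum(axis=1)

def modified_kernel(X, Y, support_vectors, gamma=1.0, tau=1.0):
    # Conformal transformation K~(x, y) = c(x) K(x, y) c(y): it rescales
    # the Riemannian metric the kernel induces in the input space,
    # enlarging volume elements where c is large (near the boundary).
    c_x = conformal_factor(X, support_vectors, tau)
    c_y = conformal_factor(Y, support_vectors, tau)
    return c_x[:, None] * rbf_kernel(X, Y, gamma) * c_y[None, :]
```

In practice the support vectors would come from a first SVM training pass, after which the SVM is retrained with the modified kernel.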


Author(s):  
Daniel Cremers ◽  
Timo Kohlberger

We present a method of density estimation based on an extension of kernel PCA to a probabilistic framework. Given a set of sample data, we assume that the data form a Gaussian distribution, not in the input space but after a nonlinear mapping to an appropriate feature space. As with most kernel methods, this mapping can be carried out implicitly. Due to the strong nonlinearity, the corresponding density estimate in the input space is highly non-Gaussian. Numerical experiments on 2-D data sets indicate that the method can approximate essentially arbitrary distributions. Beyond the 2-D examples, we apply the method to high-dimensional data given by various silhouettes of a 3-D object. The shape density estimated by our method is then used as a statistical shape prior in variational image segmentation. Experiments demonstrate that the resulting segmentation process can incorporate highly accurate knowledge of a large variety of complex real-world shapes, making it robust to misleading information due to noise, clutter, and occlusion.
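The core idea can be sketched with scikit-learn's KernelPCA (an assumed implementation choice, not the authors' own code): under a Gaussian model in feature space, a point's "energy" is its squared feature-space distance to the leading principal subspace, so low energy corresponds to high density, even for a highly non-Gaussian input-space distribution such as a ring. The data, kernel width, and number of components below are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# Ring-shaped 2-D data: highly non-Gaussian in the input space.
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

gamma = 2.0
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=gamma).fit(X)
mean_k = rbf_kernel(X, X, gamma=gamma).mean()

def energy(X_new):
    # Squared feature-space distance from the centered image of x to the
    # principal subspace; for the RBF kernel, k(x, x) = 1.
    K = rbf_kernel(X_new, X, gamma=gamma)
    centered_norm2 = 1.0 - 2.0 * K.mean(axis=1) + mean_k
    Z = kpca.transform(X_new)  # projections onto unit eigenvectors
    return centered_norm2 - (Z ** 2).sum(axis=1)

# Low energy = high density: points on the ring score far below the center.
ring_energy = energy(X).mean()
center_energy = energy(np.zeros((1, 2)))[0]
```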


Author(s):  
Minghe Sun

As machine learning techniques, support vector machines are quadratic programming models and represent a recent, revolutionary development in classification analysis. Primal and dual formulations of support vector machine models for both two-class and multi-class classification are discussed. The dual formulations in high-dimensional feature spaces using inner-product kernels are emphasized. Nonlinear classification or discriminant functions in high-dimensional feature spaces can be constructed through inner-product kernels without actually mapping the data from the input space to the high-dimensional feature spaces. Furthermore, the size of the dual formulation is independent of the dimension of the input space and of the kernels used. Two illustrative examples, one for two-class and the other for multi-class classification, demonstrate the formulations of these SVM models.
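The dual form can be checked numerically. The sketch below (using scikit-learn's SVC on synthetic two-class data, an illustrative choice rather than the paper's examples) reconstructs the decision function f(x) = Σ_i α_i y_i k(x_i, x) + b purely from kernel evaluations over the support vectors, with no explicit feature-space mapping:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two classes that are not linearly separable in the 2-D input space.
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)

gamma = 1.0
clf = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X, y)

# The dual solution is expressed entirely through kernel evaluations over
# the support vectors: f(x) = sum_i alpha_i y_i k(x_i, x) + b.
sv = clf.support_vectors_
coef = clf.dual_coef_.ravel()  # alpha_i * y_i per support vector
K = np.exp(-gamma * ((X[:, None] - sv[None]) ** 2).sum(-1))
manual = K @ coef + clf.intercept_
```

Here `manual` matches `clf.decision_function(X)`: the dual involves only the n-by-n kernel matrix, regardless of the input dimension or the kernel chosen.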


2020 ◽  
Vol 7 (2) ◽  
pp. 85-91
Author(s):  
Riko Saragih ◽  
Tio Dewantho Sunoto ◽  
Judea Janoto Jarden ◽  
Dzakki Muhammad Hanif

Kernel functions can be applied to solve the problem of non-linear image data, so that the data can be separated linearly by a hyperplane, by mapping from the input space to the feature space to raise the dimensionality. This article discusses the improvement in recognition accuracy obtained by applying multiple kernels in a PCA-based program, using the linear, polynomial, and Gaussian kernel types, for recognition of faces under varying illumination. The matching or identity-recognition step is carried out with the SVM method. The improvement obtained from applying multiple kernels is compared against applying a single kernel to measure the gain in recognition accuracy. Based on the results, the average accuracy improvement obtained for face recognition under illumination variation is 10.5% compared with a single kernel.
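One simple way to realize a multiple-kernel pipeline of this kind is sketched below; the combination weights, kernel parameters, and synthetic stand-in for face data are illustrative assumptions, not the article's setup. Since a sum of valid kernels is itself a valid kernel, the linear, polynomial, and Gaussian kernels can be summed and passed to kernel PCA as a precomputed kernel, with an SVM classifying the extracted features:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

def multi_kernel(X, Y):
    # Unweighted sum of linear, polynomial, and Gaussian kernels; a sum
    # of positive-definite kernels is itself positive definite.
    return (linear_kernel(X, Y)
            + polynomial_kernel(X, Y, degree=2, gamma=0.1, coef0=1.0)
            + rbf_kernel(X, Y, gamma=0.1))

rng = np.random.default_rng(0)
# Stand-in for flattened face images: three "identities", 20 samples each.
centers = rng.normal(scale=3.0, size=(3, 16))
y = np.repeat(np.arange(3), 20)
X = centers[y] + rng.normal(size=(60, 16))

# Kernel PCA on the combined kernel, then SVM on the extracted features.
features = KernelPCA(n_components=10, kernel="precomputed").fit_transform(
    multi_kernel(X, X))
clf = SVC(kernel="linear").fit(features, y)
```

In a real face-recognition setting the per-kernel weights would be tuned, e.g. by cross-validation on the illumination-varied training images.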


1987 ◽  
Vol 52 (3) ◽  
pp. 294-299 ◽  
Author(s):  
Michael A. Primus

Variable success in audiometric assessment of young children with operant conditioning indicates the need for systematic examination of commonly employed techniques. The current study investigated response and reinforcement features of two operant discrimination paradigms with normal 17-month-old children. Findings indicated more responses prior to the onset of habituation when the response task was based on complex central processing skills (localization and coordination of auditory/visual space) rather than simple detection. Use of animation in toy reinforcers resulted in more than a twofold increase in the number of subject responses. Results showed no significant difference in response conditioning rate or consistency across the response tasks and forms of reinforcement examined.


2012 ◽  
Author(s):  
Tom Busey ◽  
Chen Yu ◽  
Francisco Parada ◽  
Brandi Emerick ◽  
John Vanderkolk

2010 ◽  
Author(s):  
Sean Gallagher ◽  
Jonisha Pollard ◽  
William L. Porter