Selection of Support Vector Candidates Using Relative Support Distance for Sustainability in Large-Scale Support Vector Machines

2020 ◽  
Vol 10 (19) ◽  
pp. 6979
Author(s):  
Minho Ryu ◽  
Kichun Lee

Support vector machines (SVMs) are well-known classifiers with superior classification performance. An SVM is defined by a hyperplane that separates two classes with the largest margin. Computing this hyperplane, however, requires solving a quadratic programming problem whose storage cost grows with the square of the number of training points and whose time complexity is, in general, proportional to its cube. It is therefore worth studying how to reduce SVM training time without compromising performance, to ensure sustainability in large-scale SVM problems. In this paper, we propose a novel data reduction method that shortens training time by combining decision trees with a relative support distance. We apply this new concept, the relative support distance, to select good support vector candidates in each partition generated by the decision trees. The selected candidates improve training speed for large-scale SVM problems. In experiments, we demonstrate that our approach significantly reduces training time while maintaining good classification performance compared with existing approaches.
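The selection idea can be sketched in a few lines of NumPy. This is an illustrative approximation, not the paper's algorithm: the tree stage is mimicked by a single median split, and the "relative support distance" is stood in for by the ratio of a point's distance to its own class centroid over its distance to the opposite centroid, so that points near the class boundary score highest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(3.0, 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

def select_candidates(X, y, keep_frac=0.25):
    # Stand-in score for boundary proximity: distance to own centroid
    # divided by distance to the opposite centroid.  Larger means
    # closer to the boundary, so keep the top-scoring fraction per class.
    if len(np.unique(y)) < 2:
        return np.ones(len(X), dtype=bool)  # no boundary info: keep all
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    score = np.where(y == 0, d0 / d1, d1 / d0)
    keep = np.zeros(len(X), dtype=bool)
    for k in (0, 1):
        idx = np.where(y == k)[0]
        n_keep = max(1, int(keep_frac * len(idx)))
        keep[idx[np.argsort(score[idx])[-n_keep:]]] = True
    return keep

# Mimic the decision-tree stage with a single median split on feature 0,
# then pick support vector candidates inside each partition.
split = X[:, 0] <= np.median(X[:, 0])
mask = np.zeros(len(X), dtype=bool)
for part in (split, ~split):
    sub = select_candidates(X[part], y[part])
    mask[np.where(part)[0][sub]] = True

print(mask.sum(), "of", len(X), "points kept as candidates")
```

An SVM trained on only the masked points sees roughly a quarter of the data, which is the source of the speed-up the abstract describes.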

2013 ◽  
Vol 312 ◽  
pp. 771-776
Author(s):  
Min Juan Zheng ◽  
Guo Jian Cheng ◽  
Fei Zhao

The quadratic programming (QP) problem in the standard support vector machine (SVM) algorithm has high time and space complexity on large-scale problems, which becomes a bottleneck in SVM applications. The Ball Vector Machine (BVM) converts the QP problem of the traditional SVM into a minimum enclosing ball (MEB) problem; solving the MEB problem indirectly yields the QP solution and significantly reduces both time and space complexity. Experiments on five large-scale, high-dimensional data sets show that the BVM and the standard SVM achieve comparable accuracy, but the BVM is faster and requires less space than the standard SVM.
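The MEB primitive that the BVM reduces training to can be approximated cheaply. The sketch below uses the Badoiu-Clarkson iteration (repeatedly stepping the center toward the current farthest point), a standard (1+ε)-approximation; it illustrates the geometry only and omits the BVM's core-set machinery and feature-space mapping, so all names and parameters here are illustrative assumptions.

```python
import numpy as np

def approx_meb(points, n_iter=200):
    """Badoiu-Clarkson approximation of the minimum enclosing ball:
    start at any point and repeatedly move the center a shrinking
    step toward the farthest point.  More iterations tighten the
    (1 + eps) approximation factor on the radius."""
    c = points[0].astype(float).copy()
    for t in range(1, n_iter + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (t + 1)
    r = np.linalg.norm(points - c, axis=1).max()
    return c, r

rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3))
center, radius = approx_meb(pts)
print("center ~", np.round(center, 2), "radius ~", round(radius, 2))
```

Each iteration touches only one farthest-point query, which is why MEB-style training avoids storing the full quadratic-programming problem.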


2012 ◽  
Author(s):  
N. M. Zaki ◽  
S. Deris ◽  
K. K. Chin

Training a Support Vector Machine requires the solution of a very large quadratic programming problem. In order to study the influence of a particular quadratic programming solver on the Support Vector Machine, three different quadratic programming solvers are used to perform the Support Vector Machine training. The performance of these solvers in terms of execution time and solution quality is analyzed and compared. A practical method to reduce the training time is also investigated. Key words: Support vector machines, quadratic programming
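To make the quadratic programme concrete, here is a toy solver for the bias-free linear SVM dual using projected gradient ascent. This is a minimal sketch for illustration, not one of the three solvers benchmarked above; the data, step size, and iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(1.0, 0.5, (50, 2))])
y = np.repeat([-1.0, 1.0], 50)

# Dual of a linear SVM without a bias term:
#   max_a  sum(a) - 0.5 * a^T Q a,   subject to 0 <= a_i <= C,
# where Q_ij = y_i * y_j * <x_i, x_j>.
Q = (y[:, None] * X) @ (y[:, None] * X).T
C = 1.0

def solve_dual(Q, C, n_iter=500):
    # Projected gradient ascent: step along the dual gradient,
    # then clip back into the box constraints.  The step size
    # 1/lambda_max keeps the iteration stable.
    lr = 1.0 / (np.linalg.eigvalsh(Q).max() + 1e-12)
    a = np.zeros(len(Q))
    for _ in range(n_iter):
        a = np.clip(a + lr * (1.0 - Q @ a), 0.0, C)
    return a

a = solve_dual(Q, C)
w = ((a * y)[:, None] * X).sum(axis=0)  # recover primal weights from duals
acc = float(np.mean(np.sign(X @ w) == y))
print("train accuracy:", acc)
```

The dense n-by-n matrix `Q` is exactly the object whose quadratic growth makes solver choice matter at scale.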


2021 ◽  
Author(s):  
M. Tanveer ◽  
A. Tiwari ◽  
R. Choudhary ◽  
M. A. Ganaie

2012 ◽  
Vol 9 (3) ◽  
pp. 33-43 ◽  
Author(s):  
Paulo Gaspar ◽  
Jaime Carbonell ◽  
José Luís Oliveira

Summary
Classifying biological data is a common task in the biomedical context. Predicting the class of new, unknown information allows researchers to gain insight and make decisions based on the available data. Using classification methods also often implies choosing the best parameters for optimal class separation, and the number of parameters can be large in biological datasets. Support Vector Machines provide a well-established and powerful classification method for analysing data and finding the minimal-risk separation between different classes. Finding that separation strongly depends on the available feature set and on the tuning of hyper-parameters. Techniques for feature selection and SVM parameter optimization are known to improve classification accuracy, and the literature on them is extensive. In this paper, we review the strategies used to improve the classification performance of SVMs and perform our own experiments to study the influence of features and hyper-parameters in the optimization process, using several well-known kernels.
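The hyper-parameter tuning the review discusses can be illustrated with a small grid search. The sketch below pairs an RBF kernel with a bias-free SVM dual solved by projected gradient (an assumption made for self-containment, not the authors' setup) and scores (gamma, C) pairs on a holdout split of a toy nonlinear dataset.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy nonlinear problem: label points by whether they fall inside a circle.
X = rng.uniform(-2, 2, (300, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.2, 1.0, -1.0)

def rbf(A, B, gamma):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(K, y, C, n_iter=300):
    # Projected gradient ascent on the bias-free kernel SVM dual.
    Q = y[:, None] * y[None, :] * K
    lr = 1.0 / (np.linalg.eigvalsh(Q).max() + 1e-12)
    a = np.zeros(len(y))
    for _ in range(n_iter):
        a = np.clip(a + lr * (1.0 - Q @ a), 0.0, C)
    return a

def holdout_acc(gamma, C, n_train=200):
    # Train on the first n_train points, evaluate on the rest.
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    a = fit(rbf(Xtr, Xtr, gamma), ytr, C)
    scores = rbf(Xte, Xtr, gamma) @ (a * ytr)
    return float(np.mean(np.sign(scores) == yte))

# Grid search over (gamma, C): the kind of tuning the review surveys.
grid = [(g, c) for g in (0.1, 1.0, 10.0) for c in (0.1, 1.0, 10.0)]
best = max(grid, key=lambda gc: holdout_acc(*gc))
print("best (gamma, C):", best, "holdout accuracy:", holdout_acc(*best))
```

Even this coarse 3-by-3 grid shows how strongly kernel hyper-parameters drive accuracy; the review's point is that feature selection interacts with this search as well.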

