A Theoretical Analysis of the Limits of Majority Voting Errors for Multiple Classifier Systems

2002 ◽  
Vol 5 (4) ◽  
pp. 333-350 ◽  
Author(s):  
Dymitr Ruta ◽  
Bogdan Gabrys


Author(s):  
S. H. Alizadeh Moghaddam ◽  
M. Mokhtarzade ◽  
S. A. Alizadeh Moghaddam

Abstract. Multiple classifier systems (MCSs) have shown great performance for the classification of hyperspectral images. The requirements for a successful MCS are 1) diversity between ensembles and 2) good classification accuracy of each ensemble. In this paper, we develop a new MCS method based on a particle swarm optimization (PSO) algorithm. First, in each ensemble of the proposed method, called PSO-MCS, PSO identifies a subset of the spectral bands with a high J2 value, a measure of class separability. Then, an SVM classifier is used to classify the input image, applying the features selected in each ensemble. Finally, the classification results of all ensembles are integrated using a majority voting strategy. Benefiting from the PSO algorithm, PSO-MCS selects appropriate features. In addition, because different features are selected in different runs of PSO, diversity between the ensembles is ensured. Experimental results on an AVIRIS Indian Pines image show the superiority of the proposed method over its competitor, the random feature selection method.
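The majority-voting fusion step described above can be sketched as follows. This is a minimal illustration only: the PSO band selection and SVM training stages are omitted, and the per-ensemble label lists are hypothetical predictions, not results from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-ensemble label predictions (one list per classifier)
    into a single label per sample by plurality voting."""
    fused = []
    for sample_votes in zip(*predictions):
        # most_common(1) returns the label with the most votes;
        # ties are broken by first occurrence among the voters.
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Three hypothetical ensemble members voting on four pixels
member_1 = ["corn", "soy",  "corn", "wheat"]
member_2 = ["corn", "corn", "soy",  "wheat"]
member_3 = ["soy",  "soy",  "corn", "wheat"]

print(majority_vote([member_1, member_2, member_3]))
# → ['corn', 'soy', 'corn', 'wheat']
```

In the PSO-MCS setting, each `member_i` would be the output of an SVM trained on a different PSO-selected band subset.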


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1129
Author(s):  
Jędrzej Biedrzycki ◽  
Robert Burduk

A vital aspect of the Multiple Classifier Systems construction process is the integration of the base models. For example, the Random Forest approach uses the majority voting rule to fuse the base classifiers obtained by bagging the training dataset. In this paper, we propose an algorithm that partitions the feature space according to the decision rules encoded in the nodes of each base decision tree. After the feature space is divided, the centroid of each new subspace is determined. These centroids are then used to determine the weights needed in the integration phase, which is based on the weighted majority voting rule. The proposal is compared with other Multiple Classifier System approaches. Experiments on multiple open-source benchmark datasets demonstrate the effectiveness of our method. To discuss the results of our experiments, we use micro- and macro-averaged classification performance measures.
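A weighted majority vote of the kind described above can be sketched as follows. Note that the paper's exact centroid-based weight definition is not reproduced here; this sketch substitutes an illustrative inverse-distance weighting, and the centroids, labels, and sample values are all hypothetical.

```python
import math

def weighted_majority_vote(votes, weights):
    """Weighted plurality voting: each classifier's label vote counts
    with its weight; the label with the largest total weight wins."""
    scores = {}
    for label, w in zip(votes, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

def centroid_weight(sample, centroid):
    # Illustrative weighting: the closer the sample lies to the
    # centroid of the subspace it falls into, the higher the weight.
    return 1.0 / (1.0 + math.dist(sample, centroid))

# Hypothetical: three base trees, each contributing the centroid of
# the subspace containing the sample, voting on one sample.
sample = (0.2, 0.4)
centroids = [(0.2, 0.4), (0.9, 0.9), (0.3, 0.5)]
votes = ["pos", "neg", "pos"]

weights = [centroid_weight(sample, c) for c in centroids]
print(weighted_majority_vote(votes, weights))
# → pos
```

The design intuition is that a base tree whose local region tightly encloses the sample is likely more reliable for it, so its vote is amplified relative to trees whose regions are centered far away.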


Author(s):  
SIMON GÜNTER ◽  
HORST BUNKE

Handwritten text recognition is one of the most difficult problems in the field of pattern recognition. In this paper, we describe our efforts towards improving the performance of state-of-the-art handwriting recognition systems through the use of classifier ensembles. There are many examples of classification problems in the literature where multiple classifier systems increase the performance over single classifiers. Normally, one of the two following approaches is used to create a multiple classifier system. (1) Several classifiers are developed completely independently of each other and combined in a final step. (2) Several classifiers are created out of one prototype classifier by using so-called classifier ensemble creation methods. In this paper, an algorithm that combines both approaches is introduced and used to increase the recognition rate of a hidden Markov model (HMM) based handwritten word recognizer.
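The two construction approaches can be sketched together as follows. This is a toy illustration under stated assumptions: the 1-nearest-neighbour prototype and the data are stand-ins invented for the example, not the HMM recognizer from the paper, and bagging stands in for the unspecified ensemble creation method.

```python
import random

def bagging_variants(train_fn, dataset, n_members, seed=0):
    """Approach (2): derive several classifiers from one prototype by
    training it on bootstrap replicates of the dataset (bagging)."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        replicate = [rng.choice(dataset) for _ in dataset]
        members.append(train_fn(replicate))
    return members

def combine(classifiers, x):
    """Final step: fuse independently built and bagged classifiers
    by plurality vote on their predicted labels."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy prototype classifier: 1-nearest-neighbour on (feature, label) pairs.
def train_1nn(data):
    return lambda x: min(data, key=lambda d: abs(d[0] - x))[1]

data = [(0.0, "a"), (0.1, "a"), (0.9, "b"), (1.0, "b")]
ensemble = bagging_variants(train_1nn, data, n_members=5)
independent = train_1nn(data)          # approach (1): a separately built classifier
print(combine(ensemble + [independent], 0.05))
```

The combined system mixes members generated from one prototype (approach 2) with a classifier built separately (approach 1), which mirrors the hybrid construction the abstract describes.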


Author(s):  
ROMAN BERTOLAMI ◽  
HORST BUNKE

Current multiple classifier systems for unconstrained handwritten text recognition do not provide a straightforward way to utilize language model information. In this paper, we describe a generic method to integrate a statistical n-gram language model into the combination of multiple offline handwritten text line recognizers. The proposed method first builds a word transition network and then rescores this network with an n-gram language model. Experimental evaluation conducted on a large dataset of offline handwritten text lines shows that the proposed approach improves the recognition accuracy over a reference system as well as over the original combination method that does not include a language model.
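The rescoring idea can be sketched as follows. The word transition network is simplified to one slot of alternatives per word position, and the bigram log-probabilities are toy values invented for the example; a real system would use a trained n-gram model and lattice search rather than exhaustive path enumeration.

```python
import math
from itertools import product

# Hypothetical word transition network: one slot per word position,
# each holding (candidate, recognizer log-score) alternatives produced
# by combining several text line recognizers.
network = [
    [("the", -0.1), ("tho", -0.3)],
    [("cat", -0.2), ("cart", -0.4)],
    [("sat", -0.1), ("sad", -0.2)],
]

# Toy bigram log-probabilities standing in for a trained n-gram model.
bigram = {
    ("<s>", "the"): -0.1, ("<s>", "tho"): -3.0,
    ("the", "cat"): -0.2, ("the", "cart"): -1.5,
    ("tho", "cat"): -2.0, ("tho", "cart"): -2.0,
    ("cat", "sat"): -0.3, ("cat", "sad"): -1.0,
    ("cart", "sat"): -1.0, ("cart", "sad"): -1.0,
}

def rescore(network, bigram, lm_weight=1.0):
    """Score every path through the network with recognizer scores plus
    weighted bigram log-probabilities; return the best word sequence."""
    best, best_score = None, -math.inf
    for path in product(*network):
        words = [w for w, _ in path]
        score = sum(s for _, s in path)   # combined recognizer evidence
        prev = "<s>"
        for w in words:
            # Unseen bigrams get a flat penalty in this sketch.
            score += lm_weight * bigram.get((prev, w), -5.0)
            prev = w
        if score > best_score:
            best, best_score = words, score
    return best

print(rescore(network, bigram))
# → ['the', 'cat', 'sat']
```

The `lm_weight` parameter (an assumption here, commonly called the language model scale) balances recognizer evidence against language model plausibility.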

