classifier evaluation
Recently Published Documents

TOTAL DOCUMENTS: 32 (five years: 3)
H-INDEX: 8 (five years: 0)

2021
Author(s): Robert J. Joyce, Edward Raff, Charles Nicholas

Author(s): David Ha, Shigeru Katagiri, Hideyuki Watanabe, Miho Ohsaki

Abstract: This paper proposes a new boundary uncertainty-based estimation method with significantly higher accuracy, scalability, and applicability than our previously proposed boundary uncertainty estimation method. In our previous work, we introduced a new classifier evaluation metric that we termed “boundary uncertainty.” The name comes from evaluating the classifier solely by measuring the equality between class posterior probabilities along the classifier boundary; satisfaction of such equality can be described as “uncertainty” along the boundary. We also introduced a method to estimate this new evaluation metric. By focusing solely on the classifier boundary, boundary uncertainty defines an easier estimation target that can be estimated accurately and directly from a finite training set, without a validation set. Regardless of the dataset, boundary uncertainty lies between 0 and 1, where a value of 1 indicates that probability estimation consistent with the Bayes error has been achieved. We call our previous boundary uncertainty estimation method “Proposal 1” to contrast it with the new method introduced in this paper, which we call “Proposal 2.” Using Proposal 1, we performed successful classifier evaluation on real-world data and supported it with theoretical analysis. However, Proposal 1 suffered from limitations in accuracy, scalability, and applicability owing to the difficulty of locating a classifier boundary in a multidimensional sample space. The novelty of Proposal 2 is that it locally reformulates boundary uncertainty along a single dimension oriented toward the classifier boundary. This convenient reduction is what provides the new method’s significant improvements. In classifier evaluation experiments on Support Vector Machines (SVM) and MultiLayer Perceptrons (MLP), we demonstrate that Proposal 2 offers competitive classifier evaluation accuracy compared to a benchmark Cross-Validation (CV) method, as well as much higher scalability than both CV and Proposal 1.
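The sketch below is a minimal illustration of the underlying idea (equality of class posteriors along the decision boundary), not the paper's Proposal 1 or Proposal 2 estimators: it trains a scikit-learn SVM, keeps the samples closest to its decision boundary, and scores how near their estimated posteriors are to 0.5. The dataset, model, and 5% margin threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Illustrative proxy only: score how close estimated class posteriors are to 0.5
# for samples lying near the SVM decision boundary. This is NOT the authors'
# Proposal 1 or Proposal 2, just a rough sketch of the "posterior equality on
# the boundary" idea.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

margin = np.abs(clf.decision_function(X))               # distance-like score to the boundary
near_boundary = X[margin < np.quantile(margin, 0.05)]   # keep the 5% closest samples (arbitrary cutoff)

posteriors = clf.predict_proba(near_boundary)[:, 1]
# 1.0 when posteriors are exactly 0.5 near the boundary, 0.0 when fully certain
boundary_uncertainty_proxy = float(np.mean(1.0 - 2.0 * np.abs(posteriors - 0.5)))
print(f"boundary uncertainty proxy: {boundary_uncertainty_proxy:.3f}")
```

A value near 1 would mean the estimated posteriors are close to 0.5 along the boundary, which is the kind of quantity the boundary uncertainty metric formalizes.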


2021, Vol 104, pp. 107219
Author(s): Katarzyna Stapor, Paweł Ksieniewicz, Salvador García, Michał Woźniak

Author(s): Peter Flach

This paper gives an overview of some ways in which our understanding of performance evaluation measures for machine-learned classifiers has improved over the last twenty years. I also highlight a range of areas where this understanding is still lacking, leading to ill-advised practices in classifier evaluation. This suggests that in order to make further progress we need to develop a proper measurement theory of machine learning. I then demonstrate by example what such a measurement theory might look like and what kinds of new results it would entail. Finally, I argue that key properties such as classification ability and data set difficulty are unlikely to be directly observable, suggesting the need for latent-variable models and causal inference.
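As a toy illustration of why the choice of evaluation measure matters (my own example, not taken from the paper): on a class-imbalanced problem, a trivial majority-class predictor can outscore a genuinely informative classifier on accuracy while losing badly on balanced accuracy and ROC AUC. The data, classifiers, and thresholds below are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, roc_auc_score

# Toy example: 5% positives. Classifier A always predicts the majority class;
# classifier B produces noisy but informative scores. The measures disagree on
# which one is "better", which is one motivation for a measurement theory of
# classifier evaluation.
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(50, dtype=int), np.zeros(950, dtype=int)])

# Classifier A: trivial majority-class predictor.
scores_a = np.zeros_like(y_true, dtype=float)
pred_a = np.zeros_like(y_true)

# Classifier B: weakly separated but genuinely informative scores.
scores_b = np.concatenate([rng.normal(0.65, 0.15, 50), rng.normal(0.35, 0.15, 950)])
pred_b = (scores_b >= 0.5).astype(int)

for name, pred, scores in [("A (majority)", pred_a, scores_a),
                           ("B (informative)", pred_b, scores_b)]:
    print(name,
          "accuracy=%.3f" % accuracy_score(y_true, pred),
          "balanced=%.3f" % balanced_accuracy_score(y_true, pred),
          "auc=%.3f" % roc_auc_score(y_true, scores))
```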


Author(s): Strauss Carvalho Cunha, Emanuel Mineda Carneiro, Lineu Fernando Stege Mialaret, Luiz Alberto Vieira Dias, Adilson Marques da Cunha

IEEE Access, 2016, Vol 4, pp. 7028-7038
Author(s): Ignacio Martin-Diaz, Daniel Morinigo-Sotelo, Oscar Duque-Perez, Rene De J. Romero-Troncoso
