Expected Bayes Error Rate in Supervised Classification of Spatial Gaussian Data

Informatica ◽  
2011 ◽  
Vol 22 (3) ◽  
pp. 371-381 ◽  
Author(s):  
Kęstutis Dučinskas ◽  
Lijana Stabingienė

In spatial classification it is usually assumed that feature observations, given the labels, are independently distributed. We relax this assumption by proposing a stationary Gaussian random field model for the feature observations. The labels are assumed to follow a Discrete Random Field (DRF) model. A formula for the exact error rate based on the Bayes discriminant function (BDF) is derived. In the case of partial parametric uncertainty (unknown mean parameters and variance), an approximation of the expected error rate associated with the plug-in BDF is also derived. The dependence of the considered error rates on the values of the range and clustering parameters is investigated numerically for training locations that are second-order neighbors of the location of the observation to be classified.
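The paper's exact error rate accounts for spatial correlation between training and test locations; as a minimal sketch of the underlying idea, the classical non-spatial special case is shown below: for two Gaussian populations with common covariance and equal priors, the BDF is linear and the exact Bayes error rate is Φ(−Δ/2), where Δ is the Mahalanobis distance between the means. All function names are illustrative, not from the paper.

```python
import math
import numpy as np

def bayes_discriminant(x, mu0, mu1, Sigma):
    # Linear BDF for N(mu0, Sigma) vs N(mu1, Sigma), equal priors:
    # W(x) = (x - (mu0 + mu1)/2)^T Sigma^{-1} (mu1 - mu0);
    # assign x to population 1 when W(x) > 0.
    Si = np.linalg.inv(Sigma)
    return float((x - 0.5 * (mu0 + mu1)) @ Si @ (mu1 - mu0))

def exact_error_rate(mu0, mu1, Sigma):
    # Exact Bayes error rate Phi(-Delta / 2), where Delta is the
    # Mahalanobis distance between the means under the common covariance.
    d = mu1 - mu0
    delta = math.sqrt(float(d @ np.linalg.inv(Sigma) @ d))
    return 0.5 * (1.0 + math.erf((-delta / 2.0) / math.sqrt(2.0)))
```

In the spatial setting of the paper, Sigma would be built from the Gaussian random field's covariance function evaluated at the training and test locations, which is what makes the error rate depend on the range parameter.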


2012 ◽  
Vol 53 ◽  
Author(s):  
Lina Dreižienė ◽  
Marta Karaliutė

In this paper we use the plug-in Bayes discriminant function (PBDF) to classify spatial Gaussian data into one of two populations specified by different parametric mean models and a common geometric anisotropic covariance function. The PBDF is constructed using ML estimators of the unknown mean and anisotropy-ratio parameters. We focus on the asymptotic approximation of the expected error rate (AER), and our aim is to investigate the effects of two spatial sampling designs (based on increasing-domain and fixed-domain asymptotics) on the AER.
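A plug-in discriminant replaces unknown parameters in the BDF by their estimates. The sketch below, under simplifying assumptions not from the paper, plugs ML estimates of the two means (sample means, ML under a known covariance) into the linear BDF; the paper additionally estimates the anisotropy ratio of the covariance function, which is omitted here.

```python
import numpy as np

def plug_in_bdf(x, X0, X1, Sigma):
    # Plug-in BDF: replace the unknown means mu0, mu1 by their ML
    # estimates (sample means of the training sets X0, X1).
    # Sigma is treated as known here; in the paper the anisotropy
    # ratio of the covariance is also estimated by ML.
    mu0_hat = X0.mean(axis=0)
    mu1_hat = X1.mean(axis=0)
    Si = np.linalg.inv(Sigma)
    return float((x - 0.5 * (mu0_hat + mu1_hat)) @ Si @ (mu1_hat - mu0_hat))
```

Because the estimated means differ from the true ones, the error rate of the PBDF exceeds the exact Bayes error rate; the AER studied in the paper averages this excess over the sampling distribution of the estimators.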


Author(s):  
HEE-JOONG KANG ◽  
SEONG-WHAN LEE

In order to raise class discrimination power by combining multiple classifiers, the upper bound of the Bayes error rate, which is bounded by the conditional entropy of the class given the decisions, should be minimized. Based on this minimization, Wang and Wong proposed only a tree-dependence approximation scheme for the high-dimensional probability distribution of the class and patterns. This paper extends the tree-dependence approximation to higher-order dependencies to improve classification performance, thereby optimally approximating the high-dimensional probability distribution with a product of low-dimensional distributions. A new combination method based on the proposed approximation scheme is then presented and evaluated with classifiers that recognize unconstrained handwritten numerals.
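The conditional-entropy bound on the Bayes error rate can be illustrated on a discrete toy distribution. One standard form is the Hellman-Raviv bound, P_e ≤ H(C | X) / 2 with entropy in bits; the sketch below computes both sides for a joint distribution given as a table. This is an assumed illustration of the bound itself, not of the tree- or higher-order dependence approximation scheme.

```python
import math

def cond_entropy_bits(joint):
    # H(C | X) in bits, for a joint distribution p(x, c) given as a
    # dict {(x, c): probability}.
    px = {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    h = 0.0
    for (x, _), p in joint.items():
        if p > 0.0:
            h -= p * math.log2(p / px[x])
    return h

def bayes_error(joint):
    # Exact Bayes error: sum over x of [p(x) - max_c p(x, c)],
    # i.e. the mass lost by always picking the most probable class.
    px, best = {}, {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        best[x] = max(best.get(x, 0.0), p)
    return sum(px[x] - best[x] for x in px)
```

Minimizing the conditional entropy of the approximated joint distribution therefore tightens an upper bound on the achievable error, which is the rationale behind choosing the dependence structure that best fits the data.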

