split criterion
Recently Published Documents

TOTAL DOCUMENTS: 9 (FIVE YEARS: 0)
H-INDEX: 3 (FIVE YEARS: 0)

Author(s): Klaus Broelemann, Gjergji Kasneci

Machine learning algorithms aim at minimizing the number of false decisions and increasing the accuracy of predictions. However, the high predictive power of advanced algorithms comes at the cost of transparency: state-of-the-art methods, such as neural networks and ensemble methods, result in highly complex models that are difficult to interpret. We propose shallow model trees as a way to combine simple and highly transparent predictive models, achieving higher predictive power without losing the transparency of the underlying models. We present a novel split criterion for model trees that achieves significantly higher predictive power than state-of-the-art model trees while maintaining the same level of simplicity. This approach finds split points that allow the underlying simple models to make better predictions on the corresponding data. In addition, we introduce multiple mechanisms to increase the transparency of the resulting trees.
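The core idea of a model-aware split can be sketched as follows. This is a generic illustration (least-squares linear leaf models, exhaustive threshold search on one feature), not the authors' exact criterion: instead of scoring a split by label purity, each candidate split is scored by how well a simple model fits each resulting partition.

```python
import numpy as np

def leaf_loss(X, y):
    """Squared error of a simple linear model (least squares, with
    intercept) fit on one partition of the data."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

def best_split(X, y, feature, min_leaf=5):
    """Pick the threshold on `feature` that minimizes the combined loss
    of the two child linear models -- the model-aware idea behind
    model-tree splits (as opposed to a purity-based criterion)."""
    order = np.argsort(X[:, feature])
    Xs, ys = X[order], y[order]
    best_loss, best_thr = np.inf, None
    for i in range(min_leaf, len(ys) - min_leaf):
        loss = leaf_loss(Xs[:i], ys[:i]) + leaf_loss(Xs[i:], ys[i:])
        if loss < best_loss:
            best_loss, best_thr = loss, Xs[i, feature]
    return best_loss, best_thr
```

On piecewise-linear data with a breakpoint at 0, this criterion recovers a threshold near 0, where each side is perfectly explained by its own linear model.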


Author(s):  
Georg Krammer

The Andersen LRT uses sample characteristics as split criteria to evaluate Rasch model fit, or to test theory-driven hypotheses about a test. The power and Type I error of a random split criterion were evaluated with a simulation study. Results consistently show that a random split criterion lacks power.


2019
Author(s): Georg Krammer

The Andersen LRT uses sample characteristics as split criteria to evaluate Rasch model fit, or to test theory-driven hypotheses about a test. This simulation study is the first to evaluate the power and Type I error of a random split criterion. Results consistently show that a random split criterion lacks power.
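Why a random split lacks power can be seen with a minimal simulation. This is an illustrative sketch, not the Andersen LRT itself: all numbers (sample size, success probabilities) are hypothetical, and observed proportion correct stands in for an estimated item parameter. A split on the relevant covariate exposes the subgroup difference; a random split averages it away, leaving nothing for a test to detect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data: the item is easier for group 1 than for group 0
# (a simulated group effect, standing in for DIF).
group = rng.integers(0, 2, n)
p_correct = np.where(group == 1, 0.7, 0.5)
resp = rng.random(n) < p_correct

def split_gap(split):
    """Absolute difference in observed difficulty (proportion correct)
    between the two halves defined by `split`."""
    return abs(resp[split == 0].mean() - resp[split == 1].mean())

gap_covariate = split_gap(group)                   # split on the relevant covariate
gap_random = split_gap(rng.integers(0, 2, n))      # random split

# The covariate split recovers roughly the true 0.2 gap;
# the random split mixes both groups into each half, so its gap is near 0.
```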


Author(s): JOAQUÍN ABELLÁN, ANDRÉS R. MASEGOSA

Variable selection methods play an important role in the field of attribute mining. In recent years, several feature selection methods have appeared showing that a set of decision trees learnt from a database can be a useful tool for selecting variables that are relevant and informative with respect to a main class variable. With the Naive Bayes classifier as reference, our aims in this article are twofold: (1) to study which split criterion performs best when a complete decision tree is used to select variables; and (2) to present a filter-wrapper selection method using decision trees built with the best split criterion obtained in (1).
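The filter-wrapper combination can be sketched as follows. This is an illustrative reading, not the authors' exact method: the filter step ranks binary features by information gain (a common decision-tree split criterion), and the wrapper step greedily adds ranked features only while the accuracy of a simple Bernoulli Naive Bayes classifier keeps improving.

```python
import numpy as np

def info_gain(x, y):
    """Information gain of a discrete feature x w.r.t. class y
    (the filter criterion, as used at a decision-tree split)."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    gain = entropy(y)
    for v in np.unique(x):
        mask = x == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

def nb_accuracy(X, y):
    """Training accuracy of a Bernoulli Naive Bayes classifier
    with Laplace smoothing (the wrapper's evaluation function)."""
    classes = np.unique(y)
    probs = np.array([(X[y == c].sum(0) + 1) / ((y == c).sum() + 2)
                      for c in classes])            # P(feature=1 | class)
    priors = np.array([(y == c).mean() for c in classes])
    ll = X @ np.log(probs).T + (1 - X) @ np.log(1 - probs).T + np.log(priors)
    return (classes[ll.argmax(1)] == y).mean()

def filter_wrapper(X, y, k=3):
    """Filter: keep the top-k features by info gain.
    Wrapper: greedy forward selection, adding each ranked feature
    only if it strictly improves Naive Bayes accuracy."""
    ranked = np.argsort([-info_gain(X[:, j], y) for j in range(X.shape[1])])[:k]
    chosen, best = [], 0.0
    for j in ranked:
        acc = nb_accuracy(X[:, chosen + [j]], y)
        if acc > best:
            chosen.append(j)
            best = acc
    return chosen, best
```

On toy data where one feature copies the class label, the filter ranks that feature first and the wrapper keeps it, discarding features that add no accuracy.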

