Decision trees work better than feed-forward back-prop neural nets for a specific class of problems

Author(s):  
Xiaomei Liu ◽  
K.W. Bowyer ◽  
L.O. Hall
1995 ◽  
Vol 06 (04) ◽  
pp. 591-598
Author(s):  
GEORG STIMPFL-ABELE

The efficiency and robustness of feed-forward neural nets in particle searches are studied, using the search for the Standard Model Higgs boson at LEP-200 as an example. Methods to select the most efficient variables, to define standard cuts, and to recognize significant differences between the training-data sample and a test-data sample are presented. The efficiencies of the neural nets are significantly better than those of standard methods.
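The kind of classifier this abstract describes can be illustrated with a minimal sketch: a one-hidden-layer feed-forward net trained by back-propagation to separate two overlapping classes. The Gaussian toy data below merely stands in for the Higgs-search event variables, which are not reproduced here; all names and parameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: two overlapping Gaussian classes playing
# the roles of "background" (label 0) and "signal" (label 1).
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(+1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted signal probability
    d2 = (p - y)[:, None] / len(y)      # cross-entropy output gradient
    d1 = (d2 @ W2.T) * (1 - h**2)       # back-propagated hidden gradient
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)

# Classify by thresholding the network output at 0.5.
p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = ((p > 0.5) == y).mean()
```

A real analysis would additionally apply the variable-selection and train/test consistency checks the abstract mentions; this sketch only shows the underlying feed-forward classifier.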


Author(s):  
M. A. Yurdusev ◽  
M. Firat ◽  
M. Mermer ◽  
M. E. Turan

VLSI Design ◽  
2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
Rida Assaad ◽  
Jose Silva-Martinez

Feed-forward techniques are explored for the design of high-frequency Operational Transconductance Amplifiers (OTAs). For single-stage amplifiers, a recycling folded-cascode OTA presents twice the GBW (197.2 MHz versus 106.3 MHz) and more than twice the slew rate (231.1 V/µs versus 99.3 V/µs) of a conventional folded-cascode OTA for the same load, power consumption, and transistor dimensions. It is demonstrated that the efficiency of the recycling folded-cascode OTA is equivalent to that of a telescopic OTA. As for multistage amplifiers, a No-Capacitor Feed-Forward (NCFF) compensation scheme is discussed which uses a high-frequency pole-zero doublet to obtain greater than 90 dB DC gain, a GBW of 325 MHz, and good phase margin. The settling time of the NCFF topology can be faster than that of OTAs with Miller compensation. Experimental results for the recycling folded-cascode OTA fabricated in TSMC 0.18 µm CMOS, together with results for the NCFF scheme, demonstrate the efficiency and feasibility of the feed-forward schemes.


1993 ◽  
Vol 02 (02) ◽  
pp. 219-234 ◽  
Author(s):  
ROBERT G. REYNOLDS ◽  
JONATHAN I. MALETIC

The Version Space Controlled Genetic Algorithm (VGA) uses the structure of the version space to cache generalizations about the performance history of chromosomes in the genetic algorithm. This cached experience is used to constrain the generation of new members of the genetic algorithm's population. The VGA is shown to be a specific instantiation of a more general framework, Autonomous Learning Elements (ALE). The capabilities of the VGA system are demonstrated using the Boole problem suggested by Wilson [Wilson 1987]. The performance of the VGA is compared to that of decision trees and genetic algorithms. The results suggest that the VGA is able to exploit a set of symbiotic relationships between its components, so that the resulting system performs better than either component individually.


2020 ◽  
Author(s):  
Muhammad Haseeb Arshad ◽  
M. A. Abido

An Overview of Sequential Learning Algorithms for Single Hidden Layer Networks: Current Issues & Future Trends. In this paper, a brief survey of the commonly used sequential-learning algorithms for single-hidden-layer feed-forward neural networks is presented: the different kinds available in the literature to date, how they have developed over the years, and their relative performance. The most important considerations during the design phase of a neural network are its complexity, computational efficiency, maximum training time, and ability to generalize to the problem under study. The sequential learning algorithms are compared against these criteria for single-hidden-layer networks.
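One representative member of the family this survey covers is the OS-ELM style of algorithm: a single hidden layer with fixed random features whose output weights are updated one sample at a time by recursive least squares, never revisiting past data. The sketch below is a minimal illustration of that update, not any specific algorithm from the survey; the network size, target function, and initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single hidden layer with fixed random tanh features; only the output
# weights beta are learned, sequentially, via recursive least squares.
n_in, n_hidden = 1, 20
W = rng.normal(size=(n_in, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(x):
    return np.tanh(x @ W + b)

beta = np.zeros(n_hidden)        # output weights
P = np.eye(n_hidden) * 1e3       # inverse-correlation estimate

def sequential_update(x, t):
    """Absorb one (input, target) pair without storing past samples."""
    global beta, P
    h = hidden(x.reshape(1, -1)).ravel()
    Ph = P @ h
    k = Ph / (1.0 + h @ Ph)          # gain vector
    beta = beta + k * (t - h @ beta) # correct by the prediction error
    P = P - np.outer(k, Ph)          # rank-1 downdate of P

# Stream samples of a toy target function one at a time.
xs = rng.uniform(-3, 3, 300)
for x, t in zip(xs, np.sin(xs)):
    sequential_update(np.array([x]), t)

xt = np.linspace(-3, 3, 100)
pred = hidden(xt.reshape(-1, 1)) @ beta
err = np.mean((pred - np.sin(xt)) ** 2)
```

The per-sample cost is a few matrix-vector products over the hidden dimension, which is what makes this family attractive when training time and memory are the design constraints the survey emphasizes.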


2020 ◽  
Vol 1 (1) ◽  
pp. 1-14
Author(s):  
Yousef Elgimati

The main focus of this paper is the use of resampling techniques to construct predictive models from data, with the goal of identifying the model that produces the best predictions. Bagging, or bootstrap aggregating, is a general method for improving the performance of a given learning algorithm by using a majority vote to combine the outputs of multiple classifiers, each derived from a single base classifier trained on a bootstrap resample of the training set. A bootstrap sample is generated by random sampling with replacement from the original training set. Inspired by the idea of bagging, this study presents an improved method based on a distance function in decision trees, called modified bagging (or weighted bagging). The experimental results show that modified bagging is superior to the usual majority vote; these results are confirmed on both real data and artificial data sets with random noise. The modified bagged classifier performs significantly better than usual bagging at various tree depths for all sample sizes. An interesting observation is that weighted bagging performs somewhat better than usual bagging with stumps.
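The standard bagging procedure the abstract builds on can be sketched in a few lines: fit one base classifier per bootstrap resample, then combine their outputs by majority vote. The sketch below uses decision stumps as the base classifier (echoing the stump comparison above) on toy data; it shows plain majority-vote bagging only, not the paper's distance-weighted variant, and all data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_stump(X, y):
    """Best single-feature threshold split (a decision stump)."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            pred = (X[:, j] > thr).astype(int)
            for cand, flipped in ((pred, False), (1 - pred, True)):
                acc = (cand == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, j, thr, flipped)
    return best[1:]  # (feature index, threshold, polarity)

def stump_predict(stump, X):
    j, thr, flipped = stump
    pred = (X[:, j] > thr).astype(int)
    return 1 - pred if flipped else pred

def bagged_fit(X, y, n_estimators=25):
    """Bagging: fit each stump on a bootstrap resample of the training set."""
    n = len(y)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, n)   # sample n indices with replacement
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def bagged_predict(stumps, X):
    """Majority vote over the ensemble's predictions."""
    votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)

# Toy demonstration on two overlapping Gaussian classes.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])
ensemble = bagged_fit(X, y)
acc = (bagged_predict(ensemble, X) == y).mean()
```

The weighted variant the paper proposes would replace the uniform vote in `bagged_predict` with per-classifier weights derived from a distance function; that weighting is the paper's contribution and is not reproduced here.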

