Variable selection and training set design for particle classification using a linear and a non-linear classifier

2017 ◽  
Vol 173 ◽  
pp. 131-144 ◽  
Author(s):  
Stefan Heisel ◽  
Tijana Kovačević ◽  
Heiko Briesen ◽  
Gerhard Schembecker ◽  
Kerstin Wohlgemuth

2005 ◽
Vol 13 (2) ◽  
pp. 135-143 ◽  
Author(s):  
Pascal Dufour ◽  
Sharad Bhartiya ◽  
Prasad S. Dhurjati ◽  
Francis J. Doyle III

2014 ◽  
Vol 998-999 ◽  
pp. 708-711 ◽
Author(s):  
Ying Zhuo Xiang ◽  
Dong Mei Yang ◽  
Ji Kun Yan

This paper presents a novel approach to categorizing multi-view vehicles against complex backgrounds using only two-dimensional characteristic vectors instead of high-dimensional ones. Vehicles vary widely across models, and changes in viewpoint alter their appearance dramatically, so discriminative characteristics must be chosen as the evidence for categorization. In this paper, we categorize vehicles into two categories: cars and lorries. A line detection method is applied, and the average line length and the number of parallel lines are computed as the two characteristics. A linear classifier is trained using 30 cars and lorries seen from different viewpoints as the training set and tested on 10 additional cars and lorries.
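
The abstract does not specify how the linear classifier is trained; the sketch below assumes a simple perceptron over the two stated features (average line length and number of parallel lines), with made-up feature values standing in for the car and lorry training samples.

```python
# Minimal sketch of the two-feature linear classifier described above.
# The training rule is not given in the abstract; a perceptron is assumed,
# and all feature values below are illustrative placeholders.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Fit w, b so that sign(w @ x + b) separates the two classes."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # yi in {-1, +1}
            if yi * (w @ xi + b) <= 0:    # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Each sample: [average line length (px), number of parallel lines]
X_train = np.array([[14.0, 6], [16.5, 7], [15.2, 5],      # cars
                    [42.0, 18], [39.5, 21], [45.1, 19]])  # lorries
y_train = np.array([-1, -1, -1, +1, +1, +1])              # -1 car, +1 lorry

w, b = train_perceptron(X_train, y_train)
label = np.sign(w @ np.array([40.0, 20]) + b)   # classify a new vehicle
print("lorry" if label > 0 else "car")
```

With only two features, the learned decision boundary is a line in the plane, which is what makes the low-dimensional representation attractive compared with high-dimensional appearance vectors.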


1997 ◽  
Vol 9 (1) ◽  
pp. 1-42 ◽  
Author(s):  
Sepp Hochreiter ◽  
Jürgen Schmidhuber

We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a “flat” minimum of the error function: a large connected region in weight space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to “simple” networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a “good” weight prior. Instead, we have a prior over input-output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second-order derivatives, it has backpropagation's order of complexity. It automatically and effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and “optimal brain surgeon”/“optimal brain damage.”
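
The paper's actual algorithm uses second-order derivatives, which the abstract does not detail; purely as an illustration of the underlying notion of flatness, the sketch below scores a minimum by probing how much the loss rises under small random weight perturbations. The loss functions and weights are toy placeholders, not the authors' method.

```python
# Illustrative sketch of "flatness": a minimum is flat if the loss stays
# nearly constant in a neighborhood of the weights. This Monte Carlo probe
# is a simplified stand-in for the paper's second-order approach.
import numpy as np

def flatness_score(loss_fn, w0, radius=0.05, n_samples=100, seed=0):
    """Mean loss increase over random perturbations on a sphere of the
    given radius; lower scores indicate flatter (by the MDL argument,
    'simpler') minima."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w0)
    increases = []
    for _ in range(n_samples):
        delta = rng.normal(size=w0.shape)
        delta *= radius / np.linalg.norm(delta)  # project onto the sphere
        increases.append(loss_fn(w0 + delta) - base)
    return float(np.mean(increases))

# Toy quadratic losses, both minimized at w = 0: one sharp, one flat.
def sharp_loss(w):
    return 100.0 * np.sum(w ** 2)

def flat_loss(w):
    return 0.01 * np.sum(w ** 2)

w0 = np.zeros(10)
print(flatness_score(sharp_loss, w0))  # large increase: sharp minimum
print(flatness_score(flat_loss, w0))   # tiny increase: flat minimum
```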


2020 ◽  
pp. 105971231989648 ◽  
Author(s):  
David Windridge ◽  
Henrik Svensson ◽  
Serge Thill

We consider the benefits of dream mechanisms – that is, the ability to simulate new experiences based on past ones – in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize “dreaming” as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
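
The abstract describes the operationalized dreaming loop only at a high level; the sketch below is one assumed rendering of it: collect real transitions, roll a learned world model forward to imagine new ones, and cap the imagined-to-real ratio so that, per the failure mode noted above, spurious imagined data cannot dominate the training set. ToyEnv, ToyWorldModel, and policy are hypothetical stand-ins.

```python
# Sketch of a dream-like data-generation loop: an agent augments its real
# experience with transitions hallucinated by its own world model. The cap
# on imagined data is one simple guard against dreams dominating training.
import random

random.seed(0)

class ToyEnv:
    """1-D random-walk task standing in for a real environment."""
    def reset(self):
        return 0
    def step(self, state, action):
        return state + action

class ToyWorldModel:
    """Stand-in for a learned model; here it happens to match the dynamics."""
    def predict(self, state, action):
        return state + action

def policy(state):
    """Placeholder policy: move left or right at random."""
    return random.choice([-1, 1])

def collect_real(env, n_steps):
    """Gather (state, action, next_state) transitions from the real world."""
    s, data = env.reset(), []
    for _ in range(n_steps):
        a = policy(s)
        s_next = env.step(s, a)
        data.append((s, a, s_next))
        s = s_next
    return data

def dream(model, start_states, horizon):
    """Roll the learned model forward from real states to imagine data."""
    imagined = []
    for s in start_states:
        for _ in range(horizon):
            a = policy(s)
            s_next = model.predict(s, a)  # the model's guess, not ground truth
            imagined.append((s, a, s_next))
            s = s_next
    return imagined

def training_batch(real, imagined, max_imagined_ratio=1.0):
    """Mix real and imagined data, capping how much imagined data is used."""
    cap = min(len(imagined), int(len(real) * max_imagined_ratio))
    return real + random.sample(imagined, cap)

real = collect_real(ToyEnv(), n_steps=50)
imagined = dream(ToyWorldModel(), [t[0] for t in real[:10]], horizon=20)
batch = training_batch(real, imagined, max_imagined_ratio=1.0)
print(len(real), len(imagined), len(batch))  # 50 real, 200 imagined, 100 mixed
```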

