Choosing how many options to choose from: Is there such a thing as a desired-set-size?

2008
Author(s):  
Sebastian Hafenbraedl ◽  
Ulrich Hoffrage
2002
Vol 13 (1), pp. 69-83
Author(s):  
Stefan R. Schweinberger ◽  
Thomas Klos ◽  
Werner Sommer

Abstract: We recorded reaction times (RTs) and event-related potentials (ERPs) in patients with unilateral brain lesions during a memory search task. Participants memorized faces or abstract words, which they then had to recognize among new items. The RT deficit found for words in patients with left brain damage (LBD) increased with memory set size, suggesting that their problem relates to memory search. In contrast, the RT deficit found for faces in patients with right brain damage (RBD) was apparently related to perceptual encoding, a conclusion also supported by their reduced P100 ERP component. A late slow wave (720-1720 ms) was enhanced in patients, particularly to words in patients with LBD and to faces in patients with RBD. Thus, the slow wave was largest in the conditions with the most pronounced performance deficits, suggesting that it reflects deficit-related resource recruitment.


2011
Author(s):  
Jeffrey S. Katz ◽  
John F. Magnotti ◽  
Anthony A. Wright

2010
Author(s):  
Lucia Lazarowski ◽  
Rachel Eure ◽  
Mallory Gleason ◽  
Adam Goodman ◽  
Aly Mack ◽  
...  

Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

Abstract: The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate data. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate between classes using small data sets. We examine discriminant ability as a function of data set size in an application area where reliable data are publicly available: the Wisconsin Breast Cancer data set, with nine attributes and one class.
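The core measure underlying the Mahalanobis-Taguchi System is the Mahalanobis distance of a new observation from a reference ("normal") group. This is not the authors' implementation; it is a minimal illustrative sketch of that distance in two dimensions, with synthetic reference points chosen for the example:

```python
# Minimal sketch: Mahalanobis distance of a point from a reference group,
# the core measure in the Mahalanobis-Taguchi System (2-D case for brevity).
def mean(values):
    return sum(values) / len(values)

def mahalanobis_2d(point, group):
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    mx, my = mean(xs), mean(ys)
    n = len(group)
    # Sample covariance matrix entries.
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Invert the 2x2 covariance matrix and evaluate the quadratic form.
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mx, point[1] - my
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return d2 ** 0.5

# Hypothetical "healthy" reference group.
normal = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.9), (1.1, 2.2), (1.05, 2.05)]
print(mahalanobis_2d((1.0, 2.0), normal))  # close to the group mean: small
print(mahalanobis_2d((3.0, 0.0), normal))  # far from the group: large
```

In the full system, distances computed this way are thresholded (and the attribute set pruned via orthogonal arrays) to classify new cases as normal or abnormal.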


2021
Vol 5 (1), pp. 38
Author(s):  
Chiara Giola ◽  
Piero Danti ◽  
Sandro Magnani

Abstract: In the age of AI, companies strive to extract value from data. In the first steps of data analysis, an arduous dilemma scientists face is defining the 'right' quantity of data needed for a given task. In energy management in particular, one of the most thriving applications of AI is optimizing the consumption schedule of energy plant generators. When designing a strategy to improve the generators' schedule, an essential piece of information is the future energy load requested by the plant. This topic, referred to in the literature as load forecasting, has lately gained great popularity. In this paper, the authors highlight the problem of estimating the correct amount of data needed to train prediction algorithms and propose a suitable methodology. The centerpiece of this methodology is the learning curve, a powerful tool for tracking algorithm performance as the training-set size varies. First, a brief review of the state of the art and a short analysis of eligible machine learning techniques are offered. The hypotheses and constraints of the work are then explained, presenting the dataset and the goal of the analysis. Finally, the methodology is elucidated and the results are discussed.
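The learning-curve idea is simple: train on growing subsets of the data and record validation error at each size, so one can see where additional data stops helping. This is not the paper's method or data; it is a minimal sketch using a synthetic linear-regression task and a closed-form least-squares fit:

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def mse(model, xs, ys):
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def learning_curve(train_x, train_y, val_x, val_y, sizes):
    # Train on growing prefixes of the training data; record
    # validation error for each training-set size.
    return [(n, mse(fit_line(train_x[:n], train_y[:n]), val_x, val_y))
            for n in sizes]

# Synthetic task: y = 2x + 1 plus Gaussian noise.
random.seed(0)
def target(x):
    return 2.0 * x + 1.0 + random.gauss(0, 0.5)

train_x = [random.uniform(0, 10) for _ in range(200)]
train_y = [target(x) for x in train_x]
val_x = [random.uniform(0, 10) for _ in range(50)]
val_y = [target(x) for x in val_x]

curve = learning_curve(train_x, train_y, val_x, val_y, [5, 20, 80, 200])
for n, err in curve:
    print(n, round(err, 3))
```

Plotting validation error against training-set size typically shows error flattening out, and the knee of that curve is a natural estimate of "enough" data for the task.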

