A Computational Model which Learns to Selectively Attend in Category Learning

Author(s):
Lingyun Zhang, G.W. Cottrell

2020
Author(s):
Anna Aleksandrovna Ivanova, Matthias Hofer

When learning to partition the world into categories, people rely on a set of assumptions (overhypotheses) about possible category structures. Here we propose that the nature of these overhypotheses depends on the presence of a verbal label associated with a given category. We describe a computational model that demonstrates how labels can either accelerate or hinder category learning, depending on whether or not the prior beliefs imposed by their presence align with the true category structure. This account provides an explanation for the phenomena described in prior experimental work (Lupyan, Rakison, & McClelland, 2007; Brojde, Porter, & Colunga, 2011) that have remained unexplained by other models. Based on these results, we argue that the overhypothesis theory of label effects provides a way to formalize and quantify the effect of language on category learning and to develop a more precise delineation between linguistic and non-linguistic thought.
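The core idea of this account can be illustrated with a minimal Bayesian sketch. This is not the authors' model; the two candidate structures ("shape" vs. "texture"), the prior values, and the likelihoods below are illustrative assumptions. The point it demonstrates is the abstract's claim: a label-induced prior speeds learning when aligned with the true category structure and slows it when misaligned.

```python
# Hypothetical sketch, not the published model: a Bayesian learner whose
# prior over candidate category structures shifts when a label is present.

def posterior(prior, likelihoods):
    """Normalize prior * likelihood over hypotheses."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two candidate structures: categories organized by shape vs. by texture.
# Assumption: a label biases the learner toward shape-based categories.
prior_no_label = {"shape": 0.5, "texture": 0.5}
prior_label    = {"shape": 0.9, "texture": 0.1}

# Likelihood of one observed exemplar under each hypothesis when the
# true structure is texture-based (the label prior is misaligned).
lik = {"shape": 0.2, "texture": 0.8}

p_no_label, p_label = prior_no_label, prior_label
for _ in range(5):  # five training exemplars
    p_no_label = posterior(p_no_label, lik)
    p_label = posterior(p_label, lik)

# The misaligned label prior leaves less posterior mass on the true
# (texture-based) structure after the same amount of evidence.
print(round(p_no_label["texture"], 3), round(p_label["texture"], 3))
```

With an aligned prior the inequality reverses, which is the sense in which labels can either accelerate or hinder learning in this framework.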


2020
Vol 7 (10), pp. 200328
Author(s):
Nadja Althaus, Valentina Gliozzi, Julien Mayor, Kim Plunkett

Recency effects are well documented in the adult and infant literature: recognition and recall memory are better for recently occurring events. We explore recency effects in infant categorization, which does not merely involve memory for individual items, but the formation of abstract category representations. We present a computational model of infant categorization that simulates category learning in 10-month-olds. The model predicts that recency effects outweigh previously reported order effects for the same stimuli. According to the model, infant behaviour at test should depend mainly on the identity of the most recent training item. We evaluate these predictions in a series of experiments with 10-month-old infants. Our results show that infant behaviour confirms the model’s prediction. In particular, at test infants exhibited a preference for a category outlier over the category average only if the final training item had been close to the average, rather than distant from it. Our results are consistent with a view of categorization as a highly dynamic process where the end result of category learning is not the overall average of all stimuli encountered, but rather a fluid representation that moves depending on moment-to-moment novelty. We argue that this is a desirable property of a flexible cognitive system that adapts rapidly to different contexts.
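The dynamic, recency-dominated representation described above can be sketched in a few lines. This is an illustrative toy, not the published infant model: stimuli are points on a single feature dimension, the learning rate and stimulus values are assumptions, and "preference" is stood in for by distance (novelty) from the learned representation.

```python
# Illustrative sketch (assumed values, not the published model): a category
# representation updated as a leaky average with a high learning rate, so
# the most recent training item dominates the final representation.

def train(items, lr=0.9):
    rep = items[0]
    for x in items[1:]:
        rep += lr * (x - rep)   # representation drifts toward each new item
    return rep

def novelty(rep, probe):
    return abs(probe - rep)     # looking preference tracks distance from rep

average, outlier = 0.0, 3.0     # category average vs. category outlier probes
ends_near_average = [1.0, -1.0, 2.0, 0.0]   # final item close to the average
ends_far_from_avg = [1.0, 0.0, -1.0, 2.0]   # final item distant, toward outlier

for seq in (ends_near_average, ends_far_from_avg):
    rep = train(seq)
    # True -> outlier is the more novel probe (preference for the outlier)
    print(novelty(rep, outlier) > novelty(rep, average))
```

Under these assumed values, only the sequence ending near the average yields an outlier preference at test, mirroring the qualitative pattern the abstract reports.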


2019
Author(s):
Paulo F. Carvalho, Robert Goldstone

Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local pressures in category learning, such as the effect of different sequences of study. In this paper we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context), as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). We demonstrate that SAT-M is able to capture the effect of local context and predict not only learning but also learners' attentional patterns during learning.
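The local-context idea can be sketched as a simple attention update. To be clear, the update rule, gain parameter, and stimulus values below are our assumptions for illustration, not the published SAT-M equations: attention to a feature grows when that feature contrasts with the temporally preceding item, so a sequence whose neighbors differ on one feature pulls attention toward it.

```python
# Hedged sketch of the local-context mechanism (assumed rule, not SAT-M's
# published equations): per-feature attention increases with the contrast
# between each item and its immediate predecessor.

def attention_trace(sequence, gain=0.5):
    """sequence: list of feature tuples; returns normalized per-feature attention."""
    n_feats = len(sequence[0])
    att = [1.0 / n_feats] * n_feats           # start with uniform attention
    prev = sequence[0]
    for item in sequence[1:]:
        for f in range(n_feats):
            diff = abs(item[f] - prev[f])     # contrast with the previous item
            att[f] += gain * diff             # changing features draw attention
        total = sum(att)
        att = [a / total for a in att]        # renormalize to sum to 1
        prev = item
    return att

# Feature 0 alternates between neighboring items while feature 1 is
# constant, so attention should shift toward feature 0.
seq = [(0.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]
print(attention_trace(seq))
```

Swapping in a sequence where feature 1 varies instead would shift attention the other way, which is the kind of sequence-dependent attentional pattern the model is designed to capture.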


2000
Author(s):
F. Gregory Ashby, Shawn W. Ell, Elliott M. Waldron

2000
Author(s):
Robin D. Thomas, Melissa A. Lea, Mark D. Hammerly

2013
Author(s):
Joseph Boomer, Alexandria C. Zakrzewski, Jennifer R. Johnston, Barbara A. Church, Robert Musgrave, ...

2009
Author(s):
Michael A. Garcia, Nate Kornell, Robert A. Bjork
