How is the hypothesis space represented? Evidence from young children’s active search and predictions in a multiple-cue inference task.

2021 ◽  
Vol 57 (7) ◽  
pp. 1080-1093
Author(s):  
Angela Jones ◽  
Douglas B. Markant ◽  
Thorsten Pachur ◽  
Alison Gopnik ◽  
Azzurra Ruggeri

To successfully navigate in an uncertain world, one has to learn the relationship between cues (e.g., symptoms) and an outcome (e.g., disease). During this learning, it is sometimes possible to actively manipulate the cue values, allowing one to test hypotheses about this relationship directly. Across two studies, we investigated how 5- to 7-year-olds select cue configurations when learning cue-outcome relationships and how they are guided by representations of the hypothesis space regarding these relationships. In our task, children selected which monster pairs to see running in a race, allowing them to learn how two cues (the color and shape of monsters) predicted the relative speed of the monsters; subsequently, they made predictions about the speed of new monsters. Using computational modeling, we compared several models in their ability to capture children’s responses. We found that young children’s search was most consistent with a model that assumed reliance on a hypothesis space represented in terms of the relative speed of individual monsters. However, when memory aids were provided during search, 7-year-olds were best described by a model that assumes reliance on a more efficient, high-level representation that organizes the hypothesis space based on abstracted cue-outcome relationships. Our results highlight the guiding role of hypothesis-space representations for search during learning, suggest that young children already spontaneously abstract hypothesis-space representations, and provide the first evidence for a shift between search and test in terms of the hypothesis-space structures on which children rely when navigating in an uncertain world.


2017 ◽  
Vol 31 (2) ◽  
pp. 200-208 ◽  
Author(s):  
Emiliano Brunamonti ◽  
Floriana Costanzo ◽  
Anna Mammì ◽  
Cristina Rufini ◽  
Diletta Veneziani ◽  
...  

2001 ◽  
Vol 24 (4) ◽  
pp. 685-686
Author(s):  
Michael D. Lee

While Tenenbaum and Griffiths impressively consolidate and extend Shepard's research in the areas of stimulus representation and generalization, there is a need for complexity measures to be developed to control the flexibility of their “hypothesis space” approach to representation. It may also be possible to extend their concept learning model to consider the fundamental issue of representational adaptation. [Tenenbaum & Griffiths]


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Hongzhi Tong ◽  
Di-Rong Chen ◽  
Fenghong Yang

We consider a class of support vector machine regression (SVMR) algorithms associated with ℓq (1 ≤ q < ∞) coefficient-based regularization and a data-dependent hypothesis space. Compared with the existing literature, we provide a simpler convergence analysis for these algorithms. The novelty of our analysis lies in the estimation of the hypothesis error, which is implemented by setting a stepping stone between the coefficient-regularized SVMR and the classical SVMR. An explicit learning rate is then derived under very mild conditions.
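The coefficient-based scheme described in this abstract can be illustrated with a minimal sketch (this is not the authors' algorithm, and all function names and parameter values below are illustrative assumptions): for q = 1, one learns a kernel expansion f(x) = Σᵢ αᵢ K(x, xᵢ) by minimizing the empirical squared loss plus λ‖α‖₁, here solved with plain ISTA soft-thresholding. Squared loss stands in for the ε-insensitive SVR loss purely for simplicity.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of X1 and X2
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def l1_coef_regularized_fit(X, y, lam=0.01, sigma=1.0, n_iter=2000):
    """ISTA for  min_alpha (1/n)||K alpha - y||^2 + lam * ||alpha||_1,
    i.e. an l1 (q = 1) coefficient-based regularized kernel regression."""
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    # Lipschitz constant of the smooth part's gradient: (2/n) * ||K||_2^2
    L = 2.0 / n * np.linalg.norm(K, 2) ** 2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 / n * K.T @ (K @ alpha - y)
        z = alpha - step * grad
        # Soft-thresholding: proximal operator of the l1 penalty
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return alpha, K

# Usage on a small synthetic regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (30, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=30)
alpha, K = l1_coef_regularized_fit(X, y)
pred = K @ alpha
```

Because the penalty acts on the expansion coefficients α rather than on an RKHS norm, the hypothesis space is data dependent (it is spanned by kernel sections at the observed sample), which is the setting whose hypothesis error the paper analyzes.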


2013 ◽  
pp. 931-931
Author(s):  
Eyke Hüllermeier ◽  
Thomas Fober ◽  
Marco Mernberger
