Learning cognitive maps as structured graphs for vicarious evaluation

2019 ◽  
Author(s):  
Rajeev V. Rikhye ◽  
Nishad Gothoskar ◽  
J. Swaroop Guntupalli ◽  
Antoine Dedieu ◽  
Miguel Lázaro-Gredilla ◽  
...  

Cognitive maps are mental representations of spatial and conceptual relationships in an environment. These maps are critical for flexible behavior as they permit us to navigate vicariously, but their underlying representation learning mechanisms are still unknown. To form these abstract maps, the hippocampus has to learn to separate or merge aliased observations appropriately in different contexts in a manner that enables generalization, efficient planning, and handling of uncertainty. Here we introduce a specific higher-order graph structure, the clone-structured cognitive graph (CSCG), which forms different clones of an observation for different contexts as a representation that addresses these problems. CSCGs can be learned efficiently using a novel probabilistic sequence model that is inherently robust to uncertainty. We show that CSCGs can explain a variety of cognitive map phenomena such as discovering spatial relations from an aliased sensory stream, transitive inference between disjoint episodes of experience, formation of transferable structural knowledge, and shortcut-finding in novel environments. By learning different clones for different contexts, CSCGs explain the emergence of splitter cells and route-specific encoding of place cells observed in maze navigation, and event-specific graded representations observed in lap-running experiments. Moreover, the learning and inference dynamics of CSCGs offer a coherent explanation for a variety of place cell remapping phenomena. By lifting the aliased observations into a hidden space, CSCGs reveal latent modularity that is then used for hierarchical abstraction and planning. Altogether, learning and inference using a CSCG provides a simple unifying framework for understanding hippocampal function, and could be a pathway for forming relational abstractions in artificial intelligence.

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dileep George ◽  
Rajeev V. Rikhye ◽  
Nishad Gothoskar ◽  
J. Swaroop Guntupalli ◽  
Antoine Dedieu ◽  
...  

Cognitive maps are mental representations of spatial and conceptual relationships in an environment, and are critical for flexible behavior. To form these abstract maps, the hippocampus has to learn to separate or merge aliased observations appropriately in different contexts in a manner that enables generalization and efficient planning. Here we propose a specific higher-order graph structure, clone-structured cognitive graph (CSCG), which forms clones of an observation for different contexts as a representation that addresses these problems. CSCGs can be learned efficiently using a probabilistic sequence model that is inherently robust to uncertainty. We show that CSCGs can explain a variety of cognitive map phenomena such as discovering spatial relations from aliased sensations, transitive inference between disjoint episodes, and formation of transferable schemas. Learning different clones for different contexts explains the emergence of splitter cells observed in maze navigation and event-specific responses in lap-running experiments. Moreover, learning and inference dynamics of CSCGs offer a coherent explanation for disparate place cell remapping phenomena. By lifting aliased observations into a hidden space, CSCGs reveal latent modularity useful for hierarchical abstraction and planning. Altogether, CSCG provides a simple unifying framework for understanding hippocampal function, and could be a pathway for forming relational abstractions in artificial intelligence.
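As a concrete illustration of the sequence model behind CSCGs, the sketch below implements a cloned hidden Markov model: each observation symbol owns a fixed set of hidden "clone" states that emit it deterministically, so Baum-Welch only has to re-estimate the transition matrix. This is a minimal reading of the abstract, not the authors' released code; the function name, the EM details, and the toy aliased sequence are our own assumptions.

```python
import numpy as np

def train_cscg(obs, n_obs, n_clones, n_iter=50, seed=0):
    """Cloned-HMM sketch: hidden state s deterministically emits
    the symbol emits[s], so EM only updates the transitions T."""
    rng = np.random.default_rng(seed)
    n_states = n_obs * n_clones
    emits = np.repeat(np.arange(n_obs), n_clones)
    T = rng.random((n_states, n_states)) + 1e-3
    T /= T.sum(1, keepdims=True)
    obs = np.asarray(obs)
    L = len(obs)
    mask = (emits[None, :] == obs[:, None]).astype(float)  # state-symbol compatibility
    for _ in range(n_iter):
        # E-step: scaled forward-backward with deterministic emissions.
        alpha = np.zeros((L, n_states))
        beta = np.ones((L, n_states))
        alpha[0] = mask[0] / mask[0].sum()
        for t in range(1, L):
            a = (alpha[t - 1] @ T) * mask[t]
            alpha[t] = a / a.sum()
        for t in range(L - 2, -1, -1):
            b = T @ (beta[t + 1] * mask[t + 1])
            beta[t] = b / b.sum()
        # M-step: accumulate expected transition counts, then row-normalize.
        xi = np.zeros((n_states, n_states))
        for t in range(L - 1):
            x = (alpha[t][:, None] * T) * (mask[t + 1] * beta[t + 1])[None, :]
            xi += x / x.sum()
        T = xi / xi.sum(1, keepdims=True).clip(1e-12)
    return T, emits

# Toy aliased corridor: symbol 0 recurs in two different contexts,
# which the learned transitions disambiguate via separate clones.
T, emits = train_cscg([0, 1, 0, 2] * 50, n_obs=3, n_clones=2)
```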


1983 ◽  
Vol 77 (5) ◽  
pp. 195-198 ◽  
Author(s):  
James F. Herman ◽  
Therese G. Herman ◽  
Steven P. Chatman

Congenitally blind subjects (mean age = 17:2) haptically explored a subset of the spatial relations among four objects on a tabletop. They were then asked to walk all the paths connecting the objects in a large-scale environment. Subjects were able to deduce, with a fair degree of accuracy, the overall arrangement of locations from any point in the large-scale environment. It is argued that tactual maps could be used to introduce visually impaired individuals to the general, rather than specific, relationships among objects in a large-scale environment.


2001 ◽  
Vol 24 (5) ◽  
pp. 882-882
Author(s):  
Eric Chown ◽  
Lashon B. Booker ◽  
Stephen Kaplan

Perceptual learning mechanisms derived from Hebb's theory of cell assemblies can generate prototypic representations capable of extending the representational power of TEC (Theory of Event Coding) event codes. The extended capability includes categorization that accommodates “family resemblances” and problem solving that uses cognitive maps.
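The commentary's mechanism can be made concrete with a toy normalized Hebbian rule (our illustration, not the authors' model): repeated co-activation pulls a unit's weight vector toward the dominant direction of its inputs, yielding a prototype-like code for a "family resemblance" category.

```python
import numpy as np

def hebbian_prototype(patterns, eta=0.05, epochs=50, seed=0):
    """Toy Hebbian/Oja-style unit: weights drift toward the principal
    direction of the input patterns, i.e. a prototype of the category."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=patterns.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in patterns:
            y = w @ x                   # postsynaptic activity
            w += eta * y * (x - y * w)  # Oja's rule: Hebb plus decay
    return w

# Noisy variants of one underlying prototype pattern.
rng = np.random.default_rng(1)
proto = np.array([1.0, 1.0, 0.0, 0.0])
family = proto + 0.1 * rng.normal(size=(100, 4))
print(hebbian_prototype(family))  # approximately proportional to proto
```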


2020 ◽  
Author(s):  
Maya Zhe Wang ◽  
Benjamin Y. Hayden

Curiosity refers to a desire for information that is not driven by immediate strategic or instrumental concerns. Latent learning refers to a form of learning that is not directly driven by standard reinforcement learning processes. We propose that curiosity serves the purpose of motivating latent learning. Thus, while latent learning is often treated as an incidental or passive process, in practice it most often reflects a strong evolved pressure to consume large amounts of information. That large volume of information in turn allows curious decision makers to generate sophisticated representations of the structure of their environment, known as cognitive maps. Cognitive maps facilitate adaptive and flexible behavior, and remain adaptive and flexible through map updates based on new information. Here we describe data supporting the idea that orbitofrontal cortex (OFC) and dorsal anterior cingulate cortex (dACC) play complementary roles in curiosity-driven learning. Specifically, we propose that (1) OFC tracks the innate value of information and incorporates new information into a detailed cognitive map; and (2) dACC tracks environmental demands and information availability, and then uses the cognitive map to guide behavior.


Author(s):  
Jia Wang ◽  
Rui Li

Navigation systems that employ sequence-based directions have been found to be ineffective at supporting people's ability to stay spatially aware of where they are in an environment. Traditional maps readily convey the configuration of spatial objects but make it difficult to match those objects to their counterparts in the real world. Sketch maps, as schematic map-like representations, have been suggested as one way to support both navigation and spatial awareness. Moreover, sketch maps, as externalizations of cognitive maps, have proved to be reliable representations of human spatial thinking. In this study, the authors investigate the characteristics of directions given in two different forms: sketch maps and verbal descriptions (turn-by-turn instructions). The investigation addresses three aspects of spatial relations, namely orientation, street topology, and sequential order, and their representations using existing qualitative reasoning calculi. The results of this study demonstrate that sketch maps are the better direction-giving method and provide insights into applying sketch-map-like components in navigation systems.
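As a toy illustration of the orientation relations involved (our construction, far simpler than the qualitative calculi the study actually uses), a route polyline can be reduced to LEFT/RIGHT/STRAIGHT turn relations, the kind of qualitative information that both verbal directions and sketch maps must preserve:

```python
def qualitative_turns(points):
    """Reduce a polyline route to qualitative turn relations via the
    sign of the cross product at each interior vertex."""
    rels = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        rels.append("LEFT" if cross > 0 else "RIGHT" if cross < 0 else "STRAIGHT")
    return rels

print(qualitative_turns([(0, 0), (1, 0), (1, 1), (0, 1)]))  # ['LEFT', 'LEFT']
```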


PLoS Biology ◽  
2020 ◽  
Vol 18 (10) ◽  
pp. e3000908
Author(s):  
Daisy Crawley ◽  
Lei Zhang ◽  
Emily J. H. Jones ◽  
Jumana Ahmad ◽  
Bethany Oakley ◽  
...  

2020 ◽  
Author(s):  
Ida Momennejad

Memory and planning rely on learning the structure of relationships among experiences. Compact representations of these structures guide flexible behavior in humans and animals. A century after ‘latent learning’ experiments summarized by Tolman, the larger puzzle of cognitive maps remains elusive: how does the brain learn and generalize relational structures? This review focuses on a reinforcement learning (RL) approach to learning compact representations of the structure of states. We review evidence showing that capturing structures as predictive representations updated via replay offers a neurally plausible account of human behavior and the neural representations of predictive cognitive maps. We highlight multi-scale successor representations, prioritized replay, and policy-dependence. These advances call for new directions in studying the entanglement of learning and memory with prediction and planning.
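The predictive-map idea the review centers on can be sketched compactly: the successor representation (SR) stores discounted expected future state occupancies and is learnable with a TD(0) update. The function below is a minimal sketch under our own naming, not code from the review.

```python
import numpy as np

def learn_sr(trajectories, n_states, alpha=0.1, gamma=0.95):
    """TD(0) learning of the successor representation M, where M[s, s']
    estimates the discounted expected number of future visits to s'
    starting from s under the behavior policy."""
    M = np.eye(n_states)  # start from the identity (self-occupancy only)
    for traj in trajectories:
        for s, s_next in zip(traj[:-1], traj[1:]):
            target = np.eye(n_states)[s] + gamma * M[s_next]
            M[s] += alpha * (target - M[s])
    return M

# A value function then factorizes as V = M @ R for any reward vector R,
# which is what makes the learned map reusable across tasks.
M = learn_sr([[0, 1, 2, 3, 0, 1, 2, 3]], n_states=4)
```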


2021 ◽  
Author(s):  
Khazar Khorrami ◽  
Okko Räsänen

Decades of research have examined how language-learning infants learn to discriminate speech sounds, segment words, and associate words with their meanings. While the gradual development of such capabilities is unquestionable, the exact nature of these skills and of the underlying mental representations remains unclear. In parallel, computational studies have shown that basic comprehension of speech can be achieved by statistical learning between speech and concurrent, referentially ambiguous visual input. These models can operate without prior linguistic knowledge, such as representations of linguistic units, and without learning mechanisms specifically targeted at such units. This raises the question of whether knowledge of linguistic units, such as phone(me)s, syllables, and words, could emerge as latent representations supporting the translation between speech and representations in other modalities, without the units ever being proximal learning goals for the learner. In this study, we formulate this idea as the latent language hypothesis (LLH), connecting linguistic representation learning to general predictive processing within and across sensory modalities. We review the extent to which the audiovisual aspect of LLH is supported by existing computational studies. We then explore LLH further in extensive learning simulations with different neural network models for audiovisual cross-situational learning, comparing learning from both synthetic and real speech data. We investigate whether the latent representations learned by the networks reflect the phonetic, syllabic, or lexical structure of the input speech, using an array of complementary evaluation metrics related to linguistic selectivity and to the temporal characteristics of the representations. We find that representations associated with phonetic, syllabic, and lexical units of speech indeed emerge from the audiovisual learning process, and that this finding is robust to variations in model architecture and in the characteristics of the training and testing data. The results suggest that cross-modal and cross-situational learning may, in principle, assist early language development well beyond simply enabling the association of acoustic word forms with their referential meanings.
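The statistical-learning premise behind LLH can be distilled into a toy cross-situational learner (our simplification; the paper's models are neural networks operating on raw speech): word-referent associations accumulate purely from co-occurrence across referentially ambiguous scenes, with no prior linguistic units.

```python
from collections import defaultdict

def cross_situational_learner(scenes):
    """Each scene pairs an utterance (word tokens) with a set of candidate
    visual referents; credit for each co-occurrence is split across the
    ambiguous referents, and associations accumulate over scenes."""
    counts = defaultdict(lambda: defaultdict(float))
    for words, referents in scenes:
        for w in words:
            for r in referents:
                counts[w][r] += 1.0 / len(referents)
    # A word's best referent is the one it co-occurred with most often.
    return {w: max(rs, key=rs.get) for w, rs in counts.items()}

scenes = [
    (["look", "dog"], {"DOG", "BALL"}),
    (["the", "dog", "runs"], {"DOG", "TREE"}),
    (["red", "ball"], {"BALL", "TREE"}),
]
print(cross_situational_learner(scenes)["dog"])  # -> "DOG"
```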


Author(s):  
G. M. Cohen ◽  
J. S. Grasso ◽  
M. L. Domeier ◽  
P. T. Mangonon

Any explanation of vestibular micromechanics must include the roles of the otolithic and cupular membranes. However, micromechanical models of vestibular function have been hampered by unresolved questions about the microarchitectures of these membranes and their connections to stereocilia and supporting cells. Otolithic membranes are notoriously difficult to preserve because of severe shrinkage and loss of soluble components. We have empirically developed fixation procedures that reduce shrinkage artifacts and more accurately depict the spatial relations between the otolithic membranes and the ciliary bundles and supporting cells.

We used White Leghorn chicks ranging in age from newly hatched to one week. The inner ears were fixed for 3-24 h in 1.5-1.75% glutaraldehyde in 150 mM KCl, buffered with potassium phosphate, pH 7.3; when specimens were postfixed, it was for 30 min in 1% OsO4, either alone or mixed with 1% K4Fe(CN)6. The otolithic organs (saccule, utricle, lagenar macula) were embedded in Araldite 502. Semithin sections (1 μm) were stained with toluidine blue.

