Hippocampus, retrosplenial and parahippocampal cortices encode multi-compartment 3D space in a hierarchical manner

2017 ◽  
Author(s):  
Misun Kim ◽  
Eleanor A. Maguire

Abstract Humans commonly operate within 3D environments such as multi-floor buildings and yet there is a surprising dearth of studies that have examined how these spaces are represented in the brain. Here we had participants learn the locations of paintings within a virtual multi-level gallery building and then used behavioural tests and fMRI repetition suppression analyses to investigate how this 3D multi-compartment space was represented, and whether there was a bias in encoding vertical and horizontal information. We found faster response times for within-room egocentric spatial judgments and behavioural priming effects of visiting the same room, providing evidence for a compartmentalised representation of space. At the neural level, we observed a hierarchical encoding of 3D spatial information, with left anterior hippocampus representing local information within a room, while retrosplenial cortex, parahippocampal cortex and posterior hippocampus represented room information within the wider building. Of note, both our behavioural and neural findings showed that vertical and horizontal location information was similarly encoded, suggesting an isotropic representation of 3D space even in the context of a multi-compartment environment. These findings provide much-needed information about how the human brain supports spatial memory and navigation in buildings with numerous levels and rooms.

2007 ◽  
Vol 97 (5) ◽  
pp. 3670-3683 ◽  
Author(s):  
Russell A. Epstein ◽  
J. Stephen Higgins ◽  
Karen Jablonski ◽  
Alana M. Feiler

Humans and animals use information obtained from the local visual scene to orient themselves in the wider world. Although neural systems involved in scene perception have been identified, the extent to which processing in these systems is affected by previous experience is unclear. We addressed this issue by scanning subjects with functional magnetic resonance imaging (fMRI) while they viewed photographs of familiar and unfamiliar locations. Scene-selective regions in parahippocampal cortex (the parahippocampal place area, or PPA), retrosplenial cortex (RSC), and the transverse occipital sulcus (TOS) responded more strongly to images of familiar locations than to images of unfamiliar locations, with the strongest effects (>50% increase) in RSC. Examination of fMRI repetition suppression (RS) effects indicated that images of familiar and unfamiliar locations were processed with the same degree of viewpoint specificity; however, increased viewpoint invariance was observed as individual scenes became more familiar over the course of a scan session. Surprisingly, these within-scan-session viewpoint-invariant RS effects were only observed when scenes were repeated across different trials but not when scenes were repeated within a trial, suggesting that within- and between-trial RS effects may index different aspects of visual scene processing. The sensitivity to environmental familiarity observed in the PPA, RSC, and TOS supports earlier claims that these regions mediate the extraction of navigationally relevant spatial information from visual scenes. As locations become familiar, the neural representations of these locations become enriched, but the viewpoint invariance of these representations does not change.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuka Inamochi ◽  
Kenji Fueki ◽  
Nobuo Usui ◽  
Masato Taira ◽  
Noriyuki Wakabayashi

Abstract Successful adaptation to wearing dentures with palatal coverage may be associated with changes in cortical activity related to tongue motor control. The purpose of this study was to investigate changes in brain activity during tongue movement in response to a new oral environment. Twenty-eight fully dentate subjects (mean age: 28.6 years) who had no experience with removable dentures wore experimental palatal plates for 7 days. We measured tongue motor dexterity, difficulty with tongue movement, and brain activity using functional magnetic resonance imaging during tongue movement at pre-insertion (Day 0), as well as immediately (Day 1), 3 days (Day 3), and 7 days (Day 7) post-insertion. Difficulty with tongue movement was significantly higher on Day 1 than on Days 0, 3, and 7. In the subtraction analysis of brain activity across days, activations in the angular gyrus and right precuneus were significantly higher on Day 1 than on Day 7. Tongue motor impairment induced activation of the angular gyrus, which was associated with monitoring the tongue’s spatial information, as well as activation of the precuneus, which was associated with constructing tongue motor imagery. As the tongue regained smoothness in its motor function, activation of the angular gyrus and precuneus decreased.


2020 ◽  
Author(s):  
Thomas L. Botch ◽  
Alina Spiegel ◽  
Catherine Ricciardi ◽  
Caroline E. Robertson

Abstract Bumetanide has received much interest as a potential pharmacological modulator of the putative imbalance in excitatory/inhibitory (E/I) signaling that is thought to characterize autism spectrum conditions. Yet no studies of bumetanide efficacy to date have used an outcome measure that is modeled to depend on E/I balance in the brain. In this manuscript, we present the first causal study of the effect of bumetanide on an objective marker of E/I balance in the brain, binocular rivalry, which we have previously shown to be sensitive to pharmacological manipulation of GABA. Using a within-subjects placebo-controlled crossover design, we show that, contrary to expectation, acute administration of bumetanide does not alter binocular rivalry dynamics in neurotypical adults. Neither changes in response times nor changes in response criteria can account for these results. These results raise important questions about the efficacy of acute bumetanide administration for altering E/I balance in the human brain, and highlight the importance of studies using objective markers of the underlying neural processes that drugs aim to target.


2019 ◽  
Author(s):  
Dirk van Moorselaar ◽  
Heleen A. Slagter

Abstract It is well known that attention can facilitate performance by top-down biasing processing of task-relevant information in advance. Recent findings from behavioral studies suggest that distractor inhibition is not under similar direct control, but strongly dependent on expectations derived from previous experience. Yet, how expectations about distracting information influence distractor inhibition at the neural level remains unclear. The current study addressed this outstanding question in three experiments in which search displays with repeating distractor or target locations across trials allowed observers to learn which location to selectively suppress or boost. Behavioral findings demonstrated that both distractor and target location learning resulted in more efficient search, as indexed by faster response times. Crucially, benefits of distractor learning were observed without target location foreknowledge, were unaffected by the number of possible target locations, and could not be explained by priming alone. To determine how distractor location expectations facilitated performance, we applied a spatial encoding model to EEG data to reconstruct activity in neural populations tuned to the distractor or target location. Target location learning increased neural tuning to the target location in advance, indicative of preparatory biasing. This sensitivity increased after target presentation. By contrast, distractor expectations did not change preparatory spatial tuning. Instead, distractor expectations reduced distractor-specific processing, as reflected in the disappearance of the Pd ERP component, a neural marker of distractor inhibition, and decreased decoding accuracy. These findings suggest that the brain may no longer process expected distractors as distractors, once it has learned they can safely be ignored.

Significance statement We constantly try hard to ignore conspicuous events that distract us from our current goals. Surprisingly, and in contrast to dominant attention theories, ignoring distracting but irrelevant events does not seem to be as flexible as focusing our attention on those same events. Instead, distractor suppression appears to rely strongly on learned, context-dependent expectations. Here, we investigated how learning about upcoming distractors changes distractor processing and directly contrasted the underlying neural dynamics with those of target learning. We show that while target learning enhanced anticipatory sensory tuning, distractor learning only modulated reactive suppressive processing. These results suggest that expected distractors may no longer be considered distractors by the brain once it has learned that they can safely be ignored.
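The spatial encoding model mentioned above is, in general form, a two-step least-squares procedure: fit channel-to-electrode weights on training trials using a set of hypothesized spatial tuning curves, then invert those weights to reconstruct channel responses on held-out trials. A minimal sketch, assuming a raised-cosine basis and illustrative names; this is the generic technique, not the authors' exact pipeline:

```python
import numpy as np

def make_basis(n_channels, positions):
    """Raised-cosine spatial tuning curves; positions in radians.
    Returns an (n_positions, n_channels) design matrix."""
    positions = np.asarray(positions, dtype=float)
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    return np.maximum(0.0, np.cos(positions[:, None] - centers[None, :])) ** 7

def train_and_invert(train_data, train_pos, test_data, n_channels=8):
    """Fit channel-to-electrode weights on training trials, then invert
    them to reconstruct channel responses for test trials."""
    c_train = make_basis(n_channels, train_pos)                 # trials x channels
    w, *_ = np.linalg.lstsq(c_train, train_data, rcond=None)    # channels x electrodes
    c_test, *_ = np.linalg.lstsq(w.T, test_data.T, rcond=None)  # channels x test trials
    return c_test.T                                             # test trials x channels
```

With this kind of model, a flattened reconstructed profile for expected distractor locations would correspond to the reduced spatial tuning the study reports.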


2018 ◽  
pp. 31-63 ◽  
Author(s):  
Lukáš Herman ◽  
Tomáš Řezník ◽  
Zdeněk Stachoň ◽  
Jan Russnák

Various widely available applications such as Google Earth have made interactive 3D visualizations of spatial data popular. While several studies have focused on how users perform when interacting with these 3D visualizations, it has not been common to record their virtual movements in 3D environments or interactions with 3D maps. We therefore created and tested a new web-based research tool: a 3D Movement and Interaction Recorder (3DmoveR). Its design incorporates findings from the latest 3D visualization research, and is built upon an iterative requirements analysis. It is implemented using open web technologies such as PHP, JavaScript, and the X3DOM library. The main goal of the tool is to record camera position and orientation during a user’s movement within a virtual 3D scene, together with other aspects of their interaction. After building the tool, we performed an experiment to demonstrate its capabilities. This experiment revealed differences between laypersons and experts (cartographers) when working with interactive 3D maps. For example, experts achieved higher numbers of correct answers in some tasks, had shorter response times, followed shorter virtual trajectories, and moved through the environment more smoothly. Interaction-based clustering as well as other ways of visualizing and qualitatively analyzing user interaction were explored.
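The trajectory measures reported (path length, smoothness of movement) can be computed directly from logged camera-position samples. A minimal sketch, assuming positions are sampled at a fixed rate as N×3 coordinates; the function names are illustrative and not part of 3DmoveR:

```python
import numpy as np

def trajectory_length(positions):
    """Total length of a recorded camera path (N x 3 position samples)."""
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.sum(np.linalg.norm(steps, axis=1)))

def mean_turn_angle(positions):
    """Mean angle (radians) between consecutive movement steps;
    lower values indicate smoother movement."""
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    a, b = steps[:-1], steps[1:]
    cosang = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.mean(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Under these metrics, the experts' behavior would show up as smaller `trajectory_length` and smaller `mean_turn_angle` values than laypersons'.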


Author(s):  
Jerome Kagan

This chapter discusses contextual constraints on brain profiles. The laboratories that measure brain activity contain uncommon combinations of physical features and incentives that prime some brain sites and suppress others. Despite these possibilities, neuroscientists continue to speculate about the implications of the brain patterns they record as if the context has a minimal influence on their observations. This position is difficult to defend given the fact that the parahippocampal cortex binds objects and events to the context in which they appear. Adults lying supine and still in the narrow tube of a magnetic scanner in an unfamiliar room are in an unusual psychological and bodily state. The compromised sense of agency, awareness of being evaluated, confinement in a narrow space, and the demand to suppress all movement affect brain and psychological processes.


2008 ◽  
pp. 530-554
Author(s):  
Christos Bouras ◽  
Eleftheria Giannaka ◽  
Maria Nani ◽  
Alexandros Panagopoulos ◽  
Thrasyvoulos Tsiatosos

In this chapter, we present the design and implementation of an integrated platform for Educational Virtual Environments. This platform aims to support an educational community, synchronous online courses in multi-user three-dimensional (3D) environments, and the creation and access of asynchronous courses through a learning content management system. In order to offer synchronous courses, we have implemented a system called EVE-II, which supports stable event sharing for multi-user 3D places, easy creation of multi-user 3D places, H.323-based voice-over-IP services fully integrated in a 3D space, as well as many concurrent 3D multi-user spaces.


2019 ◽  
Vol 30 (3) ◽  
pp. 952-968
Author(s):  
Christoph Pokorny ◽  
Matias J. Ison ◽  
Arjun Rao ◽  
Robert Legenstein ◽  
Christos Papadimitriou ◽  
...  

Abstract Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations, and dynamically switch between them during consolidation.
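The spike-timing-dependent plasticity referred to above is, in its simplest pair-based form, an exponentially decaying weight change whose sign depends on the relative timing of pre- and postsynaptic spikes. A minimal sketch with illustrative parameter values, not the paper's data-constrained fitted rule:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP: potentiate when pre fires before post (dt > 0),
    depress otherwise. dt = t_post - t_pre in milliseconds."""
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # pre-before-post: LTP
    else:
        dw = -a_minus * np.exp(dt / tau)   # post-before-pre: LTD
    return float(np.clip(w + dw, 0.0, w_max))
```

In a recurrent network, repeatedly pairing two assemblies drives such updates at the synapses between them, which is the mechanism by which overlaps or new association assemblies can emerge in the model.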


2020 ◽  
Vol 117 (47) ◽  
pp. 29338-29345 ◽  
Author(s):  
Neal W. Morton ◽  
Margaret L. Schlichting ◽  
Alison R. Preston

Prior work has shown that the brain represents memories within a cognitive map that supports inference about connections between individual related events. Real-world adaptive behavior is also supported by recognizing common structure among numerous distinct contexts; for example, based on prior experience with restaurants, when visiting a new restaurant one can expect to first get a table, then order, eat, and finally pay the bill. We used a neurocomputational approach to examine how the brain extracts and uses abstract representations of common structure to support novel decisions. Participants learned image pairs (AB, BC) drawn from distinct triads (ABC) that shared the same internal structure and were then tested on their ability to infer indirect (AC) associations. We found that hippocampal and frontoparietal regions formed abstract representations that coded cross-triad relationships with a common geometric structure. Critically, such common representational geometries were formed despite the lack of explicit reinforcement to do so. Furthermore, we found that representations in parahippocampal cortex are hierarchical, reflecting both cross-triad relationships and distinctions between triads. We propose that representations with common geometric structure provide a vector space that codes inferred item relationships with a direction vector that is consistent across triads, thus supporting faster inference. Using computational modeling of response time data, we found evidence for dissociable vector-based retrieval and pattern-completion processes that contribute to successful inference. Moreover, we found evidence that these processes are mediated by distinct regions, with pattern completion supported by hippocampus and vector-based retrieval supported by parahippocampal cortex and lateral parietal cortex.
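The proposed vector-space account can be illustrated with a toy example: if all triads share one representational geometry, a single direction vector maps any triad's A item to its inferred C item. The 2-D embedding, coordinates, and vector below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 2-D embedding of A items from two triads that share
# a common representational geometry.
a_items = {"triad1_A": np.array([0.0, 0.0]),
           "triad2_A": np.array([3.0, 2.0])}

# One A->C direction vector, reused across triads, supports inference
# of the indirect (AC) association without per-triad pattern completion.
v_ac = np.array([1.0, -0.5])

inferred_c = {name: a + v_ac for name, a in a_items.items()}
```

Because the same `v_ac` works for every triad, retrieval reduces to a single vector addition, which is consistent with the faster vector-based inference the authors propose.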


2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms could achieve.
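The reliability-weighted combination described above is standard inverse-variance (maximum-likelihood) cue integration: each reference frame's estimate is weighted by its precision, and the combined variance is lower than either single-frame variance. A minimal sketch with illustrative numbers:

```python
import numpy as np

def combine_estimates(estimates, variances):
    """Inverse-variance weighted combination of location estimates
    stored in different reference frames (e.g. eye- vs body-centered)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.sum(weights * estimates))
    combined_var = float(1.0 / np.sum(1.0 / variances))
    return combined, combined_var

# Eye-centered estimate: 10.0 deg with variance 4.0;
# body-centered estimate: 12.0 deg with variance 1.0.
loc, var = combine_estimates([10.0, 12.0], [4.0, 1.0])
```

Here the combined variance (0.8) is below both single-frame variances, which is the precision benefit of keeping both representations in sync.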

