Signposts in the fog: objects facilitate scene representations in left scene-selective cortex

2019 ◽  
Vol 31 (3) ◽  
pp. 390-400 ◽  
Author(s):  
Talia Brandman ◽  
Marius V. Peelen

Abstract We internally represent the structure of our surroundings even when there is little layout information available in the visual image, such as when walking through fog or darkness. One way in which we disambiguate such scenes is through object cues; for example, seeing a boat supports the inference that the foggy scene is a lake. Recent studies have investigated the neural mechanisms by which object and scene processing interact to support object perception. The current study examines the reverse interaction, by which objects facilitate the neural representation of scene layout. Photographs of indoor (closed) and outdoor (open) real-world scenes were blurred such that they were difficult to categorize on their own but easily disambiguated by the inclusion of an object. fMRI decoding was used to measure scene representations in the scene-selective parahippocampal place area (PPA) and occipital place area (OPA). Classifiers were trained to distinguish response patterns to fully visible indoor and outdoor scenes, presented in an independent experiment. Testing these classifiers on blurred scenes revealed a strong improvement in classification in left PPA and OPA when objects were present, despite the reduced low-level visual feature overlap with the training set in this condition. These findings were specific to left PPA/OPA, with no evidence for object-driven facilitation in right PPA/OPA, object-selective areas, or early visual cortex. These findings demonstrate separate roles for left and right scene-selective cortex in scene representation, whereby left PPA/OPA represents inferred scene layout, influenced by contextual object cues, and right PPA/OPA represents a scene's visual features.
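The cross-decoding logic described above, training a classifier on responses to fully visible scenes and testing it on responses to blurred scenes, can be sketched with a simple correlation-based nearest-centroid classifier on simulated voxel patterns. All sizes, noise levels, and the readout rule below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: rows = trials, columns = voxels in a ROI.
# Two scene categories (indoor = 0, outdoor = 1) with distinct templates.
n_voxels = 50
indoor_template = rng.normal(0, 1, n_voxels)
outdoor_template = rng.normal(0, 1, n_voxels)

def simulate_trials(template, n_trials, noise_sd):
    return template + rng.normal(0, noise_sd, (n_trials, n_voxels))

# "Training" set: responses to fully visible scenes (low noise).
train_X = np.vstack([simulate_trials(indoor_template, 20, 0.5),
                     simulate_trials(outdoor_template, 20, 0.5)])
train_y = np.array([0] * 20 + [1] * 20)

# "Test" set: responses to blurred scenes (weaker signal, more noise).
test_X = np.vstack([simulate_trials(indoor_template, 20, 1.5),
                    simulate_trials(outdoor_template, 20, 1.5)])
test_y = np.array([0] * 20 + [1] * 20)

# Nearest-centroid classifier: assign each test pattern to the more
# correlated class-mean pattern from the training set.
centroids = np.vstack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(patterns, centroids):
    z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
    zc = (centroids - centroids.mean(1, keepdims=True)) / centroids.std(1, keepdims=True)
    sim = z @ zc.T / patterns.shape[1]   # Pearson correlation per pair
    return sim.argmax(axis=1)

accuracy = (classify(test_X, centroids) == test_y).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

Chance level here is 0.5; above-chance accuracy on the noisier test set is the analogue of successful generalization from visible to blurred scenes.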


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Daniel Kaiser ◽  
Jacopo Turini ◽  
Radoslaw M Cichy

With every glimpse of our eyes, we sample only a small and incomplete fragment of the visual world, which needs to be contextualized and integrated into a coherent scene representation. Here we show that the visual system achieves this contextualization by exploiting spatial schemata, that is, our knowledge about the composition of natural scenes. We measured fMRI and EEG responses to incomplete scene fragments and used representational similarity analysis to reconstruct their cortical representations in space and time. We observed a sorting of representations according to the fragments' place within the scene schema, which occurred during perceptual analysis in the occipital place area and within the first 200 ms of vision. This schema-based coding operates flexibly across visual features (as measured by a deep neural network model) and different types of environments (indoor and outdoor scenes). This flexibility highlights the mechanism's ability to efficiently organize incoming information under dynamic real-world conditions.
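Representational similarity analysis of the kind used above compares the geometry of neural responses across measurements by correlating representational dissimilarity matrices (RDMs). A minimal sketch on simulated data follows; the pattern sizes, noise levels, and two-cluster condition structure are hypothetical stand-ins, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the response patterns of every pair of conditions.
    return 1 - np.corrcoef(patterns)

# Hypothetical data: 8 conditions x 100 channels, with two clusters of
# 4 conditions each (a toy stand-in for schema-based structure).
cluster = np.repeat(rng.normal(0, 1, (2, 100)), 4, axis=0)
base = cluster + 0.7 * rng.normal(0, 1, (8, 100))

# Two noisy "measurements" of the same underlying representation
# (e.g., fMRI voxel patterns and EEG sensor patterns).
fmri_like = base + rng.normal(0, 0.5, base.shape)
eeg_like = base + rng.normal(0, 0.5, base.shape)

# Compare representational geometries: correlate the upper-triangle
# (off-diagonal) entries of the two RDMs.
iu = np.triu_indices(8, k=1)
similarity = np.corrcoef(rdm(fmri_like)[iu], rdm(eeg_like)[iu])[0, 1]
print(f"RDM correlation: {similarity:.2f}")
```

Because both simulated measurements share the same underlying cluster structure, their RDMs correlate well above zero, which is the signature RSA looks for across modalities or time points.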


2008 ◽  
Vol 20 (7) ◽  
pp. 1250-1265 ◽  
Author(s):  
Daniela B. Fenker ◽  
Julietta U. Frey ◽  
Hartmut Schuetze ◽  
Dorothee Heipertz ◽  
Hans-Jochen Heinze ◽  
...  

Exploring a novel environment can facilitate subsequent hippocampal long-term potentiation in animals. We report a related behavioral enhancement in humans. In two separate experiments, recollection and free recall, both measures of hippocampus-dependent memory formation, were enhanced for words studied after a 5-min exposure to unrelated novel as opposed to familiar images depicting indoor and outdoor scenes. With functional magnetic resonance imaging, the enhancement was predicted by specific activity patterns observed during novelty exposure in parahippocampal and dorsal prefrontal cortices, regions which are known to be linked to attentional orienting to novel stimuli and perceptual processing of scenes. Novelty was also associated with activation of the substantia nigra/ventral tegmental area of the midbrain and the hippocampus, but these activations did not correlate with contextual memory enhancement. These findings indicate remarkable parallels between contextual memory enhancement in humans and existing evidence regarding contextually enhanced hippocampal plasticity in animals. They provide specific behavioral clues to enhancing hippocampus-dependent memory in humans.


2021 ◽  
pp. 1-16
Author(s):  
Qing Yu ◽  
Bradley R. Postle

Abstract Humans can construct rich subjective experience even when no information is available in the external world. Here, we investigated the neural representation of purely internally generated stimulus-like information during visual working memory. Participants performed delayed recall of oriented gratings embedded in noise with varying contrast during fMRI scanning. Their trialwise behavioral responses provided an estimate of their mental representation of the to-be-reported orientation. We used multivariate inverted encoding models to reconstruct the neural representations of orientation in reference to the response. We found that response orientation could be successfully reconstructed from activity in early visual cortex, even on 0% contrast trials when no orientation information was actually presented, suggesting the existence of a purely internally generated neural code in early visual cortex. In addition, cross-generalization and multidimensional scaling analyses demonstrated that information derived from internal sources was represented differently from typical working memory representations, which receive influences from both external and internal sources. Similar results were also observed in intraparietal sulcus, with slightly different cross-generalization patterns. These results suggest a potential mechanism for how externally driven and internally generated information is maintained in working memory.
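An inverted encoding model of the kind used above first fits a forward mapping from hypothetical orientation-tuned channels to voxels, then inverts that mapping to reconstruct a channel response profile from new activity. The sketch below runs on simulated data; the channel basis, counts, noise levels, and population-vector readout are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n_channels, n_voxels, n_trials = 8, 60, 160
centers = np.arange(0, 180, 180 / n_channels)   # channel centers (deg)

def channel_responses(oris):
    # Orientation-tuned basis: wrapped difference in 180-deg space,
    # raised-cosine tuning curves (an illustrative choice of basis).
    d = (np.asarray(oris)[:, None] - centers[None, :] + 90) % 180 - 90
    return np.cos(np.pi * d / 180) ** 6

# Simulated forward model: voxels are random mixtures of channels.
W_true = rng.normal(0, 1, (n_channels, n_voxels))
train_ori = rng.uniform(0, 180, n_trials)
C_train = channel_responses(train_ori)
B_train = C_train @ W_true + rng.normal(0, 0.5, (n_trials, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]

# Step 2: invert the model on a new trial to recover the channel
# profile, then read out orientation as a population vector in
# doubled-angle space (orientation is 180-deg periodic).
B_test = channel_responses([45.0]) @ W_true + rng.normal(0, 0.5, (1, n_voxels))
C_hat = B_test @ np.linalg.pinv(W_hat)
vec = (C_hat * np.exp(2j * np.deg2rad(centers))).sum()
decoded = np.rad2deg(np.angle(vec)) / 2 % 180
print(f"decoded orientation: {decoded:.1f} deg (true: 45.0)")
```

In the study, the reconstruction was referenced to the behavioral response rather than the stimulus, which is how a representation could be recovered even on 0%-contrast trials.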


2021 ◽  
Author(s):  
Yingying Huang ◽  
Frank Pollick ◽  
Ming Liu ◽  
Delong Zhang

Abstract Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between mental imagery and perception in the function and structure of their neural substrates. However, we have limited knowledge about how the hierarchical visual cortex is involved in internally generated mental imagery compared with perception driven by visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception and an imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of high visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of high visual areas via the encoding models, suggesting that perception and imagery share neural representations in the EVC. We further found a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both of which were shared by perception and imagery. These findings provide insight into the mechanisms by which visual perception and imagery share representations in the EVC.
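A voxel-wise encoding model of the kind described maps stimulus features, here the responses of a small Gabor filter bank, to voxel activity via regularized linear regression. The sketch below uses synthetic images and voxels; the filter parameterization, feature rectification, and ridge penalty are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def gabor(size, theta, freq):
    # One odd-phase Gabor filter (illustrative parameterization).
    ax = np.arange(size) - size / 2
    X, Y = np.meshgrid(ax, ax)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    envelope = np.exp(-(X**2 + Y**2) / (2 * (size / 4) ** 2))
    return envelope * np.sin(2 * np.pi * freq * Xr)

filters = [gabor(16, t, 0.2) for t in np.linspace(0, np.pi, 4, endpoint=False)]

def features(img):
    # Rectified filter outputs as a tiny Gabor feature vector.
    return np.array([abs((img * f).sum()) for f in filters])

# Synthetic stimuli and voxels whose responses are linear in the features.
imgs = rng.normal(0, 1, (80, 16, 16))
X = np.array([features(im) for im in imgs])
true_w = rng.normal(0, 1, (len(filters), 10))    # 10 hypothetical voxels
Y = X @ true_w + rng.normal(0, 0.5, (80, 10))

# Voxel-wise ridge regression, closed form: w = (X'X + lam*I)^(-1) X'Y.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Evaluate per-voxel prediction accuracy (on the training set here for
# brevity; real encoding models are evaluated on held-out stimuli).
Y_hat = X @ w_hat
r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(10)])
print(f"mean prediction r = {r.mean():.2f}")
```

Evaluating such perception-trained models on imagery data, as in the study, amounts to computing these prediction correlations on activity recorded in the imagery condition instead.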


2010 ◽  
Vol 22 (11) ◽  
pp. 2417-2426 ◽  
Author(s):  
Stephanie A. McMains ◽  
Sabine Kastner

Multiple stimuli that are present simultaneously in the visual field compete for neural representation. At the same time, however, multiple stimuli in cluttered scenes also undergo perceptual organization according to certain rules originally defined by the Gestalt psychologists such as similarity or proximity, thereby segmenting scenes into candidate objects. How can these two seemingly orthogonal neural processes that occur early in the visual processing stream be reconciled? One possibility is that competition occurs among perceptual groups rather than at the level of elements within a group. We probed this idea using fMRI by assessing competitive interactions across visual cortex in displays containing varying degrees of perceptual organization or perceptual grouping (Grp). In strong Grp displays, elements were arranged such that either an illusory figure or a group of collinear elements were present, whereas in weak Grp displays the same elements were arranged randomly. Competitive interactions among stimuli were overcome throughout early visual cortex and V4, when elements were grouped regardless of Grp type. Our findings suggest that context-dependent grouping mechanisms and competitive interactions are linked to provide a bottom–up bias toward candidate objects in cluttered scenes.


2011 ◽  
Vol 105 (1) ◽  
pp. 188-199 ◽  
Author(s):  
Naoya Itatani ◽  
Georg M. Klump

It has been suggested that successively presented sounds that are perceived as separate auditory streams are represented by separate populations of neurons. Mostly, spectral separation in different peripheral filters has been identified as the cue for segregation. However, stream segregation based on temporal cues is also possible without spectral separation. Here we presented sequences of ABA- triplet stimuli providing only temporal cues to neurons in the European starling auditory forebrain. A and B sounds (125 ms duration) were harmonic complexes (fundamentals of 100, 200, or 400 Hz; center frequency and bandwidth chosen to fit the neurons' tuning characteristics) with identical amplitude spectra but different phase relations between components (cosine, alternating, or random phase), presented at different rates. Differences in both the rate responses and the temporal response patterns of the neurons when stimulated with harmonic complexes of different phase relations provide the first evidence for a mechanism allowing a separate neural representation of such stimuli. Recording sites responding to frequencies above 1 kHz showed enhanced rate and temporal differences compared with those responding at lower frequencies. These results demonstrate a neural correlate of streaming by temporal cues arising from the variation of phase, which shows striking parallels to observations in previous psychophysical studies.
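Harmonic complexes of the kind described, identical in amplitude spectrum but differing in component phase, are straightforward to synthesize: each stimulus is a sum of unit-amplitude harmonics, so only the temporal fine structure (e.g., the peak factor) differs across phase conditions. The fundamental, number of harmonics, and sampling rate below are illustrative choices, not the study's exact parameters:

```python
import numpy as np

fs = 44100                      # sampling rate (Hz)
dur = 0.125                     # 125 ms tones, as in the stimuli above
f0 = 100                        # fundamental frequency (Hz)
harmonics = np.arange(1, 21)    # 20 components (an illustrative choice)
t = np.arange(int(fs * dur)) / fs

def complex_tone(phases):
    # Sum of unit-amplitude harmonics: the amplitude spectrum is the
    # same for any phase choice; only the waveform shape changes.
    return sum(np.cos(2 * np.pi * f0 * h * t + p)
               for h, p in zip(harmonics, phases))

rng = np.random.default_rng(4)
cosine_ph = complex_tone(np.zeros(len(harmonics)))
alternating_ph = complex_tone(np.where(harmonics % 2 == 0, np.pi / 2, 0.0))
random_ph = complex_tone(rng.uniform(0, 2 * np.pi, len(harmonics)))

rms = lambda x: np.sqrt(np.mean(x ** 2))
# Equal power, but very different temporal envelopes (peak factors):
for name, x in [("cosine", cosine_ph), ("alternating", alternating_ph),
                ("random", random_ph)]:
    print(f"{name}: rms={rms(x):.2f}, peak={np.abs(x).max():.2f}")
```

The cosine-phase tone has all components aligned at onset, giving the highest peak for the same power; this waveform difference is the only cue the neurons could use.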


2019 ◽  
Vol 11 (4) ◽  
pp. 446 ◽  
Author(s):  
Zacharias Kandylakis ◽  
Konstantinos Vasili ◽  
Konstantinos Karantzalos

Single-sensor systems and standard optical cameras (usually RGB CCTV video cameras) fail to provide adequate observations, or the amount of spectral information required, to build rich, expressive, discriminative features for object detection and tracking tasks in challenging outdoor and indoor scenes under various environmental and illumination conditions. To this end, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors and propose a processing pipeline able to perform object detection tasks in real time despite the huge volume of concurrently acquired video streams. In particular, in order to avoid the computationally intensive coregistration of the hyperspectral data with the other imaging modalities, the initially detected targets are projected through a local coordinate system onto the hypercube image plane. For object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep learning convolutional neural networks) techniques for validation purposes. The detected and verified targets are then extracted through fusion and data-association steps based on the temporal spectral signatures of both target and background. Quite promising experimental results in challenging indoor and outdoor scenes indicate the robust and efficient performance of the developed methodology under conditions such as fog, smoke, and illumination changes.
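The unsupervised branch of such a pipeline is often a running-average background model with per-pixel thresholding. A minimal sketch on a synthetic grayscale sequence follows; the learning rate, threshold, and moving-target geometry are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic grayscale video: static background plus sensor noise, with
# a bright target patch moving through the later frames (hypothetical).
frames = rng.normal(100, 2, (30, 48, 64))
for i in range(20, 30):
    frames[i, 10:18, 2 * i:2 * i + 8] += 80

alpha = 0.05        # background learning rate
thresh = 25         # per-pixel foreground threshold
bg = frames[0].copy()
detections = []
for frame in frames:
    mask = np.abs(frame - bg) > thresh       # foreground pixels
    detections.append(int(mask.sum()))
    # Update the background model only where no foreground was found,
    # so the target is not absorbed into the background estimate.
    bg = np.where(mask, bg, (1 - alpha) * bg + alpha * frame)

print("foreground pixels, frame 5 vs frame 25:",
      detections[5], detections[25])
```

In the described system, foreground regions found this way would be cross-validated against a CNN detector and then associated over time using the targets' spectral signatures.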

