Different Measures of Structural Similarity Tap Different Aspects of Visual Object Processing

2017 ◽  
Vol 8 ◽  
Author(s):  
Christian Gerlach
2015 ◽  
Vol 26 (7) ◽  
pp. 3135-3145 ◽  
Author(s):  
Alyssa J. Kersey ◽  
Tyia S. Clark ◽  
Courtney A. Lussier ◽  
Bradford Z. Mahon ◽  
Jessica F. Cantlon

2010 ◽  
Vol 2010 ◽  
pp. 1-6 ◽  
Author(s):  
M. G. Tana ◽  
E. Montin ◽  
S. Cerutti ◽  
A. M. Bianchi

Functional magnetic resonance imaging (fMRI) was performed in eight healthy subjects to identify the localization, magnitude, and volume extent of activation in brain regions involved in the blood oxygen level-dependent (BOLD) response during performance of Conners' Continuous Performance Test (CPT). An extensive brain network was activated during the task, including frontal, temporal, and occipital cortical areas and the left cerebellum. The most strongly activated cluster, in terms of both volume extent and magnitude, was located in the right anterior cingulate cortex (ACC). Analyzing the dynamic trend of activation in the identified areas over the entire duration of the sustained attention test, we found a progressive decrease in the BOLD response, probably due to a habituation effect, without any deterioration in performance. The observed brain network is consistent with existing models of visual object processing and attentional control and may serve as a basis for fMRI studies in clinical populations with neuropsychological deficits in Conners' CPT performance.
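The progressive decrease in BOLD response described above can be quantified as a linear trend across task blocks. The sketch below is illustrative only: the per-block amplitudes are made-up values, not data from the study, and the abstract does not specify the trend-estimation method used.

```python
import numpy as np

# Hypothetical per-block BOLD amplitudes (arbitrary units) extracted
# from a region of interest such as the right ACC, one value per
# successive task block over the sustained attention test.
bold = np.array([1.00, 0.92, 0.85, 0.81, 0.74, 0.70])
blocks = np.arange(len(bold))

# Least-squares linear fit: a negative slope indicates a progressive
# decrease in BOLD response over the session, consistent with habituation.
slope, intercept = np.polyfit(blocks, bold, 1)
print(f"slope = {slope:.3f}")  # negative slope -> decreasing response
```

A negative slope alone does not distinguish habituation from fatigue; the abstract's additional observation that task performance did not deteriorate is what supports the habituation interpretation.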


2012 ◽  
Vol 25 (0) ◽  
pp. 117 ◽  
Author(s):  
Yi-Chuan Chen ◽  
Gert Westermann

Infants are able to learn novel associations between visual objects and auditory linguistic labels (such as a dog and the sound /dɔg/) by the end of their first year of life. Surprisingly, at this age they seem to fail to learn associations between visual objects and natural sounds (such as a dog and its barking sound). Researchers have therefore suggested that linguistic learning is special (Fulkerson and Waxman, 2007) or that unfamiliar sounds overshadow visual object processing (Robinson and Sloutsky, 2010). However, in previous studies visual stimuli were paired with arbitrary sounds in contexts lacking ecological validity. In the present study, we created animations of two novel animals and two realistic animal calls to construct two audiovisual stimuli. In the training phase, each animal was presented in motions that mimicked animal behaviour in real life: in a short movie, the animal ran (or jumped) from the periphery to the center of the monitor, and it made calls while raising its head. In the test phase, static images of both animals were presented side by side and the sound for one of the animals was played. Infant looking times to each stimulus were recorded with an eye tracker. We found that following the sound, 12-month-old infants preferentially looked at the animal corresponding to it. These results show that 12-month-old infants are able to learn novel associations between visual objects and natural sounds in an ecologically valid situation, thereby challenging our current understanding of the development of crossmodal association learning.


2008 ◽  
Vol 20 (9) ◽  
pp. 1711-1726 ◽  
Author(s):  
Xun Liu ◽  
Nicholas A. Steinmetz ◽  
Alison B. Farley ◽  
Charles D. Smith ◽  
Jane E. Joseph

The present study explored constraints on mid-fusiform activation during object discrimination. In three experiments, participants performed a matching task on simple line configurations, nameable objects, three-dimensional (3-D) shapes, and colors. Significant bilateral mid-fusiform activation emerged when participants matched objects and 3-D shapes, as compared to when they matched two-dimensional (2-D) line configurations and colors, indicating that the mid-fusiform is engaged more strongly for processing structural descriptions (e.g., comparing 3-D volumetric shape) than perceptual descriptions (e.g., comparing 2-D or color information). In two of the experiments, the same mid-fusiform regions were also modulated by the degree of structural similarity between stimuli, implicating a role for the mid-fusiform in fine differentiation of similar visual object representations. Importantly, however, this process of fine differentiation occurred at the level of structural, but not perceptual, descriptions. Moreover, mid-fusiform activity was more robust when participants matched shape rather than color information using identical stimuli, indicating that activity in the mid-fusiform gyrus is not driven by specific stimulus properties, but rather by the process of distinguishing stimuli based on shape information. Taken together, these findings further clarify the nature of object processing in the mid-fusiform gyrus. This region is engaged specifically in structural differentiation, a critical component process of object recognition and categorization.


2006 ◽  
Vol 46 (11) ◽  
pp. 1804-1815 ◽  
Author(s):  
Britt Anderson ◽  
Jessie J. Peissig ◽  
Jedediah Singer ◽  
David L. Sheinberg

2019 ◽  
Author(s):  
Talia Brandman ◽  
Chiara Avancini ◽  
Olga Leticevscaia ◽  
Marius V. Peelen

Sounds (e.g., barking) help us to visually identify objects (e.g., a dog) that are distant or ambiguous. While neuroimaging studies have revealed neuroanatomical sites of audiovisual interactions, little is known about the time course by which sounds facilitate visual object processing. Here we used magnetoencephalography (MEG) to reveal the time course of the facilitatory influence of natural sounds (e.g., barking) on visual object processing, and compared this to the facilitatory influence of spoken words (e.g., "dog"). Participants viewed images of blurred objects preceded by a task-irrelevant natural sound, a spoken word, or uninformative noise. A classifier was trained to discriminate multivariate sensor patterns evoked by animate and inanimate intact objects with no sounds, presented in a separate experiment, and tested on sensor patterns evoked by the blurred objects in the three auditory conditions. Results revealed that both sounds and words, relative to uninformative noise, significantly facilitated visual object category decoding between 300 and 500 ms after visual onset. We found no evidence for earlier facilitation by sounds than by words. These findings provide evidence for a semantic route of facilitation by both natural sounds and spoken words, whereby the auditory input first activates semantic object representations, which then modulate the visual processing of objects.
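The cross-decoding logic described above (train on intact-object patterns, test on blurred-object patterns) can be sketched in miniature. The sketch below uses synthetic stand-ins for MEG sensor patterns and a simple nearest-class-mean classifier; the actual study's classifier, sensor counts, and data are not specified here, so all of these are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 50

# Synthetic stand-ins for MEG sensor patterns: the two categories
# (animate vs. inanimate) differ along a shared "animacy" axis, and
# blurred trials are noisier versions of the same patterns.
axis = rng.normal(size=n_sensors)

def make_trials(n, sign, noise):
    # sign = +1 for animate, -1 for inanimate
    return sign * axis + rng.normal(scale=noise, size=(n, n_sensors))

# "Training" trials: intact objects from a separate experiment.
train_animate = make_trials(100, +1, 1.0)
train_inanimate = make_trials(100, -1, 1.0)

# Nearest-class-mean classifier: one prototype pattern per category.
proto_a = train_animate.mean(axis=0)
proto_i = train_inanimate.mean(axis=0)

def decode(X):
    dist_a = np.linalg.norm(X - proto_a, axis=1)
    dist_i = np.linalg.norm(X - proto_i, axis=1)
    return (dist_a < dist_i).astype(int)  # 1 = classified as animate

# "Test" trials: blurred objects (noisier) from the main experiment.
test_X = np.vstack([make_trials(50, +1, 2.0), make_trials(50, -1, 2.0)])
test_y = np.array([1] * 50 + [0] * 50)

accuracy = (decode(test_X) == test_y).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")  # above chance (0.5)
```

In the study, decoding accuracy in each auditory condition was compared in a time-resolved fashion, which is how the 300-500 ms facilitation window was identified; that temporal dimension is omitted from this sketch.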

