Transsaccadic feature interactions in multiple reference frames: an fMRIa study

2018
Author(s):
Bianca R. Baltaretu
Benjamin T. Dunkley
Simona Monaco
Ying Chen
J. Douglas Crawford

Abstract
Transsaccadic integration of visual features can operate in various frames of reference, but the corresponding neural mechanisms have not been differentiated. A recent fMRIa (adaptation) study identified two cortical regions in supramarginal gyrus (SMG) and extrastriate cortex that were sensitive to transsaccadic changes in stimulus orientation (Dunkley et al., 2016). Here, we modified this paradigm to identify the neural correlates of transsaccadic comparison of object orientations in: 1) Spatially Congruent (SC), 2) Retinally Congruent (RC), or 3) Spatially Incongruent (SI) coordinates. Functional data were recorded from 12 human participants while they observed a grating (oriented 45° or 135°) before a saccade and then judged whether a post-saccadic grating (in the SC, RC, or SI configuration) had the same or a different orientation. Our analysis focused on areas that showed significant repetition suppression (Different > Same) or repetition enhancement (Same > Different) in the BOLD response. Several cortical areas were significantly modulated in all three conditions: premotor/motor cortex (likely related to the manual response) and the posterior-middle intraparietal sulcus. In the SC condition, uniquely activated areas included the left SMG and left lateral occipitotemporal gyrus (LOtG). In the RC condition, unique areas included the inferior frontal gyrus and left lateral BA 7. In the SI condition, uniquely activated areas included the frontal eye field, medial BA 7, and right LOtG. Overall, the SC results differed significantly from both the RC and SI results. These data suggest that different cortical networks are used to compare pre- and post-saccadic orientation information, depending on the spatial nature of the task.
Significance Statement
Every time one makes a saccade, the brain must compare and integrate stored visual information with new information.
It has recently been shown that 'transsaccadic integration' of visual object orientation involves specific areas within parietal and occipital cortex (Dunkley et al., 2016). Here, we show that this pattern of cortical activation also depends on the spatial nature of the task: different frontal, parietal, and occipital regions are engaged depending on whether the visual object is fixed relative to space, to the eye, or to neither. More generally, these findings suggest that different aspects of transsaccadic integration flexibly employ different cortical networks.

2018
Vol 29 (7)
pp. 3023-3033
Author(s):
Johan N. Lundström
Christina Regenbogen
Kathrin Ohla
Janina Seubert

Abstract
While matched crossmodal information is known to facilitate object recognition, it is unclear how our perceptual systems encode the more gradual congruency variations that occur in our natural environment. Combining visual objects with odor mixtures to create a gradual increase in semantic object overlap, we demonstrate high behavioral acuity to linear variations of olfactory-visual overlap in a healthy adult population. This effect was paralleled by a linear increase in cortical activation at the intersection of the occipital fusiform and lingual gyri, indicating linear encoding of crossmodal semantic overlap in visual object recognition networks. Effective connectivity analyses revealed that this integration of olfactory and visual information was achieved by direct information exchange between olfactory and visual areas. In addition, a parallel pathway through the superior frontal gyrus was increasingly recruited as stimulus combinations became more ambiguous. These findings demonstrate that cortical structures involved in object formation are inherently crossmodal and encode sensory overlap in a linear manner. The results further demonstrate that prefrontal control of these processes is likely required for ambiguous stimulus combinations, a fact of high ecological relevance that may be poorly captured by common task designs juxtaposing only congruency and incongruency.
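The linear-encoding claim above amounts to a least-squares line relating graded semantic overlap to BOLD amplitude. The sketch below illustrates that model class only; the overlap levels, beta estimates, and function name are our own invented examples, not the authors' data or analysis pipeline.

```python
import numpy as np

def linear_fit(overlap, bold):
    """Ordinary least-squares line relating graded semantic overlap
    (0 = fully incongruent .. 1 = fully congruent) to BOLD amplitude."""
    slope, intercept = np.polyfit(overlap, bold, deg=1)
    return slope, intercept

# Hypothetical overlap levels and beta estimates (illustration only)
overlap = [0.0, 0.25, 0.5, 0.75, 1.0]
bold = [0.10, 0.22, 0.31, 0.41, 0.52]
slope, intercept = linear_fit(overlap, bold)
# A reliably positive slope would indicate linear encoding of overlap
```

In practice such a slope would be estimated per voxel and tested against zero across participants; the toy fit above only shows the shape of the model.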


Author(s):  
Jessica Taytard
Camille Gand
Marie-Cécile Niérat
Romain Barthes
Sophie Lavault
...  

In healthy humans, inspiratory threshold loading deteriorates cognitive performance. This can result from motor-cognitive interference (activation of motor respiratory-related cortical networks competing with the allocation of executive resources), sensory-cognitive interference (dyspnea shifting the attentional focus), or both. We hypothesized that inspiratory loading would concomitantly induce dyspnea, activate motor respiratory-related cortical networks, and deteriorate cognitive performance. We reasoned that concomitant activation of cortical networks and cognitive deterioration would be compatible with motor-cognitive interference, particularly if executive cognitive performance were predominantly altered. Symmetrically, we reasoned that a predominant alteration of attention-dependent performance would suggest sensory-cognitive interference. Twenty-five volunteers (12 men; aged 19.5-51.5 years) performed the Paced Auditory Serial Addition Test (PASAT-A and -B; calculation capacity, working memory, attention), the Trail Making Test (TMT-A, visuospatial exploration capacity; TMT-B, visuospatial exploration capacity and attention), and the Corsi block-tapping test (visuospatial memory, short-term and working memory) during unloaded breathing and inspiratory threshold loading, in random order. Loading consistently induced dyspnea and respiratory-related brain activation. It was associated with deteriorations in PASAT-A (52 [45.5; 55.5] (median [interquartile range]) to 48 [41; 54.5], p = 0.01), PASAT-B (55 [47.5; 58] to 51 [44.5; 57.5], p = 0.01), and TMT-B (44 s [36; 54.5] to 53 s [42; 64], p = 0.01), but did not affect TMT-A or Corsi performance. The concomitance of cortical activation and cognitive performance deterioration is compatible with competition for cortical resources (motor-cognitive interference), while the profile of cognitive impairment (PASAT and TMT-B affected, but not TMT-A and Corsi) is compatible with a contribution of attentional distraction (sensory-cognitive interference). Both mechanisms are therefore likely at play.


2009
Vol 21 (4)
pp. 821-836
Author(s):
Benjamin Straube
Antonia Green
Susanne Weis
Anjan Chatterjee
Tilo Kircher

In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates the neural basis of memory for bimodal speech and gesture representations. In this fMRI study, 12 participants were presented with video clips showing an actor performing meaningful metaphoric gestures (MG), unrelated free gestures (FG), or no arm and hand movements (NG) accompanying sentences with abstract content. After the fMRI session, the participants performed a recognition task. Behaviorally, the participants showed the highest hit rate for sentences accompanied by meaningful metaphoric gestures. Despite comparable old/new discrimination performance (d′) across the three conditions, we obtained distinct memory-related left-hemispheric activations in the inferior frontal gyrus (IFG), the premotor cortex (BA 6), and the middle temporal gyrus (MTG), as well as significant correlations between hippocampal activation and memory performance in the metaphoric gesture condition. In contrast, unrelated speech and gesture information (FG) was processed in areas of the left occipitotemporal and cerebellar region and the right IFG, as was the no-gesture condition (NG). We propose that the specific left-lateralized activation pattern for the metaphoric speech-gesture sentences reflects semantic integration of speech and gestures. These results provide novel evidence about the neural integration of abstract speech and gestures as it contributes to subsequent memory performance.
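The d′ measure cited above is the standard signal-detection index: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the rates below are invented for illustration, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Old/new discrimination sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical recognition data for one gesture condition
sensitivity = d_prime(0.80, 0.20)
```

Equal hit and false-alarm rates give d′ = 0 (chance discrimination), which is why conditions can differ in hit rate while remaining matched on d′, as reported above.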


2006
Vol 96 (1)
pp. 352-362
Author(s):
Sabine M. Beurze
Stan Van Pelt
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
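The gain account sketched at the end of this abstract can be written down directly: express target and hand in a common eye-centered frame, take their difference, and let multiplicative gains scale each signal. This is a sketch of that idea under our own illustrative coordinates and gain values, not the authors' fitted model.

```python
import numpy as np

def movement_vector(target, hand, eye, g_target=1.0, g_hand=1.0):
    """Hand-to-target difference vector computed in an eye-centered
    frame, with multiplicative gains on each eye-centered signal."""
    t = g_target * (np.asarray(target, float) - np.asarray(eye, float))
    h = g_hand * (np.asarray(hand, float) - np.asarray(eye, float))
    return t - h

# With unit gains the planned vector is independent of gaze direction ...
v1 = movement_vector([0.3, 0.4, 0.5], [0.1, 0.0, 0.5], eye=[0.0, 0.0, 0.0])
# ... but a miscalibrated hand gain makes the plan depend on eye position,
# producing gaze-dependent pointing errors like those reported above.
v2 = movement_vector([0.3, 0.4, 0.5], [0.1, 0.0, 0.5], eye=[0.2, 0.0, 0.0],
                     g_hand=0.9)
```

With both gains at 1, shifting the eye position cancels out of the difference; any gain imbalance reintroduces an eye-position term, which is the signature the reference frame analysis detects.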


2013
Vol 26 (5)
pp. 465-482
Author(s):
Michelle L. Cadieux
David I. Shore

Performance on tactile temporal order judgments (TOJs) is impaired when the hands are crossed over the midline. The cause of this effect appears to be tied to the use of an external reference frame, most likely based on visual information. We measured the effect of degrading the external reference frame on the crossed-hands deficit through restriction of visual information across three experiments. Experiments 1 and 2 examined three visual conditions (eyes open–lights on, eyes open–lights off, and eyes closed–lights off) while manipulating response demands; no effect of visual condition was seen. In Experiment 3, response demands were altered to be maximally connected to the internal reference frame and only two visual conditions were tested: eyes open–lights on and eyes closed–lights off. Blindfolded participants had a reduced crossed-hands deficit. Results are discussed in terms of the time needed to recode stimuli from an internal to an external reference frame and the role of conflict between these two reference frames in causing this effect.


2012
Vol 25 (0)
pp. 122
Author(s):
Michael Barnett-Cowan
Jody C. Culham
Jacqueline C. Snow

The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol ‘p’ presented in various orientations was the letter ‘p’ or ‘d’ following active touch. The average of ‘p-to-d’ and ‘d-to-p’ transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.
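The equally weighted vectorial-sum model mentioned above can be written out directly: each cue contributes a unit vector in its indicated direction, and the predicted PU is the direction of their weighted sum. This is a sketch of the model class only (angles in degrees from gravitational upright; the function name and example tilts are ours, and the constant leftward bias reported above is omitted).

```python
import math

def predicted_pu(body_deg, gravity_deg, w_body=0.5, w_gravity=0.5):
    """Perceptual upright as the direction of the weighted vector sum
    of the body-axis and gravity cues (0 deg = gravitational upright)."""
    x = (w_body * math.sin(math.radians(body_deg))
         + w_gravity * math.sin(math.radians(gravity_deg)))
    y = (w_body * math.cos(math.radians(body_deg))
         + w_gravity * math.cos(math.radians(gravity_deg)))
    return math.degrees(math.atan2(x, y))

# Upright observer: both cues agree, so the predicted PU is 0 deg.
# Observer tilted 90 deg: equal weights predict a PU midway, at 45 deg.
tilted = predicted_pu(body_deg=90, gravity_deg=0)
```

Unequal weights would pull the prediction toward the more heavily weighted cue, which is how such models distinguish "equally influenced" from cue-dominated behavior.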


2008
Vol 20 (3-4)
pp. 71-81
Author(s):
Stephanie L. Simon-Dack
P. Dennis Rodriguez
Wolfgang A. Teder-Sälejärvi

Imaging, transcranial magnetic stimulation, and psychophysiological recordings of the congenitally blind have confirmed functional activation of the visual cortex but have not extensively explained the functional significance of these activation patterns in detail. This review systematically examines research on the role of the visual cortex in processing spatial and non-visual information, highlighting research on individuals with early- and late-onset blindness. Here, we concentrate on the methods utilized in studying visual cortical activation in early blind participants, including positron emission tomography (PET), functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), and electrophysiological data, specifically event-related potentials (ERPs). This paper summarizes and discusses the findings of these studies. We hypothesize how mechanisms of cortical plasticity are expressed in congenitally blind as compared to adventitiously blind and short-term visually deprived sighted participants, and we discuss potential approaches for further investigation of these mechanisms in future research.


2016
Vol 23 (2)
pp. 220-227
Author(s):
Tal Benoliel
Noa Raz
Tamir Ben-Hur
Netta Levin

Background: We have recently suggested that delayed visual evoked potential (VEP) latencies in the fellow eye (FE) of optic neuritis patients reflect a cortical adaptive process that compensates for the delayed arrival of visual information via the affected eye (AE). Objective: To define the cortical mechanism that underlies this adaptive process. Methods: Cortical activations to moving stimuli and connectivity patterns within the visual network were tested using functional magnetic resonance imaging (fMRI) in 11 recovered optic neuritis patients and in 11 matched controls. Results: Reduced cortical activation in early but not in higher visual areas was seen for both eyes, compared to controls. VEP latencies in the AEs inversely correlated with activation in motion-related visual cortices. Inter-eye differences in VEP latencies inversely correlated with cortical activation following FE stimulation, throughout the visual hierarchy. Functional correlation between visual regions was more pronounced for the FE than for the AE. Conclusion: The different correlation patterns between VEP latencies and cortical activation in the AE and FE support a different pathophysiology of VEP prolongation in each eye. Similar cortical activation patterns in both eyes, and the fact that stronger links between early and higher visual areas were found following FE stimulation, suggest a cortical modulatory process in the FE.


eLife
2017
Vol 6
Author(s):
Sina Tafazoli
Houman Safaai
Gioia De Franceschi
Federica Bianca Rosselli
Walter Vanzella
...  

Rodents are emerging as increasingly popular models of visual function. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction in the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to rodents as promising models for dissecting the neuronal circuitry underlying transformation-tolerant recognition of visual objects.

