Adaptation to heading direction dissociates the roles of human MST and V6 in the processing of optic flow

2012 · Vol 108 (3) · pp. 794-801
Author(s): Velia Cardin, Lara Hemsworth, Andrew T. Smith

The extraction of optic flow cues is fundamental for successful locomotion. During forward motion, the focus of expansion (FoE), in conjunction with knowledge of eye position, indicates the direction in which the individual is heading. Therefore, it is expected that cortical brain regions that are involved in the estimation of heading will be sensitive to this feature. To characterize cortical sensitivity to the location of the FoE or, more generally, the center of flow (CoF) during visually simulated self-motion, we carried out a functional MRI (fMRI) adaptation experiment in several human visual cortical areas that are thought to be sensitive to optic flow parameters, namely, V3A, V6, MT/V5, and MST. In each trial, two optic flow patterns were sequentially presented, with the CoF located in either the same or different positions. With an adaptation design, an area sensitive to heading direction should respond more strongly to a pair of stimuli with different CoFs than to stimuli with the same CoF. Our results show such release from adaptation in areas MT/V5 and MST, and to a lesser extent V3A, suggesting the involvement of these areas in the processing of heading direction. The effect could not be explained either by differences in local motion or by attention capture. It was not observed to a significant extent in area V6 or in control area V1. The different patterns of responses observed in MST and V6, areas that are both involved in the processing of egomotion in macaques and humans, suggest distinct roles in the processing of visual cues for self-motion.

2010 · Vol 103 (4) · pp. 1865-1873
Author(s): Tao Zhang, Kenneth H. Britten

The ventral intraparietal area (VIP) of the macaque monkey is thought to be involved in judging heading direction based on optic flow. We recorded neuronal discharges in VIP while monkeys were performing a two-alternative, forced-choice heading discrimination task to relate quantitatively the activity of VIP neurons to monkeys' perceptual choices. Most VIP neurons were responsive to simulated heading stimuli and were tuned such that their responses changed across a range of forward trajectories. Using receiver operating characteristic (ROC) analysis, we found that most VIP neurons were less sensitive to small heading changes than was the monkey, although a minority of neurons were equally sensitive. Pursuit eye movements modestly yet significantly increased both neuronal and behavioral thresholds by approximately the same amount. Our results support the view that VIP activity is involved in self-motion judgments.
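The neuron-behavior comparison above rests on ROC analysis of single-trial spike-count distributions for pairs of headings: the area under the ROC curve gives the probability that an ideal observer reading out the neuron would discriminate the two headings correctly. A minimal sketch, not the authors' code, with simulated Poisson spike counts standing in for recorded data:

```python
import numpy as np

def roc_area(counts_a, counts_b):
    """Area under the ROC: probability that an ideal observer, comparing
    single-trial spike counts, correctly picks which heading produced a
    trial (ties counted as chance)."""
    a = np.asarray(counts_a)[:, None]
    b = np.asarray(counts_b)[None, :]
    # Compare every trial from one distribution against every trial
    # from the other.
    return np.mean(a > b) + 0.5 * np.mean(a == b)

# Simulated spike counts (spikes/trial) for two nearby headings.
rng = np.random.default_rng(0)
straight = rng.poisson(20, 100)  # heading straight ahead
offset = rng.poisson(24, 100)    # heading shifted a few degrees

area = roc_area(offset, straight)
print(area)  # between 0.5 (indiscriminable) and 1.0 (perfect)
```

Sweeping the heading offset and finding where this area crosses a criterion (e.g., 0.75) yields the neurometric threshold that the study compares with the monkey's psychophysical threshold.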


i-Perception
2017 · Vol 8 (3) · pp. 204166951770820
Author(s): Diederick C. Niehorster, Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
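Flow parsing with a below-unity gain can be modeled as gain-scaled subtraction: the visual system removes only a fraction of the self-motion (flow) component at the object's location from the object's retinal motion. A hypothetical sketch with made-up velocities, not values from the study:

```python
import numpy as np

def perceived_scene_motion(retinal_v, flow_v, gain):
    """Flow parsing as gain-scaled global subtraction: remove a fraction
    `gain` of the self-motion (flow) component at the object's location
    from the object's retinal motion. gain = 1 means complete, accurate
    subtraction; gain < 1 leaves a residual of unparsed flow."""
    return np.asarray(retinal_v, float) - gain * np.asarray(flow_v, float)

# Retinal motion of the object (deg/s) = its scene-relative motion plus
# the flow component due to self-motion at its location.
scene_v = np.array([2.0, 0.0])    # object moves rightward in the scene
flow_v = np.array([0.0, -4.0])    # self-motion adds downward flow
retinal_v = scene_v + flow_v

full = perceived_scene_motion(retinal_v, flow_v, gain=1.0)
partial = perceived_scene_motion(retinal_v, flow_v, gain=0.75)
print(full, partial)  # full recovers [2, 0]; partial keeps residual [2, -1]
```

In the retinal motion nulling method, the gain is measured empirically: the object's retinal motion is adjusted until it appears scene-stationary, and the gain is the fraction of the flow component that nulling reveals was subtracted.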


2006 · Vol 23 (1) · pp. 115-126
Author(s): Ian R. Winship, Douglas R.W. Wylie

Neurons sensitive to optic flow patterns have been recorded in the olivo-vestibulocerebellar pathway and extrastriate visual cortical areas in vertebrates, and in the visual neuropile of invertebrates. The complex spike activity (CSA) of Purkinje cells in the vestibulocerebellum (VbC) responds best to patterns of optic flow that result from either self-rotation or self-translation. Previous studies have suggested that these neurons have a receptive-field (RF) structure that “approximates” the preferred optic flowfield with a “bipartite” organization. In contrast, studies in invertebrate species indicate that optic flow sensitive neurons are precisely tuned to their preferred flowfield, such that the local motion sensitivities and local preferred directions within their RFs precisely match the local motion in that region of the preferred flowfield. In this study, CSA in the VbC of pigeons was recorded in response to a set of complex computer-generated optic flow stimuli, similar to those used in previous studies of optic flow neurons in primate extrastriate visual cortex, to test whether the receptive field was of a precise or bipartite organization. We found that these RFs were not precisely tuned to optic flow patterns. Rather, we conclude that these neurons have a bipartite RF structure that approximates the preferred optic flowfield by pooling motion subunits of only a few different direction preferences.


PLoS ONE
2021 · Vol 16 (1) · pp. e0243381
Author(s): Meaghan McManus, Laurence R. Harris

Human perception is based on expectations. We expect visual upright and gravity upright, sensed through vision, vestibular and other sensory systems, to agree. Equally, we expect that visual and vestibular information about self-motion will correspond. What happens when these assumptions are violated? Tilting a person from upright so that gravity is not where it should be impacts both visually induced self-motion (vection) and the perception of upright. How might the two be connected? Using virtual reality, we varied the strength of visual orientation cues, and hence the probability of participants experiencing a visual reorientation illusion (VRI) in which visual cues to orientation dominate gravity, using an oriented corridor and a starfield while also varying head-on-trunk orientation and body posture. The effectiveness of the optic flow in simulating self-motion was assessed by how much visual motion was required to evoke the perception that the participant had reached the position of a previously presented target. VRI was assessed by questionnaire. When participants reported higher levels of VRI, they also required less visual motion to evoke the sense of traveling through a given distance, regardless of head or body posture, or the type of visual environment. We conclude that experiencing a VRI, in which visual-vestibular conflict is resolved and the direction of upright is reinterpreted, affects the effectiveness of optic flow at simulating motion through the environment. Therefore, any apparent effects of head or body posture or type of environment are largely indirect, related instead to the level of VRI experienced by the observer. We discuss potential mechanisms for this, such as reinterpreting gravity information or altering the weighting of orientation cues.


2011 · Vol 106 (3) · pp. 1240-1249
Author(s): Velia Cardin, Andrew T. Smith

The principal visual cue to self-motion (egomotion) is optic flow, which is specified in terms of local 2D velocities in the retinal image without reference to depth cues. However, in general, points near the center of expansion of natural flow fields are distant, whereas those in the periphery are closer, creating gradients of horizontal binocular disparity. To assess whether the brain combines disparity gradients with optic flow when encoding egomotion, stereoscopic gradients were applied to expanding dot patterns presented to observers during functional MRI scanning. The gradients were radially symmetrical, disparity changing as a function of eccentricity. The depth cues were either consistent with egomotion (peripheral dots perceived as near and central dots perceived as far) or inconsistent (the reverse gradient, central dots near, peripheral dots far). The BOLD activity generated by these stimuli was compared in a range of predefined visual regions in 13 participants with good stereoacuity. Visual area V6, in the parieto-occipital sulcus, showed a unique pattern of results, responding well to all optic flow patterns but much more strongly when they were paired with consistent rather than inconsistent or zero-disparity gradients. Of the other areas examined, a region of the precuneus and parietoinsular vestibular cortex also differentiated between consistent and inconsistent gradients, but with weak or suppressive responses. V3A, V7, MT, and the ventral intraparietal area responded more strongly in the presence of a depth gradient but were indifferent to its depth-flow congruence. The results suggest that depth and flow cues are integrated in V6 to improve estimation of egomotion.


2020 · Vol 123 (4) · pp. 1369-1379
Author(s): Raul Rodriguez, Benjamin T. Crane

Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or whether they should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, spaced in 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, including every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of common causation was higher for alignment in cardinal than in noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11–20° for separations of 30–120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. When trials with common causation perception were compared with those without, inertial heading perception had a stronger bias toward visual stimulus direction.

NEW & NOTEWORTHY Optic flow ambiguously represents self-motion or environmental motion. When these are in different directions, it is uncertain whether they are integrated into a common percept. This study addresses that issue by determining whether the two modalities are perceived as consistent and by measuring their perceived directions to quantify the degree of influence. The visual stimulus can significantly influence the perceived inertial heading even when the two are perceived as inconsistent.
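The visual pull on the inertial heading described above can be summarized as a weighted circular combination of the two cue directions. A hypothetical sketch, with the visual weight `w_visual` a free illustrative parameter, not a quantity estimated in the study:

```python
import numpy as np

def biased_heading(inertial_deg, visual_deg, w_visual):
    """Perceived inertial heading pulled toward the visual heading by a
    weight w_visual (0 = no integration, 1 = full visual capture).
    Angles are combined on the circle via unit vectors, so the result is
    well defined even across the 0/360 wrap-around."""
    a = np.deg2rad([inertial_deg, visual_deg])
    x = (1 - w_visual) * np.cos(a[0]) + w_visual * np.cos(a[1])
    y = (1 - w_visual) * np.sin(a[0]) + w_visual * np.sin(a[1])
    return np.rad2deg(np.arctan2(y, x)) % 360

# A 30-deg visual-inertial separation with an intermediate visual weight
# yields a bias on the order of the 11-20 deg reported.
print(biased_heading(0, 30, 0.5))  # 15.0 deg: halfway toward the visual cue
```

The finding that bias persists even on trials judged incongruent corresponds, in this toy model, to w_visual staying above zero when common causation is not reported.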


2019
Author(s): Tyler S. Manning, Kenneth H. Britten

Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable even though smooth-pursuit eye movements often distort optic flow. Electrophysiological studies have identified visual areas in monkey cortex, including the dorsal medial superior temporal area (MSTd), that signal the true heading direction during pursuit. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.

Significance Statement: Sensory systems discount stimulation caused by the animal’s own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. Such a signal can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that eye-movement signals have little influence on neural activity in visual cortex, while feed-forward visual calculation has a strong effect and is likely important under real-world conditions.


2021 · Vol 12 (1)
Author(s): Caitlin S. Mallory, Kiah Hardcastle, Malcolm G. Campbell, Alexander Attinger, Isabel I. C. Low, ...

Neural circuits generate representations of the external world from multiple information streams. The navigation system provides an exceptional lens through which we may gain insights about how such computations are implemented. Neural circuits in the medial temporal lobe construct a map-like representation of space that supports navigation. This computation integrates multiple sensory cues and, in addition, is thought to require cues related to the individual’s movement through the environment. Here, we identify multiple self-motion signals, related to the position and velocity of the head and eyes, encoded by neurons in a key node of the navigation circuitry of mice, the medial entorhinal cortex (MEC). The representation of these signals is highly integrated with other cues in individual neurons. Such information could be used to compute the allocentric location of landmarks from visual cues and to generate internal representations of space.


2001 · Vol 85 (2) · pp. 724-734
Author(s): Holger G. Krapp, Roland Hengstenberg, Martin Egelhaaf

Integrating binocular motion information tunes wide-field direction-selective neurons in the fly optic lobe to respond preferentially to specific optic flow fields. This is shown by measuring the local preferred directions (LPDs) and local motion sensitivities (LMSs) at many positions within the receptive fields of three types of anatomically identifiable lobula plate tangential neurons: the three horizontal system (HS) neurons, the two centrifugal horizontal (CH) neurons, and three heterolateral connecting elements. The latter impart to two of the HS and to both CH neurons a sensitivity to motion from the contralateral visual field. Thus in two HS neurons and both CH neurons, the response field comprises part of the ipsi- and contralateral visual hemispheres. The distributions of LPDs within the binocular response fields of each neuron show marked similarities to the optic flow fields created by particular types of self-movements of the fly. Based on the characteristic distributions of local preferred directions and motion sensitivities within the response fields, the functional role of the respective neurons in the context of behaviorally relevant processing of visual wide-field motion is discussed.


2001 · Vol 86 (2) · pp. 692-702
Author(s): Michaël B. Zugaro, Eiichi Tabuchi, Céline Fouquier, Alain Berthoz, Sidney I. Wiener

Head direction (HD) cells discharge selectively in macaques, rats, and mice when they orient their head in a specific (“preferred”) direction. Preferred directions are influenced by visual cues as well as idiothetic self-motion cues derived from vestibular, proprioceptive, motor efferent copy, and command signals. To distinguish the relative importance of active locomotor signals, we compared HD cell response properties in 49 anterodorsal (AD) thalamic HD cells of six male Long-Evans rats during active displacements in a foraging task as well as during passive rotations. Since thalamic HD cells typically stop firing if the animals are tightly restrained, the rats were trained to remain immobile while drinking water distributed at intervals from a small reservoir at the center of a rotatable platform. The platform was rotated in a clockwise/counterclockwise oscillation to record directional responses in the stationary animals while the surrounding environmental cues remained stable. The peak rate of directional firing decreased by 27% on average during passive rotations (r² = 0.73, P < 0.001). Individual cells recorded in sequential sessions (n = 8) reliably showed comparable reductions in peak firing, but simultaneously recorded cells did not necessarily produce identical responses. All of the HD cells maintained the same preferred directions during passive rotations. These results are consistent with the hypothesis that the level of locomotor activity provides a state-dependent modulation of the response magnitude of AD HD cells. This could result from diffusely projecting neuromodulatory systems associated with motor state.

