Detecting object movement during self-movement: The importance of local motion contrast, position change and optic flow

2010 · Vol 9 (8) · p. 635
Author(s): S. Rushton, P. Warren
2001 · Vol 85 (2) · pp. 724-734
Author(s): Holger G. Krapp, Roland Hengstenberg, Martin Egelhaaf

Integrating binocular motion information tunes wide-field direction-selective neurons in the fly optic lobe to respond preferentially to specific optic flow fields. This is shown by measuring the local preferred directions (LPDs) and local motion sensitivities (LMSs) at many positions within the receptive fields of three types of anatomically identifiable lobula plate tangential neurons: the three horizontal system (HS) neurons, the two centrifugal horizontal (CH) neurons, and three heterolateral connecting elements. The latter impart to two of the HS and to both CH neurons a sensitivity to motion from the contralateral visual field. Thus in two HS neurons and both CH neurons, the response field comprises part of the ipsi- and contralateral visual hemispheres. The distributions of LPDs within the binocular response fields of each neuron show marked similarities to the optic flow fields created by particular types of self-movements of the fly. Based on the characteristic distributions of local preferred directions and motion sensitivities within the response fields, the functional role of the respective neurons in the context of behaviorally relevant processing of visual wide-field motion is discussed.
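The matched-filter idea in this abstract can be made concrete with a small sketch. Everything below is an illustrative assumption, not the measured HS/CH response fields: it generates the optic flow produced by a pure self-rotation on a grid of viewing directions and scores how well a set of local preferred directions (LPDs), weighted by local motion sensitivities (LMSs), aligns with that flowfield.

```python
import numpy as np

def rotation_flow(azimuth, elevation, axis):
    """Optic flow at viewing direction (azimuth, elevation) for unit
    self-rotation about `axis`. The flow is -axis x d, which is already
    tangent to the viewing sphere because axis x d is perpendicular to d."""
    d = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    return -np.cross(axis, d)

def matched_filter_score(lpds, sensitivities, dirs, axis):
    """Sensitivity-weighted mean cosine between measured LPDs and the
    rotational flowfield. A score near 1 means the response field closely
    approximates the optic flow produced by that self-rotation."""
    score, total = 0.0, 0.0
    for lpd, w, (az, el) in zip(lpds, sensitivities, dirs):
        f = rotation_flow(az, el, axis)
        fn, ln = np.linalg.norm(f), np.linalg.norm(lpd)
        if fn > 1e-9 and ln > 1e-9:  # skip singular points of the flowfield
            score += w * np.dot(lpd, f) / (fn * ln)
            total += w
    return score / total if total else 0.0
```

If the LPD map is sampled from the flowfield of a yaw rotation, the score for that yaw axis is 1 and drops for any other rotation axis, which is the sense in which such a neuron acts as a matched filter for a specific self-movement.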


Perception · 1996 · Vol 25 (Suppl. 1) · p. 87
Author(s): I Lamouret, V Cornilleau-Pérès, J Droulez

Local motion detection mechanisms generally leave one component of the optic flow indeterminate. One way to solve this ‘aperture problem’ is to compute the optic flow that minimises some smoothing constraint. With iterative schemes, the computed velocity array is suboptimal relative to the constraint until the process has converged. Under the assumption that the iteration rate is low enough for suboptimal flows to be perceived at short stimulus durations, iterative gradient models give an accurate description of biases in the perception of tilted line velocity. We examine whether this approach can be applied to moving sinusoidal plaids. Our simulations agree with a number of psychophysical results on both speed and direction perception. In particular, we show that the effect of stimulus duration on the perceived direction of type II plaids [Yo and Wilson, 1992, Vision Research, 32(1)] can be accounted for without recourse to second-order mechanisms. The effects of contrast and component directions on the evolution rate of this bias are well reproduced. The model also successfully describes the effect of spatial frequency, and data obtained with gratings. These results suggest that iterative gradient schemes can model the dynamics of interactions between local velocity detectors, as revealed by psychophysical experiments with lines and plaids.
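A minimal sketch of an iterative scheme of this kind, under assumed parameters and not the authors' exact implementation: each plaid component constrains only the flow component along its own normal (the aperture problem), and repeated smoothing and re-projection drives the estimate from a biased initial vector average toward the intersection-of-constraints (IOC) velocity. Early iterates are biased away from the IOC solution, mimicking the short-duration direction errors reported for type II plaids.

```python
import numpy as np

def ioc_iterate(normals, speeds, n_iter):
    """normals: unit normal per grating; speeds: speed along each normal.
    Each grating constrains the flow v only via n_i . v = s_i."""
    # initial guess: vector average of the normal-flow vectors (biased for
    # type II plaids, where the IOC velocity lies outside the components)
    v = np.mean([s * n for n, s in zip(normals, speeds)], axis=0)
    for _ in range(n_iter):
        # project the current estimate onto each constraint line, then
        # average the projections (a simple smoothing step)
        proj = [v + (s - np.dot(n, v)) * n for n, s in zip(normals, speeds)]
        v = np.mean(proj, axis=0)
    return v
```

With two non-parallel constraint lines, this averaged-projection iteration converges linearly to their intersection, so the perceived direction predicted by reading out an intermediate iterate shifts toward the IOC direction as stimulus duration grows.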


2004 · Vol 4 (8) · p. 609
Author(s): J. Duijnhouwer, J. A. Beintema, R. J. A. Wezel, A. V. Berg

2000 · Vol 84 (5) · pp. 2658-2669
Author(s): Richard T. Born

Microelectrode recording and 2-deoxyglucose (2DG) labeling were used to investigate center-surround interactions in the middle temporal visual area (MT) of the owl monkey. These techniques revealed columnar groups of neurons whose receptive fields had opposite types of center-surround interaction with respect to moving visual stimuli. In one type of column, neurons responded well to objects such as a single bar or spot but poorly to large textured stimuli such as random dots. This was often because the receptive fields had antagonistic surrounds: surround motion in the same direction as that preferred by the center suppressed responses, thus rendering these neurons unresponsive to wide-field motion. In the second set of complementary, interdigitated columns, neuronal receptive fields had reinforcing surrounds and responded optimally to wide-field motion. This functional organization could not be accounted for by systematic differences in binocular disparity. Within both column types, neurons whose receptive fields exhibited center-surround interactions were found less frequently in the input layers compared with the other layers. Additional tests were done on single units to examine the nature of the center-surround interactions. The direction tuning of the surround was broader than that of the center, and the preferred direction, with respect to that of the center, tended to be either in the same or opposite direction and only rarely in orthogonal directions. Surround motion at various velocities modulated the overall responsiveness to centrally placed moving stimuli, but it did not produce shifts in the peaks of the center's tuning curves for either direction or speed. In layers 3B and 5 of the local motion processing columns, a number of neurons responded only to local motion contrast but did so over a region of the visual field that was much larger than the optimal stimulus size. The central feature of this receptive field type was the generalization of surround antagonism over retinotopic space—a property similar to other “complex” receptive fields described previously. The columnar organization of different types of center-surround interactions may reflect the initial segregation of visual motion information into wide-field and local motion contrast systems that serve complementary functions in visual motion processing. Such segregation appears to occur at later stages of the macaque motion processing stream, in the medial superior temporal area (MST), and has also been described in invertebrate visual systems where it appears to be involved in the important function of distinguishing background motion from object motion.
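The two column types can be caricatured with a toy response model. This is an assumed functional form for illustration, not Born's fitted model: direction tuning is a half-rectified cosine, the surround is more broadly tuned than the center (as reported above), and the sign of the surround gain distinguishes antagonistic (local motion contrast) from reinforcing (wide-field) receptive fields.

```python
import numpy as np

def mt_response(center_dir, surround_dir, preferred, surround_gain):
    """Toy MT-like unit. surround_gain < 0: antagonistic surround;
    surround_gain > 0: reinforcing. Pass surround_dir=None when no
    stimulus falls in the surround (e.g., a lone bar or spot)."""
    def tuning(d, width):
        # half-rectified cosine tuning around the preferred direction;
        # larger `width` gives the broader tuning reported for surrounds
        return max(float(np.cos((d - preferred) / width)), 0.0)
    center = tuning(center_dir, 1.0)
    surround = 0.0 if surround_dir is None else surround_gain * tuning(surround_dir, 2.0)
    # the surround modulates the center drive rather than acting alone
    return max(center * (1.0 + surround), 0.0)

# a lone bar drives only the center; wide-field motion drives both
bar = mt_response(0.0, None, preferred=0.0, surround_gain=-0.8)
wide_local = mt_response(0.0, 0.0, preferred=0.0, surround_gain=-0.8)    # antagonistic column
wide_widefield = mt_response(0.0, 0.0, preferred=0.0, surround_gain=0.5) # reinforcing column
```

In this sketch the antagonistic unit responds strongly to the isolated bar but is suppressed when the surround moves in the same direction as the center, while the reinforcing unit responds best to wide-field motion, matching the two column types described above.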


2020 · Vol 20 (9) · p. 12
Author(s): Lucy Evans, Rebecca A. Champion, Simon K. Rushton, Daniela Montaldi, Paul A. Warren

i-Perception · 2017 · Vol 8 (3) · pp. 204166951770820
Author(s): Diederick C. Niehorster, Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
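The gain concept quantified above reduces to simple vector arithmetic. The sketch below is illustrative (assumed vectors, not the authors' nulling procedure): retinal object motion is the sum of scene-relative object motion and the self-motion flow at the object's location; flow parsing subtracts a gain-scaled estimate of that self-motion component, and any gain below unity leaves residual self-motion flow in the percept.

```python
import numpy as np

def perceived_object_motion(retinal_motion, self_motion_flow, gain):
    """Flow parsing as gain-scaled subtraction of the local
    self-motion flow component from the retinal motion."""
    return retinal_motion - gain * self_motion_flow

scene_relative = np.array([1.0, 0.0])  # true object motion in the scene
self_flow = np.array([0.0, -2.0])      # flow at the object due to self-motion
retinal = scene_relative + self_flow   # what the retina actually receives

full = perceived_object_motion(retinal, self_flow, gain=1.0)     # veridical
partial = perceived_object_motion(retinal, self_flow, gain=0.7)  # under-compensation
```

With a gain of 1.0 the recovered motion equals the scene-relative motion exactly; with the sub-unity gains reported above, the percept retains 30% of the self-motion flow component, biasing perceived object direction toward the retinal motion.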


2006 · Vol 23 (1) · pp. 115-126
Author(s): Ian R. Winship, Douglas R.W. Wylie

Neurons sensitive to optic flow patterns have been recorded in the olivo-vestibulocerebellar pathway and extrastriate visual cortical areas in vertebrates, and in the visual neuropile of invertebrates. The complex spike activity (CSA) of Purkinje cells in the vestibulocerebellum (VbC) responds best to patterns of optic flow that result from either self-rotation or self-translation. Previous studies have suggested that these neurons have a receptive-field (RF) structure that “approximates” the preferred optic flowfield with a “bipartite” organization. In contrast, studies in invertebrate species indicate that optic flow sensitive neurons are precisely tuned to their preferred flowfield, such that the local motion sensitivities and local preferred directions within their RFs precisely match the local motion in that region of the preferred flowfield. In this study, CSA in the VbC of pigeons was recorded in response to a set of complex computer-generated optic flow stimuli, similar to those used in previous studies of optic flow neurons in primate extrastriate visual cortex, to test whether the receptive field was of a precise or bipartite organization. We found that these RFs were not precisely tuned to optic flow patterns. Rather, we conclude that these neurons have a bipartite RF structure that approximates the preferred optic flowfield by pooling motion subunits of only a few different direction preferences.
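The precise-versus-bipartite distinction can be illustrated with a one-dimensional toy example (assumed geometry, not the pigeon data): translational flow along the horizon reverses direction across the focus of expansion, a precise template matches the local flow everywhere, and a bipartite template pools just two direction preferences, one per half of the field.

```python
import numpy as np

azimuths = np.linspace(-np.pi / 2, np.pi / 2, 21)
# signed horizontal flow along the horizon for forward translation,
# with the focus of expansion at azimuth 0: flow reverses sign there
flow = np.sin(azimuths)

# precise template: matches the local flow magnitude and sign everywhere
precise = flow / np.linalg.norm(flow)
# bipartite template: only two pooled direction preferences (left/right)
bipartite = np.sign(azimuths)
bipartite = bipartite / np.linalg.norm(bipartite)

def match(template, field):
    """Cosine similarity between a unit-norm template and a flowfield."""
    return float(np.dot(template, field) / np.linalg.norm(field))

precise_score = match(precise, flow)
bipartite_score = match(bipartite, flow)
```

The bipartite template scores noticeably below the precise one yet still captures most of the flowfield structure, which is the sense in which a coarse pooling of a few direction preferences can “approximate” the preferred optic flowfield.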


2012 · Vol 108 (3) · pp. 794-801
Author(s): Velia Cardin, Lara Hemsworth, Andrew T. Smith

The extraction of optic flow cues is fundamental for successful locomotion. During forward motion, the focus of expansion (FoE), in conjunction with knowledge of eye position, indicates the direction in which the individual is heading. Therefore, it is expected that cortical brain regions that are involved in the estimation of heading will be sensitive to this feature. To characterize cortical sensitivity to the location of the FoE or, more generally, the center of flow (CoF) during visually simulated self-motion, we carried out a functional MRI (fMRI) adaptation experiment in several human visual cortical areas that are thought to be sensitive to optic flow parameters, namely, V3A, V6, MT/V5, and MST. In each trial, two optic flow patterns were sequentially presented, with the CoF located in either the same or different positions. With an adaptation design, an area sensitive to heading direction should respond more strongly to a pair of stimuli with different CoFs than to stimuli with the same CoF. Our results show such release from adaptation in areas MT/V5 and MST, and to a lesser extent V3A, suggesting the involvement of these areas in the processing of heading direction. The effect could not be explained either by differences in local motion or by attention capture. It was not observed to a significant extent in area V6 or in control area V1. The different patterns of responses observed in MST and V6, areas that are both involved in the processing of egomotion in macaques and humans, suggest distinct roles in the processing of visual cues for self-motion.

