Tuning to a hip-hop beat: Pursuit eye movements reveal processing of biological motion

2021 ◽  
Author(s):  
David Souto ◽  
Kyle Nacilla ◽  
Mateusz Bocian

Smooth pursuit eye movements can anticipate predictable movement patterns, thus achieving their goal of reducing retinal motion blur. Oculomotor predictions have been thought to rely on an internal model of the target kinematics. Since biological motion is one of the most important visual stimuli regulating human interaction, we asked whether an internal model of biological motion makes a specific contribution to driving pursuit eye movements. Unlike previous studies, we exploited the cyclical nature of walking to measure the ability of eye movements to track the velocity oscillations of the hip of point-light walkers. We quantified the quality of tracking by cross-correlating pursuit and hip velocity oscillations. We found a robust correlation between the two signals, even along the horizontal dimension, where changes in velocity during the stepping cycle are very subtle. Inversion of the walker and presentation of the hip-dot without context incurred the same additional phase lag along the horizontal dimension, whereas a scrambled walker incurred no phase lag relative to the upright walker. These findings support the view that local information beyond the hip-dot, but not necessarily configural information, contributes to predicting the hip kinematics that control pursuit. We also found a smaller phase lag for inverted walkers for pursuit along the vertical dimension compared with upright and scrambled walkers, indicating that inversion does not simply reduce prediction. We show that pursuit eye movements provide an implicit and robust measure of the processing of biological motion signals.
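
The cross-correlation measure described above lends itself to a compact implementation. Below is a minimal Python sketch of how the pursuit-to-hip lag could be estimated from the peak of the normalized cross-correlation; the function and variable names are hypothetical, and the two velocity traces are assumed to be equal-length, uniformly sampled, and expressed along a single dimension:

```python
import numpy as np
from scipy.signal import correlate

def pursuit_hip_lag(pursuit_vel, hip_vel, fs):
    """Estimate the lag (in seconds) of pursuit velocity relative to hip
    velocity from the peak of the normalized cross-correlation.
    A positive lag means pursuit trails the hip."""
    p = pursuit_vel - pursuit_vel.mean()
    h = hip_vel - hip_vel.mean()
    xcorr = correlate(p, h, mode="full")
    xcorr /= np.sqrt(np.sum(p**2) * np.sum(h**2))  # bound peak to [-1, 1]
    lags = np.arange(-len(h) + 1, len(p)) / fs     # lag axis in seconds
    k = np.argmax(xcorr)
    return lags[k], xcorr[k]
```

Given the walker's stepping frequency f, a lag of t seconds corresponds to a phase lag of 360·f·t degrees of the gait cycle, which is how a time-domain lag can be reported as a phase lag.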

1999 ◽  
Vol 88 (3) ◽  
pp. 209-219 ◽  
Author(s):  
Gunvant K. Thaker ◽  
David E. Ross ◽  
Robert W. Buchanan ◽  
Helene M. Adami ◽  
Deborah R. Medoff

2010 ◽  
Vol 50 (24) ◽  
pp. 2721-2728 ◽  
Author(s):  
Sébastien Coppe ◽  
Jean-Jacques Orban de Xivry ◽  
Marcus Missal ◽  
Philippe Lefèvre

2015 ◽  
Vol 113 (5) ◽  
pp. 1377-1399 ◽  
Author(s):  
T. Scott Murdison ◽  
Guillaume Leclercq ◽  
Philippe Lefèvre ◽  
Gunnar Blohm

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
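
For illustration only (this is not the authors' published architecture), the kind of gain-modulated feedforward network described above can be sketched in a few lines of Python; the layer sizes, signal formats, and names here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed input layout: 2-D retinal velocity plus 3-D eye/head
# orientation and velocity signals; output is a 3-D pursuit command.
n_in = 2 + 3 + 3 + 3 + 3
n_hidden, n_out = 40, 3

W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))

def pursuit_command(retinal_vel, eye_ori, eye_vel, head_ori, head_vel):
    """One feedforward pass. Because the hidden nonlinearity mixes retinal
    motion with extraretinal signals, hidden units can acquire eye- and
    head-position-dependent gain fields once the weights are trained
    (e.g., by backpropagation against spatially correct 3-D commands)."""
    x = np.concatenate([retinal_vel, eye_ori, eye_vel, head_ori, head_vel])
    hidden = np.tanh(W1 @ x)  # gain-modulated hidden layer
    return W2 @ hidden        # spatially referenced pursuit command
```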


2019 ◽  
Vol 121 (5) ◽  
pp. 1787-1797 ◽  
Author(s):  
David Souto ◽  
Jayesha Chudasama ◽  
Dirk Kerzel ◽  
Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world’s global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements.

NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object’s motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object’s global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
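
The reduced-sampling account can be illustrated with a toy simulation (not the authors' analysis; names and parameters are hypothetical): pooling fewer local direction samples makes the vector-averaged global direction noisier, so more coherent signal is needed to reach the same discrimination accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def pooled_direction(coherence, n_apertures=100, n_samples=100,
                     signal_dir=0.0):
    """Vector-average the local directions from a random subset of
    apertures. A fraction `coherence` of apertures carry the signal
    direction; the rest move in random directions. Lowering `n_samples`
    mimics the reduced sampling proposed for motion opposite to pursuit."""
    n_signal = int(round(coherence * n_apertures))
    dirs = np.concatenate([
        np.full(n_signal, signal_dir),
        rng.uniform(0.0, 2.0 * np.pi, n_apertures - n_signal),
    ])
    subset = rng.choice(dirs, size=min(n_samples, n_apertures),
                        replace=False)
    return np.angle(np.mean(np.exp(1j * subset)))  # pooled direction (rad)
```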


2010 ◽  
Vol 104 (4) ◽  
pp. 2103-2115 ◽  
Author(s):  
Gunnar Blohm ◽  
Philippe Lefèvre

Smooth pursuit eye movements are driven by retinal motion signals. These retinal motion signals are converted into motor commands that obey Listing's law (i.e., no accumulation of ocular torsion). The fact that smooth pursuit follows Listing's law is often taken as evidence that no explicit reference frame transformation between the retinal velocity input and the head-centered motor command is required. Such eye-position-dependent reference frame transformations between eye- and head-centered coordinates have been well-described for saccades to static targets. Here we suggest that such an eye (and head)-position-dependent reference frame transformation is also required for target motion (i.e., velocity) driving smooth pursuit eye movements. Therefore we tested smooth pursuit initiation under different three-dimensional eye positions and compared human performance to model simulations. We specifically tested if the ocular rotation axis changed with vertical eye position, if the misalignment of the spatial and retinal axes during oblique fixations was taken into account, and if ocular torsion (due to head roll) was compensated for. If no eye-position-dependent velocity transformation was used, the pursuit initiation should follow the retinal direction, independently of eye position; in contrast, a correct visuomotor velocity transformation would result in spatially correct pursuit initiation. Overall subjects accounted for all three components of the visuomotor velocity transformation, but we did observe differences in the compensatory gains between individual subjects. We concluded that the brain does perform a visuomotor velocity transformation but that this transformation was prone to noise and inaccuracies of the internal model.
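
For intuition, the core of such a velocity transformation can be written as a rotation of the eye-centered velocity vector by the current 3-D eye orientation. The sketch below is a simplification under assumed axis conventions (a Fick-like rotation sequence), not the authors' model:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about the x (torsional), y (vertical), or
    z (horizontal) axis; angle in radians."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def retinal_to_head(v_retinal, torsion, vertical, horizontal):
    """Rotate an eye-centered (retinal) velocity vector into head-centered
    coordinates given the 3-D eye orientation. A controller lacking this
    transformation would drive pursuit along v_retinal regardless of eye
    position, which is the alternative the experiment tests."""
    r_eye = rot("z", horizontal) @ rot("y", vertical) @ rot("x", torsion)
    return r_eye @ v_retinal
```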


2019 ◽  
Vol 5 (1) ◽  
pp. 223-246 ◽  
Author(s):  
Eileen Kowler ◽  
Jason F. Rubinstein ◽  
Elio M. Santos ◽  
Jie Wang

Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.


2018 ◽  
Author(s):  
Didem Korkmaz Hacialihafiz ◽  
Andreas Bartels

Creating a stable perception of the world during pursuit eye movements is one of the everyday tasks of the visual system. Some motion-responsive regions have been shown to distinguish motion in the external world from that generated by eye movements. In most circumstances, however, perceptual stability is consistently related to content: the surrounding scene is typically stable. Yet no prior study has examined to what extent motion-responsive regions are modulated by scene content, and whether there is an interaction between content and motion response. In the present study we used a factorial design that has previously been shown to reveal regional involvement in integrating efference copies of eye movements with retinal motion to mediate perceptual stability and encode real-world motion. We then added scene content as a third factor, which allowed us to examine to what extent real-motion, retinal-motion, and static responses were modulated by meaningful scenes versus their Fourier-scrambled counterparts. We found that motion responses in human motion-responsive regions V3A, V6, V5+/MT+ and the cingulate sulcus visual area (CSv) were all modulated by scene content. Depending on the region, these motion-content interactions differentially depended on whether motion was self-induced or not. V3A was the only motion-responsive region that also responded to still scenes. Our results suggest that, contrary to the two-pathway hypothesis, scene responses are not confined to ventral regions but can also be found in dorsal areas.


2005 ◽  
Vol 17 (7) ◽  
pp. 1011-1017 ◽  
Author(s):  
A. Z. Zivotofsky ◽  
M. E. Goldberg ◽  
K. D. Powell

The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make.


2019 ◽  
Author(s):  
Julia A. Gillard ◽  
Karin Petrini ◽  
Katie Noble ◽  
Jesus A. Rodriguez Perez ◽  
Frank E. Pollick

Previous research using reverse correlation to explore the relationship between brain activity and presented image information found that fusiform face area (FFA) activity could be related to the appearance of faces during free viewing of the Hollywood movie “The Good, the Bad, and the Ugly” (Hasson et al., 2004). We applied this approach to the naturalistic viewing of unedited footage of city-centre closed-circuit television (CCTV) surveillance. Two 300-second videos were used, one containing prosocial activities and the other antisocial activities. Brain activity (fMRI) and eye movements were recorded while fifteen expert CCTV operators, each with a minimum of 6 months of CCTV surveillance experience, and an age- and gender-matched control group of fifteen novice viewers watched the videos. Independent scans functionally localized FFA and posterior superior temporal sulcus (pSTS) activity using faces/houses and intact/scrambled point-light biological motion displays, respectively. Reverse correlation revealed peaks in FFA and pSTS brain activity corresponding to expert and novice eye movements directed towards faces and biological motion across both videos. In contrast, troughs in activation corresponded to camera-induced motion when a clear view of visual targets was temporarily unavailable. Our findings, validated by the eye movement data, indicate that the predicted modulation of brain activity occurs as a result of salient features of faces and biological motion embedded within the naturalistic stimuli. The examination of expertise revealed that in both pSTS and FFA the novices had significantly more activated timeframes than the experienced observers for the prosocial video; no such difference was found for the antisocial video. The modulation of brain activity, as well as the effect of expertise, gives novel insight into the underlying visual processes in an applied real-life task.
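
A minimal sketch of the reverse-correlation logic, with hypothetical names and parameters (a fixed hemodynamic delay stands in for proper HRF modelling):

```python
import numpy as np

def peak_feature_rate(bold, feature_on, tr=2.0, hrf_delay=4.0,
                      z_thresh=1.0):
    """Reverse correlation in the spirit of Hasson et al. (2004): find
    peak timeframes in an ROI's z-scored BOLD time course and return how
    often a stimulus feature (e.g., a visible face) was on screen
    `hrf_delay` seconds earlier.

    bold       : 1-D BOLD time course for one ROI (e.g., FFA), one value
                 per TR (`tr` seconds per volume).
    feature_on : boolean array of the same length, volume-by-volume
                 feature labels derived from the video.
    """
    z = (bold - bold.mean()) / bold.std()
    shift = int(round(hrf_delay / tr))       # compensate hemodynamic lag
    peaks = np.flatnonzero(z > z_thresh)
    peaks = peaks[peaks >= shift]
    return feature_on[peaks - shift].mean()  # feature rate at peaks
```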

