The perceptual and cortical consequences of adaptation to smooth pursuit: An MEG study of the extra-retinal motion aftereffect

2011 ◽  
Vol 11 (11) ◽  
pp. 531-531
Author(s):  
B. Dunkley ◽  
T. Freeman ◽  
S. Muthukumaraswamy ◽  
K. Singh
1999 ◽  
Vol 88 (3) ◽  
pp. 209-219 ◽  
Author(s):  
Gunvant K. Thaker ◽  
David E. Ross ◽  
Robert W. Buchanan ◽  
Helene M. Adami ◽  
Deborah R. Medoff

2001 ◽  
Vol 13 (1) ◽  
pp. 102-120 ◽  
Author(s):  
Christopher Pack ◽  
Stephen Grossberg ◽  
Ennio Mingolla

Smooth pursuit eye movements (SPEMs) are eye rotations that are used to maintain fixation on a moving target. Such rotations complicate the interpretation of the retinal image, because they nullify the retinal motion of the target, while generating retinal motion of stationary objects in the background. This poses a problem for the oculomotor system, which must track the stabilized target image while suppressing the optokinetic reflex, which would move the eye in the direction of the retinal background motion (opposite to the direction in which the target is moving). Similarly, the perceptual system must estimate the actual direction and speed of moving objects in spite of the confounding effects of the eye rotation. This paper proposes a neural model to account for the ability of primates to accomplish these tasks. The model simulates the neurophysiological properties of cell types found in the superior temporal sulcus of the macaque monkey, specifically the medial superior temporal (MST) region. These cells process signals related to target motion and background motion, and they receive an efference copy of eye velocity during pursuit movements. The model focuses on the interactions between cells in the ventral and dorsal subdivisions of MST, which are hypothesized to process target velocity and background motion, respectively. The model shows how these signals can be combined to explain behavioral data about pursuit maintenance and perceptual data from human studies, including the Aubert-Fleischl phenomenon and the Filehne Illusion, thereby clarifying the functional significance of neurophysiological data about these MST cell properties. It is suggested that the connectivity used in the model may represent a general strategy used by the brain in analyzing the visual world.
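The two perceptual effects named above are often summarized by a simple linear combination of retinal motion and an extra-retinal (efference-copy) estimate of eye velocity, with the extra-retinal gain falling short of 1. The sketch below illustrates that textbook account, not the paper's neural model; the function name and the gain value are illustrative assumptions.

```python
def perceived_velocity(retinal_v, eye_v, gain=0.8):
    """Head-centred percept = retinal motion + gain-scaled efference
    copy of eye velocity. A gain below 1 means the extra-retinal
    signal underestimates the actual eye movement."""
    return retinal_v + gain * eye_v

# Pursuing a target at 10 deg/s across a stationary background:
# the background sweeps the retina at -10 deg/s.
filehne = perceived_velocity(-10.0, 10.0)
# -2.0 deg/s: the stationary background appears to drift (Filehne Illusion).

# The pursued target itself is roughly stabilized (zero retinal slip).
aubert_fleischl = perceived_velocity(0.0, 10.0)
# 8.0 deg/s: the pursued target appears slower than it is (Aubert-Fleischl).
```

The same two lines of arithmetic reproduce both illusions from a single underestimating gain, which is why this linear scheme is a common benchmark for richer models such as the MST circuit described here.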


2015 ◽  
Vol 113 (5) ◽  
pp. 1377-1399 ◽  
Author(s):  
T. Scott Murdison ◽  
Guillaume Leclercq ◽  
Philippe Lefèvre ◽  
Gunnar Blohm

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
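Eye- and head-dependent gain modulation of the kind the abstract invokes is commonly modeled as a unit whose motion tuning is multiplicatively scaled by posture. The following single-unit sketch shows that basic mechanism only; the function name, the cosine tuning, and the linear gain slope are illustrative assumptions, not parameters from the published network.

```python
import numpy as np

def hidden_unit(retinal_dir_deg, eye_pos_deg,
                pref_dir=0.0, gain_slope=0.01):
    """One gain-modulated hidden unit: cosine tuning for retinal
    motion direction, multiplicatively scaled by a linear function
    of eye position (a 'gain field'). A downstream layer summing
    many such units can read out a posture-dependent rotation of
    the retinal velocity signal."""
    tuning = np.cos(np.deg2rad(retinal_dir_deg - pref_dir))
    gain = 1.0 + gain_slope * eye_pos_deg
    return gain * tuning

# Same retinal motion, two eye positions: the response scales
# with posture even though the visual input is unchanged.
r_centered = hidden_unit(0.0, 0.0)    # gain = 1.0
r_eccentric = hidden_unit(0.0, 50.0)  # gain = 1.5
```

Because the posture signal enters multiplicatively rather than additively, the unit's preferred direction does not shift with eye position, while its output amplitude does; this is the signature the simulated recordings in such models are designed to detect.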


2003 ◽  
Vol 3 (11) ◽  
pp. 11 ◽  
Author(s):  
Tom C. A. Freeman ◽  
Jane H. Sumnall ◽  
Robert J. Snowden

Perception ◽  
1993 ◽  
Vol 22 (11) ◽  
pp. 1365-1380 ◽  
Author(s):  
Nicholas J Wade ◽  
Michael T Swanston ◽  
Charles M M de Weert

A brief history of quantitative assessments of interocular transfer (IOT) of the motion aftereffect (MAE) is presented. Recent research indicates that the MAE occurs as a consequence of adapting detectors for relative rather than retinal motion. When gratings above and below a stationary, fixated grating are moved in an otherwise dark field the central, retinally stationary grating appears to move in the opposite direction; when tested with stationary gratings an MAE is almost entirely confined to the central grating. The IOT of such an MAE was measured in experiment 1: the display was presented to one eye with a black field in the other. The IOT was about 30% of the monocular MAE. Similar values were found in experiment 2, in which the contralateral eye received an equivalent central stationary grating during adaptation and test. The dichoptic interaction of the processes involved in the MAE was examined by presenting the central gratings to both eyes and a single flanking grating above in one eye and below in the other (experiment 3). The MAE was tested with either the same or the contralateral pairing. Oppositely directed MAEs were found for the central and flanking gratings, but they were confined mainly to the conditions in which the configurations presented during adaptation were present in the same eyes during test. In experiment 4, the surround MAEs were compared after adaptation with two moving gratings in one eye or with a similar dichoptic configuration, and they were of similar duration. In a final experiment the MAE was tested either monocularly or binocularly after alternating adaptation of the left and right eyes and was found to be of the same duration. It is concluded that the MAE is a consequence of adapting relational-motion detectors, which are either monocular or of the binocular OR class.


2019 ◽  
Author(s):  
Joonyeol Lee ◽  
Woojae Jeong ◽  
Seolmin Kim ◽  
Yee-Joon Kim

Visually-guided smooth pursuit eye movements are composed of initial open-loop and later steady-state periods. Feedforward sensory information dominates the motor behavior during the open-loop pursuit, and a more complex feedback loop regulates the steady-state pursuit. To understand the neural representations of motion direction during open-loop and steady-state smooth pursuits, we recorded electroencephalography (EEG) responses from human observers while they tracked random dot kinematograms as pursuit targets. We estimated population direction tuning curves from multivariate EEG activity using an inverted encoding model. We found significant direction tuning curves as early as 20 ms from motion onset. Direction tuning responses were generalized to later times during the open-loop smooth pursuit, but they became more dynamic during the later steady-state pursuit. The encoding quality of retinal motion direction information estimated from the early direction tuning curves was predictive of trial-by-trial variation in initial pursuit directions. These results suggest that the movement directions of open-loop smooth pursuit are guided by the representation of the retinal motion present in the multivariate EEG activity.
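An inverted encoding model of the general kind used here has two least-squares steps: estimate a channels-to-sensors weight matrix from training data, then invert it to recover channel responses from held-out data. The sketch below works on simulated sensor patterns standing in for real EEG; the channel count, basis shape, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sensors, n_trials = 8, 32, 160

# Idealized direction channels: half-wave-rectified cosines raised
# to a power, centered on equally spaced preferred directions.
prefs = np.linspace(0, 360, n_channels, endpoint=False)
def channel_responses(directions_deg):
    d = np.deg2rad(directions_deg[:, None] - prefs[None, :])
    return np.maximum(np.cos(d), 0.0) ** 5   # trials x channels

# Simulated training set: sensor patterns are a linear mixture of
# channel responses plus noise (a stand-in for multivariate EEG).
dirs = rng.uniform(0, 360, n_trials)
C = channel_responses(dirs)
W_true = rng.normal(size=(n_channels, n_sensors))
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_sensors))

# Encoding step: estimate the channel-to-sensor weights W.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)

# Inversion step: reconstruct channel responses for a new trial
# whose true motion direction is 90 deg.
B_test = channel_responses(np.array([90.0])) @ W_true
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
peak = prefs[np.argmax(C_hat.ravel())]   # reconstructed tuning peaks near 90
```

The reconstructed channel-response profile plays the role of the population direction tuning curve; in the study, the sharpness of such profiles early after motion onset is what predicted trial-by-trial initial pursuit direction.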


2010 ◽  
Vol 104 (4) ◽  
pp. 2103-2115 ◽  
Author(s):  
Gunnar Blohm ◽  
Philippe Lefèvre

Smooth pursuit eye movements are driven by retinal motion signals. These retinal motion signals are converted into motor commands that obey Listing's law (i.e., no accumulation of ocular torsion). The fact that smooth pursuit follows Listing's law is often taken as evidence that no explicit reference frame transformation between the retinal velocity input and the head-centered motor command is required. Such eye-position-dependent reference frame transformations between eye- and head-centered coordinates have been well-described for saccades to static targets. Here we suggest that such an eye (and head)-position-dependent reference frame transformation is also required for target motion (i.e., velocity) driving smooth pursuit eye movements. Therefore we tested smooth pursuit initiation under different three-dimensional eye positions and compared human performance to model simulations. We specifically tested if the ocular rotation axis changed with vertical eye position, if the misalignment of the spatial and retinal axes during oblique fixations was taken into account, and if ocular torsion (due to head roll) was compensated for. If no eye-position-dependent velocity transformation was used, the pursuit initiation should follow the retinal direction, independently of eye position; in contrast, a correct visuomotor velocity transformation would result in spatially correct pursuit initiation. Overall subjects accounted for all three components of the visuomotor velocity transformation, but we did observe differences in the compensatory gains between individual subjects. We concluded that the brain does perform a visuomotor velocity transformation but that this transformation was prone to noise and inaccuracies of the internal model.
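The torsion component of the velocity transformation tested here can be written as a rotation of the 2-D retinal velocity by the eye's torsional angle, with a compensation gain below 1 capturing the incomplete compensation observed across subjects. This is a minimal sketch of that geometric step only; the function name, sign convention, and gain value are illustrative assumptions.

```python
import numpy as np

def spatial_from_retinal(retinal_v, torsion_deg, gain=1.0):
    """Rotate a 2-D retinal velocity (deg/s) into spatial coordinates
    to undo ocular torsion of `torsion_deg` (e.g., counterroll during
    head roll). gain < 1 models partial compensation."""
    t = np.deg2rad(gain * torsion_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ np.asarray(retinal_v, dtype=float)

# Rightward retinal motion viewed with 10 deg of ocular torsion:
v_full = spatial_from_retinal([10.0, 0.0], 10.0)            # full compensation
v_partial = spatial_from_retinal([10.0, 0.0], 10.0, gain=0.6)
# With gain = 0, pursuit initiation would simply follow the retinal
# direction regardless of torsion -- the null hypothesis the study rejects.
```

Rotation preserves speed, so a correct transformation changes only the initiation direction; comparing that predicted direction with measured pursuit onset is how the compensation gain of each subject can be estimated.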


2019 ◽  
Vol 5 (1) ◽  
pp. 223-246 ◽  
Author(s):  
Eileen Kowler ◽  
Jason F. Rubinstein ◽  
Elio M. Santos ◽  
Jie Wang

Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.


2016 ◽  
Vol 115 (3) ◽  
pp. 1220-1227 ◽  
Author(s):  
Stephen J. Heinen ◽  
Elena Potapchuk ◽  
Scott N. J. Watamaniuk

Images that move rapidly across the retina of the human eye blur because the retina has sluggish temporal dynamics. Voluntary smooth pursuit eye movements are modeled as matching object velocity to minimize retinal motion and prevent retinal blurring. However, “catch-up” saccades that are ubiquitous during pursuit interrupt it and disrupt clear vision. But catch-up saccades may not be a common feature of ocular pursuit, because their existence has been documented with a small moving spot, the classic pursuit stimulus, which is a weak motion stimulus that may poorly emulate larger pursuit objects. We found that spot pursuit does not generalize to that of larger objects. Observers pursued a spot or a larger virtual object with or without a superimposed spot target. Single-spot targets produced lower pursuit acceleration than larger objects. Critically, more saccadic intrusions occurred when stimuli had a central dot, even when position and velocity errors were equated, suggesting that catch-up saccades result from pursuing a single, small object or a feature on a large one. To determine what differentiates a large object from a small one, we progressively shrank the featureless virtual object and found that catch-up saccade frequency was highest when it fit in the fovea. The results suggest that pursuit of a small target or an object feature recruits a saccade mechanism that does not compensate for a weak motion signal; rather, the target compels foveation. Furthermore, catch-up saccades are likely generated by neural circuitry typically used to foveate small objects or features.

