Neurophysiology of visually guided eye movements: critical review and alternative viewpoint

2018 · Vol 120 (6) · pp. 3234–3245
Author(s): Laurent Goffart, Clara Bourrelly, Jean-Charles Quinton

In this article, we critically examine the assumptions that led measurements of the movement of a rigid body in the physical world to be equated with parameters encoded in brain activity. In many neurophysiological studies of goal-directed eye movements, the kinematics of the eyes or of a targeted object have indeed been treated as equivalent to the associated neuronal processes. This approach recalls the reduction encountered in projective geometry when a multidimensional object is projected onto a one-dimensional segment. Measuring a movement consists of generating a series of numerical values from which magnitudes such as amplitude, duration, and their ratio (speed) are calculated. By contrast, generating a movement consists of activating multiple parallel channels in the brain. Yet, for many years, kinematic parameters were assumed to be encoded in brain activity, even though the neuronal image of most physical events is distributed both spatially and temporally. After explaining why the "neuronalization" of such parameters is questionable for elucidating the neural processes underlying the execution of saccadic and pursuit eye movements, we propose an alternative to the framework that has dominated the last five decades: a viewpoint in which these processes follow principles defined by intrinsic properties of the brain (population coding, multiplicity of transmission delays, synchrony of firing, connectivity). We propose reconsidering the time course of saccadic and pursuit eye movements as the restoration of equilibria between neural populations that exert opposing motor tendencies.
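
As a rough illustration of the closing proposal (a toy sketch, not the authors' model), the idea of movement as restoration of equilibrium can be simulated with two populations exerting opposing motor tendencies; the time constant, units, and dynamics below are all assumptions made for illustration.

```python
import numpy as np

# Toy push-pull model (illustrative assumption, not the authors' model):
# two populations exert opposing motor tendencies; eye velocity is driven
# by their activity imbalance, and the movement ends when balance is restored.
dt = 0.001                      # simulation step, s
t = np.arange(0, 0.3, dt)       # 300 ms of simulated time
tau = 0.05                      # relaxation time constant, s (assumed)

# A target step transiently unbalances the populations in favor of one side.
imbalance = np.zeros_like(t)
imbalance[0] = 1.0              # initial disequilibrium (arbitrary units)
for i in range(1, len(t)):
    # the imbalance decays as the opposing population catches up
    imbalance[i] = imbalance[i - 1] * (1 - dt / tau)

eye_velocity = imbalance        # velocity ~ population imbalance (assumption)
eye_position = np.cumsum(eye_velocity) * dt

print(f"final imbalance (near equilibrium): {imbalance[-1]:.4f}")
print(f"total displacement: {eye_position[-1]:.4f} (arbitrary units)")
```

On this view the movement's amplitude and duration are not encoded anywhere; they fall out of how quickly equilibrium is restored.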

2021 · Vol 2021 (2)
Author(s): Shira Baror, Biyu J. He

Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently of specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative for understanding our conscious visual experience in daily life. In this article we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structure: coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. The spontaneous perception framework proposed herein integrates components of human perception and cognition that have traditionally been studied in isolation, and opens the door to understanding how visual perception unfolds in its most natural context.


2021 · pp. 2150048
Author(s): Hamidreza Namazi, Avinash Menon, Ondrej Krejcar

Our eyes continuously explore our surrounding environment. The brain controls the eyes' activities through the nervous system, so analyzing the correlation between the activities of the eyes and the brain is an important area of research in vision science. This paper evaluates the coupling between the reactions of the eyes and the brain in response to different moving visual stimuli. Since both eye movements and EEG signals (as an indicator of brain activity) carry information, we employed Shannon entropy to decode the coupling between them. Ten subjects looked at four moving objects (dynamic visual stimuli) with different information contents while we recorded their EEG signals and eye movements. The results demonstrated that the changes in the information contents of eye movements and EEG signals are strongly correlated ([Formula: see text]), indicating a strong correlation between brain and eye activities. This analysis could be extended to evaluate the correlation between the activities of other organs and the brain.
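
The abstract does not specify the entropy estimator, but a minimal sketch of the general approach — histogram-based Shannon entropy per recording, then a correlation of entropy changes across stimuli — might look like the following; the bin count, synthetic signals, and coupling strength are assumptions, not the authors' data or pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def shannon_entropy(signal, bins=32):
    """Histogram-based Shannon entropy (bits) of a 1-D signal."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)

# Synthetic stand-ins for four stimulus conditions (assumed data).
eye_entropies, eeg_entropies = [], []
for spread in (0.5, 1.0, 1.5, 2.0):    # stimuli of increasing variability
    eye = rng.normal(0, spread, 5000)              # e.g., gaze velocity trace
    eeg = rng.normal(0, spread, 5000) + 0.1 * eye  # loosely coupled EEG trace
    eye_entropies.append(shannon_entropy(eye))
    eeg_entropies.append(shannon_entropy(eeg))

r, p = pearsonr(eye_entropies, eeg_entropies)
print(f"correlation between entropy changes: r = {r:.2f}, p = {p:.3f}")
```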


2019 · Vol 116 (6) · pp. 2027–2032
Author(s): Jasper H. Fabius, Alessio Fracasso, Tanja C. W. Nijboer, Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it lies outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms used previously are suboptimal for measuring the speed of spatiotopic updating. In this study, we employed a fast gaze-contingent paradigm with high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically made in natural viewing.


2007 · Vol 27 (11) · pp. 2987–2998
Author(s): L. C. Osborne, S. S. Hohl, W. Bialek, S. G. Lisberger

2015 · Vol 113 (5) · pp. 1377–1399
Author(s): T. Scott Murdison, Guillaume Leclercq, Philippe Lefèvre, Gunnar Blohm

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli for the same target motion, yet must give rise to identical smooth pursuit trajectories. Because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
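
As a loose sketch of the kind of architecture described (not the authors' network), a small feedforward model can combine 2D retinal motion with 3D eye and head signals through a hidden layer whose mixed tuning supports gain modulation; the layer sizes, activation function, input layout, and weights below are all assumptions, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feedforward network (assumed architecture, not the authors' model):
# inputs are 2-D retinal velocity plus 3-D eye and head orientation and
# velocity signals; hidden units mix these inputs, which is one way
# eye/head-dependent gain modulation can arise; the output is a 3-D
# spatially correct pursuit command.
n_in = 2 + 3 + 3 + 3 + 3            # retinal + eye/head orientation/velocity
n_hidden, n_out = 40, 3
W1 = rng.normal(0, 0.1, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)        # hidden layer: mixed, gain-modulated tuning
    return W2 @ h + b2              # linear readout of the 3-D pursuit command

# One illustrative input: rightward retinal slip with the head rolled.
x = np.concatenate([
    [5.0, 0.0],                     # 2-D retinal velocity (deg/s)
    [0.0, 0.0, 10.0],               # 3-D eye orientation (deg)
    [0.0, 0.0, 0.0],                # 3-D eye velocity (deg/s)
    [15.0, 0.0, 0.0],               # 3-D head orientation (roll, deg)
    [0.0, 0.0, 0.0],                # 3-D head velocity (deg/s)
])
print("untrained 3-D pursuit command:", forward(x))
```

After training such a network on spatially correct targets, the reference frame of each layer can be probed the way the abstract describes: by simulating recordings and microstimulation at each stage.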


2001 · Vol 85 (5) · pp. 2245–2266
Author(s): A. Takemura, Y. Inoue, K. Kawano, C. Quaia, F. A. Miles

Single-unit discharges were recorded in the medial superior temporal area (MST) of five behaving monkeys. Brief (230-ms) horizontal disparity steps were applied to large correlated or anticorrelated random-dot patterns (in which the dots had the same or opposite contrast, respectively, at the two eyes), eliciting vergence eye movements at short latencies [65.8 ± 4.5 (SD) ms]. Disparity tuning curves, describing the dependence of the initial vergence responses (measured over the period 50–110 ms after the step) on the magnitude of the steps, resembled the derivative of a Gaussian, the curves obtained with correlated and anticorrelated patterns having opposite sign. Cells with disparity-related activity were isolated using correlated stimuli, and disparity tuning curves describing the dependence of these initial neuronal responses (measured over the period 40–100 ms) on the magnitude of the disparity step were constructed (n = 102 cells). Using objective criteria and the fuzzy c-means clustering algorithm, disparity tuning curves were sorted into four groups based on their shapes. A post hoc comparison indicated that these four groups had features in common with four of the classes of disparity-selective neurons in striate cortex, but three of the four groups appeared to be part of a continuum. Most of the data were obtained from two monkeys, and when the disparity tuning curves of all the individual neurons recorded from either monkey were summed together, they fitted the disparity tuning curve for that same animal's vergence responses remarkably well (r² = 0.93 and 0.98). Fifty-six of the neurons recorded from these two monkeys were also tested with anticorrelated patterns, and all showed significant modulation of their activity (P < 0.005, one-way ANOVA). Further, when all of the disparity tuning curves obtained with these patterns from either monkey were summed together, they too fitted the disparity tuning curve for that same animal's vergence responses very well (r² = 0.95 and 0.96). Indeed, the summed activity even reproduced idiosyncratic differences in the vergence responses of the two monkeys. Based on these and other observations on the temporal coding of events, we hypothesize that the magnitude, direction, and time course of the initial vergence velocity responses associated with disparity steps applied to large patterns are all encoded in the summed activity of the disparity-sensitive cells in MST. Latency data suggest that this activity in MST occurs early enough to play an active role in the generation of vergence eye movements at short latencies.
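
A minimal sketch of the population-sum hypothesis described above: simulate a population of cells with derivative-of-Gaussian disparity tuning (the form named in the abstract) and sum their tuning curves. The cell count matches the abstract's n = 102, but the parameter distributions and units are assumptions made for illustration.

```python
import numpy as np

def dgauss(d, amp, center, sigma):
    """Derivative-of-Gaussian disparity tuning curve (assumed parameterization)."""
    z = (d - center) / sigma
    return amp * -z * np.exp(-0.5 * z**2)

rng = np.random.default_rng(2)
disparity = np.linspace(-3, 3, 121)            # disparity steps (deg)

# Simulated population of disparity-selective MST-like cells (assumption).
cells = [dgauss(disparity,
                amp=rng.uniform(0.5, 1.5),
                center=rng.normal(0, 0.5),
                sigma=rng.uniform(0.5, 1.5))
         for _ in range(102)]

# Hypothesis from the abstract: the summed activity of the disparity-sensitive
# cells predicts the disparity tuning of the initial vergence response.
population_sum = np.sum(cells, axis=0)
peak = disparity[np.argmax(np.abs(population_sum))]
print(f"summed tuning peaks near disparity = {peak:.2f} deg")
```

Comparing such a population sum against a measured vergence tuning curve (e.g., via r²) is the essence of the fit the authors report.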


1989 · Vol 62 (1) · pp. 31–47
Author(s): H. Komatsu, R. H. Wurtz

1. Many cells in the superior temporal sulcus (STS) of the monkey that represent the foveal region of the visual field discharge during pursuit eye movements. Damage to these areas produces a deficit in the maintenance of pursuit eye movements when the target moves toward the side of the brain with the lesion. In the present experiments, we electrically stimulated these areas to better localize and understand the mechanisms underlying this directional pursuit deficit.
2. Monkeys were trained to pursue a moving target in a step-ramp task in which the target first stepped to an eccentric position and then moved smoothly across the screen. Trains of stimulation were applied after the monkey had begun to pursue the target in order to study the effects of stimulation on the maintenance of pursuit.
3. Stimulation during pursuit frequently produced eye acceleration toward the side of the brain stimulated. Eye speed increased during pursuit toward the stimulated side and decreased during pursuit away from it. This increase in velocity toward the side of the brain where stimulation presumably activated cells is consistent with the decrease in pursuit velocity toward that side after cells were removed by chemical lesions.
4. The increase or decrease in pursuit speed following stimulation produced a slip of the target on the retina. The pursuit system seemed to be insensitive to this slip during the period of stimulation, however, since the effect of stimulation during pursuit of a stabilized image (open-loop condition) was similar to that resulting from stimulation under normal pursuit conditions (closed-loop). This insensitivity to visual motion during stimulation suggests that the stimulation substitutes for that visual input.
5. The separation of eye and target position that resulted from stimulation did produce catch-up saccades. This provides added evidence that alteration of the middle temporal area (MT) and medial superior temporal area (MST) modifies visual-motion but not visual-position information.
6. Stimulation that produced eye acceleration during pursuit produced only a slight effect during fixation of a stationary target. The effectiveness of the stimulation also increased as the speed of pursuit increased between 5 and 25 degrees/s. These observations, which show that pursuit velocity altered the effect of stimulation, suggest that the stimulation acted on visual motion processing before information about the pursuit movement itself is incorporated. Since this stimulation produces directional pursuit effects, we hypothesize that the directional bias for pursuit originates in the visual signal conveyed to the pursuit system. [Abstract truncated at 400 words.]


2020
Author(s): Xiaoli Zhang, Julie D. Golomb

We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired fMRI data during a combined saccade and covert attention task. On eyes-fixed trials, participants either held attention at the same initial location ("hold attention") or shifted attention to another location midway through the trial ("shift attention"). On eyes-move trials, participants made a saccade midway through the trial while maintaining attention in one of two reference frames: the "retinotopic attention" condition involved holding attention at a fixation-relative location while shifting to a different screen-centered location, whereas the "spatiotopic attention" condition involved holding attention on the same screen-centered location while shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention) and used multivoxel pattern time course analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where the spatiotopic condition was relatively more similar to shifting and the retinotopic condition more similar to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention "hold" and "shift" signals across different regions.

Significance Statement: To our knowledge, this study is the first attempt to directly compare human brain activity patterns of covert attention (to a peripheral spatial location) across saccades and during fixation. We applied fMRI multivoxel pattern time course analyses to capture the dynamic changes of activity patterns, with specific focus on the critical timepoints related to attention shifts and saccades. Our findings indicate that both retinotopic and spatiotopic attention across saccades produce patterns of activation similar to "shifting" attention in the brain, even though both tasks could be interpreted as "holding" attention by the participant. The results offer a novel perspective on how the brain processes and updates spatial information under different circumstances to fit the needs of various cognitive tasks.
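
The abstract's core comparison — whether a condition's voxel pattern at each timepoint looks more like a "shift" or a "hold" template — can be sketched as a correlation-based pattern analysis; the array shapes, synthetic data, event timing, and similarity metric below are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_timepoints = 200, 12    # assumed ROI size and trial length (TRs)

# Synthetic condition templates (stand-ins for estimated activity patterns).
hold_template = rng.normal(size=n_voxels)
shift_template = rng.normal(size=n_voxels)

# Synthetic timecourse for one condition, drifting toward the shift pattern
# mid-trial (mimicking a saccade/attention-shift event at timepoint 6).
condition = np.array([
    hold_template + rng.normal(0, 1.0, n_voxels) if t < 6
    else shift_template + rng.normal(0, 1.0, n_voxels)
    for t in range(n_timepoints)
])

def pattern_similarity(patterns, template):
    """Pearson correlation of each timepoint's voxel pattern with a template."""
    return np.array([np.corrcoef(p, template)[0, 1] for p in patterns])

sim_to_shift = pattern_similarity(condition, shift_template)
sim_to_hold = pattern_similarity(condition, hold_template)
for t in range(n_timepoints):
    print(f"t={t:2d}  r(shift)={sim_to_shift[t]: .2f}  r(hold)={sim_to_hold[t]: .2f}")
```

Tracking these two similarity traces over the trial is what lets one say a condition is represented "more like shifting than holding" at the critical timepoints.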

