Closed loop motor-sensory dynamics in human vision

2019
Author(s):
Liron Gruber
Ehud Ahissar

Abstract
Vision is obtained through continuous motion of the eyes. Kinematic analysis of eye motion during any visual or ocular task typically reveals two components: saccades, which quickly replace the visual content in the retinal fovea, and drifts, which slowly scan the image after each saccade. While the saccadic exchange of regions of interest (ROIs) is commonly considered part of motor-sensory closed loops, drifts are typically assumed to function in an open-loop manner, that is, independently of the concurrent visual input. Accordingly, visual perception is assumed to be based on a sequence of open-loop processes, each initiated by a saccade-triggered retinal snapshot. Here we directly challenged this assumption by testing the dependency of drift kinematics on concurrent visual inputs using a real-time gaze-contingent display. Our results demonstrate a dependency of the drift trajectory on the concurrent visual input, convergence of drift speed to condition-specific values, and maintenance of selected drift-related motor-sensory controlled variables, all strongly indicative of drifts being part of a closed-loop brain-world process, and thus suggesting that vision is inherently a closed-loop process.

Author summary
Our eyes do not function like cameras; it has long been known that we actively scan our visual environment in order to see. Moreover, it is commonly accepted that our fast eye movements, saccades, are controlled by the brain and are affected by the sensory input. However, our slow eye movements, the ocular drifts, are often ignored when visual acquisition is analyzed. Accordingly, visual processing is typically assumed to be based on computations performed on saccade-triggered snapshots of the retinal state. Our work strongly challenges this model and provides significant evidence for an alternative, cybernetic model.
We show that the dynamics of the ocular drifts are incompatible with, and cannot be explained by, open-loop visual acquisition. Instead, our results suggest that visual acquisition is part of a closed-loop process, one that dynamically and continuously links the brain to its environment.
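The contrast between the two accounts can be made concrete with a toy simulation. This is a didactic sketch, not the authors' model: a simple proportional feedback rule nudges drift speed toward a setpoint derived from the concurrent visual input, so speed converges to a condition-specific value, as the abstract reports, rather than following a fixed open-loop program. The setpoints and gain are arbitrary illustrative assumptions.

```python
# Toy illustration of closed-loop drift-speed control.
# Not the paper's model; setpoints and gain are invented for illustration.

def simulate_drift_speed(condition_setpoint, gain=0.2, steps=60, v0=0.0):
    """Proportional feedback: each step, the 'brain' senses the current
    drift speed and corrects it toward the input-dependent setpoint."""
    v = v0
    history = [v]
    for _ in range(steps):
        error = condition_setpoint - v   # mismatch with concurrent visual input
        v += gain * error                # motor correction (closes the loop)
        history.append(v)
    return history

# Two hypothetical visual conditions with different setpoints (deg/s, arbitrary):
coarse = simulate_drift_speed(condition_setpoint=1.5)
fine = simulate_drift_speed(condition_setpoint=0.5)
print(round(coarse[-1], 3), round(fine[-1], 3))  # each converges to its own value
```

An open-loop program would run the same trajectory regardless of the input; here, removing the feedback term (gain = 0) leaves speed stuck at its initial value, which is the distinction the experiment probes.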

2021
Author(s):
Natalia Ladyka-Wojcik
Zhong-Xu Liu
Jennifer D. Ryan

Scene construction is a key component of memory recall, navigation, and future imagining, and relies on the medial temporal lobes (MTL). A parallel body of work suggests that eye movements may enable the imagination and construction of scenes, even in the absence of external visual input. There are vast structural and functional connections between regions of the MTL and those of the oculomotor system. However, the directionality of connections between the MTL and oculomotor control regions, and how it relates to scene construction, has not been studied directly in human neuroimaging. In the current study, we used dynamic causal modeling (DCM) to investigate this relationship at a mechanistic level using a scene construction task in which participants' eye movements were either restricted (fixed-viewing) or unrestricted (free-viewing). By omitting external visual input, and by contrasting free- versus fixed-viewing, the directionality of neural connectivity during scene construction could be determined. Compared with restricted viewing, free viewing during scene construction strengthened top-down connections from the MTL to the frontal eye fields and to lower-level cortical visual processing regions, suppressed bottom-up connections along the visual stream, and enhanced the vividness of the constructed scenes. Taken together, these findings provide novel, non-invasive evidence for the causal architecture between the MTL memory system and the oculomotor system associated with constructing vivid mental representations of scenes.


2008
Vol 05 (02)
pp. 225-246
Author(s):
Joyca Lacroix
Eric Postma
Jaap van den Herik
Jaap Murre

The saccadic selection of relevant visual input for preferential processing allows the efficient use of computational resources. Based on saccadic active human vision, we aim to develop a plausible saccade-based visual cognitive system for a humanoid robot. This paper presents two initial steps toward that objective by extending the saccade-based model of human memory called NIM into a plausible model of natural visual classification. NIM builds feature-vector representations from selected local image samples and uses these to make memory-based decisions. As a first step, we adapt NIM to a straightforward saccade-based model for the classification of natural visual input, called NIM-CLASS, and evaluate the model in a face-classification experiment. As a second step, we approach the interactive nature of human vision by extending NIM-CLASS to NIM-CLASSTD, which adds active top-down saccadic control. We then assess to what extent top-down control enhances classification performance. The results show that incorporating top-down saccadic control benefits classification performance compared with purely bottom-up control, reducing the amount of visual input required for correct classification. We conclude that NIM-CLASSTD may provide a fruitful basis for an active visual cognitive system in a humanoid robot that enables efficient visual processing.
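The mechanism described above, sampling local features at saccade targets and deciding by similarity to stored memory, can be sketched in a few lines. This is not the NIM-CLASS implementation; the toy "images," the distance measure, and the top-down rule (saccade to the location where stored classes disagree most) are illustrative assumptions, but they show why top-down control can cut the input needed for a correct decision.

```python
# Minimal sketch of memory-based classification with optional
# top-down saccadic control (illustrative, not the paper's NIM-CLASS).
import random

MEMORY = {                      # stored class representations (toy 1-D "images")
    "A": [5, 5, 0, 5, 9, 5, 5, 5],
    "B": [5, 5, 9, 5, 0, 5, 5, 5],
}

def next_fixation(visited, top_down):
    """Top-down: saccade to the unvisited location where the stored classes
    disagree most; bottom-up: saccade to a random unvisited location."""
    candidates = [i for i in range(8) if i not in visited]
    if top_down:
        spread = lambda i: (max(v[i] for v in MEMORY.values())
                            - min(v[i] for v in MEMORY.values()))
        return max(candidates, key=spread)
    return random.choice(candidates)

def classify(image, fixations, top_down):
    visited = []
    for _ in range(fixations):
        visited.append(next_fixation(visited, top_down))
    # memory-based decision: nearest stored template over sampled locations
    dist = lambda label: sum(abs(image[i] - MEMORY[label][i]) for i in visited)
    return min(MEMORY, key=dist)

print(classify(MEMORY["A"], fixations=1, top_down=True))  # correct from one sample
```

With top-down control a single fixation at a discriminative location suffices here, whereas random bottom-up fixations may land on uninformative locations and need more samples, mirroring the reduction in required visual input reported in the abstract.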


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements and vision. It will explain the concepts behind human vision sufficiently for the reader to understand later chapters in the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We will first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We will then describe the image registered by the eye, and the types of movements the eye makes in order to perceive the environment as a cogent whole. This chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye movement data can be analysed not only in terms of the environment, or what is looked at, but also in terms of the brain, or subjective cognitive and emotional states. These two aspects broadly define the scope and applicability of eye movements technology in research and in human computer interaction in later sections of the book.


1996
Vol 75 (4)
pp. 1495-1502
Author(s):
K. P. Hoffmann
C. Distler
C. Markner

1. Eye movements were recorded in seven innately esotropic cats during monocular and binocular horizontal optokinetic stimulation, using the search coil technique in five cats and electrooculography in two cats. 2. During closed loop measurements in these strabismic cats, slow phases of optokinetic nystagmus (OKN) were characterized by an overall reduced gain when compared with normal controls. In addition, response gain to monocular nasotemporal stimulation was even more reduced than that to temporonasal stimulation, resulting in an increased asymmetry of closed loop gain. 3. During open loop measurements, eye velocity in strabismic cats was very low at all velocities tested. 4. Differential analysis of the symmetry of OKN revealed that all our strabismic cats had a "good" or more symmetric and a "poor" or more asymmetric eye. In addition, when analyzed separately at individual velocities, the symmetry index of the good eye was fairly constant over the velocity range tested. By contrast, the symmetry index of the poor eye dropped dramatically at higher stimulus velocities. 5. To analyze the relationship of OKN symmetry and cortical physiology, we calculated the ratio between the percentage of neurons driven by one eye in the ipsilateral and the contralateral cortical hemisphere. We found a weak correlation between OKN symmetry and this cortical symmetry index (P < 0.05, analysis of variance). 6. In conclusion, slow eye movements in cats with congenital esotropia are characterized by extremely low gain, especially at higher stimulus velocities. In addition, OKN symmetry during monocular stimulation is decreased. Our data suggest that OKN symmetry is weakly correlated with the proportion of binocular neurons in the visual cortex ipsilateral to the stimulated eye. 
However, OKN characteristics seem to reflect the response properties of neurons in the pretectal nucleus of the optic tract and the dorsal terminal nucleus of the accessory optic system more than those of neurons in the visual cortex.
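The quantities the abstract compares can be expressed compactly. The formulas below are common textbook definitions, not necessarily the ones the authors used: closed-loop gain as slow-phase eye velocity over stimulus velocity, and a symmetry index as the ratio of the weaker to the stronger monocular gain (1 = fully symmetric). The example velocities are invented.

```python
# Hedged sketch of OKN gain and a monocular symmetry index.
# Definitions are one common convention; the paper's may differ.

def okn_gain(slow_phase_eye_velocity, stimulus_velocity):
    """Closed-loop gain: eye velocity relative to stimulus velocity."""
    return slow_phase_eye_velocity / stimulus_velocity

def symmetry_index(gain_temporonasal, gain_nasotemporal):
    """Ratio of the weaker to the stronger direction:
    1.0 = symmetric OKN, values near 0 = strongly asymmetric."""
    return (min(gain_temporonasal, gain_nasotemporal)
            / max(gain_temporonasal, gain_nasotemporal))

# "Good" eye: nearly symmetric; "poor" eye: strongly asymmetric
# (hypothetical velocities in deg/s at a 40 deg/s stimulus):
good = symmetry_index(okn_gain(28, 40), okn_gain(24, 40))
poor = symmetry_index(okn_gain(20, 40), okn_gain(4, 40))
print(round(good, 2), round(poor, 2))  # high vs low symmetry
```

Computed per stimulus velocity, such an index would stay flat for the "good" eye and drop at high velocities for the "poor" eye, which is the pattern described in point 4 of the abstract.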


1994
Vol 78 (3)
pp. 979-985
Author(s):
Tomoka Takeuchi
Akio Miyasita
Maki Inugami
Yuka Sasaki
Kazuhiko Fukuda

During an experiment on nocturnal sleep interruption, we observed a unique case of hallucination without sleep paralysis during the sleep-onset REM period in a normal individual. We documented the polysomnogram recorded during this hallucination. The polysomnogram showed a mixed pattern of Stages REM and W, with muscle-tone inhibition, rapid eye movements (REMs), slow eye movements (SEMs), and abundant alpha EEG trains. The blocking of alpha EEG trains by REMs appeared to reflect visual processing similar to that which occurs during waking. This hallucination was distinct from ordinary sleep-onset mentation in that it included strong emotional components and in that the subject simultaneously experienced both hallucinatory mentation and reality contact. This hallucination may resemble sleep paralysis with regard to its physiological and psychological background, and the discrimination of these two phenomena may depend on the subject's own awareness of muscle-tone inhibition.


1987
Vol 131 (1)
pp. 323-336
Author(s):
Stephen Young
Victoria A. Taylor

1. Polyphemus eye movements were recorded in both pitching and yawing planes, both in a static visual environment and with a sinusoidally moving stimulus. 2. Spontaneous eye movements (average amplitude 1.7°) had different properties in the two planes, with trembling movements predominating in the pitching plane. A contour-sharpening function is proposed for these movements. 3. An attempt to analyse the eye movement response system using a Bode diagram shows a very poor fit to the data, leading to the conclusion that a closed-loop control system is an inappropriate model in this case. 4. The evoked eye movements are most convincingly represented by a model in which the time the stimulus takes to traverse a restricted sensitive zone in the central region of the eye controls the duration of a subsequent constant angular velocity saccade. The direction of the response movement follows the direction of the stimulus. A small-object tracking function is proposed for these movements.
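The model favored in point 4 can be written out directly: the time the stimulus takes to cross the central sensitive zone sets the duration of a constant-angular-velocity saccade in the stimulus direction. The zone width and saccade velocity below are arbitrary illustrative values, not figures from the paper.

```python
# Sketch of the traverse-time saccade model from the abstract.
# Zone width and saccade velocity are invented illustrative parameters.

def evoked_saccade(stimulus_velocity_deg_s, zone_width_deg=2.0,
                   saccade_velocity_deg_s=30.0):
    """Return (duration_s, amplitude_deg, direction) of the evoked saccade:
    duration = time the stimulus needs to cross the sensitive zone,
    amplitude = constant saccade velocity * duration, signed by direction."""
    traverse_time = zone_width_deg / abs(stimulus_velocity_deg_s)
    direction = 1 if stimulus_velocity_deg_s > 0 else -1
    amplitude = saccade_velocity_deg_s * traverse_time * direction
    return traverse_time, amplitude, direction

# A slow stimulus dwells longer in the zone, so it evokes a longer saccade:
print(evoked_saccade(5.0))
print(evoked_saccade(-20.0))
```

Note the contrast with a closed-loop controller: here the response is fully determined once the traverse time is measured, which is why the Bode-diagram (closed-loop) analysis in point 3 fit the data poorly.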


2021
Author(s):
Andrey Chetverikov
Árni Kristjánsson

Prominent theories of perception suggest that the brain builds probabilistic models of the world, assessing the statistics of the visual input to inform this construction. However, the evidence for this idea is often based on simple, impoverished stimuli, and the results have often been dismissed as reflecting mere "summary statistics" of visual inputs. Here we show that the visual system represents probabilistic distributions of complex heterogeneous stimuli. Importantly, we show how these statistical representations are integrated with representations of other features and bound to locations, and can therefore serve as building blocks for object and scene processing. We uncover the organization of these representations at different spatial scales by showing how expectations for incoming features are biased by neighboring locations. We also show that the representations carry not only a bias but also a skew, arguing against accounts positing that probabilistic representations are discarded in favor of simplified summary statistics (e.g., mean and variance). In sum, our results reveal detailed probabilistic encoding of stimulus distributions, representations that are bound with other features and to particular locations.
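The abstract's central contrast, summary statistics versus a fuller distributional code, comes down to which moments of the feature distribution are retained. The sketch below (with arbitrary sample values, not the study's data) computes mean, variance, and standardized skew of a set of feature samples: a symmetric and a lopsided distribution can look alike under mean and variance alone, while the third moment separates them.

```python
# Mean/variance vs. a richer code that also carries skew.
# Sample values are arbitrary illustrative "feature" observations.

def moments(samples):
    """Return (mean, variance, standardized skew) of a sample."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n       # variance
    m3 = sum((x - mean) ** 3 for x in samples) / n       # third central moment
    skew = m3 / m2 ** 1.5                                # standardized skew
    return mean, m2, skew

symmetric = [1, 2, 3, 4, 5]       # e.g. feature values sampled at one location
skewed = [1, 1, 1, 2, 10]         # same feature, lopsided distribution

print(moments(symmetric))   # skew = 0: mean + variance would suffice
print(moments(skewed))      # skew > 0: lost if only mean/variance are kept
```

A representation that preserves the skew term is exactly the kind of evidence the abstract offers against the summary-statistics account.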


2016
Vol 6 (1)
Author(s):
Hamidreza Namazi
Vladimir V. Kulish
Amin Akrami

Abstract
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to 'complex' visual stimuli. We demonstrate that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Since the brain is the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found a coupling between the fractality of the image, the EEG, and the fixational eye movements. The coupling observed in this research can be further investigated and applied to the treatment of different vision disorders.
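The "fractality" of a fixational eye-movement (or EEG) trace can be quantified by a fractal-dimension estimator. The abstract does not say which estimator was used; the sketch below applies the common Higuchi (1988) construction to a 1-D time series, with synthetic signals standing in for real recordings: a smooth ramp has dimension near 1, white noise near 2.

```python
# Hedged sketch: Higuchi fractal dimension of a 1-D time series.
# One standard estimator, not necessarily the paper's.
import math
import random

def higuchi_fd(x, kmax=8):
    """Estimate fractal dimension as the slope of log L(k) vs log(1/k)."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k interleaved subsamplings
            pts = [x[i] for i in range(m, n, k)]
            if len(pts) < 2:
                continue
            raw = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            # Higuchi normalization of the curve length at scale k
            lengths.append(raw * (n - 1) / ((len(pts) - 1) * k * k))
        logs.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # least-squares slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    num = sum((p[0] - mx) * (p[1] - my) for p in logs)
    den = sum((p[0] - mx) ** 2 for p in logs)
    return num / den

random.seed(0)
smooth = [i / 100 for i in range(1000)]          # ramp: dimension near 1
noisy = [random.random() for _ in range(1000)]   # white noise: near 2
print(round(higuchi_fd(smooth), 2), round(higuchi_fd(noisy), 2))
```

Applying such an estimator to gaze traces recorded while viewing images of varying fractality is the kind of analysis that could produce the image-eye-movement coupling the abstract reports.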

