Binocular Coordination, Fixation Disparity, and Ocular Dominance

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 128-128 ◽  
Author(s):  
R Radach ◽  
D Heller ◽  
P Wiebories ◽  
W Jaschinski

In a series of experiments we have quantified the spatial and temporal dynamics of binocular coordination. Tasks studied ranged from simple scanning and letter detection to complex visual processing in text reading. In all of these paradigms we found similar eye movement characteristics: in 70% to 90% of the observations, the saccade of the abducting eye is larger, with relative differences on the order of 5% to 15% of the amplitude. During the subsequent fixation the disparity is typically reduced by a convergence movement (about 1 deg s⁻¹), which sometimes exceeds the initial saccade amplitude asymmetry. Interestingly, the relative vergence contributions of the two eyes depend on saccade length. For progressive 2-letter reading saccades, the left (adducting) eye accounts for only 20% of the total movement, as compared to about 70% for 14-letter saccades. Up to now our analysis has been limited to relative rather than absolute estimates of fixation disparity. To overcome this restriction, we measured disparity using the psychophysical method of dichoptically presented nonius lines as well as direct infrared pupil-reflection registration of binocular vs monocular fixation. Both measures were independent of target eccentricity (within a range typical for reading) and produced similar subject rank orders (Spearman's ρ = 0.75). When we studied vergence movements in a letter detection task using autostereograms with different levels of virtual depth, it became clear that spatiotemporal vergence parameters can be quite asymmetric between the two eyes. This led to the question of whether unequal contributions to vergence may be related to ocular dominance. This hypothesis is currently being investigated with a new procedure that provides a reliable estimate of subjective visual direction (the ‘cyclopean eye’) under static viewing conditions.
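The rank-order agreement between the two disparity measures is a Spearman correlation, i.e., a Pearson correlation computed on rank-transformed data. As a minimal illustration (the subject values below are invented, not the study's measurements):

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    sorted_vals = sorted(values)
    return [sorted_vals.index(v) + (sorted_vals.count(v) + 1) / 2 for v in values]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

# Hypothetical per-subject disparity estimates from the two methods
nonius = [2.1, 0.5, 3.4, 1.2, 2.8]
infrared = [1.9, 0.8, 3.0, 1.0, 3.5]
print(round(spearman(nonius, infrared), 2))  # prints 0.9
```

Because only ranks enter the computation, the statistic captures agreement in subject ordering even when the two methods yield disparities on different scales.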

2021 ◽  
Vol 13 (6) ◽  
Author(s):  
Joëlle Joss ◽  
Stephanie Jainta

In reading, binocular eye movements are required for optimal visual processing; thus, in cases of asthenopia or reading problems, standard orthoptic and optometric routines check individual binocular vision with a variety of tests. The present study therefore examines the predictive value of such standard measures of heterophoria, accommodative and vergence facility, AC/A ratio, NPC, and symptoms for binocular coordination parameters during reading. Binocular eye movements were recorded (EyeLink II) for 65 volunteers during a typical reading task, and linear regression analyses related all parameters of binocular coordination to all of the above-mentioned optometric measures: saccade disconjugacy was weakly predicted by vergence facility (15% explained variance), while vergence facility, AC/A ratio, and symptom scores predicted vergence drift (31%). Heterophoria, vergence facility, and NPC explained 31% of fixation disparity, and first fixation duration showed minor relations to symptoms (18%). In sum, we found only weak to moderate relationships, with expected, selective associations: the dynamic parameters related to optometric tests addressing vergence dynamics, whereas the static parameter (fixation disparity) related mainly to heterophoria. Most surprisingly, symptoms were only loosely related to vergence drift and fixation duration, reflecting associations with a dynamic aspect of binocular eye movements in reading and, potentially, a non-specific but slight overall reading deficiency. Thus, the efficiency of optometric tests in predicting binocular coordination during reading was low – questioning a simple, straightforward extrapolation of such test results to an overlearned, complex task.
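The "explained variance" figures in this abstract are regression R² values. As a minimal sketch of how such a value arises (with made-up numbers, not the study's data), an ordinary least-squares fit for a single predictor and its R² can be computed as:

```python
def fit_r2(x, y):
    """Ordinary least-squares line for one predictor, plus R^2
    (the proportion of variance in y explained by x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical vergence-facility scores vs. saccade-disconjugacy values
facility = [5, 8, 10, 12, 15, 20]
disconjugacy = [0.40, 0.35, 0.30, 0.32, 0.25, 0.22]
slope, intercept, r2 = fit_r2(facility, disconjugacy)
```

An R² of 0.15, as reported for saccade disconjugacy, means 85% of the variance is left unexplained by the optometric predictor; the study's multi-predictor models generalize this to several regressors at once.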


2008 ◽  
Vol 2 (3) ◽  
Author(s):  
Simon P. Liversedge

In this paper I present a brief review of some recent studies my colleagues and I have carried out to investigate binocular coordination during reading. These studies demonstrate that the eyes are often not perfectly aligned during reading, with fixation disparities of approximately one character on average. Both crossed and uncrossed disparities are common, and vergence movements during fixations serve to reduce, but not eliminate, disparity. Fixation disparity results in different retinal inputs from each eye, yet a single, non-diplopic visual representation of the text is perceived when we read. A further experiment, with dichoptically presented target words in normally presented sentence frames, showed that a mechanism of fusion rather than suppression operates at an early stage during visual processing. Saccade metrics appear to be computed according to a unified visual representation based on input from both eyes.


2010 ◽  
Vol 22 (6) ◽  
pp. 1224-1234 ◽  
Author(s):  
Aaron M. Rutman ◽  
Wesley C. Clapp ◽  
James Z. Chadick ◽  
Adam Gazzaley

Selective attention confers a behavioral benefit on both perceptual and working memory (WM) performance, often attributed to top–down modulation of sensory neural processing. However, the direct relationship between early activity modulation in sensory cortices during selective encoding and subsequent WM performance has not been established. To explore the influence of selective attention on WM recognition, we used electroencephalography to study the temporal dynamics of top–down modulation in a selective, delayed-recognition paradigm. Participants were presented with overlapped, “double-exposed” images of faces and natural scenes, and were instructed to either remember the face or the scene while simultaneously ignoring the other stimulus. Here, we present evidence that the degree to which participants modulate the early P100 (97–129 msec) event-related potential during selective stimulus encoding significantly correlates with their subsequent WM recognition. These results contribute to our evolving understanding of the mechanistic overlap between attention and memory.


2016 ◽  
Vol 33 ◽  
Author(s):  
FILIPP SCHMIDT ◽  
ANDREAS WEBER ◽  
ANKE HABERKAMP

Visual perception is not instantaneous; the perceptual representation of our environment builds up over time. This can strongly affect our responses to visual stimuli. Here, we study the temporal dynamics of visual processing by analyzing the time course of priming effects induced by the well-known Ebbinghaus illusion. In slower responses, Ebbinghaus primes produce effects in accordance with their perceptual appearance. However, in fast responses, these effects are reversed. We argue that this dissociation originates from the difference between early, feedforward-mediated ‘gist of the scene’ processing and later, feedback-mediated, more elaborate processing. Indeed, our findings are well explained by the differences between low-frequency representations mediated by the fast magnocellular pathway and high-frequency representations mediated by the slower parvocellular pathway. Our results demonstrate the potentially dramatic effect of response speed on the perception of visual illusions specifically, and on our actions in response to objects in our visual environment generally.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 76-76 ◽  
Author(s):  
W Jaschinski

In binocular vision, fixation disparity is present when a fixation point falls within Panum's area, but not on corresponding retinal points. To investigate the effect of vergence load, fixation disparity was measured at viewing distances of 20, 30, 40, 60, and 100 cm (while the test subtended a constant angular size) by the psychophysical method of dichoptically presented nonius lines with a central fusion stimulus. As the viewing distance was shortened from 100 to 20 cm, mean fixation disparity changed monotonically from 1 min arc esophoria (i.e., the eyes converged in front of the target) to 3 min arc exophoria. The average standard deviation of the psychometric function, which is a measure of the temporal variability of vergence, was smallest at 100 cm (when fixation disparity was esophoric) and increased at shorter distances. Fixation disparity was also measured at a constant distance of 40 cm, but with prisms in front of the eyes that induced the same vergence angles as would be induced by viewing distances between 20 and 100 cm. The slope of these conventional ‘fixation disparity curves’ as a function of prism load correlated with the slope of fixation disparity as a function of viewing distance (r = 0.39, p = 0.02, n = 25). However, testing at different distances, as introduced here, has the advantage of preserving the natural interaction between vergence and accommodation. Since the change of fixation disparity with distance differed reliably among subjects (with a test–retest correlation of 0.65 in 34 subjects with good binocular vision), this measure may be useful for identifying subjects who are prone to near-vision complaints.


2006 ◽  
Vol 96 (5) ◽  
pp. 2253-2264 ◽  
Author(s):  
Daniel L. Adams ◽  
Jonathan C. Horton

In many regions of the mammalian cerebral cortex, cells that share a common receptive field property are grouped into columns. Despite intensive study, the function of the cortical column remains unknown. In the squirrel monkey, the expression of ocular dominance columns is variable, with columns present in some animals and not in others. By searching for differences between animals with and without columns, it should be possible to infer how columns contribute to visual processing. Single-cell recordings outside layer 4C were made in nine squirrel monkeys, followed by labeling of ocular dominance columns in layer 4C. In the squirrel monkey, compared with the macaque, cells outside layer 4C were more likely to respond to stimulation of either eye whether ocular dominance columns were present or not. In three animals lacking ocular dominance columns, single cells were recorded from layer 4C. Remarkably, 20% of cells in layer 4C were monocular despite the absence of columns. This observation means that ocular dominance columns are not necessary for monocular cells to occur in striate cortex. In macaques each row of cytochrome oxidase (CO) patches is aligned with an ocular dominance column and receives koniocellular input serving one eye only. In squirrel monkeys this was not true: CO patches and ocular dominance columns had no spatial correlation and the koniocellular input to CO patches was binocular. Thus even when ocular dominance columns occur in the squirrel monkey, they do not transform the functional architecture to resemble that of the macaque.


2011 ◽  
Vol 23 (12) ◽  
pp. 4094-4105 ◽  
Author(s):  
Chien-Te Wu ◽  
Melissa E. Libertus ◽  
Karen L. Meyerhoff ◽  
Marty G. Woldorff

Several major cognitive neuroscience models have posited that focal spatial attention is required to integrate different features of an object to form a coherent perception of it within a complex visual scene. Although many behavioral studies have supported this view, some have suggested that complex perceptual discrimination can be performed even with substantially reduced focal spatial attention, calling into question the complexity of object representation that can be achieved without focused spatial attention. In the present study, we took a cognitive neuroscience approach to this problem by recording cognition-related brain activity both to help resolve the questions about the role of focal spatial attention in object categorization processes and to investigate the underlying neural mechanisms, focusing particularly on the temporal cascade of these attentional and perceptual processes in visual cortex. More specifically, we recorded electrical brain activity in humans engaged in a specially designed cued visual search paradigm to probe the object-related visual processing before and during the transition from distributed to focal spatial attention. The onset times of the color popout cueing information, indicating where within an object array the subject was to shift attention, were parametrically varied relative to the presentation of the array (i.e., either occurring simultaneously or being delayed by 50 or 100 msec). The electrophysiological results demonstrate that some levels of object-specific representation can be formed in parallel for multiple items across the visual field under spatially distributed attention, before focal spatial attention is allocated to any of them. The object discrimination process appears to be subsequently amplified as soon as focal spatial attention is directed to a specific location and object. This set of novel neurophysiological findings thus provides important new insights into fundamental issues that have long been debated in cognitive neuroscience concerning both object-related processing and the role of attention.


2018 ◽  
Author(s):  
Tijl Grootswagers ◽  
Amanda K. Robinson ◽  
Thomas A. Carlson

In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli, and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20 Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
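Time-resolved multivariate decoding applies a classifier independently at each timepoint of the recorded epoch and asks when stimulus category becomes readable from the sensor pattern. The sketch below uses a simple nearest-centroid decoder on invented two-channel data at a single timepoint; the published analyses used regularised linear classifiers on full EEG channel patterns, so this is only an illustration of the principle:

```python
def nearest_centroid_accuracy(train, test):
    """Classify each test vector by its nearest class centroid (squared
    Euclidean distance) and return the proportion classified correctly."""
    centroids = {}
    for label, vectors in train.items():
        n = len(vectors)
        centroids[label] = [sum(channel) / n for channel in zip(*vectors)]
    correct = 0
    for vector, label in test:
        pred = min(centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(vector, centroids[c])))
        correct += pred == label
    return correct / len(test)

# Invented two-channel "EEG" patterns at one timepoint; a real analysis
# repeats this at every timepoint and plots accuracy against time.
train = {"animate": [[1.0, 0.2], [0.8, 0.1]],
         "inanimate": [[0.1, 0.9], [0.2, 1.1]]}
test = [([0.9, 0.0], "animate"), ([0.0, 1.0], "inanimate")]
print(nearest_centroid_accuracy(train, test))  # prints 1.0
```

Accuracy above chance at a given latency is taken as evidence that category information is present in the neural signal at that time.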


2019 ◽  
Author(s):  
Sophia M. Shatek ◽  
Tijl Grootswagers ◽  
Amanda K. Robinson ◽  
Thomas A. Carlson

Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Our results indicate that the dynamics of imagery processes are more variable across, and within, participants compared to perception of physical stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results for our understanding of the neural processes underlying mental imagery.

