Cooperative Representation of Visual Borders

Perception ◽  
1992 ◽  
Vol 21 (2) ◽  
pp. 185-193 ◽  
Author(s):  
Geoffrey W Stuart ◽  
Terence R J Bossomaier

It has recently been reported that visual cortical cells engaged in cooperative coding of global stimulus features display synchronous firing when both members of a pair are stimulated. Alternative models identify global stimulus features with the coarse spatial scales of the image. Versions of the Münsterberg and Café Wall illusions which differed in their low-spatial-frequency content were used to show that in all cases it was the high spatial frequencies in the image that determined the strength and direction of these illusions. Since cells responsive to high spatial frequencies have small receptive fields, cooperative coding must be involved in the representation of long borders in the image.
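The frequency-content manipulation this abstract describes can be sketched as a Gaussian low-pass/high-pass split of an image. The toy tile pattern, image size, and cutoff `sigma` below are illustrative assumptions, not the study's actual stimuli:

```python
import numpy as np

def gaussian_lowpass(image, sigma):
    """Low-pass filter an image with a Gaussian of spatial standard
    deviation `sigma` (pixels), applied in the Fourier domain."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    transfer = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

def split_spatial_frequencies(image, sigma=4.0):
    """Split an image into low- and high-frequency components that sum
    back to the original image."""
    low = gaussian_lowpass(image, sigma)
    return low, image - low

# Toy Cafe-Wall-like pattern: rows of tiles, alternate rows offset by half a tile
y, x = np.mgrid[0:64, 0:64]
row = y // 8
pattern = (((x + 4 * (row % 2)) // 8 + row) % 2).astype(float)

low, high = split_spatial_frequencies(pattern)
```

Attenuating `low` before display gives the kind of low-frequency-reduced illusion variant the abstract refers to; its finding is that the illusion follows the content of `high`.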

2021 ◽  
Vol 2 ◽  
Author(s):  
Arthur Shapiro

Shapiro and Hedjar (2019) proposed a shift in the definition of illusion, from ‘differences between perception and reality’ to ‘conflicts between possible constructions of reality’. This paper builds on this idea by presenting a series of motion hybrid images that juxtapose fine scale contrast (high spatial frequency content) with coarse scale contrast-generated motion (low spatial frequency content). As is the case for static hybrid images, under normal viewing conditions the fine scale contrast determines the perception of motion hybrid images; however, if the motion hybrid image is blurred or viewed from a distance, the perception is determined by the coarse scale contrast. The fine scale contrast therefore masks the perception of motion (and sometimes depth) produced by the coarser scale contrast. Since the unblurred movies contain both fine and coarse scale contrast information, but the blurred movies contain only coarse scale contrast information, cells in the brain that respond to low spatial frequencies should respond equally to both blurred and unblurred movies. Since people undoubtedly differ in the optics of their eyes and most likely in the neural processes that resolve conflict across scales, the paper suggests that motion hybrid images illustrate trade-offs between spatial scales that are important for understanding individual differences in perceptions of the natural world.
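The static hybrid-image construction the paper builds on (low spatial frequencies from one source, high spatial frequencies from another) can be sketched as follows. The blur scale and the random placeholder "images" are assumptions for illustration, and the paper's actual stimuli are movies rather than static arrays:

```python
import numpy as np

def blur(img, sigma):
    """Gaussian blur via a Fourier-domain transfer function."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    g = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

def hybrid(coarse_src, fine_src, sigma=6.0):
    """Combine the low spatial frequencies of `coarse_src` with the
    high spatial frequencies of `fine_src`."""
    return blur(coarse_src, sigma) + (fine_src - blur(fine_src, sigma))

rng = np.random.default_rng(0)
coarse = rng.standard_normal((64, 64))
fine = rng.standard_normal((64, 64))
combined = hybrid(coarse, fine)
```

Blurring `combined`, or viewing it from a distance, leaves mostly the `coarse` component; up close, the `fine` component dominates. That is the scale-dependent perceptual switch the abstract describes.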


2012 ◽  
Vol 107 (4) ◽  
pp. 1094-1110 ◽  
Author(s):  
X. Tao ◽  
B. Zhang ◽  
E. L. Smith ◽  
S. Nishimoto ◽  
I. Ohzawa ◽  
...  

We used dynamic dense noise stimuli and local spectral reverse correlation methods to reveal the local sensitivities of neurons in visual area 2 (V2) of macaque monkeys to orientation and spatial frequency within their receptive fields. This minimized the potentially confounding assumptions that are inherent in stimulus selections. The majority of neurons exhibited a relatively high degree of homogeneity for the preferred orientations and spatial frequencies in the spatial matrix of facilitatory subfields. However, about 20% of all neurons showed maximum orientation differences between neighboring subfields that were greater than 25 deg. The neurons preferring horizontal or vertical orientations showed less inhomogeneity in space than the neurons preferring oblique orientations. Over 50% of all units also exhibited suppressive profiles, and those were more heterogeneous than facilitatory profiles. The preferred orientation and spatial frequency of suppressive profiles differed substantially from those of facilitatory profiles, and the neurons with suppressive subfields had greater orientation selectivity than those without suppressive subfields. The peak suppression occurred with longer delays than the peak facilitation. These results suggest that the receptive field profiles of the majority of V2 neurons reflect the orderly convergence of V1 inputs over space, but that a subset of V2 neurons exhibit more complex response profiles having both suppressive and facilitatory subfields. These V2 neurons with heterogeneous subfield profiles could play an important role in the initial processing of complex stimulus features.
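As a simplified illustration of noise-based receptive-field mapping, one can recover a model neuron's field from dense noise by plain spike-triggered averaging. This is not the local spectral reverse correlation the study actually used, and the field shape, threshold, and trial counts are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model neuron: linear filtering by a vertical-bar "receptive field"
# followed by a spike threshold.
field = np.zeros((8, 8))
field[:, 3:5] = 1.0
field -= field.mean()

# Dense noise stimulus: many independent random frames.
frames = rng.standard_normal((5000, 8, 8))
drive = (frames * field).sum(axis=(1, 2))
spikes = drive > 1.0

# Reverse correlation: average the frames on which the neuron "spiked".
sta = frames[spikes].mean(axis=0)
```

For Gaussian noise, `sta` converges to the linear field; local spectral reverse correlation goes further by characterizing orientation and spatial-frequency preference within subregions of the field, which is what reveals the subfield homogeneity the abstract reports.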


2009 ◽  
Vol 26 (4) ◽  
pp. 411-420 ◽  
Author(s):  
MICHAEL L. RISNER ◽  
TIMOTHY J. GAWNE

Neurons in visual cortical area V1 typically respond well to lines or edges of specific orientations. There have been many studies investigating how the responses of these neurons to an oriented edge are affected by changes in luminance contrast. However, in natural images, edges vary not only in contrast but also in the degree of blur, both because of changes in focus and also because shadows are not sharp. The effect of blur on the response dynamics of visual cortical neurons has not been explored. We presented luminance-defined single edges in the receptive fields of parafoveal (1–6 deg eccentric) V1 neurons of two macaque monkeys trained to fixate a spot of light. We varied the width of the blurred region of the edge stimuli up to 0.36 deg of visual angle. Even though the neurons responded robustly to stimuli that only contained high spatial frequencies and 0.36 deg is much larger than the limits of acuity at this eccentricity, changing the degree of blur had minimal effect on the responses of these neurons to the edge. Primates need to measure blur at the fovea to evaluate image quality and control accommodation, but this might only involve a specialist subpopulation of neurons. If visual cortical neurons in general responded differently to sharp and blurred stimuli, then this could provide a cue for form perception, for example, by helping to disambiguate the luminance edges created by real objects from those created by shadows. On the other hand, it might be important to avoid the distraction of changing blur as objects move in and out of the plane of fixation. Our results support the latter hypothesis: the responses of parafoveal V1 neurons are largely unaffected by changes in blur over a wide range.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 48-48
Author(s):  
B Wink ◽  
J P Harris

It has been suggested that the Parkinsonian visual system is like the normal visual system, but inappropriately dark-adapted (Beaumont et al, 1987 Clinical Vision Sciences 2 123–129). Thus it is of interest to ask to what extent dark adaptation of normal subjects produces visual changes like those of Parkinson's disease (PD). One such change is the reduction in apparent contrast of medium and high spatial frequencies in peripheral vision in the illness (Harris et al, 1992 Brain 115 1447–1457). Normal subjects judged whether the contrast of a peripherally viewed grating was higher or lower than that of a foveally viewed grating, and a staircase technique was used to estimate the point of subjective equality. Judgements were made at four spatial frequencies (0.5 to 4.0 cycles deg−1) and four contrasts (8.0% to 64%). The display, the mean luminance of which was 26 cd m−2, was viewed through a 1.5 log unit neutral density filter in the relatively dark-adapted condition. ANOVA showed an interaction between dark adaptation and the spatial frequency of the gratings: dark adaptation reduced the apparent contrast of high-spatial-frequency gratings, an effect which was greater at lower contrasts. This mimics the effect found with PD sufferers, and suggests that dark adaptation may provide a useful model of the PD visual system. In a second experiment, the effect of dark adaptation on the relationship between apparent spatial frequency in the fovea and periphery was investigated. The experiment was similar to the first, except that judgements were made about the apparent spatial frequency, rather than the contrast, of the peripheral grating. ANOVA showed no differential effect of dark adaptation on the apparent spatial frequency of the peripheral grating. This suggests that the observed reduction in apparent contrast of the peripheral gratings in dark-adapted normals and Parkinson's sufferers may reflect relative changes in contrast gain, rather than relative changes in the spatial organisation of receptive fields.
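A staircase estimate of a point of subjective equality can be sketched generically. This is a plain one-up one-down rule with a simulated observer; the step size, trial count, and observer model are illustrative assumptions, not the authors' protocol:

```python
import numpy as np

def staircase_pse(respond, start, step, n_trials=200):
    """One-up one-down staircase: lower the test level after a 'test is
    higher' response, raise it after a 'test is lower' response. The
    levels settle around the 50% point, i.e. the point of subjective
    equality (PSE)."""
    level, history = start, []
    for _ in range(n_trials):
        history.append(level)
        level += -step if respond(level) else step
    # Crude PSE estimate: mean level over the second half of the run.
    return float(np.mean(history[n_trials // 2:]))

# Simulated observer whose true PSE is 0.4, with judgment noise.
rng = np.random.default_rng(2)
pse = staircase_pse(lambda c: c + rng.normal(0, 0.05) > 0.4,
                    start=0.8, step=0.02)
```

Because the rule steps down whenever the test looks higher and up whenever it looks lower, the track hovers where the two responses are equally likely, which is exactly the subjective match the study needed for each spatial frequency and contrast.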


2018 ◽  
Vol 119 (6) ◽  
pp. 2059-2067 ◽  
Author(s):  
Chris Scholes ◽  
Paul V. McGraw ◽  
Neil W. Roach

During periods of steady fixation, we make small-amplitude ocular movements, termed microsaccades, at a rate of 1–2 every second. Early studies provided evidence that visual sensitivity is reduced during microsaccades—akin to the well-established suppression associated with larger saccades. However, the results of more recent work suggest that microsaccades may alter retinal input in a manner that enhances visual sensitivity to some stimuli. Here we parametrically varied the spatial frequency of a stimulus during a detection task and tracked contrast sensitivity as a function of time relative to microsaccades. Our data reveal two distinct modulations of sensitivity: suppression during the eye movement itself and facilitation after the eye has stopped moving. The magnitude of suppression and facilitation of visual sensitivity is related to the spatial content of the stimulus: suppression is greatest for low spatial frequencies, while sensitivity is enhanced most for stimuli of 1–2 cycles/°, spatial frequencies at which we are already most sensitive in the absence of eye movements. We present a model in which the tuning of suppression and facilitation is explained by delayed lateral inhibition between spatial frequency channels. Our data show that eye movements actively modulate visual sensitivity even during fixation: the detectability of images at different spatial scales can be increased or decreased depending on when the image occurs relative to a microsaccade. NEW & NOTEWORTHY Given the frequency with which we make microsaccades during periods of fixation, it is vital that we understand how they affect visual processing. We demonstrate two selective modulations of contrast sensitivity that are time-locked to the occurrence of a microsaccade: suppression of low spatial frequencies during each eye movement and enhancement of higher spatial frequencies after the eye has stopped moving. 
These complementary changes may arise naturally because of sluggish gain control between spatial channels.


Perception ◽  
1985 ◽  
Vol 14 (2) ◽  
pp. 225-238 ◽  
Author(s):  
Ken Nakayama ◽  
Gerald H Silverman ◽  
Donald I A MacLeod ◽  
Jeffrey Mulligan

The sensitivity of the visual system to motion of differentially moving random dots was measured. Two kinds of one-dimensional motion were compared: standing-wave patterns where dot movement amplitude varied as a sinusoidal function of position along the axis of dot movement (longitudinal or compressional waves) and patterns of motion where dot movement amplitude varied as a sinusoidal function orthogonal to the axis of motion (transverse or shearing waves). Spatial frequency, temporal frequency, and orientation of the motion were varied. The major finding was a much larger threshold rise for shear than for compression when motion spatial frequency increased beyond 1 cycle deg−1. Control experiments ruled out the extraneous cues of local luminance or local dot density. No conspicuous low spatial-frequency rise in thresholds for any type of differential motion was seen at the lowest spatial frequencies tested, and no difference was seen between horizontal and vertical motion. The results suggest that at the motion threshold spatial integration is greatest in a direction orthogonal to the direction of motion, a view consistent with elongated receptive fields most sensitive to motion orthogonal to their major axis.
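The two wave types compared above can be written down directly: in a compressional (longitudinal) wave, dot displacement along the motion axis is modulated by position along that same axis; in a shearing (transverse) wave, it is modulated by the orthogonal position. The spatial frequency, amplitude, and dot count below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(4)
n_dots = 500
x, y = rng.uniform(0.0, 10.0, (2, n_dots))  # random dot positions, deg

sf = 1.0    # motion spatial frequency, cycles/deg
amp = 0.05  # peak dot displacement, deg

# Longitudinal (compressional): horizontal displacement varies with x,
# the position along the axis of motion.
dx_compression = amp * np.sin(2 * np.pi * sf * x)

# Transverse (shearing): horizontal displacement varies with y,
# the position orthogonal to the axis of motion.
dx_shear = amp * np.sin(2 * np.pi * sf * y)
```

Animating the dots back and forth by these displacements yields the standing-wave stimuli; the study's finding is that thresholds rise much faster with `sf` for the shear case than for the compression case.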


2022 ◽  
Vol 18 (1) ◽  
pp. e1009739
Author(s):  
Nathan C. L. Kong ◽  
Eshed Margalit ◽  
Justin L. Gardner ◽  
Anthony M. Norcia

Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, where their eigenspectra have power law exponents of at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is related to the encoding of fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the measured spatial frequency distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings that there is a misalignment between human and machine perception. They also suggest that it may be useful to penalize slow-decaying eigenspectra or to bias models to extract features of lower spatial frequencies during task-optimization in order to improve robustness and V1 neural response predictivity.
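The central quantity here, the power-law exponent of a response eigenspectrum, can be estimated with a simple log-log fit. This is a simplified stand-in for the cross-validated estimators used in the literature, and the synthetic response matrix and fit range are assumptions for illustration:

```python
import numpy as np

def eigenspectrum_exponent(responses, fit_range=(2, 50)):
    """Fit variance_n ~ n^(-alpha) to the PCA eigenspectrum of a
    (samples x units) response matrix and return alpha."""
    centered = responses - responses.mean(axis=0)
    # Squared singular values / N give the PCA variances, sorted descending.
    variances = np.linalg.svd(centered, compute_uv=False) ** 2 / len(centered)
    ranks = np.arange(1, len(variances) + 1)
    lo, hi = fit_range
    slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(variances[lo:hi]), 1)
    return -slope

# Synthetic "responses" built to have an approximately n^-1 eigenspectrum.
rng = np.random.default_rng(3)
dims = 200
responses = rng.standard_normal((5000, dims)) * np.arange(1, dims + 1) ** -0.5
alpha = eigenspectrum_exponent(responses)
```

Exponents at or near one (as measured in V1) mark the regime the theory links to robustness; slower decay, i.e. smaller `alpha`, means more response variance carried by fine stimulus features.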


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Jan H. Kirchner ◽  
Julijana Gjorgjieva

Synaptic inputs on cortical dendrites are organized with remarkable subcellular precision at the micron level. This organization emerges during early postnatal development through patterned spontaneous activity and manifests both locally where nearby synapses are significantly correlated, and globally with distance to the soma. We propose a biophysically motivated synaptic plasticity model to dissect the mechanistic origins of this organization during development and elucidate synaptic clustering of different stimulus features in the adult. Our model captures local clustering of orientation in ferret and receptive field overlap in mouse visual cortex based on the receptive field diameter and the cortical magnification of visual space. Including action potential back-propagation explains branch clustering heterogeneity in the ferret and produces a global retinotopy gradient from soma to dendrite in the mouse. Therefore, by combining activity-dependent synaptic competition and species-specific receptive fields, our framework explains different aspects of synaptic organization regarding stimulus features and spatial scales.


2019 ◽  
Author(s):  
Mickaël Jean Rémi Perrier ◽  
Louise Kauffmann ◽  
Carole Peyrin ◽  
Nicolas Vermeulen ◽  
Frederic Dutheil ◽  
...  

We attempted to highlight the respective importance of low spatial frequencies (LSFs) and high spatial frequencies (HSFs) in the emergence of visual consciousness by using an attentional blink paradigm in order to manipulate the conscious report of visual stimuli. Thirty-eight participants were asked to identify and report two targets (happy faces) embedded in a rapid stream of distractors (angry faces). Conscious perception of the second target (T2) usually improved as the lag between the targets increased. The distractors between T1 and T2 were either non-filtered (broad spatial frequencies, BSF), low-pass filtered (LSF), or high-pass filtered (HSF). The spatial frequency content of the distractors resulted in a greater disturbance of T2 reporting in the HSF than in the LSF condition. We argue that this could support the idea of HSF information playing a crucial role in the emergence of exogenous consciousness in the visual system. Other interpretations are also discussed.

