Do perceptual biases emerge early or late in visual processing? Decision-biases in motion perception

2016, Vol. 283 (1833), pp. 20160263
Author(s):  
Elisa Zamboni ◽  
Timothy Ledgeway ◽  
Paul V. McGraw ◽  
Denis Schluppeck

Visual perception is strongly influenced by contextual information. A good example is reference repulsion, where subjective reports about the direction of motion of a stimulus are significantly biased by the presence of an explicit reference. These perceptual biases could arise early, during sensory encoding, or alternatively, they may reflect decision-related processes occurring relatively late in the task sequence. To separate these two competing possibilities, we asked (human) subjects to perform a fine motion-discrimination task and then estimate the direction of motion in the presence or absence of an oriented reference line. When subjects performed the discrimination task with the reference, but subsequently estimated motion direction in its absence, direction estimates were unbiased. However, when subjects viewed the same stimuli but performed the estimation task only, with the orientation of the reference line jittered on every trial, the directions estimated by subjects were biased and yoked to the orientation of the shifted reference line. These results show that judgements made relative to a reference are subject to late, decision-related biases. A model in which information about motion is integrated with that of an explicit reference cue, resulting in a late, decision-related re-weighting of the sensory representation, can account for these results.
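The logic of such a late re-weighting account can be illustrated with a toy sketch. This is not the authors' fitted model; the function name, the weight `w_ref`, and the repulsion magnitude are all hypothetical, chosen only to show how a decision-stage bias can appear when a reference is present and vanish when it is absent.

```python
import numpy as np

def estimate_direction(sensory_deg, reference_deg=None,
                       w_ref=0.3, repulsion_deg=6.0):
    """Toy decision-stage model of reference repulsion.

    The sensory estimate itself is unbiased; only when an explicit
    reference is available at the decision stage is the report pushed
    away from it (re-weighted), in the direction of the offset.
    All parameter values are illustrative, not fitted.
    """
    if reference_deg is None:
        return sensory_deg  # no reference at decision time -> unbiased report
    offset = sensory_deg - reference_deg
    return sensory_deg + np.sign(offset) * w_ref * repulsion_deg

# A 2 deg motion direction judged against a 0 deg reference is repelled,
# while the same sensory evidence reported without a reference is not:
biased = estimate_direction(2.0, reference_deg=0.0)    # larger than 2.0
unbiased = estimate_direction(2.0)                     # exactly 2.0
```

The key design point mirrors the paper's dissociation: the same `sensory_deg` input yields biased or unbiased reports depending solely on whether the reference enters the decision computation.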

2021, pp. 096372142199033
Author(s):  
Katherine R. Storrs ◽  
Roland W. Fleming

One of the deepest insights in neuroscience is that sensory encoding should take advantage of statistical regularities. Humans’ visual experience contains many redundancies: Scenes mostly stay the same from moment to moment, and nearby image locations usually have similar colors. A visual system that knows which regularities shape natural images can exploit them to encode scenes compactly or guess what will happen next. Although these principles have been appreciated for more than 60 years, until recently it has been possible to convert them into explicit models only for the earliest stages of visual processing. But recent advances in unsupervised deep learning have changed that. Neural networks can be taught to compress images or make predictions in space or time. In the process, they learn the statistical regularities that structure images, which in turn often reflect physical objects and processes in the outside world. The astonishing accomplishments of unsupervised deep learning reaffirm the importance of learning statistical regularities for sensory coding and provide a coherent framework for how knowledge of the outside world gets into visual cortex.
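The core idea that redundancy permits compact encoding can be shown in miniature. The sketch below is a deliberately tiny illustration, not one of the deep networks the review discusses: a one-component linear code (equivalent to the first principal component, which a one-unit linear autoencoder learns) exploits the correlation between neighbouring "pixels" to represent two-pixel patches with a single number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": two-pixel patches where neighbouring pixels are highly
# correlated, mimicking the redundancy of natural scenes.
base = rng.normal(size=(1000, 1))
patches = np.hstack([base, base + 0.1 * rng.normal(size=(1000, 1))])
patches -= patches.mean(axis=0)

# The first right singular vector is the first principal component --
# the code a one-unit linear autoencoder would converge to.
_, _, vt = np.linalg.svd(patches, full_matrices=False)
code = patches @ vt[0]          # one number per two-pixel patch
recon = np.outer(code, vt[0])   # decode back to two pixels

# Because the pixels are redundant, one code value captures almost
# all of the variance in the data.
explained = 1 - np.sum((patches - recon) ** 2) / np.sum(patches ** 2)
print(round(explained, 3))  # close to 1.0
```

A system that has learned this regularity can halve its representation with almost no loss; the same principle, scaled up by deep unsupervised learning, is what the review argues lets networks discover the structure of natural images.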


Author(s):  
Christian Merkel ◽  
Mandy Viktoria Bartsch ◽  
Mircea A Schoenfeld ◽  
Anne-Katrin Vellage ◽  
Notger G Müller ◽  
...  

Visual working memory (VWM) is an active representation enabling the manipulation of item information even in the absence of visual input. A common way to investigate VWM is to analyze performance at later recall. This approach, however, leaves uncertainties about whether the variation of recall performance is attributable to item encoding and maintenance or to the testing of memorized information. Here, we record the contralateral delay activity (CDA) - an established electrophysiological measure of item storage and maintenance - in human subjects performing a delayed orientation precision estimation task. This allows us to link the fluctuation of recall precision directly to the process of item encoding and maintenance. We show that for two sequentially encoded orientation items, the CDA amplitude reflects the precision of orientation recall of both items, with higher precision being associated with a larger amplitude. Furthermore, we show that the CDA amplitudes of the two items vary independently of each other, suggesting that the precision of their memory representations fluctuates independently.


2008, Vol. 99 (5), pp. 2558-2576
Author(s):  
Mario Ruiz-Ruiz ◽  
Julio C. Martinez-Trujillo

Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that (1) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction of motion updating); (2) the amount of updating varied across subjects and stimulus directions; (3) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; (4) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); (5) perceptual updating was more accurate than motion direction updating involving saccades; and (6) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles that of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccade and motion direction discrimination) with different degrees of accuracy.


2017, Vol. 118 (3), pp. 1542-1555
Author(s):  
Bastian Schledde ◽  
F. Orlando Galashan ◽  
Magdalena Przybyla ◽  
Andreas K. Kreiter ◽  
Detlef Wegener

Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation on a cellular level is lacking. Therefore, we investigated neuronal activity in macaque motion-selective mediotemporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged already in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class. NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain. 
Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of motion-selective monkey area MT neurons irrespective of the attended motion direction but specific to the attended feature dimension.


2020, Vol. 6 (1), pp. 335-362
Author(s):  
Tatiana Pasternak ◽  
Duje Tadin

Psychophysical and neurophysiological studies of responses to visual motion have converged on a consistent set of general principles that characterize visual processing of motion information. Both types of approaches have shown that the direction and speed of target motion are among the most important encoded stimulus properties, revealing many parallels between psychophysical and physiological responses to motion. Motivated by these parallels, this review focuses largely on more direct links between the key feature of the neuronal response to motion, direction selectivity, and its utilization in memory-guided perceptual decisions. These links were established during neuronal recordings in monkeys performing direction discriminations, but also by examining perceptual effects of widespread elimination of cortical direction selectivity produced by motion deprivation during development. Other approaches, such as microstimulation and lesions, have documented the importance of direction-selective activity in the areas that are active during memory-guided direction comparisons, area MT and the prefrontal cortex, revealing their likely interactions during behavioral tasks.


2009, Vol. 5 (2), pp. 270-273
Author(s):  
Szonya Durant ◽  
Johannes M Zanker

Illusory position shifts induced by motion suggest that motion processing can interfere with perceived position. This may be because accurate position representation is lost during successive visual processing steps. We found that complex motion patterns, which can only be extracted at a global level by pooling and segmenting local motion signals and integrating over time, can influence perceived position. We used motion-defined Gabor patterns containing motion-defined boundaries, which themselves moved over time. This ‘motion-defined motion’ induced position biases of up to 0.5°, much larger than has been found with luminance-defined motion. The size of the shift correlated with how detectable the motion-defined motion direction was, suggesting that the amount of bias increased with the magnitude of this complex directional signal. However, positional shifts did occur even when participants were not aware of the direction of the motion-defined motion. The size of the perceptual position shift was greatly reduced when the position judgement was made relative to the location of a static luminance-defined square, but not eliminated. These results suggest that motion-induced position shifts are a result of general mechanisms matching dynamic object properties with spatial location.


2017, Vol. 284 (1867), pp. 20172278
Author(s):  
Scarlett R. Howard ◽  
Aurore Avarguès-Weber ◽  
Jair E. Garcia ◽  
Devi Stuart-Fox ◽  
Adrian G. Dyer

How different visual systems process images and make perceptual errors can inform us about cognitive and visual processes. One of the strongest geometric errors in perception is a misperception of size depending on the size of surrounding objects, known as the Ebbinghaus or Titchener illusion. The ability to perceive the Ebbinghaus illusion appears to vary dramatically among vertebrate species, and even populations, but this may depend on whether the viewing distance is restricted. We tested whether honeybees perceive contextual size illusions, and whether errors in perception of size differed under restricted and unrestricted viewing conditions. When the viewing distance was unrestricted, there was an effect of context on size perception and thus, similar to humans, honeybees perceived contrast size illusions. However, when the viewing distance was restricted, bees were able to judge absolute size accurately and did not succumb to visual illusions, despite differing contextual information. Our results show that accurate size perception depends on viewing conditions, and thus may explain the wide variation in previously reported findings across species. These results provide insight into the evolution of visual mechanisms across vertebrate and invertebrate taxa, and suggest convergent evolution of a visual processing solution.


2018 ◽  
Author(s):  
Regan M. Gallagher ◽  
Thomas Suddendorf ◽  
Derek H. Arnold

Perceptual judgements are, by nature, a product of both sensation and the cognitive processes responsible for interpreting and reporting subjective experiences. Changed perceptual judgements may thus result from changes in how the world appears (perception), or subsequent interpretation (cognition). This ambiguity has led to persistent debates about how to interpret changes in decision-making, and about whether cognition can change how the world looks, sounds, or feels. Here we introduce an approach that can help resolve these ambiguities. In three motion-direction experiments, we measured perceptual judgements and subjective confidence. Sensory encoding changes (i.e. the motion-direction aftereffect) impacted each measure equally, as the perceptual evidence informing both responses had changed. However, decision changes dissociated from reports of subjective uncertainty when non-perceptual effects changed decision-making. Our findings show that subjective confidence can provide important information about the cause of aftereffects, and can help inform us about the organisation of the mind.


2015, Vol. 114 (2), pp. 1211-1226
Author(s):  
Jonas Larsson ◽  
Sarah J. Harrison

Adaptation at early stages of sensory processing can be propagated to downstream areas. Such inherited adaptation is a potential confound for functional magnetic resonance imaging (fMRI) techniques that use selectivity of adaptation to infer neuronal selectivity. However, the relative contributions of inherited and intrinsic adaptation at higher cortical stages, and the impact of inherited adaptation on downstream processing, remain unclear. Using fMRI, we investigated how adaptation to visual motion direction and orientation influences visually evoked responses in human V1 and extrastriate visual areas. To dissociate inherited from intrinsic adaptation, we quantified the spatial specificity of adaptation for each visual area as a measure of the receptive field sizes of the area where adaptation originated, predicting that adaptation originating in V1 should be more spatially specific than adaptation intrinsic to extrastriate visual cortex. In most extrastriate visual areas, the spatial specificity of adaptation did not differ from that in V1, suggesting that adaptation originated in V1. Only in one extrastriate area—MT—was the spatial specificity of direction-selective adaptation significantly broader than in V1, consistent with a combination of inherited V1 adaptation and intrinsic MT adaptation. Moreover, inherited adaptation effects could be both facilitatory and suppressive. These results suggest that adaptation at early visual processing stages can have widespread and profound effects on responses in extrastriate visual areas, placing important constraints on the use of fMRI adaptation techniques, while also demonstrating a general experimental strategy for systematically dissociating inherited from intrinsic adaptation by fMRI.


2017, Vol. 118 (1), pp. 564-573
Author(s):  
Sonia Poltoratski ◽  
Sam Ling ◽  
Devin McCormack ◽  
Frank Tong

The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1–V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1–hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer’s attentional state.

