stimulus match
Recently Published Documents


TOTAL DOCUMENTS: 4 (FIVE YEARS: 0)
H-INDEX: 2 (FIVE YEARS: 0)

2018
Author(s): Marcelo G. Mattar, Marie V. Carter, Marc S. Zebrowitz, Sharon L. Thompson-Schill, Geoffrey K. Aguirre

The internal representation of stimuli is imperfect and subject to bias. Noise introduced at initial encoding and during maintenance degrades the precision of representation. Stimulus estimation is also biased away from recently encountered stimuli, a phenomenon known as adaptation. Within a Bayesian framework, greater biases are predicted to result from poor precision. We tested for this effect in individual difference measures. 202 subjects contributed data through an online experiment (https://cfn.upenn.edu/iadapt). During separate face and color blocks, subjects performed three different tasks: an immediate stimulus match (15 trials), a delayed match after 5 seconds (30 trials), and 5 seconds of adaptation followed by a delayed match (30 trials). The stimulus spaces were circular and subjects entered their responses using a color/face wheel. Bias and precision of responses were extracted by fitting a mixture of von Mises distributions that accounted for random guesses. Two blocks of each measure were obtained, allowing for tests of measure reliability. We found that reliable differences between individuals in precision were as great as those between tasks or materials. The adaptation manipulation induced the expected bias in responses (colors: 7.8°; faces: 5.0°), and the magnitude of this bias reliably and substantially varied between subjects. Across subjects, there was a negative correlation between mean precision and bias (color: ρ = −0.26; faces: ρ = −0.13). This relationship was replicated in a new experiment with 192 subjects (color: ρ = −0.22; faces: ρ = −0.19). This result is consistent with a Bayesian observer model, in which individual differences in the precision of perceptual representation influence the magnitude of adaptation bias.
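The mixture-model fitting step described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: it fits a von Mises component plus a uniform guessing component to circular response errors by maximum likelihood. All parameter names, starting values, and the simulated data are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    """Negative log-likelihood of a von Mises + uniform-guess mixture."""
    mu, log_kappa, logit_g = params
    kappa = np.exp(log_kappa)            # precision, kept positive
    g = 1.0 / (1.0 + np.exp(-logit_g))   # guess rate, kept in (0, 1)
    p = (1 - g) * vonmises.pdf(errors, kappa, loc=mu) + g / (2 * np.pi)
    return -np.sum(np.log(p))

def fit_mixture(errors):
    """Return (bias mu, precision kappa, guess rate g) for circular errors in radians."""
    res = minimize(neg_log_likelihood, x0=[0.0, 1.0, -2.0],
                   args=(errors,), method="Nelder-Mead")
    mu, log_kappa, logit_g = res.x
    return mu, np.exp(log_kappa), 1.0 / (1.0 + np.exp(-logit_g))

# Simulated subject: a 7.8-degree adaptation-like bias, moderate precision,
# and a 10% random-guess rate (values chosen only to exercise the fit).
rng = np.random.default_rng(0)
n = 300
guesses = rng.random(n) < 0.1
errors = np.where(guesses,
                  rng.uniform(-np.pi, np.pi, n),
                  vonmises.rvs(8.0, loc=np.deg2rad(7.8), size=n, random_state=rng))
mu_hat, kappa_hat, g_hat = fit_mixture(errors)
```

The recovered `mu_hat` estimates the bias, `kappa_hat` the precision, and `g_hat` the guessing rate; in the paper's design, bias and precision would be extracted per subject and then correlated across subjects.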


eLife (2017), Vol 6
Author(s): Kendrick N Kay, Jason D Yeatman

The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
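The template-match account above can be sketched numerically. This is an illustrative toy, not the authors' fitted model: the response of a category-selective region is taken as the cosine match between the stimulus's low-level features and a category template, multiplied by a task-dependent gain standing in for IPS engagement. The feature vectors, dimensionality, and gain values are invented for the example.

```python
import numpy as np

def vtc_response(features, template, ips_gain=1.0):
    """Bottom-up template match, scaled by a top-down (IPS-like) gain."""
    features = features / np.linalg.norm(features)
    template = template / np.linalg.norm(template)
    match = np.dot(features, template)   # cosine similarity in [-1, 1]
    return ips_gain * max(match, 0.0)    # rectified match, scaled top-down

rng = np.random.default_rng(1)
word_template = rng.random(64)                               # toy category template
word_stim = word_template + 0.1 * rng.standard_normal(64)    # stimulus near template
face_stim = rng.random(64)                                   # unrelated stimulus

passive = vtc_response(word_stim, word_template, ips_gain=1.0)
demanding = vtc_response(word_stim, word_template, ips_gain=1.5)
```

Under this sketch, a stimulus resembling the category template drives a larger response than an unrelated one, and a more demanding task (larger gain) scales the whole bottom-up representation upward, mirroring the paper's bottom-up/top-down decomposition.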


2016
Author(s): Kendrick N. Kay, Jason D. Yeatman

Summary: The ability to read a page of text or recognize a person’s face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide a unifying account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.


2013, Vol 109 (2), pp. 546-556
Author(s): Nobuya Sato, William K. Page, Charles J. Duffy

We presented optic flow simulating eight directions of self-movement in the ground plane, while monkeys performed delayed match-to-sample tasks, and we recorded dorsal medial superior temporal (MSTd) neuronal activity. Randomly selected sample headings yield smaller test responses to the neuron's preferred heading when it is near the sample's heading direction and larger test responses to the preferred heading when it is far from the sample's heading. Limiting test stimuli to matching or opposite headings suppresses responses to preferred stimuli in both test conditions, whereas focusing on each neuron's preferred vs. antipreferred stimuli enhances responses to the antipreferred stimulus. Match vs. opposite paradigms create bimodal heading profiles shaped by interactions with late delay-period activity. We conclude that task contingencies, determining the prior probabilities of specific stimuli, interact with the monkeys' perceptual strategy for optic flow analysis. These influences shape attentional and working memory effects on the heading direction selectivities and preferences of MSTd neurons.
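The heading tuning described above can be sketched with a toy model. This is a hypothetical illustration, not a fit to the recorded neurons: a von Mises tuning curve over the eight ground-plane headings, with a multiplicative gain standing in for task-contingent (prior-probability) modulation such as the suppression seen under match/opposite test conditions. The preferred heading, tuning width, and gain values are all assumptions.

```python
import numpy as np

def mstd_response(heading_deg, preferred_deg=90.0, kappa=2.0, gain=1.0):
    """Von Mises tuning over heading direction, with a task-contingent gain."""
    delta = np.deg2rad(np.asarray(heading_deg) - preferred_deg)
    tuning = np.exp(kappa * (np.cos(delta) - 1.0))  # peaks at preferred heading
    return gain * tuning

headings = np.arange(0, 360, 45)        # eight simulated self-movement directions
baseline = mstd_response(headings)      # responses under the standard task
# Toy analogue of the suppression seen when test stimuli are limited to
# matching or opposite headings: a reduced multiplicative gain.
suppressed = mstd_response(headings, gain=0.6)
```

In this sketch the unit responds maximally to its preferred heading (90°) and minimally to the antipreferred heading (270°), and the task-contingent gain scales the whole tuning profile, which is one simple way such contingencies could reshape heading selectivity.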

