The pharmacology of impulsive behaviour in rats VII: the effects of serotonergic agonists and antagonists on responding under a discrimination task using unreliable visual stimuli

1999 ◽  
Vol 146 (4) ◽  
pp. 422-431 ◽  
Author(s):  
J. L. Evenden


Author(s):  
Sofia Russo ◽  
Giulia Calignano ◽  
Marco Dispaldro ◽  
Eloisa Valenza

Efficiency in the early ability to switch attention toward competing visual stimuli (spatial attention) may be linked to the future ability to detect rapid acoustic changes in linguistic stimuli (temporal attention). To test this hypothesis, we compared individual performance in the same cohort of Italian-learning infants in two separate tasks: (i) an overlap task, measuring disengagement efficiency for visual stimuli at 4 months (Experiment 1), and (ii) an auditory discrimination task for trochaic syllabic sequences at 7 months (Experiment 2). Our results indicate that an infant’s efficiency in processing competing information in the visual field (i.e., visuospatial attention; Exp. 1) correlates with the subsequent ability to orient temporal attention toward relevant acoustic changes in the speech signal (i.e., temporal attention; Exp. 2). These results point to domain-general attentional processes (specific neither to language nor to a single sensory domain) playing a pivotal role in the development of early language skills in infancy.


1992 ◽  
Vol 67 (6) ◽  
pp. 1447-1463 ◽  
Author(s):  
K. Nakamura ◽  
A. Mikami ◽  
K. Kubota

1. The activity of single neurons was recorded extracellularly from the monkey amygdala while monkeys performed a visual discrimination task. The monkeys were trained to remember a visual stimulus during a delay period (0.5-3.0 s), to discriminate a new visual stimulus from the remembered stimulus, and to release a lever when the new stimulus was presented. Colored photographs (human faces, monkeys, foods, and nonfood objects) or computer-generated two-dimensional shapes (a yellow triangle, a red circle, etc.) were used as visual stimuli. 2. The activity of 160 task-related neurons was studied. Of these, 144 (90%) responded to visual stimuli, 13 (8%) showed firing during the delay period, and 9 (6%) responded to the reward. 3. Task-related neurons were categorized according to the way in which various stimuli activated the neurons. First, to evaluate the proportion of all tested stimuli that elicited changes in activity of a neuron, selectivity index 1 (SI1) was employed. Second, to evaluate the ability of a neuron to discriminate a stimulus from another stimulus, SI2 was employed. On the basis of the calculated values of SI1 and SI2, neurons were classified as selective and nonselective. Most visual neurons were categorized as selective (131/144), and a few were characterized as nonselective (13/144). Neurons active during the delay period were also categorized as selective visual and delay neurons (6/13) and as nonselective delay neurons (7/13). 4. Responses of selective visual neurons had various temporal and stimulus-selective properties. Latencies ranged widely from 60 to 300 ms. Response durations also ranged widely from 20 to 870 ms. When the natures of the various effective stimuli were studied for each neuron, one-fourth of the responses of these neurons were considered to reflect some categorical aspect of the stimuli, such as human, monkey, food, or nonfood object. 
Furthermore, the responses of some neurons apparently reflected a certain behavioral significance of the stimuli that was separate from the task, such as the face of a particular person, smiling human faces, etc. 5. Nonselective visual neurons responded to a visual stimulus, regardless of its nature. They also responded in the absence of a visual stimulus when the monkey anticipated the appearance of the next stimulus. 6. Selective visual and delay neurons fired in response to particular stimuli and throughout the subsequent delay periods. Nonselective delay neurons increased their discharge rates gradually during the delay period, and the discharge rate decreased after the next stimulus was presented. 7. Task-related neurons were identified in six histologically distinct nuclei of the amygdala. (Abstract truncated at 400 words.)
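The abstract names the two selectivity indices but not their formulas. One plausible reading (SI1 as the fraction of tested stimuli that significantly changed a neuron's firing, SI2 as a pairwise discriminability measure) can be sketched as follows; the function names, the threshold, and the normalized-difference form are assumptions for illustration, not the authors' definitions:

```python
def si1(responses, baseline, threshold=2.0):
    """Fraction of tested stimuli that shifted the neuron's firing rate
    from baseline by more than `threshold` (spikes/s); assumed criterion."""
    return sum(abs(r - baseline) > threshold for r in responses) / len(responses)

def si2(r_a, r_b):
    """Pairwise discrimination index between responses to two stimuli:
    a simple normalized difference (an assumed form, not the paper's)."""
    denom = abs(r_a) + abs(r_b)
    return abs(r_a - r_b) / denom if denom else 0.0

# A 'selective' neuron: strong response to one of five tested stimuli
rates = [25.0, 5.2, 4.8, 5.1, 4.9]
print(si1(rates, baseline=5.0))  # 0.2 (one of five stimuli effective)
print(si2(25.0, 5.2))            # ~0.66 (well-discriminated pair)
```

Under such a scheme, a neuron with high SI1 across many stimuli would land in the "nonselective" class, while one responding to only a few stimuli with high pairwise SI2 values would be "selective".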


Perception ◽  
1995 ◽  
Vol 24 (4) ◽  
pp. 351-362 ◽  
Author(s):  
William R Uttal ◽  
Todd Baruch ◽  
Linda Allen

Two experiments, in which information from two different kinds of degraded (low-pass filtered and regionally averaged or blocked) visual stimuli (aircraft silhouettes) was combined, are reported. In the first experiment, the degraded images were perceptually combined by being separately presented to each eye in a dichoptic viewing situation. Both stimuli in both presentations were masked by identical random visual interference. When the two stimuli were visually fused, performance in a discrimination task was enhanced over that in control situations in which only one of the two stimuli was presented. In the second experiment the two degraded stimuli were physically superimposed prior to binocular presentation, with a similar result. The results of this hybrid (masking/binocular summation) experiment suggest that true advantageous information pooling occurs when these two types of degraded stimuli are combined either physically or dichoptically.


1965 ◽  
Vol 20 (3, Suppl.) ◽  
pp. 1021-1026 ◽  
Author(s):  
R. L. Brown ◽  
W. D. Galloway ◽  
R. A. San Giuliano

12 Ss were asked to interpret a series of coded electrocutaneous pulses while engaged in a visual discrimination task of varying complexity. All Ss performed both tasks in each of 4 body positions (standing, sitting, kneeling, and prone). Ss were asked to indicate on each trial which 1 of 4 electrode locations was stimulated and whether the duration of stimulation was .6 or 1.6 sec. A constant intensity of 1.5 v at 60 cps was employed. Three levels of complexity (no visual stimuli, 4 × 4 metric figures, and 8 × 8 metric figures) were employed in the visual task. In the cutaneous task, analysis of information transmitted (It), location errors, duration errors, and total errors indicated that time-sharing demand significantly impaired performance, whereas variation in body position had a negligible effect.
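The information-transmitted measure (It) used in analyses of this kind is conventionally the mutual information of the stimulus-response confusion matrix. A minimal sketch of that standard computation; the function and the 4-location example matrix are illustrative, not the paper's data:

```python
import math

def information_transmitted(confusion):
    """Mutual information (bits) of a stimulus-response count matrix,
    rows = presented stimuli, columns = responses."""
    total = sum(sum(row) for row in confusion)
    # Marginal probabilities for stimuli (rows) and responses (columns)
    p_s = [sum(row) / total for row in confusion]
    p_r = [sum(confusion[i][j] for i in range(len(confusion))) / total
           for j in range(len(confusion[0]))]
    it = 0.0
    for i, row in enumerate(confusion):
        for j, n in enumerate(row):
            if n > 0:
                p_sr = n / total
                it += p_sr * math.log2(p_sr / (p_s[i] * p_r[j]))
    return it

# Perfect identification of 4 equally likely electrode locations
perfect = [[10, 0, 0, 0],
           [0, 10, 0, 0],
           [0, 0, 10, 0],
           [0, 0, 0, 10]]
print(information_transmitted(perfect))  # 2.0 bits (log2 of 4)
```

With 4 locations and 2 durations, the maximum transmissible information per trial in this paradigm would be 3 bits; errors in either dimension reduce It below that ceiling.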


2011 ◽  
Vol 23 (3) ◽  
pp. 746-756 ◽  
Author(s):  
Vadim Axelrod ◽  
Galit Yovel

The ventral visual cortex has a modular organization in which discrete and well-defined regions show a much stronger response to certain object categories (e.g., faces, bodies) than to other categories. The majority of previous studies have examined the response of these category-selective regions to isolated images of preferred or nonpreferred categories. Thus, little is known about the way these category-selective regions represent more complex visual stimuli that include both preferred and nonpreferred stimuli. Here we examined whether glasses (nonpreferred) modify the representation of simultaneously presented faces (preferred) in the fusiform face area. We used an event-related fMR-adaptation paradigm in which faces were presented with glasses either on or above the face while subjects performed a face or a glasses discrimination task. Our findings show that the sensitivity of the fusiform face area to glasses was greater when the glasses were presented on the face than above it, and greater during a face discrimination task than during a glasses discrimination task. These findings suggest that nonpreferred stimuli may significantly modify the representation of preferred stimuli, even when they are task irrelevant. Future studies will determine whether this interaction is specific to faces or may be found for other object categories in category-selective areas.


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory spatial discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
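The reliability weighting that the ideal Bayesian model prescribes, and that this work tests against, is the standard inverse-variance cue-combination rule: each cue is weighted in proportion to its reliability (1/variance), and the fused estimate is never less reliable than either cue alone. A generic textbook sketch, not the authors' analysis code:

```python
def combine_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of an auditory and a
    visual location estimate, as in standard Bayesian cue combination."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                 # visual weight
    mu = w_a * mu_a + w_v * mu_v                  # fused location estimate
    var = 1 / (1 / var_a + 1 / var_v)             # fused variance <= both cues'
    return mu, var

# An unreliable auditory cue (variance 4) is pulled toward
# a reliable visual cue (variance 1)
mu, var = combine_cues(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
print(mu, var)  # 2.0 0.8
```

Under this rule a truly task-uninformative visual cue should receive zero weight for the discrimination decision, which is why the observed visual influence, largest when auditory reliability is poor, departs from the ideal model.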

