Individual Differences in Infant Visual Attention: Four-Month-Olds' Discrimination and Generalization of Global and Local Stimulus Properties

1993 ◽  
Vol 64 (4) ◽  
pp. 1191 ◽  
Author(s):  
Laura J. Freeseman ◽  
John Colombo ◽  
Jeffrey T. Coldren

1995 ◽
Vol 10 (2) ◽  
pp. 271-285 ◽  
Author(s):  
John Colombo ◽  
Laura J. Freeseman ◽  
Jeffrey T. Coldren ◽  
Janet E. Frick

2019 ◽  
Author(s):  
Paola Perone ◽  
David Vaughn Becker ◽  
Joshua M. Tybur

Multiple studies report that disgust-eliciting stimuli are perceived as salient and subsequently capture selective attention. In the current study, we aimed to better understand the nature of temporal attentional biases toward disgust-eliciting stimuli and to investigate the extent to which these biases are sensitive to contextual and trait-level pathogen avoidance motives. Participants (N = 116) performed an Emotional Attentional Blink (EAB) task in which task-irrelevant disgust-eliciting, fear-eliciting, or neutral images preceded a target by 200, 500, or 800 milliseconds (i.e., lags two, five, and eight, respectively). They did so twice, once while not exposed to an odor and once while exposed to either an odor that elicited disgust or an odor that did not, and they also completed a measure of disgust sensitivity. Results indicate that disgust-eliciting visual stimuli produced a greater attentional blink than neutral visual stimuli at lag two, and a greater attentional blink than fear-eliciting visual stimuli at both lag two and lag five. Neither the odor manipulation nor individual differences in disgust sensitivity moderated this effect. We propose that visual attention is engaged for a longer period following disgust-eliciting stimuli because covert processes automatically initiate the evaluation of pathogen threats. The fact that state and trait pathogen avoidance did not influence this temporal attentional bias suggests that early attentional processing of pathogen cues is initiated independently of the context in which such cues are perceived.
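For readers unfamiliar with the paradigm, the sketch below illustrates the timing logic of a single EAB trial: in a rapid serial visual presentation (RSVP) stream running at roughly 100 ms per item, the task-irrelevant emotional image precedes the target by two, five, or eight positions (200, 500, or 800 ms). The stream length, distractor position, and per-item duration here are illustrative assumptions, not parameters reported in the study.

```python
def eab_trial(lag, distractor_pos=4, n_items=16):
    """Build one illustrative RSVP stream; the target follows the
    distractor by `lag` stream positions. All values are assumptions
    for illustration, not the study's actual design parameters."""
    stream = ["filler"] * n_items
    stream[distractor_pos] = "distractor"  # disgust-, fear-eliciting, or neutral image
    stream[distractor_pos + lag] = "target"
    return stream

SOA_MS = 100  # assumed per-item duration, so lags 2/5/8 map to 200/500/800 ms

for lag in (2, 5, 8):
    print(f"lag {lag}: target {lag * SOA_MS} ms after distractor:", eab_trial(lag))
```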


1976 ◽  
Vol 43 (2) ◽  
pp. 555-561 ◽  
Author(s):  
Richard A. Wyrick ◽  
Vincent J. Tempone ◽  
Jack Capehart

The relationship between attention and incidental learning during discrimination training was studied in 30 children aged 10 to 11. A polymetric eye-movement recorder measured direct visual attention. Consistent with previous findings, recall of incidental stimuli was greatest during the initial and terminal stages of intentional learning. Contrary to previous explanations, however, visual attention to incidental stimuli was not related to training. While individual differences in attention to incidental stimuli predicted recall, attention to incidental stimuli was not related to level of training. The results suggest that changes in higher-order information processing, rather than direct visual attention, were responsible for the curvilinear learning of incidental stimuli during intentional training.


2003 ◽  
Vol 10 (4) ◽  
pp. 884-889 ◽  
Author(s):  
M. Kathryn Bleckley ◽  
Francis T. Durso ◽  
Jerry M. Crutchfield ◽  
Randall W. Engle ◽  
Maya M. Khanna

2019 ◽  
Vol 180 ◽  
pp. 104-112 ◽  
Author(s):  
Sanne B. Geeraerts ◽  
Roy S. Hessels ◽  
Stefan Van der Stigchel ◽  
Jorg Huijding ◽  
Joyce J. Endendijk ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5559 ◽ 
Author(s):  
Minji Seo ◽  
Myungho Kim

Speech emotion recognition (SER) classifies emotions using low-level features or a spectrogram of an utterance. When SER methods are trained and tested on different datasets, their performance drops. Cross-corpus SER research addresses this by identifying speech emotion across different corpora and languages, and recent work has focused on improving generalization. To improve cross-corpus SER performance, we pretrained our visual attention convolutional neural network (VACNN), a 2D CNN base model with channel- and spatial-wise visual attention modules, on the log-mel spectrograms of the source dataset. When training on the target dataset, we extracted a bag-of-visual-words (BOVW) feature vector to assist the fine-tuned model. Because visual words represent local features in the image, the BOVW helps the VACNN learn both global and local features in the log-mel spectrogram by constructing a frequency histogram of visual words. The proposed method achieves overall accuracies of 83.33%, 86.92%, and 75.00% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EmoDB), and the Surrey Audio-Visual Expressed Emotion (SAVEE) database, respectively. Experimental results on RAVDESS, EmoDB, and SAVEE demonstrate improvements of 7.73%, 15.12%, and 2.34% over existing state-of-the-art cross-corpus SER approaches.
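The abstract names channel- and spatial-wise visual attention modules but does not give their internals. Below is a minimal PyTorch sketch of one plausible attention block in the CBAM style, applied to a log-mel spectrogram treated as a 2D image; the layer sizes, reduction ratio, and 7x7 kernel are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-wise attention: pool over freq-time, reweight channels."""
    def __init__(self, channels, reduction=8):  # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))         # global max pooling
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Spatial attention: pool over channels, reweight freq-time locations."""
    def __init__(self, kernel_size=7):  # kernel size is an assumption
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)         # (b, 1, freq, time)
        mx = x.amax(dim=1, keepdim=True)          # (b, 1, freq, time)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class AttentionBlock(nn.Module):
    """One conv block followed by channel- then spatial-wise attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = self.conv(x)
        return self.sa(self.ca(x))

# Example: a batch of four 1-channel log-mel spectrograms (64 mel bins, 128 frames)
spec = torch.randn(4, 1, 64, 128)
out = AttentionBlock(1, 32)(spec)  # -> (4, 32, 64, 128)
```

In this arrangement, channel attention decides which learned feature maps matter for the utterance as a whole, while spatial attention highlights the informative freq-time regions of the spectrogram, which matches the abstract's motivation of combining global and local cues.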

