The effect of facial expression on contrast sensitivity: a behavioural investigation and extension of Hedger, Garner, & Adams (2015)

2018
Author(s):
Abigail L. M. Webb
Paul B. Hibbard

Abstract
It has been argued that rapid visual processing of fearful facial expressions is driven by the fact that effective contrast is higher in these faces than in other expressions once the contrast sensitivity function is taken into account (Hedger, Garner, & Adams, 2015). This proposal has been upheld by data from image analyses, but is yet to be tested at the behavioural level. The present study conducts a traditional contrast sensitivity task for face images showing various facial expressions. Findings show that visual contrast thresholds do not differ across facial expressions. We re-conduct the analysis of faces’ effective contrast, using the procedure developed by Hedger, Garner, & Adams (2015), and show that the higher effective contrast of fearful expressions relies on face images first being normalised for RMS contrast. When images are not normalised for RMS contrast, the effective contrast of fear expressions is no different from, and sometimes even lower than, that of other expressions. These findings are discussed in relation to the implications of contrast normalisation for the salience of facial expressions in behavioural and neurophysiological experiments, and the extent to which natural physical differences between facial stimuli are masked during stimulus standardisation and normalisation.
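The RMS-contrast normalisation at issue can be sketched in a few lines of NumPy. This is a generic illustration of the standard operation (rescaling pixel deviations about the mean to a fixed standard deviation), not the authors' exact pipeline; the function names and the target value of 0.2 are arbitrary choices for the example.

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of the pixel luminance values."""
    return img.std()

def normalise_rms(img, target_rms=0.2):
    """Rescale an image (values nominally in [0, 1]) so its RMS contrast
    equals target_rms, preserving mean luminance. Note: rescaling can push
    some pixel values outside [0, 1]; real pipelines must handle clipping."""
    mean = img.mean()
    current = rms_contrast(img)
    if current == 0:
        return img.copy()
    return mean + (img - mean) * (target_rms / current)

# Example: a synthetic image with arbitrary initial contrast
rng = np.random.default_rng(0)
face = rng.uniform(0.3, 0.7, size=(128, 128))
norm = normalise_rms(face, target_rms=0.2)
print(round(norm.std(), 3))  # 0.2
```

Once every expression image is forced to the same RMS contrast in this way, any natural contrast differences between expressions are removed, which is the point of contention in the abstract above.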

Author(s):
Michael A. Nelson
Ronald L. Halberg

Threshold contrasts for red, green, and achromatic sinusoidal gratings were measured. Spatial frequencies ranged from 0.25 to 15 cycles/deg. No significant differences in contrast thresholds were found among the three grating types. From this finding it was concluded that, under conditions of normal viewing, no significant differences should be expected in the acquisition of spatial information from monochromatic or achromatic displays of equal resolution.


2012
Vol 25 (0)
pp. 177
Author(s):
Vivian Ciaramitaro
Dan Jentzen

We examined the influence of covert, endogenous, crossmodal attention on auditory contrast sensitivity in a two-interval forced-choice dual-task paradigm. Attending to a visual stimulus has been found to alter the visual contrast response function via a mechanism of contrast gain for sustained visual attention, or a combination of response gain and contrast gain for transient visual attention (Ling and Carrasco, 2006). We examined if and how auditory contrast sensitivity varied as a function of attentional load (the difficulty of a competing visual task), and how such effects compared to the influences of attention on visual processing. In our paradigm, subjects listened to two sequential white-noise stimuli, one of which was amplitude modulated, and reported which interval contained the amplitude-modulated auditory stimulus. At the same time, a sequence of five letters was presented in an RSVP stream at central fixation during each interval, and subjects judged which interval contained the visual target. For a given block of trials, subjects judged which interval contained white letters (easy visual task) or, in a separate block of trials, which interval had more target letters ‘A’ (difficult visual task). We found that auditory thresholds were lower for the easy than for the difficult visual task, and that the shift in the auditory contrast response function was reminiscent of a contrast-gain mechanism for visual contrast. Importantly, we found that the effects of crossmodal attention on the auditory contrast response function diminished with practice.
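An amplitude-modulated white-noise stimulus of the kind described can be generated as below. This is a minimal sketch under assumed parameters (duration, sample rate, and modulation frequency are illustrative, not taken from the study); the modulation depth plays the role of "auditory contrast" that a threshold procedure would vary.

```python
import numpy as np

def am_white_noise(duration_s=0.5, fs=44100, mod_freq=8.0,
                   mod_depth=0.5, seed=0):
    """White-noise carrier with a sinusoidal amplitude envelope.
    mod_depth is the modulation parameter a staircase or 2IFC
    procedure would vary to estimate the detection threshold."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_freq * t)
    return carrier * envelope

sig = am_white_noise()
print(sig.shape)  # (22050,)
```

Setting `mod_depth=0` yields the unmodulated comparison interval; the observer's task is to report which of two such intervals carries the modulation.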


1987
Vol 64 (2)
pp. 587-594
Author(s):
Nancy Johnson
J. Timothy Petersik

Visual contrast thresholds to both stationary and moving gratings of three spatial frequencies (2, 4, and 16 cyc/deg) were measured over a 32-day period in two women displaying normal menstrual cycles and in two noncycling control subjects. The time-series data of each subject in each condition were Fourier analyzed and the resulting amplitude spectra showed differences between the two sets of subjects. The spectra of the control subjects were relatively flat, whereas those of the experimental subjects showed a number of peaks at several harmonics (periods). Conservative significance tests suggested that the peaks in the spectra of the cycling women were larger than might be expected by chance. The data also suggested that changes in sensitivity were greatest for 4-cyc/deg gratings, those nearest the peak of the normal contrast sensitivity function.
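The Fourier analysis described, taking the amplitude spectrum of a daily threshold time series and looking for peaks at cyclic periods, can be sketched as follows. The 16-day test signal is synthetic and purely illustrative, not the study's data.

```python
import numpy as np

def amplitude_spectrum(thresholds):
    """Amplitude spectrum of a daily threshold time series.
    Peaks at nonzero frequencies indicate periodic variation
    in contrast sensitivity across days."""
    x = np.asarray(thresholds, dtype=float)
    x = x - x.mean()                        # remove the mean (DC) level
    amps = np.abs(np.fft.rfft(x)) / x.size  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0)  # cycles per day (1 sample/day)
    return freqs, amps

# Synthetic 32-day series with a 16-day cycle, as a sanity check
t = np.arange(32)
freqs, amps = amplitude_spectrum(np.sin(2 * np.pi * t / 16))
print(freqs[np.argmax(amps)])  # 0.0625 cycles/day, i.e. a 16-day period
```

For the study's 32-day records, a roughly 28-to-32-day hormonal cycle would load onto the lowest nonzero frequency bins, which is where the cycling subjects' spectra showed their peaks.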


2020
Author(s):
Fernando Ferreira-Santos
Mariana R. Pereira
Tiago O. Paiva
Pedro R. Almeida
Eva C. Martins
...

The behavioral and electrophysiological study of the emotional intensity of facial expressions of emotions has relied on image processing techniques termed ‘morphing’ to generate realistic facial stimuli in which emotional intensity can be manipulated. This is achieved by blending neutral and emotional facial displays and treating the percent of morphing between the two stimuli as an objective measure of emotional intensity. Here we argue that the percentage of morphing between stimuli does not provide an objective measure of emotional intensity and present supporting evidence from affective ratings and neural (event-related potential) responses. We show that 50% morphs created from high or moderate arousal stimuli differ in subjective and neural responses in a sensible way: 50% morphs are perceived as having approximately half of the emotional intensity of the original stimuli, but if the original stimuli differed in emotional intensity to begin with, then so will the morphs. We suggest a re-examination of previous studies that used percentage of morphing as a measure of emotional intensity and highlight the value of more careful experimental control of emotional stimuli and inclusion of proper manipulation checks.
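The core of the morphing critique can be shown with a pixel-wise linear blend. Real morphing software also warps facial geometry, so this NumPy sketch (with made-up toy images) captures only the blending step, but it is enough to show why equal morph percentages need not mean equal emotional intensity: the blend is relative to whatever intensity the emotional endpoint happens to have.

```python
import numpy as np

def morph(neutral, emotional, percent):
    """Linear blend of two images: percent=0 -> neutral,
    percent=100 -> emotional endpoint."""
    w = percent / 100.0
    return (1 - w) * neutral + w * emotional

# Toy stand-ins: treat mean pixel value as a crude intensity proxy
neutral = np.zeros((2, 2))
strong  = np.full((2, 2), 1.0)   # hypothetical high-arousal endpoint
mild    = np.full((2, 2), 0.5)   # hypothetical moderate-arousal endpoint

# Same "50% morph", different resulting intensity:
print(morph(neutral, strong, 50).mean())  # 0.5
print(morph(neutral, mild, 50).mean())    # 0.25
```

Two 50% morphs are thus only comparable if their emotional endpoints were equated first, which is the manipulation check the abstract argues for.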


2021
pp. 174702182199299
Author(s):
Mohamad El Haj
Emin Altintas
Ahmed A Moustafa
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether participants’ facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


Sensors
2021
Vol 21 (6)
pp. 2003
Author(s):
Xiaoliang Zhu
Shihao Ye
Liang Zhao
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are input to a fully connected layer to classify and recognise facial expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves on the AFEW dataset by more than 2%, showing its effectiveness for facial expression recognition in natural environments.
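The attention-based fusion step in such a cascade can be illustrated at the shape level with plain NumPy: per-frame spatial feature vectors (stand-ins for the residual network's outputs) are scored, softmax-normalised, and combined as a weighted sum. This is a generic single-head attention sketch with random toy data, not the paper's hybrid attention module; the dimensions and the scoring vector `w_att` are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(frame_feats, w_att):
    """Score each frame's feature vector, normalise the scores with a
    softmax, and return the attention-weighted sum over frames."""
    scores = frame_feats @ w_att   # (T,) one relevance score per frame
    alphas = softmax(scores)       # (T,) attention weights, sum to 1
    return alphas @ frame_feats    # (D,) fused feature vector

# Toy stand-in for per-frame spatial features: T=16 frames, D=64 dims
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 64))
fused = attention_fuse(feats, rng.standard_normal(64))
print(fused.shape)  # (64,)
```

In the full cascade the fused (or reweighted) features would then feed the GRU for temporal modelling before classification.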


2016
Vol 57 (13)
pp. 5696
Author(s):
Wendy Ming
Dimitrios J. Palidis
Miriam Spering
Martin J. McKeown

2005
Vol 100 (1)
pp. 129-134
Author(s):
Michela Balconi

The present research compared the semantic information processing of linguistic stimuli with the semantic elaboration of nonlinguistic facial stimuli. To explore brain potentials (ERPs, event-related potentials) related to decoding facial expressions and the effect of the semantic valence of the stimulus, we analyzed data for 20 normal subjects (M age = 23.6 yr., SD = 0.2). Faces with three basic emotional expressions (fear, happiness, and sadness, from the 1976 Ekman and Friesen database), three semantically anomalous expressions (with respect to their emotional content), and neutral stimuli (faces without emotional content) were presented in random order. Differences in peak amplitude of the ERP were observed later for anomalous expressions compared with congruous expressions. Specifically, the emotionally anomalous faces elicited a higher negative peak at about 360 msec., distributed mainly over posterior sites. The observed electrophysiological activity may represent specific cognitive processing underlying the comprehension of facial expressions in the detection of semantic anomaly. The evidence is in favour of the comparability of this negative deflection with the N400 ERP effect elicited by linguistic anomalies.


1986
Vol 62 (2)
pp. 419-423
Author(s):
Gilles Kirouac
Martin Bouchard
Andrée St-Pierre

The purpose of this study was to measure the capacity of human subjects to match facial expressions of emotions with the behavioral categories representing the motivational states they are supposed to illustrate. One hundred university students were shown facial stimuli that they had to classify using ethological behavioral categories. The results showed that accuracy of judgment was overall lower than what is usually found when fundamental emotional categories are used. The data also indicated that the relation between emotional expressions and behavioral tendencies is more complex than expected.

