An Event-related Potential Comparison of Facial Expression Processing between Cartoon and Real Faces

2018
Author(s):
Jiayin Zhao
Yifang Wang
Licong An

Abstract. Faces play important roles in the social lives of humans. In addition to real faces, people also encounter numerous cartoon faces in daily life, and these cartoon faces convey basic emotional states through facial expressions. Using behavioral measures and event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. The study used face type (real vs. cartoon) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion-related ERP components, namely the N170, the vertex positive potential (VPP), and the late positive potential (LPP), served as dependent variables. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes and a shorter N170 latency than real faces; that real faces elicited larger LPP amplitudes than cartoon faces; and that angry faces elicited larger LPP amplitudes than happy faces. The results also showed a significant difference across the brain regions associated with face processing, reflected in a right-hemisphere advantage. Behaviorally, reaction times for happy faces were shorter than those for angry faces; females recognized facial expressions more accurately than males; and males recognized angry faces more accurately than happy faces. These results demonstrate differences between cartoon and real faces in both facial expression recognition and neural processing in adults: cartoon faces were processed with greater intensity and speed during the early processing stage, whereas more attentional resources were allocated to real faces during the late processing stage.
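The N170, VPP, and LPP measures reported above are peak amplitudes and latencies read off averaged waveforms. As a rough illustration only (this is not the authors' pipeline; the function name, search window, and synthetic waveform below are invented), such a peak measure can be extracted like this:

```python
import numpy as np

def n170_peak(erp, times, window=(0.13, 0.20)):
    """Find the N170 peak (most negative deflection) within a search window.

    erp   : 1-D array of averaged ERP amplitudes (microvolts)
    times : 1-D array of time points (seconds), same length as erp
    Returns (peak_amplitude, peak_latency).
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.flatnonzero(mask)
    peak = idx[np.argmin(erp[idx])]  # N170 is a negative-going component
    return erp[peak], times[peak]

# Synthetic waveform: a -5 microvolt Gaussian dip centered at ~170 ms
times = np.linspace(0.0, 0.5, 501)  # 1 ms resolution
erp = -5.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.01 ** 2))
amp, lat = n170_peak(erp, times)
```

Comparing such per-condition peak measures (cartoon vs. real, for each participant) is what feeds the amplitude and latency statistics reported in the abstract.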

2020
Vol 123 (3)
pp. 876-884
Author(s):
Gülsüm Akdeniz
Sadiye Gumusyayla
Gonul Vural
Hesna Bektas
Orhan Deniz

Migraine is a multifactorial brain disorder characterized by recurrent disabling headache attacks. One possible mechanism in the pathogenesis of migraine is a decrease in inhibitory cortical stimuli in the primary visual cortex attributable to cortical hyperexcitability. The aim of this study was to investigate the neural correlates underlying face and face-pareidolia processing in terms of the event-related potential (ERP) components N170, vertex positive potential (VPP), and N250 in patients with migraine. In total, 40 patients with migraine without aura, 23 patients with migraine with aura, and 30 healthy controls were enrolled. We recorded ERPs during the presentation of face and face-pareidolia images and examined N170, VPP, and N250 mean amplitudes and latencies. N170 amplitude was significantly greater in patients with migraine with aura than in healthy controls, and VPP amplitude was significantly greater in patients with migraine without aura than in healthy controls. Face stimuli evoked significantly earlier VPP responses (168.7 ms, SE = 1.46) than pareidolia stimuli (173.4 ms, SE = 1.41) in patients with migraine with aura. We did not find a significant difference between N250 amplitudes for face and face-pareidolia processing. A significant difference was observed between the groups for pareidolia in both N170 [F(2,86) = 14.75, P < 0.001] and VPP [F(2,86) = 16.43, P < 0.001] amplitudes. Early ERPs are a valuable tool for studying face processing in patients with migraine and for demonstrating visual cortical hyperexcitability. NEW & NOTEWORTHY Event-related potentials (ERPs) are important for understanding face and face-pareidolia processing in patients with migraine. The N170, vertex positive potential (VPP), and N250 components were investigated. The N170 emerged as a potential marker of cortical excitability for face and face-pareidolia processing in patients with migraine.
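The group comparisons above are reported as one-way ANOVAs in F(df_between, df_within) notation. A minimal sketch of how that F statistic is computed from its definition (the numbers below are synthetic, not the study's data):

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic across k groups, from the textbook definition."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, (df_between, df_within)

# Three tiny synthetic amplitude samples, one per group
F, df = one_way_f([np.array([1.0, 2, 3]),
                   np.array([2.0, 3, 4]),
                   np.array([5.0, 6, 7])])
```

With three groups, df_between is always 2, as in the reported F(2,86) values; df_within depends on the number of participants retained in the analysis.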


2021
Vol 12
Author(s):
Yuwei Yang
Shunshun Du
Hui He
Chengming Wang
Xueke Shan
...

Although risk decision-making plays an important role in leadership practice, the behavioral differences between people with differing levels of leadership, as well as the underlying neurocognitive mechanisms, remain unclear. In this study, the Ultimatum Game (UG) was combined with electroencephalography (EEG) to investigate the temporal course of the cognitive and emotional processes involved in economic decision-making among college students with high and low leadership levels. Behaviorally, for the low-leadership group, acceptance rates in an economic transaction were significantly higher when the partner was a computer than when the partner was a real person under unfair and sub-unfair conditions, whereas the high-leadership group showed no significant difference in acceptance rates. Event-related potential (ERP) analysis further indicated a larger P3 amplitude in the low-leadership group than in the high-leadership group. We conclude that the difference between the high- and low-leadership groups was at least partly due to their different emotion-management abilities.


2006
Vol 18 (8)
pp. 1343-1358
Author(s):
Viola Macchi Cassia
Dana Kuefner
Alissa Westerlund
Charles A. Nelson

This study examined the sensitivity of early face-sensitive event-related potential (ERP) components to the disruption of two structural properties embedded in faces, namely, “up-down featural arrangement” and “vertical symmetry.” Behavioral measures and ERPs were recorded as adults made an orientation judgment for canonical faces and for distorted faces in which either or both of these properties had been manipulated. The P1, the N170, and the vertex positive potential (VPP) exhibited a similar gradient of sensitivity to the two properties, in that all showed a linear increase in amplitude or latency as the properties were selectively disrupted in the order of (1) up-down featural arrangement, (2) vertical symmetry, and (3) both up-down featural arrangement and vertical symmetry. Exceptions to this pattern were the amplitudes of the N170 and VPP, which were largest for the stimulus in which only vertical symmetry was disrupted. Interestingly, the enhanced N170 and VPP amplitudes are consistent with a drop in behavioral performance on the orientation judgment for this stimulus.


2019
Vol 20 (1)
Author(s):
Olalekan Agbolade
Azree Nazri
Razali Yaakob
Abdul Azim Ghani
Yoke Kqueen Cheah

Abstract. Background: Facial expression in H. sapiens plays a remarkable role in social communication. Humans identify these expressions relatively easily and accurately, but achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current difficulties of 3D facial data acquisition, such as the lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans using Multi-points Warping for 3D facial landmarks, building a template mesh as a reference object. This template mesh is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between the template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. Principal Component Analysis (PCA) is used for feature selection, and classification is performed with Linear Discriminant Analysis (LDA). Results: The localization error is validated on the two datasets with superior performance over state-of-the-art methods, and variation in expression is visualized using the Principal Components (PCs). The deformations show the various expression regions of the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively. Conclusion: The results demonstrate that the method is robust and in agreement with state-of-the-art results.
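The PCA-plus-LDA stage of such a pipeline can be sketched generically. This is a deliberately simplified two-class Fisher discriminant on synthetic "landmark" feature vectors, not the authors' implementation (which handles multiple expression classes and real 3D semi-landmark data):

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD: returns the data mean and component matrix (rows = PCs)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def lda_fisher(X1, X2):
    """Two-class Fisher discriminant: weight vector and midpoint threshold."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m2)
    thr = 0.5 * (w @ m1 + w @ m2)
    return w, thr

# Synthetic two-class demo: well-separated Gaussian "landmark" vectors
rng = np.random.default_rng(0)
X1 = rng.normal(3.0, 1.0, (50, 6))   # stand-in for one expression class
X2 = rng.normal(-3.0, 1.0, (50, 6))  # stand-in for another
mu, comps = pca_fit(np.vstack([X1, X2]), 2)
Z1, Z2 = (X1 - mu) @ comps.T, (X2 - mu) @ comps.T
w, thr = lda_fisher(Z1, Z2)
acc = 0.5 * ((Z1 @ w > thr).mean() + (Z2 @ w < thr).mean())
```

Projecting onto a few PCs before LDA, as here, is the standard way to keep the within-class scatter matrix well conditioned when feature dimensionality exceeds sample size.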


2007
Vol 21 (2)
pp. 100-108
Author(s):
Michela Balconi
Claudio Lucchiari

Abstract. In this study we examined whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious processing of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded while subjects viewed facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. Emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not latency: a major result of this research was that unconscious stimulation produced a more delayed peak than conscious stimulation did. In addition, a more posterior distribution of the ERP was found for N2 as a function of the emotional content of the stimulus. By contrast, cortical lateralization (right/left) was not correlated with conscious/unconscious stimulation. The functional significance of these results is discussed in terms of subliminal effects and emotion recognition.


2016
Vol 115 (4)
pp. 2214-2223
Author(s):
Anna L. Hudson
Xavier Navarro-Sune
Jacques Martinerie
Pierre Pouget
Mathieu Raux
...

The presence of a respiratory-related cortical activity during tidal breathing is abnormal and a hallmark of respiratory difficulties, but its detection requires superior discrimination and temporal resolution. The aim of this study was to validate a computational method using EEG covariance (or connectivity) matrices to detect a change in brain activity related to breathing. In 17 healthy subjects, EEG was recorded during resting unloaded breathing (RB), voluntary sniffs, and breathing against an inspiratory threshold load (ITL). EEG were analyzed by the specially developed covariance-based classifier, event-related potentials, and time-frequency (T-F) distributions. Nine subjects repeated the protocol. The classifier could accurately detect ITL and sniffs compared with the reference period of RB. For ITL, EEG-based detection was superior to airflow-based detection (P < 0.05). A coincident improvement in EEG-airflow correlation in ITL compared with RB (P < 0.05) confirmed that EEG detection relates to breathing. Premotor potential incidence was significantly higher before inspiration in sniffs and ITL compared with RB (P < 0.05), but T-F distributions revealed a significant difference between sniffs and RB only (P < 0.05). Intraclass correlation values ranged from poor (−0.2) to excellent (1.0). Thus, as for conventional event-related potential analysis, the covariance-based classifier can accurately predict a change in brain state related to a change in respiratory state, and given its capacity for near “real-time” detection, it is suitable for monitoring the respiratory state of respiratory and critically ill patients in the development of a brain-ventilator interface.
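The idea of a covariance-based brain-state classifier can be illustrated with a deliberately simplified stand-in: compute a spatial covariance matrix per EEG trial, then assign each new trial to the class whose mean covariance is closest. The study's actual classifier is more elaborate than this Frobenius-distance sketch, and the channel counts, class labels, and synthetic signals below are invented for illustration:

```python
import numpy as np

def trial_cov(trial):
    """Spatial covariance of one EEG trial (channels x samples)."""
    X = trial - trial.mean(axis=1, keepdims=True)
    return X @ X.T / (X.shape[1] - 1)

def fit_means(trials, labels):
    """Mean covariance matrix per class (plain Euclidean mean, a simplification)."""
    covs = np.array([trial_cov(t) for t in trials])
    labels = np.array(labels)
    return {c: covs[labels == c].mean(axis=0) for c in set(labels.tolist())}

def predict(trial, class_means):
    """Assign the class whose mean covariance is nearest in Frobenius norm."""
    C = trial_cov(trial)
    return min(class_means, key=lambda c: np.linalg.norm(C - class_means[c]))

# Synthetic demo: "loaded breathing" trials have higher channel variance
rng = np.random.default_rng(1)
train = ([rng.normal(0, 1, (8, 200)) for _ in range(10)]    # resting breathing
         + [rng.normal(0, 3, (8, 200)) for _ in range(10)]) # inspiratory load
labels = ["RB"] * 10 + ["ITL"] * 10
means = fit_means(train, labels)
pred = predict(rng.normal(0, 3, (8, 200)), means)
```

Because only one covariance computation and a few matrix norms are needed per trial, this family of classifiers lends itself to the near real-time operation the abstract highlights.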


2019
Vol 8 (4)
pp. 9782-9787

Facial expression recognition is an important task for machines: recognizing the different expressive changes in an individual. Emotions have a strong relationship with our behavior; human emotions are discrete reactions to internal or external events that carry meaning. Automatic emotion detection is the process of understanding an individual's expressive state in order to identify his or her intentions from facial expression, which is also a noteworthy part of non-verbal communication. In this paper we propose a framework that combines discriminative features discovered using Convolutional Neural Networks (CNNs) to enhance the performance and accuracy of facial expression recognition. We use the pre-trained Inception V3 CNN architecture and concatenate an intermediate layer with the final layer, which is then passed through a fully connected layer to perform classification. We use the JAFFE (Japanese Female Facial Expression) dataset, and experimental results show that the proposed method performs well and improves recognition accuracy.
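The fusion step described in this abstract, concatenating a pooled intermediate feature map with the final feature vector before a fully connected classifier, can be sketched independently of the actual Inception V3 weights. The shapes and the random stand-in activations below are illustrative only, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CNN activations (real values would come from Inception V3)
intermediate = rng.normal(size=(4, 17, 17, 768))  # mid-network feature map, batch of 4
final = rng.normal(size=(4, 2048))                # global-pooled top-layer features

# Global average pooling collapses the spatial map to one vector per image
pooled = intermediate.mean(axis=(1, 2))           # shape (4, 768)

# Concatenate both feature vectors, then apply a fully connected layer
fused = np.concatenate([pooled, final], axis=1)   # shape (4, 2816)
W = rng.normal(size=(fused.shape[1], 7)) * 0.01   # 7 JAFFE expression classes
logits = fused @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
```

The design intuition is that intermediate layers retain finer spatial detail (e.g., around the eyes and mouth) that the final embedding has already abstracted away, so concatenating the two gives the classifier both levels of description.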

