The left side superiority effect for facial expression perception is not a left visual field superiority effect

2016 ◽ Vol 16 (12) ◽ pp. 162
Author(s): Chieh-An Yang ◽ Chien-Chung Chen
2017 ◽ Vol 12 (8) ◽ pp. 1342-1350
Author(s): Wookyoung Jung ◽ Joong-Gu Kang ◽ Hyeonjin Jeon ◽ Miseon Shim ◽ Ji Sun Kim ◽ ...

2009 ◽ Vol 1 (2) ◽ pp. 49
Author(s): Anamitra Basu

Visual-field advantage was examined as a function of presentation mode (unilateral, bilateral), stimulus structure (word, face), and stimulus content (emotional, neutral) under two conditions: with and without judgment feedback. A divided visual-field paradigm was used, with recognition accuracy and response latency as the dependent variables. Stimuli were recognized significantly better in the left visual field than in the right visual field. Unilaterally presented stimuli were recognized significantly better than bilaterally presented stimuli. Emotional content was recognized more accurately than neutral content. Multivariate ANOVA indicated that both words and faces were recognized better without judgment feedback than with it; however, stimuli were judged with significantly shorter response latency when judgment feedback was given.
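The design described above is a fully crossed 2 × 2 × 2 × 2 factorial. As a minimal sketch (factor labels are paraphrased from the abstract, not taken from the paper's materials), the condition grid can be enumerated as:

```python
from itertools import product

# Factors of the divided visual-field experiment (paraphrased labels)
modes = ["unilateral", "bilateral"]
structures = ["word", "face"]
contents = ["emotional", "neutral"]
feedback = ["with judgment feedback", "without judgment feedback"]

# Every combination of factor levels is one cell of the design
conditions = list(product(modes, structures, contents, feedback))
print(len(conditions))  # 16 cells in the 2x2x2x2 design
```

Each of the 16 cells would contribute recognition-accuracy and response-latency measurements to the multivariate analysis.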


2002 ◽ Vol 14 (8) ◽ pp. 1158-1173
Author(s): Matthew N. Dailey ◽ Garrison W. Cottrell ◽ Curtis Padgett ◽ Ralph Adolphs

There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of “categorical perception.” In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, “surprise” expressions lie between “happiness” and “fear” expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
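The continuous-space account above can be made concrete with a toy example. Here is a minimal sketch, assuming purely illustrative 2-D coordinates (not values from the paper), of how "surprise" can lie between "happiness" and "fear" in a low-dimensional face space:

```python
import math

# Hypothetical 2-D coordinates for three expression categories in a
# continuous "face space" (illustrative values, not from the paper)
space = {
    "happiness": (0.0, 0.0),
    "surprise":  (1.0, 0.2),
    "fear":      (2.0, 0.0),
}

def dist(a, b):
    """Euclidean distance between two points in the toy face space."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# "Surprise" sits between the other two: it is closer to each of
# "happiness" and "fear" than they are to one another.
d_hs = dist(space["happiness"], space["surprise"])
d_sf = dist(space["surprise"], space["fear"])
d_hf = dist(space["happiness"], space["fear"])
print(d_hs < d_hf and d_sf < d_hf)  # True
```

In such a space, graded discrimination falls out of geometry: pairs of faces near a category boundary are separated by the same metric that places "surprise" between its perceptual neighbors.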

