Is There a Linear or a Nonlinear Relationship between Rotation and Configural Processing of Faces?

Perception ◽  
10.1068/p3195 ◽  
2002 ◽  
Vol 31 (3) ◽  
pp. 287-296 ◽  
Author(s):  
Stephan M Collishaw ◽  
Graham J Hole

Research suggests that inverted faces are harder to recognise than upright faces because of a disruption in processing their configural properties. Reasons for this difficulty were explored by investigating people's ability to identify faces at intermediate angles of rotation. Participants were asked to discriminate blurred famous and unfamiliar faces presented at nine angles. Blurred faces were used to minimise featural processing strategies, and to assess the effects of rotation that are specific to configural processing. The results indicate a linear relationship between angle of rotation and recognition accuracy. It appears that configural processing becomes gradually more disrupted the further a face is oriented away from the upright. The implications of these findings for competing explanations of the face-inversion effect are discussed.

2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration. It remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces were entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has been frequently adopted as an index to examine the underlying mechanism of face processing, the emergence of the FIE is not robust across all configural alterations but depends on the ratio of configural alteration.


2018 ◽  
Author(s):  
Masaki Tomonaga

Abstract: Four young laboratory-born Japanese macaques (Macaca fuscata) looked at photographs of familiar and unfamiliar persons presented in upright and inverted orientations by pressing a lever under a conjugate schedule of sensory reinforcement (successive preferential looking procedure). Three types of photographs were prepared: photographs of persons taken in front view, photographs taken from the back, and photographs without persons. The monkeys looked longer when the face was upright than inverted only for pictures containing an unfamiliar person in front view. The other types of photographs did not produce an inversion effect. Familiarity weakened the face-specific inversion effect in monkeys. This difference may be due in part to a lower preference for familiar faces and to a difference in processing mode between familiar and unfamiliar faces.


Author(s):  
Sarah Schroeder ◽  
Kurtis Goad ◽  
Nicole Rothner ◽  
Ali Momen ◽  
Eva Wiese

People process human faces configurally—as a Gestalt or integrated whole—but perceive objects in terms of their individual features. As a result, faces—but not objects—are more difficult to process when presented upside down versus upright. Previous research demonstrates that this inversion effect is not observed when recognizing previously seen android faces, suggesting they are processed more like objects, perhaps due to a lack of perceptual experience and/or motivation to recognize android faces. The current study aimed to determine whether negative emotions, particularly fear of androids, may lessen configural processing of android faces compared to human faces. While the current study replicated previous research showing a greater inversion effect for human compared to android faces, we did not find evidence that negative emotions—such as fear—towards androids influenced the face inversion effect. We discuss the implications of this study and opportunities for future research.


Author(s):  
Sam S. Rakover

Perception and recognition of faces presented upright are better than perception and recognition of faces presented inverted. The difference between upright and inverted orientations is greater in face recognition than in non-face object recognition. This Face-Inversion Effect is explained by the "Configural Processing" hypothesis, which holds that inversion disrupts configural information processing and leaves featural information intact. The present chapter discusses two important findings that cast doubt on this hypothesis: inversion impairs recognition of isolated features (hair & forehead, and eyes), and certain facial configural information is not affected by inversion. The chapter focuses mainly on the latter finding, which reveals a new type of facial configural information, the "Eye-Illusion", which is based on certain geometrical illusions. The Eye-Illusion tended to resist inversion in experimental tasks of both perception and recognition. It also resisted inversion when its magnitude was reduced. Similar results were obtained with a "Headlight-Illusion" produced on a car's front, and with a "Form-Illusion" produced in geometrical forms. However, the Eye-Illusion was greater than the Headlight-Illusion, which in turn was greater than the Form-Illusion. These findings were explained by the "General Visual-Mechanism" hypothesis in terms of levels of visual information learning. The chapter proposes that a face is composed of various kinds of configural information that are differently impaired by inversion: from no effect (the Eye-Illusion) to a large effect (the Face-Inversion Effect).


2021 ◽  
Author(s):  
James Daniel Dunn ◽  
Victor Perrone de Lima Varela ◽  
Victoria Ida Nicholls ◽  
Michaell Papinutto ◽  
David White ◽  
...  

People's ability to recognize faces varies to a surprisingly large extent, and these differences are hereditary. But the cognitive and perceptual processes giving rise to these differences remain poorly understood. Here we compared the visual sampling of 10 super-recognizers – individuals who achieve the highest levels of accuracy in face recognition tasks – to that of typical viewers. Participants were asked to learn, and later recognize, a set of unfamiliar faces while their gaze position was recorded. They viewed faces through 'spotlight' apertures varying in size, where the face on the screen was modified in real time to restrict the visual information displayed to the participant around their gaze position. Higher recognition accuracy in super-recognizers was observed only when at least 36% of the face was visible. We also identified qualitative differences in their visual sampling that can explain their superior recognition accuracy: (1) less systematic focus on the eye region; (2) more fixations to the central region of faces; (3) greater visual exploration of faces in general. These differences were observed in both natural and spotlight viewing conditions, but were most apparent when learning faces and not during recognition. Critically, this suggests that superior recognition performance is founded on enhanced encoding of faces into memory rather than on memory retention. Together, our results point to a process whereby super-recognizers construct a more robust memory trace by accumulating samples of complex visual information across successive eye movements.


2017 ◽  
Vol 23 (3) ◽  
pp. 287-291 ◽  
Author(s):  
Tamsyn E. Van Rheenen ◽  
Nicole Joshua ◽  
David J Castle ◽  
Susan L. Rossell

Abstract

Objectives: Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts.

Methods: Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies: part-based and whole-face emotion recognition.

Results: Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only.

Conclusions: Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287–291)


2003 ◽  
Vol 56 (6) ◽  
pp. 955-975 ◽  
Author(s):  
Luc Boutsen ◽  
Glyn W. Humphreys

In the “Thatcher illusion” a face, in which the eyes and mouth are inverted relative to the rest of the face, looks grotesque when shown upright but not when inverted. In four experiments we investigated the contribution of local and global processing to this illusion in normal observers. We examined inversion effects (i.e., better performance for upright than for inverted faces) in a task requiring discrimination of whether faces were or were not “thatcherized”. Observers made same/different judgements to isolated face parts (Experiments 1–2) and to whole faces (Experiments 3–4). Face pairs had the same or different identity, allowing for different processing strategies using feature-based or configural information, respectively. In Experiment 1, feature-based matching of same-person face parts yielded only a small inversion effect for normal face parts. However, when feature-based matching was prevented by using the face parts of different people on all trials (Experiment 2) an inversion effect occurred for normal but not for thatcherized parts. In Experiments 3 and 4, inversion effects occurred with normal but not with thatcherized whole faces, on both same- and different-person matching tasks. This suggests that a common configural strategy was used with whole (normal) faces. Face context facilitated attention to misoriented parts in same-person but not in different-person matching. The results indicate that (1) face inversion disrupts local configural processing, but not the processing of image features, and (2) thatcherization disrupts local configural processing in upright faces.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies used mainly facial expressions presented frontally, and those that used facial expressions in profile view employed a between-subjects design or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and in profile views by using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female, Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, viz., frontal and in profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were registered.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions which rely mostly on the eye regions.

