Illumination-based image synthesis: creating novel images of human faces under differing pose and lighting

Author(s):  
A.S. Georghiades ◽  
P.N. Belhumeur ◽  
D.J. Kriegman
2020 ◽  
Vol 2020 (28) ◽  
pp. 175-180
Author(s):  
Hadas Shahar ◽  
Hagit Hel-Or

The field of image forgery is widely studied, and the recent introduction of deep-network-based image synthesis has made the detection of fake image sequences even more challenging. Detecting spoofing attacks, in particular, is of grave importance. In this study we exploit the minute changes in facial color of human faces in videos to distinguish real from fake videos. Even at rest, human skin color changes with sub-dermal blood flow, and these changes are amplified under stress and emotion. We show that facial color extracted along a video sequence can serve as a feature for training deep neural networks to successfully distinguish fake from real face sequences.
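The color cue described above reduces to a simple temporal feature. The sketch below is our own illustration, not the authors' code: the mean RGB color of a fixed face region in each frame yields a (frames × 3) trace that could be fed to a network. The bounding box and the synthetic clip are assumptions for demonstration.

```python
import numpy as np

def facial_color_signal(frames, box):
    """Mean RGB color of the face region in each frame.

    frames: (T, H, W, 3) uint8 video; box: (y0, y1, x0, x1) face bounding box.
    Returns a (T, 3) float array -- the temporal color trace used as a feature.
    """
    y0, y1, x0, x1 = box
    roi = frames[:, y0:y1, x0:x1, :].astype(np.float64)
    return roi.mean(axis=(1, 2))

# Synthetic 10-frame clip whose "face" patch slowly brightens,
# mimicking a sub-dermal blood-flow drift.
T, H, W = 10, 64, 64
frames = np.zeros((T, H, W, 3), dtype=np.uint8)
for t in range(T):
    frames[t, 16:48, 16:48, :] = 100 + t
signal = facial_color_signal(frames, (16, 48, 16, 48))
print(signal.shape)  # (10, 3)
```

In a real pipeline the box would come from a face detector and the trace (or its frequency spectrum) would be the network input.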


2013 ◽  
pp. 1145-1161
Author(s):  
Zahid Riaz ◽  
Suat Gedikli ◽  
Michael Beetz ◽  
Bernd Radig

In this chapter, we focus on human-robot joint interaction, where robots extract multiple useful features from human faces. The idea follows daily-life scenarios in which humans rely mostly on face-to-face interaction and interpret the gender, identity, facial behavior, and age of other persons at a first glance. We term this the face-at-a-glance problem. Our proposed solution is the development of a photorealistic 3D face model in real time for human facial analysis. We also briefly discuss outstanding challenges for image synthesis, such as head pose, facial expressions, and illumination. Given the diversity of the application domain and the need to optimize relevant information extraction for computer vision applications, we propose to solve this problem with an interdisciplinary 3D face model, built using computer vision and computer graphics tools together with image processing techniques. To trade off accuracy against efficiency, we choose a wireframe model, which supports automatic face generation in real time. The goal of this chapter is to provide a standalone and comprehensive framework for extracting useful multi-features from a 3D model. Because of their wide range of information and low computational cost, such features find applications in several advanced camera-mounted technical systems. Although the chapter focuses on multi-feature extraction for human faces in interactive applications with intelligent systems, its scope is equally useful for researchers and industrial practitioners working on the modeling of 3D deformable objects. The chapter is devoted mainly to human faces, but the approach can also be applied to other domains such as medical imaging, industrial robot manipulation, and action recognition.
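At its core, a wireframe face model of the kind mentioned above amounts to rotating and projecting a small set of 3D vertices; this is what makes real-time operation cheap. The sketch below is our own simplified illustration of that core step — the four-point "mesh", the yaw-only rotation, and the orthographic projection are all assumptions, not the chapter's model:

```python
import numpy as np

def project_wireframe(vertices, yaw):
    """Orthographic 2D projection of 3D mesh vertices after a yaw rotation.

    vertices: (N, 3) array of model points; yaw: head rotation in radians
    about the vertical axis. Returns (N, 2) image-plane coordinates.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])  # rotation about the vertical (y) axis
    return (vertices @ R.T)[:, :2]  # drop depth: orthographic projection

# A crude 4-vertex face "wireframe": nose tip plus three outline points.
mesh = np.array([[0.0, 0.0, 1.0],    # nose tip (protrudes in z)
                 [-1.0, 1.0, 0.0],   # left temple
                 [1.0, 1.0, 0.0],    # right temple
                 [0.0, -1.5, 0.0]])  # chin
frontal = project_wireframe(mesh, 0.0)
turned = project_wireframe(mesh, np.pi / 6)  # 30-degree head turn
```

A frontal view leaves the x/y coordinates unchanged, while a head turn shifts the nose tip sideways — the head-pose effect the chapter lists among its challenges.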


2018 ◽  
Vol 4 (10) ◽  
pp. 6
Author(s):  
Khemchandra Patel ◽  
Dr. Kamlesh Namdev

Age changes cause major variations in the appearance of human faces. Because of many lifestyle factors, it is difficult to predict precisely how individuals will look in advancing years, or how they looked in "retreating" years. This paper reviews age variation methods and techniques, which are useful for capturing wanted fugitives, finding missing children, updating employee databases, and enhancing visual effects in the film, television, and gaming fields. Many different methods for age variation are currently available, each with its own advantages and purpose. Because of its real-life applications, researchers have shown great interest in automatic facial age estimation. In this paper, different age variation methods and their prospects are reviewed. The paper highlights the latest methodologies and feature extraction methods used by researchers to estimate age, and also discusses the different types of classifiers used in this domain.
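As a toy-scale illustration of the classifier side of such pipelines (our own sketch, not a method from the paper): a nearest-centroid rule over facial feature vectors, standing in for the richer classifiers the review discusses. The feature values and age-group labels here are invented for demonstration.

```python
import numpy as np

class NearestCentroidAge:
    """Toy nearest-centroid classifier over facial feature vectors.

    Illustrative only: systems surveyed in the literature use richer
    features (e.g. anthropometric ratios, appearance models) and
    stronger classifiers.
    """
    def fit(self, X, y):
        # One centroid per age-group label, in sorted label order.
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Assign each sample to the label of its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.labels_[d.argmin(axis=1)]

# Invented 2-D features (e.g. wrinkle density, a facial ratio) per sample.
X_train = np.array([[20.0, 1.0], [22.0, 1.2], [60.0, 3.0], [62.0, 3.1]])
y_train = np.array(["young", "young", "old", "old"])
clf = NearestCentroidAge().fit(X_train, y_train)
pred = clf.predict(np.array([[21.0, 1.1], [61.0, 3.0]]))
```

Swapping in a stronger classifier changes only the `fit`/`predict` pair; the surrounding feature-extraction pipeline stays the same.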


2018 ◽  
Author(s):  
Karel Kleisner ◽  
Šimon Pokorný ◽  
Selahattin Adil Saribay

In the present research, we took advantage of geometric morphometrics to propose a data-driven method for estimating an individual's degree of facial typicality/distinctiveness for cross-cultural (and other cross-group) comparisons. Looking like a stranger in one's home culture may be somewhat stressful; the same facial appearance, however, might become advantageous within an outgroup population. To address this fit between facial appearance and cultural setting, we propose a simple measure of distinctiveness/typicality based on the position of an individual along the axis connecting the facial averages of the two populations under comparison. The more distant a face is from its ingroup population mean in the direction of the outgroup mean, the more distinct it is (vis-à-vis the ingroup) and the more it resembles outgroup standards. We compared this new measure with an alternative measure based on the distance from the outgroup mean; the new measure showed a stronger association with rated facial distinctiveness. Subsequently, we manipulated facial stimuli to reflect different levels of ingroup-outgroup distinctiveness and tested them in one of the target cultures. Perceivers successfully distinguished outgroup from ingroup faces in a two-alternative forced-choice task. There was also some evidence that this task was harder when the two faces were closer along the axis connecting the facial averages of the two cultures. Future directions and potential applications of our proposed approach are discussed.
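The proposed measure can be sketched directly from its description: project a face's landmark-coordinate vector onto the axis connecting the two group averages and normalize by the squared axis length. The function below is our own minimal implementation of that idea, assuming the landmark configurations are already aligned (e.g. by a Procrustes fit); the variable names and the tiny example vectors are ours.

```python
import numpy as np

def axis_position(face, ingroup_mean, outgroup_mean):
    """Scalar position of a face along the ingroup-to-outgroup axis.

    All arguments are flattened landmark-coordinate vectors. Returns 0 at
    the ingroup average and 1 at the outgroup average; larger values mean
    a face that is more distinct vis-a-vis the ingroup and closer to
    outgroup standards.
    """
    axis = outgroup_mean - ingroup_mean
    return float((face - ingroup_mean) @ axis / (axis @ axis))

# Hypothetical 2-landmark (4-coordinate) configurations.
ingroup = np.array([0.0, 0.0, 1.0, 1.0])
outgroup = np.array([1.0, 0.0, 1.0, 2.0])
halfway = (ingroup + outgroup) / 2.0
pos = axis_position(halfway, ingroup, outgroup)  # midway between the means
```

Unlike a plain distance to the outgroup mean, this projection is signed and directional, which is what lets it separate "distinct toward the outgroup" from distinctiveness in any other direction.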


1999 ◽  
Vol 19 (Supplement1) ◽  
pp. 87-90
Author(s):  
D. SEKIJIMA ◽  
S. HAYANO ◽  
Y. SAITO ◽  
T.L. KUNII

1995 ◽  
Author(s):  
Jie Yang ◽  
Alex Waibel
