Facial Feature Model for a Portrait Video Stylization

Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divide the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. We keep the facial features region as it is and use two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combine deformable strokes with optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrate that our method not only effectively preserves the small and distinct facial features, but also follows the underlying motion coherently.
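As a rough illustration of the motion-coherence step, the sketch below advects stroke anchor points along a dense optical-flow field between adjacent frames. It is a minimal NumPy stand-in: the function name, the nearest-neighbour flow sampling, and the toy uniform flow are assumptions for illustration, not the paper's implementation (which also handles deformable strokes).

```python
import numpy as np

def advect_strokes(strokes, flow):
    """Move stroke anchor points along a dense optical-flow field.

    strokes: (N, 2) array of (x, y) anchor positions.
    flow:    (H, W, 2) per-pixel displacement (dx, dy) to the next frame.
    """
    h, w = flow.shape[:2]
    # Sample the flow at each anchor (nearest-neighbour for simplicity).
    xs = np.clip(strokes[:, 0].round().astype(int), 0, w - 1)
    ys = np.clip(strokes[:, 1].round().astype(int), 0, h - 1)
    return strokes + flow[ys, xs]

# Toy example: a uniform flow of (+2, +1) pixels shifts every stroke anchor.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0  # dx
flow[..., 1] = 1.0  # dy
strokes = np.array([[0.0, 0.0], [3.0, 2.0]])
moved = advect_strokes(strokes, flow)
```

In practice the flow would come from an optical-flow estimator (e.g. OpenCV's Farneback method) rather than a constant field.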

Author(s):  
P. S. HIREMATH ◽  
AJIT DANTI

In this paper, human faces are detected using the skin color information and the Lines-of-Separability (LS) face model. The various skin color spaces based on widely used color models such as RGB, HSV, YCbCr, YUV and YIQ are compared and an appropriate color model is selected for the purpose of skin color segmentation. The proposed approach of skin color segmentation is based on YCbCr color model and sigma control limits for variations in its color components. The segmentation by the proposed method is found to be more efficient in terms of speed and accuracy. Each of the skin segmented regions is then searched for the facial features using the LS face model to detect the face present in it. The LS face model is a geometric approach in which the spatial relationships among the facial features are determined for the purpose of face detection. Hence, the proposed approach based on the combination of skin color segmentation and LS face model is able to detect single as well as multiple faces present in a given image. The experimental results and comparative analysis demonstrate the effectiveness of this approach.
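The sigma-control-limit idea can be sketched as follows: convert RGB to YCbCr (BT.601) and keep pixels whose Cb and Cr components fall within mean ± k·sigma limits. The mean and sigma values below are illustrative placeholders, not the limits estimated in the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion for uint8-range values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_mean=110.0, cb_sigma=10.0,
              cr_mean=150.0, cr_sigma=10.0, k=3.0):
    """Mark pixels whose Cb/Cr fall inside mean +/- k*sigma control limits.

    The means and sigmas here are illustrative placeholders, not the
    values the paper derives from training data.
    """
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    in_cb = np.abs(cb - cb_mean) <= k * cb_sigma
    in_cr = np.abs(cr - cr_mean) <= k * cr_sigma
    return in_cb & in_cr

# A skin-toned pixel passes the limits; a pure-blue pixel does not.
img = np.array([[[200, 140, 120], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Each connected region of the resulting mask would then be searched with the LS face model.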


2013 ◽  
Vol 303-306 ◽  
pp. 1402-1405 ◽  
Author(s):  
Chang Yuan Wang ◽  
Mei Juan Qu ◽  
Hong Bo Jia ◽  
Hong Zhe Bi

This paper proposes a new facial feature point localization algorithm based on the main characteristics of the eyes. The detected pupil center positions are used to initialize a hybrid model that combines an improved active shape model (ASM) with an active appearance model (AAM). When ASM locates the face contour feature points, the algorithm uses two-dimensional local gray information to update each feature point position. For the internal feature points, it builds an independent AAM model for each facial organ. It also optimizes the measure functions of ASM and AAM used to judge the convergence of the search algorithm. The experimental results show that the new algorithm greatly improves the localization accuracy of facial feature points.
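The local gray-information search that updates a contour point can be illustrated in one dimension: slide a short mean gray-level profile along a sampled profile and keep the shift with the lowest squared error. This 1-D sketch (function name and toy data assumed) only gestures at the paper's two-dimensional version.

```python
import numpy as np

def best_profile_shift(profile, mean_profile, search_radius=3):
    """Slide a short mean gray-level profile along a sampled 1-D profile
    and return the shift with the smallest sum of squared differences.
    This is a 1-D stand-in for the paper's 2-D local gray search."""
    m = len(mean_profile)
    center = (len(profile) - m) // 2
    best_shift, best_cost = 0, np.inf
    for shift in range(-search_radius, search_radius + 1):
        start = center + shift
        window = profile[start:start + m]
        cost = np.sum((window - mean_profile) ** 2)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# Toy profile: a dark-to-bright edge displaced by +1 sample from center.
profile = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)
mean_profile = np.array([0.0, 0.0, 1.0, 1.0])  # the expected edge pattern
shift = best_profile_shift(profile, mean_profile)
```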


2020 ◽  
Author(s):  
Navin Ipe

The recognition of emotions via facial expressions is a complex process of piecing together various aspects of each facial feature. Since viewing a single facial feature in isolation may result in an inaccurate recognition of emotion, this paper attempts training neural networks to first identify specific facial features in isolation, and then use the general pattern of expressions on the face to identify the overall emotion. The technique presented is very basic, and can definitely be improved with more advanced techniques that incorporate time and context.
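The second stage of the two-step idea, combining isolated per-feature predictions into an overall emotion, can be sketched by fusing the per-feature class distributions. The emotion labels, uniform feature weights, and mean-fusion rule below are assumptions, not the paper's architecture.

```python
import numpy as np

# Hypothetical emotion labels for illustration only.
EMOTIONS = ["neutral", "happy", "angry"]

def combine_feature_predictions(per_feature_probs):
    """Average the per-feature emotion distributions (eyes, brows, mouth, ...)
    and pick the overall emotion - the 'general pattern' stage of the
    two-step idea, with uniform feature weights assumed."""
    stacked = np.stack(per_feature_probs)   # (n_features, n_emotions)
    mean_probs = stacked.mean(axis=0)       # fuse the isolated features
    return EMOTIONS[int(np.argmax(mean_probs))], mean_probs

# Toy outputs of three isolated per-feature classifiers.
eyes  = np.array([0.2, 0.6, 0.2])
brows = np.array([0.3, 0.4, 0.3])
mouth = np.array([0.1, 0.8, 0.1])
label, probs = combine_feature_predictions([eyes, brows, mouth])
```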


2020 ◽  
Author(s):  
Navin Ipe

Emotion recognition by the human brain normally incorporates context, body language, facial expressions, verbal cues, non-verbal cues, gestures, and tone of voice. When considering only the face, piecing together various aspects of each facial feature is critical in identifying the emotion. Since viewing a single facial feature in isolation may result in inaccuracies, this paper attempts training neural networks to first identify specific facial features in isolation, and then use the general pattern of expressions on the face to identify the overall emotion. The reasons for classification inaccuracies are also examined.
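One common way to examine where such classification inaccuracies come from is a confusion matrix over the predicted emotions. The sketch below uses invented class indices and counts purely to show the idea; it is not taken from the paper's evaluation.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count how often each true emotion class is predicted as each label -
    a simple way to see which emotions get confused with which."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels: class 2 is confused with class 0 twice.
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 0, 1, 0, 0, 2]
cm = confusion_matrix(y_true, y_pred, 3)
```

Off-diagonal hot spots point at the feature combinations the network systematically misreads.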


2021 ◽  
Vol 37 (5) ◽  
pp. 292-297
Author(s):  
Winney Eva

In the past two decades, many face recognition methods have been proposed. Most of them use the entire face as the basis for recognition: the basic technical route is to extract and compare global features of the whole face. In actual scenes, however, a face may be partially blocked by obstacles, which raises the question of how to realize face recognition using only the facial features that can still be obtained. Existing partial face recognition techniques are mostly based on acquiring key points of the face in order to recognize the whole face. This review summarizes full-face and partial face recognition methods based on facial key points.
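A minimal sketch of key-point-based partial matching: compare per-keypoint descriptors between a probe and a gallery face, but only over the keypoints that remain visible in the occluded probe. The descriptor shapes, 68-point layout, and distance measure are assumptions for illustration.

```python
import numpy as np

def partial_face_distance(probe, gallery, visible):
    """Compare two per-keypoint descriptor sets using only the keypoints
    visible in the (possibly occluded) probe face. Descriptors are
    illustrative fixed-length vectors, one row per facial keypoint."""
    probe_v = probe[visible]
    gallery_v = gallery[visible]
    # Mean Euclidean distance over the shared, visible keypoints.
    return float(np.mean(np.linalg.norm(probe_v - gallery_v, axis=1)))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(68, 16))      # e.g. 68 keypoints, 16-dim descriptors
probe = gallery.copy()
probe[30:] = rng.normal(size=(38, 16))   # lower face occluded/corrupted
visible = np.arange(30)                  # only upper-face keypoints usable
d_same = partial_face_distance(probe, gallery, visible)
other = rng.normal(size=(68, 16))
d_other = partial_face_distance(probe, other, visible)
```

The same-identity distance stays small despite the occluded lower face, while a different identity scores a larger distance.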


2001 ◽  
Vol 6 (1) ◽  
pp. 39-44
Author(s):  
Saparudin Saparudin

Human facial feature extraction is an important process in a face recognition system. The quality of the extracted facial features is determined by their degree of accuracy, and the weighting of facial features is used to test the accuracy of the methods employed. This research develops a process for weighting facial features automatically. The results obtained agree with those perceived by the human eye.
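Feature weighting in matching can be sketched as a weighted distance between facial-feature vectors, where more discriminative features count more. The weight values and feature ordering below are invented stand-ins for the automatically derived weights the abstract describes.

```python
import numpy as np

def weighted_feature_distance(f1, f2, weights):
    """Weighted Euclidean distance between two facial-feature vectors.
    The weights stand in for automatically derived feature weights;
    the values used here are made up for illustration."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    diff = np.asarray(f1) - np.asarray(f2)
    return float(np.sqrt(np.sum(w * diff ** 2)))

# Eye measurements weighted more heavily than nose and mouth measurements.
weights = [0.5, 0.3, 0.2]
d = weighted_feature_distance([1.0, 2.0, 3.0], [1.0, 2.0, 4.0], weights)
```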


Author(s):  
Arnab Kumar Maji ◽  
Bandariakor Rymbai ◽  
Debdatta Kandar

Facial recognition is the most natural means of biometric identification, as it deals with the measurement of biologically relevant characteristics. Since faces vary from person to person, they can be used for security purposes. Face recognition is a very challenging problem, because the human face changes over time and with pose, expression, occlusion, aging, etc. It can be used in many areas, such as surveillance, security, general identity verification, the criminal justice system, smart cards, etc. The most important part of face recognition is the evaluation of facial features. Using facial features, a system can locate the positions of the eyes, nose, and mouth, and the distances between them can be detected and computed. This chapter discusses some of the techniques that can be used to extract important facial features.
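The geometric measurements the chapter mentions, positions of eyes, nose, and mouth and the distances between them, can be sketched directly from landmark coordinates. The landmark positions below are hypothetical pixel values.

```python
import numpy as np

# Hypothetical landmark positions (x, y) in pixels.
landmarks = {
    "left_eye":  np.array([30.0, 40.0]),
    "right_eye": np.array([70.0, 40.0]),
    "nose":      np.array([50.0, 60.0]),
    "mouth":     np.array([50.0, 80.0]),
}

def feature_distances(lm):
    """Pairwise distances between detected features - the kind of geometric
    measurements a face recognizer can compare between images."""
    eye_mid = (lm["left_eye"] + lm["right_eye"]) / 2
    return {
        "eye_to_eye":    float(np.linalg.norm(lm["right_eye"] - lm["left_eye"])),
        "eyes_to_nose":  float(np.linalg.norm(eye_mid - lm["nose"])),
        "nose_to_mouth": float(np.linalg.norm(lm["mouth"] - lm["nose"])),
    }

dists = feature_distances(landmarks)
```

Ratios of such distances are often preferred over raw values, since they are invariant to image scale.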


2012 ◽  
Vol 220-223 ◽  
pp. 2284-2287
Author(s):  
Chang Yuan Wang ◽  
Jing Wang ◽  
Mei Juan Qu

A new method based on an improved active shape model (ASM) and active appearance model (AAM) is proposed. The method uses two-dimensional local gray information to update each feature point position when ASM locates the face contour feature points. For the internal feature points, it builds an independent AAM model for each facial organ. At the same time, it uses different measure functions to judge the convergence of the search algorithm. The experimental results show that the new algorithm greatly improves the localization accuracy of facial feature points.
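A convergence judgment of the kind the abstract mentions can be sketched as a stopping rule on the iterative search: stop when the mean landmark displacement between iterations falls below a tolerance. This is one plausible form of a measure function, not the authors' exact criterion.

```python
import numpy as np

def has_converged(prev_points, new_points, tol=0.5):
    """Judge search convergence by the mean landmark displacement between
    iterations - one plausible measure function for stopping the ASM/AAM
    search, not the paper's exact formulation."""
    shift = np.linalg.norm(new_points - prev_points, axis=1)
    return float(shift.mean()) < tol

prev_pts   = np.array([[10.0, 10.0], [20.0, 20.0]])
big_step   = prev_pts + 2.0   # large update: keep iterating
small_step = prev_pts + 0.1   # tiny update: declare convergence
```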


1997 ◽  
Vol 9 (5) ◽  
pp. 611-623 ◽  
Author(s):  
Frederick K. D. Nahm ◽  
Amelie Perret ◽  
David G. Amaral ◽  
Thomas D. Albright

Facial displays are an important form of social communication in nonhuman primates. Clues to the information conveyed by faces are the temporal and spatial characteristics of ocular viewing patterns to facial images. The present study compares viewing patterns of four rhesus monkeys (Macaca mulatta) to a set of 1- and 3-sec video segments of conspecific facial displays, which included open-mouth threat, lip-smack, yawn, fear-grimace, and neutral profile. Both static and dynamic video images were used. Static human faces displaying open-mouth threat, smile, and neutral gestures were also presented. Eye position was recorded with a surgically implanted eye-coil. The relative perceptual salience of the eyes, the midface, and the mouth across different expressive gestures was determined by analyzing the number of eye movements associated with each feature during static and dynamic presentations. The results indicate that motion does not significantly affect the viewing patterns to expressive facial displays, and when given a choice, monkeys spend a relatively large amount of time inspecting the face, especially the eyes, as opposed to areas surrounding the face. The expressive nature of the facial display also affected viewing patterns in that threatening and fear-related displays evoked a pattern of viewing that differed from that recorded during the presentation of submissive-related facial displays. From these results we conclude that (1) the most important determinant of the visual inspection patterns of faces is the constellation of physiognomic features and their configuration, but not facial motion, (2) the eyes are generally the most salient facial feature, and (3) the agonistic or affiliative dimension of an expressive facial display can be delineated on the basis of viewing patterns.
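The salience analysis described, comparing the number of eye movements landing on each facial region, amounts to computing each region's share of all fixations. The region names and counts below are invented for illustration; the study's actual numbers are not reproduced here.

```python
# Fixation counts per facial region for one stimulus - illustrative
# numbers only, not values reported in the study.
fixations = {"eyes": 46, "midface": 12, "mouth": 17, "outside_face": 5}

def salience_proportions(counts):
    """Relative viewing salience: each region's share of all fixations."""
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

props = salience_proportions(fixations)
most_viewed = max(props, key=props.get)
```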

