Facial Feature Oriented Banana Filters

2013 ◽  
Vol 7 ◽  
pp. 23-36
Author(s):  
Abdelatif Hussein A. Ali
Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 553
Author(s):  
Suresh Neethirajan ◽  
Inonge Reimert ◽  
Bas Kemp

Understanding animal emotions is a key to unlocking methods for improving animal welfare. Currently, there are no benchmarks or standardized scientific assessments available for measuring and quantifying the emotional responses of farm animals. Using sensors to collect biometric data as a means of measuring animal emotions is a topic of growing interest in agricultural technology. Here we review several aspects of sensor-based approaches to monitoring animal emotions, beginning with an introduction to animal emotions. We then review some of the available technological systems for analyzing animal emotions, including a variety of sensors, the algorithms used to process the biometric data taken from these sensors, facial expression analysis, and sound analysis. We conclude that a single emotional-expression measurement, based either on the facial features of animals or on physiological functions, cannot accurately capture a farm animal's emotional changes, and hence compound expression recognition measurement is required. We propose some novel ways to combine sensor technologies, through sensor fusion, into efficient systems for monitoring and measuring the animals' compound expressions of emotion. Finally, we explore future perspectives in the field, including challenges and opportunities.


Author(s):  
Hung Phuoc Truong ◽  
Thanh Phuong Nguyen ◽  
Yong-Guk Kim

We present a novel framework for efficient and robust facial feature representation based upon the Local Binary Pattern (LBP), called the Weighted Statistical Binary Pattern, wherein the descriptors utilize straight-line topology along different directions. The input image is initially divided into mean and variance moments. A new variance moment, which contains distinctive facial features, is prepared by extracting the k-th root. Then, Sign and Magnitude components along four different directions are constructed from the mean moment, and a weighting approach based on the new variance is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Complementary LBP along different directions. A comprehensive evaluation using six public face datasets suggests that the present framework outperforms the state-of-the-art methods, achieving accuracies of 98.51% for ORL, 98.72% for YALE, 98.83% for Caltech, 99.52% for AR, 94.78% for FERET, and 99.07% for KDEF. The influence of color spaces and the issue of degraded images are also analyzed with our descriptors. Such a result with theoretical underpinning confirms that our descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
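The core building block described above, decomposing per-direction pixel differences into Sign and Magnitude components and concatenating their histograms, can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the function names, the magnitude quantization into 8 bins, and the four direction offsets are all assumptions made for the sketch.

```python
import numpy as np

def sign_magnitude(img, dy, dx):
    """Per-pixel difference with the neighbour offset by (dy, dx):
    Sign = (difference >= 0), Magnitude = |difference|."""
    h, w = img.shape
    center = img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    neighb = img[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
    diff = neighb.astype(float) - center.astype(float)
    return (diff >= 0).astype(np.uint8), np.abs(diff)

def direction_histograms(img, directions=((0, 1), (1, 0), (1, 1), (1, -1)), bins=8):
    """Concatenate Sign and quantised-Magnitude histograms over four directions
    (horizontal, vertical, and the two diagonals)."""
    feats = []
    for dy, dx in directions:
        s, m = sign_magnitude(img, dy, dx)
        feats.append(np.bincount(s.ravel(), minlength=2))
        # Quantise magnitudes into a fixed number of bins before counting.
        q = np.minimum((m / (m.max() + 1e-9) * bins).astype(int), bins - 1)
        feats.append(np.bincount(q.ravel(), minlength=bins))
    return np.concatenate(feats)
```

In the paper's pipeline, each of these component histograms would additionally be weighted by the new variance moment before concatenation; the sketch omits that weighting step.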


2021 ◽  
Author(s):  
Gabrielle E. Reimann ◽  
Catherine Walsh ◽  
Kelsey D. Csumitta ◽  
Patrick McClure ◽  
Francisco Pereira ◽  
...  

Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 442 ◽  
Author(s):  
Dongxue Liang ◽  
Kyoungju Park ◽  
Przemyslaw Krompiec

With the advent of deep learning methods, portrait video stylization has become more popular. In this paper, we present a robust method for automatically stylizing portrait videos that contain small human faces. By extending Mask R-CNN (Mask Regions with Convolutional Neural Network features) with a CNN branch that detects the contour landmarks of the face, we divided the input frame into three regions: the region of facial features, the region of the inner face surrounded by 36 face contour landmarks, and the region of the outer face. While keeping the facial features region as it is, we used two different stroke models to render the other two regions. During the non-photorealistic rendering (NPR) of the animation video, we combined the deformable strokes with optical flow estimation between adjacent frames to follow the underlying motion coherently. The experimental results demonstrated that our method could not only effectively preserve the small and distinct facial features, but also follow the underlying motion coherently.
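The temporal-coherence step, moving stroke anchor points along the estimated optical flow between adjacent frames, can be illustrated with a small sketch. This is a hedged approximation of the general technique, not the paper's code: the function name and the nearest-pixel flow lookup are assumptions, and a real system would obtain `flow` from a dense optical-flow estimator.

```python
import numpy as np

def advect_strokes(anchors, flow):
    """Move stroke anchor points along a dense optical-flow field.

    anchors: (N, 2) float array of (x, y) stroke positions in the current frame.
    flow:    (H, W, 2) per-pixel displacement (dx, dy) toward the next frame.
    Returns the anchor positions carried into the next frame.
    """
    h, w = flow.shape[:2]
    # Sample the flow at each anchor's nearest pixel, clipped to the frame.
    xs = np.clip(np.round(anchors[:, 0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(anchors[:, 1]).astype(int), 0, h - 1)
    return anchors + flow[ys, xs]
```

Advecting the strokes rather than regenerating them per frame is what keeps the rendered marks attached to the same underlying surface points, avoiding the flicker typical of frame-independent stylization.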


Perception ◽  
1986 ◽  
Vol 15 (4) ◽  
pp. 373-386 ◽  
Author(s):  
Nigel D Haig

For recognition of a target there must be some form of comparison process between the image of that target and a stored representation of that target. In the case of faces there must be a very large number of such stored representations, yet human beings seem able to perform comparisons at phenomenal speed. It is possible that faces are memorised by fitting unusual features or combinations of features onto a bland prototypical face, and such a data-compression technique would help to explain our computational speed. If humans do indeed function in this fashion, it is necessary to ask just what are the features that distinguish one face from another, and also, what are the features that form the basic set of the prototypical face. The distributed apertures technique was further developed in an attempt to answer both questions. Four target faces, stored in an image-processing computer, were each divided up into 162 contiguous squares that could be displayed in their correct positions in any combination of 24 or fewer squares. Each observer was required to judge which of the four target faces was displayed during a 1 s presentation, and the proportion of correct responses for each individual square was computed. The resultant response distributions, displayed as brightness maps, give a vivid impression of the relative saliency of each feature square, both for the individual targets and for all of them combined. The results, while broadly confirming previous work, contain some very interesting and surprising details about the differences between the target faces.
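The analysis described, computing the proportion of correct identifications for every displayed square and rendering the result as a brightness map, reduces to a simple per-square tally. The sketch below is an illustrative reconstruction under assumptions (the function name and trial encoding are hypothetical; the original study used four target faces, whereas this sketch pools trials for a single map).

```python
import numpy as np

N_SQUARES = 162  # the face image was divided into 162 contiguous squares

def square_saliency(trials, n_squares=N_SQUARES):
    """Proportion of correct identifications for each square.

    trials: iterable of (shown_squares, correct) pairs, where shown_squares
    is a collection of square indices (24 or fewer per trial) and correct
    is 1 for a correct identification, 0 otherwise.
    Squares never shown map to NaN.
    """
    shown = np.zeros(n_squares)
    right = np.zeros(n_squares)
    for squares, correct in trials:
        for s in squares:
            shown[s] += 1
            right[s] += correct
    return np.where(shown > 0, right / np.maximum(shown, 1), np.nan)
```

Reshaping the resulting vector back onto the face's square grid and displaying it as an intensity image yields the brightness map of feature saliency the abstract describes.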


1995 ◽  
Vol 7 (1) ◽  
pp. 57-74 ◽  
Author(s):  
M.J.T. Reinders ◽  
P.J.L. van Beek ◽  
B. Sankur ◽  
J.C.A. van der Lubbe
