Representing and compressing facial animation parameters using facial action basis functions

1999 ◽  
Vol 9 (3) ◽  
pp. 405-410 ◽  
Author(s):  
J. Ahlberg ◽  
Haibo Li

Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithfully reproducing spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework consists of a coupled Bayesian network (BN) that unifies facial expression analysis and synthesis in one coherent structure. At the analysis end, we cast the facial action coding system (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesis end, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current facial expression state and pose information output by the analysis end. The two BNs are connected statically through a data stream link. The coupled BN brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during facial expression analysis. Third, a very low bitrate (9 bytes per frame) can be achieved in data transmission.
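To make the "spatial and temporal inference" benefit concrete, here is a minimal sketch of forward filtering over expression states in a simple dynamic model (an HMM-style special case of the kind of DBN described above). The three states, the transition matrix, and the AU-based likelihoods are all made-up illustrative numbers, not the paper's model.

```python
import numpy as np

N_EXPR = 3  # toy expression states, e.g. neutral / smile / surprise

# Hypothetical transition model P(x_t | x_{t-1}): expressions persist frame-to-frame.
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])

def forward_step(belief, likelihood):
    """One temporal update: predict with the transition model, then correct
    with the spatial evidence (AU measurement likelihoods) for this frame."""
    predicted = A.T @ belief            # temporal inference
    posterior = predicted * likelihood  # spatial inference from AU evidence
    return posterior / posterior.sum()

belief = np.full(N_EXPR, 1.0 / N_EXPR)
# Frame 2 simulates a misdetection where AU evidence weakly favors state 0:
likelihoods = [np.array([0.1, 0.8, 0.1]),
               np.array([0.4, 0.3, 0.3]),  # noisy/misdetected frame
               np.array([0.1, 0.8, 0.1])]
for lik in likelihoods:
    belief = forward_step(belief, lik)

print(belief.argmax())  # temporal smoothing keeps the correct state (1)
```

Because the prediction step carries the previous posterior forward, a single frame of bad AU evidence is outweighed by the temporal prior, which is the mechanism behind the robustness-to-misdetection claim.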


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
Shuo Sun ◽  
Chunbao Ge

Synthesizing expressive facial animation is a challenging topic in the graphics community. In this paper, we introduce a novel expression ratio image (ERI) driven framework, based on SVR and MPEG-4, for automatic 3D facial expression animation. Using support vector regression (SVR), the framework learns and predicts the regression relationship between the facial animation parameters (FAPs) and the expression ratio image parameters. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, from which a reasonable expression ratio image can be reconstructed. We then learn a support vector regression mapping so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we drive our 3D face animation system with the resulting FAPs and show that it works effectively.
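The PCA-then-SVR pipeline described above can be sketched as follows. This is a hedged illustration with synthetic data: the image size, number of eigen-ERI components, number of FAP channels, and the RBF kernel choice are all assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
eri_images = rng.random((100, 64 * 64))  # flattened toy expression ratio images
faps = rng.random((100, 3))              # 3 toy FAP channels

# Project ERIs into a low-dimensional eigen-ERI space.
pca = PCA(n_components=10)
eigen_eri = pca.fit_transform(eri_images)

# Learn one support vector regressor per FAP channel from eigen-ERI coefficients.
models = [SVR(kernel="rbf").fit(eigen_eri, faps[:, k])
          for k in range(faps.shape[1])]

def synthesize_faps(new_eri):
    """Map a new ERI to FAPs via the learned eigen-ERI -> FAP regression."""
    coeffs = pca.transform(new_eri.reshape(1, -1))
    return np.array([m.predict(coeffs)[0] for m in models])

fap_out = synthesize_faps(eri_images[0])
print(fap_out.shape)  # one predicted value per FAP channel
```

Regressing from the compact eigen-ERI coefficients rather than raw pixels is what makes the FAP synthesis fast at runtime: only a projection and a few kernel evaluations are needed per frame.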


Author(s):  
Eric C. Larson ◽  
Gary G. Yen

Facial feature tracking for model-based coding has evolved over the past decades. Of particular interest is its application to very low bit rate coding, in which optimization is used to analyze head-and-shoulder sequences. We present the results of a computational experiment in which we apply a combination of a non-dominated sorting genetic algorithm and a deterministic search to find optimal facial animation parameters at many bandwidths simultaneously. Two objectives are used: peak signal-to-noise ratio is maximized while the total number of facial animation parameters is minimized. In particular, the algorithm is tested for efficiency and reliability. The results show that the overall methodology works effectively, but that a better error assessment function is needed for future study.
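The non-dominated sorting criterion at the heart of this approach can be illustrated with a small sketch. Each candidate is a hypothetical (PSNR, FAP-count) pair with made-up values; the code extracts the first Pareto front, i.e. the trade-off curve across bandwidths that the genetic algorithm searches for.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives (maximize PSNR,
    minimize FAP count) and strictly better in at least one."""
    psnr_a, n_a = a
    psnr_b, n_b = b
    return (psnr_a >= psnr_b and n_a <= n_b) and (psnr_a > psnr_b or n_a < n_b)

def pareto_front(candidates):
    """Keep candidates not dominated by any other (the first front in
    non-dominated sorting)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Toy candidates: (PSNR in dB, number of FAPs transmitted)
candidates = [(32.0, 10), (30.0, 4), (28.0, 4), (33.0, 12), (29.0, 6)]
front = sorted(pareto_front(candidates))
print(front)  # the surviving trade-offs between quality and parameter count
```

Candidates like (28.0, 4) drop out because another point achieves higher PSNR at the same FAP count; what remains is the set of solutions that are each optimal at some bandwidth, which is why one run yields parameters "at many bandwidths simultaneously."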

