3D face model dataset: Automatic detection of facial expressions and emotions for educational environments

2015, Vol 46 (5), pp. 1028-1037
Author(s): Satyadhyan Chickerur, Kartik Joshi

2013, Vol 461, pp. 838-847
Author(s): Xu Zhang, Shu Jun Zhang, Kevin Hapeshi

Representing various human facial expressions is an essential requirement for emotional bio-robots. Human expressions convey emotions through the positions and movements of facial muscles, enabling communication between human beings. To design and develop emotional robots, it is necessary to build a generic 3D human face model. Since the geometrical features of human faces are freeform surfaces with complex properties, a fundamental requirement is that the model be able to represent both primitive and freeform surfaces. This requirement makes Non-Uniform Rational B-Splines (NURBS) suitable for 3D human face modelling. In this paper, a new parameterised, feature-based generic 3D human face model is proposed and implemented. Based on observation of human face anatomy, the authors define thirty-four NURBS curve features and twenty-one NURBS surface features to represent the human facial components, such as the eyebrows, eyes, nose and mouth. These curve and surface models can simulate different facial expressions by manipulating the control points of the NURBS features. Unlike existing individual-based face modelling methods, this parameterised 3D face model also gives users the ability to imitate any face appearance. In addition, the potential applications of the newly proposed 3D face model are discussed. Besides emotional bio-robots, it is believed that the proposed model can also be applied in other fields such as aesthetic plastic surgery simulation, film and computer game character creation, and criminal investigation and prevention.
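The core mechanism the abstract describes, evaluating a NURBS feature and deforming it by moving or re-weighting control points, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the degree, knot vector, control points and weights below are illustrative values chosen for the example.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).

    Uses half-open intervals, so evaluate strictly inside the knot range
    (the right endpoint u = knots[-1] needs special handling, omitted here).
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    result = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        result += (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        result += (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return result


def nurbs_point(u, ctrl, weights, knots, degree):
    """Evaluate a 3D NURBS curve point: the weighted basis sum over the
    control points, divided by the sum of weighted basis functions."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, degree, u, knots) * w
        den += b
        for k in range(3):
            num[k] += b * pt[k]
    return [c / den for c in num]


# Illustrative degree-2 curve standing in for one facial curve feature
# (e.g. an eyebrow arc): clamped knot vector, three control points.
ctrl = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (2.0, 0.0, 0.0)]
knots = [0, 0, 0, 1, 1, 1]

mid = nurbs_point(0.5, ctrl, [1, 1, 1], knots, degree=2)
# Increasing the middle control point's weight pulls the curve toward it,
# which is how an expression change would deform the feature.
raised = nurbs_point(0.5, ctrl, [1, 3, 1], knots, degree=2)
```

Here `raised[1] > mid[1]`: the curve bulges toward the middle control point as its weight grows, which is the kind of local, control-point-driven deformation the paper uses to simulate expressions.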


2021, Vol 1948 (1), pp. 012053
Author(s): Hongxin Xu, Ruoming Lan, Tianping Li

2011, pp. 295-316
Author(s): Markus Kampmann, Liang Zhang

This chapter introduces a complete framework for automatic adaptation of a 3D face model to a human face for visual communication applications like video conferencing or video telephony. First, facial features in a facial image are estimated. Then, the 3D face model is adapted using the estimated facial features. This framework is scalable with respect to complexity, and two complexity modes are introduced. In the low complexity mode, only eye and mouth features are estimated, and the low complexity face model Candide is adapted. In the high complexity mode, a more detailed face model is adapted using eye and mouth features, eyebrow and nose features, and chin and cheek contours. Experimental results with natural videophone sequences show that this framework enables automatic 3D face model adaptation with high accuracy.
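The low complexity mode's pipeline, estimate eye and mouth features and then fit the model to them, can be sketched with a simple 2D similarity alignment. This is a hypothetical illustration of the adaptation step, not the chapter's actual algorithm; the function names and the use of only the two eye centers are assumptions made for the example.

```python
import math

def similarity_from_eyes(model_eyes, image_eyes):
    """Estimate scale, rotation and translation mapping the model's eye
    centers onto eye centers detected in the image (illustrative step).

    Scale and rotation come from comparing the inter-eye vectors; the
    translation aligns the midpoints of the two eye segments.
    """
    (mx0, my0), (mx1, my1) = model_eyes
    (ix0, iy0), (ix1, iy1) = image_eyes
    mdx, mdy = mx1 - mx0, my1 - my0
    idx, idy = ix1 - ix0, iy1 - iy0
    scale = math.hypot(idx, idy) / math.hypot(mdx, mdy)
    rot = math.atan2(idy, idx) - math.atan2(mdy, mdx)
    c, s = math.cos(rot), math.sin(rot)
    mcx, mcy = (mx0 + mx1) / 2.0, (my0 + my1) / 2.0
    icx, icy = (ix0 + ix1) / 2.0, (iy0 + iy1) / 2.0
    tx = icx - scale * (c * mcx - s * mcy)
    ty = icy - scale * (s * mcx + c * mcy)
    return scale, rot, (tx, ty)


def adapt_vertices(vertices, scale, rot, t):
    """Apply the estimated similarity transform to all 2D model vertices,
    adapting the generic face model to the detected face."""
    c, s = math.cos(rot), math.sin(rot)
    return [(scale * (c * x - s * y) + t[0],
             scale * (s * x + c * y) + t[1]) for x, y in vertices]


# Model eye centers at (-1, 0) and (1, 0); detected image eye centers
# at (0, 0) and (4, 0): the face in the image is twice as large.
scale, rot, t = similarity_from_eyes([(-1, 0), (1, 0)], [(0, 0), (4, 0)])
adapted = adapt_vertices([(-1, 0), (1, 0)], scale, rot, t)
```

In the high complexity mode, the same idea would be extended to a larger landmark set (eyebrows, nose, chin and cheek contours) and a least-squares fit rather than an exact two-point alignment.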

