Humanoid Robot Head Interaction Based on Face Recognition

Author(s):  
Xiai Chen ◽  
Hong Xu ◽  
Ling Wang ◽  
Bingrui Wang ◽  
Chenna Yang
Author(s):  
Lixiao Huang ◽  
Daniel McDonald ◽  
Douglas Gillan

The service and entertainment industries advocate the use of humanoid robots; however, direct interaction experience with them remains uncommon. To understand how humans interact with humanoid robots, the present study deployed a robot capable of face recognition and conversation in a park and a school setting to explore humans' behavioral patterns, dialog themes, and emotional responses. Results showed that behavioral patterns included looking at the robot, talking to the robot, talking to others about the robot, and, among adults, taking photos. School children showed strong interest in interacting with the robot and rich emotional responses. Major dialog themes included greeting, asking about the robot's identity, testing the robot's knowledge and capabilities, asking about and stating preferences and opinions, and correcting the robot's errors. Observed emotional responses included liking, surprise, excitement, fright, frustration, and awkwardness. Humans interacted with the robot in ways both similar to and different from how they interact with other humans. The educational value and design implications for humanoid robots are discussed.


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Jizheng Yan ◽  
Zhiliang Wang ◽  
Yan Yan

Emotional robots have long been a focus of artificial intelligence (AI), and intelligent control of robot facial expressions is a hot research topic. This paper focuses on the design of a humanoid robot head, carried out in three steps. The first step addresses the uncanny valley, identifying the relationship between human likeness and affinity so that the valley can be avoided. The second step addresses the correspondence between the human face and the robot head: we analyze the similarities and differences between humans and robots, and we explore their shared basis and mechanisms through the Facial Action Coding System (FACS), which guides the design of humanoid expressions. Building on the first two steps, the third step constructs the robot head; a series of experiments verifies that the head can display several humanoid expressions, and in human-robot interaction people are surprised by the robot head's expressions and report feeling happy.
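A FACS-guided design like the one described above typically maps facial action units (AUs) to actuator targets. The following is a minimal sketch of that idea; the AU numbers follow FACS, but the servo names, angle ranges, and expression definitions are illustrative assumptions, not the authors' hardware.

```python
# Hypothetical sketch: driving a robot head's servos from FACS action
# units (AUs). Servo names and angle ranges are assumptions.

# Each AU drives one or more servos linearly from a neutral angle
# toward a maximum angle, scaled by the AU's intensity (0.0-1.0).
AU_TO_SERVOS = {
    1:  [("inner_brow", 0.0, 30.0)],   # AU1: inner brow raiser
    12: [("mouth_left", 0.0, 25.0),    # AU12: lip corner puller (smile)
         ("mouth_right", 0.0, 25.0)],
    26: [("jaw", 0.0, 20.0)],          # AU26: jaw drop
}

# An expression is a set of (AU, intensity) pairs.
EXPRESSIONS = {
    "happy":    {12: 1.0, 26: 0.3},
    "surprise": {1: 1.0, 26: 0.8},
}

def servo_targets(expression):
    """Return {servo_name: angle_in_degrees} for a named expression."""
    targets = {}
    for au, intensity in EXPRESSIONS[expression].items():
        for servo, neutral, maximum in AU_TO_SERVOS[au]:
            # Linear interpolation between neutral and maximum angle.
            targets[servo] = neutral + intensity * (maximum - neutral)
    return targets

print(servo_targets("happy"))    # angles in degrees for each servo
```

A real controller would additionally interpolate the targets over time so expressions blend smoothly rather than snapping from pose to pose.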


2016 ◽  
Vol 3 ◽  
Author(s):  
Samer Alfayad ◽  
Mohamad El Asswad ◽  
A. Abdellatif ◽  
Fethi B. Ouezdou ◽  
Arnaud Blanchard ◽  
...  

2010 ◽  
Vol 07 (03) ◽  
pp. 429-450
Author(s):  
ALBERTO PETRILLI-BARCELÓ ◽  
HERIBERTO CASARRUBIAS-VARGAS ◽  
MIGUEL BERNAL-MARIN ◽  
EDUARDO BAYRO-CORROCHANO ◽  
RÜDIGER DILLMAN

In this article, we propose a conformal model for 3D visual perception. In our model, the two camera views are fused in an extended 3D horopter model. For visual simultaneous localization and mapping (SLAM), an extended Kalman filter (EKF) is used for 3D reconstruction and for estimating the robot head pose. In addition, the Viola-Jones machine-learning technique is applied to improve robot relocalization. The 3D horopter, EKF-based SLAM, and Viola-Jones detection are key elements for building a robust real-time perception system for humanoid robots. A variety of experiments demonstrates the efficiency of our system for humanoid robot vision.
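The EKF at the core of such a SLAM system alternates a predict step (propagating the pose estimate through the motion model) with an update step (correcting it against a measurement). The sketch below shows one predict/update cycle for a simple linear state; the matrices F, Q, H, and R are illustrative assumptions, not the paper's models, and landmark-state augmentation is omitted.

```python
import numpy as np

# Minimal sketch of the EKF predict/update cycle used in EKF-based SLAM,
# reduced to a linear 2D pose with a direct observation of the state.
# F, Q, H, R below are illustrative assumptions.

def ekf_predict(x, P, F, Q):
    """Propagate state mean x and covariance P through the motion model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Correct the prediction with measurement z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One cycle: constant-position motion model, direct state observation.
x = np.array([0.0, 0.0])               # (x, y) pose estimate
P = np.eye(2)                          # initial uncertainty
F = np.eye(2); Q = np.eye(2) * 0.01    # motion model and process noise
H = np.eye(2); R = np.eye(2) * 0.1     # measurement model and noise

x, P = ekf_predict(x, P, F, Q)
x, P = ekf_update(x, P, np.array([1.0, 0.5]), H, R)
print(x, P[0, 0])  # estimate moves toward the measurement; variance shrinks
```

In a full SLAM formulation the state vector is augmented with landmark positions, the motion and measurement models are nonlinear, and F and H become their Jacobians evaluated at the current estimate.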

