Text-driven Mouth Animation for Human Computer Interaction With Personal Assistant

Author(s):  
Yliess Hati ◽  
Francis Rousseaux ◽  
Clément Duhart

Personal assistants are becoming more pervasive in our environments but still do not provide natural interactions. Their lack of realism in terms of expressiveness and their lack of visual feedback can create frustrating experiences and make users lose patience. In this sense, we propose an end-to-end trainable neural architecture for text-driven 3D mouth animation. Previous work showed that such architectures provide better realism and could open the door to integrated affective Human Computer Interfaces (HCI). Our study shows that such visual feedback significantly improves comfort for 78% of the candidates while slightly improving their time perception.
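The core mapping the network learns, from text to a time-aligned sequence of mouth shapes, can be illustrated with a hypothetical rule-based stand-in. The character-to-viseme table and function names below are ours, not the paper's; the actual system is an end-to-end trained neural model rather than a lookup:

```python
# Illustrative, rule-based stand-in for the learned text-to-mouth-shape
# mapping. The real system is an end-to-end trainable neural network;
# this toy table only shows the kind of input/output involved.
VISEME_TABLE = {
    # hypothetical character-to-viseme classes
    "a": "jaw_open", "e": "spread", "i": "spread", "o": "rounded",
    "u": "rounded", "m": "lips_closed", "b": "lips_closed",
    "p": "lips_closed", "f": "lip_teeth", "v": "lip_teeth",
}

def text_to_viseme_track(text, default="neutral"):
    """Return one viseme label per alphabetic character: a crude frame
    track that a 3D mouth rig could interpolate between."""
    return [VISEME_TABLE.get(ch.lower(), default)
            for ch in text if ch.isalpha()]

track = text_to_viseme_track("Move")
# one mouth-shape label per character of the input word
```

A learned model would replace this table with a network producing continuous blendshape weights per frame, conditioned on the whole utterance rather than single characters.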

Author(s):  
Jia Zhang ◽  
Sheng-Li Xu ◽  
Fang Deng

An event-driven on-vehicle intelligent human-computer interface is proposed to address the complexity of on-vehicle human-computer interaction. After a needs analysis of human-computer interaction on the vehicle platform, the framework of the intelligent human-computer interface is established, the system's modules and workflows are designed, and reasoning based on a fuzzy cognitive map (FCM) is implemented. The on-vehicle intelligent human-computer interface helps users complete interactive operations unrelated to driving. Furthermore, the system analyzes the available information, predicts the information the user will require, and displays it on the interface. The on-vehicle intelligent human-computer interface therefore not only meets the user's demand for secondary interactive tasks but also preserves driving performance and safety.
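FCM reasoning of the kind the abstract mentions iterates a vector of concept activations through a signed weight matrix and a squashing function until the activations settle. The concept names and edge weights below are hypothetical, chosen only to suggest how driving context could gate secondary information; they are not taken from the paper:

```python
import math

def fcm_step(state, weights):
    """One synchronous FCM update: each concept keeps its own activation
    as a memory term, adds the weighted influence of the other concepts,
    and is squashed by a sigmoid into (0, 1)."""
    new_state = {}
    for j in state:
        total = state[j]  # self-memory term
        for i in state:
            if i != j:
                total += state[i] * weights.get((i, j), 0.0)
        new_state[j] = 1.0 / (1.0 + math.exp(-total))
    return new_state

def fcm_infer(state, weights, steps=20):
    """Iterate the update rule; sigmoid FCMs of this form typically
    converge to a fixed point within a few steps."""
    for _ in range(steps):
        state = fcm_step(state, weights)
    return state

# Hypothetical concepts for an on-vehicle interface: a complex route
# raises driver workload, which in turn suppresses the priority of
# secondary (non-driving) information; an explicit user request raises it.
weights = {
    ("route_complexity", "driver_workload"): 0.7,
    ("driver_workload", "secondary_info_priority"): -0.6,
    ("user_request", "secondary_info_priority"): 0.5,
}
state = {"route_complexity": 0.9, "driver_workload": 0.2,
         "secondary_info_priority": 0.5, "user_request": 0.8}
result = fcm_infer(state, weights)
```

The interface would then act on the converged activations, for example deferring a secondary task while `driver_workload` stays high.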


Author(s):  
Antonio G. Sestito ◽  
Tyler M. Frasca ◽  
Aidan O’Rourke ◽  
Lili Ma ◽  
Douglas E. Dow

Controlling remote robots is a difficult task for a human computer interface (HCI). Remote control enables tasks to be accomplished when the human operator cannot be physically present, for example because of safety concerns or because the expert is located elsewhere. This paper presents a method for using an Oculus Rift to improve HCI for telerobotic control. Using the Oculus, an operator can become immersed in the robot's environment and more naturally control the desired position of a remotely positioned vision system via head movements. To provide the appropriate visual feedback, a three-axis gimbal was implemented as a test platform. Through software-implemented motion tracking, the response of the Oculus was compared to that of a mouse, demonstrating the efficiency of the proposed system over comparable HCIs.
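The head-movement-to-gimbal coupling described above can be sketched as a simple mapping from tracked head orientation to clamped axis setpoints. The axis names, limit values, and function names here are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: map tracked head orientation (Euler angles in
# degrees) onto a three-axis gimbal, clamped to mechanical limits.
# The limit values below are illustrative, not from the paper.
GIMBAL_LIMITS = {
    "yaw": (-170.0, 170.0),
    "pitch": (-90.0, 90.0),
    "roll": (-45.0, 45.0),
}

def clamp(value, lo, hi):
    """Restrict value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def head_to_gimbal(yaw, pitch, roll):
    """Convert head-tracker Euler angles into gimbal setpoints,
    respecting each axis's mechanical range."""
    angles = {"yaw": yaw, "pitch": pitch, "roll": roll}
    return {axis: clamp(angles[axis], *GIMBAL_LIMITS[axis])
            for axis in GIMBAL_LIMITS}

setpoints = head_to_gimbal(200.0, -30.0, 10.0)
# yaw exceeds the mechanical range and is clamped; pitch and roll pass through
```

A real implementation would read the headset pose each frame and stream these setpoints to the gimbal's servo controllers, with smoothing to avoid jitter.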


2000 ◽  
Vol 44 (12) ◽  
pp. 2-762-2-765 ◽  
Author(s):  
Pawan R. Vora

Increasing focus on e-commerce will change the way we look at human-computer interface design. In particular, we will need to be more customer-centric, in addition to being user-centric, and incorporate both usability and business objectives in our designs. Our designs will need to consider the end-to-end customer experience rather than focusing simply on the functional objectives of the application itself. That customer experience will determine whether users/customers view the company as a viable and trustworthy brand both on and off the Internet. To achieve these goals, we will need to address some fundamental issues related to the ART (Access, Relationship, and Trust) of designing interfaces. Although our profession is well suited for this role because of our user-centric roots, the onus will be on us to step up to the challenge and own the end-to-end customer experience.

