Multi-modal user interaction method based on gaze tracking and gesture recognition

2013 ◽ Vol 28 (2) ◽ pp. 114-126
Author(s): Heekyung Lee, Seong Yong Lim, Injae Lee, Jihun Cha, Dong-Chan Cho, ...
2015 ◽ Vol 74 (7) ◽ pp. 2371-2389
Author(s): Jong-Jin Jung, Ji-Yeon Kim, Hyun-Sook Chung, Pan-Seop Shin

Author(s): Mauro Teofilo, Lucas Cordeiro, Raimundo Barreto, Jose Raimundo Pereira, Ayres Mardem, ...

Author(s): Xuyue Yin, Xiumin Fan, Jiajie Wang, Rui Liu, Qiang Wang

The assembly process of complex electromechanical products can be complicated and time-consuming because of high quality demands. To improve the efficiency of the manual assembly process, this paper proposes an automatic interaction method using part recognition for augmented reality (AR) assembly guidance, which improves both the accuracy of part picking and the interaction efficiency of the AR guidance system. Taking sample images of similar parts as input and part types as output, a deep neural network model for part recognition, Part R-CNN, is built on Faster R-CNN and further fine-tuned by backpropagation. By recognizing the assembly part, the augmented guidance information for the corresponding part's assembly step is triggered in real time without direct user interaction. Experimental results show that the deep neural network based part recognition method reaches 94% mean average precision with an average recognition speed of 200 ms per image frame, and the average speed of AR guidance content triggering is about 20 fps. The overall system performance satisfies the accuracy and real-time requirements of the AR-aided assembly system.
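The triggering logic the abstract describes — a recognized part type automatically selects the guidance content to display, with no direct user interaction — can be sketched as below. This is a minimal illustration under assumed names (`ARGuidanceTrigger`, `GuidanceStep`, the confidence threshold), not the authors' implementation; the real system feeds detections from the Part R-CNN model.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class GuidanceStep:
    """One step of augmented assembly guidance for a single part type."""
    part_type: str
    instructions: str


class ARGuidanceTrigger:
    """Maps a recognized part type to its AR guidance content.

    `min_confidence` filters out low-confidence recognitions so guidance
    is only triggered when the detector is reasonably certain.
    """

    def __init__(self, steps: Dict[str, GuidanceStep],
                 min_confidence: float = 0.9):
        self.steps = steps
        self.min_confidence = min_confidence

    def on_recognition(self, part_type: str,
                       confidence: float) -> Optional[GuidanceStep]:
        # Trigger guidance automatically: no direct user interaction.
        if confidence < self.min_confidence:
            return None  # ignore uncertain detections
        return self.steps.get(part_type)  # None if part type is unknown


# Hypothetical guidance table for two part types.
trigger = ARGuidanceTrigger({
    "bolt_M6": GuidanceStep("bolt_M6", "Insert M6 bolt into flange hole 3."),
    "bracket_A": GuidanceStep("bracket_A", "Align bracket A with the rail."),
})

step = trigger.on_recognition("bolt_M6", confidence=0.97)
print(step.instructions if step else "no guidance triggered")
```

In a per-frame loop, `on_recognition` would be called once per detection, so the lookup must stay cheap; a dictionary keyed by part type keeps triggering well within the ~20 fps budget reported in the abstract.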

