Action recognition for human robot interaction in industrial applications

Author(s):  
Sharath Chandra Akkaladevi ◽  
Christoph Heindl

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Qiubo Zhong ◽  
Caiming Zheng ◽  
Haoxiang Zhang

A novel posture motion-based spatiotemporal fused graph convolutional network (PM-STGCN) is presented for skeleton-based action recognition. Existing methods for skeleton-based action recognition independently calculate joint information within a single frame and motion information of joints between adjacent frames from the human body skeleton structure, and then combine the classification results. However, they do not take into consideration the complicated temporal and spatial relationships within a human action sequence, so they are not very effective at distinguishing similar actions. In this work, we enhance the ability to distinguish similar actions by focusing on spatiotemporal fusion and adaptive extraction of highly discriminative features. Firstly, the local posture motion-based temporal attention module (LPM-TAM) is proposed to suppress skeleton sequence data with a low amount of motion in the temporal domain, concentrating the representation of motion-posture features. In addition, the local posture motion-based channel attention module (LPM-CAM) is introduced to exploit strongly discriminative representations between similar action classes. Finally, the posture motion-based spatiotemporal fusion (PM-STF) module is constructed, which fuses the spatiotemporal skeleton data by filtering out low-information sequences and adaptively enhancing highly discriminative posture-motion features. Extensive experiments have been conducted, and the results demonstrate that the proposed model is superior to commonly used action recognition methods. The designed human-robot interaction system based on action recognition achieves competitive performance compared with a speech-based interaction system.
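The abstract does not specify how the attention modules are computed; as a rough, hypothetical illustration of the core idea behind temporal motion attention (down-weighting skeleton frames with little motion), here is a minimal NumPy sketch. The function name and the softmax-over-motion weighting are assumptions for illustration, not the paper's actual LPM-TAM formulation:

```python
import numpy as np

def temporal_motion_attention(seq):
    """Reweight skeleton frames by motion magnitude (illustrative sketch).

    seq: array of shape (T, J, C) -- T frames, J joints, C coordinates.
    Returns the reweighted sequence and per-frame attention weights.
    """
    # Per-frame motion: mean joint displacement relative to the previous frame.
    motion = np.zeros(seq.shape[0])
    motion[1:] = np.linalg.norm(seq[1:] - seq[:-1], axis=-1).mean(axis=-1)
    # Softmax over time: low-motion frames receive small weights.
    w = np.exp(motion - motion.max())
    w /= w.sum()
    # Broadcast the frame weights over joints and coordinates.
    return seq * w[:, None, None], w

# Tiny example: 4 frames, 2 joints, 2D coordinates; only frame 2 moves.
seq = np.zeros((4, 2, 2))
seq[2] += 5.0
weighted, w = temporal_motion_attention(seq)
```

In this toy sequence, the static frames (0 and 1) end up with the smallest weights, while the frames involved in the jump dominate, which is the suppression behaviour the abstract attributes to the temporal-attention stage.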


2018 ◽  
Vol 51 (11) ◽  
pp. 66-71 ◽  
Author(s):  
Valeria Villani ◽  
Fabio Pini ◽  
Francesco Leali ◽  
Cristian Secchi ◽  
Cesare Fantuzzi

Author(s):  
David A. Guerra-Zubiaga ◽  
Navid Nasajpour-Esfahani ◽  
Basma Siddiqui ◽  
Kevin Kamperman

Abstract It is not only important to synthesize instruments, controls, and robotics; it is also essential to connect these elements to people to achieve the future of automation. Whether in an operating room with surgical robots or in an earthquake disaster zone where an operator is aided by search-and-rescue drones, interaction between machines and humans is becoming central to increasing productivity. Industry 4.0 trends such as the Internet of Things (IoT) and digital manufacturing are early adopters of human-machine interfaces that support manufacturing automation. Such models must consider various aspects of process implementation, such as explicit, implicit, and tacit knowledge, to properly mimic a human’s performance. However, most inquiries in this field use explicit information instead of tacit knowledge, owing to the unfulfilled need for an industrial tacit knowledge framework. Tacit knowledge is difficult to learn and transfer if an operator’s logic is never revealed. In response, this research provides a knowledge model to structure, categorize, and reuse tacit knowledge for advanced manufacturing operations. The model is implemented in a human-robot interaction scenario by capturing valuable experiences using digital tools such as Tecnomatix, for further reuse in a variety of industrial applications.
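The abstract gives no schema for the knowledge model; as a loose illustration of what structuring, categorizing, and reusing operator knowledge might look like, here is a minimal Python sketch. All class and method names are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass

VALID_CATEGORIES = ("explicit", "implicit", "tacit")

@dataclass
class KnowledgeEntry:
    """One captured piece of operator experience (hypothetical structure)."""
    task: str          # e.g. "weld-seam inspection"
    category: str      # "explicit", "implicit", or "tacit"
    description: str   # the captured rationale or rule of thumb

class KnowledgeBase:
    """Store categorized entries and retrieve them for reuse by task."""

    def __init__(self):
        self.entries = []

    def capture(self, entry):
        # Reject entries outside the explicit/implicit/tacit categorization.
        if entry.category not in VALID_CATEGORIES:
            raise ValueError(f"unknown category: {entry.category}")
        self.entries.append(entry)

    def reuse(self, task):
        # Retrieve all captured experience relevant to a given operation.
        return [e for e in self.entries if e.task == task]
```

The point of the sketch is only the workflow the abstract names: capture an experience, tag it with a knowledge category, and query it back when the same operation recurs.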


2009 ◽  
Author(s):  
Matthew S. Prewett ◽  
Kristin N. Saboe ◽  
Ryan C. Johnson ◽  
Michael D. Coovert ◽  
Linda R. Elliott

2010 ◽  
Author(s):  
Eleanore Edson ◽  
Judith Lytle ◽  
Thomas McKenna
