Generation of an Assembly-Task Model Analyzing Human Demonstration

2000 ◽ Vol 18 (4) ◽ pp. 535-544
Author(s): Masayuki Tsuda, Tomoichi Takahashi, Hiroyuki Ogata
Procedia CIRP ◽ 2020 ◽ Vol 93 ◽ pp. 1109-1114
Author(s): Sebastian Pimminger, Werner Kurschl, Lisa Panholzer, Thomas Neumayr, Mirjam Augstein, ...

2009
Author(s): Sue A. Ferguson, William S. Marras, W. Gary Allread, Gregory G. Knapik, Kimberly A. Vandlen, ...

Author(s): Qing Liao, Heyan Chai, Hao Han, Xiang Zhang, Xuan Wang, ...

Electronics ◽ 2021 ◽ Vol 10 (11) ◽ pp. 1317
Author(s): Alejandro Chacón, Pere Ponsa, Cecilio Angulo

In human–robot collaborative assembly tasks, skills must be properly balanced to maximize productivity. Human operators contribute dexterous manipulation, reasoning, and problem-solving abilities, but the workload assigned to them (cognitive, physical, and temporal) should remain bounded. Collaborative robots provide accurate, quick, and precise physical work, but they have limited cognitive interaction capacity and low dexterity. In this work, an experimental setup is introduced in the form of a laboratory case study in which the task performance of the human–robot team and the mental workload of the humans are analyzed for an assembly task. We demonstrate that an operator engaged in a highly demanding primary cognitive task can also handle a secondary assembly task, performed mainly by the robot but calling on some human cognitive and dexterous capacities, with very low impact on the primary task. In this configuration, skills are well balanced, and the operator is satisfied with the working conditions.


Metals ◽ 2021 ◽ Vol 11 (6) ◽ pp. 870
Author(s): Robby Neven, Toon Goedemé

Automating the visual inspection of sheet steel can improve quality and reduce costs during production. While many manufacturers still rely on manual or traditional inspection methods, deep learning-based approaches have proven their efficiency. In this paper, we go beyond the state of the art in this domain by proposing a multi-task model that performs both pixel-based defect segmentation and severity estimation of the defects in a single two-branch network. Additionally, we show how incorporating the production process parameters improves the model’s performance. After manually constructing a real-life industrial dataset, we first implemented and trained two single-task models performing the defect segmentation and severity estimation tasks separately. Next, we compared this to a multi-task model that performs the two tasks simultaneously. By combining the tasks into one model, the two tasks improved by 2.5% and 3% mIoU, respectively. In the next step, we extended the multi-task model using sensor fusion with the process parameters, which yielded a further mIoU increase of 6.8% and 2.9% for the defect segmentation and severity estimation tasks, respectively.
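The gains above are reported in mean intersection-over-union (mIoU), the standard segmentation metric: for each class, IoU is the overlap between predicted and ground-truth pixels divided by their union, and mIoU averages this over classes. A minimal sketch of that computation over flattened pixel-label lists (the function name and toy labels are illustrative, not from the paper):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt.

    pred, gt: flat lists of per-pixel class labels of equal length.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: a binary defect map (0 = background, 1 = defect), flattened
pred = [0, 0, 1, 1, 0, 1]
gt   = [0, 1, 1, 1, 0, 0]
print(mean_iou(pred, gt, num_classes=2))  # → 0.5
```

A reported "+2.5% mIoU" is simply an increase of 0.025 in this averaged score on the evaluation set.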

