Trust-Aware Decision Making for Human-Robot Collaboration

2020 ◽  
Vol 9 (2) ◽  
pp. 1-23 ◽  
Author(s):  
Min Chen ◽  
Stefanos Nikolaidis ◽  
Harold Soh ◽  
David Hsu ◽  
Siddhartha Srinivasa
Safety ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. 75
Author(s):  
Guilherme Deola Borges ◽  
Angélica Muffato Reis ◽  
Rafael Ariente Neto ◽  
Diego Luiz de Mattos ◽  
André Cardoso ◽  
...  

Human–Robot Collaboration (HRC) systems are often implemented to reduce the risk of developing Work-related Musculoskeletal Disorders (WMSD) and to increase productivity. The challenge is to implement an industrial HRC system that manages both factors, since the non-linear behavior of complex systems can produce counterintuitive effects. The aim of this study was therefore to design a decision-making framework that incorporates key ergonomic methods and uses a computational model for simulation. The framework considers the main systemic influences of introducing a collaborative robot (cobot) into a production system and simulates scenarios of productivity and WMSD risk. To verify whether the computational model would be useful within the framework, a case study was conducted at a manual assembly workstation. The results show that both cycle time and WMSD risk depend on the Level of Collaboration (LoC). The proposed framework helps decide which cobot to implement in an industrial assembly process. System dynamics was used to understand the actual behavior of all factors and to predict scenarios. Finally, the framework provides a clear roadmap for the future development of an industrial HRC system, substantially reducing the risk involved in decision-making.
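The trade-off the abstract describes (cycle time and WMSD risk both varying with the Level of Collaboration) can be illustrated with a toy system-dynamics integration. All rate constants, functional forms, and the `simulate` helper below are illustrative assumptions for the sketch, not values or equations from the study.

```python
def simulate(loc, steps=100, dt=0.1):
    """Toy stock-and-flow model: integrate operator fatigue (a proxy for
    WMSD risk) and cycle time for a given Level of Collaboration loc in [0, 1].
    Coefficients are made up for illustration."""
    fatigue = 0.0        # accumulated physical load on the operator
    cycle_time = 60.0    # seconds per assembly, initial manual baseline
    for _ in range(steps):
        # More collaboration offloads strenuous motions, so fatigue grows slower.
        fatigue += dt * (1.0 - 0.8 * loc)
        # Cobot assistance pulls the cycle time toward a floor of 30 s,
        # with diminishing returns as it approaches that floor.
        cycle_time += dt * (-5.0 * loc * (cycle_time - 30.0) / 60.0)
    return cycle_time, fatigue

# Compare a low-collaboration and a high-collaboration scenario.
low_loc = simulate(0.1)
high_loc = simulate(0.9)
```

In this sketch a higher LoC yields both a shorter cycle time and lower accumulated fatigue; the study's point is that real systems need not behave this monotonically, which is why simulation before implementation is valuable.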


2021 ◽  
Vol 11 (11) ◽  
pp. 5212
Author(s):  
Angeliki Zacharaki ◽  
Ioannis Kostavelis ◽  
Ioannis Dokas

Over the last decades, collaborative robots capable of operating outside safety cages have come into wide industrial use to assist humans with mundane and strenuous manufacturing tasks. Although such robots are inherently safe by design, they are commonly accompanied by external sensors and other cyber-physical systems that facilitate close cooperation with humans, and these frequently render the collaborative ecosystem unsafe and prone to hazards. We introduce a method that uses a partially observable Markov decision process (POMDP) to combine the nominal actions of the system with the unsafe control actions identified by System Theoretic Process Analysis (STPA). A decision-making mechanism that continually steers the system toward a safer state is realized by providing situation awareness about the safety levels of the collaborative ecosystem, associating the system's safety awareness with specific groups of selected actions. The POMDP compensates for the partial observability and uncertainty of the current state of the collaborative environment and produces safety-screening policies that move the system from unsafe to safe states in real time during operation. The theoretical framework is assessed on a simulated human–robot collaborative scenario and proved capable of identifying loss and success scenarios.
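The core mechanism the abstract describes, acting under uncertainty about whether the collaborative cell is safe, can be sketched as a one-step belief update and greedy action selection over a two-state model. The observation model, the action set (including a `stop` action standing in for an STPA-style mitigation), and all probabilities below are illustrative assumptions, not the paper's actual model.

```python
# P(observation | state): sensors are noisy, so an "alarm" can occur
# in a safe state and a "clear" reading in an unsafe one.
OBS_MODEL = {
    "safe":   {"clear": 0.9, "alarm": 0.1},
    "unsafe": {"clear": 0.3, "alarm": 0.7},
}

# Expected safety value of each action in each state: nominal task
# actions versus mitigating actions (hypothetical numbers).
ACTION_VALUE = {
    "continue_task": {"safe": 1.0, "unsafe": -2.0},
    "slow_down":     {"safe": 0.5, "unsafe": 0.5},
    "stop":          {"safe": 0.0, "unsafe": 1.0},
}

def update_belief(belief, obs):
    """Bayes update of P(state) given a noisy observation."""
    post = {s: belief[s] * OBS_MODEL[s][obs] for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def best_action(belief):
    """Greedy one-step policy: pick the action with the highest
    expected safety value under the current belief."""
    return max(ACTION_VALUE,
               key=lambda a: sum(belief[s] * ACTION_VALUE[a][s]
                                 for s in belief))

# Starting from an uninformative belief, an alarm shifts probability
# mass toward "unsafe" and the policy selects the mitigating action.
belief = update_belief({"safe": 0.5, "unsafe": 0.5}, "alarm")
action = best_action(belief)
```

A full POMDP policy would plan over action sequences rather than greedily over one step, but the belief-tracking idea, deciding from a probability distribution over hidden safety states instead of a single assumed state, is the same.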


2018 ◽  
Vol 41 ◽  
Author(s):  
Patrick Simen ◽  
Fuat Balcı

Abstract: Rahnev & Denison (R&D) argue against normative theories and in favor of a more descriptive “standard observer model” of perceptual decision making. We agree with the authors in many respects, but we argue that optimality (specifically, reward-rate maximization) has proved demonstrably useful as a hypothesis, contrary to the authors’ claims.


2018 ◽  
Vol 41 ◽  
Author(s):  
David Danks

Abstract: The target article uses a mathematical framework derived from Bayesian decision making to demonstrate suboptimal decision making but then attributes psychological reality to the framework components. Rahnev & Denison's (R&D) positive proposal thus risks ignoring plausible psychological theories that could implement complex perceptual decision making. We must be careful not to slide from success with an analytical tool to the reality of the tool components.


2018 ◽  
Vol 41 ◽  
Author(s):  
Kevin Arceneaux

Abstract: Intuitions guide decision-making, and looking to the evolutionary history of humans illuminates why some behavioral responses are more intuitive than others. Yet a place remains for cognitive processes to second-guess intuitive responses – that is, to be reflective – and individual differences abound in automatic, intuitive processing as well.


2014 ◽  
Vol 38 (01) ◽  
pp. 46
Author(s):  
David R. Shanks ◽  
Ben R. Newell
