Robot Transparency and Anthropomorphic Attribute Effects on Human–Robot Interactions

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5722
Author(s):  
Jianmin Wang ◽  
Yujia Liu ◽  
Tianyang Yue ◽  
Chengji Wang ◽  
Jinjing Mao ◽  
...  

Anthropomorphic robots need to maintain effective and emotive communication with humans as automotive agents in order to establish and sustain effective human–robot performance and positive human experiences. Previous research has shown that the characteristics of robot communication positively affect human–robot interaction outcomes such as usability, trust, workload, and performance. In this study, we investigated the characteristics of transparency and anthropomorphism in robotic dual-channel communication, encompassing the voice channel (low or high, varying the amount of textual information provided) and the visual channel (low or high, varying the amount of expressive information provided). The results showed both the benefits and the limitations of increasing transparency and anthropomorphism, demonstrating the importance of implementing transparency methods carefully. The limitations and future directions are discussed.

Robotics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 16 ◽  
Author(s):  
Muhammad Ahsan Gull ◽  
Shaoping Bai ◽  
Thomas Bak

Exoskeleton robotics has ushered in a new era of modern neuromuscular rehabilitation engineering and assistive technology research. The technology promises to improve the upper-limb functionalities required for performing activities of daily living. Exoskeleton technology is evolving quickly but still requires interdisciplinary research to solve technical challenges, e.g., kinematic compatibility and the development of effective human–robot interaction. In this paper, recent developments in upper-limb exoskeletons are reviewed. The key challenges involved in the development of assistive exoskeletons are highlighted by comparing available solutions. This paper provides a general classification, comparisons, and an overview of the mechatronic designs of upper-limb exoskeletons. A brief overview of control modalities for upper-limb exoskeletons is also presented. A discussion on future directions of research is included.


2011 ◽  
Vol 5 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Jessie Y. C. Chen

A military vehicle crew station environment was simulated and a series of three experiments was conducted to examine the workload and performance of the combined position of the gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities (delivered through tactile and/or visual cuing) for the gunnery task might benefit the concurrent robotics and communication tasks and how the concurrent task performance might be affected when the AiTR was unreliable (i.e., false alarm prone or miss prone). Participants’ spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as their modality preference for the AiTR display. Participants’ attentional control was found to significantly affect the way they interacted with unreliable automated systems.


Author(s):  
Antonio Bicchi ◽  
Michele Bavaro ◽  
Gianluca Boccadamo ◽  
Davide De Carli ◽  
Roberto Filippini ◽  
...  

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141877319 ◽  
Author(s):  
S M Mizanoor Rahman ◽  
Ryojun Ikeura

In the first step, a one-degree-of-freedom power assist robotic system is developed for lifting lightweight objects. Dynamics for human–robot co-manipulation are derived that include human cognition, for example, weight perception. A novel admittance control scheme is derived using the weight perception–based dynamics. Human subjects lift a small, lightweight object with the power assist robotic system, and the human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate the human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that including weight perception in the control helps achieve optimum human–robot interaction and performance for a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree-of-freedom power assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as those for lifting the lightweight object. Then, weight perception–based intelligent controls in the forms of model predictive control and vision-based variable admittance control are applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance for a set of soft constraints, and produce human–robot interaction and performance similar to those obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because they are calibrated against those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control.
We also propose a method to adjust the variable admittance control for three degrees of freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls of human friendly, high performance power assist robotic systems for heavy object manipulation in industries.
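The admittance control idea underlying such power assist systems can be sketched minimally as follows. This is an illustrative one-degree-of-freedom sketch of the general technique, not the paper's method: the virtual mass `m_v` and damping `c_v` values are assumptions, whereas the paper tunes its admittance parameters using weight perception–based dynamics.

```python
def admittance_step(f_human, v_prev, m_v=0.5, c_v=2.0, dt=0.01):
    """Map the measured human force to a commanded velocity.

    Rendered (virtual) dynamics:  m_v * dv/dt + c_v * v = f_human
    Integrated with explicit Euler over one control period dt.
    Small m_v and c_v make the object feel lighter than it is,
    which is the power-assist effect.
    """
    a = (f_human - c_v * v_prev) / m_v   # virtual acceleration
    return v_prev + a * dt               # commanded velocity for the robot

# With no applied force the commanded velocity stays at rest:
v = admittance_step(0.0, 0.0)

# A sustained 5 N upward pull accelerates the end effector toward
# the steady-state velocity f / c_v = 2.5 m/s:
for _ in range(100):
    v = admittance_step(5.0, v)
```

A variable admittance control, as compared in the abstract, would make `m_v` and `c_v` functions of the interaction state (e.g., measured force or estimated human intent) rather than constants.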


2020 ◽  
Vol 14 ◽  
Author(s):  
Katharina Kühne ◽  
Martin H. Fischer ◽  
Yuefang Zhou

Background: The increasing involvement of social robots in human lives raises the question of how humans perceive social robots. Little is known about human perception of synthesized voices.
Aim: To investigate which synthesized voice parameters predict the speaker's eeriness and voice likability; to determine whether individual listener characteristics (e.g., personality, attitude toward robots, age) influence synthesized voice evaluations; and to explore which paralinguistic features subjectively distinguish humans from robots/artificial agents.
Methods: 95 adults (62 female) listened to randomly presented audio clips of three categories: synthesized (Watson, IBM), humanoid (robot Sophia, Hanson Robotics), and human voices (five clips/category). Voices were rated on intelligibility, prosody, trustworthiness, confidence, enthusiasm, pleasantness, human-likeness, likability, and naturalness. Speakers were rated on appeal, credibility, human-likeness, and eeriness. Participants' personality traits, attitudes toward robots, and demographics were obtained.
Results: The human voice and human speaker characteristics received reliably higher scores on all dimensions except eeriness. Synthesized voice ratings were positively related to participants' agreeableness and neuroticism. Females rated synthesized voices more positively on most dimensions. Surprisingly, interest in social robots and attitudes toward robots played almost no role in voice evaluation. Contrary to the expectations of an uncanny valley, higher ratings of human-likeness for both the voice and the speaker characteristics were associated with lower eeriness. Moreover, when the speaker's voice was more humanlike, it was liked more by participants, although this held for only one of the synthesized voices. Finally, pleasantness and trustworthiness of the synthesized voice predicted the likability of the speaker's voice. Qualitative content analysis identified intonation, sound, emotion, and imageability/embodiment as diagnostic features.
Discussion: Humans clearly prefer human voices, but manipulating diagnostic speech features might increase acceptance of synthesized voices and thereby support human–robot interaction. There is limited evidence that human-likeness of a voice is negatively linked to the perceived eeriness of the speaker.


2021 ◽  
Vol 11 (5) ◽  
pp. 2358
Author(s):  
Mitsuhiko Kimoto ◽  
Takamasa Iio ◽  
Masahiro Shiomi ◽  
Katsunori Shimohara

This study proposes a robot conversation strategy involving speech and gestures to improve a robot's indicated object recognition, i.e., its recognition of an object indicated by a human. Research aimed at improving indicated object recognition performance falls into two main approaches: development and interactive. The development approach addresses new devices or algorithms. The interactive approach improves performance through human–robot interaction by decreasing the variability and ambiguity of the references. Inspired by the findings on entrainment and entrainment inhibition, this study proposes a robot conversation strategy that takes the interactive approach. Entrainment is the phenomenon in which people unconsciously tend to mimic the words and/or gestures of their interlocutor; entrainment inhibition is the opposite phenomenon, in which people reduce the amount of information in their words and gestures when their interlocutor provides excess information. Based on these phenomena, we designed a robot conversation strategy that elicits clear references. We experimentally compared this strategy with an alternative interactive strategy in which a robot explicitly requests clarification when a human refers to an object. We obtained the following findings: (1) the proposed strategy clarifies human references and improves indicated object recognition performance, and (2) the proposed strategy forms better impressions than the interactive strategy that explicitly requests clarification.


Author(s):  
Kimberly A. Pollard ◽  
Stephanie M. Lukin ◽  
Matthew Marge ◽  
Ashley Foots ◽  
Susan G. Hill

Industry, military, and academia are showing increasing interest in collaborative human-robot teaming in a variety of task contexts. Designing effective user interfaces for human-robot interaction is an ongoing challenge, and a variety of single- and multiple-modality interfaces have been explored. Our goal is to develop a bi-directional natural language interface for remote human-robot collaboration in physically situated tasks. When combined with a visual interface and audio cueing, we intend for the natural language interface to provide a naturalistic user experience that requires little training. Building the language portion of this interface first requires understanding how potential users would speak to the robot. In this paper, we describe our elicitation of minimally-constrained robot-directed language, observations about the users' language behavior, and future directions for constructing an automated robotic system that can accommodate these language needs.


2019 ◽  
Vol 6 (2) ◽  
pp. 103
Author(s):  
Erik Danford Klein ◽  
Gary Backous ◽  
Thomas M. Schnieders ◽  
Zhonglun Wang ◽  
Richard T. Stone
