Analysis of Physical Human–Robot Interaction for Motor Learning with Physical Help

2008 ◽  
Vol 5 (4) ◽  
pp. 213-223 ◽  
Author(s):  
Shuhei Ikemoto ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

In this paper, we investigate physical human–robot interaction (PHRI) as an important extension of traditional HRI research. The aim of this research is to develop a motor learning system that uses physical help from a human helper. We first propose a new control system that takes advantage of inherent joint flexibility. This control system is applied on a new humanoid robot called CB2. In order to clarify the difference between successful and unsuccessful interaction, we conduct an experiment in which a human subject has to help the CB2 robot in its rising-up motion. We then develop a new measure that captures the difference between smooth and non-smooth physical interactions. An analysis of the experiment’s data, based on the introduced measure, shows significant differences between experts and beginners in human–robot interaction.
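The abstract does not reproduce the authors' measure itself. As a purely illustrative stand-in, interaction smoothness is often quantified with a dimensionless squared-jerk metric over a motion trace; the sketch below shows that generic proxy, not the measure introduced in the paper.

```python
import numpy as np

def dimensionless_jerk(position, dt):
    """Dimensionless squared-jerk smoothness metric for a 1-D motion trace.

    Lower values indicate smoother motion. This is a generic proxy for
    interaction smoothness, not the measure proposed in the paper.
    """
    velocity = np.gradient(position, dt)
    jerk = np.gradient(np.gradient(velocity, dt), dt)
    duration = dt * (len(position) - 1)
    peak_speed = np.max(np.abs(velocity))
    # Integrate squared jerk, then normalise by duration and peak speed
    return np.sum(jerk ** 2) * dt * duration ** 3 / peak_speed ** 2
```

A smooth trajectory (e.g. a sinusoidal rise) scores far lower on this metric than the same trajectory corrupted by sensor noise, which is the kind of contrast a smoothness-based analysis relies on.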

Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain similar capacities to humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


Author(s):  
Margot M. E. Neggers ◽  
Raymond H. Cuijpers ◽  
Peter A. M. Ruijten ◽  
Wijnand A. IJsselsteijn

Autonomous mobile robots that operate in environments with people are expected to be able to deal with human proxemics and social distances. Previous research has investigated how robots can approach persons and how to implement human-aware navigation algorithms. However, experimental research on how robots can avoid a person in a comfortable way is largely missing. The aim of the current work is to experimentally determine the shape and size of the personal space of a human passed by a robot. In two studies, both a humanoid and a non-humanoid robot passed a person at different sides and distances, after which the person rated their perceived comfort. As expected, perceived comfort increases with distance. However, the shape was not circular: passing behind a person is more uncomfortable than passing in front, especially in the case of the humanoid robot. These results give more insight into the shape and size of personal space in human–robot interaction. Furthermore, they can serve as necessary input to human-aware navigation algorithms for autonomous mobile robots, in which human comfort is traded off against efficiency goals.
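Findings like these are typically fed into navigation planners as an anisotropic cost field around the person. A minimal sketch, assuming an asymmetric 2-D Gaussian with hypothetical parameters (not the values fitted in the study):

```python
import math

def discomfort(dx, dy, sigma_front=1.0, sigma_back=1.4, sigma_side=1.0):
    """Asymmetric Gaussian discomfort cost around a person.

    (dx, dy) is the robot's position relative to the person, with +x
    pointing in the person's facing direction. A larger sigma_back makes
    discomfort decay more slowly behind the person, encoding the finding
    that passing behind is less comfortable than passing in front.
    All parameter values here are illustrative assumptions.
    """
    sigma_x = sigma_front if dx >= 0 else sigma_back
    return math.exp(-(dx ** 2 / (2 * sigma_x ** 2)
                      + dy ** 2 / (2 * sigma_side ** 2)))
```

A planner would add this cost to its path objective, so that routes trade off a longer detour against passing close behind the person.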


2020 ◽  
Vol 12 (1) ◽  
pp. 58-73
Author(s):  
Sofia Thunberg ◽  
Tom Ziemke

Interaction between humans and robots will benefit if people have at least a rough mental model of what a robot knows about the world and what it plans to do. But how do we design human-robot interactions to facilitate this? Previous research has shown that one can change people’s mental models of robots by manipulating the robots’ physical appearance. However, this has mostly not been done in a user-centred way, i.e. without a focus on what users need and want. Starting from theories of how humans form and adapt mental models of others, we investigated how the participatory design method, PICTIVE, can be used to generate design ideas about how a humanoid robot could communicate. Five participants went through three phases based on eight scenarios from the state-of-the-art tasks in the RoboCup@Home social robotics competition. The results indicate that participatory design can be a suitable method to generate design concepts for robots’ communication in human-robot interaction.


Author(s):  
Stefan Schiffer ◽  
Alexander Ferrein

In this work we report on our effort to design and implement an early introduction to basic robotics principles for children at kindergarten age. The humanoid robot Pepper, which is a great platform for human-robot interaction experiments, presented the lecture by reading out the content to the children, making use of its speech synthesis capability. One of the main challenges of this effort was to explain complex robotics content in a way that allows pre-school children to follow the basic principles and ideas, using examples from their world of experience. A quiz in Runaround game-show style after the lecture encouraged the children to recap the content they had acquired about how mobile robots work in principle. Besides the thrill of being exposed to a mobile robot that also reacted to them, the children were very excited and at the same time very concentrated. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself and that the quiz at the end of the lesson is conducted using robots as well. To the best of our knowledge, this is one of only a few attempts to use Pepper not as a tele-teaching tool but as the teacher itself in order to engage pre-school children with complex robotics content. We received very positive feedback from the children as well as from their educators.


2007 ◽  
Vol 8 (1) ◽  
pp. 53-81 ◽  
Author(s):  
Luís Seabra Lopes ◽  
Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, can help the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. This system is incremental and evolves with the presentation of each new word, which acts as a class to the robot, relying on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot’s vocabulary will be limited by its discriminatory capacity, which, in turn, depends on its sensors and perceptual capabilities. The results indicate that the robot’s representations are capable of incrementally evolving by correcting class descriptions, based on instructor feedback to classification results. In successive experiments, it was possible for the robot to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.
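The open-ended scheme the abstract describes can be sketched in miniature: one class per taught word, updated incrementally from instructor feedback. The sketch below uses a simple nearest-centroid representation as a stand-in; the paper's OCLL system uses a more elaborate one-class model.

```python
import numpy as np

class WordLearner:
    """Toy open-ended word learner: one centroid per word, updated
    incrementally as the instructor names objects. Illustrative only."""

    def __init__(self):
        self.centroids = {}   # word -> mean feature vector
        self.counts = {}      # word -> number of examples seen

    def teach(self, word, features):
        """Instructor names an object; fold its features into that word's class."""
        f = np.asarray(features, dtype=float)
        if word not in self.centroids:
            self.centroids[word] = f.copy()
            self.counts[word] = 1
        else:
            self.counts[word] += 1
            # Incremental mean: shift the centroid toward the new example
            self.centroids[word] += (f - self.centroids[word]) / self.counts[word]

    def classify(self, features):
        """Return the known word whose centroid is nearest, or None if empty."""
        if not self.centroids:
            return None
        f = np.asarray(features, dtype=float)
        return min(self.centroids,
                   key=lambda w: np.linalg.norm(f - self.centroids[w]))
```

The vocabulary is open-ended in the sense that each call to `teach` with an unseen word adds a new class, and corrective feedback (re-teaching a misclassified object) moves the class description, mirroring the incremental evolution the paper evaluates.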


2020 ◽  
pp. 1-17
Author(s):  
Luis Roda-Sanchez ◽  
Teresa Olivares ◽  
Celia Garrido-Hidalgo ◽  
José Luis de la Vara ◽  
Antonio Fernández-Caballero

2009 ◽  
Vol 6 (3-4) ◽  
pp. 369-397 ◽  
Author(s):  
Kerstin Dautenhahn ◽  
Chrystopher L. Nehaniv ◽  
Michael L. Walters ◽  
Ben Robins ◽  
Hatice Kose-Bagci ◽  
...  

2007 ◽  
Vol 23 (5) ◽  
pp. 840-851 ◽  
Author(s):  
Rainer Stiefelhagen ◽  
Hazim Kemal Ekenel ◽  
Christian Fugen ◽  
Petra Gieselmann ◽  
Hartwig Holzapfel ◽  
...  

2020 ◽  
Vol 32 (1) ◽  
pp. 7-7
Author(s):  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Hiroshi Ishiguro

As social robot research advances, the interaction distance between people and robots is decreasing. Indeed, although we were once required to maintain a certain physical distance from traditional industrial robots for safety, we can now interact with social robots at such a close distance that we can touch them. The physical existence of social robots will be essential to realizing natural and acceptable interactions with people in daily environments. Because social robots function in our daily environments, we must design scenarios in which robots interact closely with humans, considering various viewpoints. Interactions that involve touching robots strongly influence a person’s behavior. Therefore, robotics researchers and developers need to design such scenarios carefully. Based on these considerations, this special issue focuses on close human-robot interactions. This special issue on “Human-Robot Interaction in Close Distance” includes a review paper and 11 other interesting papers covering various topics such as social touch interactions, non-verbal behavior design for touch interactions, child-robot interactions including physical contact, conversations with physical interactions, motion copying systems, and mobile human-robot interactions. We thank all the authors and reviewers of the papers and hope this special issue will help readers better understand human-robot interaction at close distance.

