Comparing Interface Elements on a Tablet for Intuitive Teleoperation of a Mobile Manipulator

Author(s):  
David A. Lopez ◽  
Jared A. Frank ◽  
Vikram Kapila

As mobile robots experience increased commercialization, the development of intuitive interfaces for human-robot interaction gains paramount importance to promote pervasive adoption of such robots in society. Although smart devices may be useful for operating robots, prior research has not fully investigated the appropriateness of various interaction elements (e.g., touch, gestures, sensors) for rendering an effective human-robot interface. This paper provides overviews of a mobile manipulator and of a tablet-based application to operate it. In particular, the mobile manipulator is designed to navigate an obstacle course and to pick and place objects around the course, all under the control of a human operator who uses the tablet-based application. The tablet application provides the user with live video captured and streamed by a camera onboard the robot and by an overhead camera. In addition, to remotely operate the mobile manipulator, the tablet application offers the user a menu of four interface elements: virtual buttons, virtual joysticks, touchscreen gestures, and tilting the device. To evaluate the intuitiveness of the four interface elements for operating the mobile manipulator, a user study is conducted in which participants’ performance is monitored as they operate the mobile manipulator using the designed interfaces. The analysis of the user study shows that the tablet-based application allows even non-experienced users to operate the mobile manipulator without the need for extensive training.
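As a rough illustration of how one of these interface elements might be realized, the Python sketch below maps device tilt to differential-drive velocity commands. The gains, limits, and wheel geometry are assumed values for illustration only; they are not taken from the paper.

```python
# Minimal sketch: mapping tablet tilt (pitch/roll, in radians) to
# differential-drive wheel speeds. Gains, limits, and the wheel separation
# are illustrative assumptions, not values from the paper.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def tilt_to_wheel_speeds(pitch, roll, k_lin=0.8, k_ang=1.5,
                         v_max=0.5, w_max=1.0, wheel_sep=0.3):
    v = clamp(k_lin * pitch, -v_max, v_max)   # forward speed (m/s) from pitch
    w = clamp(k_ang * roll, -w_max, w_max)    # turn rate (rad/s) from roll
    v_left = v - w * wheel_sep / 2.0
    v_right = v + w * wheel_sep / 2.0
    return v_left, v_right

# Example: tablet pitched slightly forward and rolled a little to the left.
print(tilt_to_wheel_speeds(pitch=0.2, roll=-0.1))
```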

2020 ◽  
Vol 10 (22) ◽  
pp. 7992
Author(s):  
Jinseok Woo ◽  
Yasuhiro Ohyama ◽  
Naoyuki Kubota

This paper presents a robot partner development platform based on smart devices. Humans communicate with others based on the basic motivations of human cooperation and have communicative motives grounded in social attributes. Understanding and applying these communicative motives is important in the development of socially embedded robot partners. Therefore, it is becoming more important to develop robots that can be adapted to particular needs while taking these elements of human communication into consideration. Robot partners play an increasingly important role not only in the industrial sector but also in households; however, their widespread dissemination will take time. In the field of service robots, developing robots according to various needs is important, and the system integration of hardware and software becomes crucial. Therefore, in this paper, we propose a robot partner development platform for human-robot interaction. First, we propose a modularized architecture of robot partners using a smart device to realize flexible updates based on the re-usability of hardware and software modules. In addition, we show examples of implementing a robot system using the proposed architecture. Next, we focus on the development of various robots using the modular robot partner system. Finally, we discuss the effectiveness of the proposed robot partner system through social implementation and experiments.
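To make the idea of re-usable hardware and software modules concrete, here is a minimal Python sketch of a plug-in style module interface sharing a common blackboard. All class and method names are hypothetical illustrations, not the interfaces of the proposed platform.

```python
# Minimal sketch of a modular robot-partner architecture: each hardware or
# software capability is wrapped as a module with a uniform lifecycle, so
# modules can be swapped or updated independently. Names are illustrative.
from abc import ABC, abstractmethod

class RobotModule(ABC):
    name: str = "module"

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def update(self, blackboard: dict) -> None:
        """Read from / write to a shared blackboard each control cycle."""

class SpeechModule(RobotModule):
    name = "speech"
    def start(self): print("speech module ready")
    def update(self, blackboard):
        if "utterance" in blackboard:
            print(f"saying: {blackboard.pop('utterance')}")

class MobileBaseModule(RobotModule):
    name = "base"
    def start(self): print("base module ready")
    def update(self, blackboard):
        print(f"driving at {blackboard.get('cmd_vel', 0.0):.2f} m/s")

modules = [SpeechModule(), MobileBaseModule()]
for m in modules:
    m.start()
for m in modules:                       # one control cycle
    m.update({"utterance": "hello", "cmd_vel": 0.2})
```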


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Maurice Lamb ◽  
Patrick Nalepka ◽  
Rachel W. Kallen ◽  
Tamara Lorenz ◽  
Steven J. Harrison ◽  
...  

Interactive or collaborative pick-and-place tasks occur during all kinds of daily activities, for example, when two or more individuals pass plates, glasses, and utensils back and forth between each other when setting a dinner table or loading a dishwasher together. In the near future, participation in these collaborative pick-and-place tasks could also include robotic assistants. However, for human-machine and human-robot interactions, interactive pick-and-place tasks present a unique set of challenges. A key challenge is that high-level task-representational algorithms and preplanned action or motor programs quickly become intractable, even for simple interaction scenarios. Here we address this challenge by introducing a bioinspired behavioral dynamic model of free-flowing cooperative pick-and-place behaviors based on low-dimensional dynamical movement primitives and nonlinear action selection functions. Further, we demonstrate that this model can be successfully implemented as an artificial agent control architecture to produce effective and robust human-like behavior during human-agent interactions. Participants were unable to explicitly detect whether they were working with an artificial (model-controlled) agent or another human co-actor, further illustrating the potential effectiveness of the proposed modeling approach for developing systems of robust real/embodied human-robot interaction more generally.
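The two ingredients named above, low-dimensional dynamical movement primitives and nonlinear action selection, can be sketched in a few lines. The one-dimensional attractor and threshold-based goal switching below are greatly simplified illustrative assumptions, not the authors' model.

```python
# Minimal sketch: a critically damped point attractor serves as a movement
# primitive, and a threshold rule switches the active goal between a pick
# and a place location. All parameters are illustrative assumptions.

def primitive_step(x, v, goal, dt=0.01, omega=8.0):
    """One Euler step of a critically damped second-order attractor."""
    a = omega * omega * (goal - x) - 2.0 * omega * v
    return x + v * dt, v + a * dt

def select_goal(x, pick_pos, place_pos, holding, eps=0.02):
    """Nonlinear (threshold-based) action selection between pick and place."""
    if not holding and abs(x - pick_pos) < eps:
        holding = True           # object grasped near the pick location
    elif holding and abs(x - place_pos) < eps:
        holding = False          # object released near the place location
    return (place_pos if holding else pick_pos), holding

x, v, holding = 0.0, 0.0, False
for _ in range(2000):            # simulate 20 s of repeated pick-and-place
    goal, holding = select_goal(x, 0.5, -0.5, holding)
    x, v = primitive_step(x, v, goal)
print(round(x, 3), holding)
```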


2021 ◽  
Vol 12 (1) ◽  
pp. 402-422
Author(s):  
Kheng Lee Koay ◽  
Matt Webster ◽  
Clare Dixon ◽  
Paul Gainer ◽  
Dag Syrdal ◽  
...  

When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference caused by situations where newly taught robot behaviours may affect, or be affected by, existing behaviours, so that those behaviours will not, or might not, ever be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed, as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference provided feedback to participants and suggestions on how to resolve those conflicts. We assessed the participants’ views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching prompted an understanding of the issue. We did not find a significant influence of participants’ technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.
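As an illustration of what static interference detection between taught behaviours might look like, the sketch below flags a behaviour that can never fire because a higher-priority behaviour is triggered under all the same conditions. This is an illustrative simplification in Python, not the formal verification approach used in the article; the priority scheme and condition names are assumptions.

```python
# Minimal sketch of static interference checking between taught behaviours.
# Each behaviour has a priority and a set of propositional trigger conditions;
# a lower-priority behaviour whose trigger set contains all the triggers of a
# higher-priority behaviour can never execute. Illustrative only.
from dataclasses import dataclass

@dataclass
class Behaviour:
    name: str
    priority: int        # higher value wins when several behaviours can fire
    triggers: frozenset  # conditions that must all hold for the behaviour to fire

def find_interference(behaviours):
    warnings = []
    for low in behaviours:
        for high in behaviours:
            if high is low or high.priority <= low.priority:
                continue
            # Whenever `low` could fire, `high` can also fire and preempts it.
            if high.triggers <= low.triggers:
                warnings.append(f"'{low.name}' will never run: "
                                f"'{high.name}' always preempts it")
    return warnings

taught = [
    Behaviour("remind-medication", 2, frozenset({"evening", "user-in-kitchen"})),
    Behaviour("offer-drink", 1, frozenset({"evening", "user-in-kitchen"})),
]
print(find_interference(taught))
```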


Author(s):  
Yasutake Takahashi ◽  
Kyohei Yoshida ◽  
Fuminori Hibino ◽  
Yoichiro Maeda

Human-robot interaction requires intuitive interfaces, which cannot be achieved with devices such as joysticks or teaching pendants that also require some training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot equipped with an upward-directed camera system. Using this approach, the robot recognizes pointing and navigates through simple visual feedback control to where the user points. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire on the proposed and conventional interfaces.
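The visual feedback control mentioned above can be illustrated with a minimal sketch: given the pixel location of the pointed-at target in the upward-directed camera image, the robot is steered so that the target drifts toward the image centre. The gains, image size, and command format are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of proportional visual feedback control: drive the detected
# target pixel toward the image centre. All constants are illustrative.

def visual_servo_cmd(target_px, image_size=(640, 480), k_ang=0.004, k_lin=0.002):
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex = target_px[0] - cx      # horizontal pixel error -> turning
    ey = cy - target_px[1]      # vertical pixel error   -> forward/backward
    angular = -k_ang * ex       # rad/s, turn to centre the target
    linear = k_lin * ey         # m/s, approach the target
    return linear, angular

# Example: pointed-at location detected in the upper-right of the image.
print(visual_servo_cmd((520, 120)))
```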


2020 ◽  
Author(s):  
Youjin Hwang ◽  
Donghoon Shin ◽  
Jinsu Eun ◽  
Bongwon Suh ◽  
Joonhwan Lee

BACKGROUND Prolonged computer use has increased the prevalence of ocular problems, including eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome (CVS). Approximately 70 percent of computer users have vision-related problems. To design effective screen interventions for preventing or improving computer vision syndrome, we must understand the effective interfaces of computer-based intervention (CBI).
OBJECTIVE In this study, we aim to explore the interface elements of computer-based intervention for computer vision syndrome in order to set design guidelines based on the pros and cons of each interface element.
METHODS We conducted an iterative user study to achieve our research goal. First, we conducted a workshop to evaluate the interface elements included in previous systems for computer vision syndrome (N=7). Second, we designed our prototype, LiquidEye, with multiple interface options and deployed it to users in the wild (N=11). Participants used LiquidEye for 14 days, and during this period we collected participants’ daily logs (N=680). We also conducted pre- and post-surveys and post hoc interviews to explore how each interface element affects system acceptability.
RESULTS From the workshop, we collected 19 interface elements for designing an intervention system for CVS and then deployed our first prototype, LiquidEye. After deployment, we conducted a multiple regression analysis on the user log data to identify the elements significantly affecting user participation in LiquidEye. The significant elements include the instruction page for the eye-rest strategy (P<.05), goal setting for the resting period (P<.01), the compliment page shown after users complete the rest (P<.001), the middle-size popup window (P<.05), and the symptom-like visual effect that signals eye resting time (P<.005).
CONCLUSIONS We suggest design implications to consider when designing CBI for computer vision syndrome. A well-designed customization interface can allow users to use the system more interactively, resulting in higher engagement and better management of eye condition. Important technical challenges remain to be addressed, but given that this study has sorted out various factors related to computer-based intervention, it is expected to contribute to research on various CBI designs in the future.
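As an illustration of the kind of multiple regression analysis described in the RESULTS section, the Python sketch below fits an ordinary least squares model to daily-log data. The predictor names and all data are synthetic and invented purely for illustration; they are not the study's dataset or results.

```python
# Minimal sketch: multiple regression of daily participation on which
# interface elements were shown. Data are synthetic; column meanings are
# illustrative assumptions (instruction page, goal setting, compliment page).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 680                                    # one row per daily log entry
X = rng.integers(0, 2, size=(n, 3))        # 0/1 flags for three interface elements
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.5, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())                     # coefficients and p-values per element
```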


Machines ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 15
Author(s):  
Akiyoshi Hayashi ◽  
Liz Katherine Rincon-Ardila ◽  
Gentiane Venture

In a future society where robots and humans live together, human–robot interaction (HRI) is an important field of research. While most HRI studies focus on appearance and dialogue, touch communication has received little attention despite the importance of its role in human–human communication. This paper investigates how and where humans touch an inorganic, non-zoomorphic robot arm. Based on these results, we install touch sensors on the robot arm and conduct experiments to collect data on users’ impressions of the robot when touching it. Our results suggest two main findings. First, the touch gestures collected with the two sensors can be analyzed using machine learning to classify the gestures. Second, communication between humans and robots using touch can improve the user’s impression of the robots.
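As a rough illustration of the first finding, the sketch below classifies two touch-gesture types from simple features of two sensor signals using a support vector machine. The features, labels, and data are synthetic assumptions for illustration, not the authors' sensors, dataset, or pipeline.

```python
# Minimal sketch: classify touch gestures (pat vs. stroke) from hand-crafted
# features of two touch-sensor signals. All data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
# Features per sample: contact duration (s), peak pressure, sensor-1/sensor-2
# energy ratio. "Pat" gestures are short and sharp, "stroke" longer and softer.
pat = np.column_stack([rng.normal(0.3, 0.1, n),
                       rng.normal(0.8, 0.1, n),
                       rng.normal(0.5, 0.1, n)])
stroke = np.column_stack([rng.normal(1.2, 0.2, n),
                          rng.normal(0.4, 0.1, n),
                          rng.normal(0.7, 0.1, n)])
X = np.vstack([pat, stroke])
y = np.array([0] * n + [1] * n)            # 0 = pat, 1 = stroke

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```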

