State-Transition Modeling of Human–Robot Interaction for Easy Crowdsourced Robot Control

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6529
Author(s):  
Masaya Iwasaki ◽  
Mizuki Ikeda ◽  
Tatsuyuki Kawamura ◽  
Hideyuki Nakanishi

Robotic salespeople are often ignored by people because of their weak social presence, and thus have difficulty facilitating sales autonomously. Remotely controlled robots, on the other hand, require experienced, trained operators. In this paper, we propose crowdsourcing to let general internet users operate a robot remotely through a user interface, facilitating customers' purchasing activities while responding flexibly to various situations. To implement this system, we examined how our remote interface can improve a robot's social presence while it is controlled by a human operator, including first-time users. We investigated the typical flow of a customer–robot interaction that was effective for sales promotion, and modeled it as a state transition with automatic functions that access the robot's sensor information. We then built a user interface based on the model and examined whether it was effective in a real environment. Finally, we conducted experiments to examine whether the interface could be operated by an amateur user and enhance the robot's social presence. The results revealed that our model improved the robot's social presence and facilitated customers' purchasing activity even when the operator was a first-time user.
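The paper's core idea, modeling the interaction flow as a state transition driven by sensor events, can be sketched as a small table-driven state machine. The state names and trigger events below are illustrative assumptions, not the authors' actual model.

```python
# Illustrative state machine for a sales-robot interaction flow.
# States and sensor-derived events are hypothetical examples,
# not the transitions from the paper.

TRANSITIONS = {
    ("idle", "person_detected"): "greeting",
    ("greeting", "person_approached"): "product_pitch",
    ("greeting", "person_left"): "idle",
    ("product_pitch", "person_touched_product"): "recommendation",
    ("product_pitch", "person_left"): "idle",
    ("recommendation", "person_left"): "idle",
}

def step(state: str, sensor_event: str) -> str:
    """Advance the interaction state; unrecognized events leave it unchanged."""
    return TRANSITIONS.get((state, sensor_event), state)

state = "idle"
for event in ["person_detected", "person_approached", "person_left"]:
    state = step(state, event)
print(state)  # → "idle" (back to idle after the customer leaves)
```

An operator interface built over such a table only needs to surface the current state and the few transitions valid from it, which is what makes first-time operation plausible.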

Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 521-527 ◽  
Author(s):  
Harsha Medicherla ◽  
Ali Sekmen

SUMMARY An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled with voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75% of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.


2016 ◽  
Vol 17 (3) ◽  
pp. 461-490 ◽  
Author(s):  
Maartje M. A. de Graaf ◽  
Somaya Ben Allouch ◽  
Jan A. G. M. van Dijk

Abstract This study aims to contribute to emerging human-robot interaction research by adding longitudinal findings to the limited number of long-term social robotics home studies. We placed 70 robots in users' homes for up to six months and used questionnaires and interviews to collect data at six points during this period. Results indicate that users' evaluations of the robot dropped initially, but rose again after the robot had been used for a longer period of time. This is congruent with the so-called mere-exposure effect, in which the evaluation of a novel stimulus becomes increasingly positive as people grow familiar with it. Before adoption, users focus on control beliefs, showing that previous experience with robots or other technologies allows them to form a mental image of what having and using a robot in the home would entail. After adoption, users focus on utilitarian and hedonic attitudes, showing that usefulness, social presence, enjoyment, and attractiveness in particular are important factors for long-term acceptance.


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5331 ◽  
Author(s):  
Prasertsak Tiawongsombat ◽  
Mun-Ho Jeong ◽  
Alongkorn Pirayawaraporn ◽  
Joong-Jae Lee ◽  
Joo-Seop Yun

Attention capability is an essential component of human–robot interaction. Several robot attention models have been proposed that aim to enable a robot to identify the attentiveness of the humans with whom it communicates and give them its attention accordingly. However, previously proposed models are often susceptible to noisy observations and cause frequent, undesired shifts in the robot's attention. Furthermore, most approaches have difficulty adapting to changes in the number of participants. To address these limitations, a novel attentiveness determination algorithm is proposed for determining the most attentive person and for prioritizing people by attentiveness. The proposed algorithm, based on relevance theory, is named the Scalable Hidden Markov Model (Scalable HMM). The Scalable HMM allows effective computation and contributes an adaptation approach for human attentiveness; unlike conventional HMMs, it has a scalable number of states and observations, and online adaptability of its state transition probabilities to changes in the current number of states, i.e., the number of participants in the robot's view. The approach was successfully tested on image sequences (7567 frames) of individuals exhibiting a variety of actions (speaking, walking, turning the head, and entering or leaving the robot's view). In these experiments, the Scalable HMM achieved a detection rate of 76% in determining the most attentive person and over 75% in prioritizing people's attention as the number of participants varied. Compared to recent attention approaches, this represents an approximately 20% improvement in attention prioritization.
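The distinguishing feature, an HMM whose state set grows and shrinks with the number of people in view, can be sketched as follows. The sticky-transition smoothing and the renormalization rule for a newly added participant are illustrative assumptions, not the paper's exact update equations.

```python
import numpy as np

class ScalableHMMSketch:
    """Attention HMM with one state per participant; the state set can
    grow online. Transition values below are assumptions, not the paper's."""

    def __init__(self, n_states: int, stay: float = 0.9):
        self.stay = stay                       # prob. of keeping attention
        self.belief = np.full(n_states, 1.0 / n_states)
        self.A = self._transitions(n_states)

    def _transitions(self, n: int) -> np.ndarray:
        # Sticky transitions: favour keeping attention on the same person.
        off = (1.0 - self.stay) / max(n - 1, 1)
        A = np.full((n, n), off)
        np.fill_diagonal(A, self.stay if n > 1 else 1.0)
        return A

    def add_participant(self) -> None:
        # A new person enters the robot's view: append a small prior
        # mass and rebuild the (now larger) transition matrix.
        self.belief = np.append(self.belief, self.belief.mean())
        self.belief /= self.belief.sum()
        self.A = self._transitions(len(self.belief))

    def update(self, likelihoods) -> int:
        # One forward-algorithm step; `likelihoods` holds each person's
        # per-frame attentiveness evidence (e.g., facing the robot).
        self.belief = (self.A.T @ self.belief) * np.asarray(likelihoods)
        self.belief /= self.belief.sum()
        return int(np.argmax(self.belief))     # index of most attentive person

tracker = ScalableHMMSketch(2)
tracker.add_participant()                      # a third person walks in
most_attentive = tracker.update([0.1, 0.1, 0.9])  # → 2 (the newcomer)
```

Sorting `tracker.belief` in descending order would give the attention prioritization the paper evaluates, rather than just the single most attentive person.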


2011 ◽  
Vol 23 (4) ◽  
pp. 557-566 ◽  
Author(s):  
Vincent Duchaine ◽  
Clément Gosselin
While the majority of industrial manipulators currently in use only need to perform autonomous motion, future generations of cooperative robots will also have to execute cooperative motion and react intelligently to contacts. These extended behaviours are essential for safe and effective physical Human-Robot Interaction (pHRI), but they inevitably increase controller complexity. This paper presents a single-variable admittance control scheme that handles all three modes of operation, thereby minimizing the complexity of the controller. First, the adaptive admittance controller previously proposed by the authors for cooperative motion is recalled. Then, a novel implementation of variable admittance control is presented for generating smooth autonomous motion, including reaction to collisions anywhere on the robot. Finally, it is shown how the control equations for these three modes of operation can be unified into a single control scheme.
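The single-variable idea can be sketched as a discrete-time admittance loop in which only the damping is varied. The force-dependent variation law below (compliant under strong interaction forces, stiff otherwise) is an illustrative assumption standing in for the authors' adaptive scheme, and all gains are made-up values.

```python
# Minimal 1-DOF discrete-time admittance controller with a single
# varied parameter (the damping). The damping law is an illustrative
# assumption, not the adaptive scheme from the paper.

def admittance_step(f_ext: float, v: float, dt: float,
                    mass: float = 2.0,
                    c_min: float = 5.0, c_max: float = 40.0,
                    f_scale: float = 20.0) -> float:
    """Return the next commanded velocity.

    High interaction force -> low damping (compliant, easy to guide);
    low force -> high damping (stiff, steady autonomous motion).
    """
    ratio = min(abs(f_ext) / f_scale, 1.0)
    damping = c_max - (c_max - c_min) * ratio
    accel = (f_ext - damping * v) / mass   # m*dv/dt = F_ext - c*v
    return v + accel * dt

v = 0.0
for _ in range(200):                       # 1 s of a steady 10 N push
    v = admittance_step(10.0, v, dt=0.005)
# v settles near F/c = 10/22.5 ≈ 0.44 m/s for this damping law
```

Because only the damping changes between modes, switching from autonomous motion to human-guided motion (or to a post-collision reaction) amounts to feeding a different force signal into the same loop, which is the complexity reduction the abstract describes.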


2014 ◽  
Vol 11 (04) ◽  
pp. 1442005 ◽  
Author(s):  
Youngho Lee ◽  
Young Jae Ryoo ◽  
Jongmyung Choi

With the development of computing technology, robots are now popular in our daily life. Human–robot interaction is not restricted to direct communication between the two; it can also include various human-to-human interactions. In this paper, we present a framework for enhancing the interaction among humans, robots, and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space, implementing real-time control and monitoring of a robot by using one smartphone as the robot's brain and another as the remote controller.


2020 ◽  
Vol 12 (1) ◽  
pp. 160-174
Author(s):  
Anna Chatzimichali ◽  
Ross Harrison ◽  
Dimitrios Chrysostomou

Abstract Can we have personal robots without giving away personal data? And what is the role of a robot's Privacy Policy in that question? This work explores, for the first time, privacy in the context of consumer robotics through the lens of the information communicated to users in Privacy Policies and Terms and Conditions. Privacy, personal data, and non-personal data are discussed in light of the human–robot relationship, and we draw connections to personalization, trust, and transparency. We introduce a novel methodology to assess how the "Organization for Economic Cooperation and Development Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data" are reflected in publicly available Privacy Policies and Terms and Conditions in the consumer robotics field, and we compare how eight consumer robotics companies approach privacy principles. Current findings demonstrate significant deviations in the structure and content of privacy terms. Practical ways of improving the content and format of privacy terms are discussed. The ultimate goal of this work is to raise awareness of the various privacy strategies used by robot companies while creating a usable way to make this information more relevant and accessible to users.


2021 ◽  
Vol 11 (16) ◽  
pp. 7426
Author(s):  
Furong Deng ◽  
Yu Zhou ◽  
Sifan Song ◽  
Zijian Jiang ◽  
Lifu Chen ◽  
...  

Gaze-following is an effective way to understand intention in human–robot interaction: the aim is to follow a person's gaze and estimate which object is being observed. Most existing methods require the person and the object to appear in the same image; because of the camera's limited field of view, these methods are often inapplicable in practice. To address this problem, we propose a gaze-following method that utilizes a geometric map for better estimation. With the help of the map, the method is competitive for cross-frame estimation. On the basis of this method, we propose a novel gaze-based image-captioning system, which to our knowledge is the first of its kind. Our experiments demonstrate that the system follows gaze and describes objects accurately. We believe this system is well suited to rehabilitation training for autistic children, eldercare service robots, and other applications.


Robotics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 113
Author(s):  
Diogo Carneiro ◽  
Filipe Silva ◽  
Petia Georgieva

Catching flying objects is a challenging task in human–robot interaction. Traditional techniques predict the interception position and time using information obtained during the ball's free flight. A common difficulty in these systems is the short flight time and the uncertainty of the trajectory estimate. In this paper, we present the Robot Anticipation Learning System (RALS), which also accounts for information obtained by observing the thrower's hand motion before the ball is released. This gives the robot extra time to start moving toward the target before the opponent finishes throwing. To the best of our knowledge, this is the first robot control system for ball catching with anticipation skills. Our results show that fusing information from both the throwing and flying motions improves the ball-catching rate by up to 20% compared to the baseline approach, whose predictions rely only on information acquired during the flight phase.
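The two information sources being fused can be illustrated with a toy 2-D example: the flight-phase prediction is plain projectile motion, while the pre-release estimate and the weighted fusion rule below are hypothetical stand-ins for RALS's learned components.

```python
# Toy illustration of the two prediction sources in anticipatory ball
# catching. The fusion weight and the hand-motion estimate are
# hypothetical, not the learned components from the paper.

G = 9.81  # gravitational acceleration, m/s^2

def flight_prediction(p0, v0, catch_height=0.0):
    """Predict where a ball released at p0=(x, z) with velocity
    v0=(vx, vz) crosses the catch plane z = catch_height (drag ignored)."""
    x0, z0 = p0
    vx, vz = v0
    # Solve z0 + vz*t - 0.5*G*t^2 = catch_height for the later root.
    disc = vz**2 + 2 * G * (z0 - catch_height)
    t = (vz + disc**0.5) / G
    return x0 + vx * t, t

def fused_prediction(hand_estimate, flight_estimate, w_flight=0.7):
    # Weighted fusion: trust flight observations more, but the
    # hand-motion estimate arrives *before* release, buying the
    # robot reaction time.
    return (1 - w_flight) * hand_estimate + w_flight * flight_estimate

x_land, t_land = flight_prediction((0.0, 1.5), (3.0, 2.0))
x_early = 2.2  # hypothetical pre-release estimate from hand motion
x_fused = fused_prediction(x_early, x_land)
```

The point of the anticipation is visible in `t_land` (well under a second here): moving toward `x_early` before release, then refining toward `x_fused`, uses time that a flight-only system wastes waiting for observations.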

