Interactive Robot for Playing Russian Checkers

Robotics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 107
Author(s):  
Ekaterina E. Kopets ◽  
Artur I. Karimov ◽  
Georgii Y. Kolev ◽  
Lorenzo Scalera ◽  
Denis N. Butusov

Human–robot interaction in board games is a rapidly developing field of robotics. This paper presents a robot capable of playing Russian checkers, designed for entertainment, training, and research purposes. Its control program is based on a novel unsupervised self-learning algorithm inspired by AlphaZero and represents the first successful attempt to apply this approach to checkers. The main mechanical engineering challenge is to develop a board-state acquisition system insensitive to lighting conditions, which is achieved by forgoing computer vision in favor of magnetic sensors. An original robot face is designed to endow the robot with the ability to express its attributed emotional state. Testing the robot at multiday open-air exhibitions demonstrates the robustness of the design under difficult operating conditions and the high interest of visitors in the robot.
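The AlphaZero-style idea the abstract mentions — learning a value function purely from the outcomes of self-play, with no human game data — can be sketched in miniature. Checkers is far too large for a snippet, so the toy below substitutes a simple counting game (players alternately add 1 or 2; whoever reaches the target wins); the game, names, and learning rate are illustrative assumptions, not the paper's actual algorithm or network.

```python
import random

# Toy self-play value learning: states are integers 0..TARGET, and after each
# game every visited state is nudged toward the game outcome z seen from the
# perspective of the player to move (as in AlphaZero's value target).
TARGET = 10
values = {}  # state -> estimated value for the player to move

def legal_moves(state):
    return [m for m in (1, 2) if state + m <= TARGET]

def self_play_game(epsilon=0.2):
    """Play one game against itself; return visited (state, player) pairs and the winner."""
    state, player, history = 0, 0, []
    while True:
        history.append((state, player))
        moves = legal_moves(state)
        if random.random() < epsilon:
            move = random.choice(moves)
        else:
            # Greedy: move to the position that is worst for the opponent.
            move = min(moves, key=lambda m: values.get(state + m, 0.0))
        state += move
        if state == TARGET:
            return history, player  # the player who reached TARGET wins
        player = 1 - player

def train(n_games=2000, lr=0.1):
    for _ in range(n_games):
        history, winner = self_play_game()
        for s, p in history:
            z = 1.0 if p == winner else -1.0       # outcome from the mover's view
            v = values.get(s, 0.0)
            values[s] = v + lr * (z - v)

random.seed(0)
train()
```

After training, states from which the mover wins immediately (e.g. 9, one step from the target) acquire values near +1, while losing states trend negative — all without any supervised labels, which is the core of the unsupervised self-learning approach.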

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2691 ◽  
Author(s):  
Marcos Maroto-Gómez ◽  
Álvaro Castro-González ◽  
José Castillo ◽  
María Malfaz ◽  
Miguel Salichs

Nowadays, many robotic applications require robots to make their own decisions and adapt to different conditions and users. This work presents a biologically inspired decision-making system, based on drives, motivations, wellbeing, and self-learning, that governs the behavior of the robot considering both internal and external circumstances. In this paper, we state the biological foundations that drove the design of the system, as well as how it has been implemented in a real robot. Following a homeostatic approach, the ultimate goal of the robot is to keep its wellbeing as high as possible. To achieve this goal, our decision-making system uses learning mechanisms to assess the best action to execute at any moment. Considering that the proposed system has been implemented in a real social robot, human-robot interaction is of paramount importance, and the learned behaviors of the robot are oriented toward fostering interactions with the user. The operation of the system is shown in a scenario where the robot Mini plays games with a user. In this context, we have included a robust user-detection mechanism tailored for short-distance interactions. After the learning phase, the robot has learned how to lead the user to interact with it in a natural way.
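The homeostatic loop the abstract describes — drives deviate from an ideal value, wellbeing falls with the total deviation, and learning picks the action that best restores it — can be sketched minimally. The drive names, action set, and effect numbers below are assumptions for illustration, not the Mini robot's actual implementation.

```python
import random

# Homeostatic sketch: each drive lives in [0, 1] (0 = fully satisfied), and
# wellbeing is highest when all drives are at 0. A small learner estimates,
# per "most urgent drive" context, which action raises wellbeing the most.
DRIVES = ("boredom", "fatigue")
ACTIONS = ("play_game", "rest")
EFFECTS = {  # assumed effect of each action on each drive (negative = satisfies)
    "play_game": {"boredom": -0.9, "fatigue": +0.1},
    "rest":      {"boredom": +0.1, "fatigue": -0.9},
}
q = {(d, a): 0.0 for d in DRIVES for a in ACTIONS}

def wellbeing(drives):
    return 1.0 - sum(drives.values()) / len(drives)

def apply(drives, action):
    return {d: min(1.0, max(0.0, v + EFFECTS[action][d])) for d, v in drives.items()}

def train(steps=20000, lr=0.05, eps=0.1):
    for _ in range(steps):
        drives = {d: random.random() for d in DRIVES}   # sampled internal state
        dom = max(drives, key=drives.get)               # most urgent drive
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(dom, x)])
        reward = wellbeing(apply(drives, a)) - wellbeing(drives)
        q[(dom, a)] += lr * (reward - q[(dom, a)])

random.seed(1)
train()
```

With this setup the learner ends up preferring the action that satisfies the dominant drive — playing when bored, resting when fatigued — which mirrors how the paper's system selects behaviors to keep wellbeing high.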


Author(s):  
Shivam Goel

Robotics in healthcare has recently emerged, backed by recent advances in machine learning and robotics. Researchers are focusing on training robots to interact with elderly adults. This research primarily focuses on engineering more efficient robots that can learn from their mistakes, thereby aiding better human-robot interaction. In this work, we propose a method in which a robot learns to navigate itself to the individual in need. The robotic agent's learning algorithm is capable of navigating in an unknown environment. The robot's primary objective is to locate a human in a house and, upon finding the human, to interact with them while complementing their pose and gaze. We propose an end-to-end learning strategy that uses a recurrent neural network architecture in combination with Q-learning to train an optimal policy. The idea can be a contribution to better human-robot interaction.
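The Q-learning half of the proposed strategy can be illustrated on a toy navigation task. The abstract pairs Q-learning with a recurrent network to cope with unknown environments; the stand-in below drops the recurrent part and uses a plain Q-table on a fully observed 5×5 grid, with the person's position fixed — all assumptions made purely for the sketch.

```python
import random

# Tabular Q-learning sketch of the navigation objective: learn a policy that
# reaches the human's cell. Reward is +1 on arrival and a small step cost
# otherwise, so shorter paths score higher.
SIZE = 5
HUMAN = (4, 4)
ACTIONS = ((0, 1), (0, -1), (1, 0), (-1, 0))
Q = {}

def q(s, a):
    return Q.get((s, a), 0.0)

def move(s, a):
    return (min(SIZE - 1, max(0, s[0] + a[0])), min(SIZE - 1, max(0, s[1] + a[1])))

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2):
    for _ in range(episodes):
        s = (0, 0)
        while s != HUMAN:
            a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: q(s, x))
            nxt = move(s, a)
            reward = 1.0 if nxt == HUMAN else -0.04
            best_next = 0.0 if nxt == HUMAN else max(q(nxt, b) for b in ACTIONS)
            Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))
            s = nxt

def greedy_path(start, limit=30):
    s, path = start, [start]
    while s != HUMAN and len(path) < limit:
        s = move(s, max(ACTIONS, key=lambda x: q(s, x)))
        path.append(s)
    return path

random.seed(2)
train()
```

In the paper's setting the Q-table would be replaced by a recurrent network so the agent can act on observation histories in a partially observable house rather than on known grid coordinates.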


2013 ◽  
pp. 281-301
Author(s):  
Mohan Sridharan

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As they are increasingly deployed to interact with humans in our homes and offices, robots need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of: (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants, and merge human feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and fully evaluated on mobile robots and in simulated domains using vision as the primary source of information in conjunction with range data and simplistic verbal inputs. Furthermore, a strategy is outlined to integrate these components to achieve robust human-robot interaction in real-world application domains.
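Component (c) — merging limited human feedback with information extracted from sensory cues — has a natural probabilistic reading: both the sensor and the human are treated as noisy observations that update the same belief. The sketch below shows that fusion as two successive Bayes updates; the object labels and likelihood numbers are assumptions for illustration, not values from the chapter.

```python
# Bayesian fusion sketch: a belief over object labels is updated first with a
# camera cue and then with noisy human feedback, each via Bayes' rule.
LABELS = ("mug", "book", "phone")

def normalize(b):
    total = sum(b.values())
    return {k: v / total for k, v in b.items()}

def update(belief, likelihood):
    """Multiply the prior by an observation likelihood and renormalize."""
    return normalize({k: belief[k] * likelihood[k] for k in belief})

belief = {k: 1.0 / len(LABELS) for k in LABELS}        # uniform prior
vision = {"mug": 0.5, "book": 0.3, "phone": 0.2}       # classifier confidence
human  = {"mug": 0.8, "book": 0.1, "phone": 0.1}       # "it's the mug", with noise

belief = update(belief, vision)   # sensory cue
belief = update(belief, human)    # human feedback merged into the same belief
```

Because both sources enter through the same likelihood-weighting step, a confident sensor can outvote a hesitant human and vice versa, which is the sense in which the framework "merges" the two channels; the full chapter embeds such beliefs in a hierarchical POMDP rather than a flat label set.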


Robotics ◽  
2013 ◽  
pp. 1255-1275
Author(s):  
Mohan Sridharan



2009 ◽  
Author(s):  
Matthew S. Prewett ◽  
Kristin N. Saboe ◽  
Ryan C. Johnson ◽  
Michael D. Coovert ◽  
Linda R. Elliott

2010 ◽  
Author(s):  
Eleanore Edson ◽  
Judith Lytle ◽  
Thomas McKenna
