A Method of Robot Behavior Evolution Based on Intelligent Composite Motion Control

2000 ◽  
Vol 12 (3) ◽  
pp. 202-208 ◽  
Author(s):  
Masakazu Suzuki ◽  

Intelligent Composite Motion Control (ICMC) is a methodology for building robot systems in which robots realize complex, dexterous behavior autonomously and adaptively through parameter optimization and the use of empirical knowledge, provided only that motion control for the basic element motions is given. In this article, ICMC is first reviewed, focusing on the Method of Knowledge Array, which provides a tool for realizing suboptimal motions in new situations by drawing on empirical knowledge. Behavior evolution based on ICMC is then proposed; that is, it is shown how robot motions are coordinated from the most basic motions, such as joint rotation, and how they evolve into complex behavior such as dexterous ball throwing.
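The idea of reusing empirical knowledge for new situations can be illustrated with a minimal sketch. All names here are hypothetical, not from the paper: stored entries map a situation descriptor (e.g., target distance for a throw) to empirically tuned motion parameters, and parameters for an unseen situation are estimated by interpolating the two nearest stored entries, yielding a suboptimal starting point for further optimization.

```python
# Hypothetical knowledge-array sketch (illustrative names, not the paper's API).
# Stored situations map to empirically optimized motion parameters; a new
# situation gets a suboptimal estimate by interpolating its two nearest entries.

def nearest_entries(array, situation, k=2):
    """Return the k stored situations closest to the new one."""
    return sorted(array, key=lambda s: abs(s - situation))[:k]

def suboptimal_parameters(array, situation):
    """Interpolate stored parameters as a suboptimal starting point."""
    s0, s1 = nearest_entries(array, situation)
    w = (situation - s0) / (s1 - s0)
    return [(1 - w) * a + w * b for a, b in zip(array[s0], array[s1])]

# Knowledge array: situation (e.g., target distance) -> tuned joint parameters
knowledge = {1.0: [0.2, 0.5], 2.0: [0.4, 0.9], 3.0: [0.7, 1.2]}
params = suboptimal_parameters(knowledge, 2.5)
```

In a full system the interpolated parameters would only seed the optimizer; the tuned result would then be written back into the array, which is what lets behavior improve with experience.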

2019 ◽  
Vol 4 (37) ◽  
pp. eaay4663 ◽  
Author(s):  
Mark Edmonds ◽  
Feng Gao ◽  
Hangxin Liu ◽  
Xu Xie ◽  
Siyuan Qi ◽  
...  

The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
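The joint use of the grammar and haptic models can be sketched in miniature. This is not the paper's improved Earley parser; it is only a hedged illustration, with hypothetical action names, of the underlying idea that a grammar-derived prior over next actions is fused with a haptic likelihood from force feedback before the robot commits to a step.

```python
# Illustrative fusion of a grammar prior with a haptic likelihood
# (hypothetical names; the paper's actual method is an improved Earley
# parser operating jointly over both models).

def fuse(grammar_prior, haptic_likelihood):
    """Bayesian-style combination over candidate next actions."""
    scores = {a: grammar_prior[a] * haptic_likelihood.get(a, 0.0)
              for a in grammar_prior}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

# Grammar model: which step usually comes next in the multistep task.
grammar_prior = {"grasp": 0.2, "push_down": 0.5, "twist": 0.3}
# Haptic model: which step the current force feedback supports.
haptic = {"grasp": 0.1, "push_down": 0.9, "twist": 0.4}

posterior = fuse(grammar_prior, haptic)
best = max(posterior, key=posterior.get)  # "push_down"
```

The point of the joint formulation is that neither source alone suffices for child-safety bottles: the grammar encodes task structure, while the haptic signal disambiguates steps that look identical visually.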


Author(s):  
Andreas Blank ◽  
Engin Karlidag ◽  
Lukas Zikeli ◽  
Maximilian Metzner ◽  
Jörg Franke

Concurrent with autonomous robots, teleoperation is gaining importance in industrial applications. This includes human–robot cooperation during complex or harmful operations and remote intervention. A key capability in teleoperation is translating operator inputs into robot movements; given the variety of tasks to be expected, providing different motion control types is therefore decisive. For a wide range of use cases, a high degree of interoperability with a variety of robot systems is required. In addition, the control input should support up-to-date Human Machine Interfaces. To address these challenges, we present a middleware for teleoperation of industrial robots that is adaptive with regard to motion control types. The middleware relies on an open-source robot meta-operating system and standardized communication. Evaluation is performed on defined tasks using different articulated robots, with performance and determinacy quantified. A sample implementation of the method is available at: https://github.com/FAU-FAPS/adaptive_motion_control.
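What "adaptive regarding motion control types" means can be sketched with a small dispatcher. This is only an illustration under assumed names, not the middleware's actual interface (which is ROS-based; see the linked repository): the same operator input vector is translated into a robot command differently depending on the selected control type.

```python
# Minimal sketch of a motion-control-type dispatcher (hypothetical names;
# the actual middleware builds on a robot meta-operating system).

def to_command(control_type, operator_input, current_pose, dt=0.01):
    """Map an operator input vector to a robot pose command."""
    if control_type == "position":
        # Input is interpreted as a target pose offset.
        return [p + d for p, d in zip(current_pose, operator_input)]
    if control_type == "velocity":
        # Input is interpreted as a Cartesian velocity; integrate one step.
        return [p + v * dt for p, v in zip(current_pose, operator_input)]
    raise ValueError(f"unsupported control type: {control_type}")

pose = [0.5, 0.0, 0.3]
cmd = to_command("position", [0.1, 0.0, 0.0], pose)
```

Routing through a single dispatcher is what lets one Human Machine Interface drive robots from different vendors: only the final command translation is robot-specific.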


2004 ◽  
Vol 16 (4) ◽  
pp. 346-347
Author(s):  
Kazuhiro Kosuge ◽  

A new research model proposed by the Science Council of Japan in 1999 [1, 2] is based on how research is conducted and culturally integrated into society. Motion control systems developed for robots as part of robot technology (RT) have potential applications both in actual robot systems and in other systems, as demonstrated by several examples showing how motion control schemes developed for robots can be applied.


2015 ◽  
Vol 27 (2) ◽  
pp. 121-121
Author(s):  
Toyomi Fujita ◽  
Takayuki Tanaka ◽  
Satoru Takahashi ◽  
Hidenori Takauji ◽  
Shun’ichi Kaneko

Robot vision is an important robotics and mechatronics technology for realizing intelligent robot systems that work in the real world. Recent improvements in computer processing are enabling environments to be recognized and robots to be controlled based on dynamic, high-speed, highly accurate image information. In industrial applications, target objects are detected much more robustly and reliably through high-speed processing. In intelligent systems applications, security systems that detect human beings have recently become an active area of computer vision. Another attractive application is recognizing actions and gestures by detecting humans – one that would enable human beings and robots to interact and cooperate more smoothly as robots observe and assist human partners. This key technology could be used to aid the elderly and handicapped in practical environments such as hospitals and homes. This special issue covers topics on robot vision and motion control, including dynamic image processing. These articles are certain to be both informative and interesting to robotics and mechatronics researchers. We thank the authors for submitting their work and for assisting during the review process. We also thank the reviewers for their dedicated time and effort.

