Robot Learning from Demonstration: A Task-level Planning Approach

10.5772/5611 ◽  
2008 ◽  
Vol 5 (3) ◽  
pp. 33 ◽  
Author(s):  
Staffan Ekvall ◽  
Danica Kragic
2014 ◽  
Vol 565 ◽  
pp. 194-197
Author(s):  
Anna Gorbenko

We consider the problem of task-level robot learning from demonstration. In particular, we consider a model that uses a hierarchical control structure. For this model, we pose the problem of selecting action examples and present a polynomial-time algorithm for solving it. We also report some experimental results for task-level learning from demonstration.
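The abstract does not spell out the algorithm, so the following is only a hypothetical illustration of what polynomial-time selection of action examples could look like: a greedy set-cover pass that repeatedly keeps the demonstrated example covering the most still-uncovered subtasks of a hierarchical task model. The names and the coverage model are assumptions, not the paper's method.

```python
# Hypothetical sketch (not the paper's algorithm): greedily select
# action examples so that every subtask in a hierarchical task model
# is covered by at least one demonstrated example.

def select_examples(subtasks, examples):
    """examples: dict mapping example id -> set of subtasks it covers."""
    uncovered = set(subtasks)
    chosen = []
    while uncovered:
        # Pick the example covering the most still-uncovered subtasks.
        best = max(examples, key=lambda e: len(examples[e] & uncovered))
        if not examples[best] & uncovered:
            raise ValueError("some subtasks cannot be covered")
        chosen.append(best)
        uncovered -= examples[best]
    return chosen

demos = {
    "grasp_demo": {"reach", "grasp"},
    "carry_demo": {"lift", "move"},
    "full_demo":  {"reach", "grasp", "lift"},
}
print(select_examples({"reach", "grasp", "lift", "move"}, demos))
# -> ['full_demo', 'carry_demo']
```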


2014 ◽  
Vol 1016 ◽  
pp. 612-616
Author(s):  
Anna Gorbenko ◽  
Vladimir Popov

Various problems of task-level robot learning from demonstration have received substantial attention recently. Among others, we can mention the investigation of motor primitives; in particular, rhythmic motor tasks are very important. Recently, the approximate period problem was proposed as a model for investigating sequences of motor primitives. In this paper, we consider the approximate period problem and some modifications of it for the investigation of sequences of rhythmic motor primitives.
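For readers unfamiliar with the approximate period problem, here is a minimal sketch of one common formulation (an assumption here; the paper's exact model may differ): a demonstration is encoded as a string of motor-primitive symbols, and we search for the period length whose best periodic string is closest in Hamming distance.

```python
# Illustrative sketch (assumed formulation, not the paper's exact model):
# treat a demonstration as a string of motor-primitive symbols and find
# the period length whose best periodic string is closest in Hamming
# distance -- an "approximate period" of the primitive sequence.
# Assumes max_period <= len(seq).

from collections import Counter

def approximate_period(seq, max_period):
    best = (len(seq) + 1, None)  # (mismatch count, period length)
    for p in range(1, max_period + 1):
        cost = 0
        for r in range(p):                 # residue class r mod p
            column = seq[r::p]
            # Mismatches against the majority symbol of this column.
            cost += len(column) - Counter(column).most_common(1)[0][1]
        best = min(best, (cost, p))
    return best  # minimal mismatches and the corresponding period

# e.g. a noisy repetition of the primitive pattern "step-step-turn":
print(approximate_period("SSTSSTSXTSST", 4))  # -> (1, 3)
```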


2014 ◽  
Vol 950 ◽  
pp. 233-236
Author(s):  
Vladimir Popov

Different problems of robot learning and planning have received considerable attention recently; in particular, robot task learning. Robot learning from demonstration is especially important for robots that operate in unstructured environments. The effectiveness of such learning depends strongly on the quality of vision-based analysis of human hand and body gestures. In this paper, we consider a method for recognizing human hand and body gestures that is based on a modified longest common subsequence algorithm with adaptive parameters.
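As an illustration of the underlying machinery, the following sketch uses the classic longest common subsequence dynamic program to match an observed gesture against templates of quantized pose symbols; the paper's adaptive-parameter modification is not reproduced here.

```python
# Minimal sketch assuming plain LCS as the similarity measure:
# classify an observed gesture by the template with the longest
# common subsequence of quantized pose symbols.

def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def classify(observed, templates):
    # Normalize by template length so short templates are not favored.
    return max(templates,
               key=lambda g: lcs_length(observed, templates[g]) / len(templates[g]))

templates = {"wave": "LRLRLR", "point": "FFFF"}   # quantized pose symbols
print(classify("LRLLRLR", templates))             # -> "wave"
```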


1987 ◽  
Author(s):  
Eric Aboaf ◽  
Christopher G. Atkeson ◽  
David J. Reinkensmeyer

Author(s):  
Hangxin Liu ◽  
Chi Zhang ◽  
Yixin Zhu ◽  
Chenfanfu Jiang ◽  
Song-Chun Zhu

This paper presents a mirroring approach, inspired by the neuroscience discovery of mirror neurons, to transfer demonstrated manipulation actions to robots. Designed to address the different embodiments between a human (demonstrator) and a robot, this approach extends classic robot Learning from Demonstration (LfD) in the following aspects: i) it incorporates fine-grained hand forces collected by a tactile glove during demonstration to learn the robot's fine manipulative actions; ii) through model-free reinforcement learning and grammar induction, the demonstration is represented by a goal-oriented grammar consisting of goal states and the corresponding forces to reach them, independent of robot embodiment; iii) a physics-based simulation engine is applied to emulate various robot actions and mirror those that are functionally equivalent to the human's, in the sense of causing the same state changes by exerting similar forces. Through this approach, a robot reasons about which forces to exert and which goals to achieve in order to generate actions (i.e., mirroring), rather than strictly mimicking the demonstration (i.e., overimitation); the embodiment difference between a human and a robot is thus naturally overcome. In the experiment, we demonstrate the proposed approach by teaching a real Baxter robot a complex manipulation task involving haptic feedback: opening medicine bottles.
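A highly simplified sketch of the mirroring idea follows. The simulate helper, the state and force representation, and the tolerance are all hypothetical stand-ins for the paper's physics-based simulation and learned grammar.

```python
# Hypothetical sketch of mirroring: the demonstration is a sequence of
# (goal_state, force) pairs, and for each pair the robot picks whichever
# of its own actions, when simulated, reaches an equivalent state while
# exerting a similar force. `simulate` is an assumed helper, not the
# authors' API.

def mirror(demo, candidate_actions, simulate, force_tol=0.5):
    """demo: list of (goal_state, force); simulate(action) -> (state, force)."""
    plan = []
    for goal_state, demo_force in demo:
        feasible = []
        for action in candidate_actions:
            state, force = simulate(action)
            if state == goal_state and abs(force - demo_force) <= force_tol:
                feasible.append((abs(force - demo_force), action))
        if not feasible:
            raise RuntimeError(f"no functionally equivalent action for {goal_state}")
        plan.append(min(feasible)[1])  # closest matching force wins
    return plan

# Toy bottle-opening example with a table standing in for the simulator:
demo = [("cap_loose", 2.0), ("cap_off", 0.5)]
sim = {"twist_hard": ("cap_loose", 2.2),
       "twist_soft": ("cap_loose", 0.8),
       "pull":       ("cap_off", 0.4)}
print(mirror(demo, sim, lambda a: sim[a]))  # -> ['twist_hard', 'pull']
```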


2014 ◽  
Vol 875-877 ◽  
pp. 1994-1999
Author(s):  
James Aaron Debono ◽  
Gu Fang

For robot applications to proliferate in industry and in unregulated environments, a simple means of programming is required. This paper describes methods for robot Learning from Demonstration (LfD). These methods use an RGB-D sensor for demonstration observation and finite state machines (FSMs) for policy derivation. In particular, a method for object recognition was developed that requires only a single frame of data for training and performs recognition in real time. A planning method for object grasping was also developed. Experiments with a pick-and-place robot show that the developed methods achieve object recognition accuracy greater than 99% in cluttered scenes, and manipulation accuracies below 3 mm in linear motion and 2° in rotation.
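The abstract does not give the derived machines, so the following is a hand-written illustration of the kind of finite state machine such a pick-and-place policy might compile to; the states and sensor predicates are assumptions, not the authors' derived FSM.

```python
# Illustrative hand-written FSM for a pick-and-place policy: each state
# maps an observation dict of boolean sensor predicates to the next state.

TRANSITIONS = {
    "SEARCH":   lambda obs: "APPROACH" if obs["object_seen"] else "SEARCH",
    "APPROACH": lambda obs: "GRASP" if obs["at_object"] else "APPROACH",
    "GRASP":    lambda obs: "MOVE" if obs["holding"] else "GRASP",
    "MOVE":     lambda obs: "PLACE" if obs["at_target"] else "MOVE",
    "PLACE":    lambda obs: "DONE" if not obs["holding"] else "PLACE",
}

def step(state, obs):
    """Advance the machine by one observation; DONE is absorbing."""
    return TRANSITIONS[state](obs) if state != "DONE" else "DONE"

state = "SEARCH"
obs = {"object_seen": True, "at_object": True,
       "holding": False, "at_target": False}
state = step(state, obs)   # SEARCH -> APPROACH
state = step(state, obs)   # APPROACH -> GRASP
print(state)               # -> "GRASP"
```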


2011 ◽  
Vol 31 (3) ◽  
pp. 360-375 ◽  
Author(s):  
George Konidaris ◽  
Scott Kuindersma ◽  
Roderic Grupen ◽  
Andrew Barto
