Programming full-body movements for humanoid robots by observation

2004 ◽  
Vol 47 (2-3) ◽  
pp. 93-108 ◽  
Author(s):  
Aleš Ude ◽  
Christopher G. Atkeson ◽  
Marcia Riley

2017 ◽
Vol 5 (2) ◽  
pp. 291-303
Author(s):  
Maxime Trempe ◽  
Jean-Luc Gohier ◽  
Mathieu Charbonneau ◽  
Jonathan Tremblay

In recent years, it has been shown that spacing training sessions by several hours allows the consolidation of motor skills in the brain, a process leading to the stabilization of the skills and, sometimes, further improvement without additional practice. At the moment, it is unknown whether consolidation can lead to an improvement in performance when the learner performs complex full-body movements. To explore this question, we recruited 10 divers and had them practice a challenging diving maneuver. Divers first performed an initial training session, consisting of 12 dives during which visual feedback was provided immediately after each dive through video replay. Two retention tests without feedback were performed 30 min and 24 hr after the initial training session. All dives were recorded using a video camera and the participants’ performance was assessed by measuring the verticality of the body segments at water entry. Significant performance gains were observed in the 24-hr retention test (p < .05). These results suggest that the learning of complex full-body movements can benefit from consolidation and that splitting practice sessions can be used as a training tool to facilitate skill acquisition.
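The abstract states that performance was scored by the verticality of the body segments at water entry. As a purely illustrative sketch (the paper's actual measurement procedure is not given here), one way such a score could be computed from digitized landmark coordinates is the angle between a segment and the vertical image axis; the function name and the two-endpoint representation below are assumptions, not the authors' method:

```python
import math

def segment_verticality(upper, lower):
    """Angle in degrees between a body segment and the vertical axis.

    upper, lower: (x, y) image coordinates of the segment endpoints,
    with y increasing downward as in video frames. A perfectly
    vertical segment scores 0 degrees.
    """
    dx = lower[0] - upper[0]
    dy = lower[1] - upper[1]
    # atan2 of horizontal over vertical displacement: 0 deg means vertical
    return abs(math.degrees(math.atan2(dx, dy)))

# Example: a segment straight down the frame is fully vertical
print(segment_verticality((100, 50), (100, 200)))  # 0.0
```

A smaller angle would indicate a cleaner, more vertical entry, so averaging this score over the relevant body segments would give a single per-dive measure.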


Author(s):  
Bernd J. Kröger ◽  
Peter Birkholz ◽  
Christiane Neuschaefer-Rube

Abstract: While we are capable of modeling the shape of humanoid robots (e.g. face, arms) in a nearly natural or human-like way, it is much more difficult to generate human-like facial or body movements and human-like behavior such as speaking and co-speech gesturing. This paper argues for a developmental robotics approach to learning to speak. On the basis of the current literature, a blueprint of a brain model for such robots is outlined and preliminary scenarios for knowledge acquisition are described. It is further illustrated that natural speech acquisition results mainly from learning during face-to-face communication, and it is argued that learning to speak should therefore be based on human-robot face-to-face communication, in which the human acts as a caretaker or teacher and the robot acts as a speech-acquiring toddler. This is a fruitful basic scenario not only for learning to speak, but also for learning to communicate in general, including producing co-verbal manual gestures and co-verbal facial expressions.

