Learning Sequential Force Interaction Skills

Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 45
Author(s):  
Simon Manschitz ◽  
Michael Gienger ◽  
Jens Kober ◽  
Jan Peters

Learning skills from kinesthetic demonstrations is a promising way of minimizing the gap between human manipulation abilities and those of robots. We propose an approach for learning sequential force interaction skills from such demonstrations. The demonstrations are decomposed into a set of movement primitives by inferring the underlying sequential structure of the task. The decomposition is based on a novel probability distribution which we call the Directional Normal Distribution. The distribution allows inferring a movement primitive's composition, i.e., its coordinate frames, control variables, and target coordinates, from the demonstrations. In addition, it permits determining an appropriate number of movement primitives for a task via model selection. After finding the task's composition, the system learns to sequence the resulting movement primitives so that the task can be reproduced on a real robot. We evaluate the approach on three different tasks: unscrewing a light bulb, box stacking, and box flipping. All tasks are kinesthetically demonstrated and then reproduced on a Barrett WAM robot.

2017 ◽  
Vol 36 (8) ◽  
pp. 879-894 ◽  
Author(s):  
Rudolf Lioutikov ◽  
Gerhard Neumann ◽  
Guilherme Maeda ◽  
Jan Peters

Movement primitives are a well-established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often-neglected mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference, our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.


Author(s):  
Dominik M. Endres ◽  
Enrico Chiovetto ◽  
Martin A. Giese

2001 ◽  
Vol 13 (7) ◽  
pp. 1649-1681 ◽  
Author(s):  
Masa-aki Sato

The Bayesian framework provides a principled way of model selection. This framework estimates a probability distribution over an ensemble of models, and the prediction is done by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized. However, integration over model parameters is often intractable, and some approximation scheme is needed. Recently, a powerful approximation scheme, called the variational Bayes (VB) method, has been proposed. This approach defines the free energy for a trial probability distribution, which approximates a joint posterior probability distribution over model parameters and hidden variables. The exact maximization of the free energy gives the true posterior distribution. The VB method uses factorized trial distributions. The integration over model parameters can then be done analytically, and an iterative expectation-maximization-like algorithm, whose convergence is guaranteed, is derived. In this article, we derive an online version of the VB algorithm and prove its convergence by showing that it is a stochastic approximation for finding the maximum of the free energy. Combined with a sequential model selection procedure, the online VB method provides a fully online learning method with a model selection mechanism. In preliminary experiments using synthetic data, the online VB method was able to adapt the model structure to dynamic environments.
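The factorized coordinate-ascent updates described in the abstract can be illustrated on the simplest case. The sketch below is not the paper's online algorithm but a minimal batch VB example, assuming a Gaussian with unknown mean `mu` and precision `tau` under a Normal-Gamma prior, with the trial distribution factorized as q(mu)q(tau); the hyperparameter values are arbitrary choices for the illustration.

```python
# Minimal sketch (not the paper's online algorithm): batch variational Bayes
# for a Gaussian with unknown mean mu and precision tau, using the factorized
# approximation q(mu, tau) = q(mu) q(tau).
import random

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(2000)]  # synthetic data

n = len(data)
s1 = sum(data)                  # sum of x_i
s2 = sum(x * x for x in data)   # sum of x_i^2
xbar = s1 / n

# Normal-Gamma prior hyperparameters (arbitrary for this sketch).
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0

E_tau = a0 / b0  # initial guess for E[tau]
for _ in range(50):  # coordinate ascent: alternate the q(mu) and q(tau) updates
    # q(mu) = Normal(mu_n, 1 / lam_n)
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * E_tau
    var_mu = 1.0 / lam_n
    # q(tau) = Gamma(a_n, b_n); b_n uses expectations under q(mu)
    a_n = a0 + (n + 1) / 2.0
    b_n = b0 + 0.5 * (s2 - 2 * mu_n * s1 + n * (mu_n**2 + var_mu)
                      + lam0 * ((mu_n - mu0)**2 + var_mu))
    E_tau = a_n / b_n

print(mu_n, E_tau)  # posterior mean of mu and E[tau]
```

The online variant in the article replaces the batch sufficient statistics (`s1`, `s2`) with discounted running averages updated one sample at a time, which is what makes the stochastic-approximation convergence argument apply.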


2017 ◽  
Vol 2 (2) ◽  
pp. 977-984 ◽  
Author(s):  
Debora Clever ◽  
Monika Harant ◽  
Katja Mombaur ◽  
Maximilien Naveau ◽  
Olivier Stasse ◽  
...  

2020 ◽  
Author(s):  
Luca Onnis ◽  
Hongoak Yun

Two long-standing and interrelated questions are central to the study of language and cognition. The first is whether shared or separate mechanisms subserve language learning versus language processing. The second is whether shared or separate mechanisms underlie processing in a first versus a second language. Using an individual differences paradigm, we sought evidence for common mechanisms by probing implicit statistical learning (SL) skills and online reading in a second language. We found that individuals with better statistical learning skills more efficiently incorporate word-level statistical regularities while reading in their second language, making them more efficient L2 readers. In addition, sensitivity to forward and backward lexical predictability reduced reading times. Thus, sensitivity to statistical sequential structure may be a common mechanism underlying both implicit learning and language processing.


2019 ◽  
Author(s):  
Vincenzo Totaro ◽  
Andrea Gioia ◽  
Vito Iacobellis

Abstract. The need to fit time series characterized by trends or change points has generated in recent years increased interest in the investigation of non-stationary probability distributions. Considering that an available hydrological time series can be regarded as the observable part of a stochastic process with a definite probability distribution, two main topics can be tackled in this context: the first concerns the definition of an objective criterion for deciding whether the stationary hypothesis can be adopted, while the second regards the effects of non-stationarity on the estimation of distribution parameters and of quantiles for an assigned return period, and on flood risk evaluation. Although trends or change points in a time series can be detected using classical tests available in the literature (e.g., the Mann–Kendall or CUSUM test), for design purposes the correct selection of a stationary or non-stationary probability distribution is still required. In this light, the focus shifts toward model selection criteria, which implies the use of parametric methods with all the related issues of parameter estimation. The aim of this study is to compare the performance of parametric and non-parametric methods for trend detection, analysing their power and focusing on the use of traditional model selection tools (e.g., the Akaike Information Criterion and the Likelihood Ratio test) in this context. The power and efficiency of parameter estimation, including that of the trend coefficient, were investigated through Monte Carlo simulations using the Generalized Extreme Value distribution as the parent distribution with selected parameter sets.
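The model selection step the abstract describes can be sketched in miniature. The example below is not the study's GEV analysis: it assumes a simple Gaussian likelihood in place of the Generalized Extreme Value distribution, and compares a stationary model against a linear-trend model on a synthetic series via AIC, where the lower value identifies the better-supported model.

```python
# Minimal sketch (Gaussian stand-in for the GEV of the study): stationary vs
# linear-trend model selection on a synthetic series using AIC = 2k - 2 logL.
import math
import random

random.seed(1)
n = 100
series = [0.05 * t + random.gauss(0.0, 1.0) for t in range(n)]  # trend + noise

def gauss_loglik(resid, sigma2):
    # Log-likelihood of residuals under N(0, sigma2).
    return sum(-0.5 * math.log(2 * math.pi * sigma2) - r * r / (2 * sigma2)
               for r in resid)

# Model 0 (stationary): x_t ~ N(mu, sigma^2), k = 2 parameters.
mu = sum(series) / n
res0 = [x - mu for x in series]
s0 = sum(r * r for r in res0) / n          # MLE of sigma^2
aic0 = 2 * 2 - 2 * gauss_loglik(res0, s0)

# Model 1 (non-stationary): x_t ~ N(a + b t, sigma^2), k = 3; OLS gives the MLE.
tbar = (n - 1) / 2.0
b = sum((t - tbar) * (x - mu) for t, x in enumerate(series)) / \
    sum((t - tbar) ** 2 for t in range(n))  # trend coefficient
a = mu - b * tbar
res1 = [x - (a + b * t) for t, x in enumerate(series)]
s1 = sum(r * r for r in res1) / n
aic1 = 2 * 3 - 2 * gauss_loglik(res1, s1)

print(aic0, aic1, b)  # lower AIC selects the non-stationary model here
```

The same recipe carries over to the non-stationary GEV case by letting the location parameter depend linearly on time and maximizing the GEV log-likelihood numerically, with the extra trend coefficient counted in k.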


Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 2
Author(s):  
Alejandro Romero ◽  
Francisco Bellas ◽  
Jose A. Becerra ◽  
Richard J. Duro

In this paper, we address the problem of how to bootstrap a cognitive architecture so that it opportunistically starts learning skills in domains where multiple skills can be learned at the same time. To this end, taking inspiration from a series of computational models of the use of motivations in infants, we propose an approach that leverages two types of cognitive motivations: exploratory and proficiency-based, the latter modulated by the concept of interestingness as an implementation of attentional mechanisms. This approach is tested in an illustrative experiment with a real robot.


2017 ◽  
Author(s):  
Anu Kauppi ◽  
Pekka Kolmonen ◽  
Marko Laine ◽  
Johanna Tamminen

Abstract. We discuss uncertainty quantification for aerosol type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses pre-calculated aerosol microphysical models stored in look-up tables (LUTs) together with top-of-atmosphere spectral reflectance measurements to solve for the aerosol characteristics. The forward model approximations cause systematic differences between the modeled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on aerosol microphysical model selection and on characterizing the uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach in which all uncertainties are described by a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models into an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a given main aerosol type in order to quantify how plausible each main aerosol type is in representing the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multi-wavelength approach for retrieving the aerosol type and an AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel-set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites.
We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty of model selection. The posterior probability distribution can provide a comprehensive characterization of the uncertainty in this kind of aerosol type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
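The Bayesian model averaging step can be sketched with a toy calculation. The numbers below are hypothetical (not OMI results), and the per-model AOD posteriors are assumed Gaussian for simplicity: each candidate aerosol model contributes an evidence value, and the averaged AOD posterior is the evidence-weighted mixture, whose variance includes the between-model spread that represents model selection uncertainty.

```python
# Minimal sketch (hypothetical numbers, not OMI data): Bayesian model
# averaging of per-model AOD posteriors weighted by model evidence.
# Each entry: (evidence, AOD posterior mean, AOD posterior std).
models = {
    "weakly_absorbing": (0.50, 0.30, 0.03),
    "biomass_burning":  (0.35, 0.34, 0.05),
    "desert_dust":      (0.15, 0.25, 0.06),
}

# Posterior model probabilities: evidence normalized over the candidates.
total_evidence = sum(e for e, _, _ in models.values())
weights = {name: e / total_evidence for name, (e, _, _) in models.items()}

# Mixture mean, and mixture variance via the law of total variance:
# within-model variance plus between-model spread of the means.
mean_bma = sum(weights[name] * m for name, (_, m, _) in models.items())
var_bma = sum(weights[name] * (s**2 + (m - mean_bma) ** 2)
              for name, (_, m, s) in models.items())

print(mean_bma, var_bma ** 0.5)  # averaged AOD and its total spread
```

The between-model term in `var_bma` is what lets the total uncertainty budget grow when several aerosol models fit comparably well, which is the behavior the abstract describes.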

