Agnostic Learning in Permutation-Invariant Domains

2016 ◽  
Vol 12 (4) ◽  
pp. 1-22
Author(s):  
Karl Wimmer


Author(s):  
Raffaella Carbone ◽  
Federico Girotti

Abstract
We introduce a notion of absorption operators in the context of quantum Markov processes. The absorption problem in invariant domains (enclosures) is treated for a quantum Markov evolution on a separable Hilbert space, in both discrete and continuous time: we define a well-behaved set of positive operators that can correspond to classical absorption probabilities, and we study their basic properties, both in general and with respect to the accessibility structure of channels, transience and recurrence. In particular, we prove that no accessibility is allowed between the null and positive recurrent subspaces. In the case when the positive recurrent subspace is attractive, ergodic theory allows us to obtain additional results, in particular about the description of fixed points.
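As a point of reference, a minimal sketch of the classical picture this abstract generalizes (the notation below is illustrative, not the paper's): for a discrete-time Markov chain with transition kernel p, the absorption probability into a closed set D is the minimal solution of

```latex
h_D(x) = \mathbb{P}_x(\tau_D < \infty), \qquad
h_D(x) =
\begin{cases}
  1 & x \in D,\\[2pt]
  \sum_y p(x,y)\, h_D(y) & x \notin D,
\end{cases}
```

so h_D is harmonic off D. The absorption operators introduced in the abstract play the analogous role for a quantum channel: positive operators, fixed by the dual evolution, that encode the probability of being absorbed into an enclosure.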


2020 ◽  
Vol 34 (10) ◽  
pp. 13720-13721
Author(s):  
Won Kyung Lee

Multivariate time-series forecasting has great potential in various domains. However, it is challenging to find the dependency structure among the time-series variables and the appropriate time lags for each variable, both of which change dynamically over time. In this study, I suggest a partial-correlation-based attention mechanism that overcomes the shortcomings of existing pairwise-comparison-based attention mechanisms. Moreover, I propose data-driven series-wise multi-resolution convolutional layers to represent the input time-series data for domain-agnostic learning.
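The distinction the abstract draws is that a partial correlation between two series controls for the influence of all other series, unlike a plain pairwise correlation. A minimal sketch of how partial correlations could drive attention weights (the function names and the softmax scoring are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def partial_correlation(X):
    """Partial correlation matrix of samples X (shape T x N):
    invert the covariance to get the precision matrix, then
    normalize its off-diagonal entries."""
    prec = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

def attention_weights(X, target):
    """Softmax over the absolute partial correlations of every
    series with the target series (self-attention excluded)."""
    scores = np.abs(partial_correlation(X)[target])
    scores[target] = 0.0
    e = np.exp(scores - scores.max())
    return e / e.sum()
```

A pairwise scheme would score series i against series j in isolation; here a series that merely co-moves with the target through a third variable receives a small partial correlation, and hence little attention.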


PLoS ONE ◽  
2020 ◽  
Vol 15 (3) ◽  
pp. e0229560
Author(s):  
George Amadeus Prenosil ◽  
Thilo Weitzel ◽  
Markus Fürstner ◽  
Michael Hentschel ◽  
Thomas Krause ◽  
...  

2016 ◽  
Vol 13 (120) ◽  
pp. 20160310 ◽  
Author(s):  
Ajaz Ahmad Bhat ◽  
Vishwanathan Mohan ◽  
Giulio Sandini ◽  
Pietro Morasso

Emerging studies indicate that several species such as corvids, apes and children solve 'The Crow and the Pitcher' task (from Aesop's Fables) in diverse conditions. Hidden beneath this fascinating paradigm is a fundamental question: by cumulatively interacting with different objects, how can an agent abstract the underlying cause–effect relations to predict and creatively exploit potential affordances of novel objects in the context of sought goals? Re-enacting this Aesop's Fable task on a humanoid within an open-ended 'learning–prediction–abstraction' loop, we address this problem and (i) present a brain-guided neural framework that emulates rapid one-shot encoding of ongoing experiences into a long-term memory and (ii) propose four task-agnostic learning rules (elimination, growth, uncertainty and status quo) that correlate predictions from remembered past experiences with the unfolding present situation to gradually abstract the underlying causal relations. Driven by the proposed architecture, the ensuing robot behaviours illustrated causal learning and anticipation similar to natural agents. Results further demonstrate that by cumulatively interacting with a few objects, the predictions of the robot for novel objects converge close to the physical law, i.e. Archimedes' principle, independently of both the objects explored during learning and the order of their cumulative exploration.
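The physical law the robot's predictions converge toward can be stated compactly. A minimal sketch of the ground truth for the pitcher task under Archimedes' principle (the function and its parameters are illustrative, not part of the paper's framework): a denser-than-water object sinks and displaces its full volume, while a floating object displaces only the water whose weight equals its own.

```python
def water_rise(object_volume, object_density, pitcher_area, water_density=1.0):
    """Rise in water level when an object is dropped into a pitcher
    of uniform cross-section, per Archimedes' principle."""
    if object_density >= water_density:
        displaced = object_volume                    # sinks: full volume displaced
    else:
        displaced = object_volume * object_density / water_density  # floats
    return displaced / pitcher_area
```

This is the relation the learning rules must abstract from interaction alone: only sufficiently dense objects raise the water level in proportion to their volume, regardless of which objects were explored or in what order.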

