Temporal dynamics of real-world emotion are more strongly linked to prediction error than outcome.

2020 ◽  
Vol 149 (9) ◽  
pp. 1755-1766 ◽  
Author(s):  
William J. Villano ◽  
A. Ross Otto ◽  
C. E. Chiemeka Ezie ◽  
Roderick Gillis ◽  
Aaron S. Heller
Author(s):  
Debarun Bhattacharjya ◽  
Dharmashankar Subramanian ◽  
Tian Gao

Many real-world domains involve co-evolving relationships between events, such as meals and exercise, and time-varying random variables, such as a patient's blood glucose levels. In this paper, we propose a general framework for modeling joint temporal dynamics involving continuous-time transitions of discrete state variables and irregular arrivals of events over the timeline. We show that conditional Markov processes (as represented by continuous-time Bayesian networks) and multivariate point processes (as represented by graphical event models) are among the processes covered by the framework. We introduce two simple, interpretable, yet practical joint models within the framework and compare them against relevant baselines on simulated and real-world datasets, using a graph search algorithm for learning. The experiments highlight the importance of jointly modeling event arrivals and state-variable transitions to better fit joint temporal datasets, and the framework opens up possibilities for models involving even more complex dynamics where suitable.


2020 ◽  
Vol 34 (04) ◽  
pp. 5956-5963
Author(s):  
Xianfeng Tang ◽  
Huaxiu Yao ◽  
Yiwei Sun ◽  
Charu Aggarwal ◽  
Prasenjit Mitra ◽  
...  

Multivariate time series (MTS) forecasting is widely used in various domains, such as meteorology and traffic. Due to limitations on data collection, transmission, and storage, real-world MTS data usually contain missing values, making it infeasible to apply existing MTS forecasting models such as linear regression and recurrent neural networks. Though many efforts have been devoted to this problem, most of them rely solely on local dependencies for imputing missing values, ignoring global temporal dynamics. Local dependencies and patterns become less useful when the missing ratio is high or the data contain consecutive missing values, whereas exploiting global patterns can alleviate this problem. Thus, jointly modeling local and global temporal dynamics is very promising for MTS forecasting with missing values. However, work in this direction is rather limited. We therefore study the novel problem of MTS forecasting with missing values by jointly exploring local and global temporal dynamics. We propose a new framework that leverages a memory network to explore global patterns given estimations from local perspectives. We further introduce adversarial training to enhance the modeling of the global temporal distribution. Experimental results on real-world datasets show the effectiveness of the proposed framework for MTS forecasting with missing values and its robustness under various missing ratios.
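The local-plus-global idea can be sketched in a deliberately simplified form (this is not the paper's memory-network model): impute each missing value by blending a local estimate (linear interpolation between neighbouring observations) with a global estimate (the average of observed values at the same phase of an assumed repeating period). The period and blend weight are illustrative assumptions.

```python
import numpy as np

def impute_local_global(x, period, alpha=0.5):
    """Fill NaNs in a 1-D series by blending a local estimate
    (linear interpolation) with a global estimate (the mean of
    observed values at the same phase of a repeating period)."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    obs = ~np.isnan(x)
    # Local: linear interpolation between neighbouring observations.
    local = np.interp(idx, idx[obs], x[obs])
    # Global: per-phase mean over observed entries (a crude global pattern).
    phases = idx % period
    global_est = np.empty_like(x)
    for p in range(period):
        sel = (phases == p) & obs
        global_est[phases == p] = x[sel].mean() if sel.any() else np.nan
    # Blend the two estimates only where values are missing.
    filled = x.copy()
    blend = alpha * local + (1 - alpha) * global_est
    filled[~obs] = blend[~obs]
    return filled
```

When runs of consecutive values are missing, the local interpolation degenerates to a straight line, and the global per-phase term is what keeps the imputation informative, which is the intuition the abstract describes.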


Author(s):  
Yusuke Tanaka ◽  
Tomoharu Iwata ◽  
Takeshi Kurashima ◽  
Hiroyuki Toda ◽  
Naonori Ueda

Analyzing people flows is important for better navigation and location-based advertising. Since the location information of people is often aggregated for protecting privacy, it is not straightforward to estimate transition populations between locations from aggregated data. Here, aggregated data are incoming and outgoing people counts at each location; they do not contain tracking information of individuals. This paper proposes a probabilistic model for estimating unobserved transition populations between locations from only aggregated data. With the proposed model, temporal dynamics of people flows are assumed to be probabilistic diffusion processes over a network, where nodes are locations and edges are paths between locations. By maximizing the likelihood with flow conservation constraints that incorporate travel duration distributions between locations, our model can robustly estimate transition populations between locations. The statistically significant improvement of our model is demonstrated using real-world datasets of pedestrian data in exhibition halls, bike trip data and taxi trip data in New York City.
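The aggregate-to-transitions problem can be sketched with classical iterative proportional fitting, which enforces the flow-conservation margins described above (the paper's probabilistic diffusion model with travel-duration distributions is richer; this is only a baseline-style illustration, with made-up counts).

```python
import numpy as np

def estimate_transitions(outgoing, incoming, adjacency, n_iter=200):
    """Estimate an origin-destination matrix T (T[i, j] = people moving
    from location i to location j) from aggregate outgoing/incoming
    counts at each location, by iterative proportional fitting over
    the allowed edges of the location network."""
    outgoing = np.asarray(outgoing, dtype=float)
    incoming = np.asarray(incoming, dtype=float)
    T = np.asarray(adjacency, dtype=float).copy()  # uniform start on edges
    for _ in range(n_iter):
        # Scale rows so outflows match the observed outgoing counts.
        row = T.sum(axis=1, keepdims=True)
        T = T * np.where(row > 0, outgoing[:, None] / np.where(row > 0, row, 1.0), 0.0)
        # Scale columns so inflows match the observed incoming counts.
        col = T.sum(axis=0, keepdims=True)
        T = T * np.where(col > 0, incoming[None, :] / np.where(col > 0, col, 1.0), 0.0)
    return T
```

Entries outside the adjacency structure stay exactly zero, so only feasible paths receive flow; convergence assumes the total outgoing and incoming counts agree, which is the flow-conservation condition.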


2021 ◽  
Vol 15 ◽  
Author(s):  
Omar Eldardeer ◽  
Jonas Gonzalez-Billandon ◽  
Lukas Grasse ◽  
Matthew Tata ◽  
Francesco Rea

One of the fundamental prerequisites for effective collaboration between interactive partners is the mutual sharing of attentional focus on the same perceptual events, referred to as joint attention. Its defining elements have been widely pinpointed in the psychological, cognitive, and social sciences. The field of human-robot interaction has also extensively exploited joint attention, which has been identified as a fundamental prerequisite for proficient human-robot collaboration. However, joint attention between robots and human partners is often encoded in pre-defined robot behaviours that do not fully address the dynamics of interactive scenarios. We provide autonomous attentional behaviour for robots, based on multi-sensory perception, that robustly relocates the focus of attention onto the same targets the human partner attends to. Further, we investigated how such joint attention between a human and a robot partner improved with a new biologically inspired, memory-based attention component. We assessed the model on the humanoid robot iCub performing a joint task with a human partner in a real-world unstructured scenario. The model showed robust performance in capturing the stimulation, making a localisation decision within the right time frame, and then executing the right action. We then compared the attention performance of the robot against human performance when stimulated from the same source across different modalities (audio-visual and audio-only). The comparison showed that the model behaves with temporal dynamics compatible with those of humans, providing an effective solution for memory-based joint attention in real-world unstructured environments. We further analyzed localisation performance (reaction time and accuracy); the robot performed better in the audio-visual condition than in the audio-only condition. The performance of the robot in the audio-visual condition was comparable with that of the human participants, whereas it was less efficient in audio-only localisation. After a detailed analysis of the internal components of the architecture, we conclude that the differences in performance are due to ego-noise, which significantly affects audio-only localisation performance.


2021 ◽  
Author(s):  
Enea Ceolini ◽  
Ruchella Kock ◽  
Gijsbert Stoet ◽  
Guido Band ◽  
Arko Ghosh

Cognitive and behavioral abilities change across the adult life span. Smartphones engage various cognitive functions, and the corresponding touchscreen interactions may help resolve whether and how behavior is systematically structured by aging. Here, in a sample spanning the adult lifespan (16 to 86 years, N = 598, accumulating 355 million interactions), we analyzed a range of interaction intervals, from a few milliseconds to a minute. We used probability distributions to cluster the interactions according to their next inter-touch interval dynamics, to discover systematic age-related changes at the distinct temporal clusters. There were age-related behavioral losses at the clusters occupying short intervals (~100 ms, R2 ~ 0.8) but gains at the long intervals (~4 s, R2 ~ 0.4). These correlates were independent of the years of experience with the phone or the choice of fingers used on the screen. We found further evidence for a compartmentalized influence of aging, as individuals simultaneously demonstrated both accelerated and decelerated aging at distant temporal clusters. In contrast to these strong correlations, cognitive tests probing sensorimotor, working memory, and executive processes revealed rather weak age-related decline. Contrary to the common notion, based on conventional cognitive tests, of a simple behavioral decline with age, we show that real-world behavior does not simply decline and that the nature of aging systematically varies according to the underlying temporal dynamics. Of all the imaginable factors determining smartphone interactions in the real world, age-sensitive cognitive and behavioral processes can largely dictate smartphone temporal dynamics.
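How interaction intervals might be grouped into temporal clusters can be sketched with a simple 1-D k-means on log inter-touch intervals (the study clusters by next-interval probability distributions; this log-interval k-means is only a simplified stand-in, and all parameters are illustrative).

```python
import numpy as np

def cluster_intervals(touch_times, k=2, n_iter=50):
    """Cluster inter-touch intervals on a log scale with 1-D k-means,
    separating e.g. fast (~100 ms) from slow (~seconds) interactions.
    Returns (cluster centres in seconds, label per interval)."""
    intervals = np.diff(np.sort(np.asarray(touch_times, dtype=float)))
    intervals = intervals[intervals > 0]
    log_iv = np.log(intervals)
    # Deterministic init: spread centres across the observed quantiles.
    centres = np.quantile(log_iv, np.linspace(0.0, 1.0, k))
    for _ in range(n_iter):
        # Assign each interval to its nearest centre in log space.
        labels = np.argmin(np.abs(log_iv[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = log_iv[labels == j].mean()
    return np.exp(centres), labels
```

Working in log space matters because the intervals of interest span several orders of magnitude (milliseconds to tens of seconds), so clusters that are obvious on a log axis would be swamped by the long tail on a linear one.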


2020 ◽  
Author(s):  
Eelke Spaak ◽  
Marius V. Peelen ◽  
Floris P. de Lange

Abstract

Visual scene context is well-known to facilitate the recognition of scene-congruent objects. Interestingly, however, according to the influential theory of predictive coding, scene congruency should lead to reduced (rather than enhanced) processing of congruent objects, compared to incongruent ones, since congruent objects elicit reduced prediction error responses. We tested this counterintuitive hypothesis in two online behavioural experiments with human participants (N = 300). We found clear evidence for impaired perception of congruent objects, both in a change detection task measuring response times as well as in a bias-free object discrimination task measuring accuracy. Congruency costs were related to independent subjective congruency ratings. Finally, we show that the reported effects cannot be explained by low-level stimulus confounds, response biases, or top-down strategy. These results provide convincing evidence for perceptual congruency costs during scene viewing, in line with predictive coding theory.

Statement of Relevance

The theory of the 'Bayesian brain', the idea that our brain is a hypothesis-testing machine, has become very influential over the past decades. A particularly influential formulation is the theory of predictive coding. This theory entails that stimuli that are expected, for instance because of the context in which they appear, generate a weaker neural response than unexpected stimuli. Scene context correctly 'predicts' congruent scene elements, which should result in lower prediction error. Our study tests this important, counterintuitive, and hitherto not fully tested hypothesis. We find clear evidence in favour of it, and demonstrate that these 'congruency costs' are indeed evident in perception, and not limited to one particular task setting or stimulus set. Since perception in the real world is never of isolated objects, but always of entire scenes, these findings are important not just for the Bayesian brain hypothesis, but for our understanding of real-world visual perception in general.


Author(s):  
Shubham Gupta ◽  
Gaurav Sharma ◽  
Ambedkar Dukkipati

Networks observed in the real world, such as social networks and collaboration networks, exhibit temporal dynamics: nodes and edges appear and/or disappear over time. In this paper, we propose a generative, latent space based, statistical model for such networks (called dynamic networks). We consider the case where the number of nodes is fixed, but the presence of edges can vary over time. Our model allows the number of communities in the network to differ across time steps. We use a neural network based methodology to perform approximate inference in the proposed model and its simplified version. Experiments on synthetic and real-world networks for the tasks of community detection and link prediction demonstrate the utility and effectiveness of our model compared to similar existing approaches.
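The latent-space idea can be illustrated with a toy generative process (a sketch, not the authors' model): a fixed set of nodes whose latent positions drift over time, with edge probability decaying in latent distance, so edges appear and disappear across snapshots. All parameters are illustrative.

```python
import numpy as np

def generate_dynamic_network(n_nodes, n_steps, dim=2, drift=0.1,
                             scale=1.0, seed=0):
    """Generate adjacency snapshots of a dynamic network from a
    latent-space model: node positions follow a Gaussian random walk,
    and closer pairs in latent space are more likely to be linked
    (fixed node set, time-varying undirected edges)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_nodes, dim))          # initial latent positions
    snapshots = []
    for _ in range(n_steps):
        z = z + drift * rng.normal(size=z.shape) # latent positions drift
        d = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
        p = 1.0 / (1.0 + np.exp(d - scale))      # edge prob. decays with distance
        a = rng.random((n_nodes, n_nodes)) < p
        a = np.triu(a, 1)                        # keep upper triangle only
        a = a | a.T                              # symmetrise; no self-loops
        snapshots.append(a.astype(int))
    return snapshots
```

Inference in such models reverses this process, recovering latent trajectories (and, in the paper's case, time-varying community structure) from the observed snapshots.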


2017 ◽  
Vol 17 (10) ◽  
pp. 574
Author(s):  
Seyed-Mahdi Khaligh-Razavi ◽  
Radoslaw Cichy ◽  
Dimitrios Pantazis ◽  
Aude Oliva

2015 ◽  
Vol 15 (12) ◽  
pp. 740 ◽  
Author(s):  
Daniel Kaiser ◽  
Nikolaas Oosterhof ◽  
Marius Peelen
