Inferring an Observer’s Prediction Strategy in Sequence Learning Experiments

Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 896 ◽  
Author(s):  
Abhinuv Uppal ◽  
Vanessa Ferdinand ◽  
Sarah Marzen

Cognitive systems exhibit astounding prediction capabilities that allow them to reap rewards from regularities in their environment. How do organisms predict environmental input and how well do they do it? As a prerequisite to answering that question, we first address the limits on prediction strategy inference, given a series of inputs and predictions from an observer. We study the special case of Bayesian observers, allowing for a probability that the observer randomly ignores data when building her model. We demonstrate that an observer’s prediction model can be correctly inferred for binary stimuli generated from a finite-order Markov model. However, we cannot necessarily infer the model’s parameter values unless we have access to several “clones” of the observer. As stimuli become increasingly complicated, correct inference requires exponentially more data points, computational power, and computational time. These factors place a practical limit on how well we can infer an observer’s prediction strategy in an experimental or observational setting.
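The setting described above can be sketched in a few lines. This is a toy illustration under assumed details (a first-order Markov observer with Laplace-smoothed transition counts and a fixed ignore probability), not the authors' actual inference procedure:

```python
import random

def bayesian_observer(stimuli, ignore_prob=0.2, seed=0):
    """Toy Bayesian observer: builds a first-order Markov model of a binary
    stimulus stream, but skips each data point with probability `ignore_prob`.
    Returns the observer's predicted P(next symbol = 1) after each input."""
    rng = random.Random(seed)
    # Laplace-smoothed transition counts: counts[previous_symbol][next_symbol]
    counts = [[1, 1], [1, 1]]
    predictions = []
    prev = stimuli[0]
    for s in stimuli[1:]:
        if rng.random() > ignore_prob:  # observer actually uses this datum
            counts[prev][s] += 1
        prev = s
        predictions.append(counts[prev][1] / sum(counts[prev]))
    return predictions

# On a strictly alternating stream, the observer learns that a 1 is
# almost never followed by another 1.
preds = bayesian_observer([0, 1] * 100)
```

Inferring the observer's strategy would then mean recovering `ignore_prob` and the counts from `preds` alone, which is the inverse problem the abstract addresses.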

2021 ◽  
Vol 11 (9) ◽  
pp. 3867
Author(s):  
Zhewei Liu ◽  
Zijia Zhang ◽  
Yaoming Cai ◽  
Yilin Miao ◽  
Zhikun Chen

Extreme Learning Machine (ELM) is characterized by simplicity, strong generalization, and computational efficiency. However, previous ELMs fail to consider the inherent high-order relationships among data points, leaving them ineffective on structured data and poorly robust to noisy data. This paper presents a novel semi-supervised ELM, termed Hypergraph Convolutional ELM (HGCELM), which uses hypergraph convolution to extend ELM into the non-Euclidean domain. The method inherits all the advantages of ELM and consists of a random hypergraph convolutional layer followed by a hypergraph convolutional regression layer, enabling it to model complex intraclass variations. We show that the traditional ELM is a special case of HGCELM in the regular Euclidean domain. Extensive experiments show that HGCELM remarkably outperforms eight competitive methods on 26 classification benchmarks.


2016 ◽  
Vol 35 (1) ◽  
pp. 15 ◽  
Author(s):  
Dhanya S Pankaj ◽  
Rama Rao Nidamanuri

The 3D modeling pipeline involves registration of partially overlapping 3D scans of an object. Automatic pairwise coarse alignment of partially overlapping 3D images is generally performed using 3D feature matching. Because the matched features contain outliers, estimating the transformation from them requires robust estimation. RANSAC is the method of choice for estimating a model from data samples containing outliers. The number of RANSAC iterations depends on the number of data points and on the fraction of inliers to the model, so convergence can be very slow when outliers are numerous. This paper presents a novel algorithm for the 3D registration task that provides more accurate results in less computational time than RANSAC. The proposed algorithm is also compared against existing modifications of RANSAC for 3D pairwise registration. The results indicate that it tends to obtain the best 3D transformation matrix in less time than the other algorithms.


Atmosphere ◽  
2019 ◽  
Vol 10 (11) ◽  
pp. 717
Author(s):  
Ricardo Navares ◽  
José Luis Aznarte

Airborne pollen monitoring datasets sometimes exhibit gaps, occasionally very long ones, caused by maintenance or a lack of expert personnel. Despite the numerous imputation techniques available, not all of them effectively exploit the spatial relations in the data, since they assume values are missing at random. Several geostatistical techniques overcome this limitation, such as inverse distance weighting and Gaussian processes (kriging). In this paper, a new method based on convolutional neural networks is proposed. It not only shows a competitive advantage in accuracy over the aforementioned techniques, improving the error by 5% on average, but also reduces training times by 90% compared to a Gaussian process. To demonstrate these advantages, 10%, 20%, and 30% of the data points are removed from the time series of a Poaceae pollen observation station in the region of Madrid, and the airborne concentrations from the remaining stations in the network are used to impute the removed data. Although the accuracy improvements are consistent rather than large, the gain in computational time and the flexibility of the proposed convolutional neural network allow field experts to adapt and extend the solution, for instance by including meteorological variables, potentially decreasing the errors reported in this paper.
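Of the geostatistical baselines named above, inverse distance weighting is the simplest to state: a missing value is a weighted average of the neighbouring stations, with weights falling off as a power of distance. A minimal sketch (station layout and values are made up for illustration):

```python
def idw_impute(target_xy, station_xy, station_values, power=2):
    """Inverse distance weighting: impute a missing value at `target_xy`
    from the values observed at neighbouring stations."""
    num = den = 0.0
    for (x, y), v in zip(station_xy, station_values):
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0:
            return v  # target coincides with a station: use its value
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Pollen concentration at an unobserved site, from three nearby stations
value = idw_impute((0.0, 0.0), [(1, 0), (0, 2), (-3, 0)], [10.0, 20.0, 30.0])
```

The convolutional network in the paper plays the same role as this weighted average but learns the spatial weighting from data instead of fixing it by a distance formula.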


Author(s):  
Yunxiao Zhang

We use the Energy Packet Network (EPN) to investigate an optimal energy distribution problem for a computer-communication system powered by intermittent renewable energy sources. The objective is to find an energy distribution that minimizes the proposed cost function, which combines penalty costs from the overall average response time of jobs and from energy loss. In this EPN system, energy can be lost through storage leakage or at idle workstations, which consume energy even when no job needs to be processed. Numerical examples with different sets of parameter values are presented to evaluate the system performance and to examine the analytical solution obtained. A special case is then considered to study the optimal system performance when the total energy harvesting rate is sufficiently large.


Author(s):  
Wenqiang Yuan ◽  
Yusheng Liu

In this work, we present a new multi-objective particle swarm optimization (PSO) algorithm characterized by a geometric analysis of the particles. The proposed method, called geometry analysis PSO (GAPSO), first parameterizes the data points of the optimization model of the mechatronic system to obtain their parameter values; a curve or surface is then fitted to these points, and the tangent and normal at each point are computed. Finally, the particles are guided by these tangent and derivative values to approximate the true Pareto front and achieve a uniform distribution. The proposed method is compared with two multi-objective metaheuristics representative of the state of the art in this area. The experiments indicate that GAPSO obtains remarkable results in terms of both accuracy and distribution.
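For context, the baseline that GAPSO extends is plain PSO, in which each particle's velocity blends inertia, attraction to its personal best, and attraction to the swarm's best. A bare-bones single-objective sketch (not the geometry-guided multi-objective variant itself; the coefficients are conventional defaults, not the paper's):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal 1-D particle swarm optimizer with inertia 0.7 and
    cognitive/social coefficients 1.5 (standard textbook values)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                  # each particle's best position so far
    gbest = min(pos, key=f)         # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda x: (x - 3) ** 2, (-10, 10))
```

GAPSO's contribution is to replace the raw `pbest`/`gbest` guidance with directions derived from a curve or surface fitted to the current non-dominated points.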


Water ◽  
2018 ◽  
Vol 10 (9) ◽  
pp. 1269 ◽  
Author(s):  
Yun Choi ◽  
Mun-Ju Shin ◽  
Kyung Kim

The choice of the computational time step (dt) value and the method for setting dt can have a bearing on the accuracy and performance of a simulation, and this effect has not been comprehensively researched across different simulation conditions. In this study, the effects of the fixed time step (FTS) method and the automatic time step (ATS) method on the simulated runoff of a distributed rainfall–runoff model were compared. The results revealed that the ATS method had less peak flow variability than the FTS method for the virtual catchment. In the FTS method, the difference in time step had more impact on the runoff simulation results than the other factors such as differences in the amount of rainfall, the density of the stream network, or the spatial resolution of the input data. Different optimal parameter values according to the computational time step were found when FTS and ATS were used in a real catchment, and the changes in the optimal parameter values were smaller in ATS than in FTS. The results of our analyses can help to yield reliable runoff simulation results.
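The interaction between time step and calibrated parameters can be seen even in a toy model. The sketch below (a hypothetical linear-reservoir model, not the distributed model used in the study) shows that simply refining the fixed time step changes the simulated peak flow, which is why parameters calibrated at one dt need not be optimal at another:

```python
def linear_reservoir(rain, k=0.5, dt=1.0, substeps=1):
    """Explicit-Euler linear reservoir: storage obeys dS/dt = rain - k*S,
    outflow is k*S. `substeps` refines the fixed computational time step."""
    s, flow = 0.0, []
    h = dt / substeps
    for r in rain:
        for _ in range(substeps):
            s += h * (r - k * s)
        flow.append(k * s)
    return flow

# Same rainfall and parameters, different fixed time steps
coarse = linear_reservoir([10, 0, 0, 0, 0], substeps=1)
fine = linear_reservoir([10, 0, 0, 0, 0], substeps=10)
# The peak flows differ purely because of the time step, not the parameters
```

An adaptive (ATS) scheme would shrink the step only when the state changes quickly, which is why the study finds its results less sensitive to the nominal step size.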


Entropy ◽  
2018 ◽  
Vol 20 (8) ◽  
pp. 579 ◽  
Author(s):  
Samira Ahmadi ◽  
Nariman Sepehri ◽  
Christine Wu ◽  
Tony Szturm

Sample entropy (SampEn) has been used to quantify the regularity or predictability of human gait signals. There are studies on the appropriate use of this measure for inter-stride spatio-temporal gait variables. However, the sensitivity of this measure to preprocessing of the signal and to varying values of the template size (m), tolerance size (r), and sampling rate has not been studied when applied to “whole” gait signals, i.e., the entire time series obtained from force or inertial sensors. This study systematically investigates the sensitivity of SampEn of the center of pressure displacement in the mediolateral direction (ML COP-D) to varying parameter values and two preprocessing methods: filtering out the high-frequency components and resampling the signals to have the same average number of data points per stride. The discriminatory ability of SampEn is studied by comparing a treadmill walk-only (WO) condition to a dual-task (DT) condition. The results suggest that SampEn maintains the directional difference between the two walking conditions across parameter values, showing a significant increase from the WO to the DT condition, especially when signals are low-pass filtered. Moreover, when gait speed differs between test conditions, signals should be low-pass filtered and resampled to have the same average number of data points per stride.
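The roles of m and r are easiest to see from the definition: SampEn is the negative log of the conditional probability that two sequences matching for m points (within tolerance r, Chebyshev distance) also match for m + 1 points. A textbook sketch (in practice r is usually set as a fraction of the signal's standard deviation, which this toy omits):

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    within tolerance r, and A counts pairs of length m + 1."""
    n = len(x)

    def matches(mm):
        c = 0
        for i in range(n - mm + 1):
            for j in range(i + 1, n - mm + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

rng = random.Random(1)
periodic = [0.0, 1.0] * 50          # perfectly regular signal
noisy = [rng.random() for _ in range(100)]
# Regularity means low SampEn; noise means high SampEn
```

This also makes the preprocessing sensitivity intuitive: low-pass filtering and resampling change how many near-matches survive at a given r, and hence the value of SampEn.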


2006 ◽  
Vol 53 (3) ◽  
pp. 111-119 ◽  
Author(s):  
W. Gujer

Model complexity in activated sludge modelling has increased over 30 years in parallel with the computational power of computers. Today, the complexity of biokinetics has reached a practical limit. Future advances may lie in the direction of enhanced spatial resolution (CFD, single organisms) or of repetitive computations (MC simulation, parameter identification). Further model development may occur in niches such as population dynamics, micropollutants, etc.


Author(s):  
Marlies Holkje Barendrecht ◽  
Alberto Viglione ◽  
Heidi Kreibich ◽  
Sergiy Vorogushyn ◽  
Bruno Merz ◽  
...  

Abstract. Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool for representing the phenomena and feedbacks in human-flood systems. These models are mostly generic and have not yet been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and test it on an artificial case study. We postulate a model that describes the feedbacks between floods, awareness, and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and estimate the parameters from these data points using Bayesian inference. The results show that, if we are able to collect data for a case study, we would in principle be able to estimate the parameter values of our socio-hydrological flood model.
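The estimate-from-few-points idea can be sketched on a deliberately simplified sub-model. Here the "true" dynamics, the exponential decay form, and all parameter values are assumed for illustration only (the authors' model of floods, awareness, and preparedness is richer); the point is the recipe: simulate, sample sparsely, then compute a Bayesian posterior over the parameter:

```python
import math
import random

def grid_posterior(times, obs, mu_grid, m0=1.0, sigma=0.05):
    """Grid-based Bayesian posterior over a decay rate mu, assuming a toy
    model M(t) = m0 * exp(-mu * t), Gaussian noise, and a flat prior."""
    loglik = [sum(-0.5 * ((y - m0 * math.exp(-mu * t)) / sigma) ** 2
                  for t, y in zip(times, obs))
              for mu in mu_grid]
    mx = max(loglik)
    w = [math.exp(v - mx) for v in loglik]  # unnormalized posterior
    z = sum(w)
    return [v / z for v in w]

# Simulate a hypothetical time series, then keep only five noisy samples
rng = random.Random(0)
times = [0, 2, 4, 6, 8]
true_mu = 0.3
obs = [math.exp(-true_mu * t) + rng.gauss(0, 0.02) for t in times]

grid = [i / 100 for i in range(1, 100)]
post = grid_posterior(times, obs, grid)
mu_map = grid[max(range(len(grid)), key=lambda i: post[i])]
```

Even with only five data points, the posterior concentrates near the true decay rate, which mirrors the paper's finding that sparse case-study data can, in principle, constrain the model parameters.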


2016 ◽  
Vol 31 (18) ◽  
pp. 1650105 ◽  
Author(s):  
I. Brevik ◽  
V. V. Obukhov ◽  
A. V. Timoshkin

A remarkable property of modern cosmology is that it allows for a special kind of symmetry: the early-time acceleration (inflation) and the late-time acceleration can be described within the same theoretical framework. In this paper, we consider various cosmological models corresponding to a generalized form of the equation of state for the fluid in a flat Friedmann–Robertson–Walker (FRW) universe, emphasizing cases where the so-called type IV singular inflation is encountered in the future. This is a soft (non-crushing) kind of singularity. Parameter values for an inhomogeneous equation of state leading to singular inflation are obtained. We present models with two type IV singularities, the first corresponding to the end of the inflationary era and the second to a late-time event. We also study the correspondence between the theoretical slow-roll parameters leading to type IV singular inflation and the recent observational results from the Planck satellite.

