Leave-One-Trial-Out (LOTO): A general approach to link single-trial parameters of cognitive models to neural data

2018 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

It has become a key goal of model-based neuroscience to estimate trial-by-trial fluctuations of cognitive model parameters for linking these fluctuations to brain signals. However, previously developed methods were limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between the parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method allows distinguishing true variability in a parameter from noise and from variability in other parameters. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

A key goal of model-based cognitive neuroscience is to estimate the trial-by-trial fluctuations of cognitive model parameters in order to link these fluctuations to brain signals. However, previously developed methods are limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method makes it possible to distinguish true variability in a parameter from noise and from other sources of variability. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.
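The LOTO idea described above can be sketched in a few lines. This is a minimal illustration with a toy model (estimating a Gaussian mean, where the fit is just the sample mean); the data, the noise levels, and all variable names are invented for the example and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
true_trial_values = rng.normal(1.0, 0.5, size=n)        # trial-wise latent parameter
data = true_trial_values + rng.normal(0, 0.2, size=n)   # noisy single-trial observations

# Fit the model to the complete dataset (for a Gaussian mean, the MLE is the sample mean).
theta_full = data.mean()

# LOTO: refit with each trial left out; the difference between the full-data
# estimate and the leave-one-out estimate tracks that trial's parameter value.
loto = np.array([theta_full - np.delete(data, i).mean() for i in range(n)])

# The LOTO scores should correlate with the true trial-wise values.
r = np.corrcoef(loto, true_trial_values)[0, 1]
```

In this toy case the LOTO score works out to exactly (x_i - mean of the other trials)/(n - 1), so it recovers the trial-wise fluctuations up to scaling; with a real cognitive model, each leave-one-out refit would be a full model fit rather than a sample mean.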


2018 ◽  
Vol 141 (1) ◽  
Author(s):  
Alyssa T. Liem ◽  
J. Gregory McDaniel ◽  
Andrew S. Wixom

A method is presented to improve the estimates of material properties, dimensions, and other model parameters for linear vibrating systems. The method improves the estimates of a single model parameter of interest by finding parameter values that bring model predictions into agreement with experimental measurements. A truncated Neumann series is used to approximate the inverse of the dynamic stiffness matrix. This approximation avoids the need to directly solve the equations of motion for each parameter variation. The Neumann series is shown to be equivalent to a Taylor series expansion about nominal parameter values. A recursive scheme is presented for computing the associated derivatives, which are interpreted as sensitivities of displacements to parameter variations. The convergence of the Neumann series is studied in the context of vibrating systems, and it is found that the spectral radius is strongly dependent on system resonances. A homogeneous viscoelastic bar in longitudinal vibration is chosen as a test specimen, and the complex-valued Young's modulus is chosen as an uncertain parameter. The method is demonstrated on simulated experimental measurements computed from the model. These demonstrations show that parameter values estimated by the method agree with those used to simulate the experiment when enough terms are included in the Neumann series. Similar results are obtained for the case of an elastic plate with clamped boundary conditions. The method is also demonstrated on experimental data, where it produces improved parameter estimates that bring the model predictions into agreement with the measured response to within 1% at a point on the bar across a frequency range that includes three resonance frequencies.
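The truncated-Neumann-series idea can be illustrated on a small generic matrix: for a nominal stiffness K0 perturbed by dK, (K0 + dK)^-1 equals the series sum over k of (-K0^-1 dK)^k K0^-1, which converges when the spectral radius of K0^-1 dK is below one. The matrices below are arbitrary stand-ins, not the paper's dynamic stiffness model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
K0 = np.diag(rng.uniform(2.0, 4.0, n))    # nominal (here diagonal) stiffness matrix
dK = 0.1 * rng.standard_normal((n, n))    # small perturbation from a parameter change

# Truncated Neumann series: (K0 + dK)^-1 ≈ sum_{k=0}^{K} (-K0^-1 dK)^k K0^-1,
# avoiding a fresh direct solve for each parameter variation.
K0_inv = np.linalg.inv(K0)
T = -K0_inv @ dK
approx = np.zeros((n, n))
term = np.eye(n)
for _ in range(8):                        # keep 8 terms of the series
    approx += term @ K0_inv
    term = term @ T

exact = np.linalg.inv(K0 + dK)
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Near a resonance the spectral radius of T grows, so more terms are needed, matching the convergence behavior the abstract describes.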


Author(s):  
Ranik Raaen Wahlstrøm ◽  
Florentina Paraschiv ◽  
Michael Schürle

We shed light on computational challenges when fitting the Nelson-Siegel, Bliss and Svensson parsimonious yield curve models to observed US Treasury securities with maturities up to 30 years. As model parameters have a specific financial meaning, the stability of their estimated values over time becomes relevant when their dynamic behavior is interpreted in risk-return models. Our study is the first in the literature that compares the stability of estimated model parameters among different parsimonious models and for different approaches for predefining initial parameter values. We find that the Nelson-Siegel parameter estimates are more stable and conserve their intrinsic economic interpretation. The results additionally reveal patterns of confounding effects in the Svensson model. To obtain the most stable and intuitive parameter estimates over time, we recommend the use of the Nelson-Siegel model by taking initial parameter values derived from the observed yields. The implications of excluding Treasury bills, constraining parameters and reducing clusters across time to maturity are also investigated.
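For reference, the Nelson-Siegel curve gives the yield at maturity tau as beta0 + beta1*(1-exp(-tau/lam))/(tau/lam) + beta2*[(1-exp(-tau/lam))/(tau/lam) - exp(-tau/lam)]. A useful property when fitting it: for a fixed decay parameter lambda the betas enter linearly and can be found by least squares. The sketch below uses synthetic yields and illustrative parameter values, not the paper's Treasury data:

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (years); lam is the decay parameter."""
    x = tau / lam
    loading1 = (1 - np.exp(-x)) / x
    return beta0 + beta1 * loading1 + beta2 * (loading1 - np.exp(-x))

maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
observed = nelson_siegel(maturities, 0.04, -0.02, 0.01, 2.0)  # synthetic "observed" yields

# With lambda fixed, the factor loadings form a linear design matrix,
# so beta0, beta1, beta2 drop out of an ordinary least-squares fit;
# lambda itself would be chosen by a 1-D search over a grid.
lam = 2.0
x = maturities / lam
L1 = (1 - np.exp(-x)) / x
X = np.column_stack([np.ones_like(x), L1, L1 - np.exp(-x)])
betas, *_ = np.linalg.lstsq(X, observed, rcond=None)
```

The intuition behind initializing from observed yields follows from the loadings: beta0 is the long-maturity level and beta0 + beta1 is the short end, so both have natural starting values read off the curve.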


Author(s):  
Rafegh Aghamohammadi ◽  
Jorge Laval

This paper extends the Stochastic Method of Cuts (SMoC) to approximate the Macroscopic Fundamental Diagram (MFD) of urban networks and uses the Maximum Likelihood Estimation (MLE) method to estimate the model parameters based on empirical data from a corridor and 30 cities around the world. For the corridor case, the estimated values are in good agreement with the measured values of the parameters. For the network datasets, the results indicate that the method yields satisfactory parameter estimates and graphical fits for roughly 50% of the studied networks, where estimations fall within the expected range of the parameter values. The satisfactory estimates are mostly for the datasets which (i) cover a relatively wider range of densities and (ii) have average flow values at different densities that are approximately normally distributed, similar to the probability density function of the SMoC. The estimated parameter values are compared to the real or expected values, and any discrepancies and their potential causes are discussed in depth to identify the challenges in the MFD estimation both analytically and empirically. In particular, we find that the most important issues needing further investigation are: (i) the distribution of loop detectors within the links, (ii) the distribution of loop detectors across the network, and (iii) the treatment of unsignalized intersections and their impact on the block length.


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Jim J. Xiao

The objectives were to review available PK models for saturable FcRn-mediated IgG disposition, and to explore an alternative semimechanistic model. Most available empirical and mechanistic PK models assumed equal IgG concentrations in plasma and endosome in addition to other model-specific assumptions. These might have led to inappropriate parameter estimates and model interpretations. Some physiologically based PK (PBPK) models included FcRn-mediated IgG recycling. The nature of PBPK models requires borrowing parameter values from literature, and subtle differences in the assumptions may render dramatic changes in parameter estimates related to the IgG recycling kinetics. These models might have been unnecessarily complicated to address FcRn saturation and nonlinear IgG PK especially in the IVIG setting. A simple semimechanistic PK model (cutoff model) was developed that assumed a constant endogenous IgG production rate and a saturable FcRn-binding capacity. The FcRn-binding capacity was defined as MAX, and IgG concentrations exceeding MAX in endosome resulted in lysosomal degradation. The model parameters were estimated using simulated data from previously published models. The cutoff model adequately described the rat and mouse IgG PK data simulated from published models and allowed reasonable estimation of endogenous IgG turnover rates.


2017 ◽  
Author(s):  
David P. McGovern ◽  
Aoife Hayes ◽  
Simon P. Kelly ◽  
Redmond O’Connell

Ageing impacts on decision making behaviour across a wide range of cognitive tasks and scenarios. Computational modeling has proven highly valuable in providing mechanistic interpretations of these age-related differences; however, the extent to which model parameter differences accurately reflect changes to the underlying neural computations has yet to be tested. Here, we measured neural signatures of decision formation as younger and older participants performed motion discrimination and contrast-change detection tasks, and compared the dynamics of these signals to key parameter estimates from fits of a prominent accumulation-to-bound model (drift diffusion) to behavioural data. Our results indicate marked discrepancies between the age-related effects observed in the model output and the neural data. Most notably, while the model predicted a higher decision boundary in older age for both tasks, the neural data indicated no such differences. To reconcile the model and neural findings, we used our neurophysiological observations as a guide to constrain and adapt the model parameters. In addition to providing better fits to behaviour on both tasks, the resultant neurally-informed models furnished novel predictions regarding other features of the neural data which were empirically validated. These included a slower mean rate of evidence accumulation amongst older adults during motion discrimination and a beneficial reduction in between-trial variability in accumulation rates on the contrast-change detection task, which was linked to more consistent attentional engagement. Our findings serve to highlight how combining human brain signal measurements with computational modelling can yield unique insights into group differences in neural mechanisms for decision making.
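The accumulation-to-bound mechanism the abstract refers to can be made concrete with a bare-bones drift-diffusion simulation: noisy evidence accumulates until it hits a decision boundary, and a lower drift rate produces slower responses. This is a generic textbook DDM sketch with invented parameter values, not a reproduction of the paper's fits:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_ddm(drift, bound, noise=1.0, dt=0.001, max_t=5.0):
    """One trial of a drift-diffusion model: evidence accumulates to +/- bound."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= bound), t            # (upper-bound choice, response time)

# A lower mean drift rate (as reported here for older adults on the motion
# task) lengthens response times even with an unchanged decision boundary.
rts_fast = [simulate_ddm(2.0, 1.0)[1] for _ in range(300)]
rts_slow = [simulate_ddm(0.8, 1.0)[1] for _ in range(300)]
```

Comparing which parameter (drift vs. boundary) reproduces a behavioural difference is exactly the kind of question the neurally informed modelling above adjudicates.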


2021 ◽  
Vol 11 (7) ◽  
pp. 2898
Author(s):  
Humberto C. Godinez ◽  
Esteban Rougier

Simulation of fracture initiation, propagation, and arrest is a problem of interest for many applications in the scientific community. There are a number of numerical methods used for this purpose, and among the most widely accepted is the combined finite-discrete element method (FDEM). To model fracture with FDEM, material behavior is described by specifying a combination of elastic properties, strengths (in the normal and tangential directions), and energy dissipated in failure modes I and II, which are modeled by incorporating a parameterized softening curve defining a post-peak stress-displacement relationship unique to each material. In this work, we implement a data assimilation method to estimate key model parameter values with the objective of improving the calibration processes for FDEM fracture simulations. Specifically, we apply the ensemble Kalman filter assimilation method to the Hybrid Optimization Software Suite (HOSS), an FDEM-based code which was developed for the simulation of fracture and fragmentation behavior. We present a set of assimilation experiments to match the numerical results obtained for a Split Hopkinson Pressure Bar (SHPB) model with experimental observations for granite. We achieved this by calibrating a subset of model parameters. The results show a steady convergence of the assimilated parameter values towards observed time/stress curves from the SHPB observations. In particular, both tensile and shear strengths seem to be converging faster than the other parameters considered.
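The core of an ensemble Kalman filter parameter update is small enough to show directly: an ensemble of candidate parameter values is pushed through the forward model, and each member is nudged toward the observation by a gain computed from ensemble covariances. The forward model below is a toy stand-in for a simulation like HOSS; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(theta):
    """Toy linear forward model standing in for the fracture simulation."""
    return 2.0 * theta + 1.0

true_theta = 3.0
obs = forward(true_theta) + rng.normal(0, 0.1)   # one noisy observation
obs_var = 0.1 ** 2

# Ensemble Kalman filter update of the parameter ensemble.
ens = rng.normal(0.0, 2.0, size=500)             # prior parameter ensemble
pred = forward(ens)                              # predicted observations per member
cov_tp = np.cov(ens, pred)[0, 1]                 # parameter/prediction covariance
var_p = pred.var(ddof=1)                         # prediction variance
K = cov_tp / (var_p + obs_var)                   # Kalman gain from ensemble statistics
perturbed_obs = obs + rng.normal(0, 0.1, size=ens.size)
ens_post = ens + K * (perturbed_obs - pred)      # nudge each member toward the data

est = ens_post.mean()
```

Repeating this update as new observations arrive (here, successive points of a time/stress curve) is what drives the steady parameter convergence the abstract reports.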


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
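The Markov chain Monte Carlo machinery described above, reduced to its essentials, is a random-walk Metropolis sampler whose draws approximate the parameter posterior; the mode (or mean, for a symmetric posterior) is the best-fit value. The sketch below calibrates a single parameter of a deliberately trivial model with a flat prior and known noise level; everything in it is illustrative rather than taken from the alongshore-current case study:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(0.6, 0.3, size=50)     # stand-in for one calibration dataset

def log_post(theta):
    # Flat prior; Gaussian likelihood with known noise sd (illustrative).
    return -0.5 * np.sum((data - theta) ** 2) / 0.3 ** 2

# Random-walk Metropolis: propose a step, accept with probability
# min(1, posterior ratio); the retained chain samples the posterior.
theta, chain = 0.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[1000:])            # discard burn-in
```

Stability and consistency checks then amount to rerunning this calibration as data are added stepwise, or on separate datasets of equal length, and comparing the resulting posteriors.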


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
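A scalar version of the Kalman filter used above fits in a dozen lines: predict the next flow from the linear model, then correct the prediction with the noisy gauge measurement, weighting by the relative uncertainties. The flow model, noise variances, and constants below are invented for illustration and do not reproduce the Sturgeon River setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear flow model: tomorrow's flow = a * today's flow + constant inflow.
a, q_var, r_var = 0.9, 0.05, 0.2                 # process / measurement noise variances
true_flow = [5.0]
for _ in range(60):
    true_flow.append(a * true_flow[-1] + 0.5 + rng.normal(0, q_var ** 0.5))
obs = np.array(true_flow) + rng.normal(0, r_var ** 0.5, len(true_flow))

# Scalar Kalman filter: predict, then correct with each noisy measurement.
x, P = obs[0], 1.0                               # state estimate and its variance
est = [x]
for z in obs[1:]:
    x, P = a * x + 0.5, a * a * P + q_var        # predict step
    K = P / (P + r_var)                          # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P          # update step
    est.append(x)
est = np.array(est)

rmse_filt = np.sqrt(np.mean((est - true_flow) ** 2))
rmse_raw = np.sqrt(np.mean((obs - true_flow) ** 2))
```

Multi-day forecasts come from iterating the predict step without updates, which is why errors grow with horizon and why, as the abstract notes, a purely linear prediction model degrades beyond one or two days.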


2018 ◽  
Vol 51 (4) ◽  
pp. 1059-1068 ◽  
Author(s):  
Pascal Parois ◽  
James Arnold ◽  
Richard Cooper

Crystallographic restraints are widely used during refinement of small-molecule and macromolecular crystal structures. They can be especially useful for introducing additional observations and information into structure refinements against low-quality or low-resolution data (e.g. data obtained at high pressure) or to retain physically meaningful parameter values in disordered or unstable refinements. However, despite the fact that the anisotropic displacement parameters (ADPs) often constitute more than half of the total model parameters determined in a structure analysis, there are relatively few useful restraints for them, examples being Hirshfeld rigid-bond restraints, direct equivalence of parameters and SHELXL RIGU-type restraints. Conversely, geometric parameters can be subject to a multitude of restraints (e.g. absolute or relative distance, angle, planarity, chiral volume, and geometric similarity). This article presents a series of new ADP restraints implemented in CRYSTALS [Parois, Cooper & Thompson (2015), Chem. Cent. J. 9, 30] to give more control over ADPs by restraining, in a variety of ways, the directions and magnitudes of the principal axes of the ellipsoids in locally defined coordinate systems. The use of these new ADP restraints results in more realistic models, as well as a better user experience, through restraints that are more efficient and faster to set up. The use of these restraints is recommended to preserve physically meaningful relationships between displacement parameters in a structural model for rigid bodies, rotationally disordered groups and low-completeness data.

