The role of variability and uncertainty in testing hypotheses involving parameters in stochastic demographic models

2006 ◽  
Vol 84 (11) ◽  
pp. 1698-1701
Author(s):  
J. Fieberg ◽  
D.F. Staples

Hierarchical/random effect models provide a statistical framework for estimating variance parameters that describe temporal and spatial variability of vital rates in population dynamic models. In practice, estimates of variance parameters (e.g., process error) from these models are often confused with estimates of uncertainty about model parameter estimates (e.g., standard errors). These two sources of “error” have different implications for predictions from stochastic models. Estimates of process error (or variability) are useful for describing the magnitude of variation in vital rates over time and are a feature of the modeled process itself, whereas estimates of parameter standard errors (or uncertainty) are necessary for interpreting how well we are able to estimate model parameters and whether they differ among groups. The goal of this comment is to illustrate these concepts in the context of a recent paper by A.W. Reed and N.A. Slade (Can. J. Zool. 84: 635–642 (2006)). In particular, we will show that their “hypothesis tests” involving mean parameters are actually comparisons of the estimated distributions of vital rates among groups of individuals.
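The distinction can be sketched numerically. In the toy example below (invented survival rates, not the Reed and Slade data; within-year sampling error is ignored, which a real hierarchical model would also separate out), the standard error of the estimated mean shrinks as years of data accumulate, while the estimated process variability does not:

```python
import math
import random

def summarize_vital_rate(rates):
    """Split a series of yearly vital-rate estimates into a mean, a
    process SD (real year-to-year variability), and a standard error
    of the mean (uncertainty about the mean parameter)."""
    n = len(rates)
    mean = sum(rates) / n
    proc_var = sum((r - mean) ** 2 for r in rates) / (n - 1)
    return mean, math.sqrt(proc_var), math.sqrt(proc_var / n)

random.seed(1)
# Invented yearly survival rates: true mean 0.8, true process SD 0.05
short_series = [random.gauss(0.8, 0.05) for _ in range(10)]
long_series = [random.gauss(0.8, 0.05) for _ in range(1000)]

mean_s, sd_s, se_s = summarize_vital_rate(short_series)
mean_l, sd_l, se_l = summarize_vital_rate(long_series)
# With more years, the SE of the mean shrinks toward zero, while the
# process SD converges to the real variability (about 0.05): it is a
# property of the process, not of our knowledge of it.
```

A prediction interval for next year's survival must use the process SD; a confidence interval for the mean survival uses the standard error.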

1990 ◽  
Vol 47 (12) ◽  
pp. 2315-2327 ◽  
Author(s):  
Terrance J. Quinn II ◽  
Richard B. Deriso ◽  
Philip R. Neal

We review techniques for estimating the abundance of migratory populations and develop a new technique based on catch-age data from geographic regions and our earlier technique, catch-age analysis with auxiliary information (Deriso et al. 1985, 1989). Data requirements are catch-age data over several years, some auxiliary information, and migration rates among regions. The model, containing parameters for year-class abundance, age selectivity, full-recruitment fishing mortality, and catchability, is fitted to data with a nonlinear least squares algorithm. We present a measurement error model and a process error model and favor the process error model because all model parameters can be jointly estimated. In application to data on Pacific halibut, the process error model converges readily and produces estimates with no significant bias. These estimates have relatively high precision compared with those from analyses that did not incorporate migration information. The error structure used in a model has a greater impact on parameter estimates than the migration rates do. A sensitivity study of migration rates shows sensitivity on the order of the rates themselves.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Sophie Smout ◽  
Kimberly Murray ◽  
Geert Aarts ◽  
Martin Biuw ◽  
Sophie Brasseur ◽  
...  

To support sustainable management of apex predator populations, it is important to estimate population size and understand the drivers of population trends in order to anticipate the consequences of human decisions. Robust population models are needed, which must be based on realistic biological principles and validated with the best available data. A team of international experts reviewed age-structured models of North Atlantic pinniped populations, including grey seal (Halichoerus grypus), harp seal (Pagophilus groenlandicus), and hooded seal (Cystophora cristata). Statistical methods used to fit such models to data were compared and contrasted. Differences in biological assumptions and model equations were driven by the data available from separate studies, including observation methodology and pre-processing. Counts of pups during the breeding season were used in all models, with additional counts of adults and juveniles available in some. The regularity and frequency of data collection, including survey counts and vital rate estimates, varied. Important differences between the models concerned the nature and causes of variation in vital rates (age-dependent survival and fecundity). Parameterisation of age at maturity was detailed and time-dependent in some models and simplified in others. Methods for estimating model parameters were reviewed and compared. They included Bayesian and maximum likelihood (ML) approaches, implemented via bespoke coding in C, C++, TMB or JAGS. Comparative model runs suggested that, as expected, ML-based implementations were rapid and computationally efficient, while Bayesian approaches, which used MCMC or sequential importance sampling, required longer run times for inference. For grey seal populations in the Netherlands, where preliminary ML-based TMB results were compared with the outputs of a Bayesian JAGS implementation, some differences in parameter estimates were apparent.
For these seal populations, further investigations are recommended to explore differences that might result from the modelling framework and model-fitting methodology, and their importance for inference and management advice. The group recommended building on the success of this workshop via continued collaboration with ICES and NAMMCO assessment groups, as well as other experts in the marine mammal modelling community. Specifically, for Northeast Atlantic harp and hooded seal populations, the workshop represents the initial step towards a full ICES benchmark process aimed at revising and evaluating new assessment models.


Author(s):  
D. W. Beardsmore ◽  
H. Teng ◽  
Michael Martin

We present the detailed results of a series of Monte Carlo simulations of the Gao and Dodds calibration procedure, carried out to determine the likely size of the errors in the Beremin cleavage model parameter estimates that might be expected for fracture toughness data sets of various sizes. The calibration process was repeated a large number of times using different sample sizes, and the mean values and standard errors of the parameter estimates were determined. Modified boundary layer finite element models were used to represent high and low constraint conditions (as in the fracture tests) as well as the small-scale yielding (SSY) condition. The “experimental” Jc values were obtained numerically by random sampling of a Beremin distribution function with known values of the true parameters. A number of cautionary remarks on the application of the calibration method are made.
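The logic of such a simulation study can be sketched with a heavily simplified stand-in for the Beremin calibration: the code below fixes the Weibull shape, draws "data" directly from a known Weibull distribution (rather than from finite element stress fields), and repeats the calibration to see how the scatter of the estimated scale parameter depends on data-set size. All parameter values are illustrative, not taken from the paper.

```python
import math
import random

def weibull_sample(m, sigma_u, n, rng):
    """Draw n values from a two-parameter Weibull via the inverse CDF."""
    return [sigma_u * (-math.log(1.0 - rng.random())) ** (1.0 / m)
            for _ in range(n)]

def estimate_scale(xs, m):
    """ML estimate of the Weibull scale when the shape m is known."""
    return (sum(x ** m for x in xs) / len(xs)) ** (1.0 / m)

def calibration_error(m, sigma_u, n, trials, rng):
    """Repeat the calibration on fresh samples; return the mean and the
    standard deviation (standard error) of the estimates."""
    ests = [estimate_scale(weibull_sample(m, sigma_u, n, rng), m)
            for _ in range(trials)]
    mean = sum(ests) / trials
    spread = math.sqrt(sum((e - mean) ** 2 for e in ests) / (trials - 1))
    return mean, spread

rng = random.Random(7)
# Invented "true" parameters: shape 4, scale 2000 (units arbitrary)
mean_small, sd_small = calibration_error(4.0, 2000.0, 10, 500, rng)
mean_large, sd_large = calibration_error(4.0, 2000.0, 100, 500, rng)
# Larger data sets shrink the scatter of the calibrated parameter.
```

The real calibration estimates shape and scale jointly from computed stress fields, which makes the errors larger and correlated, but the scaling of estimation error with sample size follows the same pattern.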


2021 ◽  
Author(s):  
Udo Boehm ◽  
Nathan J. Evans ◽  
Quentin Frederik Gronau ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers ◽  
...  

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the non-linearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work we show how these challenges can be addressed through a combination of Bayesian hierarchical modelling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
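A minimal sketch of the model-averaging step, assuming hypothetical per-model estimates and BIC-style log marginal likelihood approximations (none of these numbers come from the study):

```python
import math

def bma_weights(log_marginals):
    """Posterior model probabilities from (approximate) log marginal
    likelihoods, assuming equal prior probabilities for all models."""
    m = max(log_marginals)
    w = [math.exp(l - m) for l in log_marginals]  # shift guards underflow
    s = sum(w)
    return [x / s for x in w]

def bma_estimate(estimates, log_marginals):
    """Model-averaged estimate: each model's parameter estimate weighted
    by its posterior model probability, so inference reflects model
    uncertainty instead of conditioning on a single model."""
    return sum(w * e for w, e in zip(bma_weights(log_marginals), estimates))

# Invented drift-rate estimates from three model variants, with BIC-style
# log marginal likelihood approximations (roughly -BIC/2 each)
log_ml = [-104.1, -102.3, -108.9]
weights = bma_weights(log_ml)
averaged = bma_estimate([0.9, 1.1, 0.7], log_ml)
```

Averaging over models in this way avoids the overconfidence that results from reporting the best model's estimate as if model selection had been certain.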


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ana I. García-Cervigón ◽  
Pedro F. Quintana-Ascencio ◽  
Adrián Escudero ◽  
Merari E. Ferrer-Cervantes ◽  
Ana M. Sánchez ◽  
...  

Abstract. Population persistence is strongly determined by climatic variability. Changes in the patterns of climatic events linked to global warming may alter population dynamics, but their effects may be strongly modulated by biotic interactions. Plant populations interact with each other in such a way that responses to climate of a single population may impact the dynamics of the whole community. In this study, we assess how climate variability affects persistence and coexistence of two dominant plant species in a semiarid shrub community on gypsum soils. We use 9 years of demographic data to parameterize demographic models and to simulate population dynamics under different climatic and ecological scenarios. We observe that populations of both coexisting species may respond to common climatic fluctuations both similarly and in idiosyncratic ways, depending on the yearly combination of climatic factors. Biotic interactions (both within and among species) modulate some of their vital rates, but their effects on population dynamics highly depend on climatic fluctuations. Our results indicate that increased levels of climatic variability may alter interspecific relationships. These alterations might potentially affect species coexistence, disrupting competitive hierarchies and ultimately leading to abrupt changes in community composition.
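The destabilising effect of climatic variability on population growth can be illustrated with the classic geometric-mean argument (a generic sketch, not the authors' gypsum-community models; the growth rates are invented):

```python
import math

def geometric_mean_growth(lambdas):
    """Long-run stochastic growth rate: the geometric mean of yearly
    multiplicative growth rates (the Lewontin-Cohen result)."""
    return math.exp(sum(math.log(l) for l in lambdas) / len(lambdas))

stable = [1.0, 1.0, 1.0, 1.0]    # constant climate: lambda = 1 every year
variable = [1.4, 0.6, 1.4, 0.6]  # same arithmetic mean, higher variance
g_stable = geometric_mean_growth(stable)
g_variable = geometric_mean_growth(variable)
# The variable climate drives the long-run growth rate below replacement
# even though the average yearly growth rate is unchanged.
```

Because variance alone can push a population below replacement, two species whose vital rates respond differently to climatic fluctuation can see their competitive ranking change without any change in mean conditions.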


2021 ◽  
Vol 13 (11) ◽  
pp. 6214
Author(s):  
Bumjoon Bae ◽  
Changju Lee ◽  
Tae-Young Pak ◽  
Sunghoon Lee

Aggregating spatiotemporal data can cause information loss or distort the attributes of individual observations, which can influence modeling results and lead to erroneous inference, a problem known as the ecological fallacy. Choosing the spatial and temporal resolution is therefore a fundamental consideration in a spatiotemporal analysis. The modifiable temporal unit problem (MTUP) occurs when using data that are temporally aggregated. While the spatial dimension has been increasingly studied, its temporal counterpart is rarely considered, particularly in the traffic safety modeling field. The purpose of this research is to identify the MTUP effect in crash-frequency modeling using data with various temporal scales. A sensitivity analysis framework is adopted with four negative binomial regression models and four random effect negative binomial models having yearly, quarterly, monthly, and weekly temporal units. As different temporal units were applied, the model estimation results changed in terms of the mean and significance of the parameter estimates. The increased temporal correlation that arises from using small temporal units can be handled with the random effect models.
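The MTUP can be demonstrated in miniature by re-aggregating a single count series (the weekly counts below are invented, not the study's crash data):

```python
def aggregate(counts, unit):
    """Re-aggregate a weekly count series into coarser temporal units."""
    return [sum(counts[i:i + unit]) for i in range(0, len(counts), unit)]

def mean_and_dispersion(counts):
    """Sample mean and variance-to-mean ratio; a ratio well above 1
    signals overdispersion, motivating negative binomial over Poisson."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return mean, var / mean

# Invented weekly crash counts at one site over 52 weeks
weekly = [2, 0, 1, 3, 0, 2, 5, 1, 0, 2, 1, 4, 0, 1, 2, 3, 1, 0, 2, 1,
          6, 0, 1, 2, 3, 0, 1, 2, 4, 1, 0, 2, 1, 3, 0, 2, 1, 5, 0, 1,
          2, 3, 1, 0, 2, 1, 4, 0, 1, 2, 3, 1]
quarterly = aggregate(weekly, 13)  # 4 observations instead of 52
weekly_mean, weekly_disp = mean_and_dispersion(weekly)
quarterly_mean, quarterly_disp = mean_and_dispersion(quarterly)
# Changing the temporal unit changes both the sample size and the
# dispersion the count model must absorb: the MTUP in miniature.
```

The total crash count is preserved under any aggregation, yet the mean, dispersion, and effective sample size the regression sees all change with the unit, which is why parameter estimates and their significance shift across temporal scales.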


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
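A stripped-down version of such a calibration, using a random-walk Metropolis sampler for a single hypothetical model parameter, shows the basic mechanics: the posterior tightens as calibration data are added. The "true" parameter value and the data here are invented, and the real study calibrates several parameters of a current model rather than a constant.

```python
import math
import random

def log_posterior(theta, data, sigma=0.1):
    """Log posterior for one model parameter: flat prior, Gaussian
    observation errors with known standard deviation sigma."""
    return -sum((y - theta) ** 2 for y in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps, step=0.05, seed=0):
    """Minimal random-walk Metropolis sampler for a single parameter."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_posterior(theta, data)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # accept uphill, some downhill
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

def sd(s):
    m = sum(s) / len(s)
    return (sum((x - m) ** 2 for x in s) / len(s)) ** 0.5

rng = random.Random(3)
# Invented calibration data scattered around a "true" parameter of 0.5
few = [rng.gauss(0.5, 0.1) for _ in range(5)]
many = few + [rng.gauss(0.5, 0.1) for _ in range(45)]

# Posterior spread (after burn-in) as a proxy for parameter stability
spread_few = sd(metropolis(few, 5000)[1000:])
spread_many = sd(metropolis(many, 5000)[1000:])
# More calibration data -> narrower posterior -> more stable parameter
```

Stepwise adding data, as in the paper, amounts to watching how this posterior spread and the posterior mode evolve as the calibration set grows; consistency checks instead compare posteriors fitted to disjoint data windows.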


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
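The predict/update cycle at the heart of such a filter can be sketched for a scalar state (a simple persistence model with invented noise variances, not the authors' MISP formulation):

```python
def kalman_step(x, P, z, F=1.0, Q=0.05, H=1.0, R=0.2):
    """One predict/update cycle of a scalar Kalman filter.

    x, P -- current state estimate and its variance (model uncertainty)
    z    -- new measurement (data uncertainty enters through R)
    F, Q -- state-transition coefficient and process-noise variance
    H, R -- observation coefficient and measurement-noise variance
    """
    # Predict: propagate the state and inflate its variance
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement, weighted by uncertainty
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain (between 0 and 1)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Invented daily flows (m^3/s); F = 1 encodes a simple persistence model
flows = [10.0, 12.0, 15.0, 14.0, 13.0]
x, P = flows[0], 1.0
for z in flows[1:]:
    x, P = kalman_step(x, P, z)
# x is the basis for the 1-day-ahead forecast; P is its variance
```

A poor initial x inflates early innovations and degrades the first forecasts, which is the scalar analogue of the paper's finding that good initial parameter estimates are essential; multi-day forecasts degrade because prediction variance grows by Q at every step without a correcting measurement.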


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial to increase the number of observable satellites, alleviate the effects of systematic errors and improve the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ. Therefore, the commonly used definition of “dilution of precision (DOP)”, based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges, which reflects the geometry strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which a priori information about the model parameters is taken into account. This is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated.
The example shows that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information about the model parameters will also reduce the DOP.
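For the classical LS case, to which the first type of generalised DOP reduces, the computation can be sketched in two dimensions (each row of the design matrix is a unit line-of-sight vector plus a receiver clock-bias column; the geometries below are invented):

```python
import math

def mat_inverse(a):
    """Invert a small square matrix by Gauss-Jordan elimination
    with partial pivoting."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def gdop(unit_vectors):
    """Classical LS-based geometric DOP: sqrt(trace((G^T G)^-1)), where
    each row of G is a unit line-of-sight vector plus a clock column."""
    G = [list(u) + [1.0] for u in unit_vectors]
    n = len(G[0])
    gtg = [[sum(row[i] * row[j] for row in G) for j in range(n)]
           for i in range(n)]
    inv = mat_inverse(gtg)
    return math.sqrt(sum(inv[i][i] for i in range(n)))

# Invented 2-D geometries: well-spread vs. clustered satellites
spread = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
clustered = [(math.cos(t), math.sin(t)) for t in (0.0, 0.1, 0.2, 0.3)]
dop_spread = gdop(spread)        # strong geometry, small DOP
dop_clustered = gdop(clustered)  # weak geometry, large DOP
```

The generalised DOPs replace the LS influence implicit in (G^T G)^(-1) G^T with a robust or Bayesian influence function, and augment G with the inter-system bias columns discussed above, but the geometry-to-precision mapping works the same way.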


2007 ◽  
Vol 215 (1) ◽  
pp. 61-71 ◽  
Author(s):  
Edgar Erdfelder ◽  
Lutz Cüpper ◽  
Tina-Sarah Auer ◽  
Monika Undorf

Abstract. A memory measurement model is presented that accounts for judgments of remembering, knowing, and guessing in old-new recognition tasks by assuming four disjoint latent memory states: recollection, familiarity, uncertainty, and rejection. This four-states model can be applied to both Tulving's (1985) remember-know procedure (RK version) and Gardiner and coworkers' (Gardiner, Java, & Richardson-Klavehn, 1996; Gardiner, Richardson-Klavehn, & Ramponi, 1997) remember-know-guess procedure (RKG version). It is shown that the RK version of the model fits remember-know data approximately as well as the one-dimensional signal detection model does. In contrast, the RKG version of the four-states model outperforms the corresponding detection model even if unequal variances for old and new items are allowed for. We show empirically that the two versions of the four-states model measure the same state probabilities. However, the RKG version, requiring remember-know-guess judgments, provides parameter estimates with smaller standard errors and is therefore recommended for routine use.
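The disjoint-states idea can be illustrated with a deliberately simplified state-to-response mapping (this is not the authors' exact parameterisation, which handles old and new items separately and allows states to map onto responses probabilistically):

```python
def response_probs(r, f, u):
    """Predicted response probabilities for old items under a simplified
    mapping of disjoint states to responses: recollection -> "remember",
    familiarity -> "know", uncertainty -> "guess", and the residual
    rejection state -> "new". Disjoint states partition probability 1."""
    assert 0 <= r and 0 <= f and 0 <= u and r + f + u <= 1
    return {"remember": r, "know": f, "guess": u, "new": 1 - r - f - u}

# Hypothetical state probabilities (not estimates from the paper)
probs = response_probs(0.4, 0.3, 0.2)
```

Because the states are disjoint, each observed response frequency constrains its own state probability directly, which is why the extra "guess" category in the RKG procedure yields estimates with smaller standard errors.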

