Migratory Catch-Age Analysis

1990 ◽  
Vol 47 (12) ◽  
pp. 2315-2327 ◽  
Author(s):  
Terrance J. Quinn II ◽  
Richard B. Deriso ◽  
Philip R. Neal

We review techniques for estimating the abundance of migratory populations and develop a new technique based on catch-age data from geographic regions and our earlier technique, catch-age analysis with auxiliary information (Deriso et al. 1985, 1989). Data requirements are catch-age data over several years, some auxiliary information, and migration rates among regions. The model, containing parameters for year-class abundance, age selectivity, full-recruitment fishing mortality, and catchability, is fitted to data with a nonlinear least squares algorithm. We present a measurement error model and a process error model and favor the process error model because all model parameters can be jointly estimated. When applied to data on Pacific halibut, the process error model converges readily and produces estimates with no significant bias. These estimates have relatively high precision compared to those from analyses that did not incorporate migration information. The error structure used in a model has a more significant impact on parameter estimates than do the migration rates. A sensitivity study of migration rates shows that the resulting changes in estimates are of the order of the changes in the rates themselves.
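
To make the fitting procedure concrete, here is a minimal sketch of a two-region catch-age model fitted by nonlinear least squares. It is an illustration only, not the authors' model: the migration matrix, natural mortality, logistic selectivity, and all numerical values are assumptions.

```python
# Illustrative sketch (not the authors' code): a toy two-region catch-age
# model with known migration, fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

M = 0.2                                   # natural mortality (assumed known)
MIG = np.array([[0.9, 0.1],               # row-stochastic migration matrix
                [0.2, 0.8]])              # (assumed known, as in the paper)

def predict_catch(params, n_years=5, n_ages=4):
    logR = params[:n_years]               # log recruitment by year
    F = np.exp(params[n_years:])          # full-recruitment F by region
    sel = 1.0 / (1.0 + np.exp(-(np.arange(n_ages) - 1.0)))  # logistic selectivity
    N = np.zeros((n_years, n_ages, 2))
    N[:, 0, :] = np.exp(logR)[:, None] / 2.0   # split recruits between regions
    C = np.zeros_like(N)
    for y in range(n_years):
        for r in range(2):
            Z = M + sel * F[r]
            C[y, :, r] = (sel * F[r] / Z) * (1.0 - np.exp(-Z)) * N[y, :, r]
        if y + 1 < n_years:                # survive, age, then migrate
            surv = N[y, :-1, :] * np.exp(-(M + sel[:-1, None] * F[None, :]))
            N[y + 1, 1:, :] = surv @ MIG
    return C

def residuals(params, obs):
    return (np.log(predict_catch(params) + 1e-9) - np.log(obs + 1e-9)).ravel()

rng = np.random.default_rng(0)
true = np.concatenate([np.log(rng.uniform(800, 1200, 5)), np.log([0.3, 0.4])])
obs = predict_catch(true) * rng.lognormal(0.0, 0.05, (5, 4, 2))
x0 = np.concatenate([np.full(5, np.log(500.0)), np.log([0.2, 0.2])])
fit = least_squares(residuals, x0, args=(obs,))
print("estimated F by region:", np.exp(fit.x[5:]).round(3))
```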

Author(s):  
James R. McCusker ◽  
Kourosh Danai

A method of parameter estimation was recently introduced that estimates each parameter of a dynamic model separately [1]. In this method, regions coined parameter signatures are identified in the time-scale domain wherein the prediction error can be attributed to the error of a single model parameter. Based on these single-parameter associations, individual model parameters can then be estimated iteratively. Relative to nonlinear least squares, the proposed Parameter Signature Isolation Method (PARSIM) has two distinct attributes. One attribute is that PARSIM leaves the estimation of a parameter dormant when a parameter signature cannot be extracted for it. The other is independence from the contour of the prediction error. The first attribute could cause erroneous parameter estimates when the parameters are not adapted continually. The second attribute, on the other hand, can provide a safeguard against entrapment in local minima. These attributes motivate integrating PARSIM with a method, such as nonlinear least squares, that is less prone to leaving parameter estimates dormant. The paper demonstrates the merit of the proposed integrated approach in application to a difficult estimation problem.
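
The following is a conceptual sketch of the parameter-signature idea as described above, not the authors' PARSIM implementation: prediction-error sensitivities are mapped into a crude time-scale plane (Gaussian smoothing at several scales stands in for a wavelet transform), pixels dominated by a single parameter's sensitivity are treated as that parameter's signature, and a parameter is left dormant in any iteration where no signature is found. The toy model and all thresholds are assumptions.

```python
# Conceptual sketch of parameter signatures (assumed reading of PARSIM).
import numpy as np

def model(theta, t):
    # hypothetical second-order step response with gain and damping params
    k, zeta = theta
    wn = 2.0
    wd = wn * np.sqrt(max(1.0 - zeta ** 2, 1e-9))
    return k * (1.0 - np.exp(-zeta * wn * t) * np.cos(wd * t))

def multiscale(x, scales):
    # crude time-scale transform: Gaussian smoothing at several scales
    out = []
    for s in scales:
        w = np.exp(-0.5 * (np.arange(-3 * s, 3 * s + 1) / s) ** 2)
        out.append(np.convolve(x, w / w.sum(), mode="same"))
    return np.array(out)                  # shape (n_scales, n_times)

t = np.linspace(0, 10, 400)
theta_true, theta = np.array([1.0, 0.3]), np.array([0.7, 0.5])
scales = [2, 4, 8, 16]
for it in range(20):
    err = multiscale(model(theta_true, t) - model(theta, t), scales)
    # numerical output sensitivities mapped into the time-scale plane
    sens = np.array([multiscale((model(theta + 1e-4 * np.eye(2)[i], t)
                                 - model(theta, t)) / 1e-4, scales)
                     for i in range(2)])
    dom = np.abs(sens) / (np.abs(sens).sum(0) + 1e-12)
    for i in range(2):
        sig = (dom[i] > 0.9) & (np.abs(sens[i]) > 1e-3)  # parameter i dominates
        if sig.any():                     # else leave this parameter dormant
            theta[i] += 0.5 * np.mean(err[sig] / sens[i][sig])
print("estimated:", theta.round(3), "true:", theta_true)
```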


2017 ◽  
Author(s):  
Mario R. Hernández-López ◽  
Félix Francés

Abstract. Over the years, Standard Least Squares (SLS) has been the most commonly adopted criterion for the calibration of hydrological models, despite the fact that its assumptions are generally not fulfilled: errors are very often autocorrelated, heteroscedastic, biased, and/or non-Gaussian. In line with recent papers that propose more appropriate error models for hydrological modeling, this paper addresses the challenging problem of jointly estimating hydrological and error model parameters (joint inference) in a Bayesian framework, trying to solve some of the problems found in previous related research. We perform Bayesian joint inference with different inference models, namely the well-known SLS and WLS schemes and the new GL++ and GL++Bias error models. These inferences were carried out on two lumped hydrological models forced with daily hydrometeorological data from a basin of the MOPEX project. The main finding of this paper is that a joint inference, to be statistically correct, must take into account the joint probability distribution of the state variable to be predicted and its deviation from the observations (the errors). Consequently, the relationship between the marginal and conditional distributions of this joint distribution must be respected in the inference process. This relationship is defined by two general statistical expressions called the Total Laws (TLs): the Law of Total Expectation and the Law of Total Variance. Only simple error models, such as SLS, do not explicitly require the TLs' implementation. An important consequence of enforcing the TLs is a reduction of the degrees of freedom in the inference problem, namely a reduction of the dimension of the parameter space. This research demonstrates that non-fulfillment of the TLs produces incorrect error and hydrological parameter estimates and unreliable predictive distributions. The target of a (joint) inference must be to fulfill the error model hypotheses rather than to achieve the best fit to the observations. Consequently, for a given hydrological model, the predictive performance, the reliability of the predictive uncertainty, and the robustness of the parameter estimates will be conditioned exclusively by the degree to which the errors fulfill the error model hypotheses.
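
As a concrete, hedged illustration of joint inference (using a simple heteroscedastic, WLS-like error model, not the paper's GL++ schemes), the sketch below samples a toy model parameter jointly with two error model parameters using a Metropolis algorithm; all model forms and numerical values are assumptions.

```python
# Hedged sketch of Bayesian joint inference of model + error parameters.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
def sim(k):
    return np.exp(-k * x) * 10.0              # toy "hydrological" model
y_obs = sim(0.8) + rng.normal(0, 0.1 + 0.05 * sim(0.8))

def log_post(theta):
    k, a, b = theta                           # model param + error params
    if k <= 0 or a <= 0 or b < 0:
        return -np.inf                        # flat priors on valid region
    y = sim(k)
    s = a + b * y                             # heteroscedastic error SD
    return np.sum(-0.5 * ((y_obs - y) / s) ** 2 - np.log(s))

theta = np.array([0.5, 0.2, 0.0])
lp = log_post(theta)
chain = []
for i in range(20000):                        # random-walk Metropolis
    prop = theta + rng.normal(0, [0.02, 0.01, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[10000:])               # discard burn-in
print("posterior means (k, a, b):", chain.mean(0).round(3))
```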


2006 ◽  
Vol 84 (11) ◽  
pp. 1698-1701
Author(s):  
J. Fieberg ◽  
D.F. Staples

Hierarchical (random effects) models provide a statistical framework for estimating variance parameters that describe temporal and spatial variability of vital rates in population dynamic models. In practice, estimates of variance parameters (e.g., process error) from these models are often confused with estimates of uncertainty about model parameter estimates (e.g., standard errors). These two sources of "error" have different implications for predictions from stochastic models. Estimates of process error (or variability) are useful for describing the magnitude of variation in vital rates over time and are a feature of the modeled process itself, whereas estimates of parameter standard errors (or uncertainty) are necessary for interpreting how well we are able to estimate model parameters and whether they differ among groups. The goal of this comment is to illustrate these concepts in the context of a recent paper by A.W. Reed and N.A. Slade (Can. J. Zool. 84: 635–642 (2006)). In particular, we show that their "hypothesis tests" involving mean parameters are actually comparisons of the estimated distributions of vital rates among groups of individuals.
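
The distinction can be shown in a few lines of simulation (our illustration, not from the paper): as more years of data accrue, the standard error of the estimated mean vital rate shrinks, while the estimated process variation does not, because it is a property of the process itself.

```python
# Process variance vs. parameter uncertainty (illustrative assumptions:
# annual survival varies among years with true process SD 0.05).
import numpy as np

rng = np.random.default_rng(2)
for n_years in (5, 20, 80):
    s = rng.normal(0.7, 0.05, n_years)   # yearly survival (process variation)
    sd = s.std(ddof=1)                   # estimated process SD
    se = sd / np.sqrt(n_years)           # uncertainty about the mean
    print(f"{n_years:3d} years: process SD ~ {sd:.3f}, SE of mean ~ {se:.3f}")
```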


1983 ◽  
Vol 10 (4) ◽  
pp. 703-712
Author(s):  
David T. Chapman

The suitability of a statistical technique known as nonlinear least squares for use in estimating mixing coefficients was evaluated by fitting models to residence time distribution curves. The washout curves were generated by adding slug inputs of tracers to three different reactors. Each of the reactors, used to treat wastewaters, was a different size and represented a different degree of mixing. Three models, described in the paper, were examined for use in conjunction with the nonlinear least squares technique: the axial dispersion, N-tanks-in-series, and Cholette–Cloutier models. The form of the equation for the axial dispersion model depends on the boundary conditions for the reactor being studied. For reactors that cannot be classified as "open" vessels, the required analytical solutions either do not exist or are not suitable for use with the nonlinear least squares technique. Mixing coefficients for the N-tanks-in-series and Cholette–Cloutier models were obtained from the tracer washout curves for the three reactors. The residual sum of squares based on nonlinear least squares estimates for the model parameters was compared with the sum of squares obtained using more conventional methods for estimating the parameters. The existence of trailing tails on the tracer curves resulted in misleading parameter estimates for the two models when conventional techniques were used. Keywords: mixing, least squares, tracer, dispersion, short-circuiting, deadspace.
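
For concreteness, a hedged sketch of the approach for one of the models follows (not the paper's code): the N-tanks-in-series residence time distribution is fitted to a synthetic washout curve by nonlinear least squares via scipy.optimize.curve_fit, with the gamma function generalizing the model to non-integer tank numbers.

```python
# Fitting the N-tanks-in-series RTD model to a tracer washout curve
# (synthetic data; all values are illustrative assumptions).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def tanks_in_series(t, n, tau):
    # residence time distribution for n equal stirred tanks in series
    th = t / tau
    return (n / tau) * (n * th) ** (n - 1) * np.exp(-n * th) / gamma(n)

t = np.linspace(0.05, 10, 200)
rng = np.random.default_rng(3)
c_obs = tanks_in_series(t, 3.0, 2.5) * (1 + 0.05 * rng.normal(size=t.size))
(n_hat, tau_hat), _ = curve_fit(tanks_in_series, t, c_obs, p0=(2.0, 2.0))
print(f"n = {n_hat:.2f} (true 3.00), tau = {tau_hat:.2f} (true 2.50)")
```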


2009 ◽  
Vol 29 (7) ◽  
pp. 1317-1331 ◽  
Author(s):  
Giampaolo Tomasi ◽  
Alessandra Bertoldo ◽  
Shrinivas Bishu ◽  
Aaron Unterman ◽  
Carolyn Beebe Smith ◽  
...  

We adapted and validated a basis function method (BFM) to estimate, at the voxel level, parameters of the kinetic model of the L-[1-11C]leucine positron emission tomography (PET) method and regional rates of cerebral protein synthesis (rCPS). In simulation at noise levels typical of voxel data, BFM yielded low-bias estimates of rCPS; in measured data, BFM and nonlinear least squares parameter estimates were in good agreement. We also examined whether there are advantages to using voxel-level estimates averaged over regions of interest (ROIs) in place of estimates obtained by directly fitting ROI time-activity curves (TACs). In both simulated and measured data, fits of ROI TACs were poor, likely because of tissue heterogeneity not taken into account in the kinetic model. In simulation, rCPS determined from fitting ROI TACs was substantially overestimated, and BFM-estimated rCPS averaged over all voxels in an ROI was slightly underestimated. In measured data, rCPS determined by regional averaging of voxel estimates was lower than rCPS determined from ROI TACs, consistent with simulation. In both simulated and measured data, intersubject variability of BFM-estimated rCPS averaged over all voxels in an ROI was low. We conclude that voxelwise estimation is preferable to fitting ROI TACs with a homogeneous tissue model.
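
A minimal sketch of a basis function method is given below for orientation. It uses a one-tissue compartment model rather than the full leucine model, and the input function, grid, and rate constants are assumptions: bases of the form Cp convolved with exp(-k2*t) are precomputed over a k2 grid, so each voxel fit reduces to a fast linear least squares problem per grid point.

```python
# Basis function method sketch for a one-tissue compartment model
# (illustrative assumptions, not the paper's leucine model).
import numpy as np

dt = 0.1
t = np.arange(0, 60, dt)
Cp = t * np.exp(-t / 4.0)                     # assumed plasma input function
k2_grid = np.linspace(0.01, 0.5, 50)
bases = np.array([np.convolve(Cp, np.exp(-k2 * t))[: t.size] * dt
                  for k2 in k2_grid])         # one precomputed basis per k2

def fit_voxel(tac):
    best = (np.inf, None, None)
    for k2, b in zip(k2_grid, bases):
        K1 = np.dot(b, tac) / np.dot(b, b)    # linear LSQ for the scale K1
        rss = np.sum((tac - K1 * b) ** 2)
        if rss < best[0]:
            best = (rss, K1, k2)
    return best[1], best[2]

rng = np.random.default_rng(4)
true_tac = 0.1 * np.convolve(Cp, np.exp(-0.1 * t))[: t.size] * dt
K1_hat, k2_hat = fit_voxel(true_tac + rng.normal(0, 0.01, t.size))
print(f"K1 = {K1_hat:.3f} (true 0.100), k2 = {k2_hat:.3f} (true 0.100)")
```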


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 89-99 ◽  
Author(s):  
Leslie Rutkowski ◽  
Yan Zhou

Abstract. Given a consistent interest in comparing achievement across sub-populations in international assessments such as TIMSS, PIRLS, and PISA, it is critical that sub-population achievement be estimated reliably and with sufficient precision. To that end, we systematically examine the limitations of the estimation methods currently used by these programs. Using a simulation study along with empirical results from the 2007 cycle of TIMSS, we show that a combination of missing and misclassified data in the conditioning model induces biases in sub-population achievement estimates, the magnitude and degree of which can be readily explained by data quality. Importantly, estimated biases in sub-population achievement are limited to the conditioning variable with poor-quality data, while other sub-population achievement estimates are unaffected. Findings are generally in line with theory on missing and error-prone covariates. The current research adds to a small body of literature that has noted some of the limitations of sub-population estimation.
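
A small simulation (our illustration, not the TIMSS machinery) shows the basic mechanism: randomly misclassifying a grouping variable attenuates the estimated sub-population gap on that variable, while estimates based on error-free variables are untouched.

```python
# Misclassification of a grouping variable attenuates the estimated gap
# (illustrative assumptions: 40-point true gap, 15% label flips).
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
g = rng.integers(0, 2, n)                 # true group membership
score = 500 + 40 * g + rng.normal(0, 80, n)
flip = rng.uniform(size=n) < 0.15         # 15% misclassified labels
g_obs = np.where(flip, 1 - g, g)
gap_true = score[g == 1].mean() - score[g == 0].mean()
gap_obs = score[g_obs == 1].mean() - score[g_obs == 0].mean()
print(f"true gap ~ {gap_true:.1f}, gap with misclassification ~ {gap_obs:.1f}")
```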


Water ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 463
Author(s):  
Gopinathan R. Abhijith ◽  
Leonid Kadinski ◽  
Avi Ostfeld

The formation of bacterial regrowth and disinfection by-products is ubiquitous in chlorinated water distribution systems (WDSs) operated with organic loads. A generic, easy-to-use mechanistic model describing the fundamental processes governing the interrelationship between chlorine, total organic carbon (TOC), and bacteria was developed in EPANET-MSX to analyze spatiotemporal water quality variations in WDSs. The representation of multispecies reactions was simplified to minimize the number of interdependent model parameters, and physicochemical/biological processes that cannot be experimentally determined were neglected. The model was applied to analyze the effects of source water characteristics and water residence time on bacterial regrowth and trihalomethane (THM) formation in two well-tested systems under chlorinated and non-chlorinated conditions. The results established that a 100% increase in the free chlorine concentration and a 50% reduction in TOC at the source produced a 5.87-log reduction in bacteriological activity at the expense of a 60% increase in THM formation. A sensitivity study showed the impact of operating conditions and network characteristics on the sensitivity of model outputs to the parameters. The maximum specific growth rate constant for bulk-phase bacteria was found to be the parameter to which the predicted bacterial regrowth is most sensitive.
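
To indicate the kind of reaction system involved, here is a conceptual sketch of coupled chlorine decay, TOC consumption, bacterial regrowth, and THM formation in a single well-mixed parcel of water. The kinetics and rate constants are assumptions for illustration, not the authors' EPANET-MSX model.

```python
# Conceptual chlorine/TOC/bacteria/THM kinetics in one water parcel
# (assumed rate laws and constants, for illustration only).
import numpy as np
from scipy.integrate import solve_ivp

kb, kc, kt, mu_max, Kcl = 0.05, 0.02, 0.01, 0.2, 0.1   # assumed constants

def rhs(t, y):
    cl, toc, bact, thm = y
    decay = kb * cl + kc * cl * toc              # bulk + TOC-driven Cl decay
    growth = mu_max * bact * toc / (toc + 1.0) * Kcl / (Kcl + cl)
    return [-decay,                              # free chlorine (mg/L)
            -0.1 * growth - 0.5 * kc * cl * toc, # TOC (mg/L)
            growth - 0.01 * bact,                # bacteria (arbitrary units)
            kt * cl * toc]                       # THM formed (arbitrary units)

sol = solve_ivp(rhs, (0, 72), [0.8, 2.0, 1e-3, 0.0])
cl, toc, bact, thm = sol.y[:, -1]
print(f"after 72 h: Cl={cl:.2f}, TOC={toc:.2f}, "
      f"bacteria={bact:.4f}, THM={thm:.3f}")
```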


Genetics ◽  
1993 ◽  
Vol 133 (3) ◽  
pp. 711-727
Author(s):  
B K Epperson

Abstract. The geographic distribution of genetic variation is an important theoretical and experimental component of population genetics. Previous characterizations of the genetic structure of populations have used measures of spatial variance and spatial correlations. Yet a full understanding of the causes and consequences of spatial structure requires complete characterization of the underlying space-time system. This paper examines important interactions between processes and spatial structure in systems of subpopulations with migration and drift by analyzing correlations of gene frequencies over space and time. We develop, for the first time in population genetics, methods for studying important features of the complete set of space-time correlations of gene frequencies. These methods also provide a new alternative for studying the purely spatial correlations and the variance for models with general spatial dimensionalities and migration patterns. These results are obtained by employing theorems, previously unused in population genetics, for space-time autoregressive (STAR) stochastic spatial time series. We include results on systems with subpopulation interactions that have time-delay lags (temporal orders) greater than one. We use the space-time correlation structure to develop, for real systems, novel estimators of migration rates that are based on space-time data (samples collected over space and time) rather than on purely spatial data. We examine the space-time and spatial correlations for some specific stepping-stone migration models, with one focus on the effects of anisotropic migration rates. Partial space-time correlation coefficients can be used for identifying migration patterns. Using STAR models, the spatial, space-time, and partial space-time correlations together provide a framework with an unprecedented level of detail for characterizing, predicting, and contrasting space-time theoretical distributions of gene frequencies, and for identifying features such as the pattern of migration and estimating migration rates in experimental studies of genetic variation over space and time.
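
As a hedged illustration of the space-time viewpoint (not the paper's derivations), the sketch below simulates a one-dimensional stepping-stone model as a STAR(1) process and computes lag-1 space-time correlations, which carry the migration signal; the deme count, migration rate, restoring pressure, and noise level are assumptions.

```python
# Stepping-stone gene frequencies as a STAR(1) process; space-time
# correlations computed from the simulated series (illustrative values).
import numpy as np

rng = np.random.default_rng(6)
n_demes, n_gen, m, rho = 50, 4000, 0.1, 0.98   # rho: pressure toward 0.5
p = np.full(n_demes, 0.5)
series = np.empty((n_gen, n_demes))
for gen in range(n_gen):
    neigh = 0.5 * (np.roll(p, 1) + np.roll(p, -1))   # stepping-stone neighbors
    p = 0.5 + rho * ((1 - m) * p + m * neigh - 0.5) \
        + rng.normal(0, 0.01, n_demes)               # drift noise
    series[gen] = p
z = series[1000:] - series[1000:].mean()             # drop burn-in, center

def st_corr(dt, dx):
    # correlation between deme j at time t and deme j+dx at time t+dt
    a, b = z[:-dt], np.roll(z, -dx, axis=1)[dt:]
    return (a * b).mean() / z.var()

print("lag-1 same-deme corr:", round(st_corr(1, 0), 3))
print("lag-1 neighbor corr: ", round(st_corr(1, 1), 3))
```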


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, whose mode is the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensatory behavior for temporal violations of specific model assumptions.
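
A minimal sketch of the stability check follows (toy model and values are assumptions, not the paper's setup): a short Metropolis calibration is re-run on growing data windows, and stability is judged by whether the posterior mean and spread of the parameter settle down.

```python
# Stepwise-data stability check with a short Metropolis sampler
# (illustrative toy current model and noise levels).
import numpy as np

rng = np.random.default_rng(7)
t_all = np.linspace(0, 20, 400)
def model(c):
    return 1.2 * np.tanh(c * t_all)                 # toy alongshore current
v_obs = model(0.4) + rng.normal(0, 0.1, t_all.size)

def posterior_sample(n_obs, n_iter=4000):
    def log_post(c):
        if c <= 0:
            return -np.inf
        r = v_obs[:n_obs] - model(c)[:n_obs]
        return -0.5 * np.sum(r * r) / 0.1 ** 2
    c = 0.2
    lp = log_post(c)
    draws = []
    for _ in range(n_iter):
        prop = c + rng.normal(0, 0.02)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            c, lp = prop, lp_prop
        draws.append(c)
    return np.array(draws[n_iter // 2:])            # discard burn-in

for n_obs in (50, 100, 200, 400):                   # stepwise add data
    s = posterior_sample(n_obs)
    print(f"n={n_obs:3d}: c = {s.mean():.3f} +/- {s.std():.3f}")
```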


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to produce good 1-day and 2-day-ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
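
For orientation, here is a minimal scalar Kalman filter sketch in the same spirit (illustrative assumptions throughout; this is not the paper's MISP formulation): a linear recession-plus-input flow model is filtered against noisy gauge readings.

```python
# Scalar Kalman filter for a toy melt/rain-driven streamflow model
# (assumed recession and runoff coefficients, synthetic data).
import numpy as np

rng = np.random.default_rng(8)
n = 60
melt_rain = np.clip(rng.normal(2.0, 1.0, n), 0, None)  # assumed input series
a, b = 0.9, 1.5                                         # recession, runoff coefs
true = np.zeros(n)
true[0] = 10.0
for k in range(1, n):
    true[k] = a * true[k - 1] + b * melt_rain[k] + rng.normal(0, 0.5)
y = true + rng.normal(0, 1.0, n)                        # noisy gauge readings

Q, R = 0.5 ** 2, 1.0 ** 2                               # process, measurement var
x, P, xf = 10.0, 4.0, []
for k in range(1, n):
    x, P = a * x + b * melt_rain[k], a * a * P + Q      # predict
    K = P / (P + R)                                     # Kalman gain
    x, P = x + K * (y[k] - x), (1 - K) * P              # update
    xf.append(x)
rmse_raw = np.sqrt(np.mean((y[1:] - true[1:]) ** 2))
rmse_filt = np.sqrt(np.mean((np.array(xf) - true[1:]) ** 2))
print(f"RMSE raw gauge: {rmse_raw:.2f}, filtered: {rmse_filt:.2f}")
```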

