Analysis of Coded Wire Tag Returns from Commercial Catches

1992 ◽  
Vol 49 (9) ◽  
pp. 1816-1825 ◽  
Author(s):  
R. M. Cormack ◽  
J. R. Skalski

Three alternative but equivalent approaches to the analysis of coded wire tag (CWT) data using log-linear models are presented. All three use iteratively weighted least squares to estimate treatment effects in hatchery releases under the assumption that the variance of a count is proportional to its expected value. The commonly made assumption of normal distributions with constant variance for recovery rates is inefficient. Analysis of tag recovery at the most disaggregated level (i.e., the level at which the sample fraction f is measured) is found necessary for valid inferences. Failure to include zero counts in analyses of recovery data is also shown to induce or mask interactions among CWT recoveries. Recoveries of CWT from coho salmon (Oncorhynchus kisutch) are used to illustrate the method of analysis. Coordinated CWT releases to facilitate mixing of stocks are recommended in the design of hatchery studies.
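
As a concrete illustration of the fitting approach this abstract describes, a log-linear model with Var(count) proportional to its expected value can be estimated by iteratively weighted least squares. The sketch below is a minimal, generic implementation, not the authors' code; the two-group design and the recovery counts are invented for illustration:

```python
import numpy as np

def iwls_loglinear(X, y, n_iter=50, tol=1e-10):
    """Fit a log-linear model E(y) = exp(X @ beta) by iteratively
    weighted least squares, under the quasi-Poisson assumption that
    Var(y) is proportional to E(y)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())            # start at the overall mean
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu           # working response for a log link
        XtW = X.T * mu                    # IWLS weights equal the means
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Hypothetical recoveries for a control and a treatment release group:
X = np.column_stack([np.ones(6), np.array([0., 0., 0., 1., 1., 1.])])
y = np.array([10., 12., 8., 20., 25., 22.])
beta = iwls_loglinear(X, y)
```

With this parameterisation, exp(beta[1]) estimates the ratio of treatment to control recovery rates, which is the kind of treatment effect the log-linear analysis targets.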

1997 ◽  
Vol 54 (7) ◽  
pp. 1433-1449 ◽  
Author(s):  
M Labelle ◽  
C J Walters ◽  
B Riddell

Juvenile tagging and escapement enumeration were conducted during 1985-1988 in nine streams within a 150-km section on the east coast of Vancouver Island. Fourteen coho salmon (Oncorhynchus kisutch) stocks of wild, hatchery, and mixed origin were monitored for ocean survival and exploitation patterns. Estimates of smolt-to-adult survival ranged from 0.5 to 23.1%. Survival rates were highly variable across years and stocks. No stock or stock type had consistently higher survival, but one hatchery stock exhibited consistently lower survival. Average exploitation rates were about 80% each year, and were as high as 96% for some stocks. Exploitation rates were not consistently higher or lower for any stock or stock type, but hatchery-reared coho tended to be subject to higher exploitation. Log-linear models were used to assess the effects of various factors on survival and exploitation. Certain hatchery rearing practices had a large influence on survival. Genetic factors, run timing, and stream location had large influences on exploitation rates. An assessment of covariation in survival and exploitation rates showed no indication of a high level of similarity among stocks from adjacent streams or among stock types in this region.


2014 ◽  
Vol 71 (1) ◽  
Author(s):  
Bello Abdulkadir Rasheed ◽  
Robiah Adnan ◽  
Seyed Ehsan Saffari ◽  
Kafi Dano Pati

In a linear regression model, the ordinary least squares (OLS) method is considered the best method to estimate the regression parameters if the underlying assumptions are met. However, if the data do not satisfy those assumptions, the results will be misleading. Violation of the constant-variance assumption in least squares regression is caused by the presence of outliers and heteroscedasticity in the data. This assumption of constant variance (homoscedasticity) is very important in linear regression, as it is under homoscedasticity that the least squares estimators enjoy the property of minimum variance. A robust regression method is therefore required to handle outliers in the data. This research uses weighted least squares (WLS) techniques to estimate the regression coefficients when the assumption of constant error variance is violated. WLS estimation is equivalent to carrying out OLS on transformed variables. However, WLS can easily be affected by outliers. To remedy this, we suggest a robust technique for estimating the regression parameters in the presence of heteroscedasticity and outliers. We apply robust M-estimation using iteratively reweighted least squares (IRWLS) with the Huber and Tukey bisquare functions, together with the resistant least trimmed squares regression estimator, to estimate the parameters of a model of state-wide crime in the United States in 1993. The outcomes of the study indicate that the estimators obtained from the M-estimation and least trimmed squares techniques are more effective than those obtained from OLS.
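
The IRWLS scheme for Huber M-estimation that this abstract applies can be sketched as follows. This is a generic textbook version with invented data, not the crime-data analysis itself; the Tukey bisquare and least trimmed squares variants would substitute a different weight function or objective:

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50, tol=1e-8):
    """Huber M-estimation of regression coefficients via iteratively
    reweighted least squares, with MAD-based scale estimation."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        s = max(s, 1e-12)
        u_abs = np.abs(r / s)
        # Huber weights: 1 inside the tuning band, k/|u| outside it
        w = np.where(u_abs <= k, 1.0, k / np.maximum(u_abs, 1e-12))
        XtW = X.T * w
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Synthetic illustration: a clean linear trend plus one gross outlier.
rng = np.random.default_rng(0)
x = np.arange(30.0)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 30)
y[-1] += 100.0                                        # gross outlier
X = np.column_stack([np.ones(30), x])
beta = huber_irls(X, y)
```

The outlier receives a weight of roughly k/|u|, so its pull on the fitted line is almost eliminated, whereas an OLS fit on the same data would be visibly tilted toward it.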


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2173
Author(s):  
Isaac Akoto ◽  
João T. Mexia ◽  
Filipe J. Marques

In this work, we derived new asymptotic results for multinomial models. To obtain these results, we started by studying limit distributions in models with a compact parameter space. This restriction holds because the key parameter, whose components are the probabilities of the possible outcomes, has non-negative components that sum to 1. Based on these results, we obtained confidence ellipsoids and simultaneous confidence intervals for models with normal limit distributions. We then studied the covariance matrices of the limit normal distributions for multinomial models. This served as a transition between the previous general results and inference for multinomial models, in which we considered chi-square tests, confidence regions and non-linear statistics, namely log-linear models, with two numerical applications to those models. In particular, our approach overcame the hierarchical restrictions usually assumed in the analysis of multidimensional contingency tables.
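
Two of the inference tools mentioned, the chi-square test and simultaneous confidence intervals for the outcome probabilities, can be sketched for a single multinomial sample. The interval construction below uses a simple Bonferroni-adjusted normal approximation, which is a standard device and not the paper's exact ellipsoid-based construction:

```python
import numpy as np
from scipy.stats import chi2, norm

def multinomial_chisq(counts, p0):
    """Pearson chi-square test of H0: p = p0 for multinomial counts."""
    counts = np.asarray(counts, float)
    expected = counts.sum() * np.asarray(p0, float)
    stat = ((counts - expected) ** 2 / expected).sum()
    df = len(counts) - 1
    return stat, chi2.sf(stat, df)

def simultaneous_cis(counts, level=0.95):
    """Bonferroni-adjusted normal-approximation simultaneous confidence
    intervals for the category probabilities (asymptotic, so each
    expected count should be reasonably large)."""
    counts = np.asarray(counts, float)
    n, k = counts.sum(), len(counts)
    phat = counts / n
    z = norm.ppf(1 - (1 - level) / (2 * k))   # split alpha over k intervals
    half = z * np.sqrt(phat * (1 - phat) / n)
    return np.clip(phat - half, 0, 1), np.clip(phat + half, 0, 1)

# Example: 100 observations over three categories, testing equiprobability.
stat, pval = multinomial_chisq([30, 30, 40], [1/3, 1/3, 1/3])
lo, hi = simultaneous_cis([30, 30, 40])
```

The Bonferroni split keeps the joint coverage of all k intervals at or above the nominal level, at the cost of some conservatism relative to an exact simultaneous region.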


2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration, and ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
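
The re-calibration experiment can be illustrated in miniature: refit a log-linear EMC model on random subsets of growing size and observe how the coefficients move. Everything below is synthetic, and the explanatory variable names (rainfall depth, antecedent dry period) are illustrative assumptions, not the paper's actual variables:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data for 64 rain events (matching the study's sample size):
# log(EMC) modelled from two invented explanatory variables.
n = 64
depth = rng.uniform(1, 40, n)             # rainfall depth, mm (assumed)
dry = rng.uniform(0.5, 15, n)             # antecedent dry period, d (assumed)
log_emc = 4.0 - 0.02 * depth + 0.05 * dry + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), depth, dry])

def ols(Xs, ys):
    """Ordinary least squares fit (baseline calibration method)."""
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

# Sensitivity to calibration-set size: refit on growing random subsets,
# mimicking the option that samples events from the whole data set.
for m in (16, 32, 64):
    idx = rng.choice(n, m, replace=False)
    print(m, ols(X[idx], log_emc[idx]).round(3))
```

Coefficients fitted on the small subsets scatter noticeably around the full-sample fit, which is the size-sensitivity effect the study quantifies (and which a robust IRLS calibration dampens).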


1998 ◽  
Vol 84 (6) ◽  
pp. 2163-2170 ◽  
Author(s):  
Mitchell J. Rosen ◽  
John D. Sorkin ◽  
Andrew P. Goldberg ◽  
James M. Hagberg ◽  
Leslie I. Katzel

Studies assessing changes in maximal aerobic capacity (V̇O2 max) associated with aging have traditionally employed the ratio of V̇O2 max to body weight. Log-linear, ordinary least-squares, and weighted least-squares models may avoid some of the inherent weaknesses associated with the use of ratios. In this study we used four different methods to examine the age-associated decline in V̇O2 max in a cross-sectional sample of 276 healthy men, aged 45–80 yr. Sixty-one of the men were aerobically trained athletes, and the remainder were sedentary. The model that accounted for the largest proportion of variance was a weighted least-squares model that included age, fat-free mass, and an indicator variable denoting exercise training status. The model accounted for 66% of the variance in V̇O2 max and satisfied all the important general linear model assumptions. The other approaches failed to satisfy one or more of these assumptions. The results indicated that V̇O2 max declines at the same rate in athletic and sedentary men (0.24 l/min or 9% per decade) and that 35% of this decline (0.08 l·min⁻¹·decade⁻¹) is due to the age-associated loss of fat-free mass.
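
A weighted least-squares fit of the kind the study prefers can be sketched as below. The data are simulated, the weights are taken as known inverse variances, and the coefficient values (including a 0.024 l·min⁻¹·yr⁻¹ age decline, i.e. 0.24 l/min per decade) are chosen to echo, not reproduce, the reported results:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimise sum_i w_i * (y_i - x_i @ beta)**2."""
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

# Simulated cross-sectional sample (all values are illustrative):
rng = np.random.default_rng(1)
n = 276
age = rng.uniform(45, 80, n)               # yr
ffm = rng.normal(60, 7, n)                 # fat-free mass, kg
athlete = (rng.uniform(size=n) < 0.22).astype(float)  # training indicator
sigma = 0.15 * (1 + 0.02 * (age - 45))     # assumed heteroscedastic scale
vo2 = 1.0 - 0.024 * age + 0.04 * ffm + 0.8 * athlete + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), age, ffm, athlete])
beta = wls(X, vo2, 1.0 / sigma**2)         # weights = inverse variances
```

Weighting by inverse variance restores the homoscedasticity that the general linear model assumptions require, which is why the weighted model could satisfy the diagnostics that the ratio and OLS approaches failed.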

