Measuring growth patterns in the field: effects of sampling regime and methods on standardized estimates

2011, Vol 89 (6), pp. 529-537. Author(s): J.G.A. Martin, F. Pelletier

Although mixed effects models are widely used in ecology and evolution, their application to standardized traits that change within a season or across ontogeny remains limited. Mixed models offer a robust way to standardize individual quantitative traits to a common condition, such as body mass at a given point in time (within a year or across ontogeny) or parturition date under a given climatic condition. Currently, however, most researchers use simple linear models to accomplish this task. We use both empirical and simulated data to illustrate the application of mixed models for standardizing trait values to a common environment for each individual. We show that mixed-model standardizations provide more accurate estimates of mass parameters than linear models under all sampling regimes, and especially for individuals with few repeated measures. Our simulations and analyses of empirical data both confirm that mixed models provide a better way to standardize trait values for individuals with repeated measurements than classical least squares regression. Linear regression should therefore be avoided when adjusting or standardizing individual measurements.
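The contrast the authors draw can be sketched with simulated capture data: a separate least-squares line per individual is noisy when an animal has only a few captures, whereas a random-intercept, random-slope mixed model shrinks each individual's line toward the population mean (via BLUPs) before predicting mass at a common date. A minimal sketch using statsmodels' MixedLM; all variable names and values are illustrative assumptions, not the authors' data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate unbalanced capture data: individuals differ in baseline mass,
# growth rate, and number of captures.
rows = []
for i in range(40):
    a_i = 30 + rng.normal(0, 3)            # individual baseline mass
    b_i = 0.5 + rng.normal(0, 0.1)         # individual growth rate
    for day in rng.choice(60, size=rng.integers(2, 8), replace=False):
        rows.append({"id": i, "day": day,
                     "mass": a_i + b_i * day + rng.normal(0, 1)})
df = pd.DataFrame(rows)

t_ref = 30  # common date to which every individual is standardized

# Naive approach: one least-squares line per individual, evaluated at
# t_ref (unstable when an individual has few captures).
ols_pred = df.groupby("id").apply(
    lambda d: np.polyval(np.polyfit(d["day"], d["mass"], 1), t_ref))

# Mixed-model approach: random intercepts and slopes are shrunk toward
# the population mean before predicting mass at t_ref.
fit = smf.mixedlm("mass ~ day", df, groups=df["id"], re_formula="~day").fit()
fe = fit.fe_params
mm_pred = {i: fe["Intercept"] + fe["day"] * t_ref
              + re.iloc[0] + re.iloc[1] * t_ref   # intercept RE, slope RE
           for i, re in fit.random_effects.items()}
```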

Plants, 2021, Vol 10 (2), pp. 362. Author(s): Ioannis Spyroglou, Jan Skalák, Veronika Balakhonova, Zuzana Benedikty, Alexandros G. Rigas, ...

Plants adapt to continual changes in environmental conditions throughout their life spans. High-throughput phenotyping methods have been developed to noninvasively monitor physiological responses to abiotic/biotic stresses over long time spans covering most of the vegetative and reproductive stages. However, some physiological events are almost immediate, very fast responses to the changing environment and may be overlooked in long-term observations. Additionally, there are certain technical difficulties and restrictions in analyzing phenotyping data, especially when dealing with repeated measurements. In this study, a method for comparing means at different time points using generalized linear mixed models combined with classical time series models is presented. As an example, we use multiple chlorophyll time series measurements from different genotypes. The use of additional time series models as random effects is essential, as the residuals of the initial mixed model may contain autocorrelations that bias the result. The nature of mixed models offers a viable solution, as they can incorporate time series models for residuals as random effects. The results from analyzing the chlorophyll content time series show that the autocorrelation is successfully eliminated from the residuals and incorporated into the final model, permitting valid statistical inference on the comparisons of means.
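The diagnostic that motivates the extra time-series component can be sketched as follows: fit a plain mixed model to densely sampled series and inspect the residual autocorrelation function (ACF) within each plant. A hedged sketch with simulated AR(1) noise; the names, effect sizes, and AR coefficient are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(7)

# Simulate chlorophyll-like series for two genotypes with AR(1) noise,
# mimicking fast within-plant dynamics sampled at many time points.
rows = []
for plant in range(10):
    geno = "wt" if plant < 5 else "mut"
    e = 0.0
    for t in range(50):
        e = 0.7 * e + rng.normal(0, 0.2)          # AR(1) residual process
        rows.append({"plant": plant, "genotype": geno, "time": t,
                     "value": 1.0 + (0.2 if geno == "mut" else 0.0)
                              + 0.01 * t + e})
df = pd.DataFrame(rows)

# A plain mixed model with only a random plant intercept ignores the
# serial correlation; its residual ACF reveals what is left unmodeled.
fit = smf.mixedlm("value ~ genotype + time", df, groups=df["plant"]).fit()
for plant, r in fit.resid.groupby(df["plant"]):
    print(plant, np.round(acf(r, nlags=3), 2))    # lag-1 values near 0.7
```

Large low-lag ACF values in this check are the signal that a time-series component for the residuals, as the authors add, is needed before comparing genotype means.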


2020, Vol 17 (1). Author(s): Thomas Faulkenberry

In this paper, I develop a formula for estimating Bayes factors directly from minimal summary statistics produced in repeated measures analysis of variance designs. The formula, which requires knowing only the F-statistic, the number of subjects, and the number of repeated measurements per subject, is based on the BIC approximation of the Bayes factor, a common default method for Bayesian computation with linear models. In addition to providing computational examples, I report a simulation study in which I demonstrate that the formula compares favorably to a recently developed, more complex method that accounts for correlation between repeated measurements. The minimal BIC method provides a simple way for researchers to estimate Bayes factors from a minimal set of summary statistics, giving users a powerful index for estimating the evidential value of not only their own data, but also the data reported in published studies.
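The generic BIC approximation underlying such formulas is BF01 ≈ exp[(BIC1 − BIC0)/2], where the BIC difference can be written in terms of a residual sum-of-squares ratio that is itself recoverable from the F statistic. A hedged computational sketch; the choice of effective sample size N = nk and the degrees of freedom below are our assumptions for a one-way repeated measures design, so consult the paper for its exact convention:

```python
import math

def bf01_bic(F, n, k):
    """Sketch of a BIC-based Bayes factor for one-way repeated measures
    ANOVA from the F statistic, n subjects, and k conditions.

    Uses BF01 ~= exp((BIC_1 - BIC_0) / 2) with
    Delta BIC = N * ln(RSS_1 / RSS_0) + df1 * ln(N).
    N = n * k and the dfs below are illustrative assumptions.
    """
    N = n * k                      # total observations (assumption)
    df1 = k - 1                    # effect degrees of freedom
    df2 = (n - 1) * (k - 1)        # error degrees of freedom
    # F = ((RSS_0 - RSS_1) / df1) / (RSS_1 / df2), solved for the ratio:
    rss_ratio = 1.0 / (1.0 + F * df1 / df2)
    log_bf01 = 0.5 * (N * math.log(rss_ratio) + df1 * math.log(N))
    return math.exp(log_bf01)

# Example: F(2, 58) = 4.5 from n = 30 subjects, k = 3 conditions
print(bf01_bic(F=4.5, n=30, k=3))  # ~0.14; BF01 < 1 favors the effect model
```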


2009, Vol 39 (1), pp. 61-80. Author(s): José Garrido, Jun Zhou

Generalized linear models (GLMs) are gaining popularity as a statistical analysis method for insurance data. For segmented portfolios, as in car insurance, the question of credibility arises naturally: how many observations are needed in a risk class before the GLM estimators can be considered credible? In this paper we study the limited fluctuations credibility of the GLM estimators, as well as the extended case of generalized linear mixed models (GLMMs). We show how credibility depends on the sample size, the distribution of covariates, and the link function. This provides a mechanism to obtain confidence intervals for the GLM and GLMM estimators.
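For reference, the classical limited-fluctuations standard that the GLM version generalizes requires the estimator to fall within 100r% of its target with probability at least p; under a normal approximation this yields a minimum sample size. A sketch of the classical criterion (the paper's GLM/GLMM extension replaces the simple coefficient of variation with link- and covariate-dependent quantities):

```latex
\Pr\bigl(|\hat{\mu} - \mu| \le r\mu\bigr) \ge p
\quad\Longrightarrow\quad
n \;\ge\; \left(\frac{z_{(1+p)/2}}{r}\right)^{2}
          \left(\frac{\sigma}{\mu}\right)^{2}.
```

For the usual choices r = 0.05 and p = 0.90, the leading factor is (1.645/0.05)² ≈ 1082.41, the familiar full-credibility constant, scaled by the squared coefficient of variation.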


2019, Vol 30 (6), pp. NP1-NP2. Author(s): Işıl Kutluturk Karagoz, Berhan Keskin, Flora Özkalaycı, Ali Karagöz

We have some criticism regarding technical issues. Mixed models have begun to play a pivotal role in statistical analyses and offer many advantages over more conventional repeated-measures analyses of variance. First, they avoid the need for multiple t-tests; second, they accommodate within-patient correlation; third, they can incorporate not only a random intercept but also a random slope, typically on 'linear' time in longitudinal case series, which improves model fit when there are enough data and patients' trajectories vary substantially.
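The random intercept plus random slope structure described here can be sketched in a few lines with statsmodels' MixedLM; the patient counts, visit schedule, and effect sizes below are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulated longitudinal case series: each patient has their own
# baseline (random intercept) and their own linear time trend
# (random slope), so trajectories vary a lot.
rows = []
for pid in range(20):
    base = rng.normal(50, 8)
    slope = rng.normal(-1.0, 0.6)
    for t in range(6):
        rows.append({"patient": pid, "time": t,
                     "score": base + slope * t + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# One model replaces a cascade of per-visit t-tests, accommodates
# within-patient correlation, and lets the 'time' slope vary by patient.
fit = smf.mixedlm("score ~ time", df, groups=df["patient"],
                  re_formula="~time").fit()
print(fit.summary())
```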


Author(s): Rui Fang, Brandie Wagner, J. Kirk Harris, Sophie A Fillon

Identification of the majority of organisms present in human-associated microbial communities is feasible with the advent of high-throughput sequencing technology. However, these data consist of non-negative, highly skewed sequence counts with a large proportion of zeros. Zero-inflated models are useful for analyzing such data. Moreover, the non-zero observations may be over-dispersed relative to the Poisson distribution, biasing parameter estimates and underestimating standard errors. In such circumstances, a zero-inflated negative binomial (ZINB) model accounts for these characteristics better than a zero-inflated Poisson (ZIP) model. In addition, complex study designs are possible, with repeated measurements or multiple samples collected from the same subject, so random effects are introduced to account for the within-subject variation. A zero-inflated negative binomial mixed model contains components modeling the probability of excess zero values and the negative binomial parameters, allowing for repeated measures through independent random effects in these two components. The objective of this study is to examine the application of a zero-inflated negative binomial mixed model to human microbiota sequence data.
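The two-component structure with independent subject-level random effects can be made concrete by simulation; the sketch below generates counts from a ZINB mixed model via the gamma-Poisson representation of the negative binomial, and every parameter name and value is an illustrative assumption rather than the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(11)

def zinb_mixed_counts(n_subjects=30, n_reps=4, beta0=2.0, gamma0=-1.0,
                      alpha=0.5, sd_u=0.7, sd_v=0.7):
    """Simulate counts from a zero-inflated negative binomial mixed
    model with independent random effects u (count component) and
    v (zero-inflation component), one pair per subject."""
    counts = []
    for _ in range(n_subjects):
        u = rng.normal(0, sd_u)                    # subject RE, NB mean
        v = rng.normal(0, sd_v)                    # subject RE, excess zeros
        for _ in range(n_reps):
            pi = 1 / (1 + np.exp(-(gamma0 + v)))   # P(structural zero)
            mu = np.exp(beta0 + u)                 # NB mean otherwise
            if rng.random() < pi:
                counts.append(0)
            else:
                # NB(mu, alpha) as a gamma-Poisson mixture:
                # var = mu + alpha * mu**2
                lam = rng.gamma(shape=1 / alpha, scale=alpha * mu)
                counts.append(rng.poisson(lam))
    return np.array(counts)

y = zinb_mixed_counts()
print("zeros:", np.mean(y == 0), "mean:", y.mean(), "var:", y.var())
```

The printed variance far exceeding the mean, together with the excess zeros, reproduces the two features of microbiota counts that the model is designed to capture.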


2020, Vol 1. Author(s): Geyse Maria dos Santos Muniz Mota, Matheus Kury, Cecília Pereira da Silva Braga Tenório, Flávia Lucisano Botelho do Amaral, Cecília Pedroso Turssi, ...

This study evaluated the surface roughness and color alteration of an aged nanofilled composite exposed to different staining solutions and bleaching agents. Ninety nanofilled composite (Filtek Z350XT, 3M/Oral Care) specimens were submitted to 5,000 thermal cycles and immersed (n = 30) in red wine, coffee, or artificial saliva at 37°C for 48 h. Groups were subdivided according to the bleaching protocol (n = 10): 20% carbamide peroxide, 38% hydrogen peroxide, or no bleaching (control). Mean surface roughness values (Ra, μm) and color parameters (L*, a*, b*) were measured at baseline (T0), after thermal cycling aging and staining (TS), and after bleaching (TB). Color (ΔE00) and whiteness index (ΔWID) changes were determined after aging and staining (TS − T0) and after bleaching (TB − TS). The adopted perceptibility and acceptability thresholds for the nanofilled composite were 0.81 and 1.71 ΔE00 units and 0.61 and 2.90 ΔWID units, respectively. Ra was analyzed using mixed models for repeated measurements, and L* by the Tukey-Kramer test. The a* and b* values were evaluated by generalized linear models for repeated measures. ΔE00 was tested using two-way ANOVA and Tukey tests, and ΔWID by Kruskal-Wallis and Dunn tests (α = 5%). Ra of all groups decreased after aging and staining (TS, p < 0.05) but increased after bleaching (TB) only for groups stained with red wine. Aging and staining decreased the luminosity of the composites, but L* increased after bleaching (p < 0.05). Aging and staining increased a* and b* values, whereas bleaching decreased b* values (p < 0.05). After bleaching, ΔE00 and ΔWID were greater in stained groups at both time intervals, regardless of the bleaching protocol. Stained resin composites exhibited perceptible and unacceptable color (ΔE00 > 1.71) and whiteness (ΔWID > 2.90) changes, regardless of the bleaching treatment performed. Red wine therefore affected the surface roughness of the aged nanofilled resin submitted to bleaching, and bleaching was unable to reverse the color changes promoted by red wine and coffee on the aged nanofilled composite.


2016, Vol 27 (3), pp. 863-875. Author(s): Ana W Capuano, Robert S Wilson, Sue E Leurgans, Jeffrey D Dawson, David A Bennett, ...

Linear mixed models are widely used to analyze longitudinal cognitive data. Often, however, the trajectory of cognitive function is nonlinear. For example, some participants may experience cognitive decline that accelerates as death approaches. Polynomial regression and piecewise linear models are common approaches used to characterize nonlinear trajectories, although both carry assumptions that may not correspond to the actual trajectories. An alternative is a flexible sigmoidal mixed model based on the logistic family of curves. We describe a general class of such models, with up to five parameters representing (1) final level, (2) rate of decline, (3) midpoint of decline, (4) initial level before decline, and (5) asymmetry. Focusing on a four-parameter symmetric subclass of the model, with random effects on two of the parameters, we demonstrate that a likelihood approach to fitting this model produces accurate estimates of mean levels across time, even under model misspecification. We also illustrate the method using data from deceased participants who had completed at least 5 years of annual cognitive testing and annual assessment of body mass. We show that departures from stable body mass can modify the trajectory curves and anticipate cognitive decline.
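One common parameterization consistent with the five features listed, written here for illustration (a Richards-type generalized logistic; the paper's exact form may differ), is:

```latex
f(t) \;=\; \alpha_{1} \;+\;
\frac{\alpha_{4} - \alpha_{1}}
     {\bigl(1 + \exp\{(t - \alpha_{3})/\alpha_{2}\}\bigr)^{\alpha_{5}}},
```

where α₁ is the final level (the limit as t → ∞ for α₂ > 0), α₂ governs the rate of decline, α₃ locates the midpoint, α₄ is the initial level (the limit as t → −∞), and α₅ controls asymmetry; setting α₅ = 1 recovers the symmetric four-parameter logistic subclass the authors focus on.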


2021, pp. 0272989X2110038. Author(s): Felix Achana, Daniel Gallacher, Raymond Oppong, Sungwook Kim, Stavros Petrou, ...

Economic evaluations conducted alongside randomized controlled trials are a popular vehicle for generating high-quality evidence on the incremental cost-effectiveness of competing health care interventions. Typically, in these studies, resource use (and by extension, economic costs) and clinical (or preference-based health) outcomes data are collected prospectively for trial participants to estimate the joint distribution of incremental costs and incremental benefits associated with the intervention. In this article, we extend the generalized linear mixed-model framework to enable simultaneous modeling of multiple outcomes of mixed data types, such as those typically encountered in trial-based economic evaluations, taking into account correlation of outcomes due to repeated measurements on the same individual and other clustering effects. We provide new wrapper functions to estimate the models in Stata and R by maximum and restricted maximum quasi-likelihood and compare the performance of the new routines with alternative implementations across a range of statistical programming packages. Empirical applications using observed and simulated data from clinical trials suggest that the new methods produce results broadly similar to those from Stata's merlin and gsem commands and a Bayesian implementation in WinBUGS. We highlight that, although these empirical applications primarily focus on trial-based economic evaluations, the new methods presented can be generalized to other health economic investigations characterized by multivariate hierarchical data structures.
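The paper's wrapper functions are for Stata and R; purely to illustrate the kind of data structure such joint models target, the sketch below simulates mixed-type trial outcomes, a right-skewed cost and a binary response, made correlated through a shared patient-level random effect. All names and values are illustrative assumptions, and this is not the authors' estimator:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(21)

# Per-patient cost (log-link, continuous) and binary clinical response
# (logit link) share a random effect b, inducing within-patient
# correlation across the two outcome types.
rows = []
for pid in range(200):
    arm = pid % 2                          # 0 = control, 1 = intervention
    b = rng.normal(0, 0.5)                 # shared patient-level effect
    cost = np.exp(6.0 + 0.3 * arm + b + rng.normal(0, 0.4))
    p_resp = 1 / (1 + np.exp(-(-0.2 + 0.5 * arm + b)))
    rows.append({"patient": pid, "arm": arm, "cost": cost,
                 "response": rng.binomial(1, p_resp)})
df = pd.DataFrame(rows)

# Joint modeling aims to estimate incremental costs and benefits while
# respecting this cross-outcome correlation:
print(df.groupby("arm")[["cost", "response"]].mean())
```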


2015, Vol 26 (3), pp. 1110-1129. Author(s): Nicholas Mitsakakis, George Tomlinson

Estimation of net costs attributed to a disease or other health condition is very important for health economists and policy makers. Skewness and heteroscedasticity are well-known characteristics of cost data, making linear models generally inappropriate and dictating the use of other types of models, such as gamma regression. Additional hurdles emerge when individual-level data are not available. In this paper, we consider the latter case, where data are only available at the aggregate level, containing means and standard deviations for different strata defined by a number of demographic and clinical factors. We summarize a number of methods that can be used for this estimation, and we propose a Bayesian approach that treats the stratum-specific sample standard deviations as stochastic. We investigate the performance of two linear mixed models and compare them with two proposed gamma regression mixed models on simulated data generated from gamma and log-normal distributions. Our proposed Bayesian approach seems to have significant advantages for net cost estimation when only aggregate data are available. The implemented gamma models do not seem to offer the expected benefits over the linear models; however, further investigation and refinement are needed.
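The core device can be sketched as follows (our reading, with illustrative notation): with only the stratum-level sample mean and sample standard deviation from nₛ patients in stratum s, approximate the sampling distributions of both summaries rather than plugging the standard deviation in as known:

```latex
\bar{x}_{s} \sim \mathcal{N}\!\bigl(\mu_{s},\, \sigma_{s}^{2}/n_{s}\bigr),
\qquad
\frac{(n_{s}-1)\, s_{s}^{2}}{\sigma_{s}^{2}} \sim \chi^{2}_{n_{s}-1},
\qquad
\mu_{s} = g^{-1}\!\bigl(\mathbf{z}_{s}^{\top}\boldsymbol{\beta}\bigr),
```

where g is the link function (log for the gamma regressions) and zₛ holds the demographic and clinical factors defining stratum s. The χ² component is one standard device for letting the sample standard deviations be stochastic; the paper's exact formulation should be consulted before use.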

