Optimal Repeated Measurements for Two Treatment Designs with Dependent Observations: The Case of Compound Symmetry

Mathematics ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 378 ◽  
Author(s):  
Chalikias

In this paper, we construct optimal repeated measurement designs of two treatments for estimating direct effects, and we examine the case of compound symmetry dependency. We present the model and the design that minimizes the variance of the estimated difference of the two treatments. The optimal designs with dependent observations in a compound symmetry model are the same as in the case of independent observations.
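As an illustrative sketch (not taken from the paper), the equivalence with the independent case can be checked numerically: for any within-subject treatment contrast whose coefficients sum to zero, a compound symmetry covariance rescales the contrast variance by the constant factor (1 − ρ), so the ranking of designs is unchanged. All numbers below are hypothetical.

```python
import numpy as np

p, sigma2, rho = 4, 1.0, 0.3
I, J = np.eye(p), np.ones((p, p))
Sigma = sigma2 * ((1 - rho) * I + rho * J)  # compound symmetry covariance

# a within-subject treatment contrast (coefficients sum to zero)
c = np.array([1.0, -1.0, 1.0, -1.0]) / 2

var_cs = c @ Sigma @ c        # variance under compound symmetry
var_indep = sigma2 * (c @ c)  # variance under independence

# because c sums to zero, the J-term vanishes and the two variances
# differ only by the constant factor (1 - rho):
assert np.isclose(var_cs, (1 - rho) * var_indep)
```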

2018 ◽  
Vol 98 (4) ◽  
pp. 897-907
Author(s):  
Gaofeng Jia ◽  
Helen M. Booker

Multi-environment trials are conducted to evaluate the performance of cultivars. In a combined analysis, the mixed model is superior to an analysis of variance for evaluating and comparing cultivars and for dealing with an unbalanced data structure. This study seeks to identify the optimal models using the Saskatchewan Variety Performance Group post-registration regional trial data for flax. Yield data were collected for 15 entries in post-registration tests conducted in Saskatchewan from 2007 to 2016 (except 2011), and 16 mixed models with homogeneous or heterogeneous residual errors were compared. A compound symmetry model with heterogeneous residual error (CSR) had the best fit, with a normal distribution of residuals and a mean of zero fitted to the trial data for each year. The compound symmetry model with homogeneous residual error (CS) and a model extending the CSR to higher dimensions (DIAGR) were the next best models in most cases. Five hundred random samples generated by a two-stage sampling method were used to determine the optimal models suitable for various environments. The CSR model was superior to the other models for 396 out of 500 samples (79.2%). The top three models, CSR, CS, and DIAGR, had higher statistical power and could be used to assess the yield stability of the new flax cultivars. Optimal mixed models are recommended for future data analysis of new flax cultivars in regional tests.
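A minimal sketch of the kind of covariance-structure comparison described above, using AIC on simulated data (not the flax trial data; all values are hypothetical). The compound symmetry fit here is a simple moment-based plug-in rather than the REML estimation used in mixed-model software.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p = 200, 5  # e.g. 200 plots observed over 5 years

# simulate yields with a compound symmetry (CS) covariance across years
true_cov = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))
y = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

mu = y.mean(axis=0)
s2 = y.var(axis=0)
r = np.corrcoef(y, rowvar=False)
rho = r[np.triu_indices(p, 1)].mean()  # average off-diagonal correlation

def aic(cov, k):
    """AIC for a Gaussian model with fitted mean and k covariance parameters."""
    ll = multivariate_normal(mu, cov).logpdf(y).sum()
    return 2 * (p + k) - 2 * ll

cov_cs = s2.mean() * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))  # 2 params
cov_diag = np.diag(s2)  # independent, heterogeneous residuals: p params

print("AIC, CS structure  :", aic(cov_cs, 2))
print("AIC, diagonal model:", aic(cov_diag, p))
```

With correlated simulated data, the CS structure attains the lower (better) AIC despite using fewer parameters, which is the kind of evidence the model comparison above relies on.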


2008 ◽  
Vol 52 (No. 8) ◽  
pp. 254-260 ◽  
Author(s):  
A. Wolc ◽  
M. Lisowski ◽  
T. Szwaczkowski

Six generations of three layer lines (13 770 recorded individuals of the A22 line, 13 950 of A88, 9 351 of K66) were used to estimate genetic effects on egg production under cumulative, multitrait and repeatability models. Variance components were estimated by the AI-REML algorithm. The heritability of cumulative records ranged from 0.08 to 0.1. For the repeated measurements model the following genetic parameters were obtained: heritability 0.02–0.03, repeatability 0.04–0.38. The first two months of egg production were found to differ from the other periods: heritability was relatively high (h² > 0.35) and low or negative correlations with the other periods were found. Heritability was low (h² < 0.1) from the peak production until the end of recording, and the consecutive periods were highly correlated. Further studies on monthly records are suggested.


Biometrika ◽  
1987 ◽  
Vol 74 (4) ◽  
pp. 725-734 ◽  
Author(s):  
ADELCHI AZZALINI ◽  
ALESSANDRA GIOVAGNOLI

2006 ◽  
Vol 3 (1) ◽  
Author(s):  
Ene Käärik

In this paper the author demonstrates how the copula approach can be used to construct algorithms for imputing dropouts in repeated measurements studies. A central difficulty with repeated measurements is that the data are described by a joint distribution. Copulas are used to create the joint distribution with given marginal distributions. Knowing the joint distribution, we can find the conditional distribution of the measurement at a specific time point, conditioned on past measurements, and this is essential for imputing missing values. Using Gaussian copulas, two simple methods for imputation are presented. Compound symmetry and the case of autoregressive dependencies are discussed. The effectiveness of the proposed approach is tested via a series of simulations, and the results show that the imputation algorithms based on copulas are appropriate for modelling dropouts.
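A minimal sketch of the conditional-mean imputation idea under compound symmetry, assuming Gaussian margins so the Gaussian copula model reduces to a conditional multivariate normal. All numbers below are hypothetical.

```python
import numpy as np

def impute_next(past, mu, sigma2, rho):
    """Conditional mean of measurement t given past measurements, under a
    compound symmetry (exchangeable) Gaussian model:
    E[X_t | past] = mu + Sigma12 @ Sigma22^{-1} @ (past - mu)."""
    k = len(past)
    Sigma22 = sigma2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
    Sigma12 = sigma2 * rho * np.ones(k)  # cov(X_t, past measurements)
    w = np.linalg.solve(Sigma22, past - mu)
    return mu + Sigma12 @ w

# a subject drops out after three measurements; impute the fourth
imputed = impute_next(np.array([4.2, 4.8, 5.1]), mu=4.0, sigma2=1.0, rho=0.6)
print(imputed)
```

Under compound symmetry this conditional mean has the closed form mu + kρ/(1 + (k−1)ρ) · (x̄ − mu), i.e. a shrinkage of the past-measurement average toward the marginal mean.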


2017 ◽  
Vol 28 (3) ◽  
pp. 788-800 ◽  
Author(s):  
PJ Godolphin ◽  
EJ Godolphin

When performing a repeated measures experiment, such as a clinical trial, there is a risk of subject drop-out during the experiment. If one or more subjects leave the study prematurely, a situation could arise where the eventual design is disconnected, implying that very few treatment contrasts for both direct effects and carryover effects are estimable. This paper aims to identify experimental conditions where this problem with the eventual design can be avoided. It is shown that in the class of uniformly balanced repeated measurement designs consisting of two or more Latin squares, there are planned designs with the following useful property. Provided that all subjects have completed the first two periods of study, such a design will not be replaced by a disconnected eventual design due to drop-out, irrespective of the type of drop-out behaviour that may occur. Designs with this property are referred to as perpetually connected. These experimental conditions are identified and examined in the paper and an example of at least one perpetually connected uniformly balanced repeated measurement design is given in each case. The results improve upon previous contributions in the literature that have been confined largely to cases in which drop-out occurs only in the final periods of study.
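A simplified sketch of checking connectedness for direct effects only (the paper also considers carryover effects): join two treatments whenever they appear on the same subject, and the design is connected when this graph has a single component. The sequences below are a hypothetical example of subjects completing only the first two periods of a Latin-square design.

```python
def connected(subject_sequences, n_treatments):
    """Union-find check that all treatments lie in one component of the
    graph joining treatments that share a subject."""
    parent = list(range(n_treatments))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for seq in subject_sequences:
        for t in seq[1:]:
            parent[find(t)] = find(seq[0])
    return len({find(t) for t in range(n_treatments)}) == 1

full = [[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]]
truncated = [seq[:2] for seq in full]  # every subject drops out after period 2
print(connected(truncated, 4))  # True: two periods already link all treatments
```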


2013 ◽  
Vol 61 (1) ◽  
Author(s):  
Lim Fong Tee ◽  
Mohd Saberi Mohamad ◽  
Safaai Deris ◽  
Ahmad ‘Athif Mohd Faudzi ◽  
Muhammad Shafie Abd Latiff ◽  
...  

Hierarchical clustering is an unsupervised technique and a common approach to studying protein and gene expression data. In clustering, the patterns of expression of different genes are grouped into distinct clusters, in which the genes in the same cluster are assumed to be potentially functionally related or influenced by a common upstream factor. Although the use of clustering methods has rapidly become one of the standard computational approaches in the literature on microarray gene expression data analysis, the uncertainty in the results obtained remains a concern. Experimental repetitions are generally performed to overcome the drawbacks of biological and technical variability. In this study, the authors propose using repeated measurements to evaluate the stability of gene clusters. This paper aims to show that the stability of gene clusters, assessed with repeated measurements, can be used for further analysis.
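A toy sketch of the stability idea: cluster the same simulated "genes" from two repeated measurements and check whether the induced partitions agree. The data, separation, and cluster count are hypothetical, not from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# two well-separated groups of "genes" (rows), profiled in two repeated runs
base = np.vstack([np.zeros((10, 6)), 5 * np.ones((10, 6))])
runs = []
for _ in range(2):  # two repeated measurements of the same experiment
    expr = base + rng.normal(scale=0.5, size=base.shape)
    Z = linkage(expr, method="average")
    runs.append(fcluster(Z, t=2, criterion="maxclust"))

# a clustering is stable if both runs induce the same partition
# (up to swapping the arbitrary cluster labels)
g1 = runs[0] == runs[0][0]
g2 = runs[1] == runs[1][0]
stable = np.array_equal(g1, g2) or np.array_equal(g1, ~g2)
print(stable)
```

For this well-separated toy data the partitions agree across repetitions; real expression data would yield partial agreement, which is what a stability score quantifies.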


1988 ◽  
Vol 37 (1-2) ◽  
pp. 55-66 ◽  
Author(s):  
E. Carlstein

Many important statistics are actually degenerate U-statistics; examples include the χ2 goodness-of-fit statistic, the generalized Cramér-von Mises goodness-of-fit statistics, Hoeffding's nonparametric measure of bivariate dependence, the sample variance, and the cross-product statistic. Although these statistics were originally proposed for iid data, they remain intuitively reasonable and useful even when the underlying data contain serial dependence. The presence of such dependence alters the limiting distributions for these statistics, and this in turn should be reflected in any concomitant confidence intervals or critical regions. This paper presents straightforward asymptotic distribution theory for degenerate U-statistics computed from dependent observations; the results are applied to the examples mentioned above. The dependence in the data is characterized using standard model-free mixing conditions. There has not been much other work on degenerate U-statistics in the non-independent case, and our general formulation is the first to permit a unified treatment of all the examples discussed above. In fact, the asymptotic distributions of the Cramér-von Mises and Hoeffding statistics have not previously been derived in the case of non-independent data.
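As a small illustration (not the paper's derivation), the one-sample Cramér-von Mises statistic can be computed with its standard formula and applied to serially dependent data such as an AR(1) series, where critical values based on the iid limiting distribution would be misleading. The AR coefficient and sample size below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def cramer_von_mises(x, cdf):
    """One-sample Cramér-von Mises statistic:
    T = 1/(12n) + sum_i ((2i-1)/(2n) - F(x_(i)))^2."""
    n = len(x)
    u = cdf(np.sort(x))
    i = np.arange(1, n + 1)
    return 1 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)

# serially dependent sample: AR(1) with standard normal innovations
rng = np.random.default_rng(2)
e = rng.normal(size=500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + e[t]

# standardize to the marginal variance 1/(1 - 0.7**2) and test against N(0,1)
stat = cramer_von_mises(x / np.sqrt(1 / 0.51), norm.cdf)
print(stat)
```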


SPE Journal ◽  
2018 ◽  
Vol 24 (01) ◽  
pp. 60-70 ◽  
Author(s):  
Oscar Vazquez ◽  
Gill Ross ◽  
Myles M. Jordan ◽  
Dionysius Angga Baskoro ◽  
Eric Mackay ◽  
...  

Summary Oilfield-scale deposition is one of the important flow-assurance challenges facing the oil industry. There are a number of methods to mitigate oilfield scale, such as reducing sulfates in the injected brine, reducing water flow, removing damage by using dissolvers or physically by milling or reperforating, and inhibition, which is particularly recommended if a severe risk of sulfate-scale deposition is present. Inhibition consists of injecting a chemical that prevents the deposition of scale, either by stopping nucleation or by retarding crystal growth. The inhibiting chemicals are either injected in a dedicated continuous line or bullheaded as a batch treatment into the formation, commonly known as a scale-squeeze treatment. In general, scale-squeeze treatments consist of the following stages: preflush to condition the formation or act as a buffer to displace tubing fluids; the main treatment, where the main pill of chemical is injected; overflush to displace the chemical deep into the reservoir; a shut-in stage to allow further chemical retention; and placing the well back in production. The well will be protected as long as the concentration of the chemical in the produced brine is greater than a certain threshold, commonly known as minimum inhibitor concentration (MIC). This value is usually between 1 and 20 ppm. The most important factor in a squeeze-treatment design is the squeeze lifetime, which is determined by the volume of water or days of production where the chemical-return concentration is greater than the MIC. The main purpose of this paper is to describe the automatic optimization of squeeze-treatment designs using an optimization algorithm, in particular particle-swarm optimization (PSO). The algorithm provides a number of optimal designs, which result in squeeze lifetimes close to the target. 
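A minimal particle-swarm optimizer sketch, minimizing a toy stand-in for the squeeze-lifetime objective. The objective function, bounds, and parameter names are hypothetical and not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle-swarm optimizer (minimization) over box bounds."""
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# toy "lifetime miss" objective: squared distance of (pill volume, overflush
# volume) from a hypothetical target design
target = np.array([50.0, 200.0])
best, best_f = pso(lambda d: np.sum((d - target) ** 2),
                   (np.array([0.0, 0.0]), np.array([100.0, 500.0])))
print(best, best_f)
```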
To determine the most efficient of the optimal designs identified by the algorithm, the following objectives were considered: operational-deployment costs, chemical cost, total-injected-water volume, and squeeze-treatment lifetime. Operational-deployment costs include the support vessel, pump, and tank hire. There might not be a single design optimizing all objectives, and thus the problem becomes a multiobjective optimization. Therefore, a number of Pareto-optimal solutions exist. These designs are not dominated by any other design: no objective can be improved without worsening another. Calculating the Pareto front is essential to identify the most efficient design (i.e., the most cost-effective design).
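A small sketch of Pareto-front identification by non-dominated filtering over hypothetical design objectives (lifetime is negated so all objectives are minimized); the numbers are invented for illustration.

```python
import numpy as np

def pareto_front(costs):
    """Return a mask of non-dominated rows, all objectives minimized.
    A design is dominated if another is <= on every objective and
    strictly < on at least one."""
    n = len(costs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(costs[j] <= costs[i]) \
                    and np.any(costs[j] < costs[i]):
                mask[i] = False
                break
    return mask

# hypothetical designs: [deployment cost, chemical cost, -squeeze lifetime]
designs = np.array([
    [100.0, 50.0, -300.0],
    [120.0, 40.0, -310.0],
    [130.0, 60.0, -290.0],  # dominated by the first design on all objectives
    [ 90.0, 70.0, -305.0],
])
front = pareto_front(designs)
print(front)
```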

