Simple study designs in ecology produce inaccurate estimates of biodiversity responses

2019 ◽  
Author(s):  
Alec P. Christie ◽  
Tatsuya Amano ◽  
Philip A. Martin ◽  
Gorm E. Shackelford ◽  
Benno I. Simmons ◽  
...  

Abstract Ecologists use a wide range of study designs to estimate the impact of interventions or threats, but there are no quantitative comparisons of their accuracy. For example, while it is accepted that simpler designs, such as After (sampling sites post-impact without a control), Before-After (BA), and Control-Impact (CI), are less robust than Randomised Controlled Trials (RCT) and Before-After Control-Impact (BACI) designs, it is not known how much less accurate they are.
We simulate a step-change response of a population to an environmental impact using empirically derived estimates of the major parameters. We use five ecological study designs to estimate the effect of this impact and evaluate each one by determining the percentage of simulations in which it accurately estimates the direction and magnitude of the environmental impact. We also simulate different numbers of replicates and assess several accuracy thresholds.
We demonstrate that BACI designs could be 1.1-1.5 times more accurate than RCTs, 2.9-4.1 times more accurate than BA, 3.8-5.6 times more accurate than CI, and 6.8-10.8 times more accurate than After designs when estimating to within ±30% of the true effect (depending on the sample size). We also found that increasing sample size substantially increases the accuracy of BACI designs but only increases the precision of simpler designs around a biased estimate; only by using more robust designs can accuracy be increased. Modestly increasing replication of both control and impact sites also increased the accuracy of BACI designs more than substantially increasing replicates in just one of these groups.
We argue that investment in more robust designs in ecology, where possible, is extremely worthwhile given the inaccuracy of simpler designs, even at large sample sizes. Based on our results we propose a weighting system that quantitatively ranks the accuracy of studies based on their study design and the number of replicates used. We hope these ‘accuracy weights’ enable researchers to better account for study design in evidence synthesis when assessing the reliability of a range of studies using a variety of designs.
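The core of the simulation logic described above (evaluating how design choice affects effect-size accuracy) can be sketched in a highly simplified form. This is a minimal illustration, assuming normally distributed abundances, a step-change impact, and a shared temporal trend acting as a confounder; the parameter values are illustrative placeholders, not the empirically derived estimates used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_sites=10, true_effect=-0.3, trend=0.2, noise=0.5, n_sims=2000):
    """Compare BA and BACI estimates of a step-change impact.

    Each simulation draws 'before' and 'after' abundances for control and
    impact sites; a shared temporal trend confounds the BA design, because
    BA attributes any before-to-after change to the impact alone.
    """
    ba_est, baci_est = [], []
    for _ in range(n_sims):
        base_c = rng.normal(1.0, noise, n_sites)   # control sites, before
        base_i = rng.normal(1.0, noise, n_sites)   # impact sites, before
        after_c = base_c + trend + rng.normal(0, noise, n_sites)
        after_i = base_i + trend + true_effect + rng.normal(0, noise, n_sites)
        # BA: change at impact sites only (absorbs the shared trend -> biased)
        ba_est.append(np.mean(after_i - base_i))
        # BACI: difference-in-differences removes the shared trend
        baci_est.append(np.mean(after_i - base_i) - np.mean(after_c - base_c))
    return np.mean(ba_est), np.mean(baci_est)

ba, baci = simulate()
print(f"true effect: -0.30  BA estimate: {ba:.2f}  BACI estimate: {baci:.2f}")
```

With these (hypothetical) parameters the BA mean converges on the trend-contaminated value rather than the true effect, illustrating the paper's point that more replicates only tighten precision around a biased estimate for simpler designs.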

Author(s):  
Gary Sutlieff ◽  
Lucy Berthoud ◽  
Mark Stinchcombe

Abstract CBRN (Chemical, Biological, Radiological, and Nuclear) threats are becoming more prevalent, as more entities gain access to modern weapons, industrial technologies, and chemicals. This has produced a need for improvements to modelling, detection, and monitoring of these events. While there are currently no dedicated satellites for CBRN purposes, there are a wide range of possibilities for satellite data to contribute to this field, from atmospheric composition and chemical detection to cloud cover, land mapping, and surface property measurements. This study looks at currently available satellite data, including meteorological data such as wind and cloud profiles, surface properties like temperature and humidity, chemical detection, and sounding. Results of this survey revealed several gaps in the available data, particularly concerning biological and radiological detection. The results also suggest that publicly available satellite data largely does not meet the requirements of spatial resolution, coverage, and latency that CBRN detection requires, beyond providing terrain use and building height data for constructing models. Lastly, the study evaluates upcoming instruments, platforms, and satellite technologies to gauge the impact these developments will have in the near future. Improvements in spatial and temporal resolution as well as latency are already becoming possible, and new instruments will fill in the gaps in detection by imaging a wider range of chemicals and other agents and by collecting new data types. This study shows that with developments coming within the next decade, satellites should begin to provide valuable augmentations to CBRN event detection and monitoring.
Article Highlights
There is a wide range of existing satellite data in fields that are of interest to CBRN detection and monitoring.
The data are mostly of insufficient quality (resolution or latency) for the demanding requirements of CBRN modelling for incident control.
Future technologies and platforms will improve resolution and latency, making satellite data more viable in the CBRN management field.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Frank de Vocht ◽  
Srinivasa Vittal Katikireddi ◽  
Cheryl McQuire ◽  
Kate Tilling ◽  
Matthew Hickman ◽  
...  

Abstract Background Natural or quasi experiments are appealing for public health research because they enable the evaluation of events or interventions that are difficult or impossible to manipulate experimentally, such as many policy and health system reforms. However, there remains ambiguity in the literature about their definition and how they differ from randomized controlled experiments and from other observational designs. We conceptualise natural experiments in the context of public health evaluations and align the study design to the Target Trial Framework. Methods A literature search was conducted, and key methodological papers were used to develop this work. Peer-reviewed papers were supplemented by grey literature. Results Natural experiment studies (NES) combine features of experiments and non-experiments. They differ from planned experiments, such as randomized controlled trials, in that exposure allocation is not controlled by researchers. They differ from other observational designs in that they evaluate the impact of events or processes that lead to differences in exposure. As a result, they are, in theory, less susceptible to bias than other observational study designs. Importantly, causal inference relies heavily on the assumption that exposure allocation can be considered ‘as-if randomized’. The Target Trial Framework provides a systematic basis for evaluating this assumption and the other design elements that underpin the causal claims that can be made from NES. Conclusions NES should be considered a type of study design rather than a set of tools for analyses of non-randomized interventions. Alignment of NES to the Target Trial Framework will clarify the strength of evidence underpinning claims about the effectiveness of public health interventions.


2002 ◽  
Vol 04 (04) ◽  
pp. 475-492 ◽  
Author(s):  
CHARLES KELLY

The linkages between disaster and environmental damage are recognized as important to predicting, preventing and mitigating the impact of disasters. Environmental Impact Assessment (EIA) procedures are well developed for non-disaster situations. However, they are conceptually and operationally inappropriate for use in disaster conditions, particularly in the first 120 days after the disaster has begun. The paper provides a conceptual overview of the requirements for an environmental impact assessment procedure appropriate for disaster conditions. These requirements are captured in guidelines for a Rapid Environmental Impact Assessment (REA) for use in disasters. The REA guides the collection and assessment of a wide range of factors which can indicate: (1) the negative impacts of a disaster on the environment, (2) the impacts of environmental conditions on the magnitude of a disaster, and (3) the positive or negative impacts of relief efforts on environmental conditions. The REA also provides a foundation for recovery program EIAs, thus improving the overall post-disaster recovery process. The REA is designed primarily for relief cadres, but is also expected to be usable as an assessment tool with disaster victims. The paper discusses the field testing of the REA under actual disaster conditions.


2018 ◽  
Vol 52 (04) ◽  
pp. 170-174
Author(s):  
Emanuel Severus ◽  
Cathrin Sauer ◽  
Michael Bauer ◽  
Michael Ostacher ◽  
Ion-George Anghelescu

Abstract Introduction Randomized, double-blind, placebo-controlled trials were developed to draw unbiased conclusions regarding the efficacy of antidepressants in the treatment of a major depressive episode (internal validity), mostly with the purpose of formal approval of new compounds in this indication. However, at the same time, data suggest that the very process of randomization and blinded administration of placebo has a significant impact on the efficacy of the antidepressant tested and therefore may limit the external validity of results obtained from this type of study. There is thus an urgent need to systematically study the impact of randomization, placebo control, and blinding on patient population, efficacy, tolerability, and external validity in the psychopharmacological treatment of patients with a major depressive episode. Methods We aimed to develop a study design that allows systematic exploration of the impact of trial design on the characteristics of the included patient population and on outcomes. Results We propose a study design, including a sample size calculation and statistical analysis, in which patients with a major depressive episode are randomized to 3 distinct study designs that differ with regard to control, randomization, and blinding. Discussion The results of the proposed study design may have substantial consequences for how best to interpret the results of traditional randomized, double-blind, placebo-controlled trials in the acute treatment of major depressive disorder. Furthermore, they may lead to the implementation of new study designs that are more suitable for assessing the effectiveness of new antidepressant compounds in everyday clinical practice.
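The proposed design includes a sample size calculation. As background, a minimal sketch of a standard two-arm, normal-approximation per-arm sample size formula for comparing means is shown below; this is a generic textbook calculation, not the authors' actual computation, and the effect sizes used are illustrative.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-sample comparison of means.

    effect_size is the standardized difference (Cohen's d); uses the
    normal approximation n = 2 * ((z_{1-a/2} + z_{power}) / d)^2.
    """
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance threshold
    z_b = z(power)           # power requirement
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# Illustrative effect sizes: medium (0.5) and large (0.8)
print(n_per_arm(0.5), n_per_arm(0.8))  # → 63 25
```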


2016 ◽  
Author(s):  
Sara Ballouz ◽  
Jesse Gillis

Abstract Background Disagreements over genetic signatures associated with disease have been particularly prominent in the field of psychiatric genetics, creating a sharp divide between disease burdens attributed to common and rare variation, with study designs independently targeting each. Meta-analysis within each of these study designs is routine, whether using raw data or summary statistics, but combining results across study designs is atypical. However, tests of functional convergence are used across all study designs, where candidate gene sets are assessed for overlaps with previously known properties. This suggests one possible avenue for combining not study data, but the functional conclusions that they reach.
Method In this work, we test for functional convergence in autism spectrum disorder (ASD) across different study types, and specifically whether the degree to which a gene is implicated in autism is correlated with the degree to which it drives functional convergence. Because different study designs are distinguishable by their differences in effect size, this also provides a unified means of incorporating the impact of study design into the analysis of convergence.
Results We detected remarkably significant positive trends in aggregate (p < 2.2e-16), with 14 individually significant properties (FDR < 0.01), many in areas researchers have targeted based on different reasoning, such as fragile X mental retardation protein (FMRP) interactor enrichment (FDR = 0.003). We are also able to detect novel technical effects, and we see that network enrichment from protein-protein interaction data is heavily confounded with study design, arising readily in control data.
Conclusions We see a convergent functional signal for a subset of known and novel functions in ASD from all sources of genetic variation. Meta-analytic approaches explicitly accounting for different study designs can be adapted to other diseases to discover novel functional associations and increase statistical power.


Author(s):  
Krishna Regmi ◽  
Cho Mar Lwin

Abstract Introduction Social distancing measures (SDMs) protect public health from the outbreak of coronavirus disease 2019 (COVID-19). However, the impact of SDMs has been inconsistent and unclear. This study aims to assess the effects of SDMs (e.g. isolation, quarantine) for reducing the transmission of COVID-19.
Methods and analysis We will conduct a systematic review and meta-analysis of both randomised controlled trials and non-randomised controlled trials. We will search MEDLINE, EMBASE, Allied & Complementary Medicine, COVID-19 Research and the WHO database on COVID-19 for primary studies assessing the effects of SDMs (e.g. isolation, quarantine) for reducing the transmission of COVID-19; findings will be reported in accordance with the PRISMA statement. The PRISMA-P checklist will be used while preparing this protocol. We will use Joanna Briggs Institute guidelines (JBI Critical Appraisal Checklists) to assess methodological quality, and findings will be synthesised using thematic analysis. Two reviewers will independently screen the papers and extract data. If sufficient data are available, a random-effects meta-analysis will be performed to measure the effect size of SDMs or the strengths of relationships. To assess heterogeneity of effects, I² together with the observed effects (Q-value, with degrees of freedom) will be used to estimate the true effects in the analysis.
Ethics and dissemination Ethics approval and consent will not be required for this systematic review of the literature, as it does not involve human participation. We will disseminate the study findings using the following strategies: we will publish at least one paper in peer-reviewed journals, and an abstract will be presented at suitable national/international conferences or workshops. We will also share important information with public health authorities as well as with the World Health Organization. In addition, we may post the submitted manuscript under review to bioRxiv, medRxiv, or other relevant pre-print servers.
Strengths and limitations of this study
To our knowledge, this study will be the first systematic review to examine the impact of social distancing measures to reduce transmission of COVID-19.
This study will offer the highest level of evidence for informed decisions, drawing on a broader framework.
This protocol reduces the possibility of duplication, provides transparency in the methods and procedures that will be used, minimises potential biases, and allows peer review.
This research is not externally funded, and therefore time and resources will be constrained.
Included studies may vary in sample size, quality, and population, which may introduce bias, and heterogeneity of the data may preclude a meaningful meta-analysis to measure the impact of specific SDMs.
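The heterogeneity assessment mentioned in the protocol (I² together with Cochran's Q) can be sketched as follows. The effect estimates and variances below are hypothetical placeholders, and this is a generic inverse-variance computation rather than the protocol's actual analysis pipeline.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect estimates.

    effects: per-study effect estimates; variances: their sampling variances.
    I^2 expresses the share of total variability due to between-study
    heterogeneity rather than chance: I^2 = max(0, (Q - df) / Q) * 100.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical effects and variances from four studies:
q, i2 = heterogeneity([0.2, 0.5, 0.8, 0.3], [0.01, 0.02, 0.015, 0.01])
print(f"Q = {q:.1f} (df = 3), I^2 = {i2:.0f}%")
```

Here Q well above its degrees of freedom, and an I² around 80%, would signal substantial heterogeneity and support the protocol's choice of a random-effects model.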


2019 ◽  
Vol 24 (01) ◽  
pp. 36-44 ◽  
Author(s):  
Yuki Fujihara ◽  
Nasa Fujihara ◽  
Michiro Yamamoto ◽  
Hitoshi Hirata

Background: To date, little is known about the characteristics of highly cited studies in hand surgery compared with other orthopaedic subspecialties. We aimed to assess the position of hand surgery within the orthopedic surgery literature. Methods: We conducted a bibliographic analysis using the Web of Science database to review 1,568 articles published between January 2012 and December 2012 in 4 relevant general orthopedic and 2 hand surgery journals. We used the number of citations within 3 years of publication to measure the impact of each paper. To analyze prognostic factors using logistic regression analysis, we extracted data on orthopedic subspecialty, published journal, location of authorship, and type of study for all articles. For clinical studies, we also recorded details on study design and sample size. Results: Of eligible hand surgery articles (n = 307), the majority (62%) were case reports/series. Only 19% were comparative studies, a significantly smaller proportion than for other subspecialties in general orthopedic journals. Systematic reviews/meta-analyses generated a significantly higher number of average citations, whereas educational reviews were consistently cited less frequently than other study types (14.9 and 6.1 average citations, respectively). Being published in the Journal of Bone and Joint Surgery, American volume, having authorship in North America, Europe, or Australia, focusing on subspecialties like hip & knee, sports, or shoulder, utilizing a comparative or randomized clinical trial study design, and having a larger sample size increased the odds of receiving more citations. Conclusions: Clinical studies related to hand surgery published in general orthopedic journals most often use lower-quality study designs. Having a larger sample size or using a comparative or randomized clinical trial design can improve study quality and may ultimately increase the impact factor of hand surgery journals.
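The study analysed prognostic factors for citations with multivariable logistic regression. As a simplified, univariate sketch of the same idea, the snippet below computes an odds ratio with a Woolf 95% confidence interval from a 2×2 table; all counts are hypothetical and do not correspond to the paper's data.

```python
import math

def odds_ratio(exposed_hi, exposed_lo, unexp_hi, unexp_lo):
    """Odds ratio and Woolf 95% CI from a 2x2 table.

    Rows: study feature present / absent; columns: highly cited / less cited.
    The log-odds-ratio standard error is sqrt of the sum of reciprocal counts.
    """
    or_ = (exposed_hi * unexp_lo) / (exposed_lo * unexp_hi)
    se = math.sqrt(1 / exposed_hi + 1 / exposed_lo + 1 / unexp_hi + 1 / unexp_lo)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: comparative studies vs. case reports, by citation tier
or_, ci = odds_ratio(30, 28, 40, 150)
print(f"OR = {or_:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

A multivariable model (as used in the paper) would additionally adjust each feature's odds ratio for the others; this single-table version only illustrates the direction and scale of one association.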


2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Ennio Giulio Favalli ◽  
Serena Bugatti ◽  
Martina Biggioggero ◽  
Roberto Caporali

Over the last decades, the increasing knowledge in the area of rheumatoid arthritis has progressively expanded the arsenal of available drugs, especially with the introduction of novel targeted therapies such as biological disease-modifying antirheumatic drugs (DMARDs). In this situation, rheumatologists are offered a wide range of treatment options, but on the other hand the need for comparisons between available drugs becomes more and more crucial in order to better define strategies for drug choice and optimal sequencing. Indirect comparisons or meta-analyses of data coming from different randomised controlled trials (RCTs) are not immune to conceptual and technical challenges and often provide inconsistent results. In this review we examine some of the possible evolutions of traditional RCTs, such as the inclusion of active comparators, aimed at individualising treatments in real-life conditions. Although head-to-head RCTs may be considered the best tool to directly compare the efficacy and safety of two different DMARDs, surprisingly only 20 studies with such a design have been published in the last 25 years. Given the recent advent of the first RCTs truly comparing biological DMARDs, we also review the state of the art of head-to-head trials in RA.


2021 ◽  
pp. 1357633X2110371
Author(s):  
Chukwuemeka A. Umeh ◽  
Maunika Reddy ◽  
Ankit Dubey ◽  
Mohammad Yousuf ◽  
Sumanta Chaudhuri ◽  
...  

Introduction A wide range of study designs have been utilized in evaluations of home telemonitoring, and these studies have produced conflicting outcomes over the years. While some of the research has shown that telemonitoring is beneficial in reducing all-cause mortality, hospital admission, length of stay in hospital, and emergency room visits, other studies have not shown such benefits. This study, therefore, aims to examine several home telemonitoring study designs and the influence of study design on study outcomes. Method Articles were obtained by searching the PubMed database with the term heart failure combined with the following terms: telemonitoring, telehealth, home monitoring, and remote monitoring. Searches were limited to randomized controlled trials conducted between January 1, 2000 and February 6, 2021. The characteristics of the study designs and study outcomes were extracted and analyzed. Result Our review of 34 randomized controlled trials of heart failure telemonitoring did not show any significant influence of study design on reduction in number of hospitalizations and/or decrease in mortality. Studies that were done outside North America (USA and Canada) and studies that selected patients at high risk of re-hospitalization were more likely to result in decreased hospitalization and/or mortality, though this was not statistically significant. All the studies that met our inclusion criteria were from high-income countries, and only one study enrolled patients at high risk of re-hospitalization. Conclusion There is a need for more studies to understand why telemonitoring studies in Europe were more likely to reduce hospital admission and mortality compared to those in North America. There is also a need for more studies on the effect of telemonitoring in patients at high risk of hospital readmission.


2020 ◽  
Author(s):  
Nicole Riemer ◽  
Jessica Gasparik ◽  
Qing Ye ◽  
Matthew West ◽  
Jeff Curtis ◽  
...  

Atmospheric aerosols are evolving mixtures of different chemical species. The term “aerosol mixing state” is commonly used to describe how different chemical species are distributed throughout a particle population. A population is “fully internally mixed” if each individual particle consists of the same mixture of species, whereas it is fully externally mixed if each particle contains only one species. Mixing state matters for aerosol health impacts and for climate-relevant aerosol properties, such as the particles’ propensity to form cloud droplets or the aerosol optical properties.
The mixing state metric χ quantifies the degree of internal or external mixing and can be calculated from the particles’ species mass fractions. Several field studies have used this metric to quantify mixing states for different ambient environments using sophisticated single-particle measurement techniques. Inherent to these methods is a finite number of particles, ranging from a few hundred to several thousand, used to estimate the mixing state metric.
This study evaluates the error introduced in calculating χ due to a limited particle sample size. We used the particle-resolved model PartMC-MOSAIC to generate a scenario library that encompasses a large number of reference particle populations and represents a wide range of mixing states. We stochastically sub-sampled these particle populations using sample sizes of 10 to 10,000 particles and recalculated χ from the sub-samples. This procedure mimics the impact of having only a limited sample size, as is common in real-world applications. The finite sample size leads to a consistent overestimation of χ, meaning that the populations appear more internally mixed than they are in reality. These findings are experimentally confirmed using single-particle SP-AMS measurement data from the Pittsburgh area. We also determined confidence intervals of χ for our sub-sampled populations. Determining χ to within ±10 percentage points requires a sample size of at least 1000 particles.
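A minimal sketch of the diversity-based mixing-state index and the sub-sampling procedure described above is shown below. It assumes the common definition χ = (D_α − 1)/(D_γ − 1), where D_α is the mass-weighted average per-particle species diversity and D_γ the bulk diversity; the two-species Dirichlet population is an illustrative stand-in for the PartMC-MOSAIC scenario library, not the authors' actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi(mass):
    """Mixing-state index chi = (D_alpha - 1) / (D_gamma - 1) from an
    (n_particles x n_species) mass matrix, using Shannon-entropy diversities."""
    p = mass / mass.sum(axis=1, keepdims=True)      # per-particle mass fractions
    mu = mass.sum(axis=1) / mass.sum()              # particle mass weights
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
        h = -plogp.sum(axis=1)                      # per-particle entropy
        d_alpha = np.exp(np.sum(mu * h))            # avg per-particle diversity
        pf = mass.sum(axis=0) / mass.sum()          # bulk species mass fractions
        d_gamma = np.exp(-np.sum(np.where(pf > 0, pf * np.log(pf), 0.0)))
    return (d_alpha - 1.0) / (d_gamma - 1.0)

# Hypothetical two-species population with uneven per-particle splits:
full = rng.dirichlet([0.5, 0.5], size=100_000)
chi_full = chi(full)
# Sub-sample 50 particles, as a single-particle instrument effectively does:
sub = full[rng.choice(len(full), size=50, replace=False)]
print(f"chi(full) = {chi_full:.3f}, chi(50-particle subsample) = {chi(sub):.3f}")
```

Repeating the sub-sampling step many times and collecting the resulting χ values would give the sampling distribution from which confidence intervals, like those reported in the study, can be read off.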

