Statistical significance and confidence intervals

BMJ ◽  
2009 ◽  
Vol 339 (sep02 2) ◽  
pp. b3401-b3401 ◽  
Author(s):  
P. M Sedgwick


Author(s):  
Nizam Damani

This section includes a chapter on basic epidemiology and biostatistics as applied to healthcare-associated infections (HAIs). The epidemiology section summarizes various types of studies and outlines the advantages and disadvantages of case–control and cohort studies. It describes incidence and prevalence rates and how to calculate them for the most common HAIs. Practical advice is also given on how to avoid bias and confounding. The chapter describes basic concepts in biostatistics and the tests of statistical significance used in investigating an outbreak. It also provides guidance on how to calculate the sensitivity and specificity of a test and describes how to interpret confidence intervals and statistical process control charts.
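As a minimal illustration of the sensitivity and specificity calculations such a chapter covers (the 2×2 screening counts below are hypothetical, not taken from the book):

```python
# Sensitivity and specificity from a hypothetical 2x2 screening table.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true cases the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of non-cases the test clears: TN / (TN + FP)."""
    return tn / (tn + fp)

# Illustrative outbreak screening results:
tp, fn = 45, 5    # infected patients: test positive / test negative
tn, fp = 90, 10   # non-infected patients: test negative / test positive

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.90
```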


2006 ◽  
Vol 34 (5) ◽  
pp. 601-629 ◽  
Author(s):  
Robin K. Henson

Effect sizes are critical to result interpretation and to synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes that counseling psychologists use most frequently. First, to provide context, the author presents a brief history of statistical significance testing. Second, the author discusses the difference between statistical, practical, and clinical significance. Third, the author reviews and graphically demonstrates two common types of effect sizes, commenting on multivariate and corrected effect sizes. Fourth, the author emphasizes meta-analytic thinking and the potential role of confidence intervals around effect sizes. Finally, the author gives a hypothetical example of how to report and potentially interpret some effect sizes.
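As a rough sketch of one such calculation, a standardized mean difference (Cohen's d) with an approximate confidence interval can be computed as follows; the data and the large-sample normal-approximation standard error are illustrative assumptions, not Henson's worked example:

```python
import math
import statistics

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = math.sqrt(((nx - 1) * statistics.variance(x) +
                           (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    return (statistics.mean(x) - statistics.mean(y)) / pooled_sd

def d_confidence_interval(d, nx, ny, z=1.96):
    """Approximate 95% CI from the large-sample standard error of d."""
    se = math.sqrt((nx + ny) / (nx * ny) + d * d / (2 * (nx + ny)))
    return d - z * se, d + z * se

# Hypothetical outcome scores for two groups:
treatment = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.6]
control   = [4.2, 4.9, 4.4, 5.0, 4.1, 4.6, 4.3, 4.8]

d = cohens_d(treatment, control)
lo, hi = d_confidence_interval(d, len(treatment), len(control))
print(f"d = {d:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Reporting the interval alongside d supports the meta-analytic thinking the article advocates: the interval conveys estimation precision, not just whether zero is excluded.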


2021 ◽  
Vol 9 (B) ◽  
pp. 1525-1528
Author(s):  
Aliya A. Zhanpeissova ◽  
B. T. Tukbekova ◽  
S. B. Akhmetova ◽  
B. Dyusseno Sandugash ◽  
K. Zh. Alimshaikhina ◽  
...  

«IMPORTANCE OF THE PNEUMOCOCCUS IN COMMUNITY-ACQUIRED PNEUMONIA IN TENDER-AGE INFANTS ON THE BACKGROUND OF VACCINATION» The authors of the manuscript provide data demonstrating that pneumonia against the background of vaccination is accompanied by a change in its etiological structure. The article presents a new idea, and overall it is well written and structured. However, the simple study design and the lack of patient baseline and follow-up data limit the conclusions that can be drawn. I invite the authors to address the following questions: 1) There are many spelling, grammatical, and punctuation errors. The authors should correct these mistakes and rewrite the article in a clear and understandable style; I advise them to enlist the help of a language expert when rewriting. 2) The main problem of the manuscript is poor statistical processing and reporting. The authors stated: “All data were expressed as confidence intervals.” It is therefore unclear whether the etiological structure was normally distributed. Moreover, following normal scientific practice, the authors should report specific statistical significance values when comparing each test parameter. 3) The cited literature is relevant to the research and meets the requirements of the journal, but 9 of the 19 references were published over 10 years ago. I recommend citing current research on this topic. In conclusion, the work makes a good impression; it can be published after significant changes addressing the comments above, including correction of errors and rewriting of the article in a clear and easily understandable style.


2021 ◽  
pp. bmjebm-2020-111603
Author(s):  
John Ferguson

Commonly accepted statistical advice dictates that large, highly powered clinical trials generate more reliable evidence than trials with smaller sample sizes. This advice is generally sound: treatment effect estimates from larger trials tend to be more accurate, as witnessed by tighter confidence intervals and reduced publication bias. Consider, then, two clinical trials testing the same treatment that result in the same p values, the trials being identical apart from their sample sizes. Assuming statistical significance, one might at first suspect that the larger trial offers stronger evidence that the treatment in question is truly effective. Yet often precisely the opposite is true. Here, we illustrate and explain this somewhat counterintuitive result and suggest some ramifications for the interpretation and analysis of clinical trial results.
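The arithmetic behind this result can be sketched for a two-arm trial with a known outcome standard deviation: fixing the p value fixes the z statistic, and the treatment effect implied by that z shrinks as the per-arm sample size grows. The numbers below are illustrative, not taken from the paper:

```python
import math

def implied_effect(z: float, n_per_arm: int, sd: float = 1.0) -> float:
    """Mean difference that yields z in a two-arm trial with known SD:
    z = diff / (sd * sqrt(2 / n))  =>  diff = z * sd * sqrt(2 / n)."""
    return z * sd * math.sqrt(2.0 / n_per_arm)

z = 1.96  # identical p value (about 0.05, two-sided) in both trials
small = implied_effect(z, n_per_arm=50)
large = implied_effect(z, n_per_arm=5000)
print(f"small trial implied effect: {small:.3f}")  # 0.392
print(f"large trial implied effect: {large:.3f}")  # 0.039
```

At the same p value, the large trial's estimated effect is a tenth of the small trial's, so a borderline-significant result in a very large trial points to a clinically tiny effect.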


2019 ◽  
Author(s):  
Marshall A. Taylor

Coefficient plots are a popular tool for visualizing regression estimates. Their appeal is that they display confidence intervals around the estimates and are generally centered on zero, so any estimate whose interval crosses zero is statistically non-significant at the alpha-level used to construct the intervals. For models whose statistical significance is determined via randomization-based inference, where there is no standard error or confidence interval for the estimate itself, these plots are less useful. In this paper, I illustrate a variant of the coefficient plot for regression models with p-values constructed using permutation tests. These visualizations plot each estimate's p-value and its associated confidence interval in relation to a specified alpha-level. Such plots can help the analyst interpret and report both the statistical and the substantive significance of their models. Illustrations are provided using a nonprobability sample of activists and participants at a 1962 anti-Communism school.
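As a minimal sketch of the kind of p-value these plots summarize (the data and function are hypothetical illustrations, not Taylor's implementation), a two-sided permutation p-value for a difference in group means can be estimated by repeatedly reshuffling group labels:

```python
import random

def permutation_p_value(x, y, n_perm=10000, seed=0):
    """Two-sided permutation p-value for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment to the two groups
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            extreme += 1
    # Add-one correction keeps the estimate away from an impossible p = 0.
    return (extreme + 1) / (n_perm + 1)

# Hypothetical group outcomes:
p = permutation_p_value([10, 11, 12, 13], [1, 2, 3, 4])
print(f"permutation p = {p:.4f}")
```

Because the p-value is itself a Monte Carlo estimate, it carries simulation uncertainty, which is what motivates plotting an interval around the p-value rather than around the coefficient.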


1988 ◽  
Vol 63 (1) ◽  
pp. 319-331 ◽  
Author(s):  
David Johnstone

A recent book by psychologist M. Oakes (1986) surveys the practice and logical foundations of statistical tests in the social and behavioral sciences. The book is aimed at producers and consumers of statistical research reports in these disciplines, and its objective is a shift in common practice from “significance” tests (however interpreted) to their more complete and informative analogs, confidence intervals. Much is made of the writings of the great English scientist and statistician Sir Ronald Fisher, to whom, most of all, the received theory of statistical tests is due. Oakes misrepresents Fisher's position on points of logic. There is also some overstatement of the case for confidence intervals. More interesting is the author's positive explanation for the widespread acceptance of significance tests among applied researchers, for the logic or scheme of inference is no more settled within theoretical statistics, as instantiated by the current papers of Casella and Berger (1987) and Berger and Sellke (1987) in the Journal of the American Statistical Association. That research workers in applied fields continue to use significance tests routinely may be explained by forces of supply and demand in the market for statistical evidence, where the commodity traded is not so much evidence as “statistical significance.”

