Reporting and communication of randomisation procedures is suboptimal in veterinary trials

2017 ◽  
Vol 181 (8) ◽  
pp. 195-195 ◽  
Author(s):  
N. Di Girolamo ◽  
M. A. Giuffrida ◽  
A. L. Winter ◽  
R. Meursinge Reynders

To evaluate randomisation mechanisms in the veterinary literature, all trials described as ‘randomised’ were extracted from five leading veterinary journals for the year 2013. Three blinded investigators evaluated (1) whether the random sequence generation was actually non-random, and whether (2) the method of randomisation (CONSORT item 8a) and (3) the type of randomisation (CONSORT item 8b) were reported. Trialists were contacted via email to establish (1) their willingness to respond to questions on randomisation procedures, and (2) whether reporting of randomisation improved following a suggestion to use the CONSORT 2010 guideline. Seven per cent (95 per cent CI 2 to 12 per cent; 8/114) of the trials described as ‘randomised’ explicitly used methods that are considered non-random. Almost half of the trials (49 per cent (40 to 59 per cent); 52/106) did not report any mechanism of randomisation. Only 13 trials (12.3 per cent (6 to 19 per cent); 13/106) reported both items. Of the 114 trialists contacted, 39 (34.2 per cent) were willing to respond to further questions on randomisation mechanisms, 4 (3.5 per cent) were unwilling, and 71 (62.3 per cent) did not respond. Email correspondence resulted in a mean clarification of 0.7 items (95 per cent CI 0.4 to 1.0) for the 15 trials whose trialists replied. Improved adherence to CONSORT guidelines and improved communication with trialists are imperative to increase the quality of published evidence in veterinary medicine and to reduce research waste.
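For context, the interval reported for the 8/114 non-randomly sequenced trials is consistent with a simple normal-approximation (Wald) confidence interval for a proportion; the abstract does not state which method was used, so this is a sketch, not the authors' calculation. The function name `wald_ci` is ours:

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    # Clamp to [0, 1] so small samples near the boundary stay valid proportions
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

p, lo, hi = wald_ci(8, 114)  # trials with explicitly non-random sequence generation
print(f"{p:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # → 7% (95% CI 2% to 12%)
```

Running the same function on 52/106 reproduces the "49 per cent (40 to 59 per cent)" figure as well.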

Medicina ◽  
2019 ◽  
Vol 55 (7) ◽  
pp. 372
Author(s):  
Roxana-Denisa Capraş ◽  
Andrada Elena Urda-Cîmpean ◽  
Sorana D. Bolboacă

Background and objectives: Informed decision-making requires the ability to identify and integrate high-quality scientific evidence into daily practice. We aimed to assess whether randomized controlled trials (RCTs) on endometriosis therapy follow the methodological criteria corresponding to the RCTs’ specific level in the hierarchy of evidence in sufficient detail to allow reproduction and replication of the study. Materials and Methods: Using the keywords “therapy”, “endometriosis”, and “efficacy”, three bibliographic databases were searched for English-language scientific articles published from 1 January 2008 to 3 March 2018. Only RCTs were evaluated, in terms of whether they provided the appropriate level of scientific evidence, equivalent to level 1, degree 1b in the hierarchy of evidence. A list of criteria to ensure study replication and reproduction, based on the CONSORT guideline and MECIR standards, was developed and used to evaluate the RCTs’ methodological soundness, and scores were granted. Three types of bias, namely selection bias (random sequence generation and allocation concealment), detection bias (blinding of outcome assessment), and attrition bias (incomplete outcome data), were also evaluated. Results: We found 387 articles on endometriosis therapy, of which 38 were RCTs: 30 double-blinded RCTs and 8 open-label RCTs. No article achieved the maximum score according to the evaluated methodological criteria. Even though 73.3% of the double-blinded RCTs had a clear title, abstract, introduction, and objectives, only 13.3% provided precise information regarding experimental design and randomization and also showed a low risk of bias. The blinding method was poorly reported in 43.3% of the double-blinded RCTs, while allocation concealment and random sequence generation were inadequate in 33.3% of them.
Conclusions: None of the evaluated RCTs met all the methodological criteria; none had a low risk of bias in all domains while also providing sufficient detail on methods and randomization to allow for reproduction and replication of the study. Consequently, the appropriate level of scientific evidence (level 1, degree 1b) could not be granted. Note that this study evaluated the quality of reporting in RCTs on endometriosis therapy, not the quality of how the studies were performed.


Author(s):  
S.E. Nyssanbayeva ◽  
N.A. Kapalova ◽  
A. Haumen

Cryptographic technologies that have become widespread in the world are inextricably linked to the issues of secure storage, use of keys, and key exchange. Often, insecure key management reduces the quality of even exceptionally good systems, since the security of the algorithm is concentrated mainly in the key. This paper proposes a key management model in cryptographic systems. The model is based on creating a unified key database for all users. This database is filled with keys of a certain length, which are generated using a pseudo-random sequence generation algorithm.
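The abstract does not specify the pseudo-random sequence generation algorithm or the key length, so the following is only a minimal sketch of the described model (a unified key database pre-filled with generated keys), using Python's `secrets` CSPRNG and SQLite as stand-ins; `fill_key_database` and the 32-byte length are our assumptions:

```python
import secrets
import sqlite3

KEY_LENGTH_BYTES = 32  # assumed length; the paper's key length is not given

def fill_key_database(db_path, n_keys):
    """Populate a unified key table with keys from a CSPRNG.

    secrets.token_bytes stands in for the paper's (unspecified)
    pseudo-random sequence generation algorithm.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS keys (id INTEGER PRIMARY KEY, key BLOB NOT NULL)"
    )
    rows = [(secrets.token_bytes(KEY_LENGTH_BYTES),) for _ in range(n_keys)]
    conn.executemany("INSERT INTO keys (key) VALUES (?)", rows)
    conn.commit()
    return conn

conn = fill_key_database(":memory:", 10)
count, = conn.execute("SELECT COUNT(*) FROM keys").fetchone()
print(count)  # → 10
```

In a real deployment the database itself would need access control and encryption at rest; as the abstract notes, a weak key management layer undermines an otherwise strong cryptosystem.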


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1649 ◽  
Author(s):  
Nicola Di Girolamo ◽  
Reint Meursinge Reynders

The validity of studies that assess the effectiveness of an intervention (EoI) depends on variables such as the type of study design, the quality of their methodology, and the participants enrolled. Five leading veterinary journals and 5 leading human medical journals were hand-searched for EoI studies for the year 2013. We assessed (1) the prevalence of randomized controlled trials (RCTs) among EoI studies, (2) the type of participants enrolled, and (3) the methodological quality of the selected studies. Of 1707 eligible articles, 590 were EoI articles and 435 were RCTs. Random allocation to the intervention was performed in 52% (114/219; 95%CI:45.2–58.8%) of veterinary EoI articles, against 87% (321/371; 82.5–89.7%) of human EoI articles (adjusted OR:9.2; 3.4–24.8). Veterinary RCTs were smaller (median: 26 animals versus 465 humans) and less likely to enroll real patients, compared with human RCTs (OR:331; 45–2441). Only 2% of the veterinary RCTs, versus 77% of the human RCTs, reported power calculations, primary outcomes, random sequence generation, allocation concealment, and estimation methods. Currently, the internal and external validity of veterinary EoI studies is limited compared with that of human medical ones. To address these issues, veterinary interventional research needs to improve its methodology, increase the number of published RCTs, and enroll real clinical patients.
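As a sanity check on the reported counts, a crude (unadjusted) odds ratio can be computed directly from 321/371 human versus 114/219 veterinary randomized articles; it comes out near 5.9 rather than the 9.2 in the abstract, which is expected since the latter is an adjusted OR. This sketch is ours, not the authors' analysis:

```python
def odds_ratio(a_events, a_total, b_events, b_total):
    """Crude odds ratio: odds of the event in group A over group B."""
    odds_a = a_events / (a_total - a_events)
    odds_b = b_events / (b_total - b_events)
    return odds_a / odds_b

# Random allocation: human EoI articles 321/371 vs veterinary 114/219
print(round(odds_ratio(321, 371, 114, 219), 1))  # → 5.9
```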


PLoS Biology ◽  
2021 ◽  
Vol 19 (4) ◽  
pp. e3001162
Author(s):  
Christiaan H. Vinkers ◽  
Herm J. Lamberink ◽  
Joeri K. Tijdink ◽  
Pauline Heus ◽  
Lex Bouter ◽  
...  

Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention to responsible research practices and the implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lowered the risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information on 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs substantially increased over 4 decades, accompanied by increases in the number of authors (5.2 to 7.8) and institutions (2.9 to 4.8) per trial. The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor (>10), the risk of bias was consistently lower, with higher levels of RCT registration and use of the CONSORT Statement.
Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Ognjen Barcot ◽  
Matija Boric ◽  
Tina Poklepovic Pericic ◽  
Marija Cavar ◽  
Svjetlana Dosenovic ◽  
...  
