The fundamental principles of reproducibility

Author(s):  
Odd Erik Gundersen

The terminology around reproducibility is confused. In this paper, I take a fundamental view on reproducibility rooted in the scientific method. The scientific method is analysed and characterized in order to develop the terminology required to define reproducibility. Furthermore, the literature on reproducibility and replication is surveyed, and experiments are modelled as tasks and problem-solving methods. Machine learning is used to exemplify the described approach. Based on the analysis, reproducibility is defined, and three degrees of reproducibility as well as four types of reproducibility are specified. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.

Author(s):  
M. S. Krafczyk ◽  
A. Shi ◽  
A. Bhaskar ◽  
D. Marinov ◽  
V. Stodden

We carry out efforts to reproduce computational results for seven published articles and identify barriers to computational reproducibility. We then derive three principles to guide the practice and dissemination of reproducible computational research: (i) Provide transparency regarding how computational results are produced; (ii) When writing and releasing research software, aim for ease of (re-)executability; (iii) Make any code upon which the results rely as deterministic as possible. We then exemplify these three principles with 12 specific guidelines for their implementation in practice. We illustrate the three principles of reproducible research with a series of vignettes from our experimental reproducibility work. We define a novel Reproduction Package, a formalism that specifies a structured way to share computational research artifacts that implements the guidelines generated from our reproduction efforts to allow others to build, reproduce and extend computational science. We make our reproduction efforts in this paper publicly available as exemplar Reproduction Packages. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.
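Principle (iii), determinism, can be sketched in a few lines. The snippet below is a generic illustration, not code from the paper: passing the random seed as an explicit parameter, rather than relying on global time-dependent state, makes a stochastic computation bit-for-bit repeatable across runs.

```python
import random

def noisy_mean(seed, n=1000):
    """Simulate a stochastic computation: the mean of n pseudo-random draws.

    An isolated, explicitly seeded generator makes the result fully
    deterministic; re-running with the same seed reproduces it exactly.
    """
    rng = random.Random(seed)  # local generator, untouched by global state
    return sum(rng.uniform(0, 1) for _ in range(n)) / n

# Two runs with the same seed yield identical results.
assert noisy_mean(seed=42) == noisy_mean(seed=42)
```

The same pattern applies at larger scale: recording the seed alongside the outputs is what lets a reader re-execute the code and obtain the published numbers.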


Author(s):  
D. Ye ◽  
L. Veen ◽  
A. Nikishova ◽  
J. Lakhlili ◽  
W. Edeling ◽  
...  

Uncertainty quantification (UQ) is a key component when using computational models that involve uncertainties, e.g. in decision-making scenarios. In this work, we present uncertainty quantification patterns (UQPs) that are designed to support the analysis of uncertainty in coupled multiscale and multi-domain applications. UQPs provide the basic building blocks to create tailored UQ for multiscale models. The UQPs are implemented as generic templates, which can then be customized and aggregated to create a dedicated UQ procedure for multiscale applications. We present the implementation of the UQPs with the multiscale coupling toolkit Multiscale Coupling Library and Environment 3 (MUSCLE3). Potential speed-up for UQPs has been derived as well. As a proof of concept, two examples of multiscale applications using UQPs are presented. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.
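As a generic illustration of the kind of building block a UQP encapsulates (this sketch is not the MUSCLE3 API), the simplest non-intrusive pattern is Monte Carlo forward propagation: sample the uncertain inputs, run the model as a black box, and summarize the resulting output distribution.

```python
import random
import statistics

def propagate(model, input_dist, n_samples=2000, seed=0):
    """Non-intrusive Monte Carlo forward UQ.

    model:      black-box callable, x -> y (the simulation)
    input_dist: callable drawing one input sample from an rng
    Returns the sample mean and standard deviation of the model output.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    outputs = [model(input_dist(rng)) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy single-scale model: y = x^2 with uncertain input x ~ Normal(1, 0.1).
mean, std = propagate(lambda x: x * x,
                      lambda rng: rng.gauss(1.0, 0.1))
```

Because the model is treated as a black box, such a template can wrap a single submodel or a fully coupled multiscale simulation without changing the pattern itself, which is what makes the templates composable.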


Author(s):  
D. Groen ◽  
H. Arabnejad ◽  
V. Jancauskas ◽  
W. N. Edeling ◽  
F. Jansson ◽  
...  

We present the VECMA toolkit (VECMAtk), a flexible software environment for single and multiscale simulations that introduces directly applicable and reusable procedures for verification and validation (V&V), sensitivity analysis (SA) and uncertainty quantification (UQ). It enables users to verify key aspects of their applications, systematically compare and validate the simulation outputs against observational or benchmark data, and run simulations conveniently on any platform from the desktop to current multi-petascale computers. In this sequel to our paper on VECMAtk presented last year [1], we focus on a range of functional and performance improvements that we have introduced, cover newly introduced components, and present application examples from seven different domains, such as conflict modelling and environmental sciences. We also present several implemented patterns for UQ/SA and V&V, and guide the reader through one example concerning COVID-19 modelling in detail. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.


2018 ◽  
Author(s):  
Brett Buttliere

Over the last decade, there have been many suggestions to improve how scientists answer their questions, but far fewer attempts to improve the questions scientists are asking in the first place. The goal of this paper is to examine and synthesize the evidence on how to ask the best questions possible. First is a brief review of the philosophical and empirical literature on how the best science is done, which implicitly, but not explicitly, touches on the role of psychology and especially cognitive conflict. Then we focus more closely on the psychology of the scientist, finding that scientists are humans engaged in a meaning-making process, and that cognitive conflict is a necessary input for any learning or change in the system. The scientific method is, of course, a specialized meaning-making process. We present evidence for this central role of cognitive conflict in science by examining the most discussed scientific papers between 2013 and 2017, which are, in general, controversial and about big problems (e.g., whether vaccines cause autism, how often doctors kill us with their mistakes). Toward the end we discuss the role of science in society, suggesting that science itself is an uncertainty-reducing and problem-solving enterprise. On this basis we encourage scientists to take riskier stances on bigger topics, for the good of themselves and society generally.


2014 ◽  
Vol 14 (16) ◽  
pp. 1913-1922 ◽  
Author(s):  
Dimitar Dobchev ◽  
Girinath Pillai ◽  
Mati Karelson

Molecules ◽  
2021 ◽  
Vol 26 (9) ◽  
pp. 2505
Author(s):  
Raheem Remtulla ◽  
Sanjoy Kumar Das ◽  
Leonard A. Levin

Phosphine-borane complexes are novel chemical entities with preclinical efficacy in neuronal and ophthalmic disease models. In vitro and in vivo studies showed that the metabolites of these compounds are capable of cleaving disulfide bonds implicated in the downstream effects of axonal injury. A difficulty in studying these drugs with standard in silico methods is that most computational tools are not designed for borane-containing compounds. Using in silico and machine learning methodologies, the absorption-distribution properties of these unique compounds were assessed. Features examined with in silico methods included cellular permeability, octanol-water partition coefficient, blood-brain barrier permeability, oral absorption and serum protein binding. The resultant neural networks demonstrated an appropriate level of accuracy and were comparable to existing in silico methodologies. Specifically, they were able to reliably predict pharmacokinetic features of known boron-containing compounds. These methods predicted that phosphine-borane compounds and their metabolites meet the necessary pharmacokinetic features for orally active drug candidates. This study showed that combining standard in silico predictive models and machine learning with neural networks is effective in predicting pharmacokinetic features of novel boron-containing compounds as neuroprotective drugs.
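The oral-activity criterion mentioned above is commonly screened with Lipinski's rule of five, which flags a compound as likely orally active if it violates at most one of four descriptor thresholds. The sketch below illustrates that standard rule only; the descriptor values in the usage line are hypothetical, not data from this study.

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: a candidate is considered likely to be
    orally active if it violates at most one of the four criteria."""
    violations = sum([
        mol_weight > 500,   # molecular weight, Da
        logp > 5,           # octanol-water partition coefficient
        h_donors > 5,       # hydrogen-bond donors
        h_acceptors > 10,   # hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptors for an illustrative small molecule.
assert lipinski_pass(mol_weight=320.4, logp=2.1, h_donors=2, h_acceptors=4)
```

Machine-learning models of the kind described in the abstract predict descriptors such as logP and permeability from structure; rule-based filters like this one then turn those predictions into a pass/fail screen.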

