Evaluating FAIR-Compliance Through an Objective, Automated, Community-Governed Framework

2018 ◽  
Author(s):  
Mark D Wilkinson ◽  
Michel Dumontier ◽  
Susanna-Assunta Sansone ◽  
Luiz Olavo Bonino da Silva Santos ◽  
Mario Prieto ◽  
...  

Abstract With the increased adoption of the FAIR Principles, a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers, are seeking ways to transparently evaluate resource FAIRness. We describe the FAIR Evaluator, a software infrastructure to register and execute tests of compliance with the recently published FAIR Metrics. The Evaluator enables digital resources to be assessed objectively and transparently. We illustrate its application to three widely used generalist repositories - Dataverse, Dryad, and Zenodo - and report their feedback. Evaluations allow communities to select relevant Metric subsets to deliver FAIRness measurements in diverse and specialized applications. Evaluations are executed in a semi-automated manner through Web forms filled in by a user, or through a JSON-based API. A comparison of manual vs. automated evaluation reveals that automated evaluations are generally stricter, resulting in lower, though more accurate, FAIRness scores. Finally, we highlight the need for enhanced infrastructure, such as standards registries like FAIRsharing, as well as additional community involvement in domain-specific data infrastructure creation.
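The abstract notes that evaluations can be triggered through a JSON-based API as well as through Web forms. As a minimal sketch of what such a machine-readable request might look like, the snippet below assembles a JSON payload; the field names, the metric-collection label, and the DOI are illustrative assumptions, not the Evaluator's actual schema.

```python
import json

def build_evaluation_request(resource_guid: str, metric_collection: str) -> str:
    """Assemble a JSON payload of the kind a FAIRness-evaluation API
    might accept. Field names here are illustrative assumptions."""
    return json.dumps({
        "resource": resource_guid,          # GUID of the resource to evaluate
        "collection": metric_collection,    # community-selected Metric subset
        "executor": "anonymous",            # who requested the evaluation
    })

payload = build_evaluation_request(
    "https://doi.org/10.5281/zenodo.1234567",  # hypothetical deposit DOI
    "generalist-repository-metrics",           # hypothetical metric subset
)
print(payload)
```

The point of the sketch is the shape of the exchange: a community names a Metric subset, a resource is identified by a resolvable GUID, and the request is plain JSON that any client can generate.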

2018 ◽  
Vol 44 (6) ◽  
pp. 785-801
Author(s):  
Hong Huang

This article aims to understand the views of genomic scientists regarding the data quality assurances associated with semiotics and data–information–knowledge (DIK). The communication of signs generated in genomic curation work was found at different semantic levels of DIK, correlating specific data quality dimensions with their respective skills. Syntactic data quality dimensions were ranked highest among all semiotic data quality dimensions, indicating that scientists devote great effort to data wrangling activities in genome curation work. Semantic- and pragmatic-related sign communications concerned meaningful interpretation, and thus required additional adaptive and interpretative skills to deal with data quality issues. This expanded concept of 'curation' as sign/semiotic had not previously been explored from the practical through the theoretical perspective. The findings inform policy makers and practitioners in developing frameworks and cyberinfrastructure that support funding agencies' 'Big Data to Knowledge' initiatives and advocacy. They can also help plan data quality assurance policies and thus maximise the efficiency of genomic data management. Our results strongly support the relevance of data quality skills communication to data quality assurance in genome curation activities.


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Mark D. Wilkinson ◽  
Michel Dumontier ◽  
Susanna-Assunta Sansone ◽  
Luiz Olavo Bonino da Silva Santos ◽  
Mario Prieto ◽  
...  

Abstract Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain relevant community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators – community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests – small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine “sees” when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how this translates to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.
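The framework described above has three parts: Maturity Indicators, Compliance Tests that check a resource against individual indicators, and an Evaluator that assembles and applies a community-relevant set of tests. The toy sketch below mirrors that division of labour; every class, function, and the sample DOI are illustrative assumptions, not the published implementation.

```python
# Two toy Compliance Tests, each checking one hypothetical Maturity Indicator.

def mi_has_persistent_identifier(resource_url: str) -> bool:
    """Does the resource identifier use a recognised persistent-ID scheme?"""
    return resource_url.startswith(("https://doi.org/", "http://hdl.handle.net/"))

def mi_uses_https(resource_url: str) -> bool:
    """Is the identifier resolvable over HTTPS?"""
    return resource_url.startswith("https://")

class Evaluator:
    """Registers a community-relevant set of Compliance Tests and applies
    them to a digital resource, producing a simple pass/fail report."""
    def __init__(self):
        self.tests = []

    def register(self, test):
        self.tests.append(test)

    def evaluate(self, resource_url: str) -> dict:
        return {t.__name__: t(resource_url) for t in self.tests}

ev = Evaluator()
ev.register(mi_has_persistent_identifier)
ev.register(mi_uses_https)
report = ev.evaluate("https://doi.org/10.5281/zenodo.1234567")  # hypothetical DOI
print(report)
```

The design point the sketch captures is the separation of concerns: indicators are specifications, tests are small independent executables, and the Evaluator only composes and reports, which is what lets communities swap in their own domain-relevant test sets.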



Anticorruption in History is the first major collection of case studies on how past societies and polities, in and beyond Europe, defined legitimate power in terms of fighting corruption and designed specific mechanisms to pursue that agenda. It is a timely book: corruption is widely seen today as a major problem, undermining trust in government, financial institutions, economic efficiency, the principle of equality before the law and human wellbeing in general. Corruption, in short, is a major hurdle on the “path to Denmark”—a feted blueprint for stable and successful state-building. The resonance of this view explains why efforts to promote anticorruption policies have proliferated in recent years. But while the subjects of corruption and anticorruption have captured the attention of politicians, scholars, NGOs and the global media, scant attention has been paid to the link between corruption and the evolution of anticorruption policies across time and place. Such a historical approach could help explain major moments of change in the past as well as reasons for the success and failure of specific anticorruption policies and their relation to a country’s image (of itself or as construed from outside) as being more or less corrupt. It is precisely this scholarly lacuna that the present volume intends to begin to fill. A wide range of historical contexts are addressed, ranging from the ancient to the modern period, with specific insights for policy makers offered throughout.


Agronomy ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1566
Author(s):  
Ernesto Mesa-Vázquez ◽  
Juan F. Velasco-Muñoz ◽  
José A. Aznar-Sánchez ◽  
Belén López-Felices

Over the last two decades, experimental economics has been gaining relevance in the research of a wide range of issues related to agriculture. In turn, agricultural activity provides an excellent field of study within which to validate the instruments employed by experimental economics. The aim of this study is to analyze the dynamics of the research on the application of experimental economics in agriculture on a global level. To this end, a bibliometric literature review has been carried out covering the period from 2000 to 2020. The main results show that there has been a growing use of experimental economics methods in the research on agriculture, particularly over the last five years. This evolution is evident in the different indicators analyzed and is reflected in the greater scientific production and number of actors involved. The most relevant topics within the research on experimental economics in agriculture focus on the farmer, the markets, the consumer, environmental policy, and public goods. These results can be useful for policy makers and researchers interested in this line of research.


Author(s):  
Frederick van der Ploeg

Abstract Economists have adopted the Pigouvian approach to climate policy, which sets the carbon price to the social cost of carbon. We adjust this carbon price for macroeconomic uncertainty and disasters by deriving the risk-adjusted discount rate. We highlight ethics- versus market-based calibrations and discuss the effects of a falling term structure of the discount rate. Given the wide range of estimates used for marginal damages and the discount rate, it is unsurprising that negotiators and policy makers have rejected the Pigouvian approach and adopted a more pragmatic approach based on a temperature cap. The corresponding cap on cumulative emissions is lower if risk tolerance and temperature sensitivity are more uncertain. The carbon price then grows much faster than under the Pigouvian approach, and we discuss how this rate of growth is adjusted by economic and abatement cost risks. We then analyse how policy uncertainty and technological breakthroughs can lead to the risk of stranded assets. Finally, we discuss various obstacles to successful carbon pricing.


2020 ◽  
Author(s):  
Angela Poh

The view that China has become increasingly assertive under President Xi Jinping is now a common trope in academic and media discourse. However, until the end of Xi Jinping’s first term in March 2018, China had been relatively restrained in its use of coercive economic measures. This is puzzling given the conventional belief among scholars and practitioners that sanctions are a middle ground between diplomatic and military/paramilitary action. Using a wide range of methods and data — including in-depth interviews with 76 current and former politicians, policy-makers, diplomats, and commercial actors across 12 countries and 16 cities — Sanctions with Chinese Characteristics: Rhetoric and Restraint in China’s Diplomacy examines the ways in which China had employed economic sanctions to further its political objectives, and the factors explaining China’s behaviour. This book provides a systematic investigation into the ways in which Chinese decision-makers approached sanctions both at the United Nations Security Council and unilaterally, and shows how China’s longstanding sanctions rhetoric has had a constraining effect on its behaviour, resulting in its inability to employ sanctions in complete alignment with its immediate interests.


2018 ◽  
Vol 41 (1) ◽  
pp. 125-144 ◽  
Author(s):  
Rebecca Campbell ◽  
Rachael Goodman-Williams ◽  
Hannah Feeney ◽  
Giannina Fehler-Cabral

The purpose of this study was to develop triangulation coding methods for a large-scale action research and evaluation project and to examine how practitioners and policy makers interpreted both convergent and divergent data. We created a color-coded system that evaluated the extent of triangulation across methodologies (qualitative and quantitative), data collection methods (observations, interviews, and archival records), and stakeholder groups (five distinct disciplines/organizations). Triangulation was assessed for both specific data points (e.g., a piece of historical/contextual information or qualitative theme) and substantive findings that emanated from further analysis of those data points (e.g., a statistical model or a mechanistic qualitative assertion that links themes). We present five case study examples that explore the complexities of interpreting triangulation data and determining whether data are deemed credible and actionable if not convergent.
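The study describes a colour-coded system that grades the extent of triangulation across methodologies, data collection methods, and stakeholder groups. The sketch below illustrates the general idea of such a coding rule; the thresholds, colour labels, and stakeholder names are assumptions for illustration, not the authors' actual scheme.

```python
# Hypothetical triangulation colour-coding rule: a data point's code
# depends on how many independent data collection methods and stakeholder
# groups support it. Thresholds and labels are illustrative assumptions.

def triangulation_code(methods: set, stakeholders: set) -> str:
    """Return a colour code for a data point or substantive finding."""
    if len(methods) >= 3 and len(stakeholders) >= 3:
        return "green"    # convergent across methods and stakeholder groups
    if len(methods) >= 2 or len(stakeholders) >= 2:
        return "yellow"   # partially triangulated
    return "red"          # single-source, not triangulated

code = triangulation_code(
    methods={"observations", "interviews", "archival records"},
    stakeholders={"group A", "group B", "group C"},  # placeholder groups
)
print(code)
```

A rule like this makes the evaluation team's judgment auditable: the same finding always maps to the same colour, and the divergent ("yellow"/"red") cases the study discusses are flagged mechanically rather than ad hoc.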


2020 ◽  
Author(s):  
Geoffrey Schau ◽  
Erik Burlingame ◽  
Young Hwan Chang

Abstract Deep learning systems have emerged as powerful mechanisms for learning domain translation models. However, in many cases, complete information in one domain is assumed to be necessary for sufficient cross-domain prediction. In this work, we motivate a formal justification for domain-specific information separation in a simple linear case and illustrate that a self-supervised approach enables domain translation between data domains while filtering out domain-specific data features. We introduce a novel approach to identify domain-specific information from sets of unpaired measurements in complementary data domains by considering a deep learning cross-domain autoencoder architecture designed to learn shared latent representations of data while enabling domain translation. We introduce an orthogonal gate block designed to enforce orthogonality of input feature sets by explicitly removing non-sharable information specific to each domain and illustrate separability of domain-specific information on a toy dataset.
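The orthogonal gate enforces orthogonality between what the model treats as shared and what it treats as domain-specific. A common way to express such a constraint is a penalty on the cross-correlation between the two latent codes; the toy computation below demonstrates that idea. The penalty form (squared Frobenius norm) and all variable names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def orthogonality_penalty(shared: np.ndarray, private: np.ndarray) -> float:
    """Squared Frobenius norm of the cross-correlation between a shared
    latent code and a domain-specific latent code (both batch x dim).
    Zero when every shared feature is orthogonal to every private one."""
    cross = shared.T @ private
    return float(np.sum(cross ** 2))

# Toy codes chosen so the cross-correlation vanishes exactly.
shared  = np.array([[1., 0.],
                    [0., 1.],
                    [0., 0.],
                    [0., 0.]])   # shared code
private = np.array([[0., 0.],
                    [0., 0.],
                    [1., 0.],
                    [0., 1.]])   # domain-specific code, orthogonal to shared

print(orthogonality_penalty(shared, private))  # 0.0: codes are separated
print(orthogonality_penalty(shared, shared))   # 2.0: codes fully overlap
```

In training, a term like this would be added to the reconstruction and translation losses, so gradient descent pushes domain-private information out of the shared representation rather than relying on the architecture alone.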

