Paper 3: EUROCAT data quality indicators for population-based registries of congenital anomalies

2011 ◽  
Vol 91 (S1) ◽  
pp. S23-S30 ◽  
Author(s):  
Maria Loane ◽  
Helen Dolk ◽  
Ester Garne ◽  
Ruth Greenlees ◽  

1993 ◽  
Vol 9 (4) ◽  
pp. 577-604 ◽  
Author(s):  
Barry L. Johnson ◽  
T. Damstra ◽  
Chris Derosa ◽  
C. Elmer ◽  
M. Gilbert

2020 ◽  
Author(s):  
Carsten Schmidt ◽  
Stephan Struckmann ◽  
Cornelia Enzenbach ◽  
Achim Reineke ◽  
Jürgen Stausberg ◽  
...  

Abstract Background No standards exist for the handling and reporting of data quality in health research. This work introduces a data quality framework for observational health research data collections with supporting software implementations to facilitate harmonized data quality assessments. Methods Developments were guided by the evaluation of an existing data quality framework and literature reviews. Functions for the computation of data quality indicators were written in R. The concept and implementations are illustrated based on data from the population-based Study of Health in Pomerania (SHIP). Results The data quality framework comprises 34 data quality indicators. These target three aspects of data quality: compliance with pre-specified structural and technical requirements (integrity), presence of data values (completeness), and error in the data values (correctness). R functions calculate data quality metrics based on the provided study data and metadata, and R Markdown reports are generated. Guidance on the concept and tools is available through a dedicated website. Conclusions The presented data quality framework is the first of its kind for observational health research data collections that links a formal concept to implementations in R. The framework and tools facilitate harmonized data quality assessments in pursuit of transparent and reproducible research. Application scenarios comprise data quality monitoring while a study is carried out as well as performing an initial data analysis before starting substantive scientific analyses.
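The completeness dimension summarized in this abstract lends itself to a small illustration. The authors' tools are implemented in R; the Python sketch below is only an analogous example, and the variable names and missing-value codes in it are hypothetical, not taken from SHIP or the published implementation.

```python
# Illustrative sketch of a "completeness" data quality indicator:
# the proportion of records carrying a usable value per variable.
# Variable names and missing-value codes are invented for illustration.

MISSING_CODES = {None, "", -999}  # assumed sentinel codes for missing data

def completeness(records, variables):
    """Return, per variable, the share of records with a usable value."""
    result = {}
    for var in variables:
        observed = sum(
            1 for rec in records if rec.get(var) not in MISSING_CODES
        )
        result[var] = observed / len(records) if records else 0.0
    return result

records = [
    {"age": 52, "sbp": 128},
    {"age": -999, "sbp": 134},  # -999 coded as missing
    {"age": 47, "sbp": None},
]
print({k: round(v, 2) for k, v in completeness(records, ["age", "sbp"]).items()})
# {'age': 0.67, 'sbp': 0.67}
```

A report generator in the spirit of the described framework would compute such metrics for every variable listed in the study metadata and render them into a summary document.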


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Carsten Oliver Schmidt ◽  
Stephan Struckmann ◽  
Cornelia Enzenbach ◽  
Achim Reineke ◽  
Jürgen Stausberg ◽  
...  

Abstract Background No standards exist for the handling and reporting of data quality in health research. This work introduces a data quality framework for observational health research data collections with supporting software implementations to facilitate harmonized data quality assessments. Methods Developments were guided by the evaluation of an existing data quality framework and literature reviews. Functions for the computation of data quality indicators were written in R. The concept and implementations are illustrated based on data from the population-based Study of Health in Pomerania (SHIP). Results The data quality framework comprises 34 data quality indicators. These target four aspects of data quality: compliance with pre-specified structural and technical requirements (integrity); presence of data values (completeness); inadmissible or uncertain data values and contradictions (consistency); unexpected distributions and associations (accuracy). R functions calculate data quality metrics based on the provided study data and metadata, and R Markdown reports are generated. Guidance on the concept and tools is available through a dedicated website. Conclusions The presented data quality framework is the first of its kind for observational health research data collections that links a formal concept to implementations in R. The framework and tools facilitate harmonized data quality assessments in pursuit of transparent and reproducible research. Application scenarios comprise data quality monitoring while a study is carried out as well as performing an initial data analysis before starting substantive scientific analyses, but the developments are also of relevance beyond research.
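One of the four dimensions named here, consistency, can be sketched as a check of study data against admissible limits declared in metadata, mirroring the data-plus-metadata design the abstract describes. The referenced implementation is in R; the Python sketch below is an illustration only, and the metadata fields ("min", "max"), variable names, and limits are assumptions.

```python
# Minimal sketch of a "consistency" indicator: count inadmissible values
# by checking study data against limits recorded in separate metadata.
# All field names and limits are hypothetical examples.

metadata = {
    "age": {"min": 18, "max": 100},
    "sbp": {"min": 60, "max": 260},
}

def inadmissible(records, metadata):
    """Count values falling outside the admissible range per variable."""
    flags = {var: 0 for var in metadata}
    for rec in records:
        for var, limits in metadata.items():
            value = rec.get(var)
            if value is not None and not (limits["min"] <= value <= limits["max"]):
                flags[var] += 1
    return flags

records = [{"age": 52, "sbp": 128}, {"age": 17, "sbp": 300}]
print(inadmissible(records, metadata))  # {'age': 1, 'sbp': 1}
```

Keeping the limits in metadata rather than in code is the design choice the abstract emphasizes: the same generic check can then be reused across studies whose metadata differ.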


2020 ◽  
Vol 65 (8) ◽  
pp. 27-38 ◽  
Author(s):  
Iwona Markowicz ◽  
Paweł Baran

In the research carried out to date by the authors of this article, the assessment of the quality of mirror data on the exchange of goods between European Union (EU) countries was based on the value of goods. A similar approach is applied by many researchers. The aim of the research discussed in the article is to assess the quality of data on intra-EU trade based not only on the value, but also on the quantity of goods. The analysis of discrepancies in data on trade between EU countries, with a particular emphasis on Poland, was based on selected research methods known from the literature. Applying both the value-based and the quantity-based approach constitutes the authors' contribution to the development of the research methodology. Data quality indicators were proposed, and data pertaining to the weight of goods were used. Information on trade in goods between EU countries in 2017, obtained from Eurostat's Comext database, was analysed. The research on dynamics also covered the years 2005, 2008, 2011 and 2014. The results of the analysis demonstrated that the total share of exports of goods from Poland to a given EU country differs between data expressed in value (value of goods) and in quantity (weight of goods). Therefore, both approaches should be applied in the study of the quality of mirror data.
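The idea of a mirror-data quality indicator applied to both value and quantity can be sketched as follows. The relative-discrepancy formula below is one common choice and an assumption for illustration, not the authors' exact indicator, and the trade figures are invented.

```python
# Hedged sketch of a mirror-data discrepancy indicator: compare what one
# country declares as an export with what its partner declares as the
# corresponding import, for the same flow measured by value and by weight.
# The formula and all figures are illustrative assumptions.

def discrepancy(declared_export, mirror_import):
    """Relative gap between the two sides' declarations of the same flow."""
    total = declared_export + mirror_import
    return abs(declared_export - mirror_import) / total if total else 0.0

# One hypothetical trade flow, measured by value (EUR) and quantity (kg):
flow = {"value": (1_000_000, 900_000), "weight": (500_000, 480_000)}

for basis, (exp, imp) in flow.items():
    print(basis, round(discrepancy(exp, imp), 3))
# value 0.053
# weight 0.02
```

As the invented figures show, the two bases can yield different discrepancy levels for the same flow, which is the article's argument for assessing mirror-data quality on both value and weight.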


1985 ◽  
Vol 4 (4) ◽  
pp. 305-308 ◽  
Author(s):  
J.K. Baldwin ◽  
B.K. Hoover

The definition of the scope and application of GLPs vs. glps, i.e. formal Good Laboratory Practice regulations as opposed to informal good laboratory practices, is discussed. Current concepts and future trends, such as the 2-tiered approach, data hardness, data quality indicators, and professional ethics, are examined.

