Quality assessment for register-based statistics - Results for the Austrian census 2011

2016 ◽  
Vol 45 (2) ◽  
pp. 3-14 ◽  
Author(s):  
Eva-Maria Asamer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

In 2011, Statistics Austria carried out its first register-based census. The use of administrative data for statistical purposes brings various advantages, such as a reduced burden on respondents and lower costs for the NSI. However, new challenges arise, such as the quality assessment of this kind of data. Statistics Austria therefore developed a comprehensive standardized framework for evaluating the data quality of register-based statistics. In this paper, we present the principles of the quality framework and detailed results from the quality evaluation of the 2011 Austrian census. For each attribute in the census, a quality measure is derived from four hyperdimensions. The first three hyperdimensions focus on the documentation of the data, the usability of the records, and the comparison of the data to an external source. The fourth hyperdimension assesses the quality of the imputations. In the framework, all the available information on each attribute can be combined into one final quality indicator. This procedure makes it possible to track changes in quality during data processing and to compare the quality of different census generations.
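The idea of combining per-hyperdimension scores into one final quality indicator can be sketched as a weighted average. This is an illustrative sketch only: the weights, score values, and the choice of a weighted mean are assumptions, not Statistics Austria's actual aggregation rule.

```python
# Illustrative sketch: combining hyperdimension scores (each in 0..1) for one
# census attribute into a single quality indicator via a weighted average.
# Weights and scores are invented for illustration.

def combine_quality(scores, weights):
    """Weighted average of hyperdimension scores for one attribute."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical scores for the four hyperdimensions: documentation,
# usability, external comparison, imputation quality.
scores = [0.95, 0.88, 0.91, 0.80]
weights = [1, 1, 1, 1]
print(round(combine_quality(scores, weights), 3))  # 0.885
```

Because the aggregation is a simple function of the attribute-level scores, it can be recomputed at each processing stage, which is what enables tracking quality changes during data processing.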

2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches.
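A classification rate as a measure of imputation accuracy can be sketched as the share of artificially deleted categorical values that an imputation method restores to their originally observed category. The attribute values below are invented; the article derives several computational approaches, of which this is only the simplest.

```python
# Illustrative sketch (not the authors' exact computation): classification
# rate = share of imputed values that match the originally observed ones.

def classification_rate(true_values, imputed_values):
    """Fraction of imputations that hit the original category."""
    assert len(true_values) == len(imputed_values)
    hits = sum(t == i for t, i in zip(true_values, imputed_values))
    return hits / len(true_values)

true_vals    = ["employed", "employed", "retired", "student", "employed"]
imputed_vals = ["employed", "retired",  "retired", "student", "employed"]
print(classification_rate(true_vals, imputed_vals))  # 0.8
```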


2013 ◽  
Vol 04 (01) ◽  
pp. 1-11 ◽  
Author(s):  
A. Shachak ◽  
M. Laberge

Summary
Objective: The objectives of this study are to 1) create a quality assessment tool for socio-demographic data aligned with the needs of Community Health Centres (CHCs) and based on the data quality framework of the Canadian Institute for Health Information (CIHI), and 2) test the feasibility of the tool in CHCs.
Methods: The tool was developed based on both theoretical and practical knowledge. A review of the literature was performed to identify data quality frameworks and dimensions that could be employed. In addition, informal discussions with Community Health Centre staff members holding various positions were conducted and a team of subject matter experts was established. This approach supported the alignment between the tool (i.e., the indicators developed, the rating scale, and the weighting system) and the setting for which it was designed. The tool was pilot tested in five CHCs across Ontario.
Results: The decision to focus on socio-demographic data was based on findings from the discussions with staff members. The team established nine principles for the development of the tool, including the use of computer software, whenever possible, to query the data and ensure consistency of the measurement. Data quality scores ranged from 45 to 74 on a scale of 0 (lowest quality) to 100 (highest quality), with one CHC unable to run all of the queries. The feedback from staff was positive and supports the feasibility of the tool as an application of the CIHI data quality framework in a local setting.
Conclusion: Pilot test results demonstrate the feasibility of the tool and the applicability of the CIHI framework as a basis for developing tools for data quality assessment in health care organizations.
Citation: Laberge M, Shachak A. Developing a tool to assess the quality of socio-demographic data in community health centres. Appl Clin Inf 2012; 4:1–11. http://dx.doi.org/10.4338/ACI-2012-10-CR-0041
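The rating-and-weighting idea behind a 0-to-100 quality score can be sketched as follows. The indicator names, rating scale, and weights are invented for illustration; the actual tool defines its own indicators and weighting system.

```python
# Hypothetical sketch of a weighted quality score: each indicator gets a
# rating on a fixed scale, ratings are weighted, and the total is scaled
# to 0..100. All names and numbers below are illustrative.

def quality_score(ratings, weights, max_rating=5):
    """Weighted ratings rescaled to a 0..100 quality score."""
    weighted = sum(r * w for r, w in zip(ratings, weights))
    return 100 * weighted / (max_rating * sum(weights))

# e.g. completeness, accuracy, timeliness, consistency (invented)
ratings = [4, 3, 5, 2]
weights = [2, 2, 1, 1]
print(round(quality_score(ratings, weights)))  # 70
```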


Author(s):  
Syed Mustafa Ali ◽  
Farah Naureen ◽  
Arif Noor ◽  
Maged Kamel N. Boulos ◽  
Javariya Aamir ◽  
...  

Background: Increasingly, healthcare organizations are using technology for the efficient management of data. The aim of this study was to compare the data quality of digital records with that of the corresponding paper-based records using a data quality assessment framework.
Methodology: We conducted a desk review of paper-based and digital records over the study period from April 2016 to July 2016 at six enrolled TB clinics. We entered all data fields of the patient treatment (TB01) card into a spreadsheet-based template to undertake a field-to-field comparison of the fields shared between the TB01 card and the digital data.
Findings: A total of 117 TB01 cards were prepared at the six enrolled sites, but only 50% of the records (n=59 out of 117 TB01 cards) were digitized. There were 1,239 comparable data fields, of which 65% (n=803) matched correctly between paper-based and digital records, while 35% (n=436) had anomalies in either the paper-based or the digital records. On average there were 1.9 data quality issues per digital patient record, compared with 2.1 issues per paper-based record. Based on the analysis of valid data quality issues, there were more data quality issues in paper-based records (n=123) than in digital records (n=110).
Conclusion: There were fewer data quality issues in digital records than in the corresponding paper-based records. Greater use of mobile data capture and continued use of the data quality assessment framework can deliver more meaningful information for decision making.
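The field-to-field comparison can be sketched as counting matching and anomalous fields between a paper-based and a digital version of the same record. The field names and values below are invented; the TB01 card defines the real shared fields.

```python
# Hypothetical sketch of the field-to-field comparison between a paper-based
# and a digital record. Field names and values are illustrative only.

def compare_records(paper, digital, shared_fields):
    """Return (matched, anomalous) counts over the shared fields."""
    matched = sum(paper.get(f) == digital.get(f) for f in shared_fields)
    return matched, len(shared_fields) - matched

paper   = {"name": "A. Khan", "age": 34, "regimen": "2HRZE/4HR"}
digital = {"name": "A. Khan", "age": 43, "regimen": "2HRZE/4HR"}
matched, anomalies = compare_records(paper, digital, ["name", "age", "regimen"])
print(matched, anomalies)  # 2 1
```

Summing these counts over all digitized records gives aggregate figures of the kind reported in the findings (matched fields vs. fields with anomalies).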


2014 ◽  
Vol 693 ◽  
pp. 261-266 ◽  
Author(s):  
Jolanta B. Krolczyk ◽  
Marek Tukiendorf ◽  
Rafał Dawid

The paper presents research on the quality of a thirteen-component granular mixture. The changes in the quality of the mixture after reducing the mixing time from the standard 30 minutes to 25, 20, and 15 minutes were analyzed. The research was conducted under industrial conditions using a vertical mixer with a worm agitator and a two-ton charge. The results were presented graphically as changes in the concentration of the component shares at mixing times of 30, 25, 20, and 15 minutes, and the results were compared. A quality assessment of the thirteen-component mixture using the residual sum of squares for the four mixing times was carried out.
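The residual-sum-of-squares criterion can be sketched as the sum of squared deviations of the measured component shares from their target shares, with a lower value indicating a more homogeneous mixture. The shares below are invented (and use five components rather than thirteen, for brevity).

```python
# Illustrative sketch (values invented): residual sum of squares between
# measured component shares in a sample and the target shares. Lower RSS
# means a more homogeneous mixture.

def residual_sum_of_squares(measured_shares, target_shares):
    return sum((m - t) ** 2 for m, t in zip(measured_shares, target_shares))

target   = [0.30, 0.25, 0.20, 0.15, 0.10]   # nominal component shares
after_15 = [0.34, 0.22, 0.18, 0.16, 0.10]   # hypothetical sample, 15 min
after_30 = [0.31, 0.24, 0.20, 0.15, 0.10]   # hypothetical sample, 30 min

# A longer mixing time should yield a smaller RSS (better homogeneity).
print(residual_sum_of_squares(after_15, target) >
      residual_sum_of_squares(after_30, target))  # True
```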


2019 ◽  
Author(s):  
Pavankumar Mulgund ◽  
Raj Sharman ◽  
Priya Anand ◽  
Shashank Shekhar ◽  
Priya Karadi

BACKGROUND In recent years, online physician-rating websites have become prominent and exert considerable influence on patients’ decisions. However, the quality of these decisions depends on the quality of the data that these systems collect. Thus, there is a need to examine the various data quality issues affecting physician-rating websites. OBJECTIVE This study’s objective was to identify and categorize the data quality issues afflicting physician-rating websites by reviewing the literature on online patient-reported physician ratings and reviews. METHODS We performed a systematic literature search in the ACM Digital Library, EBSCO, Springer, PubMed, and Google Scholar. The search was limited to quantitative, qualitative, and mixed-method papers published in English from 2001 to 2020. RESULTS A total of 423 articles were screened. From these, 49 papers describing 18 unique data quality issues afflicting physician-rating websites were included. Using a data quality framework, we classified these issues into four categories: intrinsic, contextual, representational, and accessible. Among the papers, 53% (26/49) reported intrinsic data quality errors, 61% (30/49) highlighted contextual data quality issues, 8% (4/49) discussed representational data quality issues, and 27% (13/49) emphasized accessibility data quality issues. More than half of the papers discussed multiple categories of data quality issues. CONCLUSIONS The results from this review demonstrate the presence of a range of data quality issues. While intrinsic and contextual factors have been well researched, accessibility and representational issues warrant more attention from researchers as well as practitioners. In particular, representational factors, such as the impact of inline advertisements and the positioning of positive reviews on the first few pages, are usually deliberate and result from the business model of physician-rating websites. The impact of these factors on data quality has not been addressed adequately and requires further investigation.


Author(s):  
Catherine Eastwood ◽  
Keith Denny ◽  
Maureen Kelly ◽  
Hude Quan

Theme: Data and Linkage Quality
Objectives:
To define health data quality from clinical, data science, and health system perspectives
To describe some of the international best practices related to quality and how they are being applied to Canada’s administrative health data
To compare methods for health data quality assessment and improvement in Canada (automated logical checks, chart quality indicators, reabstraction studies, coding manager perspectives)
To highlight how data linkage can be used to provide new insights into the quality of original data sources
To highlight current international initiatives for improving coded data quality, including results from current ICD-11 field trials
Dr. Keith Denny: Director of Clinical Data Standards and Quality, Canadian Institute for Health Information (CIHI), and Adjunct Research Professor, Carleton University, Ottawa, ON. He provides leadership for CIHI’s information quality initiatives and for the development and application of clinical classifications and terminology standards.
Maureen Kelly: Manager of Information Quality at CIHI, Ottawa, ON. She leads CIHI’s corporate quality program, which is focused on enhancing the quality of CIHI’s data sources and information products and on fostering CIHI’s quality culture.
Dr. Cathy Eastwood: Scientific Manager and Associate Director of the Alberta SPOR Methods & Development Platform, Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB. She has expertise in clinical data collection, evaluation of local and systemic data quality issues, and disease classification coding with ICD-10 and ICD-11.
Dr. Hude Quan: Professor, Community Health Sciences, Cumming School of Medicine, University of Calgary; Director of the Alberta SPOR Methods Platform; Co-Chair of Hypertension Canada; and Co-Chair of the Person to Population Health Collaborative of the Libin Cardiovascular Institute in Calgary, AB. He has expertise in assessing, validating, and linking administrative data sources for data science research, including artificial intelligence methods for evaluating and improving data quality.
Intended Outcomes: “What is quality health data?” The panel of experts will address this common question by discussing how to define high-quality health data and the measures being taken to ensure that such data are available in Canada. Optimizing the quality of clinical-administrative data, and their use-value, first requires an understanding of the processes used to create the data. Subsequently, we can address the limitations in data collection and use these data for diverse applications. Current advances in digital data collection are providing more solutions to improve health data quality at lower cost. This panel will describe a number of quality assessment and improvement initiatives aimed at ensuring that health data are fit for a range of secondary uses, including data linkage. It will also discuss how the need for the linkage and integration of data sources can influence views of a data source’s fitness for use.
CIHI content will include:
Methods for optimizing the value of clinical-administrative data
CIHI Information Quality Framework
Reabstraction studies (e.g., physician documentation/coders’ experiences)
Linkage analytics for data quality
University of Calgary content will include:
Defining/measuring health data quality
Automated methods for quality assessment and improvement
ICD-11 features and coding practices
Electronic health record initiatives


Author(s):  
Eaton E. Lattman ◽  
Thomas D. Grant ◽  
Edward H. Snell

Extracting information from scattering data is very sensitive to the quality of the data. In this chapter, data quality characterization is described, including initial data processing procedures that alert the user to potential data quality issues. Accurate buffer subtraction is crucial for correct modeling and analysis of SAS data, and mechanisms for identifying buffer subtraction errors are discussed. Examining SAS parameters as a function of concentration or exposure is very useful for identifying concentration-dependent artifacts or radiation damage, which, if unnoticed, can be very detrimental to further analysis and can lead to misinterpreted results and erroneous conclusions. SAS is often used for analyzing flexible molecules in solution that may be difficult to study with other structural techniques. Qualitative and quantitative assessments of flexibility are described.
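At its simplest, buffer subtraction means subtracting the buffer scattering curve from the sample curve, point by point on a common q-grid, to isolate the particle's scattering. The sketch below uses invented intensities and ignores the normalization and scaling steps a real pipeline applies; a net intensity dipping negative at low q would be one symptom of the over-subtraction errors the chapter discusses.

```python
# Minimal sketch of buffer subtraction in SAS, assuming both curves were
# measured on the same q-grid and already normalized. Intensities invented.

def subtract_buffer(sample_intensity, buffer_intensity):
    """Point-by-point buffer subtraction on a shared q-grid."""
    return [s - b for s, b in zip(sample_intensity, buffer_intensity)]

sample_i = [120.0, 80.0, 45.0, 20.0]   # hypothetical sample intensities
buffer_i = [100.0, 70.0, 40.0, 18.0]   # hypothetical buffer intensities
net = subtract_buffer(sample_i, buffer_i)
print(net)  # [20.0, 10.0, 5.0, 2.0]

# A simple sanity check of the kind a processing pipeline might run:
if any(v < 0 for v in net):
    print("warning: negative net intensity - possible over-subtraction")
```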

