Data quality improvement in general practice

2006 ◽  
Vol 23 (5) ◽  
pp. 529-536 ◽  
Author(s):  
H. Brouwer
2021 ◽  
Vol 27 (2) ◽  
pp. 143
Author(s):  
Abhijeet Ghosh ◽  
Elizabeth Halcomb ◽  
Sandra McCarthy ◽  
Christine Ashley

General practice data provide important opportunities for both population health and within-practice initiatives to improve health. Despite this promise, the use of such data is limited by a lack of accuracy. The Sentinel Practices Data Sourcing (SPDS) project is a structured chronic disease surveillance and data quality improvement strategy in general practice. A mixed-methods approach was used to evaluate data quality improvement in 99 participating practices over 12 months. Quantitative data were obtained by measuring performance against 10 defined indicators, whereas 48 semi-structured interviews provided qualitative data. Aggregated scores demonstrated improvements in all indicators, ranging from minor to substantial. Participants reported positively on the level of support provided and on the acquisition of new knowledge and skills relating to data entry and cleansing. This evaluation provides evidence of the effectiveness of a structured approach to improving the quality of primary care data. Investing in this targeted intervention has the potential to create sustained improvements in data quality, which can drive clinical practice improvement.


2021 ◽  
Vol 10 (2) ◽  
pp. e001309
Author(s):  
Jennifer Gosling ◽  
Nicholas Mays ◽  
Bob Erens ◽  
David Reid ◽  
Josephine Exley

Background: This paper presents the results of the first UK-wide survey of National Health Service (NHS) general practitioners (GPs) and practice managers (PMs) designed to explore the service improvement activities being undertaken in practices, and the factors that facilitated or obstructed that work. The research was prompted by growing policy and professional interest in the quality of general practice and its improvement. The analysis compares GP and PM involvement in, and experience of, quality improvement activities. Methods: This was a mixed-method study comprising 26 semistructured interviews, a focus group and two surveys. The qualitative data supported the design of the surveys, which were sent to all 46 238 GPs on the Royal College of General Practitioners (RCGP) database and the PM at every practice across the UK (n=9153) in July 2017. Results: Responses from 2377 GPs and 1424 PMs were received and were broadly representative of each group. Ninety-nine per cent reported having planned or undertaken improvement activities in the previous 12 months. The most frequent related to prescribing and access. Key facilitators of improvement included ‘good clinical leadership’. The two main barriers were ‘too many demands from external stakeholders’ and a lack of protected time. Audit and significant event audit were the most common improvement tools used, but respondents were interested in training on other quality improvement tools. Conclusion: GPs and PMs are interested in improving service quality. As such, the new quality improvement domain in the Quality and Outcomes Framework used in the payment of practices is likely to be relatively easily accepted by GPs in England. However, if improving quality is to become routine work for practices, it will be important for the NHS in the four UK countries to work with practices to mitigate some of the barriers that they face, in particular the lack of protected time.


2020 ◽  
Vol 8 (4) ◽  
pp. e000512
Author(s):  
Ingvild Vatten Alsnes ◽  
Morten Munkvik ◽  
W Dana Flanders ◽  
Nicolas Øyane

Objectives: We aimed to describe the quality improvement measures made by Norwegian general practices (GPs) during the COVID-19 pandemic, evaluate differences in quality improvements based on region and assess the combinations of actions taken. Design: Descriptive study. Setting: Participants were included after taking part in an online quality improvement COVID-19 course for Norwegian GPs in April 2020. The participants reported whether internal and external measures were in place: COVID-19 sign on entrance, updated home page, access to video consultations and/or electronic written consultations, home office solutions, separate working teams, preparedness for home visits, isolation rooms, knowledge on decontamination, access to sufficient supplies of personal protective equipment (PPE) and COVID-19 clinics. Participants: One hundred GP offices were included. The mean number of general practitioners per office was 5.63. Results: More than 80% of practices had the following preparedness measures: COVID-19 sign on entrance, updated home page, COVID-19 clinic in the municipality, video and written electronic consultations, knowledge on how to use PPE, and home office solutions for general practitioners. Less than 50% had both PPE and knowledge of decontamination. Lack of PPE was reported by 37%, and 34% reported neither sufficient PPE nor a dedicated COVID-19 clinic. Fifteen per cent reported that they had an isolation room, but not enough PPE. There were no geographical differences. Conclusions: Norwegian GPs in this study implemented many quality improvements to adapt to the COVID-19 pandemic. Overall, the largest potential for improvement seems to lie in securing a sufficient supply of PPE and establishing an isolation room at the practice.


Author(s):  
Alla Andrianova ◽  
Maxim Simonov ◽  
Dmitry Perets ◽  
Andrey Margarit ◽  
Darya Serebryakova ◽  
...  

Author(s):  
Suranga C. H. Geekiyanage ◽  
Dan Sui ◽  
Bernt S. Aadnoy

Drilling industry operations heavily depend on digital information. Data analysis is a process of acquiring, transforming, interpreting, modelling, displaying and storing data with the aim of extracting useful information, so that a system's decision-making, action execution, event detection and incident management can be handled efficiently and reliably. This paper aims to provide an approach to understand, cleanse, improve and interpret post-well or real-time data to preserve or enhance data features such as accuracy, consistency, reliability and validity. Data quality management is a process with three major phases. Phase I is a pre-data quality evaluation to identify issues such as missing or incomplete data, non-standard or invalid data, and redundant data. Phase II is an implementation of different data quality management practices, such as filtering, data assimilation and data reconciliation, to improve data accuracy and discover useful information. The third and final phase is a post-data quality evaluation, which is conducted to assure data quality and enhance system performance. In this study, a laboratory-scale drilling rig with a control system capable of drilling is utilized for data acquisition and quality improvement. Safe and efficient performance of such a control system heavily relies on the quality and sufficient availability of the data obtained while drilling. Pump pressure, top-drive rotational speed, weight on bit, drill string torque and bit depth are available measurements. The data analysis is challenged by issues such as corruption of data due to noise, time delays, missing or incomplete data and external disturbances. To solve such issues, different data quality improvement practices are applied during testing. These techniques help the intelligent system achieve better decision-making and quicker fault detection.
The study from the laboratory-scale drilling rig clearly demonstrates the need for a proper data quality management process and a clear understanding of signal processing methods to carry out intelligent digitalization in the oil and gas industry.
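The Phase I/Phase II idea above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual pipeline: the valid range, last-value imputation and moving-median filter are assumed example choices for a single sensor channel such as pump pressure.

```python
from statistics import median

def clean_signal(samples, valid_range, window=5):
    """Sketch of a two-phase data quality step for one sensor channel.

    Phase I: evaluate pre-data quality (count missing and out-of-range
    points). Phase II: impute invalid points and suppress spike noise.
    All parameter choices here are illustrative assumptions."""
    lo, hi = valid_range
    # Phase I: pre-data quality evaluation
    issues = {
        "missing": sum(1 for s in samples if s is None),
        "out_of_range": sum(
            1 for s in samples if s is not None and not (lo <= s <= hi)
        ),
    }
    # Replace missing/invalid points with the last valid value
    # (falling back to the range minimum if none has been seen yet)
    cleaned, last = [], None
    for s in samples:
        if s is None or not (lo <= s <= hi):
            s = last if last is not None else lo
        cleaned.append(s)
        last = s
    # Phase II: moving-median filter to suppress spike noise
    half = window // 2
    filtered = [
        median(cleaned[max(0, i - half): i + half + 1])
        for i in range(len(cleaned))
    ]
    return issues, filtered

# Example: a pump pressure trace (bar) with one dropout and one spike
trace = [100.0, 101.0, None, 250.0, 102.0, 101.5, 100.5]
issues, smooth = clean_signal(trace, valid_range=(0.0, 200.0))
```

A Phase III check would then re-run the same evaluation on `smooth` to confirm that the issue counts have dropped to zero before the data are handed to the control system.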


2021 ◽  
Author(s):  
Qing Xie ◽  
Chengong Han ◽  
Victor Jin ◽  
Shili Lin

Single cell Hi-C techniques enable one to study cell-to-cell variability in chromatin interactions. However, single cell Hi-C (scHi-C) data suffer severely from sparsity, that is, the existence of excess zeros due to insufficient sequencing depth. Complicating things further is the fact that not all zeros are created equal: some arise because loci truly do not interact under the underlying biological mechanism (structural zeros), whereas others are indeed due to insufficient sequencing depth (sampling zeros), especially for loci that interact infrequently. Differentiating between structural zeros and sampling zeros is important, since correct inference would improve downstream analyses such as clustering and discovery of subtypes. Nevertheless, distinguishing between these two types of zeros has received little attention in the single cell Hi-C literature, where the issue of sparsity has been addressed mainly as a data quality improvement problem. To fill this gap, in this paper, we propose HiCImpute, a Bayesian hierarchical model that goes beyond data quality improvement by also identifying observed zeros that are in fact structural zeros. HiCImpute takes the spatial dependencies of the scHi-C 2D data structure into account while also borrowing information from similar single cells and bulk data, when such are available. Through an extensive set of analyses of synthetic and real data, we demonstrate the ability of HiCImpute to identify structural zeros with high sensitivity and to accurately impute dropout values in sampling zeros. Downstream analyses using data improved by HiCImpute yielded much more accurate clustering of cell types compared to using observed data or data improved by several comparison methods. Most significantly, HiCImpute-improved data have led to the identification of subtypes within each of the excitatory neuronal cells of L4 and L5 in the prefrontal cortex.
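The structural-versus-sampling distinction can be made concrete with a toy zero-inflated count model. This is an illustration of the inference problem only, not the HiCImpute model itself; the structural-zero probability and mean depth are assumed example parameters.

```python
import random

def simulate_contacts(n_pairs, p_structural=0.3, mean_depth=2.0, seed=0):
    """Toy zero-inflated count model for scHi-C contact counts.

    With probability p_structural a locus pair is a structural zero
    (no true interaction, count is always 0). Otherwise its count is
    a Poisson draw with mean mean_depth, so a zero can still occur
    purely from shallow sequencing (a sampling zero)."""
    rng = random.Random(seed)
    observed, truth = [], []
    for _ in range(n_pairs):
        if rng.random() < p_structural:
            observed.append(0)
            truth.append("structural")
        else:
            # Poisson draw via unit-rate exponential inter-arrival times
            count, t = 0, rng.expovariate(1.0)
            while t < mean_depth:
                count += 1
                t += rng.expovariate(1.0)
            observed.append(count)
            truth.append("sampling" if count == 0 else "interacting")
    return observed, truth

obs, truth = simulate_contacts(1000)
zeros = sum(1 for c in obs if c == 0)
# The observed matrix alone mixes structural and sampling zeros; telling
# them apart requires extra information (spatial dependencies, similar
# cells, bulk data), which is exactly the inference HiCImpute performs.
```

Treating every zero as a dropout and imputing it would wrongly fill in the structural zeros, which is why a method that classifies zeros before imputing improves downstream clustering.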

