An Evaluation of Kiosks for Direct-to-Consumer Telemedicine Using the National Quality Forum Assessment Framework

Author(s):  
Sapir Nachum ◽  
Kriti Gogia ◽  
Sunday Clark ◽  
Hanson Hsu ◽  
Rahul Sharma ◽  
...  
2010 ◽  
Vol 17 (8) ◽  
pp. 1989-1994 ◽  
Author(s):  
Lee G. Wilke ◽  
Karla V. Ballman ◽  
Linda M. McCall ◽  
Armando E. Giuliano ◽  
Pat W. Whitworth ◽  
...  

2011 ◽  
Vol 53 (6) ◽  
pp. 110S
Author(s):  
Benjamin S. Brooke ◽  
Ying Wei Lum ◽  
Timothy M. Pawlik ◽  
Peter J. Pronovost ◽  
Bruce A. Perler ◽  
...  

2020 ◽  
pp. 073346482094087
Author(s):  
Nicholas G. Castle ◽  
David Gifford ◽  
Lindsay B. Schwartz

The development and testing of a nursing facility resident satisfaction survey (the CoreQ) that could be used for public reporting purposes is presented here. This is important because very little satisfaction-with-care information is publicly available to nursing facility consumers. Validity testing is reported, detailing the development of the CoreQ: Short Stay Discharge questionnaire and a measure calculated from its items. The questionnaire yielded four items whose combined score represents participants’ overall satisfaction with the nursing facility. The measure parsimoniously reports this satisfaction as a single score (ranging from 0 to 100) and was recently endorsed by the National Quality Forum (NQF). The measure may have significance for report cards and payment metrics, as it incorporates consumers’ opinions.
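The abstract describes a four-item questionnaire whose responses roll up into a single 0–100 satisfaction score. As a minimal sketch of how such a roll-up can work, the example below counts a respondent as satisfied when their mean item response meets a threshold and reports the percentage of satisfied respondents; the 1–5 scale and the threshold of 3.0 are illustrative assumptions, not the NQF-endorsed CoreQ scoring specification.

```python
# Hypothetical sketch of a four-item satisfaction measure scored 0-100.
# The 1-5 response scale and the threshold are illustrative assumptions,
# not the endorsed CoreQ specification.

def satisfaction_score(respondents, threshold=3.0):
    """Percent of respondents whose mean item response meets the threshold.

    respondents: list of 4-item response lists, each item rated 1 (poor)
    to 5 (excellent). Returns a score on a 0-100 scale.
    """
    if not respondents:
        raise ValueError("no responses to score")
    satisfied = sum(
        1 for items in respondents
        if sum(items) / len(items) >= threshold
    )
    return 100.0 * satisfied / len(respondents)

responses = [
    [5, 4, 4, 5],  # mean 4.5  -> satisfied
    [2, 3, 2, 2],  # mean 2.25 -> not satisfied
    [3, 3, 3, 3],  # mean 3.0  -> satisfied
]
print(satisfaction_score(responses))  # about 66.7
```

Reporting a percentage of satisfied respondents (rather than a raw mean) is one common way to get an interpretable 0–100 facility-level score from item-level responses.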


2014 ◽  
Vol 22 (2) ◽  
pp. 409-416 ◽  
Author(s):  
Andy Amster ◽  
Joseph Jentzsch ◽  
Ham Pasupuleti ◽  
K G Subramanian

Abstract

Objective: To analyze the completeness, computability, and accuracy of specifications for five National Quality Forum (NQF)-specified eMeasures spanning ambulatory, post-discharge, and emergency care within a comprehensive, integrated electronic health record (EHR) environment.

Materials and methods: To evaluate completeness, we assessed eMeasure logic, data elements, and value sets. To evaluate computability, we assessed the translation of eMeasure algorithms to programmable logic constructs and the availability of EHR data elements to implement the specified data criteria, using a de-identified clinical data set from Kaiser Permanente Northwest. To assess accuracy, we compared eMeasure results with those obtained independently by existing audited chart-abstraction methods used for external and internal reporting.

Results: One measure specification was incomplete; missing applicable LOINC codes rendered it non-computable. For three of the four computable measures, data-availability issues occurred: the literal specification guidance for a data element differed from the physical implementation of that element in the EHR. In two cases, cross-referencing specified data elements to EHR equivalents allowed variably accurate measure computation. Substantial data-availability issues occurred for one of the four computable measures, producing highly inaccurate results.

Discussion: Existing clinical workflows, documentation, and coding in the EHR were significant barriers to implementing eMeasures as specified. Implementation requires redesigning business or clinical practices and, for one measure, systemic EHR modifications, including clinical text-search capabilities.

Conclusions: Five NQF eMeasures fell short of being machine-consumable specifications. Both clinical-domain and technological expertise are required to implement manually intensive steps, from data mapping to text mining, in EHR-specific eMeasure implementation.
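The computability problem this abstract describes often comes down to a value-set check: does a patient record contain any coded observation from the measure's specified value set? The sketch below shows that check in isolation; the code strings are made-up placeholders, not real LOINC entries, and the measure logic around the check is omitted.

```python
# Illustrative sketch of the value-set membership check at the heart of
# eMeasure computability. The codes below are hypothetical placeholders,
# not real LOINC entries or any NQF-specified value set.

MEASURE_VALUE_SET = {"loinc:1111-1", "loinc:2222-2"}  # hypothetical codes

def meets_data_criterion(patient_codes, value_set=MEASURE_VALUE_SET):
    """True if any of the patient's coded observations is in the value set."""
    return not value_set.isdisjoint(patient_codes)

# A code missing from the specified value set (as in the incomplete measure
# described above) makes the criterion silently unsatisfiable for patients
# whose data were recorded under that code.
print(meets_data_criterion({"loinc:1111-1", "loinc:9999-9"}))  # True
print(meets_data_criterion({"loinc:9999-9"}))                  # False
```

This also illustrates the cross-referencing workaround the abstract mentions: when the EHR stores an equivalent code that the specification omits, the implementer must extend or map the value set by hand, which is where accuracy can degrade.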


2020 ◽  
Vol 35 (6) ◽  
pp. 458-464
Author(s):  
David R. Nerenz ◽  
David Cella ◽  
Lacy Fabian ◽  
Eugene Nuccio ◽  
John Bott ◽  
...  

In the summer of 2017, the National Quality Forum (NQF) announced the formation of a Scientific Methods Panel (hereafter, “the Panel”) as part of a redesign of its endorsement process. NQF created the Panel in response to a stakeholder request during a Kaizen improvement event held in May 2017. Given the Panel’s role in the endorsement of performance measures used in national payment programs, the objective of this article is to describe the Panel’s work and its function both in the larger context of the NQF measure-endorsement process and in the measurement enterprise writ large. This article also serves as an introduction to a series of planned white papers by the Panel on specific technical issues in health care performance measurement.


2016 ◽  
Vol 34 (26_suppl) ◽  
pp. 180-180
Author(s):  
Shelly S. Lo ◽  
Lauren Allison Wiebe ◽  
Catherine Deamant ◽  
Amy Scheu ◽  
Betty Roggenkamp ◽  
...  

Background: The Institute of Medicine (IOM) 2013 report recommends supportive oncology care from diagnosis through survivorship to the end of life. The Coleman Supportive Oncology Collaborative (CSOC) developed a city-wide plan to improve supportive oncology. Metrics derived from the Commission on Cancer (CoC), the ASCO Quality Oncology Practice Initiative (ASCO-QOPI), and the National Quality Forum (NQF) were used to assess the CSOC’s impact.

Methods: Medical records of consecutive cancer patients from 6 practice-improvement cancer centers in Chicago (3 academic, 2 safety-net, 1 public) were reviewed for two periods: 2014 (n = 843) and Q1 2015 (n = 313). Descriptive statistics assessed differences in quality metrics.

Results: Significant improvement was achieved in 6 of 8 core supportive oncology metrics (see table).

Conclusions: Consolidated metrics are feasible for assessing supportive oncology quality. Early data indicate improvement and the effectiveness of the collaborative approach. [Table: see text]

