Electronic Intervention to Improve Structured Cancer Stage Data Capture

2016 ◽  
Vol 12 (10) ◽  
pp. e949-e956 ◽  
Author(s):  
Michael Cecchini ◽  
Kim Framski ◽  
Patricia Lazette ◽  
Teresita Vega ◽  
Michael Strait ◽  
...  

Purpose: Cancer staging is critical for prognostication, treatment planning, and determining clinical trial eligibility. Electronic health records (EHRs) have structured staging modules, but physician use is inconsistent. Typically, stage is entered as unstructured free text in clinical notes and cannot easily be used for reporting. Methods: We created an Epic Best Practice Advisory (BPA) decision support tool that requires physicians to enter cancer stage in a structured module. If certain conditions are met, the BPA is triggered as a hard stop, and the physician cannot complete charting until staging is complete or a reason for not staging is selected. We used Plan, Do, Study, Act methodology to inform the intervention and compared preexisting staging rates with rates at 4, 8, and 12 months postintervention. Results: In the 12 months before BPA implementation, 1,480 of 5,222 (28%) patients had cancer stage structured within the Epic problem list. From 1 to 4 months after the BPA, 2,057 of 1,788 (115%) cases were staged in Epic. In the 5- to 8-month period after the BPA, 1,057 of 1,893 (56%) cases were staged, and 9 to 12 months after the BPA, 1,082 of 1,817 (60%) were staged. Conclusion: Electronic decision support improves the rate of structured cancer staging at our institution. The staging rates of 56% and 60% for the 5- to 8-month and 9- to 12-month periods likely reflect accurate postintervention staging rates, whereas the initial 115% rate for 1 to 4 months is inflated by providers staging cancers diagnosed before the BPA.
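The percentages reported above can be recomputed directly from the counts in the abstract. A minimal sketch (the function name is invented for illustration; the actual BPA is configured inside Epic, not written as application code):

```python
def staging_rate(staged: int, eligible: int) -> int:
    """Percentage of eligible cases with a structured stage entered,
    rounded to the nearest whole percent as in the abstract."""
    return round(100 * staged / eligible)

# Counts reported in the abstract, by period relative to the BPA launch.
periods = {
    "12 mo pre":   (1480, 5222),
    "1-4 mo post": (2057, 1788),  # >100%: includes backlog staging of older cases
    "5-8 mo post": (1057, 1893),
    "9-12 mo post": (1082, 1817),
}
for label, (staged, eligible) in periods.items():
    print(f"{label}: {staging_rate(staged, eligible)}%")
```

This makes the "inflated" 115% figure concrete: the numerator counts all cases staged in the period, including previously diagnosed cancers, while the denominator counts only newly eligible cases.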

2016 ◽  
Vol 34 (7_suppl) ◽  
pp. 151-151
Author(s):  
Kerin B. Adelson ◽  
Kim Framski ◽  
Patricia Lazette ◽  
Teresita Vega ◽  
Rogerio Lilenbaum

151 Background: Cancer staging is critical for prognostication, treatment planning, outcomes analysis, registry reporting, and clinical trial eligibility determination. Oncology EHRs have structured staging modules, but use by physicians is inconsistent. Typically, stage is entered as unstructured free text in clinical notes and cannot be used for reporting. Instead, institutions depend on the tumor registry (TR), which typically lags 6 months behind. Our Cancer Committee determined that real-time capture of structured cancer staging was an imperative. Methods: We created an Epic best practice advisory (BPA) decision support tool that requires physicians to enter cancer stage if the following criteria are met: 1) unstaged cancer on the problem list; 2) an Epic staging module exists for that cancer; 3) the physician is from a specialty with staging expertise. This BPA was implemented 12/18/14. If physicians chose not to stage, they had to enter a reason why. The choices were: 1) cancer diagnosed before 2014, at which point the BPA was permanently removed; 2) staging studies not yet completed, at which point the BPA fired again at a future encounter; 3) not a staging provider, at which point the BPA no longer fired for that individual provider; 4) cannot stage (document reason), at which point the BPA was permanently removed. Results: We used TR data to determine the number of patients who were eligible for staging. In the 12 months prior to the intervention, 1,480 of 5,222 (28%) patients who were eligible for staging were staged in the structured staging module. After we launched the intervention, between 12/18/14 and 4/30/15, 1,654 of 1,831 (90%) eligible patients were staged electronically, a relative improvement of more than 200%. Conclusions: Electronic decision support can dramatically improve rates of structured staging. Such data allow automated reports for clinical trial screening, outcomes analysis, quality comparisons, and reporting.
We are now building automated reports for: clinical trial eligibility; Commission on Cancer/QOPI breast, colon, and lung measures; rates of palliative care consultation for advanced disease; and outcome measures such as disease-free interval by stage and overall survival.
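The trigger criteria and opt-out reasons described in the methods amount to a small decision rule. The sketch below is illustrative only: the `Encounter` record, its field names, and the reason strings are invented for this example (the real rules live in Epic's BPA configuration), and it simplifies the "not a staging provider" option, which in the abstract suppresses the BPA per provider rather than per patient.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    has_unstaged_cancer: bool    # criterion 1: unstaged cancer on problem list
    staging_module_exists: bool  # criterion 2: Epic module covers this cancer
    staging_specialty: bool      # criterion 3: provider has staging expertise
    suppressed: bool = False     # set once a permanent opt-out reason is chosen

def bpa_fires(enc: Encounter) -> bool:
    """The BPA fires only when all three criteria hold and no permanent
    opt-out reason has been recorded."""
    return (not enc.suppressed
            and enc.has_unstaged_cancer
            and enc.staging_module_exists
            and enc.staging_specialty)

# Reasons 1, 3, and 4 silence the BPA; reason 2 defers it to a later encounter.
PERMANENT_REASONS = {"diagnosed_before_2014", "not_a_staging_provider",
                     "cannot_stage_documented"}
DEFERRED_REASONS = {"staging_studies_pending"}

def apply_reason(enc: Encounter, reason: str) -> None:
    if reason in PERMANENT_REASONS:
        enc.suppressed = True
    elif reason not in DEFERRED_REASONS:
        raise ValueError(f"unknown reason: {reason}")
```

A deferred reason leaves `suppressed` untouched, so the same encounter conditions will fire the BPA again at the next visit, matching the behavior described for "staging studies not yet completed".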


2018 ◽  
Vol 27 (01) ◽  
pp. 127-128

Chen JH, Alagappan M, Goldstein MK, Asch SM, Altman RB. Decaying relevance of clinical data towards future decisions in data-driven inpatient clinical order sets. Int J Med Inform 2017 Jun;102:71-9. https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28495350/

Ebadi A, Tighe PJ, Zhang L, Rashidi P. DisTeam: A decision support tool for surgical team selection. Artif Intell Med 2017 Feb;76:16-26. https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28363285/

Fung KW, Kapusnik-Uner J, Cunningham J, Higby-Baker S, Bodenreider O. Comparison of three commercial knowledge bases for detection of drug-drug interactions in clinical decision support. J Am Med Inform Assoc 2017 Jul 1;24(4):806-12. https://academic.oup.com/jamia/article-lookup/doi/10.1093/jamia/ocx010

Mikalsen KØ, Soguero-Ruiz C, Jensen K, Hindberg K, Gran M, Revhaug A, Lindsetmo RO, Skrøvseth SO, Godtliebsen F, Jenssen R. Using anchors from free text in electronic health records to diagnose postoperative delirium. Comput Methods Programs Biomed 2017 Dec;152:105-14. https://linkinghub.elsevier.com/retrieve/pii/S0169-2607(17)31154-9


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S805-S806
Author(s):  
Ryan K Dare ◽  
Claire E Bewley ◽  
Amanda J Novack ◽  
Jared M Heiles ◽  
Larissa K Chin

Abstract Background Hospital-acquired C. difficile infections (CDI) contribute to significant morbidity, mortality, and cost burden in hospitalized patients. Clinical practice guidelines recommend strict testing criteria when employing nucleic acid amplification testing alone, so as not to test asymptomatic carriers. A BPA within the electronic medical record (EMR) may assist with this screening. Methods At our 9-hospital system, we created a BPA to help identify patients who may not meet criteria for CDI testing. The initial BPA (January 2018) asked whether the patient had 3 or more stools (yes/no) and whether laxatives were administered in the last 48 hours (yes/no). An expanded BPA was updated to pull medication administration records for use of laxatives in the prior 48 hours (August 2018) and to notify providers of recent C. difficile testing in the past 7 days (January 2019). C. difficile orders from March 2017 (historical), March 2018 (intervention 1), and March 2019 (intervention 2) were evaluated to assess the impact of these interventions. Results C. difficile testing during 30,621 (historical), 31,299 (intervention 1), and 31,960 (intervention 2) patient-days was evaluated. Rates of C. difficile orders and infections are reported in the table. The ratio of positive C. difficile specimens to tested specimens was similar between the historical arm (51 of 402; 12.7%) and both the intervention 1 (42 of 271; 15.5%) and intervention 2 (45 of 316; 14.2%) arms (P = 0.3 and P = 0.5, respectively). The intervention 1 and intervention 2 arms were similar in all metrics. Statistical analysis was performed using Stata, v.14.2. Conclusion Implementation of a decision support tool to assist with C. difficile testing significantly decreased order rates in both the initial and expanded BPA intervention arms. Compared with historical rates, the incidence of CDI decreased in both intervention arms, though these decreases were not statistically significant.
Similarly, the ratio of positive specimens to specimens tested increased in both intervention arms, though not significantly, indicating a trend toward improved patient selection. To improve appropriate CDI testing, further oversight and/or education is needed to accompany implementation of an EMR decision support tool, such as a BPA. Disclosures All authors: No reported disclosures.
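The reported P values of 0.3 and 0.5 can be reproduced with a standard pooled two-proportion z-test. The authors used Stata v.14.2; the stdlib sketch below is an independent recomputation under that assumed test, not their code.

```python
from math import sqrt, erf

def two_prop_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided two-proportion z-test (pooled variance, normal
    approximation). Returns the p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

# Positivity ratios: historical 51/402 vs intervention 1 (42/271)
# and intervention 2 (45/316).
print(round(two_prop_p(51, 402, 42, 271), 1))  # 0.3
print(round(two_prop_p(51, 402, 45, 316), 1))  # 0.5
```

Both results match the abstract after rounding to one decimal place, which supports reading the reported figures as two-sided comparisons of positivity proportions.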


2017 ◽  
Author(s):  
Nicola Whiffin ◽  
Roddy Walsh ◽  
Risha Govind ◽  
Matthew Edwards ◽  
Mian Ahmad ◽  
...  

ABSTRACT Purpose Internationally adopted variant interpretation guidelines from the American College of Medical Genetics and Genomics (ACMG) are generic and require disease-specific refinement. Here we developed CardioClassifier (www.cardioclassifier.org), a semi-automated decision-support tool for inherited cardiac conditions (ICCs). Methods CardioClassifier integrates data retrieved from multiple sources with user-input, case-specific information, through an interactive interface, to support variant interpretation. Combining disease- and gene-specific knowledge with variant observations in large cohorts of cases and controls, we refined 14 computational ACMG criteria and created three ICC-specific rules. Results We benchmarked CardioClassifier on 57 expertly curated variants and show full retrieval of all computational data, concordantly activating 87.3% of rules. A generic annotation tool identified fewer than half as many clinically actionable variants (64/219 vs 156/219, Fisher's P = 1.1 × 10⁻¹⁸), with important false positives, illustrating the critical importance of disease- and gene-specific annotations. CardioClassifier identified putatively disease-causing variants in 33.7% of 327 cardiomyopathy cases, comparable with leading ICC laboratories. Through the addition of manually curated data, variants found in over 40% of cardiomyopathy cases are fully annotated without requiring additional user-input data. Conclusion CardioClassifier is an ICC-specific decision-support tool that integrates expertly curated computational annotations with case-specific data to generate fast, reproducible, and interactive variant pathogenicity reports according to best practice guidelines.
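As a rough illustration of how activated ACMG evidence codes combine into a classification, the sketch below applies a small subset of the published ACMG combining rules to counts of evidence at each strength. This is a deliberate simplification for illustration only; CardioClassifier's actual rule set (14 refined computational criteria plus three ICC-specific rules) is richer, and several published combining rules are omitted here.

```python
def classify(n_pvs: int, n_ps: int, n_pm: int, n_pp: int) -> str:
    """Classify a variant from counts of activated evidence codes:
    very strong (PVS), strong (PS), moderate (PM), supporting (PP).
    Implements only a subset of the ACMG combining rules."""
    # Pathogenic: PVS1 plus corroborating evidence, >=2 strong,
    # or 1 strong with >=3 moderate.
    if ((n_pvs >= 1 and (n_ps >= 1 or n_pm >= 2 or n_pp >= 2))
            or n_ps >= 2
            or (n_ps == 1 and n_pm >= 3)):
        return "Pathogenic"
    # Likely pathogenic: weaker but still convergent combinations.
    if ((n_pvs == 1 and n_pm == 1)
            or (n_ps == 1 and 1 <= n_pm <= 2)
            or n_pm >= 3):
        return "Likely pathogenic"
    return "Uncertain significance"
```

In a tool like the one described, each refined computational criterion contributes one activated code, and case-specific user input can add or remove codes before the combining step runs.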


2020 ◽  
Vol 41 (S1) ◽  
pp. s184-s184
Author(s):  
Stephanie Cobb ◽  
Stephanie Nguyen ◽  
Deepa Raj ◽  
Dena Taherzadeh ◽  
Pranavi Sreeramoju

Background: Mycobacterium tuberculosis (TB) is one of the leading causes of morbidity and mortality worldwide. At our health system, 50-100 patients are diagnosed with tuberculosis every year. One risk factor for TB is residence within a homeless shelter. In response to an increased number of cases in local homeless shelters, the health department sought assistance with contact tracing of individuals potentially exposed to tuberculosis. We report the results of contact tracing performed at our health system. Methods: The setting is a 770-bed, safety-net, academic hospital with community clinics and a correctional health center. The name, date of birth, and social security number of contacts potentially exposed during February 2009 to July 2013 were programmed into the electronic medical record to create a decision support tool triggered upon entry into the health system. The best practice alert (BPA) informed physicians of the exposure and offered a link to a screening test, T-spot.TB, and a link to an information sheet. This intervention was implemented from July 2013 to July 2015. After excluding patients with active TB, data on the magnitude of exposure in each homeless shelter and screening test results were analyzed with ANOVA using SPSS v 26 software. Results: Of the 8,649 identified exposed contacts, 2,118 entered our health system. Of those for whom the BPA was triggered, 1,117 had a T-spot.TB done, with 313 positive results and 57 borderline results. Table 1 shows that shelter 3 was correlated with a positive T-spot.TB. Conclusions: The BPA, which prompted physicians to evaluate an individual for TB, was effective at capturing high-risk, exposed individuals. Clinical decision support tools enabled our safety-net health system to respond effectively to a local public health need. Funding: None. Disclosures: None.
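The identifier-based matching described in the methods can be sketched as a registry lookup on normalized identifiers. Everything below is hypothetical, including the `contact_key` helper and field choices; the real match was implemented inside the EMR at patient registration.

```python
def contact_key(name: str, dob: str, ssn: str) -> tuple:
    """Normalize identifiers so registration-time entries match the
    exposure registry despite casing/whitespace differences."""
    return (name.strip().lower(), dob, ssn)

# Illustrative exposure registry (name, DOB, SSN) loaded from the
# health department's contact list; entries here are fabricated.
exposed = {contact_key("Jane Doe", "1980-05-01", "123-45-6789")}

def trigger_bpa(name: str, dob: str, ssn: str) -> bool:
    """Fire the TB-exposure alert if the registering patient matches
    an exposed contact."""
    return contact_key(name, dob, ssn) in exposed
```

A set membership test keeps the per-registration check O(1), which matters when 8,649 contacts are screened against every patient entering a multi-site system.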

