Development of an Electronic Tool to Measure Daily Appropriateness of Inpatient Antibacterial Use

2020 ◽  
Vol 41 (S1) ◽  
pp. s2-s2
Author(s):  
Vanessa Stevens ◽  
Pamela Belperio ◽  
Melinda Neuhauser ◽  
Lauri Hicks ◽  
McKenna Nevers ◽  
...  

Background: Assessing antimicrobial use (AU) appropriateness is a cornerstone of antimicrobial stewardship, largely accomplished through time-intensive manual chart review of specific agents or diagnoses. Prior efforts to evaluate appropriateness have focused on entire treatment courses. We developed an electronic measure to assess the appropriateness of each day of inpatient AU by leveraging electronic health record data. Methods: We extracted contextual data, including risk factors for resistant organisms, allergies, constitutional signs and symptoms from diagnostic and procedural codes, and microbiological findings, from the electronic health records of patients in Veterans’ Health Administration inpatient wards reporting data to the National Healthcare Safety Network (NHSN) AU option from 2017–2018. Only the antibacterial categories shown in Figure 1 were included. Respiratory, urinary tract, skin and soft-tissue, and other infection categories were defined and applied to each hospital day. Algorithm rules were constructed to evaluate AU based on the clinical context (eg, in the ICU, during empiric therapy, drug–pathogen match, recommended drugs, and duration). Rules were drawn from the available literature, discussed with experts, and then refined empirically. Generally, the rules allowed use of first-line agents unless risk factors or contraindications were identified. AU was categorized as appropriate, inappropriate, or indeterminate for each day, then aggregated into an overall measure of facility-level AU appropriateness. A validation set of 20 charts was randomly selected for manual review. Results: The facility distribution of appropriate, inappropriate, and indeterminate AU by 4 of the adult, 2017 baseline NHSN Standardized Antimicrobial Administration Ratio (SAAR) categories is shown in Figure 1. The median facility-level inappropriateness across all SAAR categories was 37.2% (IQR, 29.4%–52.5%). 
The median facility-level indeterminate AU across all SAAR categories was 14.4% (IQR, 9.1%–21.2%). Chart review of 20 admissions showed agreement with the algorithm’s appropriateness and inappropriateness determinations in 95.4% of 240 antibacterial days. Conclusions: We developed a comprehensive, flexible electronic tool to evaluate AU appropriateness for combinations of setting, antibacterial agent, syndrome, or time frame of interest (eg, empiric, definitive, or excess duration). Application of our algorithm to 2 years of VA acute-care data suggests substantial interfacility variability; the highest rates of inappropriateness were for anti-MRSA therapy. Our preliminary chart review demonstrated agreement between electronic and manual review in >95% of antimicrobial days. This approach may be useful for identifying potential stewardship targets, developing decision support systems, and tracking AU over time in conjunction with other metrics. Funding: None. Disclosures: None.
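The day-level, rule-based classification described above can be sketched as follows. This is a hypothetical simplification for illustration only: the `AbxDay` fields, the empiric-window cutoff, and the rule logic are assumptions, not the authors' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class AbxDay:
    drug_class: str            # e.g. "anti-MRSA" (illustrative)
    day_of_therapy: int        # 1 = first day of the course
    risk_factor_present: bool  # e.g. prior resistant organism
    culture_match: str         # "match", "mismatch", or "pending"

def classify_day(day: AbxDay) -> str:
    """Label one antibacterial day as appropriate, inappropriate, or indeterminate."""
    # Empiric window: broad agents are allowed while cultures are pending.
    if day.culture_match == "pending":
        if day.day_of_therapy <= 3 or day.risk_factor_present:
            return "appropriate"
        return "indeterminate"
    # Definitive therapy: the drug should cover the recovered pathogen.
    if day.culture_match == "mismatch":
        return "inappropriate"
    return "appropriate"

def facility_rates(days: list) -> dict:
    """Aggregate day-level labels into facility-level proportions."""
    labels = [classify_day(d) for d in days]
    n = len(labels)
    return {cat: labels.count(cat) / n
            for cat in ("appropriate", "inappropriate", "indeterminate")}
```

A facility-level inappropriateness rate then falls out of `facility_rates` directly, mirroring the aggregation step the abstract describes.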

BMJ Open ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. e051978
Author(s):  
Xiao Qing Wang ◽  
Theodore Iwashyna ◽  
Hallie Prescott ◽  
Valeria Valbuena ◽  
Sarah Seelye

Objective Extraction and standardisation of pulse oximetry and supplemental oxygen data from electronic health records has the potential to improve risk adjustment, quality assessment and prognostication. We develop an approach to standardisation and report on its use for benchmarking purposes. Materials and methods Using electronic health record data from the nationwide Veterans Affairs healthcare system (2013–2017), we extracted, standardised and validated pulse oximetry and supplemental oxygen data for 2 765 446 hospitalisations in the Veterans Affairs Patient Database (VAPD) cohort study. We assessed face, concurrent and predictive validities using the following approaches, respectively: (1) evaluating the stability of patients’ pulse oximetry values during a 24-hour period, (2) testing for greater amounts of supplemental oxygen use in patients likely to need oxygen therapy and (3) examining the association between supplemental oxygen and subsequent mortality. Results We found that 2 700 922 (98%) hospitalisations had at least one pulse oximetry reading, and 864 605 (31%) hospitalisations received oxygen therapy. Patients monitored by pulse oximetry had a reading on average every 6 hours (median 4; IQR 3–7). Patients on supplemental oxygen were older and more likely to be white and male than patients not receiving oxygen therapy (p<0.001), and were more likely to have diagnoses of heart failure and chronic pulmonary disease (p<0.001). 
The amount of supplemental oxygen for patients with at least three consecutive values recorded during a 24-hour period fluctuated by a median of 2 L/min (IQR: 2–3), and 81% of such triplets showed the same level of oxygen receipt. Conclusion Our approach to standardising pulse oximetry and supplemental oxygen data shows face, concurrent and predictive validities as follows: supplemental oxygen clusters in the range consistent with hospital wall-dispensed oxygen supplies (face validity); greater amounts of supplemental oxygen are given for certain clinical conditions (concurrent validity); and supplemental oxygen is associated with in-hospital and postdischarge mortality (predictive validity).
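The consecutive-triplet stability check described above can be sketched as a small helper. This is an illustrative reconstruction, not the authors' code; the windowing and the "same level" definition are assumptions.

```python
def triplet_stability(flow_rates, window=3):
    """
    Given chronological supplemental-oxygen flow rates (L/min) recorded
    within a 24-hour period, return (a) the fraction of consecutive
    'window'-length runs holding the same level and (b) the fluctuation
    (max - min) within each run.
    """
    triplets = [flow_rates[i:i + window]
                for i in range(len(flow_rates) - window + 1)]
    same = sum(1 for t in triplets if len(set(t)) == 1)
    fluctuations = [max(t) - min(t) for t in triplets]
    return same / len(triplets), fluctuations
```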


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 651-658
Author(s):  
Kath M Bogie ◽  
Steven K Roggenkamp ◽  
Ningzhou Zeng ◽  
Jacinta M Seton ◽  
Katelyn R Schwartz ◽  
...  

ABSTRACT Background Pressure injuries (PrI) are serious complications for many people with spinal cord injury (SCI), significantly burdening health care systems, in particular the Veterans Health Administration. Clinical practice guidelines (CPG) provide recommendations; however, many risk factors span multiple domains, and effective prioritization of CPG recommendations has been identified as a need. Bioinformatics facilitates clinical decision support for such complex challenges. The Veterans Affairs Informatics and Computing Infrastructure provides access to electronic health record (EHR) data for all Veterans Health Administration health care encounters. The overall study objective was to expand our prototype structural model of environmental, social, and clinical factors and to develop the foundation for a resource that will provide weighted systemic insight into PrI risk in veterans with SCI. Methods The SCI PrI Resource (SCI-PIR) includes three integrated modules: (1) the SCIPUDSphere multidomain database of veterans’ EHR data extracted from October 2010 to September 2015 for ICD-9-CM coding consistency, together with tissue health profiles; (2) the Spinal Cord Injury Pressure Ulcer and Deep Tissue Injury Ontology (SCIPUDO), developed from the cohort’s free-text clinical notes (Text Integration Utility notes); and (3) the clinical user interface for direct SCI-PIR query. Results The SCI-PIR contains relevant EHR data for a study cohort of 36,626 veterans with SCI, representing 10% to 14% of the U.S. population with SCI. Extracted datasets include SCI diagnostics, demographics, comorbidities, rurality, medications, and laboratory tests. Many terminology variations were found for non-coded input data. SCIPUDO facilitates robust information extraction from over six million Text Integration Utility notes annually for the study cohort. Visual widgets in the clinical user interface can be directly populated with SCIPUDO terms, allowing patient-specific query construction. 
Conclusion The SCI-PIR contains valuable clinical data based on CPG-identified risk factors, providing a basis for personalized PrI risk management following SCI. Understanding the relative impact of risk factors supports PrI management for veterans with SCI. Personalized interactive programs can enhance best practices by decreasing both initial PrI formation and readmission rates due to PrI recurrence for veterans with SCI.
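The abstract notes that many terminology variations were found for non-coded input data; a minimal sketch of the kind of normalization such a pipeline needs is shown below. The variant map is hypothetical and not drawn from SCIPUDO.

```python
import re

# Hypothetical map of free-text variants to a canonical ontology term.
VARIANT_MAP = {
    "pressure ulcer": "pressure injury",
    "decubitus ulcer": "pressure injury",
    "bed sore": "pressure injury",
    "bedsore": "pressure injury",
}

def normalize_term(raw: str) -> str:
    """Lowercase, collapse whitespace, and map known variants to one canonical term."""
    cleaned = re.sub(r"\s+", " ", raw.strip().lower())
    return VARIANT_MAP.get(cleaned, cleaned)
```

In practice an ontology such as SCIPUDO would supply the variant-to-concept mapping; the point here is only the shape of the lookup.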


2020 ◽  
Vol 41 (S1) ◽  
pp. s12-s13
Author(s):  
Hillary Mull ◽  
Kelly Stolzmann ◽  
Emily Kalver ◽  
Marlena Shin ◽  
Marin Schweizer ◽  
...  

Background: Antimicrobial prophylaxis is an evidence-based strategy for reducing procedure-related infections; however, measuring this key quality metric typically requires manual review, due to the way antimicrobial prophylaxis is documented in the electronic medical record (EMR). Our objective was to combine structured and unstructured data from the Veterans’ Health Administration (VA) EMR to create an electronic tool for measuring preincisional antimicrobial prophylaxis. We assessed this methodology in cardiac device implantation procedures. Methods: With clinician input and review of clinical guidelines, we developed a list of antimicrobial names recommended for the prevention of cardiac device infection. Next, we iteratively combined positive flags for an antimicrobial order or drug fill from structured data fields in the EMR and hits on text string searches of antimicrobial names documented in electronic clinical notes to optimize an algorithm to flag preincisional antimicrobial use with high sensitivity and specificity. We trained the algorithm using existing fiscal year (FY) 2008-15 data from the VA Clinical Assessment Reporting and Tracking-Electrophysiology (CART-EP), which contains manually determined information about antimicrobial prophylaxis. We then validated the performance of the final version of the algorithm using a national cohort of VA patients who underwent cardiac device procedures in FY 2016 or 2017. Discordant cases underwent expert manual review to identify reasons for algorithm misclassification and to identify potential future implementation barriers. Results: The CART-EP dataset included 2,102 procedures at 38 VA facilities with manually identified antimicrobial prophylaxis in 2,056 cases (97.8%). The final algorithm combining structured EMR fields and text-note search results flagged 2,048 of the CART-EP cases (97.4%). Algorithm validation identified antimicrobial prophylaxis in 16,334 of 19,212 cardiac device procedures (87.9%). 
Misclassifications occurred due to EMR documentation issues. Conclusions: We developed a highly accurate methodology for measuring guideline-concordant use of antimicrobial prophylaxis before cardiac device procedures using data fields present in modern EMRs, without reliance on manual review. In addition to broad applicability in the VA and other healthcare systems with EMRs, this method could be adapted for other procedural areas in which antimicrobial prophylaxis is recommended but comprehensive measurement has been limited to resource-intensive manual review. Funding: None. Disclosures: None.
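The core of the hybrid flagging idea, combining structured order/fill flags with text-string hits in clinical notes, can be sketched as below. The drug list, field names, and OR-combination rule are illustrative assumptions, not the validated CART-EP algorithm.

```python
import re

# Hypothetical list of recommended preincisional agents for this indication.
RECOMMENDED = ["cefazolin", "vancomycin", "clindamycin"]
NAME_PATTERN = re.compile("|".join(RECOMMENDED), re.IGNORECASE)

def flag_prophylaxis(structured_order: bool, drug_fill: bool, note_text: str) -> bool:
    """Flag a procedure as having preincisional prophylaxis if any source is positive."""
    return structured_order or drug_fill or bool(NAME_PATTERN.search(note_text))
```

Sensitivity and specificity of such a rule would then be tuned against the manually reviewed training set, as the abstract describes.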


Author(s):  
Jeffrey G Klann ◽  
Griffin M Weber ◽  
Hossein Estiri ◽  
Bertrand Moal ◽  
Paul Avillach ◽  
...  

Abstract Introduction The Consortium for Clinical Characterization of COVID-19 by EHR (4CE) is an international collaboration addressing COVID-19 with federated analyses of electronic health record (EHR) data. Objective We sought to develop and validate a computable phenotype for COVID-19 severity. Methods Twelve 4CE sites participated. First, we developed an EHR-based severity phenotype consisting of six code classes, and we validated it on patient hospitalization data from the 12 4CE clinical sites against the outcomes of ICU admission and/or death. We also piloted an alternative machine-learning approach and compared selected predictors of severity to the 4CE phenotype at one site. Results The full 4CE severity phenotype had a pooled sensitivity of 0.73 and specificity of 0.83 for the combined outcome of ICU admission and/or death. The sensitivity of individual code categories for acuity varied widely, by up to 0.65 across sites. At one pilot site, the expert-derived phenotype had a mean AUC of 0.903 (95% CI: 0.886, 0.921), compared with an AUC of 0.956 (95% CI: 0.952, 0.959) for the machine-learning approach. Billing codes were poor proxies of ICU admission, with precision and recall as low as 49% compared with chart review. Discussion We developed a severity phenotype using six code classes that proved resilient to coding variability across international institutions. In contrast, machine-learning approaches may overfit hospital-specific orders. Manual chart review revealed discrepancies even in the gold-standard outcomes, possibly due to heterogeneous pandemic conditions. Conclusion We developed an EHR-based severity phenotype for COVID-19 in hospitalized patients and validated it at 12 international sites.
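The validation arithmetic behind the pooled sensitivity and specificity figures is standard; a minimal sketch (not 4CE's federated code) follows, treating the phenotype and the ICU-admission/death outcome as binary vectors.

```python
def sensitivity_specificity(pred, truth):
    """
    Sensitivity and specificity of a binary phenotype ('pred') against a
    gold-standard binary outcome ('truth'), e.g. ICU admission and/or death.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)
```

In a federated design each site would compute these counts locally and share only the aggregates for pooling.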


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Evan S Manning ◽  
Melanie D Whittington ◽  
Susan R Kirsh ◽  
Rachael Kenney ◽  
Jeffrey Todd Stenberg ◽  
...  

Introduction: A study of 42,000 cardiology consults within the Veterans Health Administration (VHA) in 2016 found that patients who received electronic consultation (e-consults) had similar healthcare costs at 6 months compared to those who received face-to-face (F2F) consults. However, results may have been confounded if patients with less costly conditions received e-consults. Our aim was to compare costs between those receiving F2F vs. e-consults for a similar indication. Hypothesis: Electronic rather than F2F consultation for atrial fibrillation (AF) management will be associated with lower total healthcare costs. Methods: We conducted a retrospective cohort study of a national sample of VHA patients who received cardiology consultation in 2016. We used a natural language processing script to identify consults for AF management. Primary outcomes were total healthcare costs at 3 and 6 months. Secondary outcomes included inpatient and outpatient costs. We compared costs between groups using a generalized linear model with a gamma distribution and log link. We adjusted for community wage and Charlson comorbidity indices, distance to nearest facility, age, and gender. Standard errors were clustered at the facility level. Results: We sampled 176 F2F and 136 e-consults from 43 facilities. Mean total 6-month costs were $12,928 (95% confidence interval [CI]: 1,377; 40,644) and $8,286 (95% CI: 959; 31,320) among e-consult and F2F groups, respectively. The e-consult group had 12.3% higher 3-month (p<0.001) and 41.5% higher 6-month total healthcare costs (p<0.001) in comparison to the F2F group. At 3 months, the e-consult group had 25.1% lower inpatient costs (p<0.001) and 32.5% higher outpatient costs (p<0.001). At 6 months, the e-consult group had 6.3% higher inpatient costs (p<0.001) and 48.4% higher outpatient costs (p<0.001). 
Conclusions: Use of e-consults for AF management was associated with lower inpatient costs at 3 months but higher total costs, largely driven by outpatient costs. A better understanding of healthcare utilization after initial consultation, and of differences in the reasons for consultation within AF management, may help explain these findings.
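The percentage cost differences above come from a gamma GLM with a log link, where a group coefficient beta back-transforms to a multiplicative cost ratio exp(beta). A small helper shows that back-transformation; the example coefficients below are hypothetical, not taken from the study.

```python
import math

def percent_difference(beta: float) -> float:
    """
    Percent difference in expected cost implied by a log-link GLM
    coefficient: exp(beta) is the cost ratio, so the percent difference
    is (exp(beta) - 1) * 100.
    """
    return (math.exp(beta) - 1.0) * 100.0
```

For example, a coefficient of log(1.415) on the e-consult indicator corresponds to 41.5% higher expected costs.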


Pain Medicine ◽  
2020 ◽  
Vol 21 (10) ◽  
pp. 2563-2572 ◽  
Author(s):  
Diana M Higgins ◽  
Eugenia Buta ◽  
Alicia A Heapy ◽  
Mary A Driscoll ◽  
Robert D Kerns ◽  
...  

Abstract Objective To examine the relationship between body mass index (BMI) and pain intensity among veterans with musculoskeletal disorder diagnoses (MSDs; nontraumatic joint disorder; osteoarthritis; low back, back, and neck pain). Setting Administrative and electronic health record data from the Veterans Health Administration (VHA). Subjects A national cohort of US military veterans with MSDs in VHA care during 2001–2012 (N = 1,759,338). Methods These cross-sectional data were analyzed using hurdle negative binomial models of pain intensity as a function of BMI, adjusted for comorbidities and demographics. Results The sample had a mean age of 59.4 years; 95% were male, 77% were white/non-Hispanic, 79% were overweight or obese, and 42% reported no pain at index MSD diagnosis. Overall, there was a J-shaped relationship between BMI and pain (nadir = 27 kg/m2), with the severely obese (BMI ≥ 40 kg/m2) being most likely to report any pain (OR vs normal weight = 1.23, 95% confidence interval = 1.21–1.26). The association between BMI and pain varied by MSD, with a stronger relationship in the osteoarthritis group and a less pronounced relationship in the back and low back pain groups. Conclusions There was a high prevalence of overweight/obesity among veterans with MSD. High levels of BMI (>27 kg/m2) were associated with increased odds of pain, most markedly among veterans with osteoarthritis.
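The hurdle model above separates (1) whether any pain is reported from (2) how intense it is; the reported OR of 1.23 belongs to the first part. A sketch of the unadjusted odds-ratio arithmetic for "any pain" in one BMI group versus normal weight follows, with made-up counts for illustration.

```python
def odds_ratio(exposed_events, exposed_total, ref_events, ref_total):
    """
    Unadjusted odds ratio: odds of the event (any pain reported) in the
    exposed group (e.g. BMI >= 40 kg/m2) versus the reference group
    (normal weight).
    """
    odds_exposed = exposed_events / (exposed_total - exposed_events)
    odds_ref = ref_events / (ref_total - ref_events)
    return odds_exposed / odds_ref
```

The published estimate is model-adjusted for comorbidities and demographics, so it would not match a raw two-by-two calculation like this.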


2005 ◽  
Vol 134 (2) ◽  
pp. 249-257 ◽  
Author(s):  
I. A. ZUNIGA ◽  
J. J. CHEN ◽  
D. S. LANE ◽  
J. ALLMER ◽  
V. E. JIMENEZ-LUCHO

This study analyses a screening programme for hepatitis C virus (HCV) infection among US veterans in a suburban Veterans Affairs Medical Center, in New York. This is the first study examining all 11 potential risk factors listed in the 2001 National U.S. Veterans Health Administration Screening Guidelines. A retrospective study was conducted of 5400 veterans ‘at risk’ of HCV, identified through a questionnaire in this institution's primary-care outpatient departments between 1 October 2001 and 31 December 2003. Multivariate logistic regression models were built to identify independent predictors of infection. Of 2282 veterans tested for HCV, 4.6% were confirmed by HCV PCR to be HCV infected. In the multivariate model developed, injection drug use, blood transfusion before 1992, service during the Vietnam era, tattoo, and a history of abnormal liver function tests were independent predictors of HCV infection. Our data support considering a more targeted screening approach that includes five of the 11 risk factors.


Hand ◽  
2020 ◽  
pp. 155894472096497
Author(s):  
Miranda J. Rogers ◽  
Chao-Chin Lu ◽  
Andrew R. Stephens ◽  
Brittany N. Garcia ◽  
Wei Chen ◽  
...  

Background: Scaphotrapeziotrapezoid (STT) arthrodesis is a procedure used for specific degenerative arthritis and instability patterns of the wrist. This study evaluates the long-term nonunion rate and the risk factors for reoperation after STT arthrodesis in the Veterans Affairs patient population. Methods: The national Veterans Health Administration Corporate Data Warehouse and Current Procedural Terminology codes identified STT arthrodesis procedures from 1995 to 2016. Frequencies of total wrist arthrodesis (TWA) and secondary operations were determined. Univariate analyses provided odds ratios for risk factors associated with complications. Results: Fifty-eight STT arthrodeses were performed in 54 patients with a mean follow-up of 120 months. Kirschner wires (K-wires) were the most common fixation method (69%). Six wrists (10%) required secondary procedures: 5 TWAs and 1 revision STT arthrodesis. Four patients underwent additional procedures for nonunion (7%). Twenty-four patients required K-wire removal, 8 (14%) of these in the operating room, which were not included in regression analysis. Each additional year of age was associated with a 15% decrease in the likelihood of reoperation (95% confidence interval: 0.77-0.93; P < .0001). Opioid use within 90 days before surgery (P = 1.00), positive smoking history (P = 1.00), race (P = .30), comorbidity count (P = .25), and body mass index (P = .19) were not associated with increased risk of reoperation. Conclusions: At a mean follow-up of 10 years, patients undergoing STT arthrodesis have a 10% risk of reoperation, and this risk decreases with older patient age. There was a symptomatic nonunion rate of 7%, similar to prior published rates. Patient demographics, comorbidity, smoking history, and opioid use did not appear to increase risk of reoperation.


2006 ◽  
Vol 1 (2) ◽  
pp. 163-169 ◽  
Author(s):  
Dwight C. Evans ◽  
W. Paul Nichol ◽  
Jonathan B. Perlin

Since 1995, the Veterans Health Administration (VHA) has had an ongoing process of systems improvement that has led to dramatic gains in the quality of care delivered. A major component of the redesign of the VHA has been the creation of a fully developed enterprise-wide Electronic Health Record (EHR). VHA’s Health Information Technology was developed collaboratively between local clinical champions and central software engineers. Full national EHR implementation was achieved by 1999; since then, the VHA has increased its productivity by nearly 6 per cent per year.


2018 ◽  
Vol 09 (04) ◽  
pp. 803-808 ◽  
Author(s):  
Julia Lloyd ◽  
Erin Ahrens ◽  
Donnie Clark ◽  
Terri Dachenhaus ◽  
Kathryn Nuss

Objective This article describes the method of integrating a manual pediatric emergency department sepsis screening process into the electronic health record that leverages existing clinical documentation and keeps providers in their current, routine clinical workflows. Methods Criteria in the manual pediatric emergency department sepsis screening tool were mapped to standard documentation routinely entered in the electronic health record. Data elements were extracted and scored from the medical history, medication record, vital signs, and physical assessments. Scores that met a predefined sepsis risk threshold triggered interruptive system alerts that notified emergency department staff to perform sepsis huddles and consider appropriate interventions. Statistical comparison of the new electronic tool with the manual process was completed using a two-tailed paired t-test. Results The performance of the pediatric electronic sepsis screening tool was evaluated by comparing flowsheet-row documentation of the manual sepsis alert process against the interruptive system alert instances of the electronic sepsis screening tool. In an 8-week testing period, the automated pediatric electronic sepsis screening tool identified 100% of patients flagged by the manual process (n = 29), on average 68 minutes earlier. Conclusion Integrating a manual sepsis screening tool into the electronic health record automated the identification of pediatric sepsis risk in a busy emergency department. The electronic sepsis screening tool is as accurate as the manual process and would alert bedside clinicians significantly earlier in the emergency department course. Deployment of this electronic tool has the capability to improve timely sepsis detection and management of patients at risk for sepsis without requiring additional documentation by providers.
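The map-score-alert flow described above can be sketched as a small scoring function with a threshold trigger. The criteria names, weights, and threshold below are illustrative assumptions, not the hospital's actual rule set.

```python
# Hypothetical weights for criteria mapped from routine EHR documentation.
CRITERIA_WEIGHTS = {
    "high_risk_history": 2,  # from the medical history
    "abnormal_vitals": 2,    # from vital-sign flowsheets
    "poor_perfusion": 3,     # from physical assessment
}
ALERT_THRESHOLD = 4          # predefined sepsis risk threshold

def sepsis_score(findings: dict) -> int:
    """Sum the weights of criteria currently documented as present."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if findings.get(name))

def should_alert(findings: dict) -> bool:
    """Fire the interruptive alert (prompting a sepsis huddle) at or above threshold."""
    return sepsis_score(findings) >= ALERT_THRESHOLD
```

Because the inputs are documentation clinicians already enter, the alert fires without any extra provider data entry, which is the key design point of the integration.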

