Applying A/B Testing to Clinical Decision Support: Rapid Randomized Controlled Trials

10.2196/16651 · 2021 · Vol 23 (4) · pp. e16651
Author(s):  
Jonathan Austrian ◽  
Felicia Mendoza ◽  
Adam Szerencsy ◽  
Lucille Fenelon ◽  
Leora I Horwitz ◽  
...  

Background Clinical decision support (CDS) is a valuable feature of electronic health records (EHRs) designed to improve quality and safety. However, due to the complexities of system design and inconsistent results, CDS tools may inadvertently increase alert fatigue and contribute to physician burnout. A/B testing, or rapid-cycle randomized tests, is a useful method that can be applied to the EHR in order to rapidly understand and iteratively improve design choices embedded within CDS tools. Objective This paper describes how rapid randomized controlled trials (RCTs) embedded within EHRs can be used to quickly ascertain the superiority of potential CDS design changes to improve their usability, reduce alert fatigue, and promote quality of care. Methods A multistep process combining tools from user-centered design, A/B testing, and implementation science was used to understand, ideate, prototype, test, analyze, and improve each candidate CDS. CDS engagement metrics (alert views, acceptance rates) were used to evaluate which CDS version was superior. Results To demonstrate the impact of the process, 2 experiments are highlighted. First, after multiple rounds of usability testing, a revised CDS influenza alert was tested against usual care CDS in a rapid (~6 weeks) RCT. The new alert text had minimal impact on reducing firings per patient per day, but this failure triggered another round of review that identified key technical improvements (ie, removal of the dismissal button and of firings in procedural areas) that led to a dramatic decrease in firings per patient per day (23.1 to 7.3). In the second experiment, the process was used to test 3 versions (financial, quality, regulatory) of text supporting tobacco cessation alerts as well as 3 supporting images. Based on 3 rounds of RCTs, there was no significant difference in acceptance rates based on the framing of the messages or the addition of images. 
Conclusions These experiments support the potential for this new process to rapidly develop, deploy, and rigorously evaluate CDS within an EHR. We also identified important considerations in applying these methods. This approach may be an important tool for improving the impact of and experience with CDS. Trial Registration Flu alert trial: ClinicalTrials.gov NCT03415425; https://clinicaltrials.gov/ct2/show/NCT03415425. Tobacco alert trial: ClinicalTrials.gov NCT03714191; https://clinicaltrials.gov/ct2/show/NCT03714191
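The rapid A/B approach described above boils down to two pieces: deterministic randomization of users to a CDS variant and a comparison of engagement rates between arms. The sketch below illustrates that pattern; the hashing scheme, function names, and all counts are hypothetical illustrations, not the trial's actual mechanism or data.

```python
import hashlib
import math

def assign_arm(provider_id: str, experiment: str) -> str:
    """Deterministically assign a provider to arm A or B by hashing
    their ID together with the experiment name (hypothetical scheme)."""
    digest = hashlib.sha256(f"{experiment}:{provider_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z(accept_a, views_a, accept_b, views_b):
    """Two-proportion z statistic comparing alert acceptance rates."""
    p_a, p_b = accept_a / views_a, accept_b / views_b
    p_pool = (accept_a + accept_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical counts: 120/1000 acceptances in arm A vs 150/1000 in arm B.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))  # -> -1.96
```

Hashing rather than random draws keeps each provider in the same arm across sessions, which is one common way to avoid contaminating an embedded EHR experiment.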

2019
Author(s):  
Devin Mann ◽  
Adam Szerencsy ◽  
Leora Horwitz ◽  
Simon Jones ◽  
Masha Kuznetsova ◽  
...  

BACKGROUND Clinical decision support (CDS) is a valuable feature of electronic health records (EHRs) designed to improve quality and safety. However, due to the complexities of system design and inconsistent results, CDS tools may inadvertently increase alert fatigue and contribute to physician burnout. A/B testing, or rapid-cycle randomized tests, is a useful method that can be applied to the EHR in order to understand and iteratively improve design choices embedded within CDS tools. OBJECTIVE This paper describes how rapid randomized controlled trials (RCTs) embedded within EHRs can be used to quickly ascertain the superiority of potential CDS tools to improve their usability, reduce alert fatigue and promote quality of care. METHODS A multi-step process combining tools from user-centered design, A/B testing and implementation science is used to understand, ideate, prototype, test, analyze and improve each candidate CDS. CDS engagement metrics (alert views, ignores, orders) are used to evaluate which CDS version is superior. RESULTS Two experiments are highlighted to demonstrate the impact of the process. First, after multiple rounds of usability testing, a revised CDS influenza alert was tested against usual care in a rapid RCT. The new alert text resulted in minimal impact but the failure triggered another round of testing that identified key issues and led to a 70% reduction in alert volume in the next round. In the second experiment, the process was used to test three versions (financial, quality, regulatory) of text supporting tobacco cessation alerts as well as three supporting images. Three rounds of RCTs showed that the financial framing was 5-10% more effective than the other two but that adding images did not have a positive impact. CONCLUSIONS These data support the potential for this new process to rapidly develop, deploy and improve CDS within an EHR. This approach may be an important tool for improving the impact and experience of CDS. 
CLINICALTRIAL Our flu alert trial was registered in January 2018 with ClinicalTrials.gov, registration number NCT03415425. Our tobacco alert trial was registered in October 2018 with ClinicalTrials.gov, registration number NCT03714191.


2021 · Vol 12 (02) · pp. 199-207
Author(s):  
Liang Yan ◽  
Thomas Reese ◽  
Scott D. Nelson

Abstract Objective Increasingly, pharmacists provide team-based care that impacts patient care; however, the extent of recent clinical decision support (CDS), targeted to support the evolving roles of pharmacists, is unknown. Our objective was to evaluate the literature to understand the impact of clinical pharmacists using CDS. Methods We searched MEDLINE, EMBASE, and Cochrane Central for randomized controlled trials, nonrandomized trials, and quasi-experimental studies which evaluated CDS tools that were developed for inpatient pharmacists as a target user. The primary outcome of our analysis was the impact of CDS on patient safety, quality use of medication, and quality of care. Outcomes were scored as positive, negative, or neutral. The secondary outcome was the proportion of CDS developed for tasks other than medication order verification. Study quality was assessed using the Newcastle–Ottawa Scale. Results Of 4,365 potentially relevant articles, 15 were included. Five studies were randomized controlled trials. All included studies were rated as good quality. Of the studies evaluating inpatient pharmacists using a CDS tool, four showed significantly improved quality use of medications, four showed significantly improved patient safety, and three showed significantly improved quality of care. Six studies (40%) supported expanded roles of clinical pharmacists. Conclusion These results suggest that CDS can support clinical inpatient pharmacists in preventing medication errors and optimizing pharmacotherapy. Moreover, an increasing number of CDS tools have been developed for pharmacists' roles outside of order verification, thereby further supporting and establishing pharmacists as leaders in safe and effective pharmacotherapy.


2020 · Vol 10 (4) · pp. 142
Author(s):  
Brian J. Douthit ◽  
R. Clayton Musser ◽  
Kay S. Lytle ◽  
Rachel L. Richesson

(1) Background: The five rights of clinical decision support (CDS) are a well-known framework for planning the nuances of CDS, but recent advancements have given us more options to modify the format of the alert. One-size-fits-all assessments fail to capture the nuance of different BestPractice Advisory (BPA) formats. To demonstrate a tailored evaluation methodology, we assessed a BPA after implementation of Storyboard for changes in alert fatigue, behavior influence, and task completion; (2) Methods: Data from 19 weeks before and after implementation were used to evaluate differences in each domain. Individual clinics were evaluated for task completion and compared for changes pre- and post-redesign; (3) Results: The change in format was correlated with an increase in alert fatigue, a decrease in erroneous free text answers, and worsened task completion at a system level. At a local level, however, 14% of clinics had improved task completion; (4) Conclusions: While the change in BPA format was correlated with decreased performance, the changes may have been driven primarily by the COVID-19 pandemic. The framework and metrics proposed can be used in future studies to assess the impact of new CDS formats. Although the changes in this study seemed undesirable in aggregate, some positive changes were observed at the level of individual clinics. Personalized implementations of CDS tools based on local need should be considered.
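The study's system-level versus clinic-level distinction can be sketched as a small pre/post comparison: aggregate task completion across all clinics, then check each clinic individually. All clinic names and counts below are hypothetical; only the evaluation idea comes from the abstract.

```python
# Hypothetical (completed, total) task counts per clinic before and
# after a BPA format change; illustrative data only.
pre  = {"clinic_a": (40, 100), "clinic_b": (55, 110), "clinic_c": (30, 90)}
post = {"clinic_a": (52, 100), "clinic_b": (44, 110), "clinic_c": (25, 90)}

def rate(completed, total):
    return completed / total

# Local view: which individual clinics improved?
improved = [c for c in pre if rate(*post[c]) > rate(*pre[c])]

# System view: pooled completion rate across all clinics.
system_pre = sum(c for c, _ in pre.values()) / sum(t for _, t in pre.values())
system_post = sum(c for c, _ in post.values()) / sum(t for _, t in post.values())
print(f"{len(improved)}/{len(pre)} clinics improved; "
      f"system rate {system_pre:.1%} -> {system_post:.1%}")
```

With these made-up numbers the system-level rate worsens even though one clinic improves, mirroring the abstract's point that aggregate metrics can hide desirable local changes.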


2018
Author(s):  
Sundas Khan ◽  
Safiya Richardson ◽  
Andrew Liu ◽  
Vinodh Mechery ◽  
Lauren McCullagh ◽  
...  

BACKGROUND Successful clinical decision support (CDS) tools can help use evidence-based medicine to effectively improve patient outcomes. However, the impact of these tools has been limited by low provider adoption due to overtriggering, leading to alert fatigue. We developed a tracking mechanism for monitoring trigger (percent of total visits for which the tool triggers) and adoption (percent of completed tools) rates of a complex CDS tool based on the Wells criteria for pulmonary embolism (PE). OBJECTIVE We aimed to monitor and evaluate the adoption and trigger rates of the tool and assess whether ongoing tool modifications would improve adoption rates. METHODS As part of a larger clinical trial, a CDS tool was developed using the Wells criteria to calculate pretest probability for PE at 2 tertiary centers’ emergency departments (EDs). The tool had multiple triggers: any order for D-dimer, computed tomography (CT) of the chest with intravenous contrast, CT pulmonary angiography (CTPA), ventilation-perfusion scan, or lower extremity Doppler ultrasound. A tracking dashboard was developed using Tableau to monitor real-time trigger and adoption rates. Based on initial low provider adoption rates of the tool, we conducted small focus groups with key ED providers to elicit barriers to tool use. We identified overtriggering of the tool for non-PE-related evaluations and inability to order CT testing for intermediate-risk patients. Thus, the tool was modified to allow CT testing for the intermediate-risk group and not to trigger for CT chest with intravenous contrast orders. A dialogue box, “Are you considering PE for this patient?” was added before the tool triggered to account for CTPAs ordered for aortic dissection evaluation. RESULTS In the ED of tertiary center 1, 95,295 patients visited during the academic year. 
The tool triggered for an average of 509 patients per month (average trigger rate 2036/30,234, 6.73%) before the modifications, decreasing to 423 patients per month (average trigger rate 1629/31,361, 5.22%) afterward. In the ED of tertiary center 2, 88,956 patients visited during the academic year, with the tool triggering for about 473 patients per month (average trigger rate 1892/29,706, 6.37%) before the modifications and about 400 patients per month (average trigger rate 1534/30,006, 5.12%) afterward. The modifications resulted in significant 4.5- and 3-fold increases in provider adoption rates in tertiary centers 1 and 2, respectively, raising the average monthly adoption rate from 23.20/360 (6.5%) to 81.60/280.20 (29.3%) tools in center 1 and from 46.60/318.80 (14.7%) to 111.20/263.40 (42.6%) tools in center 2. CONCLUSIONS Close postimplementation monitoring of CDS tools may help improve provider adoption. Adaptive modifications based on user feedback may increase targeted CDS with lower trigger rates, reducing alert fatigue and increasing provider adoption. Iterative improvements and a postimplementation monitoring dashboard can significantly improve adoption rates.
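The trigger-rate and fold-change figures above follow directly from the reported counts; the arithmetic can be checked as below (the `pct` helper is illustrative, and the counts are those quoted in the abstract).

```python
def pct(numerator, denominator):
    """Pooled rate as a percentage, rounded to two decimals."""
    return round(100 * numerator / denominator, 2)

# Pre-modification trigger rates, from the counts in the abstract:
center1_pre = pct(2036, 30234)   # -> 6.73 (%)
center2_pre = pct(1892, 29706)   # -> 6.37 (%)

# Fold-change in average monthly adoption rates after the modifications:
center1_fold = round((81.60 / 280.20) / (23.20 / 360), 1)      # -> 4.5
center2_fold = round((111.20 / 263.40) / (46.60 / 318.80), 1)  # -> 2.9, i.e. ~3-fold
```

Note that the post-modification percentages in the abstract appear to be means of monthly rates rather than pooled ratios, so recomputing them from the pooled counts can differ in the second decimal place.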


2014 · Vol 32 (36) · pp. 4120-4126
Author(s):  
Isabelle Boutron ◽  
Douglas G. Altman ◽  
Sally Hopewell ◽  
Francisco Vera-Badillo ◽  
Ian Tannock ◽  
...  

Purpose We aimed to assess the impact of spin (ie, reporting to convince readers that the beneficial effect of the experimental treatment is greater than shown by the results) on the interpretation of results of abstracts of randomized controlled trials (RCTs) in the field of cancer. Methods We performed a two-arm, parallel-group RCT. We selected a sample of published RCTs with statistically nonsignificant primary outcome and with spin in the abstract conclusion. Two versions of these abstracts were used—the original with spin and a rewritten version without spin. Participants were clinician corresponding authors of articles reporting RCTs, investigators of trials, and reviewers of French national grants. The primary outcome was clinicians' interpretation of the beneficial effect of the experimental treatment (0 to 10 scale). Participants were blinded to study hypothesis. Results Three hundred clinicians were randomly assigned using a Web-based system; 150 clinicians assessed an abstract with spin and 150 assessed an abstract without spin. For abstracts with spin, the experimental treatment was rated as being more beneficial (mean difference, 0.71; 95% CI, 0.07 to 1.35; P = .030), the trial was rated as being less rigorous (mean difference, −0.59; 95% CI, −1.13 to 0.05; P = .034), and clinicians were more interested in reading the full-text article (mean difference, 0.77; 95% CI, 0.08 to 1.47; P = .029). There was no statistically significant difference in the clinicians' rating of the importance of the study or the need to run another trial. Conclusion Spin in abstracts can have an impact on clinicians' interpretation of the trial results.
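The primary outcome above is a difference in mean 0-10 ratings between two independent groups, reported with a 95% CI. A minimal sketch of that comparison, using a normal approximation and entirely hypothetical ratings (not the trial's data):

```python
import math
import statistics

def mean_diff_ci(group_a, group_b, z=1.96):
    """Difference in means with an approximate 95% CI for two
    independent groups (normal approximation; illustrative only)."""
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = math.sqrt(statistics.variance(group_a) / len(group_a)
                   + statistics.variance(group_b) / len(group_b))
    return diff, (diff - z * se, diff + z * se)

# Hypothetical 0-10 benefit ratings from readers of abstracts with
# and without spin.
spin    = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
no_spin = [5, 6, 5, 6, 5, 6, 4, 5, 6, 5]
diff, (lo, hi) = mean_diff_ci(spin, no_spin)
```

A CI that excludes zero, as in the trial's reported 0.07 to 1.35, indicates a statistically significant difference in perceived benefit.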


2014 · Vol 05 (03) · pp. 802-813
Author(s):  
A.D. Bryant ◽  
G.S. Fletcher ◽  
T.H. Payne

Summary Background: Interruptive drug interaction alerts may reduce adverse drug events and are required for Stage I Meaningful Use attestation. For the last decade override rates have been very high. Despite their widespread use in commercial EHR systems, previously described interventions to improve alert frequency and acceptance have not been well studied. Objectives: (1) To measure override rates of inpatient medication alerts within a commercial clinical decision support system, and assess the impact of local customization efforts. (2) To compare override rates between drug-drug interaction and drug-allergy interaction alerts, between attending and resident physicians, and between public and academic hospitals. (3) To measure the correlation between physicians' individual alert quantities and override rates as an indicator of potential alert fatigue. Methods: We retrospectively analyzed physician responses to drug-drug and drug-allergy interaction alerts, as generated by a common decision support product in a large teaching hospital system. Results: (1) Over four days, 461 different physicians entered 18,354 medication orders, resulting in 2,455 visible alerts; 2,280 alerts (93%) were overridden. (2) The drug-drug alert override rate was 95.1%, statistically higher than the rate for drug-allergy alerts (90.9%) (p < 0.001). There was no significant difference in override rates between attendings and residents, or between hospitals. (3) Physicians saw a mean of 1.3 alerts per day, and the number of alerts per physician was not significantly correlated with override rate (R2 = 0.03, p = 0.41). Conclusions: Despite intensive efforts to improve a commercial drug interaction alert system and to reduce alerting, override rates remain as high as reported over a decade ago. Alert fatigue does not seem to contribute. The results suggest the need to fundamentally question the premises of drug interaction alert systems. Citation: Bryant AD, Fletcher GS, Payne TH. Drug interaction alert override rates in the Meaningful Use era: No evidence of progress. Appl Clin Inf 2014; 5: 802-813. http://dx.doi.org/10.4338/ACI-2013-12-RA-0103
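The headline figures in this abstract are simple ratios of the reported counts, reproduced below as a sanity check (variable names are illustrative; the counts are those quoted above).

```python
# 2,280 of 2,455 visible alerts were overridden:
overridden, visible = 2280, 2455
override_rate = round(100 * overridden / visible)
print(f"{override_rate}% of visible alerts were overridden")  # -> 93%

# 2,455 alerts across 461 physicians over 4 days:
alerts_per_physician_day = round(visible / (461 * 4), 1)
print(alerts_per_physician_day)  # -> 1.3
```

At roughly one alert per physician per day, exposure is low enough that the study's lack of correlation between alert volume and override rate is plausible; the very high override rate points at alert relevance rather than fatigue.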

