A Systematised State-of-the-Art Review of Machine Learning Models to Aid Clinical Decision-Making in Epilepsy

2020 ◽  
Author(s):  
Edward Jonathan Han-Burgess ◽  
Richard J. Stevens

2021 ◽
Vol 12 (04) ◽  
pp. 808-815
Author(s):  
Lin Lawrence Guo ◽  
Stephen R. Pfohl ◽  
Jason Fries ◽  
Jose Posada ◽  
Scott Lanyon Fleming ◽  
...  

Abstract Objective The change in performance of machine learning models over time as a result of temporal dataset shift is a barrier to machine learning-derived models facilitating decision-making in clinical practice. Our aim was to describe technical procedures used to preserve the performance of machine learning models in the presence of temporal dataset shifts. Methods Studies were included if they were fully published articles that used machine learning and implemented a procedure to mitigate the effects of temporal dataset shift in a clinical setting. We described how dataset shift was measured, the procedures used to preserve model performance, and their effects. Results Of 4,457 potentially relevant publications identified, 15 were included. The impact of temporal dataset shift was primarily quantified using changes, usually deterioration, in calibration or discrimination. Calibration deterioration was more common (n = 11) than discrimination deterioration (n = 3). Mitigation strategies were categorized as model level or feature level. Model-level approaches (n = 15) were more common than feature-level approaches (n = 2), with the most common approaches being model refitting (n = 12), probability calibration (n = 7), model updating (n = 6), and model selection (n = 6). In general, all mitigation strategies were successful at preserving calibration but not uniformly successful in preserving discrimination. Conclusion There was limited research in preserving the performance of machine learning models in the presence of temporal dataset shift in clinical medicine. Future research could focus on the impact of dataset shift on clinical decision making, benchmark the mitigation strategies on a wider range of datasets and tasks, and identify optimal strategies for specific settings.
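As a concrete illustration of one of the surveyed mitigation strategies, probability calibration can be sketched in a few lines. The sketch below is our own minimal plain-Python example (the names `recalibrate_intercept` and `apply_shift` are hypothetical): it fits a single intercept shift on the logit scale so that a drifted model's mean predicted risk matches the observed event rate in recent data, a "calibration-in-the-large" update. None of the reviewed studies is implied to have used exactly this procedure.

```python
import math

def recalibrate_intercept(probs, labels, lr=0.1, epochs=500):
    """Fit one intercept shift (delta) on the logit scale by gradient
    descent on the log-loss, so the mean predicted probability matches
    the observed event rate in the recent (post-shift) data."""
    logits = [math.log(p / (1 - p)) for p in probs]
    delta = 0.0
    for _ in range(epochs):
        # gradient of mean log-loss w.r.t. delta: mean(sigmoid(z+delta) - y)
        grad = sum(1 / (1 + math.exp(-(z + delta))) - y
                   for z, y in zip(logits, labels)) / len(labels)
        delta -= lr * grad
    return delta

def apply_shift(probs, delta):
    """Apply the fitted intercept shift to new predicted probabilities."""
    return [1 / (1 + math.exp(-(math.log(p / (1 - p)) + delta)))
            for p in probs]

# A model that over-predicts on recent data (mean 0.75 vs event rate 0.5)
probs = [0.8, 0.7, 0.9, 0.6]
labels = [1, 0, 1, 0]
delta = recalibrate_intercept(probs, labels)
adjusted = apply_shift(probs, delta)
```

Because only the intercept moves, this update restores calibration-in-the-large while leaving the ranking of patients, and hence discrimination, unchanged, which mirrors the review's finding that mitigation strategies preserved calibration more reliably than discrimination.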


2019 ◽  
Vol 3 (s1) ◽  
pp. 60-61
Author(s):  
Kadie Clancy ◽  
Esmaeel Dadashzadeh ◽  
Christof Kaltenmeier ◽  
JB Moses ◽  
Shandong Wu

OBJECTIVES/SPECIFIC AIMS: This retrospective study aims to create and train machine learning models, using a radiomic-based feature extraction method, for two classification tasks: benign vs. pathologic PI and operation of benefit vs. operation not needed. The long-term goal of our study is to build a computerized model that incorporates both radiomic features and critical non-imaging clinical factors to improve current surgical decision-making when managing PI patients. METHODS/STUDY POPULATION: We searched radiology reports from 2010-2012 via the UPMC MARS Database for reports containing the term “pneumatosis” (subsequently accounting for negations and age restrictions). Inclusion criteria were: patient age 18 or older, clinical data available at the time of CT diagnosis, and PI visualized on manual review of imaging. Cases with intra-abdominal free air were excluded. We collected CT imaging data and an additional 149 clinical data elements per patient, for a total of 75 PI cases. Data collection for an additional 225 patients is ongoing. We trained models for two clinically relevant prediction tasks. The first (prediction task 1) classifies between benign and pathologic PI. Benign PI is defined as either lack of intraoperative visualization of transmural intestinal necrosis or successful non-operative management until discharge. Pathologic PI is defined as either intraoperative visualization of transmural necrosis or withdrawal of care and subsequent death during hospitalization. The distribution of data samples for prediction task 1 is 47 benign cases and 38 pathologic cases. The second (prediction task 2) classifies between patients who benefited from an operation and those who did not. “Operation of benefit” is defined as patients with PI, whether transmural or simply mucosal, who benefited from an operation.
“Operation not needed” is defined as patients who were safely discharged without an operation, or patients who had an operation in which nothing was found. The distribution of data samples for prediction task 2 is 37 operation not needed cases and 38 operation of benefit cases. An experienced surgical resident from UPMC manually segmented 3D PI ROIs from the CT scans (5 mm axial cuts) for each case. The ~10-15 cm segment of bowel most concerning for necrosis was selected, with a 1 cm margin. A total of 7 slices per patient were segmented for consistency. For both prediction task 1 and prediction task 2, we independently completed the following procedure for training and testing: (1) extracted radiomic features from the 3D PI ROIs, yielding 99 features in total; (2) used LASSO feature selection to determine the subset of the 99 features most significant for performance on the prediction task; (3) used leave-one-out cross-validation for training and testing, to account for the small dataset size in our preliminary analysis, and implemented and trained several machine learning models (AdaBoost, SVM, and Naive Bayes); (4) evaluated the trained models in terms of AUC and accuracy and determined the ideal model structure based on these performance metrics. RESULTS/ANTICIPATED RESULTS: Prediction task 1: the top-performing model was an SVM trained using 19 features, with an AUC of 0.79 and an accuracy of 75%. Prediction task 2: the top-performing model was an SVM trained using 28 features, with an AUC of 0.74 and an accuracy of 64%. DISCUSSION/SIGNIFICANCE OF IMPACT: To the best of our knowledge, this is the first study to use radiomic-based machine learning models for the prediction of tissue ischemia, specifically intestinal ischemia in the setting of PI.
In this preliminary study, which serves as a proof of concept, the performance of our models demonstrates that machine learning based only on radiomic imaging features can have discriminative power for surgical decision-making problems. While many non-imaging clinical factors contribute to the gestalt of clinical decision-making when PI presents, the radiomic-based models we present may augment that decision-making process, especially in more difficult cases where clinical features indicating an acute abdomen are absent. It should be noted that prediction task 2 (whether a patient presenting with PI would benefit from an operation) showed lower performance than prediction task 1 and is also a more challenging task for physicians in real clinical environments. While our results are promising and demonstrate potential, we are currently working to increase our dataset to 300 patients to further train and assess our models. References: DuBose, Joseph J., et al. “Pneumatosis Intestinalis Predictive Evaluation Study (PIPES): a multicenter epidemiologic study of the Eastern Association for the Surgery of Trauma.” Journal of Trauma and Acute Care Surgery 75.1 (2013): 15-23. Knechtle, Stuart J., Andrew M. Davidoff, and Reed P. Rice. “Pneumatosis intestinalis. Surgical management and clinical outcome.” Annals of Surgery 212.2 (1990): 160.
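The leave-one-out evaluation loop from the methods can be sketched as follows. This is our own minimal plain-Python illustration, not the study's code: features are plain lists of numbers, a nearest-centroid classifier stands in for the SVM/AdaBoost/Naive Bayes models actually trained, and the LASSO feature-selection step is omitted.

```python
def loocv_accuracy(features, labels, train_fn, predict_fn):
    """Leave-one-out cross-validation: hold out each case in turn,
    train on the rest, and score the held-out prediction."""
    correct = 0
    n = len(features)
    for i in range(n):
        X_train = features[:i] + features[i + 1:]
        y_train = labels[:i] + labels[i + 1:]
        model = train_fn(X_train, y_train)
        if predict_fn(model, features[i]) == labels[i]:
            correct += 1
    return correct / n

def train_centroid(X, y):
    """Stand-in classifier: mean feature vector per class."""
    centroids = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict_centroid(centroids, x):
    """Predict the class whose centroid is nearest (squared distance)."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda c: sq_dist(centroids[c], x))

# Toy two-feature dataset (e.g., two radiomic features, two classes)
X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
y = [0, 0, 1, 1]
accuracy = loocv_accuracy(X, y, train_fn=train_centroid,
                          predict_fn=predict_centroid)
```

With only 75-85 cases per task, leave-one-out uses every case for both training and testing, which is why the study chose it over a held-out split for this preliminary analysis.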


Med ◽  
2021 ◽  
Author(s):  
Lorenz Adlung ◽  
Yotam Cohen ◽  
Uria Mor ◽  
Eran Elinav

Author(s):  
Chenxi Huang ◽  
Shu-Xia Li ◽  
César Caraballo ◽  
Frederick A. Masoudi ◽  
John S. Rumsfeld ◽  
...  

Background: New methods such as machine learning techniques have been increasingly used to enhance the performance of risk predictions for clinical decision-making. However, commonly reported performance metrics may not be sufficient to capture the advantages of these newly proposed models for their adoption by health care professionals to improve care. Machine learning models often improve risk estimation for certain subpopulations, and such improvements may be missed by these metrics. Methods and Results: This article addresses the limitations of commonly reported metrics for performance comparison and proposes additional metrics. Our discussion covers metrics related to overall performance, discrimination, calibration, resolution, reclassification, and model implementation. Models for predicting acute kidney injury after percutaneous coronary intervention are used to illustrate the use of these metrics. Conclusions: We demonstrate that commonly reported metrics may not have sufficient sensitivity to identify improvement of machine learning models, and we propose the use of a comprehensive list of performance metrics for reporting and comparing clinical risk prediction models.
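Two of the metric families discussed, discrimination and overall performance, can be illustrated in a few lines of plain Python. The function names below are ours, not the article's; a full comparison along the article's lines would also cover calibration, resolution, reclassification, and implementation metrics.

```python
def c_statistic(probs, labels):
    """Discrimination (AUC / c-statistic): the probability that a
    randomly chosen event case is ranked above a randomly chosen
    non-event case, with ties counted as half."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def brier_score(probs, labels):
    """Overall performance (Brier score): mean squared difference
    between predicted probabilities and observed outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

# Toy risk predictions for four patients (1 = event, 0 = no event)
probs = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 0]
auc = c_statistic(probs, labels)
brier = brier_score(probs, labels)
```

Note that a model can score perfectly on discrimination while being poorly calibrated (e.g., if every probability above were halved, the c-statistic would be unchanged), which is exactly why the article argues for reporting multiple metric families rather than a single summary number.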


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Shubham Debnath ◽  
Douglas P. Barnaby ◽  
Kevin Coppa ◽  
Alexander Makhnevich ◽  
...  
