Visualizing Emergency Department Admission through Data Mining

2019
Vol 1 (2)
pp. 19-31
Author(s):
Kalaivani S
Shalini Dhiman
Rajagopal T.K.P.

Emergency Department (ED) boarding (the inability to transfer emergency patients to inpatient beds) is a key factor contributing to ED overcrowding. This paper presents a novel approach to improving hospital operational efficiency and, therefore, to decreasing ED boarding. Using the historic data of 15,000 patients, admission results and patient information are correlated in order to identify important admission predictors; for example, the type of radiology exam ordered by the ED physician is identified as among the most important predictors of admission. Based on these factors, a real-time prediction model is developed that correctly predicts the admission result of four out of every five ED patients. The proposed admission model can be used by inpatient units to estimate the likelihood of ED patients' admission and, consequently, the number of patients arriving from the ED in the near future; using similar prediction models, hospitals can evaluate their short-term needs for inpatient care more accurately. We use three algorithms to build the predictive models: (1) logistic regression, (2) decision trees, and (3) gradient boosted machines. The gradient boosted machine achieved better performance (accuracy = 80.31%, AUC-ROC = 0.859) than the decision tree (accuracy = 80.06%, AUC-ROC = 0.824) and the logistic regression model (accuracy = 79.94%, AUC-ROC = 0.849). Drawing on the logistic regression, we identify several factors related to hospital admission, including hospital site, age, arrival mode, triage category, care group, previous admission in the past month, and previous admission in the past year.
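
As a concrete illustration of the kind of pipeline this abstract describes, the sketch below fits the three named model families on synthetic stand-in data and reports accuracy and AUC-ROC for each. It is a hedged reconstruction, not the authors' code, and every dataset and variable here is a placeholder.

```python
# Minimal sketch (not the authors' code): compare logistic regression,
# a decision tree, and a gradient boosted machine on a synthetic
# stand-in for the ~15,000-patient admission dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=15_000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "gradient_boosted_machine": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p_admit = model.predict_proba(X_te)[:, 1]  # predicted P(admission)
    print(name,
          f"accuracy={accuracy_score(y_te, model.predict(X_te)):.4f}",
          f"AUC-ROC={roc_auc_score(y_te, p_admit):.4f}")
```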

2022
Author(s):
Hyungbok Lee
Sangrim Lee
Hyeoneui Kim

Background: Transferring an emergency patient to another emergency department (ED) is necessary when the patient cannot receive the needed treatment at the first ED visited, although such transfers pose potential risks of adverse clinical outcomes and of lowering the quality of emergency medical services by overcrowding the receiving ED. This study aimed to understand the factors affecting the ED length of stay (LOS) of critically ill patients and to investigate whether they receive prompt treatment through interhospital transfer (IHT). Methods: This study analyzed 968 critically ill patients transferred to the ED of the study site in 2019. Machine learning based prediction models were built to predict ED LOS dichotomized as greater than 6 hours or not. Explanatory variables covering patient, clinical, transfer-related, and ED characteristics were selected through univariate analyses. Results: Among the prediction models, logistic regression (AUC 0.85) showed the highest prediction performance, followed by random forest (AUC 0.83) and naïve Bayes (AUC 0.83). The logistic regression model suggested that the need for emergency operation or angiography (OR 3.91, 95% CI 1.65–9.21), the need for intensive care unit (ICU) admission (OR 3.84, 95% CI 2.53–5.83), fewer consultations (OR 3.57, 95% CI 2.84–4.49), a high triage level (OR 2.27, 95% CI 1.43–3.59), and fewer diagnoses (OR 1.32, 95% CI 1.09–1.61) coincided with a higher likelihood of a 6-hour-or-shorter ED stay. Furthermore, an interhospital transfer handoff led to significantly shorter ED LOS among patients who needed emergency operation or angiography or ICU admission, or who had a high triage level. Conclusions: These results suggest that patients prioritized for emergency treatment receive prompt intervention and leave the ED in time. Having a proper interhospital transfer handoff before IHT is also crucial to providing efficient care and avoiding unnecessarily long ED stays.
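
For readers who want the shape of the analysis, here is a hedged sketch of the model comparison the abstract reports: logistic regression, random forest, and naïve Bayes ranked by AUC on a synthetic stand-in for the dichotomized (more than 6 hours or not) LOS outcome. The univariate variable-selection step is elided.

```python
# Hedged sketch: rank three classifiers by AUC for a binary ED-LOS target.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the 968 transferred patients; y = 1 if LOS <= 6 h.
X, y = make_classification(n_samples=968, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for name, clf in [("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("random_forest", RandomForestClassifier(n_estimators=300)),
                  ("naive_bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```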


Author(s):  
David W. Savage
Douglas G. Woolford
Mackenzie Simpson
David Wood
Robert Ohle

Emergency department (ED) overcrowding is a growing problem in Canada, and many interventions have been proposed to increase patient flow. The objective of this study was to predict patient admission early in the visit, with the goal of reducing waiting time in the ED for admitted patients. ED data for a one-year period from Thunder Bay, Canada were obtained. Initial logistic regression models were developed using age, sex, mode of arrival, and patient acuity as explanatory variables and admission (yes or no) as the outcome. A second-stage prediction was made using the diagnostic tests ordered to further refine the models. The predictive accuracy of the logistic regression model was adequate, with an AUC of approximately 81%. By summing the admission probabilities of the patients in the ED, the hourly prediction improved. This study has shown that the number of hospital beds required on an hourly basis can be predicted using triage administrative data.
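
The aggregation step is the distinctive idea here: because the expected value of a sum of Bernoulli outcomes is the sum of their probabilities, the expected number of beds needed in an hour is simply the sum of the admission probabilities of the patients present. A hedged sketch with simulated data and hypothetical column names:

```python
# Sketch: sum per-patient admission probabilities into an hourly
# expected bed count. All data and column names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
ed = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "male": rng.integers(0, 2, n),
    "ambulance": rng.integers(0, 2, n),
    "acuity": rng.integers(1, 6, n),       # triage level, 1 = most acute
    "hour": rng.integers(0, 24, n),        # hour of presentation
})
ed["admitted"] = rng.binomial(1, 0.25, n)  # placeholder outcome

features = ["age", "male", "ambulance", "acuity"]
model = LogisticRegression(max_iter=1000).fit(ed[features], ed["admitted"])

# Hourly expected bed demand: sum of individual admission probabilities.
ed["p_admit"] = model.predict_proba(ed[features])[:, 1]
expected_beds = ed.groupby("hour")["p_admit"].sum()
print(expected_beds.round(1).head())
```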


2021
Vol 7 (1)
pp. 1-12
Author(s):  
Borislava Vrigazova

The established approach to choosing the number of principal components for prediction models is to examine the contribution of each principal component to the total variance of the target variable and to select a combination of components that explains a large part of that variance; sometimes several combinations must be explored to achieve the highest classification accuracy. This research proposes a novel, automatic way of deciding how many principal components should be retained to improve classification accuracy. We do so by combining principal components with ANOVA selection and, to improve the accuracy of the automatic approach, by using the bootstrap procedure for model selection. We call this procedure Bootstrapped-ANOVA PCA selection. Our results suggest that the procedure can automate the selection of principal components and improve the accuracy of classification models, in our example, logistic regression.
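
A minimal sketch of the Bootstrapped-ANOVA PCA idea as the abstract describes it, under stated assumptions: principal component scores are ranked with the ANOVA F-test, and the number retained is chosen by out-of-bag bootstrap accuracy of a logistic regression. The dataset is a generic scikit-learn example, and the authors' exact procedure may differ in detail.

```python
# Sketch: choose how many ANOVA-ranked principal components to keep by
# out-of-bag bootstrap accuracy of a logistic regression.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

def bootstrap_accuracy(k, n_boot=20):
    """Mean out-of-bag accuracy when the top-k components are retained."""
    scores = []
    for b in range(n_boot):
        idx = resample(np.arange(len(X)), random_state=b)  # bootstrap sample
        oob = np.setdiff1d(np.arange(len(X)), idx)         # out-of-bag rows
        pipe = make_pipeline(StandardScaler(),
                             PCA(n_components=10),
                             SelectKBest(f_classif, k=k),
                             LogisticRegression(max_iter=5000))
        pipe.fit(X[idx], y[idx])
        scores.append(pipe.score(X[oob], y[oob]))
    return np.mean(scores)

best_k = max(range(1, 11), key=bootstrap_accuracy)
print("principal components retained:", best_k)
```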


2019
Author(s):
Oskar Flygare
Jesper Enander
Erik Andersson
Brjánn Ljótsson
Volen Z Ivanov
...  

**Background:** Previous attempts to identify predictors of treatment outcomes in body dysmorphic disorder (BDD) have yielded inconsistent findings. One way to increase precision and clinical utility could be to use machine learning methods, which can incorporate multiple non-linear associations in prediction models. **Methods:** This study used a random forests machine learning approach to test whether it is possible to reliably predict remission from BDD in a sample of 88 individuals who had received internet-delivered cognitive behavioral therapy for BDD. The random forest models were compared to traditional logistic regression analyses. **Results:** Random forests correctly identified 78% of participants as remitters or non-remitters at post-treatment. The accuracy of prediction was lower at subsequent follow-ups (68%, 66%, and 61% correctly classified at the 3-, 12-, and 24-month follow-ups, respectively). Depressive symptoms, treatment credibility, working alliance, and initial severity of BDD were among the most important predictors at the beginning of treatment. By contrast, the logistic regression models did not identify consistent and strong predictors of remission from BDD. **Conclusions:** The results provide initial support for the clinical utility of machine learning approaches in predicting outcomes for patients with BDD. **Trial registration:** ClinicalTrials.gov ID: NCT02010619.
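
A hedged sketch of the modelling contrast described above: a random forest and a logistic regression classifying remission, with the forest's variable importances reported. Predictor names are taken from the abstract; the data are simulated, so the numbers carry no clinical meaning.

```python
# Sketch: random forest vs logistic regression on simulated remission data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 88  # sample size reported in the abstract
X = pd.DataFrame({
    "depressive_symptoms": rng.normal(size=n),
    "treatment_credibility": rng.normal(size=n),
    "working_alliance": rng.normal(size=n),
    "initial_bdd_severity": rng.normal(size=n),
})
y = rng.integers(0, 2, n)  # placeholder remission labels

rf = RandomForestClassifier(n_estimators=500, random_state=0)
lr = LogisticRegression(max_iter=1000)
print("CV accuracy, random forest:", cross_val_score(rf, X, y, cv=5).mean())
print("CV accuracy, logistic regression:", cross_val_score(lr, X, y, cv=5).mean())

# Variable importances from a forest fitted on the full sample.
rf.fit(X, y)
print(pd.Series(rf.feature_importances_, index=X.columns)
        .sort_values(ascending=False))
```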


2021
Vol 10 (11)
pp. 2475
Author(s):
Olivier Peyrony
Danaé Gamelon
Romain Brune
Anthony Chauvin
Daniel Aiham Ghazali
...  

Background: We aimed to describe red blood cell (RBC) transfusions in the emergency department (ED), with a particular focus on the hemoglobin (Hb) thresholds used in this setting. Methods: This was a cross-sectional study of 12 EDs including all adult patients who received an RBC transfusion in January and February 2018. Descriptive statistics were reported, and logistic regression was performed to assess variables independently associated with a pre-transfusion Hb level ≥ 8 g/dL. Results: During the study period, 529 patients received an RBC transfusion. The median age was 74 (59–85) years, and 185 (35.2%) patients had a history of cancer or hematological disease. Acute bleeding was observed in the ED in 242 (44.7%) patients, of which 145 (59.9%) cases were gastrointestinal. Anemia was chronic in 191 (40.2%) cases, mostly due to vitamin or iron deficiency or to malignancy with transfusion support. The median pre-transfusion Hb level was 6.9 (6.0–7.8) g/dL. The transfusion indication was not documented in the medical chart in 206 (38.9%) cases. In the multivariable logistic regression, the variables associated with a higher pre-transfusion Hb level (≥ 8 g/dL) were a history of coronary artery disease (OR 2.09; 95% CI 1.29–3.41), the presence of acute bleeding (OR 2.44; 95% CI 1.53–3.94), and older age (OR 1.02 per year; 95% CI 1.01–1.04). Conclusion: RBC transfusion in the ED was an everyday concern and involved patients with heterogeneous medical situations and severity. Pre-transfusion Hb levels were rather restrictive. Almost half of transfusions were given for acute bleeding, which was associated with a higher Hb threshold.
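
To show how such odds ratios are typically derived, here is a hedged sketch with simulated data: a multivariable logistic regression for the binary outcome Hb ≥ 8 g/dL, with ORs and 95% CIs obtained by exponentiating the fitted coefficients. Variable names follow the abstract; nothing here reproduces the study's data.

```python
# Sketch: odds ratios with 95% CIs from a multivariable logistic model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 529  # number of transfused patients in the study
df = pd.DataFrame({
    "coronary_artery_disease": rng.integers(0, 2, n),
    "acute_bleeding": rng.integers(0, 2, n),
    "age": rng.integers(18, 100, n),
})
df["hb_ge_8"] = rng.binomial(1, 0.2, n)  # placeholder outcome

X = sm.add_constant(df[["coronary_artery_disease", "acute_bleeding", "age"]])
fit = sm.Logit(df["hb_ge_8"], X).fit(disp=False)

# Exponentiate coefficients and confidence bounds to get ORs and CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.round(2))
```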


Author(s):  
Kazutaka Uchida
Junichi Kouno
Shinichi Yoshimura
Norito Kinjo
Fumihiro Sakakibara
...  

In conjunction with recent advancements in machine learning (ML), such technologies have been applied in various fields owing to their high predictive performance, and we aimed to develop a prehospital stroke scale with ML. We conducted a multi-center retrospective and prospective cohort study. The training cohort comprised eight centers in Japan from June 2015 to March 2018, and the test cohort comprised 13 centers from April 2019 to March 2020. We used three different ML algorithms (logistic regression, random forests, and XGBoost) to develop the models. The main outcomes were large vessel occlusion (LVO), intracranial hemorrhage (ICH), subarachnoid hemorrhage (SAH), and cerebral infarction (CI) other than LVO. Predictive ability was validated in the test cohort with accuracy, positive predictive value, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and F score. The training cohort included 3178 patients (337 LVO, 487 ICH, 131 SAH, and 676 CI cases), and the test cohort included 3127 patients (183 LVO, 372 ICH, 90 SAH, and 577 CI cases). The overall accuracies were 0.65, and the positive predictive values, sensitivities, specificities, AUCs, and F scores were stable in the test cohort; the classification abilities were fair for all ML models. The AUCs for LVO of logistic regression, random forests, and XGBoost were 0.89, 0.89, and 0.88, respectively, in the test cohort, higher than those of previously reported prediction models for LVO. The ML models developed to predict the probability and type of stroke at the prehospital stage showed superior predictive ability.
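
A hedged sketch of the multiclass setup: one model per framework predicting the stroke type (LVO, ICH, SAH, or other CI), scored with macro one-vs-rest AUC. Scikit-learn's gradient boosting is swapped in for XGBoost so the sketch carries no extra dependency, and the data are synthetic.

```python
# Sketch: three multiclass models for stroke-type prediction, scored by
# macro one-vs-rest AUC on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Four classes standing in for LVO, ICH, SAH, and other CI.
X, y = make_classification(n_samples=3000, n_features=15, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("logistic_regression", LogisticRegression(max_iter=2000)),
                  ("random_forest", RandomForestClassifier(n_estimators=300)),
                  ("gradient_boosting", GradientBoostingClassifier())]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr")
    print(f"{name}: macro OvR AUC = {auc:.2f}")
```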


Author(s):  
L. Orazi
A. Rota
B. Reggiani

Laser surface hardening is growing rapidly in industrial applications owing to its high flexibility, accuracy, cleanness, and energy efficiency. However, experimental process optimization can be a tricky task because of the number of parameters involved, which suggests alternative approaches such as reliable numerical simulation. Conventional laser hardening models compute the achieved hardness from microstructure predictions driven by carbon diffusion during the thermal cycle of the process. Nevertheless, this approach is very time consuming and does not allow real, complex products to be simulated during laser treatment. To overcome this limitation, a novel simplified approach to laser surface hardening modelling is presented and discussed. The basic assumption is to neglect austenite homogenization, owing to the short process time and the insufficient carbon diffusion during the heating phase. In the present work, this assumption is verified experimentally through nano-hardness measurements, performed by atomic force microscopy (AFM), on C45 carbon steel samples treated both by laser and in an oven.
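
The core assumption invites a quick order-of-magnitude check. Assuming a representative carbon diffusivity in austenite near the austenitization temperature of roughly 1e-11 m²/s and a heating time of order 0.1 s, both assumed textbook-scale values rather than figures from the paper, the diffusion length works out as:

```latex
% Order-of-magnitude estimate of the carbon diffusion length during the
% heating phase (assumed values, not figures from the paper).
\[
  x \approx \sqrt{D\,t}
    \approx \sqrt{10^{-11}\,\mathrm{m^2/s} \times 0.1\,\mathrm{s}}
    = 10^{-6}\,\mathrm{m} = 1\,\mu\mathrm{m}
\]
```

A micrometre-scale diffusion length is small compared with the prior ferrite-pearlite structure of a normalized medium-carbon steel, typically tens of micrometres, which is consistent with the assumption that the austenite cannot homogenize during heating.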


2021
Vol 22 (Supplement_1)
Author(s):
T Heseltine
SW Murray
RL Jones
M Fisher
B Ruzsics

Funding acknowledgements: none. On behalf of the Liverpool Multiparametric Imaging Collaboration. Background: The coronary artery calcium (CAC) score is a well-established technique for stratifying an individual's cardiovascular disease (CVD) risk, and several large registries have incorporated CAC scoring into CVD risk prediction models to enhance their accuracy. Hepatosteatosis (HS) has been shown to be an independent predictor of CVD events and can be measured on non-contrast computed tomography (CT). We sought to undertake a contemporary, comprehensive assessment of the influence of HS on CAC score alongside traditional CVD risk factors; in patients with HS it may be beneficial to offer routine CAC screening to evaluate CVD risk and create opportunities for earlier primary prevention. Methods: We performed a retrospective, observational analysis at a high-volume cardiac CT centre, analysing consecutive CT coronary angiography (CTCA) studies. All patients referred for investigation of chest pain over a 28-month period (June 2014 to November 2016) were included; patients with established CVD were excluded. The cardiac findings were reported by a cardiologist, and the studies were retrospectively analysed by two independent radiologists for the presence of HS. Patients with a CAC score of zero and those with a CAC score greater than zero were compared on demographics and cardiac risk factors, with a multivariate analysis adjusting for established risk factors. A binomial logistic regression model was developed to assess the association between the presence of HS and increasing strata of CAC. Results: In total, 1499 patients were referred for CTCA without prior evidence of CVD. The assessment of HS was completed in 1195 (79.7%), and a CAC score was available for 1103 (92.3%) of these; 466 had CVD and 637 did not. The prevalence of HS was significantly higher in those with CVD than in those without CVD on CTCA (51.3% versus 39.9%, p = 0.007). Male sex (50.7% versus 36.1%, p < 0.001), age (59.4 ± 13.7 versus 48.1 ± 13.6 years, p < 0.001), and diabetes (12.4% versus 6.9%, p = 0.04) were also significantly more frequent in the group with CAC than in the group with a CAC score of zero. HS was associated with increasing strata of CAC score compared with a CAC score of zero (CAC 1-100: OR 1.47, p = 0.01; CAC 101-400: OR 1.68, p = 0.02; CAC > 400: OR 1.42, p = 0.14); the association became non-significant in the highest stratum. Conclusion: We found a significant association of increasing age, male sex, diabetes, and HS with the presence of CAC. HS was also associated with a more severe phenotype of CVD based on the multinomial logistic regression model. The weakened association in the highest CAC stratum (> 400) likely reflects the low number of patients in this group and is probably a type II error. Based on these findings, it may be appropriate to offer routine CVD risk stratification to all patients diagnosed with HS.
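
As a hedged sketch of the strata analysis, the snippet below fits a multinomial logistic regression of CAC category on hepatosteatosis and covariates with simulated data, reading exponentiated coefficients as odds ratios against the CAC = 0 reference group. It illustrates the method only, not the study's results.

```python
# Sketch: multinomial logistic regression over CAC strata (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1103
df = pd.DataFrame({
    "hepatosteatosis": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "age": rng.integers(30, 85, n),
    "diabetes": rng.integers(0, 2, n),
})
# 0: CAC = 0 (reference), 1: CAC 1-100, 2: CAC 101-400, 3: CAC > 400.
df["cac_stratum"] = rng.integers(0, 4, n)

X = sm.add_constant(df[["hepatosteatosis", "male", "age", "diabetes"]])
fit = sm.MNLogit(df["cac_stratum"], X).fit(disp=False)
print(np.exp(fit.params).round(2))  # ORs vs the CAC = 0 baseline
```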


Author(s):  
Byunghyun Kang
Cheol Choi
Daeun Sung
Seongho Yoon
Byoung-Ho Choi

In this study, friction tests are performed, via a custom-built friction tester, on specimens of the natural rubber used in automotive suspension bushings. By analyzing problematic suspension bushings, eleven candidate factors that influence squeak noise are selected: surface lubrication, hardness, vulcanization condition, surface texture, additive content, sample thickness, thermal aging, temperature, surface moisture, friction speed, and normal force. Through friction tests, the changes in frictional force and squeak noise occurrence are investigated across various levels of the influencing factors. The degree of correlation of frictional force and squeak noise occurrence with these factors is determined through statistical tests, and the relationship between frictional force and squeak noise occurrence is discussed on the basis of the test results. Squeak noise prediction models that account for the interactions among the influencing factors are constructed through both multiple logistic regression and neural network analysis, and their accuracies are evaluated by comparing predicted and measured results. The multiple logistic regression and neural network models predict the occurrence of squeak noise with accuracies of 88.2% and 87.2%, respectively.
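
A hedged sketch of the two modelling routes described: a multiple logistic regression with pairwise interaction terms versus a small neural network, both classifying squeak-noise occurrence on synthetic data standing in for the eleven measured factors.

```python
# Sketch: logistic regression with interaction terms vs a small neural
# network for squeak-noise classification (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Eleven influencing factors (lubrication, hardness, ..., normal force).
X, y = make_classification(n_samples=1000, n_features=11, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# Pairwise interactions approximate the factor interactions the study models.
logit = make_pipeline(PolynomialFeatures(degree=2, interaction_only=True,
                                         include_bias=False),
                      StandardScaler(),
                      LogisticRegression(max_iter=5000))
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=2))
for name, model in [("logistic_regression", logit), ("neural_network", mlp)]:
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")
```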

