Machine-learning-based hemiplegic-gait detection using an inertial sensor located freely in a pocket (Preprint)

2021 ◽  
Author(s):  
Hangsik Shin

BACKGROUND In most previous studies, the acceleration sensor is attached to a fixed position on the body for gait analysis. However, for daily use, wearing a sensor in a fixed position may cause discomfort. In addition, since an acceleration sensor is built into the smartphones that people carry anyway, it is more efficient to use such a sensor than to wear a separate one. OBJECTIVE We aim to distinguish hemiplegic from normal walking using the inertial signal measured by an acceleration sensor and a gyroscope. METHODS We used a machine-learning model based on a convolutional neural network to classify hemiplegic gait, taking as input, without any pre-processing, the acceleration and angular velocity signals obtained from a device placed freely in the pocket. We evaluated the performance of the developed model in a clinical trial comprising a walking test on 42 subjects (57.8 ± 13.8 years, 165.1 ± 9.3 cm, 66.3 ± 12.3 kg), including 21 hemiplegic patients. RESULTS The developed model showed an accuracy of 0.86, a precision of 0.90, a recall of 0.99, an area under the receiver operating characteristic curve of 0.91, and an area under the precision-recall curve of 0.97. CONCLUSIONS We confirmed that hemiplegic gait can be distinguished by a machine-learning model, without additional pre-processing or feature extraction, from a 6-axis inertial signal measured at an arbitrary position in a pocket, as with a smartphone. CLINICALTRIAL SCH2016-130
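A minimal sketch of how such a raw 6-axis stream (three acceleration plus three angular-velocity channels) can be segmented into fixed-length, channels-first windows for a convolutional network; the window length and stride here are illustrative assumptions, not the paper's values:

```python
# Sketch: slice a raw 6-axis inertial stream (ax, ay, az, gx, gy, gz per
# sample) into fixed-length windows suitable as CNN input.
# Window length and stride are illustrative assumptions.
def make_windows(samples, window=128, stride=64):
    """samples: list of 6-tuples; returns a list of [6][window] arrays."""
    windows = []
    for start in range(0, len(samples) - window + 1, stride):
        seg = samples[start:start + window]
        # transpose to channels-first layout: 6 channels x `window` timesteps
        windows.append([[s[ch] for s in seg] for ch in range(6)])
    return windows

stream = [(i, i, i, i, i, i) for i in range(300)]  # fake inertial stream
w = make_windows(stream)
print(len(w), len(w[0]), len(w[0][0]))  # windows, channels, timesteps
```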

PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0240200
Author(s):  
Miguel Marcos ◽  
Moncef Belhassen-García ◽  
Antonio Sánchez-Puente ◽  
Jesús Sampedro-Gomez ◽  
Raúl Azibeiro ◽  
...  

Background Efficient and early triage of hospitalized Covid-19 patients to detect those at higher risk of severe disease is essential for appropriate case management. Methods We trained, validated, and externally tested a machine-learning model to identify, from clinical and laboratory features obtained at admission, patients who will die or require mechanical ventilation during hospitalization. A development cohort of 918 Covid-19 patients was used for training and internal validation, and 352 patients from another hospital were used for external testing. Performance of the model was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity. Results A total of 363 of 918 (39.5%) and 128 of 352 (36.4%) Covid-19 patients from the development and external testing cohorts, respectively, required mechanical ventilation or died during hospitalization. In the development cohort, the model obtained an AUC of 0.85 (95% confidence interval [CI], 0.82 to 0.87) for predicting severity of disease progression. Variables ranked according to their contribution to the model were the peripheral blood oxygen saturation (SpO2)/fraction of inspired oxygen (FiO2) ratio, age, estimated glomerular filtration rate, procalcitonin, C-reactive protein, updated Charlson comorbidity index, and lymphocytes. In the external testing cohort, the model achieved an AUC of 0.83 (95% CI, 0.81 to 0.85). The model is deployed in an open-source calculator in which Covid-19 patients are individually stratified at admission as being at high or non-high risk of severe disease progression. Conclusions This machine-learning model, applied at hospital admission, predicts risk of severe disease progression in Covid-19 patients.
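Like most studies collected here, this one summarizes discrimination with the AUC. Since the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case, it can be computed directly from ranked score pairs; a stdlib-only sketch on toy data (not the study's):

```python
# Sketch: area under the ROC curve as the Mann-Whitney U statistic --
# the fraction of (positive, negative) score pairs the model ranks
# correctly, counting ties as half a win.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: a higher score should indicate higher risk
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 1]))
```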


Author(s):  
Winston T Wang ◽  
Charlotte L Zhang ◽  
Kang Wei ◽  
Ye Sang ◽  
Jun Shen ◽  
...  

Abstract In COVID-19 there is an urgent unmet need to predict, at the time of hospital admission, which patients will recover from the disease and how fast they will recover, in order to deliver personalized treatments and to allocate hospital resources properly so that healthcare systems do not become overwhelmed. To this end, we combined clinically salient CT imaging data synergistically with laboratory testing data in an integrative machine-learning model to predict organ-specific recovery of patients from COVID-19. We trained and validated our model in 285 patients on each separate major organ system impacted by COVID-19, including the renal, pulmonary, immune, cardiac, and hepatic systems. To greatly enhance the speed and utility of our model, we applied an artificial intelligence method to segment and classify regions on CT imaging, from which interpretable data could be fed directly into the predictive machine-learning model for overall recovery. Across all organ systems we achieved validation-set area under the receiver operating characteristic curve (AUC) values for organ-specific recovery ranging from 0.80 to 0.89, and significant overall recovery prediction in Kaplan-Meier analyses. This demonstrates that the synergistic use of an AI framework applied to CT lung imaging and a machine-learning model that integrates laboratory test data with imaging data can accurately predict the overall recovery of COVID-19 patients from baseline characteristics.


Author(s):  
Miguel Marcos ◽  
Moncef Belhassen-Garcia ◽  
Antonio Sanchez- Puente ◽  
Jesus Sampedro-Gomez ◽  
Raul Azibeiro ◽  
...  

BACKGROUND: Efficient and early triage of hospitalized Covid-19 patients to detect those with higher risk of severe disease is essential for appropriate case management. METHODS: We trained, validated, and externally tested a machine-learning model to early identify patients who will die or require mechanical ventilation during hospitalization from clinical and laboratory features obtained at admission. A development cohort with 918 Covid-19 patients was used for training and internal validation, and 352 patients from another hospital were used for external testing. Performance of the model was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity and specificity. RESULTS: A total of 363 of 918 (39.5%) and 128 of 352 (36.4%) Covid-19 patients from the development and external testing cohort, respectively, required mechanical ventilation or died during hospitalization. In the development cohort, the model obtained an AUC of 0.85 (95% confidence interval [CI], 0.82 to 0.87) for predicting severity of disease progression. Variables ranked according to their contribution to the model were the peripheral blood oxygen saturation (SpO2)/fraction of inspired oxygen (FiO2) ratio, age, estimated glomerular filtration rate, procalcitonin, C-reactive protein, updated Charlson comorbidity index and lymphocytes. In the external testing cohort, the model performed an AUC of 0.83 (95% CI, 0.81 to 0.85). This model is deployed in an open source calculator, in which Covid-19 patients at admission are individually stratified as being at high or non-high risk for severe disease progression. CONCLUSIONS: This machine-learning model, applied at hospital admission, predicts risk of severe disease progression in Covid-19 patients.


2020 ◽  
Author(s):  
Ka Man Fong ◽  
Shek Yin Au ◽  
George Wing Yiu Ng ◽  
Anne Kit Hung Leung

Abstract Background: Researchers have long been struggling to improve disease severity scores for mortality prediction in the ICU. The digitalization of medical health records and advances in computational power have promoted the use of machine learning in critical care. This study aimed to develop an interpretable machine-learning model using multicenter datasets, and to compare it with the APACHE IV score, in predicting hospital mortality of patients admitted to the ICU. Methods: The datasets were assembled from the eICU database, comprising 136,145 patients across 208 hospitals throughout the U.S., and from 5 ICUs in Hong Kong, comprising 10,909 patients. The two datasets were first combined into one large dataset before an 80:20 stratified split into training and test sets. The XGBoost machine-learning algorithm was chosen to predict hospital mortality. The variables in the model were the same as those included in the APACHE IV score. The discrimination and calibration of the model were assessed, and the model was interpreted using Shapley Additive exPlanations (SHAP) values. Results: Of the 147,054 patients in the whole cohort, hospital mortality was 9.3%. The area under the precision-recall curve was 0.57 for the XGBoost algorithm and 0.49 for APACHE IV. Similarly, XGBoost reached an area under the receiver operating characteristic curve (AUROC) of 0.90, while APACHE IV had an AUROC of 0.87. Additionally, the XGBoost algorithm showed better calibration than APACHE IV. The three most important variables were age, heart rate, and whether the patient was on a ventilator. Conclusions: The severity score developed by a machine-learning model using multicenter datasets outperformed APACHE IV in predicting hospital mortality for patients admitted to the ICU.
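The area under the precision-recall curve compared above can be computed as average precision; a stdlib-only sketch on toy scores and labels (not the study's data):

```python
# Sketch: area under the precision-recall curve via average precision --
# the sum of precision values at each rank where a true positive occurs,
# divided by the total number of positives.
def average_precision(scores, labels):
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, total, n_pos = 0, 0.0, sum(labels)
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            total += tp / rank  # precision at this recall step
    return total / n_pos

print(average_precision([0.9, 0.7, 0.5, 0.2], [1, 0, 1, 0]))
```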


2020 ◽  
Author(s):  
Gang Luo ◽  
Claudia L Nau ◽  
William W Crawford ◽  
Michael Schatz ◽  
Robert S Zeiger ◽  
...  

BACKGROUND Asthma causes numerous hospital encounters annually, including emergency department visits and hospitalizations. To improve patient outcomes and reduce the number of these encounters, predictive models are widely used to prospectively pinpoint high-risk patients with asthma for preventive care via care management. However, previous models do not have adequate accuracy to achieve this goal well. Adopting the modeling guideline for checking extensive candidate features, we recently constructed a machine learning model on Intermountain Healthcare data to predict asthma-related hospital encounters in patients with asthma. Although this model is more accurate than the previous models, whether our modeling guideline is generalizable to other health care systems remains unknown. OBJECTIVE This study aims to assess the generalizability of our modeling guideline to Kaiser Permanente Southern California (KPSC). METHODS The patient cohort included a random sample of 70.00% (397,858/568,369) of patients with asthma who were enrolled in a KPSC health plan for any duration between 2015 and 2018. We produced a machine learning model via a secondary analysis of 987,506 KPSC data instances from 2012 to 2017 and by checking 337 candidate features to project asthma-related hospital encounters in the following 12-month period in patients with asthma. RESULTS Our model reached an area under the receiver operating characteristic curve of 0.820. When the cutoff point for binary classification was placed at the top 10.00% (20,474/204,744) of patients with asthma having the largest predicted risk, our model achieved an accuracy of 90.08% (184,435/204,744), a sensitivity of 51.90% (2259/4353), and a specificity of 90.91% (182,176/200,391). CONCLUSIONS Our modeling guideline exhibited acceptable generalizability to KPSC and resulted in a model that is more accurate than those formerly built by others. 
After further enhancement, our model could be used to guide asthma care management. INTERNATIONAL REGISTERED REPORT RR2-10.2196/resprot.5039
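The abstract binarises predictions by flagging the top 10% of patients by predicted risk and then reports sensitivity and specificity at that cutoff. A sketch of that thresholding step on toy data (the helper name and values are illustrative, and tie handling at the threshold is an assumption):

```python
# Sketch: binarise predicted risks at the top-k% cutoff (the abstract
# uses the top 10% of patients by predicted risk), then derive
# sensitivity and specificity. Ties at the threshold are all flagged.
def top_fraction_cutoff(risks, labels, fraction=0.10):
    k = max(1, int(len(risks) * fraction))
    threshold = sorted(risks, reverse=True)[k - 1]
    preds = [r >= threshold for r in risks]
    tp = sum(p and y for p, y in zip(preds, labels))
    tn = sum((not p) and (not y) for p, y in zip(preds, labels))
    pos, neg = sum(labels), len(labels) - sum(labels)
    return tp / pos, tn / neg  # sensitivity, specificity

risks = [i / 10 for i in range(1, 11)]  # toy predicted risks
labels = [0] * 9 + [1]                  # highest-risk patient is the true positive
print(top_fraction_cutoff(risks, labels))  # -> (1.0, 1.0)
```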


2021 ◽  
Vol 11 (2) ◽  
pp. 529-534
Author(s):  
Kareen Teo ◽  
Ching Wai Yong ◽  
Joon Huang Chuah ◽  
Belinda Pingguan Murphy ◽  
Khin Wee Lai

Hospital readmission shortly after discharge contributes to rising medical care costs. Attempts have been made to reduce readmission rates by predicting patients at high risk of this event on the basis of unstructured clinical notes. The discharge summary, as part of the clinical record, is effective for modeling readmission risk. However, the predictive value of notes written upon discharge offers few opportunities to reduce the chance of readmission, because the target patient may already have been discharged. This paper presents the use of early clinical notes to build a machine-learning model that predicts readmission at 48 h after a patient's admission. Extensive feature engineering, testing of multiple algorithms, and algorithm tuning were performed to enhance model performance. A risk-scoring framework that combines data-driven and knowledge-driven feature scores in the risk computation was developed. The proposed predictive model showed better prognostic capability than the machine-learning model alone in terms of the ability to detect readmission. Specifically, the proposed algorithm showed improvements of 11%–28% in sensitivity and 1%–3% in the area under the receiver operating characteristic curve.
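A sketch of the kind of blended scoring the abstract describes, combining a data-driven model score with a knowledge-driven score; the linear blend and its 0.7/0.3 weights are purely illustrative assumptions, not the paper's method:

```python
# Sketch: combine a data-driven model score with a knowledge-driven score
# (e.g. clinician-weighted features) into one readmission risk score.
# The 0.7/0.3 blend weights are illustrative assumptions.
def combined_risk(model_score, knowledge_score, w_model=0.7, w_knowledge=0.3):
    """Both inputs are assumed already scaled to [0, 1]."""
    return w_model * model_score + w_knowledge * knowledge_score

print(combined_risk(0.8, 0.5))  # -> 0.71
```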


2020 ◽  
pp. 2001104 ◽  
Author(s):  
Guangyao Wu ◽  
Pei Yang ◽  
Yuanliang Xie ◽  
Henry C. Woodruff ◽  
Xiangang Rao ◽  
...  

Background The outbreak of coronavirus disease 2019 (COVID-19) has globally strained medical resources and caused significant mortality. Objective To develop and validate a machine-learning model based on clinical features for severity risk assessment and triage of COVID-19 patients at hospital admission. Methods 725 patients were used to train and validate the model, including a retrospective cohort of 299 hospitalised COVID-19 patients at Wuhan, China, from December 23, 2019, to February 13, 2020, and five cohorts with 426 patients from eight centers in China, Italy, and Belgium, from February 20, 2020, to March 21, 2020. The main outcome was the onset of severe or critical illness during hospitalisation. Model performance was quantified using the area under the receiver operating characteristic curve (AUC) and metrics derived from the confusion matrix. Results The median age was 50.0 years and 137 (45.8%) were men in the retrospective cohort; the median age was 62.0 years and 236 (55.4%) were men in the five cohorts. The model was prospectively validated on the five cohorts, yielding AUCs ranging from 0.84 to 0.89, accuracies from 74.4% to 87.5%, sensitivities from 75.0% to 96.9%, and specificities from 57.5% to 88.0%, all of which performed better than the pneumonia severity index. The cut-off values separating the low-, medium-, and high-risk probabilities were 0.21 and 0.80. The online calculators can be found at www.covid19risk.ai. Conclusion The machine-learning model, nomogram, and online calculator might be useful for assessing the onset of severe and critical illness among COVID-19 patients and for triage at hospital admission.
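The reported cut-offs of 0.21 and 0.80 imply a three-tier triage rule over the predicted severity probability; a sketch (the boundary handling at the cut-offs is an assumption, as the abstract does not specify it):

```python
# Sketch: stratify a predicted severity probability into the three risk
# tiers the abstract reports, using its stated cut-offs of 0.21 and 0.80.
# Whether the boundaries themselves fall in the lower or upper tier is an
# assumption here.
def triage(probability, low_cut=0.21, high_cut=0.80):
    if probability < low_cut:
        return "low"
    if probability < high_cut:
        return "medium"
    return "high"

print(triage(0.1), triage(0.5), triage(0.9))  # -> low medium high
```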


Sarcoma ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Ieva Malinauskaite ◽  
Jeremy Hofmeister ◽  
Simon Burgermeister ◽  
Angeliki Neroladaki ◽  
Marion Hamard ◽  
...  

Distinguishing lipoma from liposarcoma is challenging on conventional MRI examination. In case of uncertain diagnosis following MRI, a further invasive procedure (percutaneous biopsy or surgery) is often required to allow for a diagnosis based on histopathological examination. Radiomics and machine learning allow several types of pathologies encountered on radiological images to be distinguished automatically and reliably. The aim of the study was to assess the contribution of radiomics and machine learning to the differentiation between soft-tissue lipoma and liposarcoma on preoperative MRI, and to assess the diagnostic accuracy of a machine-learning model compared with musculoskeletal radiologists. 86 radiomics features were retrospectively extracted from volumes of interest on T1-weighted spin-echo 1.5 and 3.0 Tesla MRI of 38 soft-tissue tumors (24 lipomas and 14 liposarcomas, based on histopathological diagnosis). These radiomics features were then used to train a machine-learning classifier to distinguish lipoma and liposarcoma. The generalization performance of the machine-learning model was assessed using Monte-Carlo cross-validation and receiver operating characteristic curve analysis (ROC-AUC). Finally, the performance of the machine-learning model was compared with the accuracy of three specialized musculoskeletal radiologists using the McNemar test. The machine-learning classifier accurately distinguished lipoma and liposarcoma, with a ROC-AUC of 0.926. Notably, it performed better than the three specialized musculoskeletal radiologists reviewing the same patients, who achieved ROC-AUCs of 0.685, 0.805, and 0.785. Despite being developed on few cases, the trained machine-learning classifier accurately distinguishes lipoma and liposarcoma on preoperative MRI, with better performance than specialized musculoskeletal radiologists.
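A sketch of the Monte-Carlo cross-validation procedure mentioned above: repeated random train/test splits, with the test metric averaged across repetitions. The split ratio, repeat count, and the trivial majority-class "classifier" are all illustrative assumptions:

```python
import random

# Sketch: Monte-Carlo cross-validation -- repeated random train/test
# splits, averaging the chosen test metric across repetitions.
def monte_carlo_cv(data, evaluate, repeats=100, test_frac=0.3, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        shuffled = data[:]
        rng.shuffle(shuffled)
        n_test = max(1, int(len(shuffled) * test_frac))
        test, train = shuffled[:n_test], shuffled[n_test:]
        scores.append(evaluate(train, test))
    return sum(scores) / len(scores)

# Illustrative stand-in for a real classifier: predict the training set's
# majority class and score accuracy on the held-out split.
def majority_accuracy(train, test):
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return sum(y == majority for _, y in test) / len(test)

data = [(i, i % 2) for i in range(20)]  # toy (feature, label) pairs
print(monte_carlo_cv(data, majority_accuracy))
```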


2021 ◽  
Vol 11 (11) ◽  
pp. 1055
Author(s):  
Pei-Chen Lin ◽  
Kuo-Tai Chen ◽  
Huan-Chieh Chen ◽  
Md. Mohaimenul Islam ◽  
Ming-Chin Lin

Accurate stratification of sepsis can effectively guide the triage of patient care and shared decision making in the emergency department (ED). However, previous research on sepsis identification models has focused mainly on ICU patients, and discrepancies in model performance between development and external validation datasets are rarely evaluated. The aim of our study was to develop and externally validate a machine-learning model to stratify sepsis patients in the ED. We retrospectively collected clinical data from two geographically separate institutes that provided different levels of care at different time periods. The Sepsis-3 criteria were used as the reference standard in both datasets for identifying true sepsis cases. An eXtreme Gradient Boosting (XGBoost) algorithm was developed to stratify sepsis patients, and the performance of the model was compared with traditional clinical sepsis tools: the quick Sequential Organ Failure Assessment (qSOFA) and the Systemic Inflammatory Response Syndrome (SIRS) criteria. There were 8296 patients (1752 (21%) septic) in the development dataset and 1744 patients (506 (29%) septic) in the external validation dataset. The mortality of septic patients in the development and validation datasets was 13.5% and 17%, respectively. In the internal validation, XGBoost achieved an area under the receiver operating characteristic curve (AUROC) of 0.86, exceeding SIRS (0.68) and qSOFA (0.56). The performance of XGBoost deteriorated in the external validation (the AUROCs of XGBoost, SIRS and qSOFA were 0.75, 0.57 and 0.66, respectively). Heterogeneity in patient characteristics, such as sepsis prevalence, severity, age, comorbidity and infection focus, could reduce model performance. Our model showed good discriminative capability for the identification of sepsis patients and outperformed the existing sepsis identification tools. Implementation of the ML model in the ED can facilitate timely sepsis identification and treatment. However, dataset discrepancies should be carefully evaluated before implementing the ML approach in clinical practice. This finding reinforces the necessity for future studies to perform external validation to ensure the generalisability of any developed ML approach.
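The qSOFA comparator used above is a simple three-item bedside score; a sketch using its standard criteria (respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, Glasgow Coma Scale < 15), with a score of 2 or more flagging elevated risk:

```python
# Sketch: the quick Sequential Organ Failure Assessment (qSOFA) score,
# one point per criterion: respiratory rate >= 22/min, systolic blood
# pressure <= 100 mmHg, and altered mentation (Glasgow Coma Scale < 15).
# A score >= 2 flags elevated risk of poor outcome.
def qsofa(resp_rate, sys_bp, gcs):
    return int(resp_rate >= 22) + int(sys_bp <= 100) + int(gcs < 15)

print(qsofa(24, 95, 15))  # tachypnoea + hypotension -> 2
```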

