Application of Machine Learning Algorithms to Depression Screening and Attempt at Pattern Extraction of Patient-Reported Outcomes that Negatively Affect Classification Accuracy (Preprint)

2017 ◽  
Author(s):  
Junetae Kim ◽  
Byungtae Lee ◽  
Sae Byul Lee ◽  
Il Yong Chung ◽  
Sei Hyun Ahn ◽  
...  

BACKGROUND Smartphone applications have recently been used as a breakthrough technology for monitoring mental health conditions in cancer outpatient settings. However, the use of electronic patient-reported outcomes (ePROs) on mental conditions through smartphone applications raises new concerns, including the accuracy of depression screening. Thus, research is essential for improving depression-screening performance. OBJECTIVE This study aims to (1) test whether deep-learning-based algorithms can overcome the limitations of traditional statistical methods in terms of depression-screening accuracy and (2) explore ePRO patterns that adversely affect that accuracy. METHODS As the deep-learning-based algorithm, a feedforward neural network was used; as the traditional statistical method, a random-intercept logistic regression was employed. To explore the ePRO patterns that negatively impact model accuracy, mental fluctuations, missing data, and compounding effects between mental fluctuations and missing data were tested. The performance of the algorithms and the effects of the ePRO patterns were measured through a receiver operating characteristic (ROC) curve comparison test. RESULTS The performance of the deep-learning-based models was superior to that of the traditional statistical approach. Mental fluctuations statistically significantly reduced the accuracy of the depression-screening models, whereas only a weak association between ePRO omissions and screening accuracy was found. Moreover, the compounding effects that negatively affected depression-screening accuracy were statistically significant. CONCLUSIONS Although well-trained deep-learning-based models exhibit excellent performance, they still have limitations. Thus, it is important to focus on data quality when predicting health outcomes from data that are difficult to quantify, such as mental conditions.
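The abstract above compares model performance via an ROC comparison test. As a minimal illustrative sketch (not the authors' code), the AUC of two screening models on the same labels can be estimated with the rank-based (Mann–Whitney) formula and their difference bootstrapped for a confidence interval; all labels and scores in the example are invented:

```python
import random

def auc(labels, scores):
    """Rank-based (Mann-Whitney) estimate of the ROC AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_diff(labels, scores_a, scores_b, n_boot=2000, seed=0):
    """Bootstrap a 95% interval for AUC(model A) - AUC(model B)."""
    rng = random.Random(seed)
    n = len(labels)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [labels[i] for i in idx]
        if len(set(yb)) < 2:  # resample must contain both classes
            continue
        diffs.append(auc(yb, [scores_a[i] for i in idx])
                     - auc(yb, [scores_b[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates to 0.75. A published comparison would typically use a paired test such as DeLong's rather than this simple bootstrap.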

2015 ◽  
Vol 22 (3) ◽  
pp. 354-361 ◽  
Author(s):  
Natalie F Baruch ◽  
Ellen H O’Donnell ◽  
Bonnie I Glanz ◽  
Ralph HB Benedict ◽  
Alexander J Musallam ◽  
...  

Background: Little is known about long-term cognitive and patient-reported outcomes of pediatric-onset multiple sclerosis (POMS). Objective: The objective of this paper is to compare cognitive and patient-reported outcomes in adults with POMS vs. adult-onset MS (AOMS). Methods: We compared standardized patient-reported measures (MSQOL-54, MFIS, CES-D) and the SDMT in adult patients with MS onset prior to and after age 18, using data gathered in the Comprehensive Longitudinal Investigations in MS at Brigham and Women’s Hospital (CLIMB) study. Results: Fifty-one POMS and 550 AOMS patients were compared. SDMT scores were significantly lower in POMS after adjusting for age (−7.57; 95% CI: −11.72 to −3.43; p < 0.001), but not after adjusting for disease duration. Estimated group differences demonstrated lower normative z scores in POMS vs. AOMS in unadjusted analysis (−0.74; 95% CI: −1.18 to −0.30; p = 0.0009) and after adjusting for disease duration (−0.60; 95% CI: −1.05 to −0.15; p = 0.0097). Findings were unchanged in the subset of POMS patients diagnosed prior to age 18. In unadjusted and adjusted analyses, no significant differences were observed in health-related quality of life, fatigue, depression, or social support between POMS and AOMS. Conclusions: Younger age of onset was associated with more impairment in information-processing speed in adults with POMS compared to AOMS, which remained significant when controlling for disease duration in age-normed analysis. The two groups were similar in terms of patient-reported outcomes, suggesting similar qualitative experiences of MS.


2020 ◽  
Author(s):  
Maike Richter ◽  
Michael Storck ◽  
Rogerio Blitz ◽  
Janik Goltermann ◽  
Juliana Seipp ◽  
...  

Multivariate predictive models have revealed promising results for the individual prediction of treatment response and relapse risk, as well as for differential diagnosis in affective disorders. Yet, in order to translate personalized predictive modelling from the research context to psychiatric clinical routine, standardized collection of information of sufficient detail and temporal resolution in day-to-day clinical care is needed, on which machine learning algorithms can be trained. Digital collection of patient-reported outcomes (PROs) is a time- and cost-efficient approach to obtain such data throughout the treatment course. However, it remains unclear whether patients with severe affective disorders are willing and able to participate in such efforts, whether the feasibility of such systems varies with individual patient characteristics, and whether digitally acquired patient-reported outcomes are of sufficient diagnostic validity. To address these questions, we implemented a system for continuous digital collection of patient-reported outcomes via tablet computers throughout inpatient treatment for affective disorders at the Department of Psychiatry at the University of Muenster. 364 affective disorder patients were approached, of whom 66.5% could be recruited to participate in the study. An average of four assessments were completed during the treatment course, and none of the participants dropped out of the study prematurely. 89.3% of participants did not require additional support during data entry. Need for support with tablet handling and a slower data-entry pace were predicted by older age, whereas depression severity at baseline did not influence these measures. Patient-reported outcomes of depression severity showed high agreement with standardized external assessments by a clinical interviewer.
Our results indicate that continuous digital collection of patient-reported outcomes is a feasible, accessible and valid method for longitudinal data collection in psychiatric routine, which will eventually facilitate the identification of individual risk and resilience factors for affective disorders and pave the way towards personalized psychiatric care.


2017 ◽  
pp. 1-10 ◽  
Author(s):  
Nicholas G. Wysham ◽  
Steven P. Wolf ◽  
Gregory Samsa ◽  
Amy P. Abernethy ◽  
Thomas W. LeBlanc

Purpose Routinely collected patient-reported outcomes (PROs) could provide invaluable data to a patient-centered learning health system but often suffer from high rates of missing data in clinical trials. We analyzed our experience with PROs to understand patterns of missing data when using electronic collection as part of routine clinical care. Methods This is an analysis of a prospectively collected observational database of electronic PROs captured as part of routine clinical care in four different outpatient oncology clinics at an academic referral center. Results More than 24,000 clinical encounters from 7,655 unique patients are included. Data were collected via an electronic tablet-based survey instrument (Patient Care Monitor, version 2.0), at the time of clinical care, as part of routine care processes. Missing instruments (ie, no items completed) were submitted for 6.8% of clinical encounters, and 15.8% of encounters had missing items. Nearly 90% of all encounters involved < 10% missing items. In multivariable analyses, younger age, private health insurance, being seen in the breast oncology clinic, less time spent on the instrument, and longitudinal care were significantly associated with less missingness. Conclusion Embedding collection of electronic PRO data into routine clinical care yielded low rates of missing data in this real-world, prospectively collected database. In contrast to the clinical trial experience, missingness improved with longitudinal care. This approach may help minimize missingness of PROs in research or clinical care settings in support of learning health care systems.
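The three missingness statistics reported above (whole-instrument missingness, any-item missingness, and the share of encounters with under 10% missing items) can be computed per encounter. A hedged sketch, with an invented data layout in which each encounter is a list of item responses and `None` marks a skipped item:

```python
def missingness_summary(encounters):
    """Return (fraction of encounters with no items completed,
               fraction with at least one missing item,
               fraction with < 10% missing items)."""
    n = len(encounters)
    whole_missing = sum(all(v is None for v in enc) for enc in encounters) / n
    any_missing = sum(any(v is None for v in enc) for enc in encounters) / n
    under_10pct = sum(
        sum(v is None for v in enc) / len(enc) < 0.10 for enc in encounters
    ) / n
    return whole_missing, any_missing, under_10pct
```

On a toy database of four encounters, `missingness_summary([[1, 2, 3], [None, None, None], [1, None, 3], [1, 2, 3]])` returns `(0.25, 0.5, 0.5)`.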


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1510-1510
Author(s):  
Ravi Bharat Parikh ◽  
Jill Schnall ◽  
Manqing Liu ◽  
Peter Edward Gabriel ◽  
Corey Chivers ◽  
...  

1510 Background: Machine learning (ML) algorithms based on electronic health record (EHR) data have been shown to accurately predict mortality risk among patients with cancer, with areas under the curve (AUC) generally greater than 0.80. While patient-reported outcomes (PROs) may also predict mortality among patients with cancer, it is unclear whether routinely collected PROs improve the predictive performance of EHR-based ML algorithms. Methods: This cohort study included 8600 patients with cancer who had an outpatient encounter at one of 18 medical oncology practices in a large academic health system between July 1st, 2019 and January 1st, 2020. 4692 (54.9%) patients completed assessments of symptoms, performance status, and quality of life from the PRO version of the Common Terminology Criteria for Adverse Events and the Patient-Reported Outcomes Measurement Information System Global v.1.2 scales. We hypothesized that ML models predicting 180-day all-cause mortality based on EHR + PRO data would improve AUC compared to ML models based on EHR data alone. We assessed univariate and adjusted associations between each PRO and 180-day mortality. To train the EHR-only model, we fit a Least Absolute Shrinkage and Selection Operator (LASSO) regression using 192 EHR demographic, comorbidity, and laboratory variables. To train the EHR + PRO model, we used a two-phase approach to fit a model using EHR data for all patients and PRO data for those who completed assessments. To test our hypothesis, we compared the bootstrapped AUC, area under the precision-recall curve (AUPRC), and sensitivity at a 20% risk threshold for both models. Results: 464 (5.4%) patients died within 180 days of the encounter. Decreased quality of life, functional status, and appetite were associated with greater 180-day mortality (Table). Compared to the EHR-only model, the EHR + PRO model significantly improved AUC (0.86 [95% CI 0.85-0.86] vs.
0.80 [95% CI 0.80-0.81]), AUPRC (0.40 [95% CI 0.37-0.42] vs. 0.30 [95% CI 0.28-0.32]), and sensitivity (0.45 [95% CI 0.42-0.48] vs. 0.33 [95% CI 0.30-0.35]). Conclusions: Routinely collected PROs augment EHR-based ML mortality risk algorithms. ML algorithms based on EHR and PRO data may facilitate earlier supportive care for patients with cancer. Association of PROs with 180-day mortality.[Table: see text]
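One of the metrics compared above is sensitivity at a 20% risk threshold: of the patients who actually died within 180 days, the fraction whose predicted risk reached the threshold. A minimal sketch of that calculation, with invented labels and risk scores rather than the study's data:

```python
def sensitivity_at_threshold(labels, risks, threshold=0.20):
    """Fraction of true positives flagged when patients with predicted
    risk at or above `threshold` are classified as high risk."""
    flagged_pos = sum(1 for y, r in zip(labels, risks) if y == 1 and r >= threshold)
    total_pos = sum(labels)
    return flagged_pos / total_pos
```

For instance, with outcomes `[1, 1, 1, 0]` and predicted risks `[0.5, 0.1, 0.3, 0.9]`, two of the three deaths exceed the 20% threshold, so sensitivity is 2/3. Comparing models at a fixed threshold, as the abstract does, holds the clinical decision rule constant while varying the risk model.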


2018 ◽  
Vol 28 (5) ◽  
pp. 1439-1456 ◽  
Author(s):  
Daniel O Scharfstein ◽  
Aidan McDermott

Randomized trials with patient-reported outcomes are commonly plagued by missing data. The analysis of such trials relies on untestable assumptions about the missing data mechanism. To address this issue, it has been recommended that the sensitivity of the trial results to assumptions should be a mandatory reporting requirement. In this paper, we discuss a recently developed methodology (Scharfstein et al., Biometrics, 2018) for conducting sensitivity analysis of randomized trials in which outcomes are scheduled to be measured at fixed points in time after randomization and some subjects prematurely withdraw from study participation. The methodology is explicated in the context of a placebo-controlled randomized trial designed to evaluate a treatment for bipolar disorder. We present a comprehensive data analysis and a simulation study to evaluate the performance of the method. A software package entitled SAMON (R and SAS versions) that implements our methods is available at www.missingdatamatters.org .


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 520-520 ◽  
Author(s):  
André Pfob ◽  
Babak Mehrara ◽  
Jonas Nelson ◽  
Edwin G. Wilkins ◽  
Andrea Pusic ◽  
...  

520 Background: Post-surgical satisfaction with breasts is a key outcome for women undergoing cancer-related mastectomy and reconstruction. Current decision making relies on group-level evidence, which may not offer the optimal choice of treatment for individuals. We developed and validated machine learning algorithms to predict individual post-surgical breast satisfaction, aiming to facilitate individualized, data-driven decision making in breast cancer. Methods: We collected clinical, perioperative, and patient-reported data from 3058 women who underwent breast reconstruction due to breast cancer across 11 sites in North America. We trained and evaluated four algorithms (regularized regression, support vector machine, neural network, regression tree) to predict significant changes in satisfaction with breasts at 2-year follow-up using the validated BREAST-Q measure. Accuracy and area under the receiver operating characteristic curve (AUC) were used to determine algorithm performance in the test sample. Results: Machine learning algorithms were able to accurately predict changes in women’s satisfaction with breasts (see table). Baseline satisfaction with breasts was the most informative predictor of outcome, followed by radiation during or after reconstruction, nipple-sparing and mixed mastectomy, implant-based reconstruction, chemotherapy, unilateral mastectomy, lower psychological well-being, and obesity. Conclusions: These findings highlight the crucial role of patient-reported outcomes in determining post-operative outcomes and show that machine learning algorithms can identify individuals who might benefit from treatment decisions other than those suggested by group-level evidence. We provide a web-based tool for individuals considering mastectomy and reconstruction: importdemo.com . Clinical trial information: NCT01723423 . [Table: see text]


2019 ◽  
Vol 5 (1) ◽  
pp. 00243-2018 ◽  
Author(s):  
Konstantinos Kostikas ◽  
Timm Greulich ◽  
Alexander J. Mackay ◽  
Nadine S. Lossi ◽  
Maryam Aalamian-Mattheis ◽  
...  

The association between clinically relevant changes in patient-reported outcomes (PROs) and forced expiratory volume in 1 s (FEV1) in patients with chronic obstructive pulmonary disease (COPD) has rarely been investigated. Using CRYSTAL, a 12-week open-label study in symptomatic, nonfrequently exacerbating patients with moderate COPD, we assessed at baseline the correlations between several PROs (Baseline Dyspnoea Index, modified Medical Research Council dyspnoea scale, COPD Assessment Test (CAT) and Clinical COPD Questionnaire (CCQ)), and between FEV1 and PROs. Associations between clinically relevant responses in FEV1, CAT, CCQ and Transition Dyspnoea Index (TDI) at week 12 were also assessed. Using data from 4324 patients, a strong correlation was observed between CAT and CCQ (rs = 0.793) at baseline, with moderate or weak correlations between other PROs, and no correlation between FEV1 and any PRO. At week 12, 2774 (64.2%) patients were responders regarding TDI, CAT or CCQ, with 583 (13.5%) responding on all three measures. In comparison, 3235 (74.8%) were responders regarding FEV1, TDI, CAT or CCQ, with 307 (7.1%) responding on all four parameters. Increases in lung function were accompanied by clinically relevant improvements in PROs in only a minority of patients. Our results also suggest that PROs are not interchangeable; thus, the observed treatment success in a clinical trial may depend on the selected parameters.
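The baseline correlations above (e.g., rs = 0.793 between CAT and CCQ) are Spearman rank correlations: the Pearson correlation of the ranks, which captures monotone rather than strictly linear association. A self-contained sketch with tie-aware ranking, using invented example scores rather than the CRYSTAL data:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Because only ranks matter, any monotone transformation of one scale leaves the coefficient unchanged, e.g. `spearman([1, 2, 3], [1, 4, 9])` is 1 (up to floating-point rounding), which is why the statistic suits ordinal questionnaire scores.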

