Leveraging the Power of Nondisruptive Technologies to Optimize Mental Health Treatment: Case Study (Preprint)

2020 ◽  
Author(s):  
Shiri Sadeh-Sharvit ◽  
Steven D Hollon

UNSTRUCTURED Regular assessment of the effectiveness of behavioral interventions is a potent tool for improving their relevance to patients. However, poor provider and patient adherence characterize most measurement-based care tools. Therefore, a new approach for measuring intervention effects and communicating them to providers in a seamless manner is warranted. This paper provides a brief overview of the available research evidence on novel ways to measure the effects of behavioral treatments, integrating both objective and subjective data. We highlight the importance of analyzing therapeutic conversations through natural language processing. We then suggest a conceptual framework for capitalizing on data captured through directly collected and nondisruptive methodologies to describe the client’s characteristics and needs and inform clinical decision-making. We then apply this context in exploring a new tool to integrate the content of therapeutic conversations and patients’ self-reports. We present a case study of how both subjective and objective measures of treatment effects were implemented in cognitive-behavioral treatment for depression and anxiety and then utilized in treatment planning, delivery, and termination. In this tool, called Eleos, the patient completes standardized measures of depression and anxiety. The content of the treatment sessions was evaluated using nondisruptive, independent measures of conversation content, fidelity to the treatment model, and the back-and-forth of client-therapist dialogue. Innovative applications of advances in digital health are needed to disseminate empirically supported interventions and measure them in a noncumbersome way. Eleos appears to be a feasible, sustainable, and effective way to assess behavioral health care.
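As a toy illustration of the kind of nondisruptive dialogue measures mentioned above (conversation content and the back-and-forth of client-therapist dialogue), a session transcript can be summarized with a few simple statistics. The speaker labels, transcript, and metrics below are invented for illustration; they are not the actual Eleos analytics.

```python
# Sketch: simple "back-and-forth" statistics for a session transcript.
# Speaker labels and metrics are illustrative assumptions only.

def dialogue_stats(turns):
    """turns: list of (speaker, utterance) tuples in session order."""
    therapist_words = sum(len(u.split()) for s, u in turns if s == "therapist")
    client_words = sum(len(u.split()) for s, u in turns if s == "client")
    # Each change of speaker counts as one exchange in the dialogue.
    switches = sum(1 for a, b in zip(turns, turns[1:]) if a[0] != b[0])
    return {
        "therapist_words": therapist_words,
        "client_words": client_words,
        "talk_ratio": client_words / max(therapist_words, 1),
        "exchanges": switches,
    }

session = [
    ("therapist", "How did the thought record go this week"),
    ("client", "I filled it out twice and noticed I catastrophize a lot"),
    ("therapist", "That is a useful observation"),
]
stats = dialogue_stats(session)
```

A talk ratio well below 1 would flag sessions dominated by therapist speech, one signal a measurement-based dashboard could surface without interrupting the session.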

10.2196/20646 ◽  
2020 ◽  
Vol 7 (11) ◽  
pp. e20646


2021 ◽  
Author(s):  
William Lynch ◽  
Michael L. Platt ◽  
Adam Pardes

ABSTRACT Purpose: Although depression and anxiety are the leading causes of disability in the United States, fewer than half of people diagnosed with these conditions receive appropriate treatment, and fewer than 10% receive measurement-based care (MBC), which is defined as behavioral health care based on, and adapted in response to, patient outcomes data collected throughout treatment. The NeuroFlow platform was developed with the goal of making MBC easier to deliver and more accessible within integrated behavioral health care. Data from over 3,000 users of the NeuroFlow platform were used to develop the NeuroFlow Severity Score (NFSS), a potential new measure for depression and anxiety. To begin evaluating the potential usefulness of this new measure, NFSSs were compared with validated measures for depression and anxiety, the Personal Health Questionnaire-9 (PHQ-9) and the Generalized Anxiety Disorder-7 (GAD-7) scale, and with clinician assessment.
Methods: The NeuroFlow platform is used to record patient-reported and passively collected data related to behavioral health. An artificial intelligence–derived algorithm was developed that condenses this large number of measurements into a single score for longitudinal tracking of an individual's depression and anxiety symptoms. Linear regression and Bland-Altman analyses were used to evaluate relationships and differences between NFSS and PHQ-9 or GAD-7 scores from over 35,000 NeuroFlow users. The NFSS was also compared with assessments by a panel of expert clinicians for a subset of 250 individuals.
Results: Linear regression results showed a strong correlation between NFSS and PHQ-9 (r=.74, P<.001) and GAD-7 (r=.80, P<.001) changes. There was also a strong positive correlation between the NFSS and expert panel clinical assessment (r=.80-.84, P<.001). Bland-Altman analysis and evaluation of outliers on regression analysis, however, showed that the NFSS differs significantly from the PHQ-9.
Conclusions: Clinicians can reliably use the NFSS as a proxy measure for monitoring symptoms of depression and anxiety longitudinally. The NFSS may identify at-risk individuals who are not identified by the PHQ-9. Further research is warranted to evaluate the sensitivity and specificity of the NFSS.
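The Bland-Altman comparison reported above can be sketched in a few lines: compute the per-subject differences between the two measures, then the bias and 95% limits of agreement. The paired scores below are invented, not data from the study.

```python
# Sketch: Bland-Altman agreement analysis between two severity measures,
# in the spirit of the NFSS vs. PHQ-9 comparison. Scores are made up.
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences.
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

nfss = [4, 9, 12, 7, 15, 3]   # hypothetical severity scores
phq9 = [5, 8, 14, 6, 13, 4]   # hypothetical paired PHQ-9 scores
bias, (lo, hi) = bland_altman(nfss, phq9)
# Pairs whose difference falls outside [lo, hi] are cases where the two
# measures disagree beyond chance-level variation.
```

Unlike correlation, which only captures whether two measures move together, the limits of agreement expose systematic bias and individual-level disagreement, which is exactly why the study found significant differences despite strong correlations.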


2021 ◽  
Author(s):  
William Lynch ◽  
Michael L Platt ◽  
Adam Pardes

BACKGROUND Less than 10% of the individuals seeking behavioral health care receive measurement-based care (MBC). Technology has the potential to implement MBC in a secure and efficient manner. To test this idea, a mobile health (mHealth) platform was developed with the goal of making MBC easier for clinicians to deliver and more accessible to patients within integrated behavioral health care. Data from over 3000 users of the mHealth platform were used to develop an output severity score, a robust screening measure for depression and anxiety.
OBJECTIVE The aim of this study is to compare severity scores with scores from validated assessments for depression and anxiety and scores from clinician review to evaluate the potential added value of this new measure.
METHODS The severity score uses patient-reported and passively collected data related to behavioral health on an mHealth platform. An artificial intelligence–derived algorithm was developed that condenses behavioral health data into a single, quantifiable measure for longitudinal tracking of an individual's depression and anxiety symptoms. Linear regression and Bland-Altman analyses were used to evaluate the relationships and differences between severity scores and Personal Health Questionnaire-9 (PHQ-9) or Generalized Anxiety Disorder-7 (GAD-7) scores from over 35,000 mHealth platform users. The severity score was also compared with a review by a panel of expert clinicians for a subset of 250 individuals.
RESULTS Linear regression results showed a strong correlation between the severity score and PHQ-9 (r=0.74; P<.001) and GAD-7 (r=0.80; P<.001) changes. A strong positive correlation was also found between the severity score and expert panel clinical review (r=0.80-0.84; P<.001). However, Bland-Altman analysis and the evaluation of outliers on regression analysis showed that the severity score was significantly different from the PHQ-9.
CONCLUSIONS Clinicians can reliably use the mHealth severity score as a proxy measure for screening and monitoring behavioral health symptoms longitudinally. The severity score may identify at-risk individuals who are not identified by the PHQ-9. Further research is warranted to evaluate the sensitivity and specificity of the severity score.


10.2196/13855 ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. e13855 ◽  
Author(s):  
Burkhardt Funk ◽  
Shiri Sadeh-Sharvit ◽  
Ellen E Fitzsimmons-Craft ◽  
Mickey Todd Trockel ◽  
Grace E Monterubio ◽  
...  

Background: Digital health interventions (DHIs) are poised to reduce target symptoms in a scalable, affordable, and empirically supported way. DHIs that involve coaching or clinical support often collect text data from 2 sources: (1) open correspondence between users and the trained practitioners supporting them through a messaging system and (2) text data recorded during the intervention by users, such as diary entries. Natural language processing (NLP) offers methods for analyzing text, augmenting the understanding of intervention effects, and informing therapeutic decision making.
Objective: This study aimed to present a technical framework that supports the automated analysis of both types of text data often present in DHIs. This framework generates text features and helps to build statistical models to predict target variables, including user engagement, symptom change, and therapeutic outcomes.
Methods: We first discussed various NLP techniques and demonstrated how they are implemented in the presented framework. We then applied the framework in a case study of the Healthy Body Image Program, a Web-based intervention trial for eating disorders (EDs). A total of 372 participants who screened positive for an ED received a DHI aimed at reducing ED psychopathology (including binge eating and purging behaviors) and improving body image. These users generated 37,228 intervention text snippets and exchanged 4285 user-coach messages, which were analyzed using the proposed model.
Results: We applied the framework to predict binge eating behavior, resulting in an area under the curve between 0.57 (when applied to new users) and 0.72 (when applied to new symptom reports of known users). In addition, initial evidence indicated that specific text features predicted the therapeutic outcome of reducing ED symptoms.
Conclusions: The case study demonstrates the usefulness of a structured approach to text data analytics. NLP techniques improve the prediction of symptom changes in DHIs. We present a technical framework that can be easily applied in other clinical trials and clinical presentations and encourage other groups to apply the framework in similar contexts.
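A minimal sketch of the prediction step such a framework performs: turn text snippets into features and evaluate a binary outcome with a rank-based AUC. The vocabulary, snippets, and labels below are invented, and the actual framework uses far richer NLP features than keyword counts.

```python
# Sketch: bag-of-words keyword features from intervention text predicting
# a binary outcome, scored with a rank-based AUC. All data are invented.

def keyword_features(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def auc(scores, labels):
    """Probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

vocab = ["binge", "urge", "guilt"]   # assumed predictive terms
snippets = [
    ("I had a strong urge to binge tonight", 1),
    ("Felt guilt after dinner and a binge", 1),
    ("Went for a walk and felt calm", 0),
    ("Cooked a normal meal with a friend", 0),
]
# Use the total keyword count as a one-dimensional risk score.
scores = [sum(keyword_features(t, vocab)) for t, _ in snippets]
labels = [y for _, y in snippets]
risk_auc = auc(scores, labels)
```

An AUC of 0.5 corresponds to chance ranking; the 0.57-0.72 range reported above means the learned text features carried real, if modest, signal about upcoming binge eating behavior.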


1976 ◽  
Vol 44 (6) ◽  
pp. 1008-1014 ◽  
Author(s):  
Paul R. Munford ◽  
Diane Reardon ◽  
Robert P. Liberman ◽  
Linda Allen
Assessment ◽  
2021 ◽  
pp. 107319112199646
Author(s):  
Olivia Gratz ◽  
Duncan Vos ◽  
Megan Burke ◽  
Neelkamal Soares

To date, there is a paucity of research applying natural language processing (NLP) to the open-ended responses of behavior rating scales. Using three NLP lexicons for sentiment analysis of the open-ended responses of the Behavior Assessment System for Children-Third Edition, the researchers discovered a moderately positive correlation between the human composite rating and the sentiment score under each of the lexicons for strengths comments, and a slightly positive correlation for the concerns comments made by guardians and teachers. In addition, the researchers found that as the word count of open-ended responses about the child's strengths increased, the sentiment rating grew more positive. Conversely, as word count increased for responses about concerns, human raters scored the comments more negatively. The authors offer a proof of concept for using NLP-based sentiment analysis of open-ended comments to complement other data in clinical decision-making.
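Lexicon-based sentiment scoring of this kind can be sketched as follows. The mini-lexicon, comments, and human ratings are invented for illustration and are not drawn from the BASC-3 data or from the lexicons used in the study.

```python
# Sketch: lexicon-based sentiment scoring of open-ended comments and its
# Pearson correlation with human composite ratings. All data are toy data.

POS = {"kind", "helpful", "creative", "happy"}
NEG = {"aggressive", "anxious", "distracted", "sad"}

def sentiment(comment):
    """Positive-word count minus negative-word count."""
    words = comment.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

comments = [
    "very kind and helpful with peers",
    "creative but often distracted",
    "anxious and sad most mornings",
]
human_ratings = [2, 1, -2]   # hypothetical composite ratings
scores = [sentiment(c) for c in comments]
r = pearson(scores, human_ratings)
```

Real lexicons weight words by intensity and handle negation, but even this crude count illustrates how an automated score can track human judgment closely enough to serve as a complementary signal.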


Author(s):  
Jacqueline Peng ◽  
Mengge Zhao ◽  
James Havrilla ◽  
Cong Liu ◽  
Chunhua Weng ◽  
...  

Abstract
Background: Natural language processing (NLP) tools can facilitate the extraction of biomedical concepts from unstructured free texts, such as research articles or clinical notes. The NLP software tools CLAMP, cTAKES, and MetaMap are among the most widely used tools to extract biomedical concept entities. However, their performance in extracting disease-specific terminology from literature has not been compared extensively, especially for complex neuropsychiatric disorders with a diverse set of phenotypic and clinical manifestations.
Methods: We comparatively evaluated these NLP tools using autism spectrum disorder (ASD) as a case study. We collected 827 ASD-related terms based on previous literature as the benchmark list for performance evaluation. Then, we applied CLAMP, cTAKES, and MetaMap on 544 full-text articles and 20,408 abstracts from PubMed to extract ASD-related terms. We evaluated the predictive performance using precision, recall, and F1 score.
Results: We found that CLAMP has the best performance in terms of F1 score, followed by cTAKES and then MetaMap. Our results show that CLAMP has much higher precision than cTAKES and MetaMap, while cTAKES and MetaMap have higher recall than CLAMP.
Conclusion: The analysis protocols used in this study can be applied to other neuropsychiatric or neurodevelopmental disorders that lack well-defined terminology sets to describe their phenotypic presentations.
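The evaluation described above reduces to set comparisons between the extracted terms and the benchmark list. A minimal sketch, with invented term sets standing in for a tool's output and the 827-term ASD benchmark:

```python
# Sketch: precision, recall, and F1 for extracted terms against a
# benchmark term list. The term sets below are toy data for illustration.

def prf1(extracted, benchmark):
    extracted, benchmark = set(extracted), set(benchmark)
    tp = len(extracted & benchmark)          # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(benchmark) if benchmark else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

benchmark = {"echolalia", "stereotypy", "hyperlexia", "sensory seeking"}
extracted = {"echolalia", "stereotypy", "insomnia"}   # one false positive
p, r, f1 = prf1(extracted, benchmark)
```

The precision/recall trade-off reported above falls out directly: a conservative extractor (like CLAMP here) keeps precision high at the cost of recall, while a permissive one (cTAKES, MetaMap) does the reverse; F1 balances the two.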

